where does the show the brave take place | The Brave (TV series) - wikipedia
The Brave is an American military drama series. It stars Anne Heche and Mike Vogel, and was created by Dean Georgaris. The series premiered on September 25, 2017 on NBC.
NBC ordered the pilot to series on May 4, 2017, together with Rise, making both series the first regular series orders for the 2017–18 United States network television schedule. In May 2017, NBC announced that Matt Corman and Chris Ord would be the series' showrunners.
The review aggregator website Rotten Tomatoes reported a 38% approval rating, with an average rating of 5.75/10 based on 21 reviews. Metacritic, which uses a weighted average, assigned a score of 54 out of 100 based on 11 critics, indicating "mixed or average reviews".
dwyane wade career high points in a game | List of career achievements by Dwyane Wade - wikipedia
This page details the career achievements of American basketball player Dwyane Wade.
Statistics accurate as of the conclusion of games on February 8, 2015
where did most of the fighting take place in the war of 1812 | War of 1812 - Wikipedia
Result: Treaty of Ghent
Belligerents: United States vs. British Empire and Bourbon Spain
Casualties: United States 2,200–3,721 killed in action; British Empire 1,160–1,960 killed in action; Native allies 10,000 dead from all causes (warriors and civilians)
Theatres: East Coast; Great Lakes / Saint Lawrence River; West Indies / Gulf Coast; Pacific Ocean
The War of 1812 (1812 -- 1815) was a conflict fought between the United States, the United Kingdom, and their respective allies. Historians in Britain often see it as a minor theatre of the Napoleonic Wars; in the United States and Canada, it is seen as a war in its own right.
Since the outbreak of war with Napoleonic France, Britain had enforced a naval blockade to choke off neutral trade to France, which the United States contested as illegal under international law. To man the blockade, Britain impressed American merchant sailors into the Royal Navy. Incidents such as the Chesapeake -- Leopard affair inflamed anti-British sentiment. In 1811, the British were in turn outraged by the Little Belt affair, in which 11 British sailors died. The British supplied Indians who conducted raids on American settlers on the frontier, which hindered American expansion and also provoked resentment. Historians remain divided on whether the desire to annex some or all of British North America contributed to the American decision to go to war. On June 18, 1812, United States President James Madison, after receiving heavy pressure from the War Hawks in Congress, signed the American declaration of war into law.
With the majority of their army in Europe fighting Napoleon, the British adopted a defensive strategy. American prosecution of the war effort suffered from its unpopularity, especially in New England, where it was derogatorily referred to as "Mr. Madison's War". American defeats at the Siege of Detroit and the Battle of Queenston Heights thwarted attempts to seize Upper Canada, improving British morale. American attempts to invade Lower Canada and capture Montreal also failed. In 1813, the Americans won control of Lake Erie at the Battle of Lake Erie and defeated Tecumseh's Confederacy at the Battle of the Thames, securing a primary war goal. At sea, the powerful Royal Navy blockaded American ports, cutting off trade and allowing the British to raid the coast at will. In 1814, one of these raids burned the capital, Washington, although the Americans subsequently repulsed British attempts to invade New England and capture Baltimore.
At home, the British faced mounting opposition to wartime taxation and demands to reopen trade with America. With the abdication of Napoleon, the blockade of France ended and the British ceased impressment, rendering the issue of the impressment of American sailors moot. The British were then able to increase the strength of the blockade on the United States coast, annihilating American maritime trade and bringing the United States government near to bankruptcy. Peace negotiations began in August 1814 and the Treaty of Ghent was signed on December 24 as neither side wanted to continue fighting. News of the peace did not reach America for some time. Unaware that the treaty had been signed, British forces invaded Louisiana and were defeated at the Battle of New Orleans in January 1815. These late victories were viewed by Americans as having restored national honour, leading to the collapse of anti-war sentiment and the beginning of the Era of Good Feelings, a period of national unity. News of the treaty arrived shortly thereafter, halting military operations. The treaty was unanimously ratified by the United States on February 17, 1815, ending the war with status quo ante bellum (no boundary changes).
Historians have long debated the relative weight of the multiple reasons underlying the origins of the War of 1812. This section summarizes several contributing factors which resulted in the declaration of war by the United States.
As Risjord (1961) notes, a powerful motivation for the Americans was the desire to uphold national honour in the face of what they considered to be British insults such as the Chesapeake–Leopard affair. H.W. Brands says, "The other war hawks spoke of the struggle with Britain as a second war of independence; (Andrew) Jackson, who still bore scars from the first war of independence, held that view with special conviction. The approaching conflict was about violations of American rights, but it was also about vindication of American identity." Americans at the time and historians since have often called it the United States' "Second War of Independence".
The British were also offended by what they considered insults such as the Little Belt affair. This gave the British a particular interest in capturing the United States flagship President, which they succeeded in doing in 1815.
In 1807, Britain introduced a series of trade restrictions via the Orders in Council to impede neutral trade with France, which Britain was then fighting in the Napoleonic Wars. The United States contested these restrictions as illegal under international law. Historian Reginald Horsman states, "a large section of influential British opinion, both in the government and in the country, thought that America presented a threat to British maritime supremacy."
The American merchant marine had come close to doubling between 1802 and 1810, making it by far the largest neutral fleet. Britain was the largest trading partner, receiving 80% of U.S. cotton and 50% of other U.S. exports. The British public and press were resentful of the growing mercantile and commercial competition. The United States' view was that Britain's restrictions violated its right to trade with others.
During the Napoleonic Wars, the Royal Navy expanded to 176 ships of the line and 600 ships overall, requiring 140,000 sailors to man. While the Royal Navy could man its ships with volunteers in peacetime, it competed in wartime with merchant shipping and privateers for a small pool of experienced sailors and turned to impressment from ashore and foreign or domestic shipping when it could not operate its ships with volunteers alone.
The United States believed that British deserters had a right to become U.S. citizens. Britain did not recognize a right whereby a British subject could relinquish his status as a British subject, emigrate and transfer his national allegiance as a naturalized citizen to any other country. This meant that in addition to recovering naval deserters, it considered any United States citizens who were born British liable for impressment. Aggravating the situation was the reluctance of the United States to issue formal naturalization papers and the widespread use of unofficial or forged identity or protection papers by sailors. This made it difficult for the Royal Navy to distinguish Americans from non-Americans and led it to impress some Americans who had never been British. Some gained freedom on appeal. Thus while the United States recognized British - born sailors on American ships as Americans, Britain did not. It was estimated by the Admiralty that there were 11,000 naturalized sailors on United States ships in 1805. U.S. Secretary of the Treasury Albert Gallatin stated that 9,000 U.S. sailors were born in Britain. Moreover, a great number of these British born sailors were Irish. An investigation by Captain Isaac Chauncey in 1808 found that 58 % of sailors based in New York City were either naturalized citizens or recent immigrants, the majority of these foreign born sailors (134 of 150) being from Britain. Moreover, 80 of the 134 British sailors were Irish.
American anger at impressment grew when British frigates were stationed just outside U.S. harbours in view of U.S. shores and searched ships for contraband and impressed men while within U.S. territorial waters. Well publicized impressment actions such as the Leander affair and the Chesapeake -- Leopard affair outraged the American public.
The British public in turn were outraged by the Little Belt affair, in which a larger American ship clashed with a small British sloop, resulting in the deaths of 11 British sailors. Both sides claimed the other had fired first, but the British public in particular blamed the U.S. for attacking a smaller vessel, with some newspapers calling for revenge, while the U.S. was encouraged by the fact that it had won a victory over the Royal Navy. The U.S. Navy also forcibly recruited British sailors, but the British government saw impressment as commonly accepted practice and preferred to rescue British sailors from American impressment on a case-by-case basis.
The Northwest Territory, which consisted of the modern states of Ohio, Indiana, Illinois, Michigan, and Wisconsin, was the battleground for conflict between the Native American nations and the United States. The British Empire had ceded the area to the United States in the Treaty of Paris in 1783, both sides ignoring the fact that the land was already inhabited by various Native American nations. These included the Miami, Winnebago, Shawnee, Fox, Sauk, Kickapoo, Delaware and Wyandot. Some warriors, who had left their nations of origin, followed Tenskwatawa, the Shawnee Prophet and the brother of Tecumseh. Tenskwatawa had a vision of purifying his society by expelling the "children of the Evil Spirit": the American settlers. The Indians wanted to create their own state in the Northwest to end the American threat forever, as it became clear that the Americans wanted all of the land in the Old Northwest for themselves. Tenskwatawa and Tecumseh formed a confederation of numerous tribes to block American expansion. The British saw the Native American nations as valuable allies and a buffer for their Canadian colonies and provided them with arms. Attacks on American settlers in the Northwest further aggravated tensions between Britain and the United States. Raiding grew more common in 1810 and 1811; Westerners in Congress found the raids intolerable and wanted them permanently ended. British policy towards the Indians of the Northwest was torn between the desire to keep the Americans fighting in the Northwest and to preserve a region that provided rich profits for Canadian fur traders, and the fear that too much support for the Indians would cause a war with the United States. Though Tecumseh's plans for an Indian state in the Northwest would have benefited British North America by making it more defensible, the defeats suffered by Tecumseh's confederation left the British leery of going too far to support what was probably a losing cause. In the months running up to the war, British diplomats attempted to defuse tensions on the frontier.
The confederation 's raids, and its very existence, hindered American expansion into rich farmlands in the Northwest Territory. Pratt writes:
There is ample proof that the British authorities did all in their power to hold or win the allegiance of the Indians of the Northwest with the expectation of using them as allies in the event of war. Indian allegiance could be held only by gifts, and to an Indian no gift was as acceptable as a lethal weapon. Guns and ammunition, tomahawks and scalping knives were dealt out with some liberality by British agents.
However, according to the U.S. Army Center of Military History, the "land-hungry frontiersmen", with "no doubt that their troubles with the Native Americans were the result of British intrigue", exacerbated the problem by "(circulating stories) after every Native American raid of British Army muskets and equipment being found on the field". Thus, "the westerners were convinced that their problems could best be solved by forcing the British out of Canada".
The British had the long-standing goal of creating a large, "neutral" Native American state to cover much of Ohio, Indiana, and Michigan. They made the demand as late as the fall of 1814 at the peace conference, but had lost control of western Ontario in 1813 at key battles on and around Lake Erie. These battles destroyed the Indian confederacy which had been the main ally of the British in that region, weakening Britain's negotiating position. Although much of the area remained under the control of the British or British-allied Native Americans until the end of the war, the British, at American insistence and with higher priorities, dropped the demands.
American expansion into the Northwest Territory had been obstructed since the end of the Revolution by various Indian tribes, who were supplied and encouraged by the British. Americans on the western frontier demanded that the interference be stopped. There is dispute, however, over whether the American desire to annex Canada brought on the war. Several historians believe that the capture of Canada was intended only as a means to secure a bargaining chip, which would then be used to force Britain to back down on the maritime issues. It would also cut off food supplies for Britain's West Indian colonies and temporarily prevent the British from continuing to arm the Indians. However, many historians believe that a desire to annex Canada was a cause of the war. This view was more prevalent before 1940, but remains widely held today. Congressman Richard Mentor Johnson told Congress that the constant Indian atrocities along the Wabash River in Indiana were enabled by supplies from Canada and were proof that "the war has already commenced... I shall never die contented until I see England's expulsion from North America and her territories incorporated into the United States."
Madison believed that British economic policies designed to foster imperial preference were harming the American economy, and that as long as British North America existed it would serve as a conduit for American smugglers who were undercutting his trade policies, which thus required that the United States annex British North America. Furthermore, Madison believed that the Great Lakes–St. Lawrence trade route might become the main trade route for the export of North American goods to Europe at the expense of the U.S. economy, and that if the United States controlled the resources of British North America, such as the timber the British needed for their navy, then Britain would be forced to change the maritime policies that had so offended American public opinion. Many Americans believed it was only natural that their country should swallow up North America, with one Congressman, John Harper, saying in a speech that "the Author of Nature Himself had marked our limits in the south, by the Gulf of Mexico and on the north, by the regions of eternal frost". Upper Canada (modern southern Ontario) had been settled mostly by Revolution-era exiles from the United States (United Empire Loyalists) or postwar American immigrants. The Loyalists were hostile to union with the United States, while the immigrant settlers were generally uninterested in politics and remained neutral or supported the British during the war. The Canadian colonies were thinly populated and only lightly defended by the British Army. Americans then believed that many men in Upper Canada would rise up and greet an American invading army as liberators. That did not happen. One reason American forces retreated after one successful battle inside Canada was that they could not obtain supplies from the locals. But the Americans thought that the possibility of local support suggested an easy conquest, as former President Thomas Jefferson believed: "The acquisition of Canada this year, as far as the neighborhood of Quebec, will be a mere matter of marching, and will give us the experience for the attack on Halifax, the next and final expulsion of England from the American continent."
Annexation was supported by American border businessmen who wanted to gain control of Great Lakes trade.
Carl Benn noted that the War Hawks' desire to annex the Canadas was similar to the enthusiasm for the annexation of Spanish Florida by inhabitants of the American South; both expected war to facilitate expansion into long-desired lands and end support for hostile Indian tribes (Tecumseh's Confederacy in the North and the Creek in the South).
Tennessee Congressman Felix Grundy considered it essential to acquire Canada to preserve domestic political balance, arguing that annexing Canada would maintain the free state - slave state balance, which might otherwise be thrown off by the acquisition of Florida and the settlement of the southern areas of the new Louisiana Purchase.
Historian Richard Maass argued in 2015 that the expansionist theme is a myth that goes against the "relative consensus among experts that the primary U.S. objective was the repeal of British maritime restrictions". He argues that the consensus among scholars is that the United States went to war "because six years of economic sanctions had failed to bring Britain to the negotiating table, and threatening the Royal Navy's Canadian supply base was their last hope." Maass agrees that expansionism might theoretically have tempted Americans, but finds that "leaders feared the domestic political consequences of doing so. Notably, what limited expansionism there was focused on sparsely populated western lands rather than the more populous eastern settlements (of Canada)." Nevertheless, Maass notes that many historians continue to believe that expansionism was a cause.
Horsman argued expansionism played a role as a secondary cause after maritime issues, noting that many historians have mistakenly rejected expansionism as a cause for the war. He notes that it was considered key to maintaining sectional balance between free and slave states thrown off by American settlement of the Louisiana Territory, and widely supported by dozens of War Hawk congressmen such as John A. Harper, Felix Grundy, Henry Clay, and Richard M. Johnson, who voted for war with expansion as a key aim.
In disagreeing with those interpretations that have simply stressed expansionism and minimized maritime causation, historians have ignored deep-seated American fears for national security, dreams of a continent completely controlled by the republican United States, and the evidence that many Americans believed that the War of 1812 would be the occasion for the United States to achieve the long-desired annexation of Canada... Thomas Jefferson well summarized American majority opinion about the war... to say "that the cession of Canada... must be a sine qua non at a treaty of peace".
However, Horsman states that in his view "the desire for Canada did not cause the War of 1812" and that "The United States did not declare war because it wanted to obtain Canada, but the acquisition of Canada was viewed as a major collateral benefit of the conflict."
Historian Alan Taylor says that many Democratic-Republican congressmen, such as Richard M. Johnson, John A. Harper and Peter B. Porter, "longed to oust the British from the continent and to annex Canada." A few Southerners opposed this, fearing an imbalance of free and slave states if Canada was annexed, while anti-Catholicism also caused many to oppose annexing mainly Catholic Lower Canada, believing its French-speaking inhabitants "unfit... for republican citizenship". Even major figures such as Henry Clay and James Monroe expected to keep at least Upper Canada in the event of an easy conquest. Notable American generals such as William Hull were led by this sentiment to issue proclamations to Canadians during the war promising republican liberation through incorporation into the United States, proclamations the government never officially disavowed. General Alexander Smyth similarly declared to his troops that when they invaded Canada, "You will enter a country that is to become one of the United States. You will arrive among a people who are to become your fellow-citizens." A lack of clarity about American intentions undercut these appeals, however.
David and Jeanne Heidler argue that "Most historians agree that the War of 1812 was not caused by expansionism but instead reflected a real concern of American patriots to defend United States' neutral rights from the overbearing tyranny of the British Navy. That is not to say that expansionist aims would not potentially result from the war."
However, they also argue otherwise, saying that "acquiring Canada would satisfy America's expansionist desires", and describing it as a key goal of western expansionists, who, they argue, believed that "eliminating the British presence in Canada would best accomplish" their goal of halting British support for Indian raids. They argue that the "enduring debate" is over the relative importance of expansionism as a factor, and whether "expansionism played a greater role in causing the War of 1812 than American concern about protecting neutral maritime rights."
While the British government was largely oblivious to the deteriorating North American situation because of its involvement in a continent-wide European war, the U.S. was in a period of significant political conflict between the Federalist Party (based mainly in the Northeast), which favoured a strong central government and closer ties to Britain, and the Democratic-Republican Party (with its greatest power base in the South and West), which favoured a weak central government, preservation of states' rights (including slavery), expansion into Indian land, and a stronger break with Britain. By 1812, the Federalist Party had weakened considerably, and the Republicans, with James Madison completing his first term of office and in control of Congress, were in a strong position to pursue their more aggressive agenda against Britain. Throughout the war, support for the U.S. cause was weak (or sometimes non-existent) in Federalist areas of the Northeast. Few men volunteered to serve; the banks avoided financing the war. The negativism of the Federalists, especially as exemplified by the Hartford Convention of 1814–15, ruined the party's reputation, and it survived only in scattered areas. By 1815 there was broad support for the war from all parts of the country. This allowed the triumphant Democratic-Republicans to adopt some Federalist policies, such as a national bank, which Madison reestablished in 1816.
The United States Navy (USN) had 7,250 sailors and Marines in 1812. The American Navy was a well-trained, professional force that had fought well against the Barbary pirates and against France in the Quasi-War. The USN had 13 ocean-going warships, three of them "super-frigates", and its principal problem was a lack of funding, as many in Congress did not see the need for a strong navy. The American warships were all well-built ships that were equal, if not superior, to British ships of a similar class (British shipbuilding emphasized quantity over quality). However, the biggest ships in the USN were frigates, and the Americans had no ships-of-the-line capable of engaging in a fleet action with the Royal Navy at sea.
On the high seas, the Americans could only pursue a strategy of commerce raiding, taking British merchantmen with their frigates and privateers. Before the war, the USN was largely concentrated on the Atlantic coast, and at the war's outbreak it had only two gunboats on Lake Champlain, one brig on Lake Ontario and another brig on Lake Erie.
The United States Army was much larger than the British Army in North America, and its soldiers were well trained and brave. However, leadership in the American officer corps was inconsistent; some officers proved themselves to be outstanding, but many others were inept, owing their positions to political favors. Congress was hostile to a standing army, and during the war the U.S. government called out 450,000 men from the state militias, a number only slightly smaller than the entire population of British North America. However, the state militias were poorly trained, armed, and led. After the Battle of Bladensburg in 1814, in which the Maryland and Virginia militias were soundly defeated by the British Army, President Madison commented: "I could never have believed so great a difference existed between regular troops and a militia force, if I had not witnessed the scenes of this day."
The British Royal Navy was a well-led, professional force, considered the world's most powerful navy. However, as long as the war with France continued, North America was a secondary concern. In 1813, France had 80 ships-of-the-line and was building another 35, so containing the French fleet had to be the main British naval concern. In Upper Canada, the British had the Provincial Marine, which was essential for keeping the army supplied since the roads in Upper Canada were abysmal. On Lake Ontario and the St. Lawrence, the Royal Navy had two schooners, while the Provincial Marine maintained four small warships on Lake Erie. The British Army in North America was a very professional and well-trained force, but it suffered from being outnumbered.
The militias of Upper Canada and Lower Canada had a much lower level of military effectiveness. Nevertheless, the Canadian militia (and locally recruited regular units known as "Fencibles") were often more reliable than the American militia, particularly when defending their own territory. As such, they played pivotal roles in various engagements, including the Battle of the Chateauguay, where Canadian and Indian forces alone stopped a much larger American force without assistance from regular British units.
Because their population was smaller than that of the whites, and because they lacked artillery, the Indian allies of the British avoided pitched battles and instead relied on irregular warfare, including raids and ambushes. Given their low numbers, it was crucial to avoid heavy losses, and in general Indian chiefs sought to fight only under favorable conditions, avoiding any battle that promised heavy casualties. The main Indian weapons were a mixture of tomahawks, knives, swords, rifles, clubs, arrows and muskets. Indian warriors were brave, but the need to avoid heavy losses meant that they fought only under the most favorable conditions, and their tactics favored a defensive rather than an offensive style.
In the words of Benn, those Indians fighting alongside the Americans provided the U.S. with its "most effective light troops", while the British desperately needed the Indian tribes to compensate for their numerical inferiority. The Indians, regardless of which side they fought for, saw themselves as allies, not subordinates, and Indian chiefs did what they viewed as best for their tribes, much to the annoyance of both American and British generals, who often complained about their unreliability.
On June 1, 1812, President James Madison sent a message to Congress recounting American grievances against Great Britain, though not specifically calling for a declaration of war. After Madison's message, the House of Representatives deliberated for four days behind closed doors before voting 79 to 49 (61%) in favor of the first declaration of war. The Senate concurred in the declaration by a 19 to 13 (59%) vote in favour. The conflict began formally on June 18, 1812, when Madison signed the measure into law and proclaimed it the next day. This was the first time the United States had declared war on another nation, and the Congressional vote was the closest vote to formally declare war in American history (the Authorization for Use of Military Force Against Iraq Resolution of 1991, while not a formal declaration of war, was a closer vote). None of the 39 Federalists in Congress voted in favour of the war; critics of the war subsequently referred to it as "Mr. Madison's War."
Earlier, in London on May 11, an assassin had killed Prime Minister Spencer Perceval, which resulted in Lord Liverpool coming to power. Liverpool wanted a more practical relationship with the United States. On June 23, he issued a repeal of the Orders in Council, but the United States was unaware of this, as it took three weeks for the news to cross the Atlantic. On June 28, 1812, HMS Colibri was despatched from Halifax under a flag of truce to New York. On July 9, she anchored off Sandy Hook, and three days later she sailed on her return voyage carrying a copy of the declaration of war, the British ambassador to the United States, Mr. Foster, and the consul, Colonel Barclay. She arrived in Halifax, Nova Scotia, eight days later. The news of the declaration took even longer to reach London.
However, the British commander in Upper Canada received news of the American declaration of war much faster. In response, Isaac Brock issued a proclamation alerting the citizenry of Upper Canada to the state of war and urging all military personnel "to be vigilant in the discharge of their duty" to prevent communication with the enemy and to arrest anyone suspected of helping the Americans. He also ordered the commander of the British post at Fort St. Joseph to initiate offensive operations against U.S. forces in northern Michigan, who, it turned out, were not yet aware of their own government's declaration of war. The resulting Siege of Fort Mackinac on July 17 was the first major land engagement of the war and ended in an easy British victory.
The war was conducted in three theatres: at sea, where the warships and privateers of both sides preyed on each other's merchant shipping; along the land and lake frontier between the United States and Canada; and in the American South and Gulf Coast.
Although the outbreak of the war had been preceded by years of angry diplomatic dispute, neither side was ready for war when it came. Britain was heavily engaged in the Napoleonic Wars, most of the British Army was deployed in the Peninsular War (in Portugal and Spain), and the Royal Navy was compelled to blockade most of the coast of Europe. The number of British regular troops present in Canada in July 1812 was officially stated to be 6,034, supported by Canadian militia. Throughout the war, the British Secretary of State for War and the Colonies was Earl Bathurst. For the first two years of the war, he could spare few troops to reinforce North America and urged the commander - in - chief in North America (Lieutenant General Sir George Prévost) to maintain a defensive strategy. The naturally cautious Prévost followed these instructions, concentrating on defending Lower Canada at the expense of Upper Canada (which was more vulnerable to American attacks) and allowing few offensive actions.
The United States was not prepared to prosecute a war, for Madison had assumed that the state militias would easily seize Canada and that negotiations would follow. In 1812, the regular army consisted of fewer than 12,000 men. Congress authorized the expansion of the army to 35,000 men, but the service was voluntary and unpopular; it offered poor pay, and there were few trained and experienced officers, at least initially. The militia objected to serving outside their home states, were not open to discipline, and performed poorly against British forces when outside their home states. American prosecution of the war suffered from its unpopularity, especially in New England, where anti-war speakers were vocal. "Two of the Massachusetts members (of Congress), Seaver and Widgery, were publicly insulted and hissed on Change in Boston; while another, Charles Turner, member for the Plymouth district, and Chief-Justice of the Court of Sessions for that county, was seized by a crowd on the evening of August 3, (1812) and kicked through the town". The United States had great difficulty financing its war. It had disbanded its national bank, and private bankers in the Northeast were opposed to the war. The United States was able to obtain financing from London-based Barings Bank to cover overseas bond obligations. The failure of New England to provide militia units or financial support was a serious blow. Threats of secession by New England states were loud, as evidenced by the Hartford Convention. Britain exploited these divisions, blockading only southern ports for much of the war and encouraging smuggling.
While the London government was well administered in terms of its army, navy, and financial offices, the government in Washington was badly organized, with inexperience, incompetence and confusion its main hallmarks. The federal government's management system had been designed to minimize the federal role before 1812. The Republicans in power deliberately wanted to downsize the power and roles of the federal government, and when the war began, the Federalist opposition worked hard to sabotage operations. Problems multiplied rapidly in 1812, and all the weaknesses were magnified, especially regarding the Army and the Treasury. There were no serious reforms before the war ended. In financial matters, the decentralizing ideology of the Republicans meant they let the First Bank of the United States expire in 1811, when its 20-year charter ran out. Its absence made it much more difficult to handle the financing of the war and caused special problems in moving money from state to state, since state banks were not allowed to operate across state lines. The bureaucracy was terrible, often missing deadlines. On the positive side, over 120 new state banks were created all over the country, and they issued notes that financed much of the war effort, along with loans raised by Washington. Some key Republicans, especially Secretary of the Treasury Albert Gallatin, realized the need for new taxes, but the Republican Congress was very reluctant and raised only small amounts. Throughout the war, the Federalists in Congress, and especially the Federalist-controlled state governments and the Federalist-aligned financial system in the Northeast, were strongly opposed to the war and refused to help with its financing. Indeed, they facilitated smuggling across the Canadian border and sent large amounts of gold and silver to Canada, which created serious shortages in the US. Across the two and a half years of the war, 1812–1815, the federal government took in more money than it spent: cash out was $119.5 million, cash in was $154.0 million. Two-thirds of the income was borrowing that had to be paid back in later years; the national debt went from $56.0 million in 1812 to $127.3 million in 1815. Against a GDP (gross domestic product) of about $925 million (in 1815), this was not a large burden for a national population of 8 million people. A new Second Bank of the United States was set up in 1816, and after that the financial system performed very well, even though there was still a shortage of gold and silver.
U.S. leaders assumed that Canada could be easily overrun. Former President Jefferson optimistically referred to the conquest of Canada as "a matter of marching ''. Many Loyalist Americans had migrated to Upper Canada after the Revolutionary War. There was also significant non-Loyalist American immigration to the area due to the offer of land grants to immigrants, and the U.S. assumed the latter would favour the American cause, but they did not. In prewar Upper Canada, General Prévost was in the unusual position of having to purchase many provisions for his troops from the American side. This peculiar trade persisted throughout the war in spite of an abortive attempt by the U.S. government to curtail it. In Lower Canada, which was much more populous, support for Britain came from the English elite with strong loyalty to the Empire, and from the French - speaking Canadien elite, who feared American conquest would destroy the old order by introducing Protestantism, Anglicization, republican democracy, and commercial capitalism; and weakening the Catholic Church. The Canadien inhabitants feared the loss of a shrinking area of good lands to potential American immigrants.
In 1812 -- 13, British military experience prevailed over inexperienced American commanders. Geography dictated that operations take place in the west: principally around Lake Erie, near the Niagara River between Lake Erie and Lake Ontario, and near the Saint Lawrence River area and Lake Champlain. This was the focus of the three - pronged attacks by the Americans in 1812. Although cutting the St. Lawrence River through the capture of Montreal and Quebec would have made Britain 's hold in North America unsustainable, the United States began operations first in the western frontier because of the general popularity there of a war with the British, who had sold arms to the Native Americans opposing the settlers.
The British scored an important early success when their detachment at St. Joseph Island, on Lake Huron, learned of the declaration of war before the nearby American garrison at the important trading post at Mackinac Island in Michigan. A scratch force landed on the island on July 17, 1812, and mounted a gun overlooking Fort Mackinac. After the British fired one shot from their gun, the Americans, taken by surprise, surrendered. This early victory encouraged the natives, and large numbers moved to help the British at Amherstburg. The island totally controlled access to the Old Northwest, giving the British nominal control of this area, and, more vitally, a monopoly on the fur trade.
An American army under the command of William Hull invaded Canada on July 12, with his forces chiefly composed of untrained and ill-disciplined militiamen. Once on Canadian soil, Hull issued a proclamation ordering all British subjects to surrender, or "the horrors, and calamities of war will stalk before you". This led many of the British forces to defect. John Bennett, printer and publisher of the York Gazette & Oracle, was a prominent defector. Andrew Mercer, who had moved the publication's production to his house, lost the press and type, which were destroyed during the American occupation, an example of what happened to resisters. Hull also threatened to kill any British prisoner caught fighting alongside a native. The proclamation helped stiffen resistance to the American attacks. Hull's army was too weak in artillery and badly supplied to achieve its objectives, and had to fight just to maintain its own lines of communication.
The senior British officer in Upper Canada, Major General Isaac Brock, felt that he should take bold measures to calm the settler population in Canada and to convince the aboriginals, who were needed to defend the region, that Britain was strong. He moved rapidly to Amherstburg near the western end of Lake Erie with reinforcements and immediately decided to attack Detroit. Hull, fearing that the British possessed superior numbers and that the Indians attached to Brock's force would commit massacres if fighting began, surrendered Detroit without a fight on August 16. Knowing of British-instigated indigenous attacks on other locations, Hull ordered the evacuation of the inhabitants of Fort Dearborn (Chicago) to Fort Wayne. After initially being granted safe passage, the inhabitants (soldiers and civilians) were attacked by Potawatomis on August 15 after travelling only 2 miles (3.2 km), in what is known as the Battle of Fort Dearborn. The fort was subsequently burned.
Brock promptly transferred himself to the eastern end of Lake Erie, where American General Stephen Van Rensselaer was attempting a second invasion. An armistice (arranged by Prévost in the hope the British renunciation of the Orders in Council to which the United States objected might lead to peace) prevented Brock from invading American territory. When the armistice ended, the Americans attempted an attack across the Niagara River on October 13, but suffered a crushing defeat at Queenston Heights. Brock was killed during the battle. While the professionalism of the American forces improved by the war 's end, British leadership suffered after Brock 's death. A final attempt in 1812 by American General Henry Dearborn to advance north from Lake Champlain failed when his militia refused to advance beyond American territory.
In contrast to the American militia, the Canadian militia performed well. French Canadians, who found the anti-Catholic stance of most of the United States troublesome, and United Empire Loyalists, who had fought for the Crown during the American Revolutionary War, strongly opposed the American invasion. Many in Upper Canada were recent settlers from the United States who had no obvious loyalties to the Crown; nevertheless, while there were some who sympathized with the invaders, the American forces found strong opposition from men loyal to the Empire.
After Hull's surrender of Detroit, General William Henry Harrison was given command of the U.S. Army of the Northwest. He set out to retake the city, which was now defended by Colonel Henry Procter in conjunction with Tecumseh. A detachment of Harrison's army was defeated at Frenchtown along the River Raisin on January 22, 1813. Procter left the prisoners with an inadequate guard, who could not prevent some of his North American aboriginal allies from attacking and killing perhaps as many as sixty Americans, many of whom were Kentucky militiamen. The incident became known as the River Raisin Massacre. The defeat ended Harrison's campaign against Detroit, and the phrase "Remember the River Raisin!" became a rallying cry for the Americans.
In May 1813, Procter and Tecumseh laid siege to Fort Meigs in northwestern Ohio. American reinforcements arriving during the siege were defeated by the natives, but the fort held out. The Indians eventually began to disperse, forcing Procter and Tecumseh to return north to Canada. A second offensive against Fort Meigs also failed in July. In an attempt to improve Indian morale, Procter and Tecumseh attempted to storm Fort Stephenson, a small American post on the Sandusky River, only to be repulsed with serious losses, marking the end of the Ohio campaign.
On Lake Erie, the American commander Captain Oliver Hazard Perry fought the Battle of Lake Erie on September 10, 1813. His decisive victory at Put-in-Bay ensured American military control of the lake, improved American morale after a series of defeats, and compelled the British to fall back from Detroit. This paved the way for General Harrison to launch another invasion of Upper Canada, which culminated in the U.S. victory at the Battle of the Thames on October 5, 1813, in which Tecumseh was killed.
Because of the difficulties of land communications, control of the Great Lakes and the St. Lawrence River corridor was crucial. When the war began, the British already had a small squadron of warships on Lake Ontario and held the initial advantage. To redress the situation, the Americans established a navy yard at Sackett's Harbor in northwestern New York. Commodore Isaac Chauncey took charge of the large number of sailors and shipwrights sent there from New York; they completed the second warship built there in a mere 45 days. Ultimately, almost 3,000 men worked at the naval shipyard, building eleven warships and many smaller boats and transports. Having regained the advantage through their rapid building program, Chauncey and Dearborn attacked York, the capital of Upper Canada on the northern shore of the lake, on April 27, 1813. The Battle of York was a Pyrrhic American victory, marred by looting and the burning of the small Provincial Parliament buildings and a library. The looting and burning provoked a spirit of revenge among the British and Canadians, led by Governor George Prévost, who later demanded satisfaction, encouraging the British Admiralty to issue orders to its officers operating in the Chesapeake Bay region to exact similar devastation on the American federal capital of Washington the following year. However, Kingston was strategically much more valuable to British supply and communications routes along the St. Lawrence corridor. Without control of Kingston, the U.S. Navy could not effectively control Lake Ontario or sever the British supply line from Lower Canada.
On May 25, 1813, the guns of the American Lake Ontario squadron, joined by those of Fort Niagara, began bombarding Fort George. On May 27, 1813, an American amphibious force from Lake Ontario assaulted Fort George on the northern end of the Niagara River and captured it without serious losses. The British also abandoned Fort Erie and headed towards Burlington Heights. With the British position in Upper Canada on the verge of collapse, the Iroquois Indians living along the banks of the Grand River considered changing sides and ignored a British appeal to come to their aid. The retreating British forces were not pursued, however, until they had largely escaped and organized a counteroffensive against the advancing Americans at the Battle of Stoney Creek on June 5. With Upper Canada on the line, the British launched a surprise attack at Stoney Creek at 2:00 am, leading to much confused fighting. Though tactically a draw, the battle was a strategic British victory, as the Americans pulled back to Forty Mile Creek rather than continuing their advance into Upper Canada. At this point, the Six Nations living on the Grand River began to come out to fight for the British, as an American victory no longer seemed inevitable. The Iroquois ambushed an American patrol at Forty Mile Creek, while the Royal Navy squadron based in Kingston came to bombard the American camp, leading General Dearborn to retreat to Fort George as he now mistakenly believed he was outnumbered and outgunned. The British commander, General John Vincent, was heartened by the fact that more and more First Nations warriors were arriving to assist him, providing about 800 additional men. On June 24, with the help of advance warning by Laura Secord, another American force was forced to surrender by a much smaller British and native force at the Battle of Beaver Dams, marking the end of the American offensive into Upper Canada. The British commander, General Francis de Rottenberg, did not have the strength to retake Fort George, so he instituted a blockade, hoping to starve the Americans into surrender. Meanwhile, Commodore James Lucas Yeo had taken charge of the British ships on the lake and mounted a counterattack, which was nevertheless repulsed at the Battle of Sackett's Harbor. Thereafter, Chauncey's and Yeo's squadrons fought two indecisive actions, neither commander seeking a fight to the finish.
Late in 1813, the Americans abandoned the Canadian territory they occupied around Fort George. They set fire to the village of Newark (now Niagara - on - the - Lake) on December 10, 1813, incensing the Canadians and politicians in control. Many of the inhabitants were left without shelter, freezing to death in the snow. This led to British retaliation following the Capture of Fort Niagara on December 18, 1813. Early the next morning on December 19, the British and their native allies stormed the neighbouring town of Lewiston, New York, torching homes and buildings and killing about a dozen civilians. As the British were chasing the surviving residents out of town, a small force of Tuscarora natives intervened and stopped the pursuit, buying enough time for the locals to escape to safer ground. It is notable in that the Tuscaroras defended the Americans against their own Iroquois brothers, the Mohawks, who sided with the British. Later, the British attacked and burned Buffalo on December 30, 1813.
In 1814, the contest for Lake Ontario turned into a building race. Naval superiority shifted between the opposing fleets as each built new, bigger ships. However, neither was able to bring the other to battle when in a position of superiority, leaving the Engagements on Lake Ontario a draw. At war 's end, the British held the advantage with the 112 - gun HMS St Lawrence, but the Americans had laid down two even larger ships. The majority of these ships never saw action and were decommissioned after the war.
The British were potentially most vulnerable over the stretch of the St. Lawrence where it formed the frontier between Upper Canada and the United States. During the early days of the war, there was illicit commerce across the river. Over the winter of 1812 and 1813, the Americans launched a series of raids from Ogdensburg on the American side of the river, which hampered British supply traffic up the river. On February 21, Sir George Prévost passed through Prescott on the opposite bank of the river with reinforcements for Upper Canada. When he left the next day, the reinforcements and local militia attacked. At the Battle of Ogdensburg, the Americans were forced to retire.
For the rest of the year, Ogdensburg had no American garrison, and many residents of Ogdensburg resumed visits and trade with Prescott. This British victory removed the last American regular troops from the Upper St. Lawrence frontier and helped secure British communications with Montreal. Late in 1813, after much argument, the Americans made two thrusts against Montreal. Taking Montreal would have cut off the British forces in Upper Canada and thus potentially changed the war. The plan eventually agreed upon was for Major General Wade Hampton to march north from Lake Champlain and join a force under General James Wilkinson that would embark in boats and sail from Sackett's Harbor on Lake Ontario and descend the St. Lawrence. Hampton was delayed by bad roads and supply problems and also had an intense dislike of Wilkinson, which limited his desire to support the plan. On October 25, his 4,000-strong force was defeated at the Chateauguay River by Charles de Salaberry's smaller force of Canadian Voltigeurs and Mohawks. Salaberry's force of Lower Canada militia and Indians numbered only 339, but held a strong defensive position. Wilkinson's force of 8,000 set out on October 17, but was also delayed by bad weather. After learning that Hampton had been checked, Wilkinson heard that a British force under Captain William Mulcaster and Lieutenant Colonel Joseph Wanton Morrison was pursuing him, and by November 10, he was forced to land near Morrisburg, about 150 kilometres (90 mi) from Montreal. On November 11, Wilkinson's rear guard, numbering 2,500, attacked Morrison's force of 800 at Crysler's Farm and was repulsed with heavy losses. After learning that Hampton could not renew his advance, Wilkinson retreated to the U.S. and settled into winter quarters. He resigned his command after a failed attack on a British outpost at Lacolle Mills. Had the Americans taken Montreal as planned, Upper Canada would almost certainly have been lost, and its loss would have been the greatest British defeat in the Canadas during the war.
Rather than trying to take Montreal or Kingston, the Americans chose again to invade the Niagara frontier to take Upper Canada. The Americans had occupied southwestern Upper Canada after their victory in Moraviantown, and they believed taking the rest of the province would force the British to cede it to them. The end of the war in Europe in April 1814 meant the British could now redeploy their Army to North America, so the Americans were anxious to secure Upper Canada to negotiate from a position of strength. They planned to invade via the Niagara frontier while sending another force to recapture Mackinac; the British were supplying the Indians in the Old Northwest from Montreal via Mackinac. By the middle of 1814, American generals, including Major Generals Jacob Brown and Winfield Scott, had drastically improved the fighting abilities and discipline of the army. The Americans renewed their attack on the Niagara peninsula and quickly captured Fort Erie on July 3, 1814, with the garrison of 170 quickly surrendering to the 5,000 Americans. General Phineas Riall rushed towards the frontier and, unaware of Fort Erie 's fall or the size of the American force, chose to engage in battle. Winfield Scott then gained a victory over an inferior British force at the Battle of Chippawa on July 5. The Americans brought out overwhelming firepower against the attacking British, who lost about 600 dead to the 350 dead on the American side. An attempt to advance further ended with a hard - fought but inconclusive Battle of Lundy 's Lane on July 25. Both sides stood their ground, but after the battle, the American commander, General Jacob Brown, pulled back to Fort George and the British did not pursue them.
The outnumbered Americans withdrew but withstood a prolonged Siege of Fort Erie. The British tried to storm Fort Erie on August 14, 1814, but suffered heavy losses, losing 950 killed, wounded and captured compared to only 84 dead and wounded on the American side. The British were further weakened by exposure and shortage of supplies in their siege lines. Eventually they raised the siege, but American Major General George Izard took over command on the Niagara front and followed up only halfheartedly. An American raid along the Grand River destroyed many farms and weakened British logistics. In October 1814 the Americans advanced into Upper Canada and engaged in skirmishes at Cook's Mill, but pulled back when they heard that the new British warship HMS St. Lawrence, armed with 104 guns and launched in Kingston that September, was on its way. The Americans lacked provisions, and eventually they destroyed Fort Erie and retreated across the Niagara.
Meanwhile, following the abdication of Napoleon, 15,000 British troops were sent to North America under four of Wellington 's ablest brigade commanders. Fewer than half were veterans of the Peninsula and the rest came from garrisons. Prévost was ordered to neutralize American power on the lakes by burning Sackets Harbor to gain naval control of Lake Erie, Lake Ontario and the Upper Lakes, and to defend Lower Canada from attack. He did defend Lower Canada but otherwise failed to achieve his objectives. Given the late season, he decided to invade New York State. His army outnumbered the American defenders of Plattsburgh, but he was worried about his flanks, so he decided he needed naval control of Lake Champlain. On the lake, the British squadron under Captain George Downie and the Americans under Master Commandant Thomas Macdonough were more evenly matched.
Upon reaching Plattsburgh, Prévost delayed the assault until the arrival of Downie in the hastily completed 36-gun frigate HMS Confiance. Prévost forced Downie into a premature attack, but then unaccountably failed to provide the promised military backing. Downie was killed and his naval force defeated at the naval Battle of Plattsburgh in Plattsburgh Bay on September 11, 1814. The Americans now had control of Lake Champlain; Theodore Roosevelt later termed it "the greatest naval battle of the war". The successful land defence was led by Alexander Macomb. To the astonishment of his senior officers, Prévost then turned back, saying it was too hazardous to remain on enemy territory after the loss of naval supremacy. Prévost was recalled, and in London a naval court-martial decided that the defeat had been caused principally by Prévost's urging the squadron into premature action and then failing to afford the promised support from the land forces. Prévost died suddenly, just before his own court-martial was to convene. Prévost's reputation sank to a new low, as Canadians claimed that their militia under Brock had done the job and he had failed. Recently, however, historians have been kinder, measuring him not against Wellington but against his American foes. They judge Prévost's preparations for defending the Canadas with limited means to have been energetic, well conceived, and comprehensive; against the odds, he had achieved the primary objective of preventing an American conquest.
To the east, the northern part of Massachusetts, soon to become Maine, was invaded. Fort Sullivan at Eastport was taken by Sir Thomas Hardy on July 11. Castine, Hampden, Bangor, and Machias were taken, and Castine became the main British base until April 15, 1815, when the British left, taking £10,750 in tariff duties, the "Castine Fund", which was later used to found Dalhousie University. Eastport was not returned to the United States until 1818.
The Mississippi River valley was the western frontier of the United States in 1812. The territory acquired in the Louisiana Purchase of 1803 contained almost no U.S. settlements west of the Mississippi except around Saint Louis and a few forts and trading posts. Fort Belle Fontaine, an old trading post converted to a U.S. Army post in 1804, served as regional headquarters. Fort Osage, built in 1808 along the Missouri River, was the westernmost U.S. outpost; it was abandoned at the start of the war. Fort Madison, also built in 1808, along the Mississippi in what is now Iowa, had been repeatedly attacked by British-allied Sauk since its construction. In September 1813 Fort Madison was abandoned after it was attacked and besieged by natives who had support from the British. This was one of the few battles fought west of the Mississippi. Black Hawk played a leadership role.
Little of note took place on Lake Huron in 1813, but the American victory on Lake Erie and the recapture of Detroit isolated the British there. During the ensuing winter, a Canadian party under Lieutenant Colonel Robert McDouall established a new supply line from York to Nottawasaga Bay on Georgian Bay. When he arrived at Fort Mackinac with supplies and reinforcements, he sent an expedition to recapture the trading post of Prairie du Chien in the far west. The Siege of Prairie du Chien ended in a British victory on July 20, 1814.
Earlier in July, the Americans sent a force of five vessels from Detroit to recapture Mackinac. A mixed force of regulars and volunteers from the militia landed on the island on August 4. They did not attempt to achieve surprise, and at the brief Battle of Mackinac Island, they were ambushed by natives and forced to re-embark. The Americans discovered the new base at Nottawasaga Bay, and on August 13, they destroyed its fortifications and the schooner Nancy that they found there. They then returned to Detroit, leaving two gunboats to blockade Mackinac. On September 4, these gunboats were taken unawares and captured by British boarding parties from canoes and small boats. These Engagements on Lake Huron left Mackinac under British control.
The British garrison at Prairie du Chien also fought off another attack by Major Zachary Taylor. In this distant theatre, the British retained the upper hand until the end of the war, through the allegiance of several indigenous tribes that received British gifts and arms, enabling them to take control of parts of what is now Michigan and Illinois, as well as the whole of modern Wisconsin. In 1814 U.S. troops retreating from the Battle of Credit Island on the upper Mississippi attempted to make a stand at Fort Johnson, but the fort was soon abandoned, along with most of the upper Mississippi valley.
After being pushed out of the upper Mississippi region, the Americans held on to eastern Missouri and the St. Louis area. Two notable battles fought against the Sauk were the Battle of Cote Sans Dessein, in April 1815, at the mouth of the Osage River in the Missouri Territory, and the Battle of the Sink Hole, in May 1815, near Fort Cap au Gris.
At the conclusion of peace, Mackinac and other captured territory was returned to the United States. At the end of the war, some British officers and Canadians objected to handing back Prairie du Chien and especially Mackinac under the terms of the Treaty of Ghent. However, the Americans retained the captured post at Fort Malden, near Amherstburg, until the British complied with the treaty.
Fighting between Americans, the Sauk, and other indigenous tribes continued through 1817, well after the war ended in the east.
In 1812, Britain's Royal Navy was the world's largest, with over 600 cruisers in commission as well as smaller vessels. Although most of these were involved in blockading the French navy and protecting British trade against (usually French) privateers, the Royal Navy still had 85 vessels in American waters, counting all Royal Navy vessels in North American and Caribbean waters. However, the Royal Navy's North American squadron based in Halifax, Nova Scotia (which bore the brunt of the war), numbered one small ship of the line, seven frigates, nine smaller sloops and brigs, and five schooners. By contrast, the United States Navy comprised 8 frigates, 14 smaller sloops and brigs, and no ships of the line. The U.S. had embarked on a major shipbuilding program before the war at Sackets Harbor, New York, and continued to produce new ships. Three of the existing American frigates were exceptionally large and powerful for their class, larger than any British frigate in North America. The standard British frigate of the time was rated as a 38-gun ship, usually carrying up to 50 guns, with a main battery of 18-pounder guns; USS Constitution, President, and United States, in comparison, were rated as 44-gun ships, carrying 56 to 60 guns with a main battery of 24-pounders.
The British strategy was to protect their own merchant shipping to and from Halifax, Nova Scotia, and the West Indies, and to enforce a blockade of major American ports to restrict American trade. Because of its numerical inferiority, the American strategy was to cause disruption through hit-and-run tactics, such as the capture of prizes and engaging Royal Navy vessels only under favourable circumstances. Days after the formal declaration of war, however, the U.S. Navy put out two small squadrons, including the frigate President and the sloop Hornet under Commodore John Rodgers, and the frigates United States and Congress, with the brig Argus under Captain Stephen Decatur. These were initially concentrated as one unit under Rodgers, who intended to force the Royal Navy to concentrate its own ships to prevent isolated units being captured by his powerful force.
Large numbers of American merchant ships were returning to the United States with the outbreak of war, and if the Royal Navy was concentrated, it could not watch all the ports on the American seaboard. Rodgers' strategy worked, in that the Royal Navy concentrated most of its frigates off New York Harbor under Captain Philip Broke, allowing many American ships to reach home. But Rodgers' own cruise captured only five small merchant ships, and the Americans never subsequently concentrated more than two or three ships together as a unit.
Both American and British naval honor had been challenged in the lead-up to the war. The Chesapeake -- Leopard affair had left the United States insulted by the Royal Navy's impressment of sailors. Given that honor was at stake, the appropriate method for the United States Navy to redeem itself was by dueling. Similarly, British honor was challenged in the Little Belt affair, in which the British sloop HMS Little Belt was fired upon by the United States frigate President after President had mistaken Little Belt for the British frigate HMS Guerriere. Captain James Dacres of Guerriere began a cycle of frigate duels by challenging USS President to a single-ship duel to avenge the losses aboard Little Belt. Commodore John Rodgers of USS President declined the challenge because he feared the intervention of the rest of the British squadron, under Commodore Philip Broke, of which Guerriere was part.
Meanwhile, USS Constitution, commanded by Captain Isaac Hull, sailed from the Chesapeake Bay on July 12. On July 17, Commodore Broke's British squadron, which included Guerriere, gave chase off New York, but Constitution evaded her pursuers after two days. Constitution briefly called at Boston to replenish water. Commodore Broke detached Guerriere from his squadron to seek out repairs, as Guerriere, being a French-built ship, had weak scantlings (beams fastened with a thickened clamp rather than vertical and horizontal knees) and had therefore become leaky and rotten. Furthermore, she had been struck by lightning, severely damaging her masts. Constitution encountered and engaged Guerriere in a duel to redeem American honor. Captain Dacres was eager to engage the American frigate, as Constitution was the sister ship of President and would serve equally well as a ship to duel against to redeem British honor. USS Constitution had nearly 50 percent more men, more firepower, heavier tonnage and heavier scantlings (which determine how much damage enemy shot does to a ship) than Guerriere. Unsurprisingly, Constitution emerged the victor. After a 35-minute battle, Guerriere had been dismasted and captured, and was later burned. Constitution earned the nickname "Old Ironsides" following this battle, as many of the British cannonballs were seen to bounce off her hull due to her heavy scantlings. Hull returned to Boston with news of this significant victory.
Similarly, on October 25, USS United States, commanded by Captain Decatur, captured the British frigate HMS Macedonian, which he then carried back to port. At the close of the month, Constitution sailed south, now under the command of Captain William Bainbridge. On December 29, off Bahia, Brazil, she met the British frigate HMS Java. After a battle lasting three hours, Java struck her colors and was burned after being judged unsalvageable. Constitution at first seemed relatively undamaged in the battle, but it was later determined that Java had successfully hit Constitution's masts with 18-pounder shot; the mast had not fallen owing to its immense diameter, though it did fall while she was docked. United States, Constitution and President were all almost 50 percent larger by tonnage, crew, firepower, and scantling size than Macedonian, Guerriere and Java (Guerriere was rotten and had lightning damage as well as being weakly built as a French ship; Java had extra marines on board, making the disparity in crew smaller, although she too was a French-built ship; Macedonian fitted the 50 percent statistic near perfectly).
The United States Navy's sloops had also won several victories over Royal Navy sloops of approximately equal armament. The American sloops Hornet, Wasp (1807), Peacock, Wasp (1813), and Frolic were all ship-rigged, while the British Cruizer-class sloops they encountered were brig-rigged, which gave the Americans a significant advantage. Ship-rigged vessels are more maneuverable in battle because they have a wider variety of sails that can be backed, allowing them to wear as well as heave to. More significantly, if some spars are shot away on a brig, which finds it harder to wear, the brig loses the ability to steer, while a ship can adjust its more diverse canvas to compensate for the imbalance caused by battle damage. Furthermore, ship-rigged vessels, with three masts, simply have more masts that can be shot away than two-masted brigs before the vessel becomes unmanageable. In addition, while the American ships had experienced and well-drilled volunteer crews, the enormous size of the overstretched Royal Navy meant that many ships were shorthanded, the average quality of crews suffered, and the constant sea duties of those serving in North America interfered with their training and exercises. The only engagement between two brig-sloops was between the British Cruizer-class brig Pelican (1812) and the United States' Argus, in which Pelican emerged the victor, as she had greater firepower and tonnage despite having a smaller crew. Although not a sloop, the gun-brig Boxer was taken by the brig-sloop Enterprise in a bloody battle, which Enterprise won again through superior force.
It was clear that in single-ship battles, superior force was the most significant factor. In response to the majority of the American ships being of greater force than the British ships of the same class, Britain constructed five 40-gun, 24-pounder heavy frigates and two "spar-decked" frigates (the 60-gun HMS Leander and HMS Newcastle), and razeed three old 74-gun ships of the line to convert them into heavy frigates. To counter the American sloops of war, the British constructed the Cyrus-class ship-sloops of 22 guns. The British Admiralty also instituted a new policy that the three American heavy frigates should not be engaged except by a ship of the line or by frigates in squadron strength.
Commodore Philip Broke had lost Guerriere to Constitution from his very own squadron. He knew that Dacres of Guerriere had intended to duel the American frigate to avenge the losses on Little Belt caused by USS President in 1811. Since Constitution had taken Guerriere, Broke intended to redeem Dacres' honor by taking Constitution, which was undergoing repairs in Boston in early 1813. Broke found that Constitution was not ready for sea. Instead, he decided to challenge Chesapeake, as he was short on water and provisions and could not wait for Constitution. Captain James Lawrence of Chesapeake was misled by American propaganda, which had successfully boosted morale by claiming that the three frigate duels of 1812 had been contests of equal force; this led Lawrence to believe that taking Broke's Shannon (1806) would be easy. Lawrence even went so far as to arrange in advance a banquet to be held for his victorious crew. Broke, on the other hand, had spent years training his crew and developing artillery innovations on his ship, making Shannon particularly well prepared for battle. On June 1, 1813, Shannon took Chesapeake in a duel that lasted less than fifteen minutes in Boston Harbor. Lawrence was mortally wounded and famously cried out, "Tell the men to fire faster! Don't give up the ship!" The two frigates were of near-identical armament and length. Chesapeake's crew was larger, and she had greater tonnage and greater scantling strength (which led the British to claim she was overbuilt), but many of her crew had not served or trained together. Shannon had been at sea for a long time, and her hull had begun to rot, further widening the disparity in scantling strength. Nevertheless, this engagement proved to be the only single-ship action of the War of 1812 in which both ships were of essentially equal force. British citizens reacted with celebration and relief that the run of American victories had ended. Notably, this action was, by ratio, one of the bloodiest contests recorded during the age of sail due to the close-range engagement, the boarding (hand-to-hand combat), and Broke's philosophy of artillery, "Kill the men and the ship is yours", with more dead and wounded than HMS Victory suffered in four hours of combat at Trafalgar. Lawrence died of his wounds, and Captain Broke was so badly wounded that he never again held a sea command. The Americans then did as the British had done in 1812 and banned single-ship duels after this engagement.
In January 1813, the American frigate Essex, under the command of Captain David Porter, sailed into the Pacific to harass British shipping. Many British whaling ships carried letters of marque allowing them to prey on American whalers, and they nearly destroyed the industry. Essex challenged this practice and inflicted considerable damage on British interests. Essex and her consort USS Essex Junior (armed with twenty guns) were captured off Valparaíso, Chile, by the British frigate HMS Phoebe and the sloop HMS Cherub on March 28, 1814, in what statistically appeared to be a battle of equal force: Essex and Phoebe were of similar tonnage, scantling, and broadside weight, as were Cherub and Essex Junior (except in scantling, Essex Junior being much more lightly built than Cherub). Once again the Americans had more men. Nevertheless, Phoebe was armed with long guns, which none of the other ships engaged had. Furthermore, Captain Hillyar had adopted Philip Broke's methods of artillery on Phoebe and Cherub, with tangent and dispart sights. This gave the British ships a significant advantage at the range at which the battle was fought, once again proving that superior force was the deciding factor.
To conclude the cycle of duels caused by the Little Belt affair, USS President was finally captured in January 1815. Unlike the previous engagements, President was not taken in a duel. Following the Royal Navy's new requirements, President was pursued by a squadron consisting of four frigates, one being a 56-gun razee. President was an extremely fast ship and successfully outsailed the fast British squadron with the exception of HMS Endymion, which has been regarded as the fastest ship of the age of fighting sail. Captain Henry Hope of Endymion had fitted his ship with Philip Broke's technology, as Captain Hillyar had done on Phoebe. This gave him a slight advantage at range and slowed President. Commodore Decatur on President had the advantage in scantling strength, firepower, crew, and tonnage, but not in maneuverability. Despite having fewer guns, Endymion was armed with 24-pounders, just like President. This meant that Endymion's shot could pierce the hull of President, unlike Guerriere's, which had bounced off Constitution's hull, or Java's, which had failed to cut through Constitution's masts. Following Broke's philosophy of "Kill the men and the ship is yours", Endymion fired into President's hull, severely damaging her (shot holes below the waterline, 10 of 15 starboard guns on the gun deck disabled, water in the hold, and shot from Endymion found inside President's magazine). Decatur knew his only hope was to disable Endymion and sail away from the rest of the squadron. When he failed, he surrendered his ship to "the captain of the black frigate" (Endymion). Decatur then took advantage of the fact that Endymion had no intact boats and attempted to sneak away under the cover of night, only to be caught by HMS Pomone. Decatur surrendered without a fight. He had surrendered the United States' finest frigate and flagship, President, to a smaller ship that was, however, part of a squadron of greater force.
Decatur gave unreliable accounts of the battle, stating that President was already "severely damaged" by a grounding before the engagement, yet undamaged after the engagement with Endymion. He stated that Pomone caused "significant" losses aboard President, although President's crew claimed they were below deck gathering their belongings, as the ship had already surrendered. Despite saying "I surrender my ship to the captain of the black frigate", Decatur also wrote that he said, "I surrender to the squadron". Many historians, such as Ian Toll, Theodore Roosevelt, and William James, quote Decatur's remarks to argue either that Endymion alone took President or that President surrendered to the whole squadron, when in fact it was something in between. These arguments show how much honor mattered in the last frigate engagement of the War of 1812: the question of how President was taken bears only on honor, since the outcome, that President was taken, is not in dispute. British honor was finally redeemed, as President was taken and Little Belt avenged. Furthermore, President was used as an example to prove to the British that the American 44-gun frigates were nowhere near evenly matched with the standard British 38-gun frigates that had been taken in 1812. Nevertheless, this was not a duel (a point stressed by Decatur), the United States still maintained greater success in frigate duels, and American honor was maintained. Both countries therefore emerged with the sense that they had redeemed their honor.
Success in single-ship battles raised American morale after the repeated failed invasion attempts in Upper and Lower Canada. However, these victories had no military effect on the war at sea, as they did not alter the balance of naval power, impede British supplies and reinforcements, or even raise insurance rates for British trade. During the war, the United States Navy captured 165 British merchantmen (although privateers captured many more), while the Royal Navy captured 1,400 American merchantmen. More significantly, the British blockade of the Atlantic coast kept the majority of American warships unable to put to sea and devastated the United States economy.
The operations of American privateers proved a more significant threat to British trade than the U.S. Navy. They operated throughout the Atlantic and continued until the close of the war, most notably from ports such as Baltimore. American privateers reported taking 1,300 British merchant vessels, compared to 254 taken by the U.S. Navy, although the insurer Lloyd's of London reported that only 1,175 British ships were taken, 373 of which were recaptured, for a total loss of 802. The Canadian historian Carl Benn wrote that American privateers took 1,344 British ships, of which 750 were retaken by the British. However, the British were able to limit privateering losses by the strict enforcement of convoy by the Royal Navy and by capturing 278 American privateers. Due to the massive size of the British merchant fleet, American captures only affected 7.5% of the fleet, resulting in no supply shortages or lack of reinforcements for British forces in North America. Of 526 American privateers, 148 were captured by the Royal Navy and only 207 ever took a prize.
Due to the large size of their navy, the British did not rely as much on privateering. The majority of the 1,407 captured American merchant ships were taken by the Royal Navy. The war was the last time the British allowed privateering, since the practice was coming to be seen as politically inexpedient and of diminishing value in maintaining naval supremacy. However, privateering remained popular in the British colonies. It was the last hurrah for privateers in Bermuda, who vigorously returned to the practice after their experience in previous wars. The nimble Bermuda sloops captured 298 American ships. Privateer schooners based in British North America, especially from Nova Scotia, took 250 American ships and proved especially effective in crippling American coastal trade and capturing American ships closer to shore than the Royal Navy cruisers.
The naval blockade of the United States began informally in 1812 and expanded to cut off more ports as the war progressed. Twenty ships were on station in 1812 and 135 were in place by the end of the conflict. In March 1813, the Royal Navy punished the Southern states, which had been most vocal about annexing British North America, by blockading Charleston, Port Royal, Savannah and New York City as well. As additional ships were sent to North America in 1813, the Royal Navy was able to tighten the blockade and extend it, first to the coast south of Narragansett by November 1813 and then to the entire American coast on May 31, 1814. In May 1814, following the abdication of Napoleon and the end of the supply problems with Wellington's army, New England was blockaded.
The British government, having need of American foodstuffs for its army in Spain, benefited from the willingness of the New Englanders to trade with them, so no blockade of New England was at first attempted. The Delaware River and Chesapeake Bay were declared in a state of blockade on December 26, 1812. Illicit trade was carried on by collusive captures arranged between American traders and British officers. American ships were fraudulently transferred to neutral flags. Eventually, the U.S. government was driven to issue orders to stop illicit trading; this put only a further strain on the commerce of the country. The overpowering strength of the British fleet enabled it to occupy the Chesapeake and to attack and destroy numerous docks and harbours.
The blockade of American ports later tightened to the extent that most American merchant ships and naval vessels were confined to port. The American frigates USS United States and USS Macedonian attempted to set sail to raid British shipping in the Caribbean but were forced to turn back when confronted with a British squadron, and they ended the war blockaded and hulked in New London, Connecticut. By the end of the war, the United States had six frigates and four ships-of-the-line sitting in port. Some merchant ships were based in Europe or Asia and continued operations. Others, mainly from New England, were issued licences to trade by Admiral Sir John Borlase Warren, commander in chief on the American station in 1813. This allowed Wellington's army in Spain to receive American goods and helped maintain the New Englanders' opposition to the war. The blockade nevertheless resulted in American exports decreasing from $130 million in 1807 to $7 million in 1814. Most of these were food exports that ironically went to supply their enemies in Britain or the British colonies. The blockade had a devastating effect on the American economy, with the value of American exports and imports falling from $114 million in 1811 down to $20 million by 1814, while the U.S. Customs took in $13 million in 1811 and $6 million in 1814, despite the fact that Congress had voted to double the rates. The British blockade further damaged the American economy by forcing merchants to abandon the cheap and fast coastal trade for the slow and more expensive inland roads. In 1814, only 1 out of 14 American merchantmen risked leaving port, as there was a high probability that any ship leaving port would be seized.
As the Royal Navy base that supervised the blockade, Halifax profited greatly during the war. From that base British privateers seized many French and American ships and sold their prizes in Halifax.
The British Royal Navy's blockades and raids allowed about 4,000 African Americans to escape slavery by fleeing American plantations to find freedom aboard British ships; those who settled in Canada became known as the Black Refugees. The blockading British fleet in Chesapeake Bay received increasing numbers of enslaved black Americans during 1813. By British government order they were treated as free persons when reaching British hands. Alexander Cochrane's proclamation of April 2, 1814, invited Americans who wished to emigrate to join the British and, though it did not explicitly mention slaves, was taken by all as addressed to them. About 2,400 of the escaped slaves and their families who were carried on ships of the Royal Navy following their escape settled in Nova Scotia and New Brunswick during and after the war. From May 1814, younger men among the volunteers were recruited into a new Corps of Colonial Marines. They fought for Britain throughout the Atlantic campaign, including the Battle of Bladensburg, the attacks on Washington, D.C. and the Battle of Baltimore, later settling in Trinidad after rejecting British government orders for transfer to the West India Regiments, forming the community of the Merikins. The slaves who escaped to the British represented the largest emancipation of African Americans before the American Civil War.
Maine, then part of Massachusetts, was a base for smuggling and illegal trade between the U.S. and the British. Until 1813 the region was generally quiet except for privateer actions near the coast. In September 1813, there was a notable naval action when the U.S. Navy's brig Enterprise fought and captured the Royal Navy brig Boxer off Pemaquid Point. The first British assault came in July 1814, when Sir Thomas Masterman Hardy took Moose Island (Eastport, Maine) without a shot, with the entire American garrison of Fort Sullivan -- which became the British Fort Sherbrooke -- surrendering. Next, from his base in Halifax, Nova Scotia, in September 1814, Sir John Coape Sherbrooke led 3,000 British troops in the "Penobscot Expedition". In 26 days, he raided and looted Hampden, Bangor, and Machias, destroying or capturing 17 American ships. He won the Battle of Hampden (losing two killed while the Americans lost one killed). Retreating American forces were forced to destroy the frigate Adams. The British occupied the town of Castine and most of eastern Maine for the rest of the war, re-establishing the colony of New Ireland. The Treaty of Ghent returned this territory to the United States, though Machias Seal Island has remained in dispute. The British left in April 1815, at which time they took £10,750 obtained from tariff duties at Castine. This money, called the "Castine Fund", was used to establish Dalhousie University, in Halifax, Nova Scotia.
The strategic location of the Chesapeake Bay near America's new national capital, Washington, D.C., on the major tributary of the Potomac River, made it a prime target for the British. Starting in March 1813, a squadron under Rear Admiral George Cockburn blockaded the mouth of the Bay at Hampton Roads harbour and raided towns along the Bay from Norfolk, Virginia, to Havre de Grace, Maryland.
On July 4, 1813, Commodore Joshua Barney, a Revolutionary War naval hero, convinced the Navy Department to build the Chesapeake Bay Flotilla, a squadron of twenty barges powered by small sails or oars (sweeps) to defend the Chesapeake Bay. Launched in April 1814, the squadron was quickly cornered in the Patuxent River, and while successful in harassing the Royal Navy, it was powerless to stop the British campaign that ultimately led to the "Burning of Washington". This expedition, led by Cockburn and General Robert Ross, was carried out between August 19 and 29, 1814, as the result of the hardened British policy of 1814. As part of this, Admiral Warren had been replaced as commander in chief by Admiral Alexander Cochrane, with reinforcements and orders to coerce the Americans into a favourable peace.
A force of 2,500 soldiers under General Ross had just arrived in Bermuda aboard HMS Royal Oak, three frigates, three sloops and ten other vessels. Released from the Peninsular War by victory, the British intended to use them for diversionary raids along the coasts of Maryland and Virginia. In response to Prévost 's request, they decided to employ this force, together with the naval and military units already on the station, to strike at the national capital.
On August 24, U.S. Secretary of War John Armstrong Jr. insisted that the British would attack Baltimore rather than Washington, even as British army and naval units were obviously on their way to Washington. The inexperienced state militia was easily routed in the Battle of Bladensburg, opening the route to Washington. While First Lady Dolley Madison saved valuables from what is now the "White House", senior officials fled to Virginia. Secretary of the Navy William Jones ordered the Washington Navy Yard set on fire to prevent the capture of supplies. The nation's public buildings were destroyed by the British (and by a furious thunderstorm that ruined a great deal of property, although it also quenched the flames). American morale was challenged, and many Federalists swung around and rallied to a patriotic defense of their homeland.
The British moved on to their major target, the heavily fortified major city of Baltimore. They delayed their movement allowing Baltimore an opportunity to strengthen the fortifications and bring in new federal troops and state militia units. The "Battle for Baltimore '' began with the British landing on September 12, 1814, at North Point, where they were met by American militia further up the "Patapsco Neck '' peninsula. An exchange of fire began, with casualties on both sides. The British Army commander Major Gen. Robert Ross was killed by snipers. The British paused, then continued to march northwestward to face the stationed Maryland and Baltimore City militia units at "Godly Wood. '' The Battle of North Point was fought for several afternoon hours in a musketry and artillery duel. The British also planned to simultaneously attack Baltimore by water on the following day, September 13, to support their military facing the massed, heavily dug - in and fortified American units of approximately 15,000 with about a hundred cannon gathered along the eastern heights of the city named "Loudenschlager 's Hill '' (later "Hampstead Hill '' -- now part of Patterson Park). The Baltimore defences had been planned in advance and overseen by the state militia commander, Maj. Gen. Samuel Smith. The Royal Navy was unable to reduce Fort McHenry at the entrance to Baltimore Harbor in support of an attack from the northeast by the British Army.
The British naval guns, mortars and new "Congreve rockets" had a longer range than the American cannon onshore. The ships mostly stood out of range of the Americans, who returned very little fire. The fort was not heavily damaged, except for a burst over a rear brick wall that knocked out some field pieces, and there were few casualties. The British eventually realized that they could not force the passage to attack Baltimore in coordination with the land force. A last-ditch night feint and barge attack during a heavy rainstorm was led by Capt. Charles Napier around the fort, up the Middle Branch of the river to the west. Split and partly misdirected in the storm, it turned back after suffering heavy casualties from the alert gunners of Fort Covington and Battery Babcock. The British called off the attack and sailed downriver to pick up their army, which had retreated from the east side of Baltimore. All the lights were extinguished in Baltimore the night of the attack, and the fort was bombarded for 25 hours. The only light was given off by the exploding shells over Fort McHenry, illuminating the flag that was still flying over the fort. The defence of the fort inspired the American lawyer Francis Scott Key to write "Defence of Fort M'Henry", a poem that was later set to music as "The Star-Spangled Banner".
Before 1813, the war between the Creeks (or Muscogee) had been largely an internal affair sparked by the ideas of Tecumseh farther north in the Mississippi Valley. A faction known as the Red Sticks, so named for the color of their war paint, had broken away from the rest of the Creek Confederacy, which wanted peace with the United States. The Red Sticks were allied with Tecumseh, who about a year before 1813 had visited the Creeks and encouraged greater resistance to the Americans. The Creek Nation was a trading partner of the United States actively involved with Spanish and British trade as well. The Red Sticks, as well as many southern Muscogeean people like the Seminole, had a long history of alliance with the Spanish and British Empires. This alliance helped the North American and European powers protect each other 's claims to territory in the south.
The Battle of Burnt Corn, between Red Sticks and U.S. troops, occurred in the southern parts of Alabama on July 27, 1813. It prompted the state of Georgia as well as the Mississippi Territory militia to take immediate major action against Creek offensives. The Red Stick chiefs gained power in the east along the Alabama, Coosa, and Tallapoosa Rivers -- Upper Creek territory. The Lower Creek lived along the Chattahoochee River. Many Creeks tried to remain friendly to the United States, and some were organized by federal Indian Agent Benjamin Hawkins to aid the 6th Military District under General Thomas Pinckney and the state militias. The combined United States forces were large. At its peak the Red Stick faction had 4,000 warriors, only a quarter of whom had muskets.
On August 30, 1813, the Red Sticks, led by chiefs Red Eagle and Peter McQueen, attacked Fort Mims, north of Mobile, the only American-held port in the territory of West Florida. The attack on Fort Mims resulted in the deaths of 400 settlers and became an ideological rallying point for the Americans.
The Indian frontier of western Georgia was the most vulnerable but was already partially fortified. From November 1813 to January 1814, Georgia's militia and auxiliary Federal troops -- from the Creek and Cherokee Indian nations and the states of North Carolina and South Carolina -- organized the fortification of defences along the Chattahoochee River and expeditions into Upper Creek territory in present-day Alabama. The army, led by General John Floyd, went to the heart of the "Creek Holy Grounds" and won a major offensive against one of the largest Creek towns at the Battle of Autosee, killing an estimated two hundred people. In November, the Mississippi militia, with a combined 1,200 troops, attacked the "Econachca" encampment ("Battle of Holy Ground") on the Alabama River. Tennessee raised a militia of 5,000 under Major General Andrew Jackson and Brigadier General John Coffee and won the battles of Tallushatchee and Talladega in November 1813.
Jackson suffered enlistment problems in the winter. He decided to combine his force with that of the Georgia militia. However, from January 22 to 24, 1814, while on their way, the Tennessee militia and allied Muscogee were attacked by the Red Sticks at the Battles of Emuckfaw and Enotachopo Creek. Jackson's troops repelled the attackers but, outnumbered, were forced to withdraw to their base at Fort Strother.
In January Floyd 's force of 1,300 state militia and 400 Creek Indians moved to join the U.S. forces in Tennessee, but were attacked in camp on the Calibee Creek by Tukabatchee Indians on the 27th.
Jackson's force increased in numbers with the arrival of U.S. Army soldiers, and a second draft of Tennessee state militia and Cherokee and Creek allies swelled his army to around 5,000. In March 1814 they moved south to attack the Creek. On March 27, Jackson decisively defeated the Creek Indian force at Horseshoe Bend, killing 800 of 1,000 Creeks at a cost of 49 killed and 154 wounded out of approximately 2,000 American and Cherokee forces. The American army then moved to Fort Jackson on the Alabama River. On August 9, 1814, the Upper Creek chiefs and Jackson's army signed the "Treaty of Fort Jackson". Most of western Georgia and part of Alabama were taken from the Creeks to pay for expenses borne by the United States. The Treaty also "demanded" that the "Red Stick" insurgents cease communicating with the Spanish or British, and trade only with U.S.-approved agents.
British aid to the Red Sticks arrived after the end of the Napoleonic Wars in April 1814 and after Admiral Sir Alexander Cochrane assumed command from Admiral Warren in March. The Creek promised "to join any body of troops that should aid them in regaining their lands", and suggested an attack on the tower off Mobile. In April 1814 the British established an outpost on the Apalachicola River (see Fort Gadsden Historic Site). Cochrane sent a company of Royal Marines, the vessels HMS Hermes and HMS Carron, commanded by Edward Nicolls, and further supplies to meet the Indians. In addition to training the Indians, Nicolls was tasked with raising a force from escaped slaves, as part of the Corps of Colonial Marines.
In July 1814, General Jackson complained to the Governor of Pensacola, Mateo González Manrique, that combatants from the Creek War were being harboured in Spanish territory, and made reference to the British presence on Spanish soil. Although he gave an angry reply to Jackson, Manrique was alarmed at the weak position he found himself in. He appealed to the British for help, with Woodbine arriving on July 28, and Nicolls arriving at Pensacola on August 24.
The first engagement of the British and their Creek allies against the Americans on the Gulf Coast was the attack on Fort Bowyer September 14, 1814. Captain William Percy tried to take the U.S. fort, hoping to then move on Mobile and block U.S. trade and encroachment on the Mississippi. After the Americans repulsed Percy 's forces, the British established a military presence of up to 200 Marines at Pensacola. In November, Jackson 's force of 4,000 men took the town. This underlined the superiority of numbers of Jackson 's force in the region. The U.S. force moved to New Orleans in late 1814. Jackson 's army of 1,000 regulars and 3,000 to 4,000 militia, pirates and other fighters, as well as civilians and slaves built fortifications south of the city.
American forces under General James Wilkinson, who was himself earning $4,000 per year as a Spanish secret agent, took the Mobile area -- formerly part of West Florida -- from the Spanish in March 1813; this was the only territory permanently gained by the U.S. during the war. The Americans built Fort Bowyer, a log and earthenwork fort with 14 guns, on Mobile Point.
At the end of 1814, the British launched a double offensive in the South weeks before the Treaty of Ghent was signed. On the Atlantic coast, Admiral George Cockburn was to close the Intracoastal Waterway trade and land Royal Marine battalions to advance through Georgia to the western territories. On the Gulf coast, Admiral Alexander Cochrane moved on the new state of Louisiana and the Mississippi Territory. Admiral Cochrane 's ships reached the Louisiana coast December 9, and Cockburn arrived in Georgia December 14.
On January 8, 1815, a British force of 8,000 under General Edward Pakenham attacked Jackson 's defences in New Orleans. The Battle of New Orleans was an American victory, as the British failed to take the fortifications on the East Bank. The British suffered high casualties: 291 dead, 1262 wounded, and 484 captured or missing whereas American casualties were 13 dead, 39 wounded, and 19 missing. It was hailed as a great victory across the U.S., making Jackson a national hero and eventually propelling him to the presidency. The American garrison at Fort St. Philip endured ten days of bombardment from Royal Navy guns, which was a final attempt to invade Louisiana; British ships sailed away from the Mississippi River on January 18. However, it was not until January 27, 1815, that the army had completely rejoined the fleet, allowing for their departure.
After New Orleans, the British tried to take Mobile a second time; General John Lambert laid siege for five days and took the fort, winning the Second Battle of Fort Bowyer on February 12, 1815. HMS Brazen brought news of the Treaty of Ghent the next day, and the British abandoned the Gulf coast.
In January 1815, Admiral Cockburn succeeded in blockading the southeastern coast by occupying Camden County, Georgia. The British quickly took Cumberland Island, Fort Point Peter, and Fort St. Tammany in a decisive victory. Under the orders of his commanding officers, Cockburn's forces relocated many refugee slaves, capturing St. Simons Island as well in order to do so. During the invasion of the Georgia coast, an estimated 1,485 people chose to relocate to British territories or join the military. In mid-March, several days after being informed of the Treaty of Ghent, British ships finally left the area.
By 1814, both sides had either achieved their main war goals or were weary of a costly war that offered little but stalemate. They both sent delegations to a neutral site in Ghent, Flanders (now part of Belgium). The negotiations began in early August and concluded on December 24, when a final agreement was signed; both sides had to ratify it before it could take effect. Meanwhile, both sides planned new invasions.
In 1814 the tightened British blockade brought the American economy to near bankruptcy, forcing the government to rely on loans for the rest of the war. American foreign trade was reduced to a trickle. The parlous American economy was thrown into chaos, with prices soaring and unexpected shortages causing hardship in New England, where secession was being discussed. The Hartford Convention led to widespread fears that the New England states might attempt to leave the Union; these fears were exaggerated, as most New Englanders did not wish to secede and merely wanted an end to a war that was bringing much economic hardship, but the convention suggested that continuation of the war might threaten the union. To a lesser extent, British interests in the West Indies and Canada that had depended on that trade were also hurt. Although American privateers found chances of success much reduced, with most British merchantmen now sailing in convoy, privateering continued to prove troublesome to the British, as shown by high insurance rates. British landowners grew weary of high taxes, and colonial interests and merchants called on the government to reopen trade with the U.S. by ending the war.
At last in August 1814, peace discussions began in the neutral city of Ghent. Both sides began negotiations warily. The British diplomats stated their case first, demanding the creation of an Indian barrier state in the American Northwest Territory (the area from Ohio to Wisconsin). It was understood the British would sponsor this Indian state. The British strategy for decades had been to create a buffer state to block American expansion. Britain demanded naval control of the Great Lakes and access to the Mississippi River. The Americans refused to consider a buffer state and the proposal was dropped. Although article IX of the treaty included provisions to restore to Natives "all possessions, rights and privileges which they may have enjoyed, or been entitled to in 1811 '', the provisions were unenforceable; the British did not try and the Americans simply broke the treaty. The Americans (at a later stage) demanded damages for the burning of Washington and for the seizure of ships before the war began.
American public opinion was outraged when Madison published the demands; even the Federalists were now willing to fight on. The British had planned three invasions. One force burned Washington but failed to capture Baltimore, and sailed away when its commander was killed. In northern New York State, 10,000 British veterans were marching south until a decisive defeat at the Battle of Plattsburgh forced them back to Canada. Nothing was yet known of the fate of the third large invasion force, aimed at capturing New Orleans and the southwest. The Prime Minister wanted the Duke of Wellington to command in Canada and take control of the Great Lakes. Wellington said that he would go to America, but he believed he was needed in Europe. Wellington emphasized that the war was a draw and that the peace negotiations should not make territorial demands:
I think you have no right, from the state of war, to demand any concession of territory from America... You have not been able to carry it into the enemy's territory, notwithstanding your military success and now undoubted military superiority, and have not even cleared your own territory on the point of attack. You cannot on any principle of equality in negotiation claim a cession of territory except in exchange for other advantages which you have in your power... Then if this reasoning be true, why stipulate for the uti possidetis? You can get no territory: indeed, the state of your military operations, however creditable, does not entitle you to demand any.
The Prime Minister, Lord Liverpool, aware of growing opposition to wartime taxation and of the demands of Liverpool and Bristol merchants to reopen trade with America, realized Britain also had little to gain and much to lose from prolonged warfare, especially given the growing concern about the situation in Europe. After months of negotiations, against the background of changing military victories, defeats and losses, the parties finally realized that their nations wanted peace and there was no real reason to continue the war. The main focus of British foreign policy was the Congress of Vienna, at which British diplomats had clashed with Russian and Prussian diplomats over the terms of the peace with France, and there were fears that Britain might have to go to war with Russia and Prussia. Each side was now tired of the war. Export trade was all but paralyzed, and after Napoleon fell in 1814 France was no longer an enemy of Britain, so the Royal Navy no longer needed to stop American shipments to France and no longer needed to impress more seamen. It had thus ended the practices that so angered the Americans in 1812. The British were preoccupied with rebuilding Europe after the apparent final defeat of Napoleon.
British negotiators, urged by Lord Liverpool to offer the status quo, dropped their demands for the creation of an Indian barrier state, which was in any case hopeless after the collapse of Tecumseh's alliance. This allowed negotiations to resume at the end of October. British diplomats soon offered the status quo to the U.S. negotiators, who accepted it. Prisoners were to be exchanged, and captured slaves were to be returned to the United States or paid for by Britain. At this point the number of such slaves was approximately 6,000. Britain eventually refused the demand, allowing many of them to emigrate to either Canada or Trinidad.
On December 24, 1814 the diplomats had finished and signed the Treaty of Ghent. The treaty was ratified by the British three days later on December 27 and arrived in Washington on February 17, where it was quickly ratified and went into effect, thus finally ending the war. The terms called for all occupied territory to be returned, the prewar boundary between Canada and the United States to be restored, and the Americans were to gain fishing rights in the Gulf of Saint Lawrence.
The Treaty of Ghent failed to secure official British acknowledgement of American maritime rights or to end impressment. However, in the century of peace that lasted until World War I, these rights were not seriously violated. The defeat of Napoleon made irrelevant all of the naval issues over which the United States had fought. The Americans had achieved their goal of ending the Indian threat; furthermore, the American armies had scored enough victories (especially at New Orleans) to satisfy honour and the sense of becoming fully independent from Britain.
British losses in the war were about 1,160 killed in action and 3,679 wounded; 3,321 British died from disease. American losses were 2,260 killed in action and 4,505 wounded. While the number of Americans who died from disease is not known, it is estimated that about 15,000 died from all causes directly related to the war. These figures do not include deaths among Canadian militia forces or losses among native tribes.
There have been no estimates of the cost of the American war to Britain, but it did add some £ 25 million to the national debt. In the U.S., the cost was $105 million, about the same as the cost to Britain. The national debt rose from $45 million in 1812 to $127 million by the end of 1815, although by selling bonds and treasury notes at deep discounts -- and often for irredeemable paper money due to the suspension of specie payment in 1814 -- the government received only $34 million worth of specie. Stephen Girard, the richest man in America at the time, was one of those who personally funded the United States government involvement in the war.
In addition, at least 3,000 American slaves escaped to the British lines. Many other slaves simply escaped in the chaos of war and achieved their freedom on their own. The British settled some of the newly freed slaves in Nova Scotia. Four hundred freedmen were settled in New Brunswick. The Americans protested that Britain 's failure to return the slaves violated the Treaty of Ghent. After arbitration by the Tsar of Russia the British paid $1,204,960 in damages to Washington, which reimbursed the slaveowners.
In the United States, the economy grew every year from 1812 to 1815, despite a large loss of business by East Coast shipping interests. Prices were 15% higher -- inflated -- in 1815 compared to 1812, an annual inflation rate of 4.8%. The national economy grew from 1812 to 1815 at a rate of 3.7% a year, after accounting for inflation. Per capita GDP grew at 2.2% a year, after accounting for inflation. Hundreds of new banks were opened; they largely handled the loans that financed the war, since tax revenues were down. Money that would have been spent on foreign trade was diverted to opening new factories, which were profitable since British factory-made products were not for sale. This gave a major boost to the Industrial Revolution in the U.S., as typified by the Boston Associates. The Boston Manufacturing Company built the first integrated spinning and weaving factory in the world at Waltham, Massachusetts, in 1813.
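As a quick check of the annualized figure (a minimal arithmetic sketch, assuming the 15% price rise compounds evenly over the three years from 1812 to 1815):

\[ (1 + r)^3 = 1.15 \quad\Rightarrow\quad r = 1.15^{1/3} - 1 \approx 0.048, \]

which is roughly the 4.8% annual rate quoted above.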
During the 19th century the popular image of the war in the United States was of an American victory, and in Canada, of a Canadian victory. Each young country saw its self - perceived victory as an important foundation of its growing nationhood. The British, on the other hand, who had been preoccupied by Napoleon 's challenge in Europe, paid little attention to what was to them a peripheral and secondary dispute, a distraction from the principal task at hand.
In British North America, the War of 1812 was seen by Loyalists as a victory, as they had claimed they had successfully defended their country from an American takeover.
A long - term consequence of the Canadian militia 's success was the view widely held in Canada at least until the First World War that Canada did not need a regular professional army. While Canadian militia units had played instrumental roles in several engagements, such as at the Battle of the Chateauguay, it was the regular units of the British Army, including its "Fencible '' regiments which were recruited within North America, which ensured that Canada was successfully defended.
The U.S. Army had done poorly, on the whole, in several attempts to invade Canada, and the Canadians had fought bravely to defend their territory. But the British did not doubt that the thinly populated territory would remain vulnerable in a third war. "We can not keep Canada if the Americans declare war against us again '', Admiral Sir David Milne wrote to a correspondent in 1817, although the Rideau Canal was built for just such a scenario.
By the 21st century it was a forgotten war in Britain, although still remembered in Canada, especially Ontario. In a 2009 poll, 37 % of Canadians said the war was a Canadian victory, 9 % said the U.S. won, 15 % called it a draw, and 39 % said they knew too little to comment. A 2012 poll found that in a list of items that could be used to define Canadians ' identity, the belief that Canada successfully repelled an American invasion in the War of 1812 places second (25 %).
Today, American popular memory includes the British capture and burning of Washington in August 1814, which necessitated its extensive renovation. The fact that before the war many Americans had wanted to annex British North America was swiftly forgotten, and American popular memory instead focused on the victories at Baltimore, Plattsburgh and New Orleans to present the war as a successful effort to assert American national honour, a "second war of independence" in which the mighty British Empire was humbled and humiliated. In a speech before Congress on February 18, 1815, President Madison proclaimed the war a complete American victory. This interpretation of the war was and remains the dominant American view. The American newspaper the Niles Register, in an editorial on September 14, 1816, announced that the Americans had crushed the British, declaring "... we did virtually dictate the treaty of Ghent to the British". A minority of Americans, mostly associated with the Federalists, saw the war as a defeat and an act of folly on Madison's part, caustically asking why, if the Americans had "dictated" the terms of the Treaty of Ghent, the British Crown had not ceded British North America to the United States. However, the Federalist view of the war is not the mainstream American memory of it. The view of Congressman George Troup, who stated in a speech in 1815 that the Treaty of Ghent was "the glorious termination of the most glorious war ever waged by any people", is the way that most Americans remembered the war. Another memory is the successful American defence of Fort McHenry in September 1814, which inspired the lyrics of the U.S. national anthem, "The Star-Spangled Banner". The successful captains of the U.S. Navy became popular heroes, with plates bearing the likenesses of Decatur, Stewart, Hull, and others becoming popular items; ironically, many were made in England. The navy became a cherished institution, lauded for the victories that it won against all odds. After engagements during the final actions of the war, U.S. Marines had acquired a well-deserved reputation as excellent marksmen, especially in ship-to-ship actions.
Historians have differing and complex interpretations of the war. In recent decades the view of the majority of historians has been that the war ended in stalemate, with the Treaty of Ghent closing a war that had become militarily inconclusive. Neither side wanted to continue fighting since the main causes had disappeared and since there were no large lost territories for one side or the other to reclaim by force. Insofar as they see the war 's resolution as allowing two centuries of peaceful and mutually beneficial intercourse between the U.S., Britain and Canada, these historians often conclude that all three nations were the "real winners '' of the War of 1812. These writers often add that the war could have been avoided in the first place by better diplomacy. It is seen as a mistake for everyone concerned because it was badly planned and marked by multiple fiascoes and failures on both sides, as shown especially by the repeated American failures to seize parts of Canada, and the failed British attack on New Orleans and upstate New York.
However, other scholars hold that the war constituted a British victory and an American defeat. They argue that the British achieved their military objectives in the war by stopping the repeated American invasions of Canada and retaining their Canadian colonies. By contrast, they say, the Americans suffered a defeat when their armies failed to achieve their war goal of seizing part or all of Canada. Additionally, they argue that the U.S. lost because it failed to stop impressment, which the British refused to repeal until the end of the Napoleonic Wars, and because American actions had no effect on the Orders in Council, which were rescinded before the war started.
Historian Troy Bickham, author of The Weight of Vengeance: The United States, the British Empire, and the War of 1812, sees the British as having fought to a much stronger position than the United States.
Even tied down by ongoing wars with Napoleonic France, the British had enough capable officers, well - trained men, and equipment to easily defeat a series of American invasions of Canada. In fact, in the opening salvos of the war, the American forces invading Upper Canada were pushed so far back that they ended up surrendering Michigan Territory. The difference between the two navies was even greater. While the Americans famously (shockingly for contemporaries on both sides of the Atlantic) bested British ships in some one - on - one actions at the war 's start, the Royal Navy held supremacy throughout the war, blockading the U.S. coastline and ravaging coastal towns, including Washington, D.C. Yet in late 1814, the British offered surprisingly generous peace terms despite having amassed a large invasion force of veteran troops in Canada, naval supremacy in the Atlantic, an opponent that was effectively bankrupt, and an open secessionist movement in New England.
He considers that the British offered the United States generous terms, in place of their initially harsh terms (which included massive forfeiture of land to Canada and the American Indians), because the "reigning Liverpool ministry in Britain held a loose grip on power and feared the war - weary, tax - exhausted public ''. The war was also technically a British victory "because the United States failed to achieve the aims listed in its declaration of war ''.
A second minority view is that both the U.S. and Britain won the war -- that is, both achieved their main objectives, as the U.S. restored its independence and honor, and opened the way to westward expansion, while Britain defeated Napoleon and ruled the seas. American historian Norman K. Risjord argues that the main motivation was restoring the nation 's honor in the face of relentless British aggression toward American neutral rights on the high seas and in the Western lands. The results in terms of honor satisfied the War Hawks. American historian Donald Hickey asks, "Did the cost in blood and treasure justify the U.S. decision to go to war? Most Republicans thought it did. In the beginning they called the contest a "second war of independence '', and while Britain 's maritime practices never truly threatened the Republic 's independence, the war did in a broad sense vindicate U.S. sovereignty. But it ended in a draw on the battlefield. '' Historians argue that it was an American success to end the threat of Indian raids, kill the British plan for a semi-independent Indian sanctuary, and thereby to open an unimpeded path for westward expansion. Winston Churchill concluded:
The lessons of the war were taken to heart. Anti-American feeling in Great Britain ran high for several years, but the United States were never again refused proper treatment as an independent power.
American naval historian George C. Daughan argues that the US achieved enough of its war goals to claim victory in the conflict, pointing to the impact that American successes had on the negotiations at Ghent. Daughan cites official correspondence from President Madison to the delegates at Ghent concerning the question of maritime rights, writing:
Madison 's latest dispatches (arrived July 25 -- 27, 1814) permitted them (the delegates) to simply ignore the entire question of maritime rights. Free trade with liberated Europe had already been restored, and the Admiralty no longer needed impressment to man its warships. The president felt that with Europe at peace the issues of neutral trading rights and impressment could safely be set aside in the interests of obtaining peace... Thus, from the start of the negotiations, the disagreements that started the war and sustained it were acknowledged by both parties to be no longer important.
The British permanently stopped impressing Americans, although they never publicly rescinded the possibility of resuming that practice. The US delegates at the meeting understood it to be a dead issue after the 1814 surrender of Napoleon. In addition, the successful defence of Baltimore, Plattsburgh, and Fort Erie (a strategic fortress located in Upper Canada on the Niagara River, occupied during the third and most successful American offensive into Canada) had a very favorable influence on the negotiations for the Americans and prompted several famous responses from both sides. Henry Clay wrote to the delegates in October 1814, "for in our own country, my dear sir, at last must we conquer the peace. '' With growing pressure in Britain, the Duke of Wellington, when asked to command the forces in America, wrote to Liverpool on November 9, 1814: "I confess that I think you have no right, from the state of the war, to demand any concession of territory from America... You have not been able to carry... (the war) into the enemy 's territory, notwithstanding your military success and now undoubted military superiority, and have not even cleared your own territory on the point of attack (Fort Erie)... Why stipulate for uti possidetis? '' Daughan argues that the claim that the American failure to capture Canadian territory weakened the U.S. position in the negotiations is outdated and highly criticized. He cites the Edinburgh Review, a British journal that had remained silent about the war with America for two years, which wrote that "the British government had embarked on a war of conquest, after the American government had dropped its maritime demands, and the British had lost. It was folly to attempt to invade and conquer the United States. To do so would result in the same tragedy as the first war against them, and with the same result. ''
Historians have different views on who won the War of 1812, and there is an element of national bias to this. British and Canadian historians follow the view that the war was a British victory, and some US historians also support this view. The opposing position, held by most US historians along with some Canadians and British, is that the result was a stalemate. Only US historians follow the minority view that the US was the victorious party in the war. Similarly, a survey of school textbooks found that historians from Canada, Britain, and the United States emphasize different aspects of the war according to their national narratives; some British texts will scarcely mention the war.
Historians generally agree that the real losers of the War of 1812 were the Indians (called First Nations in Canada). Hickey says:
The big losers in the war were the Indians. As a proportion of their population, they had suffered the heaviest casualties. Worse, they were left without any reliable European allies in North America... The crushing defeats at the Thames and Horseshoe Bend left them at the mercy of the Americans, hastening their confinement to reservations and the decline of their traditional way of life.
The Indians of the Old Northwest (the modern Midwest) had hoped to create an Indian state to be a British protectorate. American settlers moving into the Middle West had been repeatedly blocked and threatened by Indian raids before 1812, and that now came to an end. Throughout the war the British had played on terror of the tomahawks and scalping knives of their Indian allies; it worked especially at Hull 's surrender at Detroit. By 1813 Americans had killed Tecumseh and broken his coalition of tribes. Jackson then defeated the Creek in the Southwest. Historian John Sugden notes that in both theatres, the Indians ' strength had been broken prior to the arrival of the major British forces in 1814. The one campaign that the Americans had decisively won was the campaign in the Old Northwest, which left the British in a weak position to insist upon an Indian state there.
Notwithstanding the sympathy and support from commanders (such as Brock, Cochrane and Nicolls), the policymakers in London reneged on assisting the Indians, as making peace was a higher priority for the politicians. At the peace conference the British demanded an independent Indian state in the Midwest, but, although the British and their Indian allies maintained control over the territories in question (i.e. most of the Upper Midwest), British diplomats did not press the demand after an American refusal, effectively abandoning their Indian allies. The withdrawal of British protection gave the Americans a free hand, which resulted in the removal of most of the tribes to Indian Territory (present - day Oklahoma). In that sense, according to historian Alan Taylor, the final victory at New Orleans had "enduring and massive consequences ''. It gave the Americans "continental predominance '' while it left the Indians dispossessed, powerless, and vulnerable.
The Treaty of Ghent technically required the United States to cease hostilities and "forthwith to restore to such Tribes or Nations respectively all possessions, rights and privileges which they may have enjoyed, or been entitled to in 1811 ''; the United States ignored this article of the treaty and proceeded to expand into this territory regardless; Britain was unwilling to provoke further war to enforce it. A shocked Henry Goulburn, one of the British negotiators at Ghent, remarked:
Till I came here, I had no idea of the fixed determination which there is in the heart of every American to extirpate the Indians and appropriate their territory.
The Creek War came to an end, with the Treaty of Fort Jackson being imposed upon the Indians. About half of the Creek territory was ceded to the United States, with no payment made to the Creeks. This was, in theory, invalidated by Article 9 of the Treaty of Ghent. The British failed to press the issue, and did not take up the Indian cause as an infringement of an international treaty. Without this support, the Indians ' lack of power was apparent and the stage was set for further incursions of territory by the United States in subsequent decades.
Neither side lost territory in the war, nor did the treaty that ended it address the original points of contention -- and yet it changed much between the United States of America and Britain.
The Treaty of Ghent established the status quo ante bellum; that is, there were no territorial losses by either side. The issue of impressment was made moot when the Royal Navy, no longer needing sailors, stopped impressment after the defeat of Napoleon in spring 1814 ended its war with France. (Napoleon unexpectedly returned in 1815, after the end of the War of 1812.) Except for occasional border disputes and some tensions during the American Civil War, relations between the U.S. and Britain remained peaceful for the rest of the 19th century, and the two countries became close allies in the 20th century.
The Rush -- Bagot Treaty between the United States and Britain was enacted in 1817. It demilitarized the Great Lakes and Lake Champlain, where many British naval armaments and forts still remained. The treaty laid the basis for a demilitarized boundary. It remains in effect to this day.
Although Britain had defeated the American invasions, she was in no mood for further conflicts with the United States, since her attention was focused on her growing possessions in India. Indicative of forbearance, or at least improved relations, Britain never seriously challenged the US over land claims after 1846, even though she had hoped to keep Texas out of the US and had designs on California. From the 1880s, because of the burgeoning industrial power of the US, Britain sought to bring the US onto her side in a hypothetical European war. Border adjustments between the U.S. and British North America were made in the Treaty of 1818. Eastport, Massachusetts, was returned to the U.S. in 1818; it became part of the new State of Maine in 1820. A border dispute along the Maine -- New Brunswick border was settled by the 1842 Webster -- Ashburton Treaty after the bloodless Aroostook War, and the border in the Oregon Country was settled by splitting the disputed area in half by the 1846 Oregon Treaty. A further dispute about the line of the border through the islands in the Strait of Juan de Fuca resulted in another almost bloodless standoff in the Pig War of 1859. The line of the border was finally settled by an international arbitration commission in 1872.
The U.S. suppressed the Native American resistance on its western and southern borders. The nation also gained a psychological sense of complete independence as people celebrated their "second war of independence ''. Nationalism soared after the victory at the Battle of New Orleans. The opposition Federalist Party collapsed, and the Era of Good Feelings ensued.
No longer questioning the need for a strong Navy, the U.S. built three new 74 - gun ships of the line and two new 44 - gun frigates shortly after the end of the war. (Another frigate had been destroyed to prevent it being captured on the stocks.) In 1816, the U.S. Congress passed into law an "Act for the gradual increase of the Navy '' at a cost of $1,000,000 a year for eight years, authorizing 9 ships of the line and 12 heavy frigates. The Captains and Commodores of the U.S. Navy became the heroes of their generation in the U.S. Decorated plates and pitchers of Decatur, Hull, Bainbridge, Lawrence, Perry, and Macdonough were made in Staffordshire, England, and found a ready market in the United States. Several war heroes used their fame to win election to national office. Andrew Jackson and William Henry Harrison both took advantage of their military successes to win the presidency, while Richard Mentor Johnson used his wartime exploits to help attain the vice presidency.
During the war, New England states became increasingly frustrated over how the war was being conducted and how the conflict was affecting them. They complained that the U.S. government was not investing enough in the states ' defences militarily and financially, and that the states should have more control over their militias. The increased taxes, the British blockade, and the occupation of some of New England by enemy forces also agitated public opinion in the states. As a result, at the Hartford Convention (December 1814 -- January 1815) Federalist delegates deprecated the war effort and sought more autonomy for the New England states. They did not call for secession but word of the angry anti-war resolutions appeared at the same time that peace was announced and the victory at New Orleans was known. The upshot was that the Federalists were permanently discredited and quickly disappeared as a major political force.
This war enabled thousands of slaves to escape to British lines or ships for freedom, despite the difficulties. The planters ' complacency about slave contentment was shattered by the sight of slaves who had risked so much to be free.
After the decisive defeat of the Creek Indians at the battle of Horseshoe Bend in 1814, some Indian warriors escaped to join the Seminoles in Florida. The remaining Creek chiefs signed away about half their lands, comprising 23,000,000 acres, covering much of southern Georgia and two thirds of modern Alabama. The Creeks were now separated from any future help from the Spanish in Florida, or from the Choctaw and Chickasaw to the west. During the war the United States seized Mobile, Alabama, which was a strategic location providing oceanic outlet to the cotton lands to the north. Jackson invaded Florida in 1818, demonstrating to Spain that it could no longer control that territory with a small force. Spain sold Florida to the United States in 1819 in the Adams -- Onís Treaty following the First Seminole War. Pratt concludes:
Thus indirectly the War of 1812 brought about the acquisition of Florida... To both the Northwest and the South, therefore, the War of 1812 brought substantial benefits. It broke the power of the Creek Confederacy and opened to settlement a great province of the future Cotton Kingdom.
Pro-British leaders demonstrated a strong hostility to American influences in western Canada (Ontario) after the war and shaped its policies, including a hostility to American - style republicanism. Immigration from the U.S. was discouraged, and favour was shown to the Anglican Church as opposed to the more Americanized Methodist Church.
The Battle of York showed the vulnerability of Upper and Lower Canada. In the 1820s, work began on La Citadelle at Quebec City as a defence against the United States. Additionally, work began on the Halifax citadel to defend the port against foreign navies. From 1826 to 1832, the Rideau Canal was built to provide a secure waterway not at risk from American cannon fire. To defend the western end of the canal, the British Army also built Fort Henry at Kingston.
The Native Americans allied to the British lost their cause. The British proposal to create a "neutral '' Indian zone in the American West was rejected at the Ghent peace conference and never resurfaced. After 1814 the natives, who lost most of their fur - gathering territory, became an undesirable burden to British policymakers who now looked to the United States for markets and raw materials. British agents in the field continued to meet regularly with their former American Indian partners, but they did not supply arms or encouragement and there were no American Indian campaigns to stop U.S. expansionism in the Midwest. Abandoned by their powerful sponsor, American Great Lakes - area Indians ultimately migrated or reached accommodations with the American authorities and settlers.
Bermuda had been largely left to the defences of its own militia and privateers before U.S. independence, but the Royal Navy had begun buying up land and operating from there in 1795, as its location was a useful substitute for the lost U.S. ports. It originally was intended to be the winter headquarters of the North American Squadron, but the war saw it rise to a new prominence. As construction work progressed through the first half of the 19th century, Bermuda became the permanent naval headquarters in Western waters, housing the Admiralty and serving as a base and dockyard. The military garrison was built up to protect the naval establishment, heavily fortifying the archipelago that came to be described as the "Gibraltar of the West ''. Defence infrastructure remained the central leg of Bermuda 's economy until after World War II.
The war is seldom remembered in Great Britain. The massive ongoing conflict in Europe against the French Empire under Napoleon ensured that the War of 1812 against America was never seen as more than a sideshow to the main event by the British. Britain 's blockade of French trade had been entirely successful and the Royal Navy was the world 's dominant nautical power (and remained so for another century). While the land campaigns had contributed to saving Canada, the Royal Navy had shut down American commerce, bottled up the U.S. Navy in port and heavily suppressed privateering. British businesses, some affected by rising insurance costs, were demanding peace so that trade could resume with the U.S. The peace was generally welcomed by the British, though there was disquiet at the rapid growth of the U.S. However, the two nations quickly resumed trade after the end of the war and, over time, a growing friendship.
Hickey argues that for Britain:
the most important lesson of all (was) that the best way to defend Canada was to accommodate the United States. This was the principal rationale for Britain 's long - term policy of rapprochement with the United States in the nineteenth century and explains why they were so often willing to sacrifice other imperial interests to keep the republic happy.
|
what is eddy current loss in a transformer | Eddy current - wikipedia
Eddy currents (also called Foucault currents) are loops of electrical current induced within conductors by a changing magnetic field in the conductor due to Faraday 's law of induction. Eddy currents flow in closed loops within conductors, in planes perpendicular to the magnetic field. They can be induced within nearby stationary conductors by a time - varying magnetic field created by an AC electromagnet or transformer, for example, or by relative motion between a magnet and a nearby conductor. The magnitude of the current in a given loop is proportional to the strength of the magnetic field, the area of the loop, and the rate of change of flux, and inversely proportional to the resistivity of the material.
By Lenz 's law, an eddy current creates a magnetic field that opposes the change in the magnetic field that created it, and thus eddy currents react back on the source of the magnetic field. For example, a nearby conductive surface will exert a drag force on a moving magnet that opposes its motion, due to eddy currents induced in the surface by the moving magnetic field. This effect is employed in eddy current brakes which are used to stop rotating power tools quickly when they are turned off. The current flowing through the resistance of the conductor also dissipates energy as heat in the material. Thus eddy currents are a cause of energy loss in alternating current (AC) inductors, transformers, electric motors and generators, and other AC machinery, requiring special construction such as laminated magnetic cores or ferrite cores to minimize them. Eddy currents are also used to heat objects in induction heating furnaces and equipment, and to detect cracks and flaws in metal parts using eddy - current testing instruments.
The term eddy current comes from analogous currents seen in water in fluid dynamics, causing localised areas of turbulence known as eddies giving rise to persistent vortices. Somewhat analogously, eddy currents can take time to build up and can persist for very short times in conductors due to their inductance.
The first person to observe eddy currents was François Arago (1786 -- 1853), the 25th Prime Minister of France, who was also a mathematician, physicist and astronomer. In 1824 he observed what has been called rotatory magnetism, and that most conductive bodies could be magnetized; these discoveries were completed and explained by Michael Faraday (1791 -- 1867).
In 1834, Heinrich Lenz stated Lenz 's law, which says that the direction of induced current flow in an object will be such that its magnetic field will oppose the change of magnetic flux that caused the current flow. Eddy currents produce a secondary field that cancels a part of the external field and causes some of the external flux to avoid the conductor.
French physicist Léon Foucault (1819 -- 1868) is credited with having discovered eddy currents. In September, 1855, he discovered that the force required for the rotation of a copper disc becomes greater when it is made to rotate with its rim between the poles of a magnet, the disc at the same time becoming heated by the eddy current induced in the metal. The first use of eddy current for non-destructive testing occurred in 1879 when David E. Hughes used the principles to conduct metallurgical sorting tests.
A magnet induces circular electric currents in a metal sheet moving past it. See the diagram at right. It shows a metal sheet (C) moving to the right under a stationary magnet. The magnetic field (B, green arrows) of the magnet 's north pole N passes down through the sheet. Since the metal is moving, the magnetic flux through the sheet is changing. At the part of the sheet under the leading edge of the magnet (left side) the magnetic field through the sheet is increasing as it gets nearer the magnet, dB / dt > 0. From Faraday 's law of induction, this creates a circular electric field in the sheet in a counterclockwise direction around the magnetic field lines. This field induces a counterclockwise flow of electric current (I, red), in the sheet. This is the eddy current. At the trailing edge of the magnet (right side) the magnetic field through the sheet is decreasing, dB / dt < 0, inducing a second eddy current in a clockwise direction in the sheet.
Another way to understand the current is to see that the free charge carriers (electrons) in the metal sheet are moving with the sheet to the right, so the magnetic field exerts a sideways force on them due to the Lorentz force. Since the velocity v of the charges is to the right and the magnetic field B is directed down, from the right hand rule the Lorentz force on positive charges F = q (v × B) is toward the rear of the diagram (to the left when facing in the direction of motion v). This causes a current I toward the rear under the magnet, which circles around through parts of the sheet outside the magnetic field, clockwise to the right and counterclockwise to the left, to the front of the magnet again. The mobile charge carriers in the metal, the electrons, actually have a negative charge (q < 0) so their motion is opposite in direction to the conventional current shown.
Due to Ampere 's circuital law each of these circular currents creates a counter magnetic field (blue arrows), which due to Lenz 's law opposes the change in magnetic field which caused it, exerting a drag force on the sheet. At the leading edge of the magnet (left side) by the right hand rule the counterclockwise current creates a magnetic field pointed up, opposing the magnet 's field, causing a repulsive force between the sheet and the leading edge of the magnet. In contrast, at the trailing edge (right side), the clockwise current causes a magnetic field pointed down, in the same direction as the magnet 's field, creating an attractive force between the sheet and the trailing edge of the magnet. Both of these forces oppose the motion of the sheet. The kinetic energy which is consumed overcoming this drag force is dissipated as heat by the currents flowing through the resistance of the metal, so the metal gets warm under the magnet.
Eddy currents in conductors of non-zero resistivity generate heat as well as electromagnetic forces. The heat can be used for induction heating. The electromagnetic forces can be used for levitation, creating movement, or to give a strong braking effect. Eddy currents can also have undesirable effects, for instance power loss in transformers. In this application, they are minimized with thin plates, by lamination of conductors or other details of conductor shape.
Self - induced eddy currents are responsible for the skin effect in conductors. The latter can be used for non-destructive testing of materials for geometry features, like micro-cracks. A similar effect is the proximity effect, which is caused by externally induced eddy currents.
An object or part of an object experiences steady field intensity and direction where there is still relative motion of the field and the object (for example in the center of the field in the diagram), or unsteady fields where the currents can not circulate due to the geometry of the conductor. In these situations charges collect on or within the object and these charges then produce static electric potentials that oppose any further current. Currents may be initially associated with the creation of static potentials, but these may be transitory and small.
Eddy currents generate resistive losses that transform some forms of energy, such as kinetic energy, into heat. This Joule heating reduces efficiency of iron - core transformers and electric motors and other devices that use changing magnetic fields. Eddy currents are minimized in these devices by selecting magnetic core materials that have low electrical conductivity (e.g., ferrites) or by using thin sheets of magnetic material, known as laminations. Electrons can not cross the insulating gap between the laminations and so are unable to circulate on wide arcs. Charges gather at the lamination boundaries, in a process analogous to the Hall effect, producing electric fields that oppose any further accumulation of charge and hence suppressing the eddy currents. The shorter the distance between adjacent laminations (i.e., the greater the number of laminations per unit area, perpendicular to the applied field), the greater the suppression of eddy currents.
The conversion of input energy to heat is not always undesirable, however, as there are some practical applications. One is in the brakes of some trains known as eddy current brakes. During braking, the metal wheels are exposed to a magnetic field from an electromagnet, generating eddy currents in the wheels. This eddy current is formed by the movement of the wheels. So, by Lenz 's law, the magnetic field formed by the eddy current will oppose its cause. Thus the wheel will face a force opposing the initial movement of the wheel. The faster the wheels are spinning, the stronger the effect, meaning that as the train slows the braking force is reduced, producing a smooth stopping motion.
Induction heating makes use of eddy currents to provide heating of metal objects.
Under certain assumptions (uniform material, uniform magnetic field, no skin effect, etc.) the power lost due to eddy currents per unit mass for a thin sheet or wire can be calculated from the following equation:
P = π² B_p² d² f² / (6 k ρ D)
where P is the power lost per unit mass (W / kg), B_p is the peak magnetic field (T), d is the thickness of the sheet or diameter of the wire (m), f is the frequency (Hz), k is a constant equal to 1 for a thin sheet and 2 for a thin wire, ρ is the resistivity of the material (Ω m), and D is the density of the material (kg / m³).
This equation is valid only under the so - called quasi-static conditions, where the frequency of magnetisation does not result in the skin effect; that is, the electromagnetic wave fully penetrates the material.
In very fast - changing fields, the magnetic field does not penetrate completely into the interior of the material. This skin effect renders the above equation invalid. However, in any case increased frequency of the same value of field will always increase eddy currents, even with non-uniform field penetration.
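As a rough illustration of the thin - sheet formula above, the sketch below evaluates the loss in Python; the numerical values (flux density, lamination thickness, resistivity and density typical of silicon steel) are assumed for the example and are not taken from this article:

```python
import math

def eddy_loss_per_mass(b_peak, thickness, freq, resistivity, density, k=1):
    """Quasi-static eddy current loss per unit mass (W/kg); k = 1 for a thin sheet, 2 for a thin wire."""
    return (math.pi ** 2 * b_peak ** 2 * thickness ** 2 * freq ** 2) / (6 * k * resistivity * density)

# Assumed, illustrative values for a silicon-steel transformer lamination (not from the article):
b_peak = 1.5          # peak flux density, tesla
thickness = 0.35e-3   # lamination thickness, metres
freq = 50.0           # mains frequency, hertz
resistivity = 4.7e-7  # resistivity, ohm metres
density = 7650.0      # density, kg per cubic metre

print(eddy_loss_per_mass(b_peak, thickness, freq, resistivity, density))
# ~0.3 W/kg with these values; halving the lamination thickness would cut this by a factor of four,
# which is why transformer cores are built from thin laminations.
```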
The penetration depth for a good conductor can be calculated from the following equation:
δ = 1 / √(π f μ σ)
where δ is the penetration depth (m), f is the frequency (Hz), μ is the magnetic permeability of the material (H / m), and σ is the electrical conductivity of the material (S / m).
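A short worked example of the penetration - depth formula above is sketched below; the conductivity of annealed copper is a handbook figure assumed for illustration and is not given in this article:

```python
import math

def skin_depth(freq, mu, sigma):
    """Penetration (skin) depth in metres for a good conductor."""
    return 1.0 / math.sqrt(math.pi * freq * mu * sigma)

mu_0 = 4 * math.pi * 1e-7  # permeability of free space, H/m (copper is essentially non-magnetic)
sigma_cu = 5.8e7           # conductivity of annealed copper, S/m (assumed handbook value)

print(skin_depth(60.0, mu_0, sigma_cu))  # ~8.5e-3 m: at 60 Hz the field penetrates about 8.5 mm into copper
print(skin_depth(1e6, mu_0, sigma_cu))   # ~6.6e-5 m: at 1 MHz it penetrates only about 66 micrometres
```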
The derivation of a useful equation for modelling the effect of eddy currents in a material starts with the differential, magnetostatic form of Ampère 's Law, providing an expression for the magnetizing field H surrounding a current density J:
∇ × H = J
Taking the curl on both sides of this equation and then using a common vector calculus identity for the curl of the curl results in
∇ (∇ ⋅ H) − ∇² H = ∇ × J
From Gauss 's law for magnetism, ∇ ⋅ H = 0, so
− ∇² H = ∇ × J
Using Ohm 's law, J = σE, which relates current density J to electric field E in terms of a material 's conductivity σ, and assuming isotropic homogeneous conductivity, the equation can be written as
− ∇² H = σ (∇ × E)
Using the differential form of Faraday 's law, ∇ × E = − ∂ B / ∂ t, this gives
∇² H = σ ∂ B / ∂ t
By definition, B = μ (H + M), where M is the magnetization of the material and μ is the vacuum permeability. The diffusion equation therefore is
∇² H = μ σ (∂ M / ∂ t + ∂ H / ∂ t)
Eddy current brakes use the drag force created by eddy currents as a brake to slow or stop moving objects. Since there is no contact with a brake shoe or drum, there is no mechanical wear. However, an eddy current brake can not provide a "holding '' torque and so may be used in combination with mechanical brakes, for example, on overhead cranes. Another application is on some roller coasters, where heavy copper plates extending from the car are moved between pairs of very strong permanent magnets. Electrical resistance within the plates causes a dragging effect analogous to friction, which dissipates the kinetic energy of the car. The same technique is used in electromagnetic brakes in railroad cars and to quickly stop the blades in power tools such as circular saws. Using electromagnets, as opposed to permanent magnets, the strength of the magnetic field can be adjusted and so the magnitude of braking effect changed.
In a varying magnetic field the induced currents exhibit diamagnetic - like repulsion effects. A conductive object will experience a repulsion force. This can lift objects against gravity, though with continual power input to replace the energy dissipated by the eddy currents. An example application is separation of aluminum cans from other metals in an eddy current separator. Ferrous metals cling to the magnet, and aluminum (and other non-ferrous conductors) are forced away from the magnet; this can separate a waste stream into ferrous and non-ferrous scrap metal.
With a very strong handheld magnet, such as those made from neodymium, one can easily observe a very similar effect by rapidly sweeping the magnet over a coin with only a small separation. Depending on the strength of the magnet, identity of the coin, and separation between the magnet and coin, one may induce the coin to be pushed slightly ahead of the magnet -- even if the coin contains no magnetic elements, such as the US penny. Another example involves dropping a strong magnet down a tube of copper -- the magnet falls at a dramatically slow pace.
In a perfect conductor with no resistance (a superconductor), surface eddy currents exactly cancel the field inside the conductor, so no magnetic field penetrates the conductor. Since no energy is lost in resistance, eddy currents created when a magnet is brought near the conductor persist even after the magnet is stationary, and can exactly balance the force of gravity, allowing magnetic levitation. Superconductors also exhibit a separate inherently quantum mechanical phenomenon called the Meissner effect in which any magnetic field lines present in the material when it becomes superconducting are expelled, thus the magnetic field in a superconductor is always zero.
Using electromagnets with electronic switching comparable to electronic speed control it is possible to generate electromagnetic fields moving in an arbitrary direction. As described in the section above about eddy current brakes, a non-ferromagnetic conductor surface tends to rest within this moving field. When, however, this field is moving, a vehicle can be levitated and propelled. This is comparable to a maglev but is not bound to a rail.
In a coin - operated vending machine, eddy currents are used to detect counterfeit coins, or slugs. The coin rolls past a stationary magnet, and eddy currents slow its speed. The strength of the eddy currents, and thus the retardation, depends on the conductivity of the coin 's metal. Slugs are slowed to a different degree than genuine coins, and this is used to send them into the rejection slot.
Eddy currents are used in certain types of proximity sensors to observe the vibration and position of rotating shafts within their bearings. This technology was originally pioneered in the 1930s by researchers at General Electric using vacuum tube circuitry. In the late 1950s, solid - state versions were developed by Donald E. Bently at Bently Nevada Corporation. These sensors are extremely sensitive to very small displacements making them well suited to observe the minute vibrations (on the order of several thousandths of an inch) in modern turbomachinery. A typical proximity sensor used for vibration monitoring has a scale factor of 200 mV / mil. Widespread use of such sensors in turbomachinery has led to development of industry standards that prescribe their use and application. Examples of such standards are American Petroleum Institute (API) Standard 670 and ISO 7919.
A Ferraris acceleration sensor, also called a Ferraris sensor, is a contactless sensor that uses eddy currents to measure relative acceleration.
Eddy current techniques are commonly used for the nondestructive examination (NDE) and condition monitoring of a large variety of metallic structures, including heat exchanger tubes, aircraft fuselage, and aircraft structural components.
Eddy currents are the root cause of the skin effect in conductors carrying AC current.
Similarly, in magnetic materials of finite conductivity eddy currents cause the confinement of the majority of the magnetic fields to only a couple skin depths of the surface of the material. This effect limits the flux linkage in inductors and transformers having magnetic cores.
|
who wins at the end of rocky 4 | Rocky IV - wikipedia
Rocky IV is a 1985 American sports drama film written, directed by, and starring Sylvester Stallone. The film co-stars Dolph Lundgren, Burt Young, Talia Shire, Carl Weathers, Tony Burton, Brigitte Nielsen and Michael Pataki. Rocky IV was the highest grossing sports movie for 24 years, before it was overtaken by The Blind Side. It is the fourth and most financially successful entry in the Rocky film series.
In the film, the Soviet Union makes an entrance into professional boxing with its best athlete, Ivan Drago, who initially wants to take on world champion Rocky Balboa. Rocky 's best friend Apollo Creed decides to fight him instead but is fatally beaten in the ring. Enraged, Rocky decides to fight Drago in the Soviet Union to avenge the death of his friend and defend the honor of his country.
Critical reception was mixed, but the film earned $300 million at the box office. This film marked Carl Weathers ' final appearance in the series. Its success led to a fifth entry released on November 16, 1990.
Ivan Drago, a Soviet boxer, arrives in the United States with his wife, Ludmilla, a Soviet swimmer and a team of trainers from the Soviet Union and Cuba. His manager, Nicolai Koloff, takes every opportunity to promote Drago 's athleticism as a hallmark of Soviet superiority. Motivated by patriotism and an innate desire to prove himself, Apollo Creed challenges Drago to an exhibition bout. Rocky has reservations, but agrees to train Apollo despite his misgivings about the match. He asks Apollo whether the fight is against Drago, or "you against you? ''
During a press conference regarding the match, hostility sparks between Apollo and Drago 's respective camps. The boxing exhibition takes place at the MGM Grand Hotel in Las Vegas. Apollo enters the ring in an over-the - top patriotic entrance, with James Brown performing "Living in America '' complete with showgirls. The bout starts tamely with Apollo landing several punches that are ineffective against Drago, and Drago retaliates with devastating effect. By the end of the first round, Rocky and Apollo 's trainer, Duke, plead with him to give up, but Apollo refuses to do so and tells Rocky not to stop the match "no matter what. '' As Drago continues to pummel him in the second round, Duke begs Rocky to throw in the towel. Eventually, Drago lands one final punch on Apollo, killing him. In the aftermath, Drago displays no sense of remorse, commenting to the assembled media: "If he dies, he dies. ''
Driven by guilt and enraged by the Soviets ' cold indifference, Rocky decides to challenge Drago himself. Drago 's camp agrees to an unsanctioned 15 - round fight in the Soviet Union on Christmas Day, an arrangement meant to protect Drago from the threats of violence he has been receiving in the U.S. Rocky travels to the Soviet Union without Adrian, setting up his training base in Krasnogourbinsk with only Duke and brother - in - law Paulie to accompany him. Duke opens up to Rocky, stating that he actually raised Apollo and that his death felt like a father losing his son. He expresses his faith in Rocky to prevail. To prepare for the match, Drago uses high - tech equipment, steroid enhancement, and a team of trainers and doctors monitoring his every movement. Rocky, on the other hand, lifts and throws heavy logs, chops down trees, pulls an overloaded snow sleigh with Paulie atop, jogs through heavy snow under treacherous icy conditions, and climbs an icy mountain. Adrian arrives unexpectedly to give Rocky her support, after initially refusing to travel to the Soviet Union because of her worry that Rocky would be killed like Apollo.
Before the match, Drago is introduced with an elaborate patriotic ceremony, inspired by Apollo Creed 's intro. The home crowd is squarely on Drago 's side and hostile to Rocky. In stark contrast to his match with Apollo, Drago immediately goes on the offensive. Rocky takes a fierce pounding, and is thrown and shoved across the ring in the first round, but comes back toward the end of the second round, and cuts Drago 's left eye, stunning both him and the crowd. This prompts Rocky to continue punching even after the bell rings. While Duke and Paulie encourage Rocky, they remind him that Drago is not a machine, but rather still human. Drago ironically comments to his trainers that Rocky "is not human, he is like a piece of iron, '' after his trainers reprimand him for his performance against the "weak '' American.
The two boxers continue their battle over the next dozen rounds, with Rocky managing to continually hold his ground, despite Drago 's valiant efforts. His resilience and persistence rallies the previously hostile Soviet crowd to his side, which unsettles Drago to the point that he picks Koloff up by the throat and throws him off the ring for berating his performance. In the final round, Rocky overcomes Drago with a knockout, to the shock of the Soviet politburo members watching the match.
Rocky gives a victory speech, acknowledging that the local crowd 's disdain of him had turned to respect during the fight. He compares it to the animosity between the U.S. and the Soviets, and says that seeing him and Drago fight was "better than 20 million '', alluding to a possible war between the U.S. and the Soviets. Rocky finally declares, "If I can change, and you can change, then everybody can change! '' The Soviet General Secretary stands up and reluctantly applauds Rocky, and his aides follow suit. Rocky ends his speech by wishing his son watching the match on TV a Merry Christmas, and raises his arms into the air in victory as the crowd applauds.
LeRoy Neiman plays the ring announcer in the Creed - Drago match. Burgess Meredith appears as Mickey Goldmill in archive footage. Appearing as themselves are singer James Brown and commentators Stu Nahan, Warner Wolf, R.J. Adams, Barry Tompkins and Al Bandiero.
Wyoming doubled for the frozen expanse of the Soviet Union. The small farm where Rocky lived and trained was in Jackson Hole, and the Grand Teton National Park was used for filming many of the outdoor sequences in the Soviet Union. The PNE Agrodome at Hastings Park in Vancouver, British Columbia, served as the location of Rocky 's Soviet bout.
Sylvester Stallone has stated that the original punching scenes filmed between him and Dolph Lundgren in the first portion of the fight are completely authentic. Stallone wanted to capture a realistic scene and Lundgren agreed that they would engage in legitimate sparring. One particularly forceful Lundgren punch to Stallone 's chest slammed his heart against his breastbone, causing the heart to swell. Stallone, suffering from labored breathing and a blood pressure over 200, was flown from the set in Canada to Saint John 's Regional Medical Center in Santa Monica, and was forced into intensive care for eight days. Stallone later commented that he believed Lundgren had the athletic ability and talent to fight in the professional heavyweight division of boxing.
Additionally, Stallone has stated that Lundgren nearly forced Carl Weathers to quit in the middle of filming the Apollo - vs. - Drago "exhibition '' fight. At one point in the filming of the scene, Lundgren tossed Weathers into the corner of the boxing ring. Weathers shouted profanities at Lundgren while leaving the ring, and announcing that he was calling his agent and quitting the movie. Only after Stallone forced the two actors to reconcile did filming continue. The event caused a four - day work stoppage, while Weathers was talked back into the part and Lundgren agreed to tone down his aggressiveness.
Rocky IV is one of the few sport movies that applies genuine sound effects from actual punches, bona fide training methods created by boxing consultants, and a bevy of other new special effects. The film is recognized as being ahead of its time in its demonstration of groundbreaking high - tech sporting equipment, some of which was experimental and 20 years from public use. In 2012, Olympians Michael Phelps and Ryan Lochte noted that the training sequences in Rocky IV inspired them to use a cabin similar to what the resourceful Balboa utilized in the film.
Sportscaster Stu Nahan makes his fourth appearance in the series as commentator for the Creed -- Drago fight. Warner Wolf replaces Bill Baldwin, who died following filming for Rocky III, as co-commentator. For the fight between Rocky and Drago, commentators Barry Tompkins and Al Bandiero portray themselves as USA Network broadcasters.
Apollo Creed 's wife Mary Anne (Sylvia Meals) made her second appearance in the series, the first being in Rocky II, where the character was more prominently featured. Stallone 's future wife, Brigitte Nielsen, appeared as Drago 's wife, Ludmilla.
Paulie 's robot, a character that through the years has enjoyed a cult following of its own, was created by International Robotics Inc. in New York City. The robot 's initial voice was that of the company 's CEO, Robert Doornick. The robot is identified by its engineers as "SICO '' and is / was a member of the Screen Actors Guild. It toured with James Brown in the 1980s. The robot was written into the movie after it had been used to help treat Stallone 's autistic son, Seargeoh.
The Soviet premier in the sky box during the Rocky -- Drago match, played by David Lloyd Austin, strongly resembles contemporary Soviet leader Mikhail Gorbachev. Austin later played Gorbachev in The Naked Gun, and Russian characters in other films.
The musical score for Rocky IV was composed by Vince DiCola, who would later compose the music for The Transformers: The Movie. Rocky IV is the only film in the series not to feature original music by Bill Conti, who was replaced by DiCola; however, it does feature arrangements of themes composed by Conti from previous films in the series, such as "The Final Bell ''. Conti, who was too busy with the first two Karate Kid films at the time, would return for Rocky V and Rocky Balboa. Conti 's famous piece of music from the Rocky series, "Gonna Fly Now '', does not appear at all in Rocky IV (the first time in the series this happened), though a few bars of it are incorporated into DiCola 's training montage instrumental.
Songs from the movie included "Living in America '', by James Brown, and also music by John Cafferty ("Heart 's on Fire '', featuring Vince DiCola), Survivor, Kenny Loggins, and Robert Tepper. Go West wrote "One Way Street '' for the movie by request of Sylvester Stallone. Europe 's hit "The Final Countdown '', written earlier in the decade by lead singer Joey Tempest, is often incorrectly stated as being featured in the film due to its similarity to DiCola 's "Training Montage. '' However, Europe 's track was not released as a single until late 1986, after Rocky IV 's release.
According to singer Peter Cetera, he originally wrote his best - selling solo single "Glory of Love '' as the end title for this film, but was passed over by United Artists, and instead used the theme for The Karate Kid Part II.
Rocky IV grossed $127.8 million in United States and Canada, and $300 million worldwide, the most of any Rocky film. It was the highest - grossing sports film of all time, until The Blind Side (2009), which grossed $309 million (without accounting for inflation).
Stallone has been quoted as saying the enormous financial success and fan - following of Rocky IV once had him envisioning another Rocky movie, devoted to Drago and his post-boxing life, with Balboa 's storyline running parallel to Drago 's. However, he noted the damage both boxers sustained in the fight made them "incapable of reason '', and thus instead planned Rocky V as a showcase of the dangers of boxing.
The film has a 39 % approval rating on Rotten Tomatoes, from 43 critics, indicating mixed reviews; the critical consensus states, "Rocky IV inflates the action to absurd heights, but it ultimately rings hollow thanks to a story that hits the same basic beats as the first three entries in the franchise. '' On Metacritic, the film has a score of 42 out of 100, based on 11 critics, indicating "mixed or average reviews. ''
Roger Ebert gave the film two out of four stars, stating that with this film the Rocky series began "finally losing its legs. It 's been a long run, one hit movie after another, but Rocky IV is a last gasp, a film so predictable that viewing it is like watching one of those old sitcoms where the characters never change and the same situations turn up again and again. '' Ian Nathan of Empire gave the film two out of five stars, calling the script a "laughable turd '' and describing Rocky IV as "the (film) where the Rocky series threw in the towel on the credibility. ''
Gene Siskel of The Chicago Tribune gave the film a 3.5 out of 4 stars, and stated in his review, "(Stallone) creates credible villains worthy of his heroic character. ''
Dolph Lundgren received acclaim for his performance as Ivan Drago. He won the Marshall Trophy for Best Actor at the Napierville Cinema Festival. Rocky IV also won Germany 's Golden Screen Award.
The film won five Golden Raspberry Awards, including Worst Actor (Sylvester Stallone, along with Rambo: First Blood Part II), Worst Director (Stallone), Worst Supporting Actress (Brigitte Nielsen), Worst New Star (Nielsen, and also for Red Sonja) and Worst Musical Score. It also received nominations for Worst Picture, Worst Supporting Actress (Talia Shire), Worst Supporting Actor (Burt Young) and Worst Screenplay.
Scholars note that the film 's strong yet formulaic structure emphasizes the power of the individual, embodied by Rocky, the prototypically American hero who is inventive, determined, and idealistic. They contrast that with Ivan Drago 's hyperbolic characterization as a representation of Soviet power in the context of the latter part of the Cold War. Writer / director Stallone highlights the nationalistic overtones of the Balboa -- Drago fight throughout the film, such as when Drago 's wife calls the United States an "antagonistic and violent government, '' that is filled with "threats of violence '' to her husband. Drago 's trainer comments that American society has become "pathetic and weak. '' Drago represents the totalitarian regime, demonstrating his power when he topples an arrogant opponent (Creed). Later on, the radio announcer says, "Ivan Drago is a man with an entire country in his corner. '' Scholars note that Drago 's ultimate defeat -- and the Soviet crowd 's embrace of Rocky -- represents a crumbling of the Soviet Union.
Rocky IV has also been interpreted as a commentary on the power struggle between technology and humans, illustrated by both Paulie 's robot and the technology utilized by Drago 's trainers.
A novelization was published by Ballantine Books in 1985. Sylvester Stallone was credited as the author.
The script development was the subject of a famous copyright lawsuit, Anderson v. Stallone. Timothy Anderson developed a treatment for Rocky IV on spec; after the studio decided not to buy his treatment, he sued when the resulting movie script was similar to his treatment. The court held that Anderson had prepared an unauthorized derivative work of the characters Stallone had developed in Rocky I through III, and thus he could not enforce his unauthorized story extension against the owner of the character 's copyrights.
|
what is a good score on the medical boards | USMLE Step 1 - wikipedia
The USMLE Step 1 (more commonly just Step 1 or colloquially, The Boards) is the first part of the United States Medical Licensing Examination. It aims to assess whether medical school students or graduates can apply important concepts of the foundational sciences fundamental to the practice of medicine. US medical students, as well as non-US medical students who wish to seek licensure to practice medicine in the US, typically take Step 1 at the end of the second year of medical school. Graduates of international medical schools (i.e., those outside the US or Canada) must also take Step 1 if they want to practice in the US. Graduates from international medical schools must apply through ECFMG, and the registration fee is $850. For 2016, the NBME registration fee for the test is $600, with additional charges for applicants who choose a testing region outside the United States or Canada.
Prior to 1992, the NBME Part I examination served as the staple basic science examination for medical students at the end of their second year. Upon the launch of the three - part United States Medical Licensing Examination, the NBME Part I exam was carried forward in its new format, the USMLE Step 1 examination, which has since evolved to become an increasingly clinically - applied examination of the foundational sciences. The exam became computer based several years later. In May 2015, the USMLE began emphasizing concepts regarding patient safety and quality improvement across all parts of the USMLE exam series, including Step 1.
In recent years and in part resulting from the American Medical Association 's "Accelerating Change in Education '' (ACE) Initiative, a number of U.S. allopathic and osteopathic medical schools have begun to consolidate the amount of time allocated in the 4 - year curriculum for dedicated preclinical / classroom - based education in order to provide more time for clinical training and direct application of the foundational sciences to clinical medicine. As a result of this paradigm shift away from the 20th century "2 + 2 '' curriculum, students at many of these medical schools are now instructed to take the Step 1 exam following the core clerkship year, rather than at the conclusion of the preclinical curriculum, at which time students are also prepared to sit for the Step 2 CK exam. Those schools which have adopted this scheduling have seen a large, statistically significant increase in Step 1 scores as a result. Institutions that have implemented such a policy include Yale, NYU, Columbia, Baylor, Duke, and University of Pennsylvania ("Penn ''), amongst others. Johns Hopkins offers this as an optional scheduling configuration, left to individual student discretion.
The exam is currently an eight - hour computer - based test taken in a single - day, composed of seven 40 - question sections with a maximum 280 multiple - choice questions. On May 9, 2016, the NBME shortened the test to seven 40 - question sections. One hour is provided for each section, allotting an average of a minute and thirty seconds to answer each question. Between test sections, the test taker is allotted a cumulative 45 minutes (during the test day) for personal breaks. (There is a 15 - minute tutorial at the beginning of the exam, which the test - taker can choose to skip and have that time added to break time.) If the taker finishes any section before the allotted one hour time limit, the unused time is added to the break time total. The test is administered at any of several Prometric computer testing sites.
Step 1 is designed to test the knowledge learned during the basic science years of medical school as applied in the form of clinical vignettes. This includes anatomy, behavioral sciences, biochemistry, microbiology, pathology, pharmacology, and physiology, as well as to interdisciplinary areas including genetics, aging, immunology, nutrition, and molecular and cell biology. Epidemiology, medical ethics and questions on empathy are also emphasized. Each exam is dynamically generated for each test taker; while the general proportion of questions derived from a particular subject is the same, some test takers report that certain subjects are either emphasized or deemphasized.
Currently, students receive a three - digit score following sitting for the Step 1 examination. In 1999, the USMLE phased out the use of a percentile - based system in favor of a three - digit and two - digit scaled scoring system. In October 2011, two - digit scaled scores were no longer reported to any parties besides the examinees. In April 2013, the two - digit score was eliminated completely from the score report.
While the USMLE program does not disclose how the three - digit score is calculated, Step 1 scores theoretically range from 1 to 300; most examinees score in the range of 140 to 260, the passing score is 192, and the national mean and standard deviation are approximately 229 and 20, respectively. The national mean Step 1 score has been observed to rise over time. According to the National Resident Matching Program, the mean score for U.S. allopathic seniors who matched to residency programs in 2016 was 233.2 (sd = 17.4).
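As an illustration only, the sketch below converts a Step 1 score into an approximate percentile using the mean and standard deviation quoted above together with a normal approximation; the normality assumption and the example scores are mine, not part of the official score scale:

```python
from math import erf, sqrt

MEAN, SD = 229, 20  # approximate national mean and standard deviation cited above

def approx_percentile(score):
    """Percentile under a normal approximation (an assumption, not the official scale)."""
    z = (score - MEAN) / SD
    return 100 * 0.5 * (1 + erf(z / sqrt(2)))

for score in (192, 229, 250):  # passing score, national mean, a high score
    print(score, f"~{approx_percentile(score):.0f}th percentile")
```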
The USMLE score is one of many factors considered by residency programs in selecting applicants; however, at present, this test is the only standardized measure across all applicants. The median USMLE Step 1 scores for graduates of U.S. medical schools entering various residencies are charted in Chart 6 on page 9 of "Charting Outcomes in the Match '', available through the NRMP website.
Many residency programs use a "cutoff '' score for Step 1, below which applicants are unlikely to be considered, although in some cases individuals with significantly higher Step 2 CK scores may still receive further consideration. The NRMP Residency Program Director survey contains more information, both overall and by specialty, regarding "cutoff '' scores (i.e., scores below which programs generally do not grant interviews).
|
what kind of important scientific information have pendulums been used to collect | History of timekeeping devices - wikipedia
For thousands of years, devices have been used to measure and keep track of time. The current sexagesimal system of time measurement dates to approximately 2000 BC and originated with the Sumerians.
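The base - 60 structure inherited from that system is what still splits an hour into 60 minutes and a minute into 60 seconds; a minimal Python sketch of the conversion:

```python
def to_sexagesimal(total_seconds):
    """Split a count of seconds into hours, minutes, and seconds (base-60 subdivisions)."""
    hours, remainder = divmod(total_seconds, 3600)
    minutes, seconds = divmod(remainder, 60)
    return hours, minutes, seconds

print(to_sexagesimal(45_296))  # (12, 34, 56)
```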
The Egyptians divided the day into two 12 - hour periods, and used large obelisks to track the movement of the sun. They also developed water clocks, which were probably first used in the Precinct of Amun - Re, and later outside Egypt as well; they were employed frequently by the Ancient Greeks, who called them clepsydrae. The Zhou dynasty is believed to have used the outflow water clock around the same time; such devices had been introduced from Mesopotamia as early as 2000 BC.
Other ancient timekeeping devices include the candle clock, used in ancient China, ancient Japan, England and Mesopotamia; the timestick, widely used in India and Tibet, as well as some parts of Europe; and the hourglass, which functioned similarly to a water clock. The sundial, another early clock, relies on shadows to provide a good estimate of the hour on a sunny day. It is not so useful in cloudy weather or at night and requires recalibration as the seasons change (if the gnomon was not aligned with the Earth 's axis).
The earliest known clock with a water - powered escapement mechanism, which transferred rotational energy into intermittent motions, dates back to 3rd century BC in ancient Greece; Chinese engineers later invented clocks incorporating mercury - powered escapement mechanisms in the 10th century, followed by Iranian engineers inventing water clocks driven by gears and weights in the 11th century.
The first mechanical clocks, employing the verge escapement mechanism with a foliot or balance wheel timekeeper, were invented in Europe at around the start of the 14th century, and became the standard timekeeping device until the pendulum clock was invented in 1656. The invention of the mainspring in the early 15th century allowed portable clocks to be built, evolving into the first pocketwatches by the 17th century, but these were not very accurate until the balance spring was added to the balance wheel in the mid 17th century.
The pendulum clock remained the most accurate timekeeper until the 1930s, when quartz oscillators were invented, followed by atomic clocks after World War 2. Although initially limited to laboratories, the development of microelectronics in the 1960s made quartz clocks both compact and cheap to produce, and by the 1980s they became the world 's dominant timekeeping technology in both clocks and wristwatches.
Atomic clocks are far more accurate than any previous timekeeping device, and are used to calibrate other clocks and to calculate the International Atomic Time; a standardized civil system, Coordinated Universal Time, is based on atomic time.
Many ancient civilizations observed astronomical bodies, often the Sun and Moon, to determine times, dates, and seasons. The first calendars may have been created during the last glacial period, by hunter - gatherers who employed tools such as sticks and bones to track the phases of the moon or the seasons. Stone circles, such as England 's Stonehenge, were built in various parts of the world, especially in Prehistoric Europe, and are thought to have been used to time and predict seasonal and annual events such as equinoxes or solstices. As those megalithic civilizations left no recorded history, little is known of their calendars or timekeeping methods. Methods of sexagesimal timekeeping, now common in both Western and Eastern societies, are first attested nearly 4,000 years ago in Mesopotamia and Egypt. Mesoamericans similarly modified their usual vigesimal counting system when dealing with calendars to produce a 360 - day year.
The oldest known sundial is from Egypt; it dates back to around 1500 BC (19th Dynasty), and was discovered in the Valley of the Kings in 2013. Sundials have their origin in shadow clocks, which were the first devices used for measuring the parts of a day. Ancient Egyptian obelisks, constructed about 3500 BC, are also among the earliest shadow clocks.
Egyptian shadow clocks divided daytime into 12 parts with each part further divided into more precise parts. One type of shadow clock consisted of a long stem with five variable marks and an elevated crossbar which cast a shadow over those marks. It was positioned eastward in the morning, and was turned west at noon. Obelisks functioned in much the same manner: the shadow cast on the markers around it allowed the Egyptians to calculate the time. The obelisk also indicated whether it was morning or afternoon, as well as the summer and winter solstices. A third shadow clock, developed c. 1500 BC, was similar in shape to a bent T - square. It measured the passage of time by the shadow cast by its crossbar on a non-linear rule. The T was oriented eastward in the mornings, and turned around at noon, so that it could cast its shadow in the opposite direction.
Although accurate, shadow clocks relied on the sun, and so were useless at night and in cloudy weather. The Egyptians therefore developed a number of alternative timekeeping instruments, including water clocks, and a system for tracking star movements. The oldest description of a water clock is from the tomb inscription of the 16th - century BC Egyptian court official Amenemhet, identifying him as its inventor. There were several types of water clocks, some more elaborate than others. One type consisted of a bowl with small holes in its bottom, which was floated on water and allowed to fill at a near - constant rate; markings on the side of the bowl indicated elapsed time, as the surface of the water reached them. The oldest - known waterclock was found in the tomb of pharaoh Amenhotep I (1525 -- 1504 BC), suggesting that they were first used in ancient Egypt. Another Egyptian method of determining the time during the night was using plumb - lines called merkhets. In use since at least 600 BC, two of these instruments were aligned with Polaris, the north pole star, to create a north -- south meridian. The time was accurately measured by observing certain stars as they crossed the line created with the merkhets.
Water clocks, or clepsydrae, were commonly used in Ancient Greece following their introduction by Plato, who also invented a water - based alarm clock. One account of Plato 's alarm clock describes it as depending on the nightly overflow of a vessel containing lead balls, which floated in a columnar vat. The vat held a steadily increasing amount of water, supplied by a cistern. By morning, the vessel would have floated high enough to tip over, causing the lead balls to cascade onto a copper platter. The resultant clangor would then awaken Plato 's students at the Academy. Another possibility is that it comprised two jars, connected by a siphon. Water emptied until it reached the siphon, which transported the water to the other jar. There, the rising water would force air through a whistle, sounding an alarm. The Greeks and Chaldeans regularly maintained timekeeping records as an essential part of their astronomical observations.
The Greek astronomer Andronicus of Cyrrhus supervised the construction of the Tower of the Winds in Athens in the 1st century BC.
In Greek tradition, clepsydrae were used in court; later, the Romans adopted this practice, as well. There are several mentions of this in historical records and literature of the era; for example, in Theaetetus, Plato says that "Those men, on the other hand, always speak in haste, for the flowing water urges them on ''. Another mention occurs in Lucius Apuleius ' The Golden Ass: "The Clerk of the Court began bawling again, this time summoning the chief witness for the prosecution to appear. Up stepped an old man, whom I did not know. He was invited to speak for as long as there was water in the clock; this was a hollow globe into which water was poured through a funnel in the neck, and from which it gradually escaped through fine perforations at the base ''. The clock in Apuleius 's account was one of several types of water clock used. Another consisted of a bowl with a hole in its centre, which was floated on water. Time was kept by observing how long the bowl took to fill with water.
Although clepsydrae were more useful than sundials -- they could be used indoors, during the night, and also when the sky was cloudy -- they were not as accurate; the Greeks, therefore, sought a way to improve their water clocks. Although still not as accurate as sundials, Greek water clocks became more accurate around 325 BC, and they were adapted to have a face with an hour hand, making the reading of the clock more precise and convenient. One of the more common problems in most types of clepsydrae was caused by water pressure: when the container holding the water was full, the increased pressure caused the water to flow more rapidly. This problem was addressed by Greek and Roman horologists beginning in 100 BC, and improvements continued to be made in the following centuries. To counteract the increased water flow, the clock 's water containers -- usually bowls or jugs -- were given a conical shape; positioned with the wide end up, a greater amount of water had to flow out in order to drop the same distance as when the water was lower in the cone. Along with this improvement, clocks were constructed more elegantly in this period, with hours marked by gongs, doors opening to miniature figurines, bells, or moving mechanisms. There were some remaining problems, however, which were never solved, such as the effect of temperature. Water flows more slowly when cold, or may even freeze.
Between 270 BC and AD 500, Hellenistic (Ctesibius, Hero of Alexandria, Archimedes) and Roman horologists and astronomers began developing more elaborate mechanized water clocks. The added complexity was aimed at regulating the flow and at providing fancier displays of the passage of time. For example, some water clocks rang bells and gongs, while others opened doors and windows to show figurines of people, or moved pointers, and dials. Some even displayed astrological models of the universe.
Although the Greeks and Romans did much to advance water clock technology, they still continued to use shadow clocks. The mathematician and astronomer Theodosius of Bithynia, for example, is said to have invented a universal sundial that was accurate anywhere on Earth, though little is known about it. Others wrote of the sundial in the mathematics and literature of the period. Marcus Vitruvius Pollio, the Roman author of De Architectura, wrote on the mathematics of gnomons, or sundial blades. During the reign of Emperor Augustus, the Romans constructed the largest sundial ever built, the Solarium Augusti. Its gnomon was an obelisk from Heliopolis. Similarly, the obelisk from Campus Martius was used as the gnomon for Augustus 's zodiacal sundial. Pliny the Elder records that the first sundial in Rome arrived in 264 BC, looted from Catania, Sicily; according to him, it gave the incorrect time until the markings and angle appropriate for Rome 's latitude were used -- a century later.
According to Callisthenes, the Persians were using water clocks in 328 BC to ensure a just and exact distribution of water from qanats to their shareholders for agricultural irrigation. The use of water clocks in Iran, especially in Zeebad, dates back to 500 BC. Later they were also used to determine the exact holy days of pre-Islamic religions, such as the Nowruz, Chelah, or Yaldā -- the shortest, longest, and equal - length days and nights of the years. The water clocks used in Iran were one of the most practical ancient tools for timing the yearly calendar.
Water clocks, or Fenjaan, in Persia reached a level of accuracy comparable to today 's standards of timekeeping. The fenjaan was the most accurate and commonly used timekeeping device for calculating the amount of time that a farmer could take water from a qanat or well for irrigation of the farms, until it was replaced by more accurate modern clocks. Persian water clocks were a practical and useful tool for the qanat 's shareholders to calculate the length of time they could divert water to their farms. The qanat was the only water source for agriculture and irrigation, so a just and fair water distribution was very important. Therefore, a very fair and clever old person was elected to be the manager of the water clock, and at least two full - time managers were needed to control and observe the number of fenjaans and announce the exact time during the days and nights.
The fenjaan was a big pot full of water and a bowl with a small hole in its center. When the bowl became full of water, it would sink into the pot, and the manager would empty the bowl and again put it on the top of the water in the pot. He would record the number of times the bowl sank by putting small stones into a jar.
The place where the clock was situated, and its managers, were collectively known as khaneh fenjaan. Usually this would be the top floor of a public - house, with west - and east - facing windows to show the time of sunset and sunrise. There was also another time - keeping tool named a staryab or astrolabe, but it was mostly used for superstitious beliefs and was not practical for use as a farmers ' calendar. The Zeebad Gonabad water clock was in use until 1965 when it was substituted by modern clocks.
Joseph Needham speculated that the introduction of the outflow clepsydra to China, perhaps from Mesopotamia, occurred as far back as the 2nd millennium BC, during the Shang Dynasty, and at the latest by the 1st millennium BC. By the beginning of the Han Dynasty, in 202 BC, the outflow clepsydra was gradually replaced by the inflow clepsydra, which featured an indicator rod on a float. To compensate for the falling pressure head in the reservoir, which slowed timekeeping as the vessel filled, Zhang Heng added an extra tank between the reservoir and the inflow vessel. Around 550 AD, Yin Gui was the first in China to write of the overflow or constant - level tank added to the series, which was later described in detail by the inventor Shen Kuo. Around 610, this design was trumped by two Sui Dynasty inventors, Geng Xun and Yuwen Kai, who were the first to create the balance clepsydra, with standard positions for the steelyard balance. Joseph Needham states that:
... (the balance clepsydra) permitted the seasonal adjustment of the pressure head in the compensating tank by having standard positions for the counterweight graduated on the beam, and hence it could control the rate of flow for different lengths of day and night. With this arrangement no overflow tank was required, and the two attendants were warned when the clepsydra needed refilling.
The term ' clock ' encompasses a wide spectrum of devices, ranging from wristwatches to the Clock of the Long Now. The English word clock is said to derive from the Middle English clokke, Old North French cloque, or Middle Dutch clocke, all of which mean bell, and are derived from the Medieval Latin clocca, also meaning bell. Indeed, bells were used to mark the passage of time; they marked the passage of the hours at sea and in abbeys.
Throughout history, clocks have had a variety of power sources, including gravity, springs, and electricity. Mechanical clocks became widespread in the 14th century, when they were used in medieval monasteries to keep the regulated schedule of prayers. The clock continued to be improved, with the first pendulum clock being designed and built in the 17th century.
The earliest mention of candle clocks comes from a Chinese poem, written in AD 520 by You Jianfu. According to the poem, the graduated candle was a means of determining time at night. Similar candles were used in Japan until the early 10th century.
The candle clock most commonly mentioned and written of is attributed to King Alfred the Great. It consisted of six candles made from 72 pennyweights of wax, each 12 inches (30 cm) high, and of uniform thickness, marked every inch (2.54 cm). As these candles burned for about four hours, each mark represented 20 minutes. Once lit, the candles were placed in wooden framed glass boxes, to prevent the flame from extinguishing.
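The arithmetic behind those markings can be checked directly; a small sketch using the figures given above:

```python
# King Alfred's candle clock, using the figures given above.
BURN_TIME_HOURS = 4    # each candle burned for about four hours
MARKS_PER_CANDLE = 12  # a 12-inch candle marked every inch
CANDLES = 6

minutes_per_mark = BURN_TIME_HOURS * 60 / MARKS_PER_CANDLE
hours_covered_per_day = CANDLES * BURN_TIME_HOURS

print(minutes_per_mark)       # 20.0 minutes per one-inch mark
print(hours_covered_per_day)  # 24 hours -- six candles span a full day
```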
The most sophisticated candle clocks of their time were those of Al - Jazari in 1206. One of his candle clocks included a dial to display the time and, for the first time, employed a bayonet fitting, a fastening mechanism still used in modern times. Donald Routledge Hill described Al - Jazari 's candle clocks as follows:
The candle, whose rate of burning was known, bore against the underside of the cap, and its wick passed through the hole. Wax collected in the indentation and could be removed periodically so that it did not interfere with steady burning. The bottom of the candle rested in a shallow dish that had a ring on its side connected through pulleys to a counterweight. As the candle burned away, the weight pushed it upward at a constant speed. The automata were operated from the dish at the bottom of the candle. No other candle clocks of this sophistication are known.
A variation on this theme were oil - lamp clocks. These early timekeeping devices consisted of a graduated glass reservoir to hold oil -- usually whale oil, which burned cleanly and evenly -- supplying the fuel for a built - in lamp. As the level in the reservoir dropped, it provided a rough measure of the passage of time.
In addition to water, mechanical, and candle clocks, incense clocks were used in the Far East, and were fashioned in several different forms. Incense clocks were first used in China around the 6th century; in Japan, one still exists in the Shōsōin, although its characters are not Chinese, but Devanagari. Due to their frequent use of Devanagari characters, suggestive of their use in Buddhist ceremonies, Edward H. Schafer speculated that incense clocks were invented in India. Although similar to the candle clock, incense clocks burned evenly and without a flame; therefore, they were more accurate and safer for indoor use.
Several types of incense clock have been found, the most common forms being the incense stick and the incense seal. An incense stick clock was an incense stick with calibrations; most were elaborate, sometimes having threads, with weights attached, at even intervals. The weights would drop onto a platter or gong below, signifying that a certain amount of time had elapsed. Some incense clocks were held in elegant trays; open - bottomed trays were also used, to allow the weights to be used together with the decorative tray. Sticks of incense with different scents were also used, so that the hours were marked by a change in fragrance. The incense sticks could be straight or spiraled; the spiraled ones were longer, and were therefore intended for long periods of use, and often hung from the roofs of homes and temples.
In Japan, a geisha was paid for the number of senkodokei (incense sticks) that had been consumed while she was present, a practice which continued until 1924. Incense seal clocks were used for similar occasions and events as the stick clock; while religious purposes were of primary importance, these clocks were also popular at social gatherings, and were used by Chinese scholars and intellectuals. The seal was a wooden or stone disk with one or more grooves etched in it into which incense was placed. These clocks were common in China, but were produced in fewer numbers in Japan. To signal the passage of a specific amount of time, small pieces of fragrant woods, resins, or different scented incenses could be placed on the incense powder trails. Different powdered incense clocks used different formulations of incense, depending on how the clock was laid out. The length of the trail of incense, directly related to the size of the seal, was the primary factor in determining how long the clock would last; all burned for long periods of time, ranging between 12 hours and a month.
While early incense seals were made of wood or stone, the Chinese gradually introduced disks made of metal, most likely beginning during the Song dynasty. This allowed craftsmen to more easily create both large and small seals, as well as design and decorate them more aesthetically. Another advantage was the ability to vary the paths of the grooves, to allow for the changing length of the days in the year. As smaller seals became more readily available, the clocks grew in popularity among the Chinese, and were often given as gifts. Incense seal clocks are often sought by modern - day clock collectors; however, few remain that have not already been purchased or been placed on display at museums or temples.
Sundials had been used for timekeeping since Ancient Egypt. Ancient dials were nodus - based with straight hour - lines that indicated unequal hours -- also called temporary hours -- that varied with the seasons. Every day was divided into 12 equal segments regardless of the time of year; thus, hours were shorter in winter and longer in summer. The sundial was further developed by Muslim astronomers. The idea of using hours of equal length throughout the year was the innovation of Abu'l - Hasan Ibn al - Shatir in 1371, based on earlier developments in trigonometry by Muhammad ibn Jābir al - Harrānī al - Battānī (Albategni). Ibn al - Shatir was aware that "using a gnomon that is parallel to the Earth 's axis will produce sundials whose hour lines indicate equal hours on any day of the year ''. His sundial is the oldest polar - axis sundial still in existence. The concept appeared in Western sundials starting in 1446.
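The geometry behind that observation can be sketched with the standard modern formula for a horizontal dial whose gnomon is parallel to the Earth 's axis; the formula, the latitude, and the function name below are supplied here for illustration and are not drawn from the passage itself. The hour line for a given hour satisfies tan θ = sin(latitude) × tan(15° × hours from noon):

```python
from math import radians, degrees, atan, tan, sin

def hour_line_angle(hours_from_noon, latitude_deg):
    """Angle between an hour line and the noon line on a horizontal dial
    whose gnomon is parallel to the Earth's axis (standard gnomonics formula)."""
    hour_angle = radians(15 * hours_from_noon)  # the sun moves 15 degrees per hour
    return degrees(atan(sin(radians(latitude_deg)) * tan(hour_angle)))

# Hour lines for 1-5 hours after noon at latitude 45 N (illustrative values).
for h in range(1, 6):
    print(h, round(hour_line_angle(h, 45.0), 1))
```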
Following the acceptance of heliocentrism and equal hours, as well as advances in trigonometry, sundials appeared in their present form during the Renaissance, when they were built in large numbers. In 1524, the French astronomer Oronce Finé constructed an ivory sundial, which still exists; later, in 1570, the Italian astronomer Giovanni Padovani published a treatise including instructions for the manufacture and laying out of mural (vertical) and horizontal sundials. Similarly, Giuseppe Biancani 's Constructio instrumenti ad horologia solaria (c. 1620) discusses how to construct sundials.
Since the hourglass was one of the few reliable methods of measuring time at sea, it is speculated that it was used on board ships as far back as the 11th century, when it would have complemented the magnetic compass as an aid to navigation. However, the earliest unambiguous evidence of their use appears in the painting Allegory of Good Government, by Ambrogio Lorenzetti, from 1338. From the 15th century onwards, hourglasses were used in a wide range of applications at sea, in churches, in industry, and in cooking; they were the first dependable, reusable, reasonably accurate, and easily constructed time - measurement devices. The hourglass also took on symbolic meanings, such as that of death, temperance, opportunity, and Father Time, usually represented as a bearded, old man. Though also used in China, the hourglass 's history there is unknown. The Portuguese navigator Ferdinand Magellan used 18 hourglasses on each ship during his circumnavigation of the globe in 1522.
The earliest instance of a liquid - driven escapement was described by the Greek engineer Philo of Byzantium (fl. 3rd century BC) in his technical treatise Pneumatics (chapter 31) where he likens the escapement mechanism of a washstand automaton with those as employed in (water) clocks. Another early clock to use escapements was built during the 7th century in Chang'an, by Tantric monk and mathematician, Yi Xing, and government official Liang Lingzan. An astronomical instrument that served as a clock, it was discussed in a contemporary text as follows:
(It) was made in the image of the round heavens and on it were shown the lunar mansions in their order, the equator and the degrees of the heavenly circumference. Water, flowing into scoops, turned a wheel automatically, rotating it one complete revolution in one day and night. Besides this, there were two rings fitted around the celestial sphere outside, having the sun and moon threaded on them, and these were made to move in circling orbit... And they made a wooden casing the surface of which represented the horizon, since the instrument was half sunk in it. It permitted the exact determinations of the time of dawns and dusks, full and new moons, tarrying and hurrying. Moreover, there were two wooden jacks standing on the horizon surface, having one a bell and the other a drum in front of it, the bell being struck automatically to indicate the hours, and the drum being beaten automatically to indicate the quarters. All these motions were brought about by machinery within the casing, each depending on wheels and shafts, hooks, pins and interlocking rods, stopping devices and locks checking mutually.
Since Yi Xing 's clock was a water clock, it was affected by temperature variations. That problem was solved in 976 by Zhang Sixun by replacing the water with mercury, which remains liquid down to − 39 ° C (− 38 ° F). Zhang implemented the changes into his clock tower, which was about 10 metres (33 ft) tall, with escapements to keep the clock turning and bells to signal every quarter - hour. Another noteworthy clock, the elaborate Cosmic Engine, was built by Su Song, in 1088. It was about the size of Zhang 's tower, but had an automatically rotating armillary sphere -- also called a celestial globe -- from which the positions of the stars could be observed. It also featured five panels with mannequins ringing gongs or bells, and tablets showing the time of day, or other special times. Furthermore, it featured the first known endless power - transmitting chain drive in horology. Originally built in the capital of Kaifeng, it was dismantled by the Jin army and sent to the capital of Yanjing (now Beijing), where they were unable to put it back together. As a result, Su Song 's son Su Xie was ordered to build a replica.
The clock towers built by Zhang Sixun and Su Song, in the 10th and 11th centuries, respectively, also incorporated a striking clock mechanism, the use of clock jacks to sound the hours. A striking clock outside of China was the Jayrun Water Clock, at the Umayyad Mosque in Damascus, Syria, which struck once every hour. It was constructed by Muhammad al - Sa'ati in the 12th century, and later described by his son Ridwan ibn al - Sa'ati, in his On the Construction of Clocks and their Use (1203), when repairing the clock. In 1235, an early monumental water - powered alarm clock that "announced the appointed hours of prayer and the time both by day and by night '' was completed in the entrance hall of the Mustansiriya Madrasah in Baghdad.
The first geared clock was invented in the 11th century by the Arab engineer Ibn Khalaf al - Muradi in Islamic Iberia; it was a water clock that employed a complex gear train mechanism, including both segmental and epicyclic gearing, capable of transmitting high torque. The clock was unrivalled in its use of sophisticated complex gearing until the mechanical clocks of the mid-14th century. Al - Muradi 's clock also employed mercury in its hydraulic linkages, which could drive mechanical automata. Al - Muradi 's work was known to scholars working under Alfonso X of Castile, hence the mechanism may have played a role in the development of the European mechanical clocks. Other monumental water clocks constructed by medieval Muslim engineers also employed complex gear trains and arrays of automata. Like the earlier Greeks and Chinese, Arab engineers at the time also developed a liquid - driven escapement mechanism which they employed in some of their water clocks. Heavy floats were used as weights and a constant - head system was used as an escapement mechanism, which was present in the hydraulic controls they used to make heavy floats descend at a slow and steady rate.
A mercury clock, described in the Libros del saber de Astronomia, a Spanish work from 1277 consisting of translations and paraphrases of Arabic works, is sometimes quoted as evidence for Muslim knowledge of a mechanical clock. However, the device was actually a compartmented cylindrical water clock, which the Jewish author of the relevant section, Rabbi Isaac, constructed using principles described by a philosopher named "Iran '', identified with Heron of Alexandria (fl. 1st century AD), on how heavy objects may be lifted.
Clock towers in Western Europe in the Middle Ages were also sometimes striking clocks. The most famous original still standing is possibly St Mark 's Clock on the top of St Mark 's Clocktower in St Mark 's Square in Venice, assembled in 1493 by the clockmaker Gian Carlo Rainieri from Reggio Emilia. In 1497, Simone Campanato moulded the great bell, on which each hour is struck by two mechanical bronze statues, each 2.60 m tall, called Due Mori (Two Moors) and wielding hammers. Possibly earlier (1490) is the Prague Astronomical Clock by clockmaster Jan Růže (also called Hanuš) -- according to another source this device was assembled as early as 1410 by clockmaker Mikuláš of Kadaň and mathematician Jan Šindel. The allegorical parade of animated sculptures rings on the hour every day.
During the 11th century in the Song Dynasty, the Chinese astronomer, horologist and mechanical engineer Su Song created a water - driven astronomical clock for his clock tower of Kaifeng City. It incorporated an escapement mechanism as well as the earliest known endless power - transmitting chain drive, which drove the armillary sphere.
Contemporary Muslim astronomers also constructed a variety of highly accurate astronomical clocks for use in their mosques and observatories, such as the water - powered astronomical clock by Al - Jazari in 1206, and the astrolabic clock by Ibn al - Shatir in the early 14th century. The most sophisticated timekeeping astrolabes were the geared astrolabe mechanisms designed by Abū Rayhān Bīrūnī in the 11th century and by Muhammad ibn Abi Bakr in the 13th century. These devices functioned as timekeeping devices and also as calendars.
A sophisticated water - powered astronomical clock was built by Al - Jazari in 1206. This castle clock was a complex device that was about 11 feet (3.4 m) high, and had multiple functions alongside timekeeping. It included a display of the zodiac and the solar and lunar paths, and a pointer in the shape of the crescent moon which travelled across the top of a gateway, moved by a hidden cart and causing doors to open, each revealing a mannequin, every hour. It was possible to reset the length of day and night in order to account for the changing lengths of day and night throughout the year. This clock also featured a number of automata including falcons and musicians who automatically played music when moved by levers operated by a hidden camshaft attached to a water wheel.
The earliest medieval European clockmakers were Catholic monks. Medieval religious institutions required clocks because they regulated daily prayer - and work - schedules strictly, using various types of time - telling and recording devices, such as water clocks, sundials and marked candles, probably in combination. When mechanical clocks came into use, they were often wound at least twice a day to ensure accuracy. Monasteries broadcast important times and durations with bells, rung either by hand or by a mechanical device, such as by a falling weight or by rotating beater.
Although the mortuary inscription of Pacificus, archdeacon of Verona, records that he constructed a night clock (horologium nocturnum) as early as 850, his clock has been identified as being an observation tube used to locate stars with an accompanying book of astronomical observations, rather than a mechanical or water clock, an interpretation supported by illustrations from medieval manuscripts.
The religious necessities and technical skill of the medieval monks were crucial factors in the development of clocks, as the historian Thomas Woods writes:
The monks also counted skillful clock - makers among them. The first recorded clock was built by the future Pope Sylvester II for the German town of Magdeburg, around the year 996. Much more sophisticated clocks were built by later monks. Peter Lightfoot, a 14th - century monk of Glastonbury, built one of the oldest clocks still in existence, which now sits in excellent condition in London 's Science Museum.
The appearance of clocks in writings of the 11th century implies that they were well known in Europe in that period. In the early 14th century, the Florentine poet Dante Alighieri referred to a clock in his Paradiso, the first known literary reference to a clock that struck the hours. Giovanni da Dondi, Professor of Astronomy at Padua, presented the earliest detailed description of clockwork in his 1364 treatise Il Tractatus Astrarii. This has inspired several modern replicas, including some in London 's Science Museum and the Smithsonian Institution. Other notable examples from this period were built in Milan (1335), Strasbourg (1354), Lund (1380), Rouen (1389), and Prague (1462).
Salisbury cathedral clock, dating from about 1386, is one of the oldest working clocks in the world, and may be the oldest. It still has most of its original parts, although its original verge and foliot timekeeping mechanism is lost, having been converted to a pendulum, which was replaced by a replica verge in 1956. It has no dial, as its purpose was to strike a bell at precise times. The wheels and gears are mounted in an open, box - like iron frame, measuring about 1.2 metres (3.9 ft) square. The framework is held together with metal dowels and pegs. Two large stones, hanging from pulleys, supply the power. As the weights fall, ropes unwind from the wooden barrels. One barrel drives the main wheel, which is regulated by the escapement, and the other drives the striking mechanism and the air brake.
Note also Peter Lightfoot 's Wells Cathedral clock, constructed c. 1390. The dial represents a geocentric view of the universe, with the Sun and Moon revolving around a central fixed Earth. It is unique in having its original medieval face, showing a philosophical model of the pre-Copernican universe. Above the clock is a set of figures, which hit the bells, and a set of jousting knights who revolve around a track every 15 minutes. The clock was converted to pendulum - and - anchor escapement in the 17th century, and was installed in London 's Science Museum in 1884, where it continues to operate. Similar astronomical clocks, or horologes, survive at Exeter, Ottery St Mary, and Wimborne Minster.
One clock that has not survived is that of the Abbey of St Albans, built by the 14th - century abbot Richard of Wallingford. It may have been destroyed during Henry VIII 's Dissolution of the Monasteries, but the abbot 's notes on its design have allowed a full - scale reconstruction. As well as keeping time, the astronomical clock could accurately predict lunar eclipses, and may have shown the Sun, Moon (age, phase, and node), stars and planets, as well as a wheel of fortune, and an indicator of the state of the tide at London Bridge. According to Thomas Woods, "a clock that equaled it in technological sophistication did not appear for at least two centuries ''. Giovanni de Dondi was another early mechanical clockmaker whose clock did not survive, but his work has been replicated based on the designs. De Dondi 's clock was a seven - faced construction with 107 moving parts, showing the positions of the Sun, Moon, and five planets, as well as religious feast days. Around this period, mechanical clocks were introduced into abbeys and monasteries to mark important events and times, gradually replacing water clocks which had served the same purpose.
During the Middle Ages, clocks primarily served religious purposes; the first employed for secular timekeeping emerged around the 15th century. In Dublin, the official measurement of time became a local custom, and by 1466 a public clock stood on top of the Tholsel (the city court and council chamber). It was the first of its kind to be clearly recorded in Ireland, and would only have had an hour hand. The increasing lavishness of castles led to the introduction of turret clocks. A 1435 example survives from Leeds castle; its face is decorated with the images of the Crucifixion of Jesus, Mary and St George.
Early clock dials showed hours: the display of minutes and seconds evolved later. A clock with a minutes dial is mentioned in a 1475 manuscript, and clocks indicating minutes and seconds existed in Germany in the 15th century. Timepieces which indicated minutes and seconds were occasionally made from this time on, but this was not common until the increase in accuracy made possible by the pendulum clock and, in watches, by the spiral balance spring. The 16th - century astronomer Tycho Brahe used clocks with minutes and seconds to observe stellar positions.
The Ottoman engineer Taqi al - Din described a weight - driven clock with a verge - and - foliot escapement, a striking train of gears, an alarm, and a representation of the moon 's phases in his book The Brightest Stars for the Construction of Mechanical Clocks (Al - Kawākib al - durriyya fī wadh ' al - bankāmat al - dawriyya), written around 1556.
The concept of the wristwatch goes back to the production of the very earliest watches in the 16th century. Elizabeth I of England received a wristwatch from Robert Dudley in 1571, described as an arm watch. From the beginning, wrist watches were almost exclusively worn by women, while men used pocket - watches up until the early 20th century. This was not just a matter of fashion or prejudice; watches of the time were notoriously prone to fouling from exposure to the elements, and could only reliably be kept safe from harm if carried securely in the pocket. When the waistcoat was introduced as a manly fashion at the court of Charles II in the 17th century, the pocket watch was tucked into its pocket. Prince Albert, the consort to Queen Victoria, introduced the ' Albert chain ' accessory, designed to secure the pocket watch to the man 's outergarment by way of a clip. By the mid nineteenth century, most watchmakers produced a range of wristwatches, often marketed as bracelets, for women.
Wristwatches were first worn by military men towards the end of the nineteenth century, when the importance of synchronizing manoeuvres during war without potentially revealing the plan to the enemy through signalling was increasingly recognized. It was clear that using pocket watches while in the heat of battle or while mounted on a horse was impractical, so officers began to strap the watches to their wrist. The Garstin Company of London patented a ' Watch Wristlet ' design in 1893, although they were probably producing similar designs from the 1880s. Clearly, a market for men 's wristwatches was coming into being at the time. Officers in the British Army began using wristwatches during colonial military campaigns in the 1880s, such as during the Anglo - Burma War of 1885.
During the Boer War, the importance of coordinating troop movements and synchronizing attacks against the highly mobile Boer insurgents was paramount, and the use of wristwatches subsequently became widespread among the officer class. The company Mappin & Webb began production of their successful ' campaign watch ' for soldiers during the campaign at the Sudan in 1898 and ramped up production for the Boer War a few years later.
These early models were essentially standard pocket - watches fitted to a leather strap, but by the early 20th century, manufacturers began producing purpose - built wristwatches. The Swiss company, Dimier Frères & Cie patented a wristwatch design with the now standard wire lugs in 1903. In 1904, Alberto Santos - Dumont, an early aviator, asked his friend, a French watchmaker called Louis Cartier, to design a watch that could be useful during his flights. Hans Wilsdorf moved to London in 1905 and set up his own business with his brother - in - law Alfred Davis, Wilsdorf & Davis, providing quality timepieces at affordable prices -- the company later became Rolex. Wilsdorf was an early convert to the wristwatch, and contracted the Swiss firm Aegler to produce a line of wristwatches. His Rolex wristwatch of 1910 became the first such watch to receive certification as a chronometer in Switzerland and it went on to win an award in 1914 from Kew Observatory in Richmond, west London.
The impact of the First World War dramatically shifted public perceptions on the propriety of the man 's wristwatch, and opened up a mass market in the post-war era. The creeping barrage artillery tactic, developed during the War, required precise synchronization between the artillery gunners and the infantry advancing behind the barrage. Service watches produced during the War were specially designed for the rigours of trench warfare, with luminous dials and unbreakable glass. Wristwatches were also found to be needed in the air as much as on the ground: military pilots found them more convenient than pocket watches for the same reasons as Santos - Dumont had. The British War Department began issuing wristwatches to combatants from 1917.
The company H. Williamson Ltd., based in Coventry, was one of the first to capitalize on this opportunity. During the company 's 1916 AGM it was noted that "... the public is buying the practical things of life. Nobody can truthfully contend that the watch is a luxury. It is said that one soldier in every four wears a wristlet watch, and the other three mean to get one as soon as they can. '' By the end of the War, almost all enlisted men wore a wristwatch, and after they were demobilized, the fashion soon caught on -- the British Horological Journal wrote in 1917 that "... the wristlet watch was little used by the sterner sex before the war, but now is seen on the wrist of nearly every man in uniform and of many men in civilian attire. '' Within a decade, sales of wristwatches had outstripped those of pocket watches.
In the late 17th and 18th centuries, equation clocks were made, which allowed the user to see or calculate apparent solar time, as would be shown by a sundial. Before the invention of the pendulum clock, sundials were the only accurate timepieces. When good clocks became available, they appeared inaccurate to people who were used to trusting sundials. The annual variation of the equation of time made a clock up to about 15 minutes fast or slow, relative to a sundial, depending on the time of year. Equation clocks satisfied the demand for clocks that always agreed with sundials. Several types of equation clock mechanism were devised, which can be seen in surviving examples, mostly in museums.
Innovations to the mechanical clock continued, with miniaturization leading to domestic clocks in the 15th century, and personal watches in the 16th. In the 1580s, the Italian polymath Galileo Galilei investigated the regular swing of the pendulum, and discovered that it could be used to regulate a clock. Although Galileo studied the pendulum as early as 1582, he never actually constructed a clock based on that design. The first pendulum clock was designed and built by Dutch scientist Christiaan Huygens, in 1656. Early versions erred by less than one minute per day, and later ones only by 10 seconds, very accurate for their time.
In England, the manufacturing of pendulum clocks was soon taken up. The longcase clock (also known as the grandfather clock) was first created to house the pendulum and works by the English clockmaker William Clement in 1670 or 1671; this became feasible after Clement invented the anchor escapement mechanism in about 1670. Before then, pendulum clocks used the older verge escapement mechanism, which required very wide pendulum swings of about 100 °. To avoid the need for a very large case, most clocks using the verge escapement had a short pendulum. The anchor mechanism, however, reduced the pendulum 's necessary swing to between 4 ° to 6 °, allowing clockmakers to use longer pendulums with consequently slower beats. These required less power to move, caused less friction and wear, and were more accurate than their shorter predecessors. Most longcase clocks use a pendulum about a metre (39 inches) long to the center of the bob, with each swing taking one second. This requirement for height, along with the need for a long drop space for the weights that power the clock, gave rise to the tall, narrow case.
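The metre - long, one - second figure follows from the small - angle pendulum formula T = 2π√(L/g); a short sketch (the value of g is the standard approximation, not taken from the text):

```python
from math import pi, sqrt

g = 9.81  # m/s^2, standard gravity (approximate)

def swing_time(length_m):
    """Time for one swing, i.e. half the full small-angle period 2*pi*sqrt(L/g)."""
    return pi * sqrt(length_m / g)

def length_for_swing(seconds):
    """Pendulum length whose single swing takes the given time."""
    return g * (seconds / pi) ** 2

print(round(length_for_swing(1.0), 3))  # ~0.994 m -- the classic "seconds pendulum"
print(round(swing_time(0.994), 3))      # ~1.0 s per swing
```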
Clement also introduced the pendulum suspension spring in 1671. The concentric minute hand was added to the clock by Daniel Quare, a London clock - maker, and the second hand was also introduced.
The Jesuits were another major contributor to the development of pendulum clocks in the 17th and 18th centuries, having had an "unusually keen appreciation of the importance of precision ''. In measuring an accurate one - second pendulum, for example, the Italian astronomer Father Giovanni Battista Riccioli persuaded nine fellow Jesuits "to count nearly 87,000 oscillations in a single day ''. They served a crucial role in spreading and testing the scientific ideas of the period, and collaborated with contemporary scientists, such as Huygens.
The invention of the mainspring in the early 15th century allowed portable clocks to be built, evolving into the first pocketwatches by the 17th century, but these were not very accurate until the balance spring was added to the balance wheel in the mid 17th century. Some dispute remains as to whether British scientist Robert Hooke (his was a straight spring) or Dutch scientist Christiaan Huygens was the actual inventor of the balance spring. Huygens was clearly the first to use a spiral balance spring, the form used in virtually all watches to the present day. The addition of the balance spring made the balance wheel a harmonic oscillator like the pendulum in a pendulum clock, which oscillated at a fixed resonant frequency and resisted oscillating at other rates. This innovation increased watches ' accuracy enormously, reducing error from perhaps several hours per day to perhaps 10 minutes per day, resulting in the addition of the minute hand to the watch face around 1680 in Britain and 1700 in France.
Like the invention of the pendulum clock, Huygens ' spiral hairspring (balance spring) system for portable timekeepers helped lay the foundations for the modern watchmaking industry. The application of the spiral balance spring to watches ushered in a new era of accuracy for portable timekeepers, similar to that which the pendulum had introduced for clocks. Since its invention in 1675 by Christiaan Huygens, the spiral hairspring (balance spring) system for portable timekeepers has remained in use in the mechanical watchmaking industry to the present day.
In 1675, Huygens and Robert Hooke invented the spiral balance, or the hairspring, designed to control the oscillating speed of the balance wheel. This crucial advance finally made accurate pocket watches possible, improving their accuracy from perhaps several hours per day to around 10 minutes per day, similar to the effect of the pendulum upon mechanical clocks. The great English clockmaker, Thomas Tompion, was one of the first to use this mechanism successfully in his pocket watches, and he adopted the minute hand which, after a variety of designs were trialled, eventually stabilised into the modern - day configuration.
The Rev. Edward Barlow invented the rack and snail striking mechanism for striking clocks, which was a great improvement over the previous mechanism. The repeating clock, that chimes the number of hours (or even minutes) was invented by either Quare or Barlow in 1676. George Graham invented the deadbeat escapement for clocks in 1720.
Marine chronometers are clocks used at sea as time standards, to determine longitude by celestial navigation. A major stimulus to improving the accuracy and reliability of clocks was the importance of precise time - keeping for navigation. The position of a ship at sea could be determined with reasonable accuracy if a navigator could refer to a clock that lost or gained less than about 10 seconds per day. The marine chronometer would have to keep the time of a fixed location -- usually Greenwich Mean Time -- allowing seafarers to determine longitude by comparing the local high noon to the clock. This clock could not contain a pendulum, which would be virtually useless on a rocking ship.
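Because the Earth rotates 15 degrees per hour, the longitude calculation itself is simple arithmetic; a minimal sketch (the function name and example reading are illustrative):

```python
def longitude_from_noon(greenwich_time_at_local_noon_hours):
    """Degrees of longitude west of Greenwich, given the chronometer's
    Greenwich time (in hours) at the moment of local high noon.
    The Earth turns 15 degrees per hour."""
    return (greenwich_time_at_local_noon_hours - 12.0) * 15.0

# If the chronometer reads 15:00 Greenwich time when the sun is at its local zenith,
# the ship is three hours behind Greenwich, i.e. about 45 degrees west.
print(longitude_from_noon(15.0))  # 45.0
```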
After the Scilly naval disaster of 1707 where four ships ran aground due to navigational mistakes, the British government offered a large prize of £ 20,000, equivalent to millions of pounds today, for anyone who could determine longitude accurately. The reward was eventually claimed in 1761 by Yorkshire carpenter John Harrison, who dedicated his life to improving the accuracy of his clocks.
In 1735 Harrison built his first chronometer, which he steadily improved on over the next thirty years before submitting it for examination. The clock had many innovations, including the use of bearings to reduce friction, weighted balances to compensate for the ship 's pitch and roll in the sea and the use of two different metals to reduce the problem of expansion from heat.
The chronometer was trialled in 1761 by Harrison 's son and by the end of 10 weeks the clock was in error by less than 5 seconds.
In 1815, Sir Francis Ronalds (1788 -- 1873) of London published the forerunner of the electric clock, the electrostatic clock. It was powered by dry piles, a high - voltage battery with extremely long life but with the disadvantage that its electrical properties varied with the weather. He trialled various means of regulating the electricity, and these models proved to be reliable across a range of meteorological conditions.
Alexander Bain, a Scottish clock and instrument maker, was the first to invent and patent the electric clock, in 1840. On January 11, 1841, Bain, along with John Barwise, a chronometer maker, took out another important patent describing a clock in which an electromagnetic pendulum and an electric current are employed to keep the clock going, instead of springs or weights. Later patents expanded on his original ideas.
The piezoelectric properties of crystalline quartz were discovered by Jacques and Pierre Curie in 1880. The first quartz crystal oscillator was built by Walter G. Cady in 1921, and in 1927 the first quartz clock was built by Warren Marrison and J.W. Horton at Bell Telephone Laboratories. The following decades saw the development of quartz clocks as precision time measurement devices in laboratory settings -- the bulky and delicate counting electronics, built with vacuum tubes, limited their practical use elsewhere. In 1932, a quartz clock able to measure small weekly variations in the rotation rate of the Earth was developed. The National Bureau of Standards (now NIST) based the time standard of the United States on quartz clocks from late 1929 until the 1960s, when it changed to atomic clocks. In 1969, Seiko produced the world 's first quartz wristwatch, the Astron. The inherent accuracy and low cost of production of quartz timekeepers have resulted in the subsequent proliferation of quartz clocks and watches.
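A typical modern quartz wristwatch (not the 1927 laboratory clock) uses a crystal cut to oscillate at 32,768 Hz -- exactly 2^15 -- so a chain of fifteen divide - by - two stages brings it down to one pulse per second; that frequency is a standard value for watch crystals rather than a figure from the passage above. A minimal sketch:

```python
CRYSTAL_HZ = 32_768  # 2**15, the usual wristwatch crystal frequency (an added detail)

frequency = CRYSTAL_HZ
divider_stages = 0
while frequency > 1:
    frequency //= 2      # each flip-flop stage halves the frequency
    divider_stages += 1

print(divider_stages, frequency)  # 15 stages, 1 Hz -- one tick per second
```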
Atomic clocks are the most accurate timekeeping devices in practical use today. Accurate to within a few seconds over many thousands of years, they are used to calibrate other clocks and timekeeping instruments.
The idea of using atomic transitions to measure time was first suggested by Lord Kelvin in 1879, although it was only in the 1930s, with the development of magnetic resonance, that a practical method for doing this became available. A prototype ammonia maser device was built in 1949 at the U.S. National Bureau of Standards (NBS, now NIST). Although it was less accurate than existing quartz clocks, it served to demonstrate the concept.
The first accurate atomic clock, a caesium standard based on a certain transition of the caesium - 133 atom, was built by Louis Essen in 1955 at the National Physical Laboratory in the UK. Calibration of the caesium standard atomic clock was carried out by the use of the astronomical time scale ephemeris time (ET).
The International System of Units (SI) standardized its unit of time, the second, on the properties of caesium in 1967. The SI defines the second as 9,192,631,770 cycles of the radiation corresponding to the transition between two hyperfine energy levels of the ground state of the caesium - 133 atom. The caesium atomic clock maintained by the National Institute of Standards and Technology is accurate to 30 billionths of a second per year. Atomic clocks have employed other elements, such as hydrogen and rubidium vapor, offering greater stability -- in the case of hydrogen clocks -- and smaller size, lower power consumption, and thus lower cost (in the case of rubidium clocks).
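Those two figures can be combined to give a sense of scale; the sketch below computes the number of caesium cycles counted per day and the fractional error implied by a 30 - nanosecond annual drift (the year length used is an approximation):

```python
CAESIUM_CYCLES_PER_SECOND = 9_192_631_770  # SI definition of the second
SECONDS_PER_YEAR = 365.25 * 24 * 3600      # ~3.16e7 s (approximate)
DRIFT_SECONDS_PER_YEAR = 30e-9             # "30 billionths of a second per year"

cycles_per_day = CAESIUM_CYCLES_PER_SECOND * 86_400
fractional_error = DRIFT_SECONDS_PER_YEAR / SECONDS_PER_YEAR

print(f"Cycles counted per day: {cycles_per_day:,}")
print(f"Fractional error: about {fractional_error:.1e}")  # roughly 1 part in 10**15
```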
The first professional clockmakers came from the guilds of locksmiths and jewellers. Clockmaking developed from a specialized craft into a mass production industry over many years.
Paris and Blois were the early centres of clockmaking in France. French clockmakers such as Julien Le Roy, clockmaker of Versailles, were leaders in case design and ornamental clocks. Le Roy belonged to the fifth generation of a family of clockmakers, and was described by his contemporaries as "the most skillful clockmaker in France, possibly in Europe ''. He invented a special repeating mechanism which improved the precision of clocks and watches, a face that could be opened to view the inside clockwork, and made or supervised over 3,500 watches. The competition and scientific rivalry resulting from his discoveries further encouraged researchers to seek new methods of measuring time more accurately.
Between 1794 and 1795, in the aftermath of the French Revolution, the French government briefly mandated decimal clocks, with a day divided into 10 hours of 100 minutes each. The astronomer and mathematician Pierre - Simon Laplace, among other individuals, modified the dial of his pocket watch to decimal time. A clock in the Palais des Tuileries kept decimal time as late as 1801, but the cost of replacing all the nation 's clocks prevented decimal clocks from becoming widespread. Because decimalized clocks only helped astronomers rather than ordinary citizens, it was one of the most unpopular changes associated with the metric system, and it was abandoned.
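For illustration, the sketch below converts ordinary clock time to the revolutionary decimal scale; the further division of each decimal minute into 100 decimal seconds was also part of the historical scheme, though it is not mentioned in the passage above:

```python
def to_decimal_time(hours, minutes, seconds):
    """Convert ordinary h:m:s to French decimal time
    (10 decimal hours x 100 decimal minutes x 100 decimal seconds per day)."""
    day_fraction = (hours * 3600 + minutes * 60 + seconds) / 86_400
    total_decimal_seconds = day_fraction * 100_000
    decimal_hours, rest = divmod(total_decimal_seconds, 10_000)
    decimal_minutes, decimal_seconds = divmod(rest, 100)
    return int(decimal_hours), int(decimal_minutes), round(decimal_seconds)

print(to_decimal_time(12, 0, 0))  # (5, 0, 0)  -- ordinary noon is 5 decimal hours
print(to_decimal_time(18, 0, 0))  # (7, 50, 0) -- 6 pm is 7 h 50 min decimal
```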
In Germany, Nuremberg and Augsburg were the early clockmaking centers, and the Black Forest came to specialize in wooden cuckoo clocks.
The English became the predominant clockmakers of the 17th and 18th centuries. The main centres of the British industry were in the City of London, the West End of London, and Soho, where many skilled French Huguenots settled, and later in Clerkenwell. The Worshipful Company of Clockmakers was established in 1631 as one of the Livery Companies of the City of London.
Thomas Tompion was the first English clockmaker with an international reputation, and many of his pupils went on to become great horologists in their own right, such as George Graham, who invented the deadbeat escapement, the orrery, and the mercury pendulum, and his pupil Thomas Mudge, who created the first lever escapement. Famous clockmakers of this period included Joseph Windmills; Simon de Charmes, who established the De Charmes clockmaker firm; and Christopher Pinchbeck, who invented the alloy pinchbeck.
Later famous horologists included John Arnold, who made the first practical and accurate modern watch by refining Harrison 's chronometer; Thomas Earnshaw, who was the first to make such watches available to the public; and Daniel Quare, who invented a repeating watch movement and a portable barometer and introduced the concentric minute hand.
Quality control and standards were imposed on clockmakers by the Worshipful Company of Clockmakers, a guild which licensed clockmakers to do business. With the rise of consumerism in the late 18th century, clocks, especially pocket watches, became regarded as fashion accessories and were made in increasingly decorative styles. By 1796, the industry had reached a high point, with almost 200,000 clocks being produced annually in London; by the mid-19th century, however, the industry had gone into steep decline in the face of Swiss competition.
Switzerland established itself as a clockmaking center following the influx of Huguenot craftsmen, and in the 19th century, the Swiss industry "gained worldwide supremacy in high - quality machine - made watches ''. The leading firm of the day was Patek Philippe, founded by Antoni Patek of Warsaw and Adrien Philippe of Bern.
|
battle flag of the army of the potomac | Army of the Potomac - wikipedia
The Army of the Potomac was the principal Union Army in the Eastern Theater of the American Civil War. It was created in July 1861 shortly after the First Battle of Bull Run and was disbanded in May 1865 following the surrender of the Confederate Army of Northern Virginia in April.
The Army of the Potomac was created in 1861 but was then only the size of a corps (relative to the size of Union armies later in the war). Its nucleus was called the Army of Northeastern Virginia, under Brig. Gen. Irvin McDowell, and it was the army that fought (and lost) the war 's first major battle, the First Battle of Bull Run. The arrival in Washington, D.C., of Maj. Gen. George B. McClellan dramatically changed the makeup of that army. McClellan 's original assignment was to command the Division of the Potomac, which included the Department of Northeast Virginia under McDowell and the Department of Washington under Brig. Gen. Joseph K. Mansfield. On July 26, 1861, the Department of the Shenandoah, commanded by Maj. Gen. Nathaniel P. Banks, was merged with McClellan 's departments and on that day, McClellan formed the Army of the Potomac, which was composed of all military forces in the former Departments of Northeastern Virginia, Washington, Pennsylvania, and the Shenandoah. The men under Banks 's command became an infantry division in the Army of the Potomac. The army started with four corps, but these were divided during the Peninsula Campaign to produce two more. After the Second Battle of Bull Run, the Army of the Potomac absorbed the units that had served under Maj. Gen. John Pope.
It is a popular, but mistaken, belief that John Pope commanded the Army of the Potomac in the summer of 1862 after McClellan 's unsuccessful Peninsula Campaign. On the contrary, Pope 's army consisted of different units, and was named the Army of Virginia. During the time that the Army of Virginia existed, the Army of the Potomac was headquartered on the Virginia Peninsula, and then outside Washington, D.C., with McClellan still in command, although three corps of the Army of the Potomac were sent to northern Virginia and were under Pope 's operational control during the Northern Virginia Campaign.
The Army of the Potomac underwent many structural changes during its existence. The army was divided by Ambrose Burnside into three grand divisions of two corps each with a Reserve composed of two more. Hooker abolished the grand divisions. Thereafter the individual corps, seven of which remained in Virginia, reported directly to army headquarters. Hooker also created a Cavalry Corps by combining units that previously had served as smaller formations. In late 1863, two corps were sent West, and -- in 1864 -- the remaining five corps were recombined into three. Burnside 's IX Corps, which accompanied the army at the start of Ulysses S. Grant 's Overland Campaign, rejoined the army later. For more detail, see the section Corps below.
The Army of the Potomac fought in most of the Eastern Theater campaigns, primarily in (Eastern) Virginia, Maryland, and Pennsylvania. After the end of the war, it was disbanded on June 28, 1865, shortly following its participation in the Grand Review of the Armies.
The Army of the Potomac was also the name given to General P.G.T. Beauregard 's Confederate army during the early stages of the war (namely, First Bull Run; thus, the losing Union Army ended up adopting the name of the winning Confederate army). However, the name was eventually changed to the Army of Northern Virginia, which became famous under General Robert E. Lee.
In 1869 the Society of the Army of the Potomac was formed as a veterans association. It had its last reunion in 1929.
Because of its proximity to the large cities of the North, such as Washington, D.C., Philadelphia, and New York City, the Army of the Potomac received more contemporary media coverage than the other Union field armies. Such coverage produced fame for a number of this army 's units. Individual brigades, such as the Irish Brigade, the Philadelphia Brigade, the First New Jersey Brigade, the Vermont Brigade, and the Iron Brigade, all became well known to the general public, both during the Civil War and afterward.
The army originally consisted of fourteen divisions commanded by Edwin Sumner, William B. Franklin, Louis Blenker, Nathaniel Banks, Frederick W. Lander, Silas Casey, Irvin McDowell, Fitz - John Porter, Samuel Heintzelman, Erasmus Keyes, William F. Smith, Charles P. Stone (replaced by John Sedgwick in February 1862), and George McCall. Because this arrangement would be too hard to control in battle, President Lincoln issued an order on March 13, 1862, dividing the army into five corps headed by Sumner, Banks (despite being in the Shenandoah Valley and not part of the main army), McDowell, Heintzelman, and Keyes, the five highest - ranking officers. McClellan was not happy with this, as he had intended to wait until the army had been tested in battle before judging which generals were suitable for corps command.
After the Battle of Williamsburg on May 5, McClellan requested and obtained permission to create two additional corps; these became the V Corps, headed by Brig. Gen. Fitz - John Porter, and the VI Corps, headed by Brig. Gen. William B. Franklin, both personal favorites of his. After the Battle of Kernstown in the Valley on March 23, the administration had become paranoid about "Stonewall '' Jackson 's activities there and the potential danger they posed to Washington, D.C., and, to McClellan 's displeasure, detached Blenker 's division from the II Corps and sent it to West Virginia to serve under John C. Fremont 's command.
In June 1862, George McCall 's division from McDowell 's corps (the Pennsylvania Reserves Division) was sent down to the Peninsula and temporarily attached to the V Corps. In the Seven Days Battles, the V Corps was heavily engaged. The Pennsylvania Reserves, in particular, suffered heavy losses, including its division commander, who was captured by the Confederates, and two of its three brigadiers (John F. Reynolds, also captured, and George Meade, who was wounded). The III Corps fought at Glendale; however, the rest of the army was not heavily engaged in the week - long fight, aside from Slocum 's division of the VI Corps, which was sent to reinforce the V Corps at Gaines Mill.
The Army of the Potomac remained on the Virginia Peninsula until August, when it was recalled to Washington, D.C. Keyes and one of the two IV Corps divisions were left behind permanently as part of the newly - created Department of the James, while the other division, commanded by Brig. Gen. Darius Couch, was attached to the VI Corps.
During the Second Battle of Bull Run, the III and V Corps were temporarily attached to Pope 's army; the former suffered major losses and was sent back to Washington to rest and refit afterward, so it did not participate in the Maryland Campaign. The V Corps attracted controversy during the battle when Fitz - John Porter failed to execute Pope 's orders properly and attack Stonewall Jackson 's flank despite his protests that James Longstreet 's troops were blocking the way. Pope blamed the loss at Second Bull Run on Porter, who was court - martialed and spent much of his life attempting to get himself exonerated. Sigel 's command, now redesignated the XI Corps, also spent the Maryland Campaign in Washington resting and refitting.
In the Maryland Campaign, the Army of the Potomac had six corps. These were the I Corps, commanded by Joe Hooker after Irvin McDowell was removed from command; the II Corps, commanded by Edwin Sumner; the V Corps, headed by Fitz - John Porter; the VI Corps, headed by William Franklin; the IX Corps, headed by Ambrose Burnside and formerly the Department of North Carolina; and the XII Corps, headed by Nathaniel Banks until September 12 and then given to Joseph K. Mansfield just two days prior to Antietam, where he was killed in action.
At Antietam, the I and XII Corps were the first Union outfits to fight and both corps suffered enormous casualties (plus the loss of their commanders) so that they were down to near - division strength and their brigades at regimental strength after the battle was over. The II and IX Corps were also heavily engaged but the V and VI Corps largely stayed out of the battle.
When Burnside took over command of the army from McClellan in the fall, he formed the army into four Grand Divisions. The Right Grand Division, commanded by Edwin Sumner, comprised the II and IX Corps; the Center Grand Division, commanded by Joe Hooker, comprised the III and V Corps; and the Left Grand Division, commanded by William Franklin, comprised the I and VI Corps. In addition, the Reserve Grand Division, commanded by Franz Sigel, comprised the XI and XII Corps.
At Fredericksburg, the I Corps was commanded by John F. Reynolds, the II Corps by Darius Couch, the III Corps by George Stoneman, the V Corps by Daniel Butterfield, the VI Corps by William F. Smith, and the IX Corps by Orlando Willcox. The XI Corps was commanded by Franz Sigel and the XII Corps by Henry Slocum; however, neither corps was present at Fredericksburg, as the former did not arrive until after the battle was over and the latter was stationed at Harper 's Ferry.
Following Fredericksburg, Burnside was removed from command of the army and replaced by Joe Hooker. Hooker immediately abolished the Grand Divisions and also for the first time organized the cavalry into a proper corps led by George Stoneman, instead of having it ineffectually scattered among infantry divisions. Burnside and his old IX Corps departed for a command in the Western Theater. The I, II, and XII Corps retained the same commanders they had had during the Fredericksburg campaign, but the other corps got new commanders once again. Daniel Butterfield was chosen by Hooker as his new chief of staff, and command of the V Corps went to George Meade. Daniel Sickles received command of the III Corps and Oliver Howard the XI Corps after Franz Sigel resigned, refusing to serve under Hooker, his junior in rank. William Franklin also left the army for the same reason. Edwin Sumner, who was in his 60s and exhausted from campaigning, departed as well and died a few months later. William F. Smith resigned from command of the VI Corps, which was taken over by John Sedgwick. The I and V Corps were not significantly engaged during the Chancellorsville campaign.
During the Gettysburg Campaign, the army 's existing organization was largely retained, but a number of brigades composed of short - term nine - month regiments departed as their enlistment terms expired. Darius Couch resigned from command of the II Corps after Chancellorsville, the corps going to Winfield Hancock. The Pennsylvania Reserves Division, having spent several months in Washington D.C. resting and refitting from the 1862 campaigns, returned to the army, but was added to the V Corps rather than rejoining the I Corps. George Stoneman had been removed from command of the cavalry corps by Hooker after a poor performance during the Chancellorsville campaign and replaced by Alfred Pleasonton.
George Meade was suddenly appointed the commander of the army on June 28, a mere three days before the battle of Gettysburg. At the battle, the I, II, and III Corps suffered such severe losses that they were almost nonfunctional as fighting units at the end. One corps commander (Reynolds) was killed, another (Sickles) lost a leg and was permanently out of the war, and a third (Hancock) was badly wounded and never completely recovered from his injuries. The VI Corps had not been significantly engaged and was mostly used to plug up holes in the line during the battle.
For the remainder of the war, corps were added and subtracted from the army. IV Corps was broken up after the Peninsula Campaign, with its headquarters and 2nd Division left behind in Yorktown, while its 1st Division moved north, attached to the VI Corps, in the Maryland Campaign. Those parts of the IV Corps that remained on the Peninsula were reassigned to the Department of Virginia and disbanded on October 1, 1863. Those added to the Army of the Potomac were IX Corps, XI Corps (Sigel 's I Corps in the former Army of Virginia), XII Corps (Banks 's II Corps from the Army of Virginia), added in 1862; and the Cavalry Corps, created in 1863. Eight of these corps (seven infantry, one cavalry) served in the army during 1863, but due to attrition and transfers, the army was reorganized in March 1864 with only four corps: II, V, VI, and Cavalry. Of the original eight, I and III Corps were disbanded due to heavy casualties and their units combined into other corps. The XI and XII Corps were ordered to the West in late 1863 to support the Chattanooga Campaign, and while there were combined into the XX Corps, never returning to the East.
The IX Corps returned to the army in 1864, after being assigned to the West in 1863, and then served alongside, but not as part of, the Army of the Potomac from March to May 24, 1864. On that latter date, the IX Corps was formally added to the Army of the Potomac. Two divisions of the Cavalry Corps were transferred in August 1864 to Maj. Gen. Philip Sheridan 's Army of the Shenandoah, and the 2nd Division alone remained under Meade 's command. On March 26, 1865, that division was also assigned to Sheridan for the closing campaigns of the war.
† Major General John G. Parke took brief temporary command during Meade 's absences on four occasions during this period.
Lt. Gen. Ulysses S. Grant, general - in - chief of all Union armies, located his headquarters with the Army of the Potomac and provided operational direction to Meade from May 1864 to April 1865, but Meade retained command of the Army of the Potomac.
Below is the grand recapitulation of the losses sustained by the Army of the Potomac and the Army of the James from May 5, 1864 to April 9, 1865, compiled in the Adjutant - General 's Office, Washington:
|
where did why did i get married too get filmed at | Why Did I Get Married Too? - Wikipedia
Why Did I Get Married Too? is a 2010 American comedy - drama film produced by Lionsgate and Tyler Perry Studios and stars Janet Jackson, Tyler Perry, and Tasha Smith. It is the sequel to Why Did I Get Married? (2007). The film follows the interactions of four couples who undertake a week - long retreat to improve their relationships.
In the Bahamas for their annual reunion, four couples are eager to share news about their lives over the past year. But when one 's ex-husband arrives to break up her marriage and win her back, the others realize they are not immune to the challenges of love and fidelity.
A June 5, 2009, report stated that Janet Jackson would reprise her role as Patricia, making her the third cast member confirmed to return. Due to Michael Jackson 's sudden death, film production was halted for a short period of time, after which Jackson returned to continue with the project. On August 6, 2009, Jackson stated that she had finished filming her scenes.
On June 16, 2009, Tyler Perry confirmed that the entire cast from the first film would return for the sequel.
The film received mixed reviews from critics. Based on 43 reviews collected by Rotten Tomatoes, the film has an overall approval rating from critics of 28 %, with an average score of 4.5 / 10. By comparison, Metacritic calculated an average score of 44 out of 100, based on 12 reviews.
Wesley Morris of The Boston Globe gave the film 2½ out of 4 stars, claiming: "If Perry 's cinematic vision remains less than 20 / 20, his sagacity gets stronger by the movie. '' Lisa Schwarzbaum of Entertainment Weekly graded the film a D, writing in her review: "It 's a contradiction in terms to think of the phenomenally successful, prolific entertainment showman Tyler Perry as lazy, but there 's no other description for this particular product: Terribly shot and crudely assembled. '' Scott Wilson concurred, stating: "Why Did I Get Married Too? lacks the underlying goodwill of some of his better work and plays like a gift wrapped present to his detractors. '' Wilson graded the movie 1.5 / 5 stars.
Why Did I Get Married Too? grossed a total of $30.2 million in its opening weekend, placing second behind Clash of the Titans. Its opening weekend gross makes it the third highest opening for films created by Tyler Perry. As of June 6, 2010, the movie had grossed over $60 million domestically.
Janet Jackson recorded a song for the Why Did I Get Married Too? soundtrack entitled "Nothing '', which served as the soundtrack 's lead single. Cameron Rafati 's single "Battles '' was also featured in the film, as was the song "Still '' by the Norwegian singer Christel Alsos. The soundtrack for the film was released on So So Def Recordings, with distribution handled by Malaco Records.
NAACP Image Awards
|
convention on the elimination of all forms of discrimination | Convention on the Elimination of all Forms of Discrimination against women - wikipedia
The Convention on the Elimination of all Forms of Discrimination Against Women (CEDAW) is an international treaty adopted in 1979 by the United Nations General Assembly. Described as an international bill of rights for women, it was instituted on 3 September 1981 and has been ratified by 189 states. Over fifty countries that have ratified the Convention have done so subject to certain declarations, reservations, and objections, including 38 countries who rejected the enforcement article 29, which addresses means of settlement for disputes concerning the interpretation or application of the Convention. Australia 's declaration noted the limitations on central government power resulting from its federal constitutional system. The United States and Palau have signed, but not ratified the treaty. The Holy See, Iran, Somalia, Sudan and Tonga are not signatories to CEDAW.
The Convention has a similar format to the Convention on the Elimination of All Forms of Racial Discrimination, "both with regard to the scope of its substantive obligations and its international monitoring mechanisms ''. The Convention is structured in six parts with 30 articles total.
Article 1 defines discrimination against women in the following terms:
Any distinction, exclusion or restriction made on the basis of sex which has the effect or purpose of impairing or nullifying the recognition, enjoyment or exercise by women, irrespective of their marital status, on a basis of equality of men and women, of human rights and fundamental freedoms in the political, economic, social, cultural, civil or any other field.
Article 2 mandates that states parties ratifying the Convention declare intent to enshrine gender equality into their domestic legislation, repeal all discriminatory provisions in their laws, and enact new provisions to guard against discrimination against women. States ratifying the Convention must also establish tribunals and public institutions to guarantee women effective protection against discrimination, and take steps to eliminate all forms of discrimination practiced against women by individuals, organizations, and enterprises.
Article 3 requires states parties to guarantee basic human rights and fundamental freedoms to women "on a basis of equality with men '' through the "political, social, economic, and cultural fields. ''
Article 4 notes that "(a)doption... of special measures aimed at accelerating de facto equality between men and women shall not be considered discrimination. '' It adds that special protection for maternity is not regarded as gender discrimination.
Article 5 requires states parties to take measures to seek to eliminate prejudices and customs based on the idea of the inferiority or the superiority of one sex or on stereotyped roles for men and women. It also mandates the states parties "(t)o ensure... the recognition of the common responsibility of men and women in the upbringing and development of their children. ''
Article 6 obliges states parties to "take all appropriate measures, including legislation, to suppress all forms of trafficking in women and exploitation of prostitution of women. ''
Article 7 guarantees women equality in political and public life with a focus on equality in voting, participation in government, and participation in "non-governmental organizations and associations concerned with the public and political life of the country. ''
Article 8 provides that states parties will guarantee women 's equal "opportunity to represent their Government at the international level and to participate in the work of international organizations. ''
Article 9 mandates states parties to "grant women equal rights with men to acquire, change or retain their nationality '' and equal rights "with respect to the nationality of their children. ''
Article 10 necessitates equal opportunity in education for female students and encourages coeducation. It also provides equal access to athletics, scholarships and grants as well as requires "reduction in female students ' drop out rates. ''
Article 11 outlines the right to work for women as "an inalienable right of all human beings. '' It requires equal pay for equal work, the right to social security, paid leave and maternity leave "with pay or with comparable social benefits without loss of former employment, seniority or social allowances. '' Dismissal on the grounds of maternity, pregnancy or marital status shall be prohibited and subject to sanction.
Article 12 creates the obligation of states parties to "take all appropriate measures to eliminate discrimination against women in the field of health care in order to ensure... access to health care services, including those related to family planning. ''
Article 13 guarantees equality to women "in economic and social life, '' especially with respect to "the right to family benefits, the right to bank loans, mortgages and other forms of financial credit, and the right to participate in recreational activities, sports and all aspects of cultural life. ''
Article 14 provides protections for rural women and their special problems, ensuring the right of women to participate in development programs, "to have access to adequate health care facilities, '' "to participate in all community activities, '' "to have access to agricultural credit '' and "to enjoy adequate living conditions. ''
Article 15 obliges states parties to guarantee "women equality with men before the law, '' including "a legal capacity identical to that of men. '' It also accords "to men and women the same rights with regard to the law relating to the movement of persons and the freedom to choose their residence and domicile. ''
Article 16 prohibits "discrimination against women in all matters relating to marriage and family relations. '' In particular, it provides men and women with "the same right to enter into marriage, the same right freely to choose a spouse, '' "the same rights and responsibilities during marriage and at its dissolution, '' "the same rights and responsibilities as parents, '' "the same rights to decide freely and responsibly on the number and spacing of their children, '' "the same personal rights as husband and wife, including the right to choose a family name, a profession and an occupation '' "the same rights for both spouses in respect of the ownership, acquisition, management, administration, enjoyment and disposition of property, whether free of charge or for a valuable consideration. ''
Articles 17 - 24 describe the composition and procedures of the CEDAW Committee, such as its hierarchical structure and rules of procedure, the relationship between CEDAW and national and international legislation, and the obligation of states to take all steps necessary to implement CEDAW in full.
Articles 25 - 30 (Administration of CEDAW)
These articles describe the general administrative procedures concerning enforcement of CEDAW, its ratification, and the entering of reservations by concerned states.
Resolution 1325 10th anniversary events highlight use of CEDAW mechanisms
The 10th anniversary of Resolution 1325 in October 2010 highlighted the increasing demand for accountability to UN Security Council Resolution 1325 on Women, Peace and Security. Many expressed concern about the fact that only 22 Member States out of 192 had adopted national action plans. Women are still underrepresented, if not totally absent, in most official peace negotiations, and sexual violence in peacetime and in conflict continues to increase.
These realities emphasized the need to use external legal mechanisms to strengthen the implementation of SCR 1325, particularly CEDAW. The well - established mechanisms of CEDAW -- the Member States ' compliance reports and the civil society shadow reporting process -- were cited as possible instruments to ensure accountability.
Several regional and international meetings including the High Level Seminar "1325 in 2020: Looking Forward... Looking Back, '' organized by the African Center for the Constructive Resolution of Disputes, and the "Stockholm International Conference 10 years with 1325 -- What now? '' called for the use of CEDAW to improve 1325 implementation.
Intersection between SCR 1325 and CEDAW
While CEDAW and UN Security Council Resolutions 1325 and 1820 on Women, Peace and Security are important international instruments on their own, there is also an intersection among the three standards that can be used to enhance their implementation and impact.
Resolutions 1325 and 1820 broaden the scope of CEDAW application by clarifying its relevance to all parties in conflict, whereas CEDAW provides concrete strategic guidance for actions to be taken on the broad commitments outlined in the two Resolutions.
CEDAW is a global human rights treaty that should be incorporated into national law as the highest standard for women 's rights. It requires UN Member States that have ratified it (185 to date) to set in place mechanisms to fully realize women 's rights.
Resolution 1325 is an international law unanimously adopted by the Security Council that mandates UN Member States to engage women in all aspects of peace building, including ensuring women 's participation at all levels of decision - making on peace and security issues.
Resolution 1820 links sexual violence as a tactic of war with the maintenance of international peace and security. It also demands a comprehensive report from the UN Secretary General on implementation and strategies for improving information flow to the Security Council; and adoption of concrete protection and prevention measures to end sexual violence.
Resolutions 1325 and 1820, and CEDAW share the following agenda on women 's human rights and gender equality:
A General Comment from the CEDAW committee could strengthen women 's advocacy for the full implementation of Resolutions 1325 and 1820 at the country and community levels. Conversely, CEDAW 's relevance to conflict - affected areas will be underscored further by the two Resolutions. In other words, all three international instruments will reinforce each other and be much more effective if used together in leveraging women 's human rights.
The six UN member states that have not ratified or acceded to the convention are Iran, Palau, Somalia, Sudan, Tonga, and the United States.
The one UN non-member state that has not acceded to the convention is the Holy See / Vatican City.
The Republic of China (Taiwan) ratified the treaty in its legislature in 2007, but it is unrecognized by the United Nations and is a party to the treaty only unofficially.
The latest state to have acceded to the convention was South Sudan, on 30 April 2015.
Many reservations have been entered against certain articles of the Convention. There are also some reservations that are not specific to an article within the Convention but are rather general reservations to all aspects of the Convention that would violate a stated principle. For example, Mauritania made a reservation stating it approved the Convention "in each and every one of its parts which are not contrary to Islamic Sharia. '' A number of these reservations, especially those entered by Islamic states parties, are subject to much debate.
Article 28 of the Convention states that "a reservation incompatible with the object and purpose of the present Convention shall not be permitted. '' As a result, many states parties have entered objections to the reservations of other states parties. Specifically, many Nordic states parties were concerned that some of the reservations were "undermining the integrity of the text. '' Over the years, some states parties have withdrawn their reservations.
As of May 2015, sixty - two states parties have entered reservations against some part of the Convention. Twenty - four states parties have entered objections to at least one of these reservations. The most reserved article is Article 29, concerning dispute resolution and interpretation of the Convention, with thirty - nine reservations. Because reservations to Article 29 are expressly allowed by the Convention itself, these reservations were not very controversial. Article 16, concerning the equality of women in marriage and family life is subject to twenty - three reservations. The Committee, in General Recommendation No. 28, specifically stated that reservations to Article 2, concerning general non-discrimination, are impermissible. However, Article 2 has seventeen reservations.
The Committee on the Elimination of Discrimination Against Women is the United Nations (U.N.) treaty body that oversees the Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW). The formation of this committee was outlined in Article 17 of the CEDAW, which also established the rules, purpose, and operating procedures of the committee. Throughout its years of operation the committee has held multiple sessions to ensure the rules outlined in the CEDAW are being followed. Over time the practices of the committee have evolved due to an increased focus on women 's rights issues.
The Committee on the Elimination of Discrimination Against Women was formed on 3 September 1981 after the CEDAW received the 20 ratifications required for it to enter into force. Article 17 of the CEDAW established the committee in order to ensure that the provisions of the CEDAW were followed by the countries that had signed and agreed to be bound by it. The first regular session of the committee was held from 18 -- 22 October 1982. In this session the first officers of the committee were elected by simple majority, with Ms. L. Ider of Mongolia becoming chairperson. Other officers elected were three vice chairpersons: M. Caron of Canada, Z. Ilic of Yugoslavia and L. Mukayiranga of Rwanda. The final officer elected was D. P. Bernard of Guyana as rapporteur of the committee. During this session the committee also unanimously approved to adopt its rules of procedure.
The rules regarding where and when the committee can hold sessions are laid out in their rules of procedure.
The committee is allowed to hold as many meetings as are required to perform their duties effectively, with the states party to the CEDAW and the Secretary - General of the United Nations authorizing the number of regular sessions held. In addition, special sessions can be held at the request of either a state party to the convention or the majority of the members serving on the committee. Fifty - three sessions have been held to date, with the most recent taking place from 1 October 2012 to 19 October 2012. The first thirty - nine sessions were held at the United Nations headquarters building in New York City, with the fortieth session and alternating sessions following it held in the Palais des Nations in Geneva. During each of its regular sessions the committee hears reports from states party to the CEDAW on their progress in adhering to CEDAW and implementing its ideas in their countries. The committee also holds pre-sessional work groups to discuss the issues and questions that the committee should deal with during the following session.
Under article 18 of the CEDAW states must report to the committee on the progress they have made in implementing the CEDAW within their state. As most of the information the committee works with comes from these reports, guidelines have been developed to help states prepare accurate and useful reports. Initial reports discussing the current picture of discrimination against women in the reporting states are required to specifically deal with each article of the CEDAW, and consist of no more than one - hundred pages. States are required to prepare and present these initial reports within one year of ratifying the CEDAW. Periodic reports detailing the state 's progress in adhering to the articles of the CEDAW should be no more than seventy - five pages in length and should focus on the specific period of time since the state 's last report. States party to the CEDAW are typically required to provide periodic reports every four years, but if the committee is concerned about the situation in that state they can request a report at any time.
The committee chooses which reports to address by considering factors such as the amount of time the report has been pending, whether the report is initial or periodic (with more priority given to initial reports), and from which region the report originates. Eight states are invited to give their reports during each session and it is required a representative from the state is in attendance when the report is presented. The committee focuses on constructive dialogue when a report is presented, and appreciates careful time management on the part of the state presenting its report. Due to the high backlog of overdue reports the committee has encouraged states to combine all of their outstanding reports into one document, and sends reminders to states who have reports five years overdue. The CEDAW also requires that the committee provide an annual report that includes its activities, comments relating to the reports provided by states, information relating to the Optional Protocol of the CEDAW, and any other general suggestions or recommendations the committee has made. This report is given to the United Nations General Assembly through the Economic and Social Council. All reports, agendas and other official documents pertaining to the committee, including the reports provided by the states, are provided to the public unless otherwise decided by the committee.
Along with issuing its annual report and offering advice to reporting states, the committee has the ability to issue general recommendations that elaborate on its views of the obligations imposed by CEDAW. To date, the committee has issued thirty - two general recommendations, the latest dealing with the gender related dimensions of refugee status, asylum, nationality and statelessness of women. The recommendations issued by the committee in its first decade were short and dealt mainly with the content of states ' reports and reservations to the convention. Since 1991, however, recommendations have been focused on guiding states ' application of the CEDAW in specific situations. The formulation of a general recommendation begins with dialogue between the committee on the topic in the recommendation with various non-governmental organizations and other U.N. bodies. The recommendation is then drafted by a member of the committee and discussed and revised in the next session, and finally adopted in the following session.
For the first ten years the committee operated significantly differently from how it does now. The only forms of censure available to the committee under the CEDAW were its general recommendations and the concluding comments following a report. Due to the emergence of the Global Campaign for Women 's Human Rights in 1991, more attention was given to the CEDAW, reviving the committee. The committee made changes to the CEDAW that allowed it to meet more than once a year, and it has taken advantage of this by meeting at least twice a year since 1997. The committee originally only met for two weeks in its annual sessions, but that has now been changed to meeting multiple times a year in eighteen - day sessions. CEDAW also gained new complaint and inquiry proceedings allowing the committee to initiate inquiry proceedings if it believes a state is in severe violation of the articles of the CEDAW.
Despite evolving since the committee was first formed, members believe there are ways in which the committee can better meet the goals outlined in the CEDAW. One of the committee 's main goals moving forward is expanding its information base, allowing it to deal more effectively with issues that arise concerning the CEDAW. The committee is authorized in Article 22 of the CEDAW to invite specialized U.N. agencies such as the United Nations Development Programme to deliver reports discussing women 's rights issues in the state under discussion. Another method for gathering information is requesting reports from non-governmental organizations dealing with discrimination against women that are operating in the country under discussion. This is recommended to ensure that the committee is receiving the full, unbiased picture of affairs within the reporting state.
Another recommendation for improvement involves interpreting and clarifying the language used in the CEDAW in order to make the document as useful as it can be. A third improvement that has been suggested is improving the efficiency of the committee. Due to the backlog in reports faced by the committee it has been suggested that the government officials who prepare reports presented to the committee should be trained, in order to make all reports uniform and more easily processed. A final suggestion for improvement is the implementation of a right of petition in the CEDAW, allowing the committee to hear complaints from citizens of a state against the state, increasing the committee 's strength and direct impact on the problem of discrimination against women.
The official languages of the committee are English, Arabic, French, Russian, and Spanish, with any statement made in one of the official languages translated into the other four. A speaker who does not speak one of the official languages provides a translator. All formal decisions and documents issued by the committee are provided in each of the official languages. The original rules of procedure adopted by the committee did not include Arabic as an official language, but the rule was amended in the committees second session to include Arabic.
Twenty - three members serve on the committee, described as experts for their experience and expertise in women 's issues. The members are nominated by their national governments and elected through a secret ballot by states party to the convention. Upon winning the election and taking up their responsibilities the members of the committee recite the following statement, known as the solemn declaration, "I solemnly declare that I shall perform my duties and exercise powers as a member of the Committee on the Elimination of Discrimination Against Women honourably, faithfully, impartially and conscientiously ''. The members come from a wide range of occupations including doctors, lawyers, diplomats and educators, providing various viewpoints to the committee due to their diversity. Many members continue to hold full - time jobs outside the committee and receive little monetary payment for their work on the committee.
To ensure that the nationality of members encompasses all the diverse states that have signed the CEDAW, members are elected according to regions divided into Latin America and the Caribbean, Africa, Asia, Western Europe, and Eastern Europe. The members of the committee differ from those of other treaty bodies of the United Nations in that they have all been women, with only one exception. In the event a member of the committee is unable to continue serving on the committee before her term is up, the state that had nominated the resigning member shall nominate another expert from their country to fill her seat. Committee members and experts also attend an annual luncheon, hosted by the NGO Committee on the Status of Women, NY (NGO CSW / NY), where key issues are discussed and the efforts of the committee are honored.
Officers of the Committee
The officers of the committee are composed of a chairperson, three vice-chairpersons and a rapporteur. Officers of the committee are nominated by another member of the committee, as opposed to the members of the committee themselves, who are nominated by their governments. All officers are elected by majority vote to a two - year term of office, and remain eligible for re-election after their term expires. The chairperson 's duties include declaring a meeting to be open or closed, directing the discussion in a session, announcing decisions made by the committee, preparing agendas in consultation with the secretary - general, designating the members of pre-sessional working groups and representing the committee at United Nations meetings which the committee is invited to participate in. In the case the chairperson is unable to perform any of her duties, she designates one of the three vice-chairpersons to take over her role. If the chairperson fails to designate a vice-chairperson prior to her absence, then the vice-chairperson whose name comes first in English alphabetical order takes over. In the event an officer is unable to continue serving on the committee before her term expires, a new officer from the same region as the original officer shall be nominated, elected and will take over the vacated office. As of May 2015, the 23 members are:
The Optional Protocol to the Convention on the Elimination of All Forms of Discrimination against Women is a side - agreement to the Convention which allows its parties to recognise the competence of the Committee on the Elimination of Discrimination Against Women to consider complaints from individuals.
The Optional Protocol was adopted by the UN General Assembly on 6 October 1999 and entered into force on 22 December 2000. Currently it has 80 signatories and 109 parties.
Controversy around CEDAW comes from two opposite directions: social and religious conservatives, who claim that CEDAW seeks to impose a liberal, progressive, feminist standard on countries to the detriment of traditional values; and radical feminists, who are skeptical of the power, or even the desire, of CEDAW to radically transform societies and truly liberate women, and who claim that CEDAW adheres to a form of weak liberal feminism similar to that of other mainstream organizations.
|
what episode does naruto fight the ten tails | Naruto: Shippuden (season 17) - wikipedia
The episodes for the seventeenth season of the anime series Naruto: Shippuden are based on Part II of Masashi Kishimoto 's manga series. The season continues with the repentant Sasuke joining the allied shinobi forces to battle Madara and Obito. The episodes are directed by Hayato Date, and produced by Studio Pierrot and TV Tokyo.
The season aired from May to August 2014. The DVD collection was released on January 7, 2015 under the title of The Fourth Great Ninja War - The Return of Squad Seven (忍 界 大戦 ・ 第 七 班 再び, Ninkai Taisen - Dainanahan Futatabi).
The season contains three musical themes, including one opening and two endings. The opening theme, "Guren '' (紅蓮, "Crimson '') by DOES, is used from episode 362 to 372. The first ending theme, "FLAME '' by DISH / / is used from episode 362 to 366. The second ending theme, "Never Change '' by SHUN and Lyu: Lyu is used from episode 367 to 372.
|
who dies in game of thrones season 3 episode 9 | The Rains of Castamere - wikipedia
"The Rains of Castamere '' is the ninth episode of the third season of HBO 's fantasy television series Game of Thrones, and the 29th episode of the series. The episode was written by executive producers David Benioff and D.B. Weiss, and directed by David Nutter. It aired on June 2, 2013 (2013 - 06 - 02).
The episode is centered on the wedding of Edmure Tully and Roslin Frey, one of the most memorable events of the book series, commonly called "The Red Wedding '', during which Robb Stark and his bannermen are massacred. Other storylines include Bran Stark 's group having to separate, Jon Snow 's loyalties being tested, and Daenerys plotting her invasion of the city of Yunkai.
This episode earned Benioff and Weiss a Primetime Emmy Award for Outstanding Writing for a Drama Series nomination.
North of the Wall, Sam and Gilly continue their march south. Sam tells Gilly he plans for them to cross the Wall using the entrance at the Nightfort, an abandoned castle along the Wall.
South of the Wall, Bran and his group take shelter in an abandoned mill. Nearby, Jon and the wildling party raid an elderly horse breeder 's home, taking his horses and gold while the old man flees. While in the mill, Bran and Jojen Reed discuss how they plan to cross the Wall, before Meera spots the old horse breeder riding nearby. After the old man is captured by the wildlings, Hodor -- scared by the thunder -- begins yelling, which threatens to give away their location to the wildlings. Bran uses his warg abilities to enter Hodor 's mind and knocks him out.
Outside, Tormund moves to kill the old man, but Orell tells him to have Jon do it instead to prove his loyalty. Jon is ultimately unable to kill the innocent man, and instead Ygritte kills the man with an arrow. Realizing where Jon 's loyalties lie, Tormund orders his men to kill him, but Jon manages to defeat them. As Ygritte moves to defend him, Jon deliberately knocks her to the ground, allowing Tormund to hold her down and prevent her from getting killed, while he battles with Orell. Bran enters the mind of Summer, his direwolf, and aids Jon. Jon kills Orell while the wolves hold off the other wildlings, and is then able to steal a horse and escape, leaving Ygritte and heading back to the Wall. At night, Bran asks Osha to take Rickon to Last Hearth, the home of the Umber family, and they depart shortly after.
Planning their invasion of Yunkai, Daario tells Daenerys and her knights about a rear gate to the city, through which they can sneak in and open the main gate for her army. Ser Jorah is suspicious of Daario and his plan, but comes around when Daenerys seeks Grey Worm 's opinion. When night falls, Daario, Jorah, and Grey Worm arrive at the gate. Daario enters ahead of them, posing as a still loyal Second Son commander. Shortly after being let inside the city, he signals Jorah and Grey Worm to follow him. Soon, they are ambushed by a group of Yunkai 's slave soldiers, and though largely outnumbered, manage to kill them and accomplish the mission. The group returns to Daenerys, and tells her that she is now in control of the city.
At camp, Catelyn counsels her son Robb, the King in the North, about his planned alliance with Lord Walder Frey and his planned assault on Casterly Rock, the homeland of the Lannisters. The Stark host soon arrives at the Twins, castle homeland of the Freys, where they are given bread and salt, a symbol of the "guest right '': a guarantee of safety when under another lord 's roof. Robb makes an apology to both the sarcastic Walder Frey and his daughters. Walder accepts the apology but insists on inspecting Talisa, the woman for whom Robb broke his vow. Nearby, Arya, though still a captive of Sandor Clegane, journeys to the Twins to reunite with her mother and brother. When they come upon a trader and his cart, Clegane knocks him out and moves to kill him, but Arya manages to dissuade him, and he instead steals the cart of food.
At night, Walder walks his daughter Roslin down the aisle to her future husband Edmure Tully, who is pleasantly surprised by her beauty. They are married shortly after, and the celebration begins. At the feast, Walder calls for the bedding ceremony, and the couple are taken to their chamber. After they leave, Lothar Frey closes the banquet hall doors, and the Frey bards begin playing "The Rains of Castamere '', a Lannister cautionary song, both of which arouse Catelyn 's suspicions. Using the food cart as their reason for being at the Twins, the Hound and Arya arrive at the wedding. They are turned away at the gates, but Arya sneaks in.
Catelyn notices Roose Bolton wearing chainmail under his robes which confirms Catelyn 's suspicions that they have been betrayed. Just as Walder signals his men to attack the Starks ' men, Catelyn tries to warn Robb, but before he can react, Lothar repeatedly stabs the pregnant Talisa in the stomach, killing her and her unborn child. As he tries to draw his sword, Robb is shot by crossbows, and the massacre of his bannermen begins. Arya, having snuck past the gate, witnesses Frey men kill Stark soldiers and Robb 's direwolf, Grey Wind. She is saved by the Hound, who knocks her unconscious and carries her out of the castle. Catelyn, although wounded by a crossbow bolt, holds Walder 's young wife, Joyeuse, hostage with a knife and demands that Robb be allowed to leave. Walder refuses, and Roose Bolton stabs Robb in the heart, delivering Jaime 's message from Harrenhal, "The Lannisters send their regards. '' Catelyn screams and kills Joyeuse in retaliation, before Frey 's son Black Walder cuts Catelyn 's throat.
"The Rains of Castamere '' was written by executive producers David Benioff and D.B. Weiss, based on George R.R. Martin 's original work from his novel A Storm of Swords. The episode adapts content from chapters 41 to 43 and 50 to 53 (Bran III, Jon V, Daenerys IV, Catelyn VI, Arya X, Catelyn VII, and Arya XI).
The episode includes one of the most important plot turns of the series: the betrayal and assassination of the Stark forces during a marriage ceremony in what came to be known as the "Red Wedding ''. The event culminates in Roose Bolton delivering Jaime Lannister 's message from "The Bear and the Maiden Fair '', before killing Robb. This tragic turn of events had a profound impact on Benioff and Weiss in their first read of the novels and it was the scene that convinced them to attempt to obtain the rights for a television series.
George R.R. Martin conceived The Red Wedding during the earliest stages of the planning of his saga, when he was envisioning a trilogy with The Red Wedding as one of the climactic events at the end of the first of the three books. Martin was inspired by a couple of events in Scottish history. One of them was the 15th century historical event known as the "Black Dinner '', where the Scottish king invited the chieftains of the powerful Clan Douglas to a feast at Edinburgh Castle. A black bull 's head, the symbol of death, was served as the last course of the dinner while a single drum was playing in the background, and the Douglases were murdered. Another event from which the author drew inspiration was the 1692 Massacre of Glencoe, where Clan MacDonald hosted the Campbell Clan who killed thirty - eight of their hosts overnight.
Martin has said The Red Wedding was the hardest thing he has ever written. He explained that he always tries to put himself in the skin of his characters when writing from their perspective, and develops bonds with them. He even felt attached to the minor characters killed during the massacre. It was so painful for him that he skipped the chapter and continued writing, and only when the rest of the book was finished, he "forced himself '' to come back to the dreaded scene. In 2012, at ComicCon he even joked that "he will visit a country with no television when the episode goes on air ''.
Martin also said he killed off Robb because he believed the audience would assume that the story was about Ned Stark 's heir avenging his death and wished to keep them guessing. He later suggested Talisa -- whose counterpart Jeyne Westerling was not killed in the books -- died so Robb 's heir could not avenge his death.
Will Champion, the drummer and backing vocalist of the band Coldplay, has a cameo appearance as one of the musicians who play at the wedding.
"The Rains of Castamere '' premiered to 5.22 million viewers and received a 2.8 ratings in adults 18 -- 49. The second airing was viewed by 1.08 million people, bringing total viewership for the night to 6.30 million. In the United Kingdom, the episode was viewed by 1.013 million viewers, making it the highest - rated broadcast that week. It also received 0.112 million timeshift viewers.
The episode was widely praised by critics and cited as one of the best of the series. Rotten Tomatoes, a prominent review aggregator, surveyed 21 reviews of the installment and judged 100 % of them to be positive with an average score of 9.9 out of 10. The website 's critical consensus reads, "The most unforgettable episode of Game of Thrones thus far, ' The Rains of Castamere ' (or as it shall forever be known, ' The Red Wedding ') packs a dramatic wallop that feels as exquisitely shocking as it does ultimately inevitable. '' The majority of the comments were directed at the massacre at the end of the episode, where praise was especially given to Michelle Fairley 's performance, leading to the disappointment of many critics when she was not nominated for the Primetime Emmy Award for Outstanding Supporting Actress in a Drama Series for the 65th Primetime Emmy Awards. IGN 's Matt Fowler gave the episode a perfect 10 / 10, calling it "an exquisitely awful event that managed to out - do the unpredictable and horrifying death of Ned Stark back in Season 1 ''. Fowler also said he believed that the episode 's depiction of the Red Wedding was more powerful than its depiction in A Song of Ice and Fire.
Writing for The A.V. Club, both David Sims and Todd VanDerWerff gave the episode an "A '' grade. Sims (writing for people who have not read the novels) expressed shock at the deaths of several main characters, writing, "I do n't think I 've really processed what I just watched ''. VanDerWerff, who reviews the episodes for people who have read the novels, wrote "If (the reader) does n't terribly want to deal with the thought of the deaths of Catelyn and Robb, well, he or she can read that much more quickly. Or he or she can read that much more slowly if there 's a need to process the emotions more fully. On TV, you ca n't really do that. '' Reviewing for Forbes, Erik Kain called the episode "one of the best episodes of HBO 's dark drama yet '', and noted "there was a deeper sense of tragedy knowing (Robb) also lost his unborn child ''. Sean Collins of the Rolling Stone also praised the episode, and commented on the unusual step the show took in ending one of its central conflicts. Sarah Hughes of The Guardian highlighted the decision to kill Talisa, writing that her "heartbreaking end was unbearable ''.
The episode was also notable for the intense and emotional response it pulled from viewers, many of whom were unaware of what was about to transpire and had their reactions filmed by people who had read the book on which it was based. This led to George R.R. Martin giving his personal analysis of the reactions, which he stated were on par with the responses he received from readers of A Storm of Swords.
This episode received a nomination for the Primetime Emmy Award for Outstanding Writing for a Drama Series for the 65th Primetime Emmy Awards. It also won the 2014 Hugo Award for Best Dramatic Presentation, Short Form.
|
who sings just call me angel of the morning | Angel of the Morning - wikipedia
"Angel of the Morning '' is a popular song, written and composed by Chip Taylor, that has been recorded numerous times by, or has been a hit single for, various artists including Evie Sands, Merrilee Rush, Juice Newton, Nina Simone, P.P. Arnold, Olivia Newton - John, The Pretenders / Chrissie Hynde, Dusty Springfield, Mary Mason, Melba Montgomery, Vagiant, Billie Davis, Bonnie Tyler, Rita Wilson, The New Seekers, Skeeter Davis, Crystal Gayle.
Written and composed by New York City - born songwriter Chip Taylor, actor Jonathan "Jon '' Voight 's brother, "Angel of the Morning '' was originally offered to Connie Francis to sing, but she turned it down because she thought that it was too risqué for her career. The song 's narrator describes her feelings about a one - night stand: "If morning 's echo says we 've sinned, well, it was what I wanted now. ''
Taylor produced a recording of the song with Evie Sands, but the financial straits of Cameo - Parkway Records, which had Sands on their roster, reportedly prevented either that version 's release or its distribution.
Other early recordings of the song were made in 1967 by Danny Michaels for Lee Hazlewood 's LHI label and by UK vocalist Billie Davis.
The song finally became a hit in 1968 through a recording by Merrilee Rush, made that January at American Sound Studios in Memphis, with Chips Moman and Tommy Cogbill producing. Rush had come to Memphis through the group she fronted, the Turnabouts, being the opening act for a Paul Revere and the Raiders tour. While in Memphis, the Raiders recorded the album Going to Memphis at American Sound Studios, an association which led to Rush 's discovery by Tommy Cogbill, who had been hoping to find the right voice for "Angel of the Morning '' -- he had kept a tape of the demo of that song constantly in his pocket for several months.
Rush recorded the song and the tracks which would comprise her Angel of the Morning album with the American Sound house band, even though the single and the album would be credited to the group "Merrilee Rush & the Turnabouts. ''
The single version was released in February 1968, and reached the Top 10 on the Billboard Hot 100 that June, peaking at No. 7. It was a No. 1 hit in Canada, Australia and New Zealand, and also gave Rush a hit in the Netherlands (No. 4). The song earned Rush a Grammy nomination for Best Contemporary - Pop Vocal Performance, Female.
Rush recorded a new version of the song for her 1977 eponymous album release. (Rush 's version of "Angel of the Morning '' would be featured on the soundtrack of the 1999 film Girl, Interrupted, whose time frame is 1967 and 1968, in which author - composer Chip Taylor 's niece Angelina Jolie had a starring role.)
In the United Kingdom, where Rush 's version stalled at No. 55, a rendition by P.P. Arnold, who had sung background on the 1967 Billie Davis version, reached No. 29 in August 1968.
In 1970 a rendition by Connie Eaton reached No. 34 on the Billboard C&W charts.
In 1977, Mary Mason also had a UK Top 30 hit with her version, which was actually a medley of two Chip Taylor songs, "Angel of the Morning '' and "Any Way That You Want Me, '' reaching No. 27.
Also in 1977 the British act Guys ' n ' Dolls had a hit in the Netherlands with the song, and their version reached No. 11 on the Dutch charts.
In 1978 a release by Melba Montgomery reached No. 22 on the Billboard C&W chart.
The highest - charting and best - selling version in the United States was recorded and released in 1981 by country - rock singer Juice Newton for her album Juice. Newton re-interpreted the song at the suggestion of Steve Meyer, who promoted Capitol Records singles and albums to radio stations and felt a version of "Angel of the Morning '' by Newton would be a strong candidate for airplay. Newton later said that she would never have thought of recording "Angel of the Morning '' herself, even though she immediately recognized the song when Meyer played it for her: "I (had n't been) really aware of that song because... when (it) was popular I was listening to folk music and R&B and not Pop, and that was a very Pop song. ''
Newton 's version reached No. 4 on the Billboard Hot 100, No. 22 on the Billboard country music chart, and spent three weeks at No. 1 on the Billboard adult contemporary chart in April of that year. The recording also earned Newton a Grammy nomination, in the same category as Rush 's 1968 hit. More than 1 million units of Newton 's single were sold in the United States, and the single reached the Top 10 in a number of other countries, including Canada and Australia. Notably, Newton 's video for "Angel of the Morning '' was the first country music video aired on MTV; it first aired in 1981. In the UK, this recording reached No. 43 on the UK Singles Chart, marking the song 's third appearance on that chart without becoming a truly major hit. (Newton recorded the song again in 1998 for her The Trouble with Angels album.)
The song "Angel, '' released by reggae artist Shaggy, heavily samples "Angel of the Morning, '' using the melody, but with different words, for the sung refrain. It reached # 1 on the Billboard Hot 100 for the week ending March 31, 2001.
Swedish singer Jill Johnson released "Angel of the Morning, '' with lyrics in English, in 2007 from her album of cover versions, Music Row. This version peaked at No. 30 at the Swedish singles chart.
A number of non-English versions of "Angel of the Morning '' have also been recorded.
Juice Newton 's version is heard during Drew Barrymore 's first scene in the film Charlie 's Angels, in the film Charlie Wilson 's War (in which it is also sung by Emily Blunt), the opening titles of Deadpool, and the ending of The Meddler. It is also featured in Season 1 of HBO 's True Detective.
The Toyota Highlander "Kid Cave '' commercial, aired from late 2010, features a young boy who is embarrassed by his parents 's singing of the song while he is riding with them in a car.
The song features in an episode of ABC 's sitcom Modern Family, "Regrets Only '' (Series / Season 2, Ep 16), when Gloria, portrayed by Sofia Vergara, is singing along to it on a karaoke machine. A karaoke version of the song is also featured in the second - season finale of the HBO series The Leftovers.
A parody version of the song is also featured in Family Guy, with Peter Griffin portraying himself as Deadpool.
|
when did trail of tears start and end | Trail of Tears - wikipedia
Deaths: Cherokee (4,000), Creek, Seminole (3,000 in the Second Seminole War, 1835 -- 1842), Chickasaw (3,500)
The Trail of Tears was a series of forced removals of Native American nations from their ancestral homelands in the Southeastern United States to an area west of the Mississippi River that had been designated as Indian Territory. The forced relocations were carried out by various government authorities following the passage of the Indian Removal Act in 1830. The relocated people suffered from exposure, disease, and starvation while en route, and more than four thousand died before reaching their various destinations. The removal included members of the Cherokee, Muscogee (Creek), Seminole, Chickasaw, and Choctaw nations. The phrase "Trail of Tears '' originated from a description of the removal of the Cherokee Nation in 1838.
Between 1830 and 1850, the Chickasaw, Choctaw, Creek, Seminole, and Cherokee people (including mixed - race and black freedmen and slaves who lived among them) were forcibly removed from their traditional lands in the Southeastern United States, and relocated farther west. Those Native Americans that were relocated were forced to march to their destinations by state and local militias.
The Cherokee removal in 1838 (the last forced removal east of the Mississippi) was brought on by the discovery of gold near Dahlonega, Georgia in 1828, resulting in the Georgia Gold Rush. Approximately 2,000 -- 6,000 of the 16,543 relocated Cherokee perished along the way.
In 1830, a group of Indians collectively referred to as the Five Civilized Tribes, the Cherokee, Chickasaw, Choctaw, Muscogee, and Seminole, were living as autonomous nations in what would be later called the American Deep South. The process of cultural transformation, as proposed by George Washington and Henry Knox, was gaining momentum, especially among the Cherokee and Choctaw.
American settlers had been pressuring the federal government to remove Indians from the Southeast; many settlers were encroaching on Indian lands, while others wanted more land made available to white settlers. Although the effort was vehemently opposed by many, including U.S. Congressman Davy Crockett of Tennessee, President Andrew Jackson was able to gain Congressional passage of the Indian Removal Act of 1830, which authorized the government to extinguish Indian title to lands in the Southeast.
In 1831, the Choctaw became the first Nation to be removed, and their removal served as the model for all future relocations. After two wars, many Seminoles were removed in 1832. The Creek removal followed in 1834, the Chickasaw in 1837, and lastly the Cherokee in 1838. Many Indians remained in their ancestral homelands; some Choctaw are found in Mississippi, Creek in Alabama and Florida, Cherokee in North Carolina, and Seminole in Florida; a small group had moved to the Everglades and were never defeated by the United States government. A limited number of non-Indians, including some of African descent (some as slaves, and others as spouses or freedmen), also accompanied the Indians on the trek westward. By 1837, 46,000 Indians from the southeastern states had been removed from their homelands, thereby opening 25 million acres (100,000 km²) for predominantly white settlement.
Prior to 1830, the fixed boundaries of these autonomous tribal nations, comprising large areas of the United States, were subject to continual cession and annexation, in part due to pressure from squatters and the threat of military force in the newly declared U.S. territories -- federally administered regions whose boundaries supervened upon the Native treaty claims. As these territories became U.S. states, state governments sought to dissolve the boundaries of the Indian nations within their borders, which were independent of state jurisdiction, and to expropriate the land therein. These pressures were exacerbated by U.S. population growth and the expansion of slavery in the South, with the rapid development of cotton cultivation in the uplands following the invention of the cotton gin.
The removals, conducted under Presidents Andrew Jackson and Martin Van Buren, followed the Indian Removal Act of 1830. The Act provided the President with powers to exchange land with Native tribes and provide infrastructure improvements on the existing lands. The law also gave the president power to pay for transportation costs to the West, should tribes choose to relocate. The law did not, however, allow the President to force tribes to move West without a mutually agreed - upon treaty.
In the years following the Act, the Cherokee filed several lawsuits regarding conflicts with the state of Georgia. Some of these cases reached the Supreme Court, the most influential being Worcester v. Georgia (1832). Samuel Worcester and other non-Indians were convicted by Georgia law for residing in Cherokee territory in the state of Georgia, without a license. Worcester was sentenced to prison for four years and appealed the ruling, arguing that this sentence violated treaties made between Indian nations and the United States federal government by imposing state laws on Cherokee lands. The Court ruled in Worcester 's favor, declaring that the Cherokee Nation was subject only to federal law and that the Supremacy Clause barred legislative interference by the state of Georgia. Chief Justice Marshall argued, "The Cherokee nation, then, is a distinct community occupying its own territory in which the laws of Georgia can have no force. The whole intercourse between the United States and this Nation, is, by our constitution and laws, vested in the government of the United States. ''
Andrew Jackson did not listen to the Supreme Court mandate barring Georgia from intruding on Cherokee lands. He feared that enforcement would lead to open warfare between federal troops and the Georgia militia, which would compound the ongoing crisis in South Carolina and lead to a broader civil war. Instead, he vigorously negotiated a land exchange treaty with the Cherokee. Political opponents Henry Clay and John Quincy Adams, who supported the Worcester decision, were outraged by Jackson 's refusal to uphold Cherokee claims against the state of Georgia. Ralph Waldo Emerson wrote an account of Cherokee assimilation into the American culture, declaring his support of the Worcester decision.
Jackson chose to continue with Indian removal, and negotiated The Treaty of New Echota, on December 29, 1835, which granted Cherokee Indians two years to move to Indian Territory (modern Oklahoma). Only a fraction of the Cherokees left voluntarily. The U.S. government, with assistance from state militias, forced most of the remaining Cherokees west in 1838. The Cherokees were temporarily remanded in camps in eastern Tennessee. In November, the Cherokee were broken into groups of around 1,000 each and began the journey west. They endured heavy rains, snow, and freezing temperatures.
When the Cherokee negotiated the Treaty of New Echota, they exchanged all their land east of the Mississippi for land in modern Oklahoma and a $5 million payment from the federal government. Many Cherokee felt betrayed that their leadership accepted the deal, and over 16,000 Cherokee signed a petition to prevent the passage of the treaty. By the end of the decade in 1840, tens of thousands of Cherokee and other tribes had been removed from their land east of the Mississippi River. The Creek, Choctaw, Seminole, and Chickasaw were also relocated under the Indian Removal Act of 1830. One Choctaw leader portrayed the removal as "A Trail of Tears and Deaths '', a devastating event that removed most of the Native population of the southeastern United States from their traditional homelands.
The latter forced relocations have sometimes been referred to as "death marches '', in particular with reference to the Cherokee march across the Midwest in 1838, which occurred on a predominantly land route.
Indians who had the means initially provided for their own removal. Contingents that were led by conductors from the U.S. Army included those led by Edward Deas, who was claimed to be a sympathizer for the Cherokee plight. The largest death toll from the Cherokee forced relocation comes from the period after the May 23, 1838 deadline. This was at the point when the remaining Cherokee were rounded into camps and pressed into oversized detachments, often over 700 in size (larger than the populations of Little Rock or Memphis at that time). Communicable diseases spread quickly through these closely quartered groups, killing many. These contingents were among the last to move, but following the same routes the others had taken; the areas they were going through had been depleted of supplies due to the vast numbers that had gone before them. The marchers were subject to extortion and violence along the route. In addition, these final contingents were forced to set out during the hottest and coldest months of the year, killing many. Exposure to the elements, disease and starvation, harassment by local frontiersmen, and insufficient rations similarly killed up to one - third of the Choctaw and other nations on the march.
There exists some debate among historians and the affected tribes as to whether the term "Trail of Tears '' should be used to refer to the entire history of forced relocations from the United States east of the Mississippi into Indian Territory (as was the stated U.S. policy), or to the Five Tribes described above, to the route of the land march specifically, or to specific marches in which the remaining holdouts from each area were rounded up.
The territorial boundaries claimed as sovereign and controlled by the Indian nations living in what were then known as the Indian Territories -- the portion of the early United States west of the Mississippi River not yet claimed or allotted to become Oklahoma -- were fixed and determined by national treaties with the United States federal government. These recognized the tribal governments as dependent but internally sovereign, or autonomous nations under the sole jurisdiction of the federal government.
While retaining their tribal governance, which included a constitution or official council in tribes such as the Iroquois and Cherokee, many portions of the southeastern Indian nations had become partially or completely economically integrated into the economy of the region. This included the plantation economy in states such as Georgia, and the possession of slaves. These slaves were also forcibly relocated during the process of removal. A similar process had occurred earlier in the territories controlled by the Confederacy of the Six Nations in what is now upstate New York prior to the British invasion and subsequent U.S. annexation of the Iroquois nation.
Under the history of U.S. treaty law, the territorial boundaries claimed by federally recognized tribes received the same status under which the Southeastern tribal claims were recognized; until the following establishment of reservations of land, determined by the federal government, which were ceded to the remaining tribes by de jure treaty, in a process that often entailed forced relocation. The establishment of the Indian Territory and the extinguishment of Indian land claims east of the Mississippi anticipated the establishment of the U.S. Indian reservation system. It was imposed on remaining Indian lands later in the 19th century.
The statutory argument for Indian sovereignty persisted until the Supreme Court of the United States ruled in Cherokee Nation v. Georgia (1831), that (e.g.) the Cherokee were not a sovereign and independent nation, and therefore not entitled to a hearing before the court. However, in Worcester v. Georgia (1832), the court re-established limited internal sovereignty under the sole jurisdiction of the federal government, in a ruling that both opposed the subsequent forced relocation and set the basis for modern U.S. case law.
While the latter ruling was defied by Jackson, the actions of the Jackson administration were not isolated because state and federal officials had violated treaties without consequence, often attributed to military exigency, as the members of individual Indian nations were not automatically United States citizens and were rarely given standing in any U.S. court.
Jackson 's involvement in what became known as the Trail of Tears cannot be ignored. In a speech regarding Indian removal, Jackson said, "It will separate the Indians from immediate contact with settlements of whites; free them from the power of the States; enable them to pursue happiness in their own way and under their own rude institutions; will retard the progress of decay, which is lessening their numbers, and perhaps cause them gradually, under the protection of the Government and through the influence of good counsels, to cast off their savage habits and become an interesting, civilized, and Christian community. '' According to Jackson, the move would be nothing but beneficial for all parties. His point of view garnered support from many Americans, many of whom would benefit economically from the removal.
This was compounded by the fact that while citizenship tests existed for Indians living in newly annexed areas before and after forced relocation, individual U.S. states did not recognize tribal land claims, only individual title under State law, and distinguished between the rights of white and non-white citizens, who often had limited standing in court; and Indian removal was carried out under U.S. military jurisdiction, often by state militias. As a result, individual Indians who could prove U.S. citizenship were nevertheless displaced from newly annexed areas. The military actions and subsequent treaties enacted by Jackson 's and Martin Van Buren 's administrations pursuant to the 1830 law, which Tennessee Congressman Davy Crockett had unsuccessfully voted against, are widely considered to have directly caused the expulsion or death of a substantial part of the Indian population then living in the southeastern United States.
The Choctaw nation occupied large portions of what are now the U.S. states of Alabama, Mississippi, and Louisiana. After a series of treaties starting in 1801, the Choctaw nation was reduced to 11,000,000 acres (45,000 km²). The Treaty of Dancing Rabbit Creek ceded the remaining country to the United States and was ratified in early 1831. The removals were only agreed to after a provision in the Treaty of Dancing Rabbit Creek allowed some Choctaw to remain. George W. Harkins wrote to the citizens of the United States before the removals were to commence:
It is with considerable diffidence that I attempt to address the American people, knowing and feeling sensibly my incompetency; and believing that your highly and well improved minds would not be well entertained by the address of a Choctaw. But having determined to emigrate west of the Mississippi river this fall, I have thought proper in bidding you farewell to make a few remarks expressive of my views, and the feelings that actuate me on the subject of our removal... We as Choctaws rather chose to suffer and be free, than live under the degrading influence of laws, which our voice could not be heard in their formation.
United States Secretary of War Lewis Cass appointed George Gaines to manage the removals. Gaines decided to remove Choctaws in three phases starting in 1831 and ending in 1833. The first was to begin on November 1, 1831 with groups meeting at Memphis and Vicksburg. A harsh winter would batter the emigrants with flash floods, sleet, and snow. Initially the Choctaws were to be transported by wagon but floods halted them. With food running out, the residents of Vicksburg and Memphis were concerned. Five steamboats (the Walter Scott, the Brandywine, the Reindeer, the Talma, and the Cleopatra) would ferry Choctaws to their river - based destinations. The Memphis group traveled up the Arkansas for about 60 miles (100 km) to Arkansas Post. There the temperature stayed below freezing for almost a week with the rivers clogged with ice, so there could be no travel for weeks. Food rationing consisted of a handful of boiled corn, one turnip, and two cups of heated water per day. Forty government wagons were sent to Arkansas Post to transport them to Little Rock. When they reached Little Rock, a Choctaw chief referred to their trek as a "trail of tears and death. '' The Vicksburg group was led by an incompetent guide and was lost in the Lake Providence swamps.
Alexis de Tocqueville, the French philosopher, witnessed the Choctaw removals while in Memphis, Tennessee in 1831:
In the whole scene there was an air of ruin and destruction, something which betrayed a final and irrevocable adieu; one could n't watch without feeling one 's heart wrung. The Indians were tranquil, but sombre and taciturn. There was one who could speak English and of whom I asked why the Chactas were leaving their country. "To be free, '' he answered, could never get any other reason out of him. We... watch the expulsion... of one of the most celebrated and ancient American peoples.
Nearly 17,000 Choctaws made the move to what would be called Indian Territory and then later Oklahoma. About 2,500 -- 6,000 died along the trail of tears. Approximately 5,000 -- 6,000 Choctaws remained in Mississippi in 1831 after the initial removal efforts. The Choctaws who chose to remain in newly formed Mississippi were subject to legal conflict, harassment, and intimidation. The Choctaws "have had our habitations torn down and burned, our fences destroyed, cattle turned into our fields and we ourselves have been scourged, manacled, fettered and otherwise personally abused, until by such treatment some of our best men have died. '' The Choctaws in Mississippi were later reformed as the Mississippi Band of Choctaw Indians and the removed Choctaws became the Choctaw Nation of Oklahoma. The Choctaws were the first to sign a removal treaty presented by the federal government. President Andrew Jackson wanted strong negotiations with the Choctaws in Mississippi, and the Choctaws seemed much more cooperative than Andrew Jackson had imagined. When commissioners and Choctaws came to negotiation agreements it was said the United States would bear the expense of moving their homes and that they had to be removed within two and a half years of the signed treaty.
The U.S. acquired Florida from Spain via the Adams -- Onís Treaty and took possession in 1821. In 1832 the Seminoles were called to a meeting at Payne 's Landing on the Ocklawaha River. The treaty negotiated called for the Seminoles to move west, if the land were found to be suitable. They were to be settled on the Creek reservation and become part of the Creek tribe, who considered them deserters; some of the Seminoles had been derived from Creek bands but also from other tribes. Those among the tribe who once were members of Creek bands did not wish to move west to where they were certain that they would meet death for leaving the main band of Creek Indians. The delegation of seven chiefs who were to inspect the new reservation did not leave Florida until October 1832. After touring the area for several months and conferring with the Creeks who had already settled there, the seven chiefs signed a statement on March 28, 1833 that the new land was acceptable. Upon their return to Florida, however, most of the chiefs renounced the statement, claiming that they had not signed it, or that they had been forced to sign it, and in any case, that they did not have the power to decide for all the tribes and bands that resided on the reservation. The villages in the area of the Apalachicola River were more easily persuaded, however, and went west in 1834. On December 28, 1835 a group of Seminoles and blacks ambushed a U.S. Army company marching from Fort Brooke in Tampa to Fort King in Ocala, killing all but three of the 110 army troops. This came to be known as the Dade Massacre.
As the realization that the Seminoles would resist relocation sank in, Florida began preparing for war. The St. Augustine Militia asked the War Department for the loan of 500 muskets. Five hundred volunteers were mobilized under Brig. Gen. Richard K. Call. Indian war parties raided farms and settlements, and families fled to forts, large towns, or out of the territory altogether. A war party led by Osceola captured a Florida militia supply train, killing eight of its guards and wounding six others. Most of the goods taken were recovered by the militia in another fight a few days later. Sugar plantations along the Atlantic coast south of St. Augustine were destroyed, with many of the slaves on the plantations joining the Seminoles.
Other warchiefs such as Halleck Tustenuggee, Jumper, and Black Seminoles Abraham and John Horse continued the Seminole resistance against the army. The war ended, after a full decade of fighting, in 1842. The U.S. government is estimated to have spent about $20,000,000 on the war, at the time an astronomical sum, and equal to $496,344,828 today. Many Indians were forcibly exiled to Creek lands west of the Mississippi; others retreated into the Everglades. In the end, the government gave up trying to subjugate the Seminole in their Everglades redoubts and left fewer than 100 Seminoles in peace. However, other scholars state that at least several hundred Seminoles remained in the Everglades after the Seminole Wars.
As a result of the Seminole Wars, the surviving Seminole band of the Everglades claims to be the only federally recognized tribe which never relinquished sovereignty or signed a peace treaty with the United States.
In general the American people tended to view the Indian resistance as unwarranted. An article published by the Virginia Enquirer on January 26, 1836, called the "Hostilities of the Seminoles '', assigned all the blame for the violence that came from the Seminole 's resistance to the Seminoles themselves. The article accuses the Indians of not staying true to their word -- the promises they supposedly made in the treaties and negotiations from the Indian Removal Act.
After the War of 1812, some Muscogee leaders such as William McIntosh signed treaties that ceded more land to Georgia. The 1814 signing of the Treaty of Fort Jackson signaled the end for the Creek Nation and for all Indians in the South. Friendly Creek leaders, like Selocta and Big Warrior, addressed Sharp Knife (the Indian nickname for Andrew Jackson) and reminded him that they had kept the peace. Nevertheless, Jackson retorted that they did not "cut (Tecumseh 's) throat '' when they had the chance, so they must now cede Creek lands. Jackson also ignored Article 9 of the Treaty of Ghent that restored sovereignty to Indians and their nations.
Jackson opened this first peace session by faintly acknowledging the help of the friendly Creeks. That done, he turned to the Red Sticks and admonished them for listening to evil counsel. For their crime, he said, the entire Creek Nation must pay. He demanded the equivalent of all expenses incurred by the United States in prosecuting the war, which by his calculation came to 23,000,000 acres (93,000 km²) of land. -- Robert V. Remini, Andrew Jackson
Eventually, the Creek Confederacy enacted a law that made further land cessions a capital offense. Nevertheless, on February 12, 1825, McIntosh and other chiefs signed the Treaty of Indian Springs, which gave up most of the remaining Creek lands in Georgia. After the U.S. Senate ratified the treaty, McIntosh was assassinated on May 13, 1825, by Creeks led by Menawa.
The Creek National Council, led by Opothle Yohola, protested to the United States that the Treaty of Indian Springs was fraudulent. President John Quincy Adams was sympathetic, and eventually the treaty was nullified in a new agreement, the Treaty of Washington (1826). The historian R. Douglas Hurt wrote: "The Creeks had accomplished what no Indian nation had ever done or would do again -- achieve the annulment of a ratified treaty. '' However, Governor Troup of Georgia ignored the new treaty and began to forcibly remove the Indians under the terms of the earlier treaty. At first, President Adams attempted to intervene with federal troops, but Troup called out the militia, and Adams, fearful of a civil war, conceded. As he explained to his intimates, "The Indians are not worth going to war over. ''
Although the Creeks had been forced from Georgia, with many Lower Creeks moving to the Indian Territory, there were still about 20,000 Upper Creeks living in Alabama. However, the state moved to abolish tribal governments and extend state laws over the Creeks. Opothle Yohola appealed to the administration of President Andrew Jackson for protection from Alabama; when none was forthcoming, the Treaty of Cusseta was signed on March 24, 1832, which divided up Creek lands into individual allotments. Creeks could either sell their allotments and receive funds to remove to the west, or stay in Alabama and submit to state laws. The Creeks were never given a fair chance to comply with the terms of the treaty, however. Rampant illegal settlement of their lands by Americans continued unabated with federal and state authorities unable or unwilling to do much to halt it. Further, as recently detailed by historian Billy Winn in his thorough chronicle of the events leading to removal, a variety of fraudulent schemes designed to cheat the Creeks out of their allotments, many of them organized by speculators operating out of Columbus, Georgia and Montgomery, Alabama, were perpetrated after the signing of the Treaty of Cusseta. A portion of the beleaguered Creeks, many desperately poor and feeling abused and oppressed by their American neighbors, struck back by carrying out occasional raids on area farms and committing other isolated acts of violence. Escalating tensions erupted into open war with the United States following the destruction of the village of Roanoke, Georgia, located along the Chattahoochee River on the boundary between Creek and American territory, in May 1836. During the so - called "Creek War of 1836 '' Secretary of War Lewis Cass dispatched General Winfield Scott to end the violence by forcibly removing the Creeks to the Indian Territory west of the Mississippi River. Removal under the Indian Removal Act of 1830 continued through 1835 and beyond; in 1836 over 15,000 Creeks were driven from their land for the last time. About 3,500 of those 15,000 Creeks did not survive the trip to Oklahoma, where they eventually settled.
The Chickasaw received financial compensation from the United States for their lands east of the Mississippi River. In 1836, the Chickasaws had reached an agreement to purchase land from the previously removed Choctaws after a bitter five - year debate. They paid the Choctaws $530,000 (equal to $11,558,818 today) for the westernmost part of the Choctaw land. The first group of Chickasaws moved in 1836 and was led by John M. Millard. The Chickasaws gathered at Memphis on July 4, 1836, with all of their assets -- belongings, livestock, and slaves. Once across the Mississippi River, they followed routes previously established by the Choctaws and the Creeks. Once in Indian Territory, the Chickasaws merged with the Choctaw nation.
By 1838, about 2,000 Cherokee had voluntarily relocated from Georgia to Indian Territory (present day Oklahoma). Forcible removals began in May 1838 when General Winfield Scott received a final order from President Martin Van Buren to relocate the remaining Cherokees. Approximately 4,000 Cherokees died in the ensuing trek to Oklahoma. In the Cherokee language, the event is called nu na da ul tsun yi ("the place where they cried '') or nu na hi du na tlo hi lu i (the trail where they cried). The Cherokee Trail of Tears resulted from the enforcement of the Treaty of New Echota, an agreement signed under the provisions of the Indian Removal Act of 1830, which exchanged Indian land in the East for lands west of the Mississippi River, but which was never accepted by the elected tribal leadership or a majority of the Cherokee people.
The sparsely inhabited Cherokee lands were highly attractive to Georgian farmers experiencing population pressure, and illegal settlements resulted. Long - simmering tensions between Georgia and the Cherokee Nation were brought to a crisis by the discovery of gold near Dahlonega, Georgia, in 1829, resulting in the Georgia Gold Rush, the second gold rush in U.S. history. Hopeful gold speculators began trespassing on Cherokee lands, and pressure mounted to fulfill the Compact of 1802 in which the US Government promised to extinguish Indian land claims in the state of Georgia.
When Georgia moved to extend state laws over Cherokee lands in 1830, the matter went to the U.S. Supreme Court. In Cherokee Nation v. Georgia (1831), the Marshall court ruled that the Cherokee Nation was not a sovereign and independent nation, and therefore refused to hear the case. However, in Worcester v. Georgia (1832), the Court ruled that Georgia could not impose laws in Cherokee territory, since only the national government -- not state governments -- had authority in Indian affairs. Worcester v Georgia is associated with Andrew Jackson 's famous, though apocryphal, quote "John Marshall has made his decision; now let him enforce it! '' In reality, this quote did not appear until 30 years after the incident and was first printed in a textbook authored by Jackson critic Horace Greeley.
Fearing open warfare between federal troops and the Georgia militia, Jackson decided not to enforce Cherokee claims against the state of Georgia. He was already embroiled in a constitutional crisis with South Carolina (i.e. the nullification crisis) and favored Cherokee relocation over civil war. With the Indian Removal Act of 1830, the U.S. Congress had given Jackson authority to negotiate removal treaties, exchanging Indian land in the East for land west of the Mississippi River. Jackson used the dispute with Georgia to put pressure on the Cherokees to sign a removal treaty.
The final treaty, passed in Congress by a single vote, and signed by President Andrew Jackson, was imposed by his successor President Martin Van Buren. Van Buren allowed Georgia, Tennessee, North Carolina, and Alabama an armed force of 7,000 militiamen, army regulars, and volunteers under General Winfield Scott to relocate about 13,000 Cherokees to Cleveland, Tennessee. After the initial roundup, the U.S. military oversaw the emigration to Oklahoma. Former Cherokee lands were immediately opened to settlement. Most of the deaths during the journey were caused by disease, malnutrition, and exposure during an unusually cold winter.
In the winter of 1838 the Cherokee began the 1,000 - mile (1,600 km) march with scant clothing and most on foot without shoes or moccasins. The march began in Red Clay, Tennessee, the location of the last Eastern capital of the Cherokee Nation. Because of the diseases, the Indians were not allowed to go into any towns or villages along the way; many times this meant traveling much farther to go around them. After crossing Tennessee and Kentucky, they arrived at the Ohio River across from Golconda in southern Illinois about the 3rd of December 1838. Here the starving Indians were charged a dollar a head (equal to $22.49 today) to cross the river on "Berry 's Ferry '' which typically charged twelve cents, equal to $2.70 today. They were not allowed passage until the ferry had serviced all others wishing to cross and were forced to take shelter under "Mantle Rock, '' a shelter bluff on the Kentucky side, until "Berry had nothing better to do ''. Many died huddled together at Mantle Rock waiting to cross. Several Cherokee were murdered by locals. The Cherokee filed a lawsuit against the U.S. Government through the courthouse in Vienna, suing the government for $35 a head (equal to $787.17 today) to bury the murdered Cherokee.
As they crossed southern Illinois, on December 26, Martin Davis, Commissary Agent for Moses Daniel 's detachment, wrote:
"There is the coldest weather in Illinois I ever experienced anywhere. The streams are all frozen over something like 8 or 12 inches (20 or 30 cm) thick. We are compelled to cut through the ice to get water for ourselves and animals. It snows here every two or three days at the fartherest. We are now camped in Mississippi (River) swamp 4 miles (6 km) from the river, and there is no possible chance of crossing the river for the numerous quantity of ice that comes floating down the river every day. We have only traveled 65 miles (105 km) on the last month, including the time spent at this place, which has been about three weeks. It is unknown when we shall cross the river... ''
A soldier from Georgia said:
I fought through the War Between the States and have seen many men shot, but the Cherokee Removal was the cruelest work I ever knew.
It eventually took almost three months to cross the 60 miles (97 kilometres) on land between the Ohio and Mississippi Rivers. The trek through southern Illinois is where the Cherokee suffered most of their deaths. However, a few years before forced removal, some Cherokee who opted to leave their homes voluntarily chose a water - based route through the Tennessee, Ohio and Mississippi rivers. It took only 21 days, but the Cherokee who were forcibly relocated were wary of water travel.
Removed Cherokees initially settled near Tahlequah, Oklahoma. When signing the Treaty of New Echota in 1835 Major Ridge said "I have signed my death warrant. '' The resulting political turmoil led to the killings of Major Ridge, John Ridge, and Elias Boudinot; of the leaders of the Treaty Party, only Stand Watie escaped death. The population of the Cherokee Nation eventually rebounded, and today the Cherokees are the largest American Indian group in the United States.
There were some exceptions to removal. Perhaps 100 Cherokees evaded the U.S. soldiers and lived off the land in Georgia and other states. Those Cherokees who lived on private, individually owned lands (rather than communally owned tribal land) were not subject to removal. In North Carolina, about 400 Cherokees, known as the Oconaluftee Cherokee, lived on land in the Great Smoky Mountains owned by a white man named William Holland Thomas (who had been adopted by Cherokees as a boy), and were thus not subject to removal. Added to this were some 200 Cherokee from the Nantahala area allowed to stay in the Qualla Boundary after assisting the U.S. Army in hunting down and capturing the family of the old prophet, Tsali (who faced a firing squad after capture). These North Carolina Cherokees became the Eastern Band of the Cherokee Nation.
The United States Court of Claims ruled in favor of the Eastern Cherokee Tribe 's claim against the U.S. on May 18, 1905. This resulted in the appropriation of $1 million (equal to $27,438,023.04 today) to the Tribe 's eligible individuals and families. Interior Department employee Guion Miller created a list using several rolls and applications to verify tribal enrollment for the distribution of funds, known as the Guion Miller Roll. The applications received documented over 125,000 individuals; the court approved more than 30,000 individuals to share in the funds.
In 1987, about 2,200 miles (3,500 km) of trails were authorized by federal law to mark the removal of 17 detachments of the Cherokee people. Called the "Trail of Tears National Historic Trail, '' it traverses portions of nine states and includes land and water routes.
An historical drama based on the Trail of Tears, Unto These Hills written by Kermit Hunter, has sold over five million tickets for its performances since its opening on July 1, 1950, both touring and at the outdoor Mountainside Theater of the Cherokee Historical Association in Cherokee, North Carolina.
Cherokee artist Troy Anderson was commissioned to design the Cherokee Trail of Tears Sesquicentennial Commemorative Medallion. The falling - tear medallion shows a seven - pointed star, the symbol of the seven clans of the Cherokees.
|
where is area code 207 located in usa | Area code 207 - wikipedia
Area code 207 is the North American telephone area code for the state of Maine, excluding Estcourt Station, which uses Quebec 's area code 418 and its overlay 581.
Area code 207 was created as one of the original area codes in 1947. The numbering plan area retains its original boundaries, having never been split or overlaid.
Area code 207 is expected to exhaust by 2024.
Coordinates: 45°0′0″N 69°0′0″W
|
when did the english land in north america | British America - wikipedia
British America refers to the English territories in North America (including Bermuda), Central America, the Caribbean, and Guyana from 1607 to 1783. Formally, the British colonies in North America were known as British America and the British West Indies until 1776, when the Thirteen Colonies declared their independence and formed the United States of America. After that, the term British North America was used to describe the remainder of Britain 's continental North American possessions. That term was first used informally in 1783, but it was uncommon before the Report on the Affairs of British North America (1839), called the Durham Report.
British America gained large amounts of new territory following the Treaty of Paris (1763) which ended British involvement in the Seven Years ' War. At the start of the American War of Independence in 1775, the British Empire included twenty colonies north and east of New Spain (present - day areas of Mexico and the Western United States). East and West Florida were ceded to Spain in the Treaty of Paris (1783) which ended the American Revolution, and then ceded by Spain to the United States in 1819. The remaining continental colonies of British North America formed the Dominion of Canada by uniting between 1867 and 1873. The Dominion of Newfoundland joined Canada in 1949.
A number of English colonies were established in North America between 1606 and 1670 by individuals and companies whose investors expected to reap rewards from their speculation. They were granted commercial charters by King James I, King Charles I, Parliament, and King Charles II. The first permanent settlement was founded at Jamestown, Virginia by the London Company.
The Thirteen Colonies formed the original states of the United States of America.
Several British colonies and territories ruled by Britain from 1763 were later ceded by Britain to Spain (the Floridas) or the United States (the Indian Reserve and Southwestern Quebec). Others eventually became part of modern Canada.
|
can you live in the us and not be a citizen | Citizenship of the United States - wikipedia
Citizenship of the United States is a status that entails specific rights, duties and benefits. Citizenship is understood as a "right to have rights '' since it serves as a foundation of fundamental rights derived from and protected by the Constitution and laws of the United States, such as the right to freedom of expression, vote, due process, live and work in the United States, and to receive federal assistance. However, not all U.S. citizens, such as those living in Puerto Rico, have the right to vote in national elections.
There are two primary sources of citizenship: birthright citizenship, in which a person is presumed to be a citizen if he or she was born within the territorial limits of the United States, or -- providing certain other requirements are met -- born abroad to a U.S. citizen parent, and naturalization, a process in which an immigrant applies for citizenship and is accepted. These two pathways to citizenship are specified in the Citizenship Clause of the Constitution 's 1868 Fourteenth Amendment which reads:
All persons born or naturalized in the United States, and subject to the jurisdiction thereof, are citizens of the United States and of the State wherein they reside.
National citizenship signifies membership in the country as a whole; state citizenship, in contrast, signifies a relation between a person and a particular state and has application generally limited to domestic matters. State citizenship may affect (1) tax decisions, (2) eligibility for some state - provided benefits such as higher education, and (3) eligibility for state political posts such as U.S. Senator.
In Article One of the Constitution, the power to establish a "uniform rule of naturalization '' is granted explicitly to Congress.
U.S. law permits multiple citizenship. A citizen of another country naturalized as a U.S. citizen may retain their previous citizenship, though they must renounce allegiance to the other country. A U.S. citizen retains U.S. citizenship when becoming the citizen of another country, should that country 's laws allow it. Citizenship can be renounced by American citizens who also hold another citizenship via a formal procedure at a U.S. Embassy, and it can also be restored.
Civic participation is not required in the United States. There is no requirement to attend town meetings, belong to a political party, or vote in elections. However, a benefit of naturalization is the ability to "participate fully in the civic life of the country ''. There is disagreement about whether popular lack of involvement in politics is helpful or harmful.
Vanderbilt professor Dana D. Nelson suggests that most Americans merely vote for president every four years, and sees this pattern as undemocratic. In her book Bad for Democracy, Nelson argues that declining citizen participation in politics is unhealthy for long term prospects for democracy.
However, writers such as Robert D. Kaplan in The Atlantic see benefits to non-involvement; he wrote "the very indifference of most people allows for a calm and healthy political climate ''. Kaplan elaborated: "Apathy, after all, often means that the political situation is healthy enough to be ignored. The last thing America needs is more voters -- particularly badly educated and alienated ones -- with a passion for politics. '' He argued that civic participation, in itself, is not always a sufficient condition to bring good outcomes, and pointed to authoritarian societies such as Singapore which prospered because it had "relative safety from corruption, from breach of contract, from property expropriation, and from bureaucratic inefficiency ''.
A person who is considered a citizen by more than one nation has dual citizenship. It is possible for a United States citizen to have dual citizenship; this can be achieved in various ways, such as by birth in the United States to a parent who is a citizen of a foreign country (in certain circumstances the foreign nationality may be transmitted even by a grandparent), by birth in another country to one or more parents who are United States citizens, or by having parents who are citizens of different countries. Anyone who becomes a naturalized U.S. citizen is required to renounce any prior "allegiance '' to other countries during the naturalization ceremony; however, this renunciation of allegiance is generally not considered renunciation of citizenship to those countries.
The earliest recorded instances of dual citizenship began before the French Revolution when the British captured American ships and forced them back to Europe. The British Crown considered subjects from the United States as British by birth and forced them to fight in the Napoleonic wars.
Under certain circumstances there are relevant distinctions between dual citizens who hold a "substantial contact '' with a country, for example by holding a passport or by residing in the country for a certain period of time, and those who do not. For example, under the Heroes Earnings Assistance and Relief Tax (HEART) Act of 2008, U.S. citizens in general are subject to an expatriation tax if they give up U.S. citizenship, but there are exceptions (specifically 26 U.S.C. § 877A (g) (1) (b)) for those who are either under age 18½ upon giving up U.S. citizenship and have lived in the U.S. for less than ten years in their lives, or who are dual citizens by birth residing in their other country of citizenship at the time of giving up U.S. citizenship and have lived in the U.S. for less than ten out of the past fifteen years. Similarly, the United States considers holders of a foreign passport to have a substantial contact with the country that issued the passport, which may preclude security clearance.
U.S. citizens are required by federal law to identify themselves with a U.S. passport, not with any other foreign passport, when entering or leaving the United States. The Supreme Court case of Afroyim v. Rusk declared that a U.S. citizen did not lose his citizenship by voting in an election in a foreign country, or by acquiring foreign citizenship, if they did not intend to lose U.S. citizenship. U.S. citizens who have dual citizenship do not lose their United States citizenship unless they renounce it officially.
Citizenship began in colonial times as an active relation between men working cooperatively to solve municipal problems and participating actively in democratic decision - making, such as in New England town hall meetings. Men met regularly to discuss local affairs and make decisions. These town meetings were described as the "earliest form of American democracy '' which was vital since citizen participation in public affairs helped keep democracy "sturdy '', according to Alexis de Tocqueville in 1835. A variety of forces changed this relation during the nation 's history. Citizenship became less defined by participation in politics and more defined as a legal relation with accompanying rights and privileges. While the realm of civic participation in the public sphere has shrunk, the citizenship franchise has been expanded to include not just propertied white adult men but black men and adult women.
Earlier on, U.S. citizenship was not given to people of Indian or East Asian descent. A.K. Mozumdar was the first person born in the Indian sub-continent to attain U.S. citizenship. A few years earlier, as a result of the 1898 United States v. Wong Kim Ark Supreme Court decision, ethnic Chinese born in the United States became citizens. During World War II, due to Japan 's heavy involvement as an aggressor, it was decided to restrict many Japanese citizens from applying for U.S. citizenship, while Chinese citizens encountered no trouble, because of China 's alliance with the United States.
The Equal Nationality Act of 1934 was an American law which allowed foreign - born children of American mothers and alien fathers who had entered America before age 18 and lived in America for five years to apply for American citizenship for the first time. It also made the naturalization process quicker for American women 's alien husbands. This law equalized expatriation, immigration, naturalization, and repatriation between women and men. However, it was not applied retroactively, and was modified by later laws, such as the Nationality Act of 1940.
U.S. citizenship is usually acquired by birth when a child is born in the territory of the United States. In addition to U.S. states, this includes the District of Columbia, Guam, Puerto Rico, the Northern Mariana Islands and the U.S. Virgin Islands. Citizenship, however, was not specified in the original Constitution. In 1868, the Fourteenth Amendment specifically defined persons who were either born or naturalized in the United States and subject to its jurisdiction as citizens. All babies born in the United States -- except those born to enemy aliens in wartime or the children of foreign diplomats -- enjoy U.S. citizenship under the Supreme Court 's long - standing interpretation of the Fourteenth Amendment. The amendment states: "All persons born or naturalized in the United States, and subject to the jurisdiction thereof, are citizens of the United States and of the State wherein they reside. '' There remains dispute as to who is "subject to the jurisdiction '' of the United States at birth.
By acts of Congress, every person born in Puerto Rico, the U.S. Virgin Islands, Guam, and the Northern Mariana Islands is a United States citizen by birth. Also, every person born in the former Panama Canal Zone whose father or mother (or both) is or was a citizen is a United States citizen by birth.
Regardless of where they are born, children of U.S. citizens are U.S. citizens in most cases. Children born outside the United States with at least one U.S. citizen parent usually have birthright citizenship by parentage.
A child of unknown parentage found in the US while under the age of 5 is considered a US citizen until proven, before reaching the age of 22, to have not been born in the US.
While persons born in the United States are considered to be citizens and can have passports, children under age eighteen are legally considered to be minors and can not vote or hold office. Upon the event of their eighteenth birthday, they are considered full citizens but there is no ceremony acknowledging this relation or any correspondence between the new citizen and the government to this effect. Citizenship is assumed to exist, and the relation is assumed to remain viable until death or until it is renounced or dissolved by some other legal process. Secondary schools teach the basics of citizenship and create "informed and responsible citizens '' who are "skilled in the arts of effective deliberation and action ''.
Americans who live in foreign countries and become members of other governments have, in some instances, been stripped of citizenship, although there have been court cases where decisions regarding citizenship have been reversed.
Acts of Congress provide for acquisition of citizenship by persons born abroad.
The agency in charge of admitting new citizens is the United States Citizenship and Immigration Services, commonly abbreviated as USCIS. It is a bureau of the Department of Homeland Security. It offers web - based services. The agency depends on application fees for revenue; in 2009, with a struggling economy, applications were down sharply, and consequently there was much less revenue to upgrade and streamline services. There was speculation that if the administration of President Barack Obama passed immigration reform measures, then the agency could face a "welcome but overwhelming surge of Americans - in - waiting '' and longer processing times for citizenship applications. The USCIS has made efforts to digitize records. A USCIS website states that "U.S. Citizenship and Immigration Services (USCIS) is committed to offering the best possible service to you, our customer '' and that "With our focus on customer service, we offer you a variety of services both before and after you file your case. '' The website allowed applicants to estimate the length of time required to process specific types of cases, to check application status, and to access a customer guide. The USCIS processes cases in the order they 're received.
People applying to become citizens must satisfy certain requirements. For example, there have been requirements that applicants have been permanent residents for five years (three if married to a U.S. citizen), be of "good moral character '' (meaning no felony convictions), be of "sound mind '' in the judgment of immigration officials, have knowledge of the Constitution, and be able to speak and understand English unless they are elderly or disabled. Applicants must also pass a simple citizenship test. Previously, a test published by the Immigration and Naturalization Service asked questions such as "How many stars are there in our flag? '' and "What is the Constitution? '' and "Who is the president of the United States today? '' At one point, the Government Printing Office sold flashcards for $8.50 to help test takers prepare for the test. In 2006, the government replaced the former trivia test with a ten - question oral test designed to "shun simple historical facts about America that can be recounted in a few words, for more explanation about the principles of American democracy, such as freedom ''. One reviewer described the new citizenship test as "thoughtful ''. While some have criticized the new version of the test, officials counter that the new test is a "teachable moment '' without making it conceptually more difficult, since the list of possible questions and answers, as before, will be publicly available. Six correct answers constitute a passing grade. The new test probes for signs that immigrants "understand and share American values ''. A separate way to become a permanent resident is the U.S. government 's Diversity Visa (DV) lottery, an annual drawing through which eligible foreign nationals may be selected to become permanent residents.
According to a senior fellow at the Migration Policy Institute, "citizenship is a very, very valuable commodity ''. However, one study suggested that legal residents who are eligible for citizenship but do not apply tend to have low incomes (41 percent), to speak English poorly (60 percent), or to have low levels of education (25 percent). There is strong demand for citizenship based on the number of applications filed. From 1920 to 1940, the number of immigrants to the United States who became citizens numbered about 200,000 each year; there was a spike after World War II, and then the level fell to about 150,000 per year before returning to the 200,000 level around 1980. From the mid-1990s to 2009, the levels rose to about 500,000 per year with considerable variation. In 1996, more than one million people became citizens through naturalization. In 1997, there were 1.41 million applications filed; in 2006, 1.38 million. The number of naturalized citizens in the United States rose from 6.5 million in the mid-1990s to 11 million in 2002. By 2003, the pool of immigrants eligible to become naturalized citizens was 8 million, and of these, 2.7 million lived in California. In 2003, the number of new citizens from naturalization was 463,204. In 2007, the number was 702,589. In 2007, 1.38 million people applied for citizenship, creating a backlog. In 2008, applications decreased to 525,786.
Naturalization fees were $60 in 1989; $90 in 1991; $95 in 1994; $225 in 1999; $260 in 2002; $320 in 2003; and $330 in 2005. In 2007, the application fee was increased from $330 to $595, and an additional $80 computerized fingerprinting fee was added; the biometrics fee was increased to $85 in 2010. On December 23, 2016, the application fee was increased again, from $595 to $640. The rising fees have drawn criticism as putting up one more wall to citizenship, although Doris Meissner, a senior fellow at the Migration Policy Institute and former Immigration and Naturalization Service Commissioner, doubted that fee increases deter citizenship - seekers. In 2009, the number of immigrants applying for citizenship plunged 62 percent; the reasons cited were the slowing economy and the cost of naturalization.
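The fee figures above also determine an applicant 's total out - of - pocket cost, since the biometrics fee is paid on top of the application fee. The short sketch below only restates that arithmetic; the combined totals are inferences from the numbers quoted here (and happen to match the $680 fee cited later for American Samoan applicants), and the variable names are illustrative.

```python
# Combined cost of a naturalization application (application fee + biometrics fee),
# using only the fee amounts quoted in the paragraph above.

application_fee_2007 = 595
application_fee_2016 = 640
biometrics_fee_2010 = 85   # raised from $80 in 2010

total_2010_to_2016 = application_fee_2007 + biometrics_fee_2010   # $680
total_after_2016 = application_fee_2016 + biometrics_fee_2010     # $725

print(f"total cost, 2010-2016:  ${total_2010_to_2016}")
print(f"total cost, after 2016: ${total_after_2016}")
```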
The citizenship process has been described as a ritual that is meaningful for many immigrants. Many new citizens are sworn in during Fourth of July ceremonies. Most citizenship ceremonies take place at offices of the U.S. Citizenship and Immigration Services. However, one swearing - in ceremony was held at Arlington National Cemetery in Virginia in 2008. The judge who chose this venue explained: "I did it to honor our country 's warriors and to give the new citizens a sense for what makes this country great. '' According to federal law, citizenship applicants who are also changing their names must appear before a federal judge.
The title of "Honorary Citizen of the United States '' has been granted eight times by an act of Congress or by a proclamation issued by the President pursuant to authorization granted by Congress. The eight individuals are Sir Winston Churchill, Raoul Wallenberg, William Penn, Hannah Callowhill Penn, Mother Teresa, the Marquis de Lafayette, Casimir Pulaski, and Bernardo de Gálvez y Madrid, Viscount of Galveston and Count of Gálvez.
Sometimes the government has granted non-citizen immigrants who died fighting for American forces the posthumous title of U.S. citizen, but this is not considered honorary citizenship. In June 2003, Congress approved legislation to help families of fallen non-citizen soldiers.
Because corporations are considered persons in the eyes of the law, there is a sense in which they can be regarded as "citizens ''. For example, the airline Virgin America asked the United States Department of Transportation to be treated as an American air carrier. The advantage of such "citizenship '' is having the protection and support of the United States government when jockeying with foreign governments for access to air routes and overseas airports. Alaska Airlines, a competitor of Virgin America, asked for a review of the situation; under U.S. law, "foreign ownership in a U.S. air carrier is limited to 25 % of the voting interest in the carrier, '' but executives at Virgin America insisted the airline met this requirement.
For the purposes of diversity jurisdiction in the United States civil procedure, corporate citizenship is determined by the principal place of business of the corporation. There is some degree of disagreement among legal authorities as to how exactly this may be determined.
Another sense of "corporate citizenship '' is a way to show support for causes such as social issues and the environment and, indirectly, gain a kind of "reputational advantage ''.
A distinction is made between U.S. citizenship and U.S. nationality under U.S. law. Citizenship comprises a larger set of rights and privileges, afforded to U.S. citizens, that is not extended to individuals who are only U.S. nationals; although all U.S. citizens are also U.S. nationals, not all U.S. nationals are U.S. citizens. The United States Naturalization Act of 1790 (1 Stat. 103) provided the first rules to be followed by the United States in the granting of national citizenship after the ratification of the U.S. Constitution. A number of other acts and statutes followed the Act of 1790 that expanded or addressed specific situations, but it was not until the Immigration and Nationality Act of 1952 (Pub. L. 82 -- 414, 66 Stat. 163, enacted June 27, 1952), codified under Title 8 of the United States Code (8 U.S.C. ch. 12), that the variety of statutes governing citizenship law were organized within a single body of text. The Immigration and Nationality Act of 1952 set forth the legal requirements for the acquisition of American nationality, while the Fourteenth Amendment (1868) addressed citizenship rights. The United States nationality law, 8 U.S.C. § 1408, despite its "nationality '' title, comprises the statutes that embody the law regarding both American citizenship and American nationality.
For example, as specified in 8 U.S.C. § 1408, a person whose only connection to the U.S. is through birth in an outlying possession (which, as of March 2015, was defined in 8 U.S.C. § 1101 as American Samoa and Swains Island), or through descent from a person so born, acquires only U.S. nationality, not U.S. citizenship. Such a person is said to be a U.S. national rather than a U.S. citizen, or simply a U.S. noncitizen national.
As of February 11, 2014, American Samoans continued to be U.S. nationals but not U.S. citizens (i.e., U.S. noncitizen nationals). People born in American Samoa receive passports declaring the holder to be only a U.S. national, not a U.S. citizen. For an American Samoan to become a U.S. citizen, he or she must relocate to another part of the United States, initiate the naturalization process, pay the $680 fee (as of February 11, 2014), pass a moral character assessment, be fingerprinted and pass an English / civics examination.
In addition, residents of the Northern Mariana Islands who automatically gained U.S. citizenship in 1986 as a result of the Covenant between the Northern Mariana Islands and the U.S. could elect to become U.S. noncitizen nationals within 6 months of the implementation of the Covenant or within 6 months of turning 18.
The nationality status of a person born in an unincorporated U.S. Minor Outlying Island is not specifically mentioned by law, but under international law and Supreme Court dicta, they are also regarded as U.S. noncitizen nationals.
U.S. noncitizen nationals may reside and work in the United States without restrictions, and may apply for citizenship under the same rules as resident aliens. Like resident aliens, they are not presently allowed by any U.S. state to vote in federal or state elections, although, as with resident aliens, there is no constitutional prohibition against their doing so. Like U.S. citizens, U.S. noncitizen nationals may transmit their U.S. noncitizen nationality to children born abroad, although the rules are somewhat different from those for U.S. citizens.
The U.S. passport issued to noncitizen nationals contains endorsement code 9, which states on the annotations page: "The bearer is a United States national and not a United States citizen ''.
Naturalization is a highly contentious matter in US politics, particularly regarding illegal immigrants. Candidates in the 2008 presidential election, such as Rudolph Giuliani, tried to "carve out a middle ground '' on the issue of illegal immigration, while rivals such as John McCain advocated legislation requiring illegal immigrants to first leave the country before becoming eligible to apply for citizenship. Some measures to require proof of citizenship upon registering to vote have also met with controversy.
Whether to include questions about current citizenship status on the United States Census has been debated in the Senate, and such questions tend to cause controversy because citizenship bears directly on political power. Census data affects state electoral clout as well as budgetary allocations, and including non-citizens in Census counts shifts political power toward states with large numbers of non-citizens because reapportionment of congressional seats is based on Census data.
There have been controversies based on speculation about which way newly naturalized citizens are likely to vote. Since immigrants from many countries have been presumed to vote Democratic if naturalized, there have been efforts by Democratic administrations to streamline citizenship applications before elections to increase turnout; Republicans, in contrast, have exerted pressure to slow down the process. In 1997, there were efforts to strip the citizenship of 5,000 newly approved immigrants who, it was thought, had been "wrongly naturalized ''; a legal effort to do this presented enormous challenges. An examination by the Immigration and Naturalization Service of 1.1 million people who were granted citizenship from September 1995 to September 1996 found 4,946 cases in which a criminal arrest should have disqualified an applicant or in which an applicant lied about his or her criminal history. Before the 2008 election, there was controversy about the speed of the USCIS in processing applications; one report suggested that the agency would complete 930,000 applications in time for the newly processed citizens to vote in the November 2008 election. Foreign - born naturalized citizens tend to vote at the same rates as natives. For example, in the state of New Jersey in the 2008 election, the foreign born represented 20.1 percent of the state 's population of 8,754,560; of these, 636,000 were eighteen or older and hence eligible to vote; of eligible voters, 396,000 actually voted, which was about 62 %. So foreign - born citizens vote in roughly the same proportion (62 %) as native citizens (67 %).
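The New Jersey comparison above reduces to a simple ratio. A minimal sketch of that arithmetic, using only the figures quoted in the paragraph; the variable names and rounding are illustrative.

```python
# Worked arithmetic for the 2008 New Jersey example quoted above.

state_population = 8_754_560
foreign_born_share = 0.201                    # 20.1 % of the state's population
foreign_born_residents = state_population * foreign_born_share   # ~1.76 million

eligible_foreign_born_voters = 636_000        # foreign-born citizens aged 18 or older
foreign_born_who_voted = 396_000

turnout = foreign_born_who_voted / eligible_foreign_born_voters
print(f"foreign-born residents: ~{foreign_born_residents:,.0f}")
print(f"foreign-born turnout:   {turnout:.0%}")   # ~62 %, versus ~67 % for native-born citizens
```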
There has been controversy about the agency in charge of citizenship. The USCIS has been criticized as being a "notoriously surly, inattentive bureaucracy '' with long backlogs in which "would - be citizens spent years waiting for paperwork ''. Rules made by the United States Congress and the federal government regarding citizenship are highly technical and often confusing, and the agency is forced to cope with enforcement within a complex regulatory milieu. There have been instances in which applicants for citizenship have been deported on technicalities. A Pennsylvania doctor and his wife, both from the Philippines, ran afoul of legal technicalities and faced deportation after applying for citizenship, as did a Mr. Darnell from Canada, who was married to an American and had two children from the marriage. The New York Times reported that "Mr. Darnell discovered that a 10 - year - old conviction for domestic violence involving a former girlfriend, even though it had been reduced to a misdemeanor and erased from his public record, made him ineligible to become a citizen -- or even to continue living in the United States. '' Overworked federal examiners under pressure to make "quick decisions '' as well as "weed out security risks '' have been described as preferring "to err on the side of rejection ''. In 2000, 399,670 applications were denied (about one - third of all applications); in 2007, 89,683 applications for naturalization were denied, about 12 percent of those presented.
Generally, eligibility for citizenship is denied to the millions of people living in the United States illegally, although from time to time there have been amnesties. In 2006, there were mass protests numbering hundreds of thousands of people throughout the US demanding U.S. citizenship for illegal immigrants; many carried banners which read "We Have A Dream Too ''. One estimate put the number of illegal immigrants in the USA in 2006 at 12 million. Many American high school students also face citizenship issues: in 2008, it was estimated that there were 65,000 illegal immigrant students, and the number was less clear for post-secondary education. A 1982 Supreme Court decision entitled illegal immigrants to free education from kindergarten through high school. Undocumented immigrants who get arrested face difficulties in the courtroom, as they have no constitutional right to challenge the outcome of their deportation hearings. In 2009, writer Tom Barry of the Boston Review criticized the crackdown against illegal immigrants because it "flooded the federal courts with nonviolent offenders, besieged poor communities, and dramatically increased the U.S. prison population, while doing little to solve the problem itself ''. Barry criticized the United States ' high incarceration rate as being "five times greater than the average rate in the rest of the world ''. Virginia Senator Jim Webb agreed that "we are doing something dramatically wrong in our criminal justice system ''.
U.S. citizens can relinquish their citizenship, which involves abandoning the right to reside in the United States and all the other rights and responsibilities of citizenship. "Relinquishment '' is the legal term covering all seven potentially - expatriating acts (ways of giving up citizenship) under 8 U.S.C. § 1481 (a). "Renunciation '' refers to two of those acts: swearing an oath of renunciation before a U.S. diplomatic or consular officer abroad, or before an official designated by the Attorney General within the United States during a state of war. Out of an estimated three to six million U.S. citizens residing abroad, between five and six thousand relinquished citizenship each year in 2015 and 2016. U.S. nationality law treats people who perform potentially - expatriating acts with intent to give up U.S. citizenship as ceasing to be U.S. citizens from the moment of the act, but U.S. tax law since 2004 treats such individuals as though they remain U.S. citizens until they notify the State Department and apply for a Certificate of Loss of Nationality (CLN).
Renunciation requires an oath to be sworn before a State Department officer and thus involves in - person attendance at an embassy or consulate, but applicants for CLNs on the basis of other potentially - expatriating acts must attend an in - person interview as well. During the interview, a State Department official assesses whether the person acted voluntarily, intended to abandon all rights of U.S. citizenship, and understands the consequences of their actions. The State Department strongly recommends that Americans intending to relinquish citizenship have another citizenship, but will permit Americans to make themselves stateless if they understand the consequences. There is a $2,350 administrative fee for the process. In addition, an expatriation tax is imposed on some individuals relinquishing citizenship, but payment of the tax is not a legal prerequisite for relinquishing citizenship; rather, the tax and its associated forms are due on the normal tax due date of the year following relinquishment of citizenship. State Department officials do not seek to obtain any tax information from the interviewee, and instruct the interviewee to contact the IRS directly with any questions about taxes.
how many episodes per season of sesame street | History of Sesame Street - wikipedia
The preschool educational television program Sesame Street was first aired on public broadcasting television stations November 10, 1969, and will reach its 47th season in 2017. The history of Sesame Street has reflected changing attitudes to developmental psychology, early childhood education and cultural diversity. Featuring Jim Henson 's Muppets, animation, live shorts, humor and celebrity appearances, it was the first television program of its kind to base its content and production values on laboratory and formative research, and the first to include a curriculum "detailed or stated in terms of measurable outcomes ''. Initial responses to the show included adulatory reviews, some controversy and high ratings. By its 40th anniversary in 2009, Sesame Street was broadcast in over 120 countries, and 20 independent international versions had been produced.
The show was conceived in 1966 during discussions between television producer Joan Ganz Cooney and Carnegie Corporation vice president Lloyd Morrisett. Their goal was to create a children 's television show that would "master the addictive qualities of television and do something good with them '', such as helping young children prepare for school. After two years of research, the newly formed Children 's Television Workshop (CTW) received a combined grant of $8 million from the Carnegie Corporation, the Ford Foundation and the U.S. federal government to create and produce a new children 's television show.
By the show 's tenth anniversary in 1979, nine million American children under the age of six were watching Sesame Street daily, and several studies showed it was having a positive educational impact. The cast and crew expanded during this time, including the hiring of women in the crew and additional minorities in the cast. In 1981, the federal government withdrew its funding, so the CTW turned to other sources, such as its magazine division, book royalties, product licensing and foreign income. During the 1980s, Sesame Street 's curriculum expanded to include topics such as relationships, ethics and emotions. Many of the show 's storylines were taken from the experiences of its writing staff, cast and crew, most notably the death of Will Lee -- who played Mr. Hooper -- and the marriage of Luis and Maria.
In recent decades, Sesame Street has faced societal and economic challenges, including changes in the viewing habits of young children, more competition from other shows, the development of cable television and a drop in ratings. After the turn of the 21st century, the show made major structural adaptations, including changing its traditional magazine format to a narrative format. Because of the popularity of the Muppet Elmo, the show incorporated a popular segment known as "Elmo 's World ''. Sesame Street has won eight Grammys and over a hundred Emmys in its history -- more than any other children 's show.
In the late 1960s, 97 % of all American households owned a television set, and preschool children watched an average of 27 hours of television per week; programs created for them were widely criticized for being too violent and for reflecting commercial values. Producer Joan Ganz Cooney called children 's programming a "wasteland '', and she was not alone in her criticism. Many children 's television programs were produced by local stations with little regard for educational goals or cultural diversity, and the use of children 's programming as an educational tool was "unproven '' and "a revolutionary concept ''.
According to children 's media experts Edward Palmer and Shalom M. Fisch, children 's television programs of the 1950s and 1960s duplicated "prior media forms ''. For example, they tended to show simple shots of a camera 's - eye view of a location filled with children, or they recreated storybooks with shots of book covers and motionless illustrated pages. The hosts of these programs were "insufferably condescending '', though one exception was Captain Kangaroo, created and hosted by Bob Keeshan, which author Michael Davis described as having a "slower pace and idealism '' that most other children 's shows lacked.
Early childhood educational research had shown that when children were prepared to succeed in school, they earned higher grades and learned more effectively. Children from low - income families had fewer resources than children from higher - income families to prepare them for school. Research had shown that children from low - income, minority backgrounds tested "substantially lower '' than middle - class children in school - related skills, and that they continued to have educational deficits throughout school. The field of developmental psychology had grown during this period, and scientists were beginning to understand that changes in early childhood education could increase children 's cognitive growth. Because of these trends in education, along with the great societal changes occurring in the United States during this era, the time was ripe for the creation of a show like Sesame Street.
Since 1962, Cooney had been producing talk shows and documentaries at educational television station WNDT, and in 1966 had won an Emmy for a documentary about poverty in America. In early 1966, Cooney and her husband Tim hosted a dinner party at their apartment in New York; experimental psychologist Lloyd Morrisett, who has been called Sesame Street 's "financial godfather '', and his wife Mary were among the guests. Cooney 's boss, Lewis Freedman, whom Cooney called "the grandfather of Sesame Street '', also attended the party, as did their colleague Anne Bower. As a vice-president at the Carnegie Corporation, Morrisett had awarded several million dollars in grants to organizations that educated poor and minority preschool children. Morrisett and the other guests felt that even with limited resources, television could be an effective way to reach millions of children.
A few days after the dinner party, Cooney, Freedman and Morrisett met at the Carnegie Corporation 's offices to make plans; they wanted to harness the addictive power of television for their own purposes, but did not yet know how. The following summer, despite Cooney 's lack of experience in the field of education, Morrisett hired her to conduct research on childhood development, education and media, and she visited experts in these fields across the United States and Canada. She researched their ideas about the viewing habits of young children and wrote a report on her findings.
Cooney 's study, titled "Television for Preschool Education '', spelled out how television could be used to help young children, especially those from low - income families, prepare for school. The focus of the new show was on children from disadvantaged backgrounds, but Cooney and the show 's creators recognized that in order to achieve the kind of success they wanted, it had to be equally accessible to children of all socio - economic and ethnic backgrounds. At the same time, they wanted to make the show so appealing to inner - city children that it would help them learn as much as children with more educational opportunities. This was the show 's primary criterion for success.
Cooney proposed that public television, even though it had a poor track record in attracting inner - city audiences, could be used to improve the quality of children 's programming. She suggested using the television medium 's "most engaging traits '', including high production values, sophisticated writing, and quality film and animation, to reach the largest audience possible. In the words of critic Peter Hellman, "If (children) could recite Budweiser jingles from TV, why not give them a program that would teach the ABCs and simple number concepts? '' Cooney wanted to create a program that would spread values favoring education to nonviewers -- including their parents and older siblings, who tended to control the television set. To this end, she suggested that humor directed toward adults be included, which, as Lesser reported, "may turn out to be a pretty good system in forcing the young child to stretch to understand programs designed for older audiences ''. Cooney also believed cultural references and guest appearances by celebrities would encourage parents and older siblings to watch the show together.
As a result of Cooney 's proposal, the Carnegie Corporation awarded her a $1 million grant in 1968 to establish the Children 's Television Workshop (CTW) to provide support to the creative staff of the new show. Morrisett, who was responsible for fundraising, procured additional grants from the United States federal government, the Corporation for Public Broadcasting, and the Ford Foundation for the CTW 's initial budget, which totaled $8 million; obtaining funding from this combination of government agencies and private foundations protected the CTW from economic pressures experienced by commercial networks. Sesame Street was an expensive program to produce because the creators decided they needed to compete with other programs that invested in professional, high quality production.
The producers spent eighteen months preparing the new show, something unprecedented in children 's television. The show had an "impressive '' budget of $28,000 per episode. After being named executive director of the CTW, Cooney began to assemble a team of producers: Jon Stone was responsible for writing, casting, and format; David Connell took over animation and volume production; and Samuel Gibbon served as the show 's chief liaison between the production staff and the research team. Stone, Connell, and Gibbon had worked on Captain Kangaroo together, but were not involved in children 's television when Cooney recruited them. At first, Cooney planned to divide the show 's production of five episodes a week among several teams, but she was advised by CBS vice-president Mike Dann to use only one. This production team was led by Connell, who had gained experience producing many episodes in a short period of time, a process called "volume production '', during his eleven years working on Captain Kangaroo.
The CTW hired Harvard University professor Gerald S. Lesser to design the show 's educational objectives and establish and lead a National Board of Advisers. Instead of providing what Lesser called "window dressing '', the Board actively participated in the construction of educational goals and creative methods. At the Board 's direction, Lesser conducted five three - day curriculum planning seminars in Boston and New York City in summer 1968. The purpose of the seminars was to ascertain which school - preparation skills to emphasize in the new show. The producers gathered professionals with diverse backgrounds to obtain ideas for educational content. They reported that the seminars were "widely successful '', and resulted in long and detailed lists of possible topics for inclusion in the Sesame Street curriculum; in fact, the seminars produced more suggested educational objectives than could ever be addressed by one television series.
Instead of focusing on the social and emotional aspects of development, the producers decided to follow the suggestions of the seminar participants and emphasize cognitive skills, a decision they felt was warranted by the demands of school and the wishes of parents. The objectives developed during the seminars were condensed into key categories: symbolic representation, cognitive processes, and the physical and social environment. The seminars set forth the new show 's policy about race and social issues and provided the show 's production and creative team with "a crash course '' in psychology, child development, and early childhood education. They also marked the beginning of Jim Henson 's involvement in Sesame Street. Cooney met Henson at one of the seminars; Stone, who was familiar with Henson 's work, felt that if they could not bring him on board, they should "make do without puppets ''.
The producers and writers decided to build the new show around a brownstone or an inner - city street, a choice Davis called "unprecedented ''. Stone was convinced that in order for inner - city children to relate to Sesame Street, it needed to be set in a familiar place. Despite its urban setting, the producers decided to avoid depicting more negativity than what was already present in the child 's environment. Lesser commented, "(despite) all its raucousness and slapstick humor, Sesame Street became a sweet show, and its staff maintains that there is nothing wrong in that ''.
The new show was called the "Preschool Educational Television Show '' in promotional materials; the producers were unable to agree on a name they liked and waited until the last minute to make a decision. In a short, irreverent promotional film shown to public television executives, the producers parodied their "naming dilemma ''. The producers were reportedly "frantic for a title ''; they finally settled on the name that they least disliked: Sesame Street, inspired by Ali Baba 's magical phrase, although there were concerns that it would be too difficult for young children to pronounce. Stone was one of the producers who disliked the name, but, he said, "I was outvoted, for which I 'm deeply grateful ''.
The responsibility of casting for Sesame Street fell to Jon Stone, who set out to form a cast where white actors were in the minority. He did not begin auditions until spring 1969, several weeks before five test shows were due to be produced. He filmed the auditions, and Palmer took them into the field to test children 's reactions. The actors who received the "most enthusiastic thumbs up '' were cast. For example, Loretta Long was chosen to play Susan when the children who saw her audition stood up and sang along with her rendition of "I 'm a Little Teapot ''. Stone reported that casting was the only aspect that was "just completely haphazard ''. Most of the cast and crew found jobs on Sesame Street through personal relationships with Stone and the other producers. Stone hired Bob McGrath (an actor and singer best known at the time for his appearances on Mitch Miller 's sing - along show on NBC) to play Bob, Will Lee to play Mr. Hooper, and Garrett Saunders to play Gordon.
Sesame Street was the first children 's television program that used a curriculum with clear and measurable outcomes, and was the first to use research in the creation of the show 's design and content. Research in Sesame Street had three functions: to test if the show was appealing to children, to discover what could be done to make the show more appealing, and to report to the public and the investors what impact the show had on its young viewers. Ten to fifteen percent of the show 's initial budget of $8 million was devoted to research, and researchers were always present in the studio during the show 's filming. A "Writer 's Notebook '' was developed to assist writers and producers in translating the research and production goals into televised material; this connected the show 's curriculum goals and its script development. The Muppet characters were created to fill specific curriculum needs: Oscar the Grouch, for example, was designed to teach children about their positive and negative emotions. Lesser called the collaboration between researchers and producers, as well as the idea of using television as an educational tool, the "CTW model ''. Cooney agreed, commenting, "From the beginning, we -- the planners of the project -- designed the show as an experimental research project with educational advisers, researchers, and television producers collaborating as equal partners ''.
The producers of Sesame Street believed education through television was possible if they captured and sustained children 's attention; this meant the show needed a strong appeal. Edward Palmer, the CTW 's first Director of Research and the man Cooney credited with building the CTW 's foundation of research, was one of the few academics in the late 1960s researching children 's television. He was recruited by the CTW to test if the curricula developed in the Boston seminars were reaching their audience effectively. Palmer was also tasked with designing and executing the CTW 's in - house research and with working with the Educational Testing Service (ETS). His research was so crucial to Sesame Street that Gladwell asserted, "... without Ed Palmer, the show would have never lasted through the first season ''.
Palmer and his team 's approach to researching the show 's effectiveness was innovative; it was the first time formative research was conducted in this way. For example, Palmer developed "the distractor '', which he used to test if the material shown on Sesame Street captured young viewers ' attention. Two children at a time were brought into the laboratory; they were shown an episode on a television monitor and a slide show next to it. The slides would change every seven seconds, and researchers recorded when the children 's attention was diverted away from the episode. They were able to record almost every second of Sesame Street this way; if the episode captured the children 's interest 80 -- 90 % of the time, the producers would air it, but if it only tested 50 %, they would reshoot. By the fourth season of the show, the episodes rarely tested below 85 %.
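As an illustration of how observations from the "distractor '' could be scored, the sketch below assumes a simple per - interval record of whether a child was watching the episode rather than the slide show; the data format, threshold value, and function names are illustrative assumptions, not a description of Palmer 's actual instruments.

```python
# Illustrative scoring of "distractor" observations (hypothetical data format).
# Each entry records whether a child's attention was on the episode during one
# seven-second slide interval; the score is the fraction of attentive intervals.

def attention_score(observations: list) -> float:
    """Fraction of slide intervals during which the child watched the episode."""
    return sum(observations) / len(observations)

def air_or_reshoot(score: float, threshold: float = 0.85) -> str:
    # The producers aired material that held attention roughly 80-90 % of the
    # time and reshot segments that tested far lower (around 50 %).
    return "air" if score >= threshold else "reshoot"

sample = [True] * 52 + [False] * 8          # 60 intervals observed, 52 attentive
score = attention_score(sample)             # ~0.87
print(f"{score:.0%} attentive -> {air_or_reshoot(score)}")
```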
During the production of Sesame Street 's first season, producers created five one - hour episodes to test the show 's appeal to children and examine their comprehension of the material. Not intended for broadcast, they were presented to preschoolers in 60 homes throughout Philadelphia and in day care centers in New York City in July 1969. The results were "generally very positive ''; children learned from the shows, their appeal was high, and their attention was sustained over the full hour. However, the researchers found that although children 's attention was high during the Muppet segments, their interest wavered during the "Street '' segments, when no Muppets were on screen. This was because the producers had followed the advice of child psychologists who were concerned that children would be confused if human actors and Muppets were shown together. As a result of this decision, the appeal of the test episodes was lower than the target.
The Street scenes, as Palmer described them, were "the glue '' that "pulled the show together '', so producers knew they needed to make significant changes. On the basis of their experience on Captain Kangaroo, Connell, Stone, and Gibbon thought the experts ' opinions were "nonsense ''; Cooney agreed. Lesser called their decision to defy the recommendations of their advisers "a turning point in the history of Sesame Street ''. The producers reshot the Street segments; Henson and his coworkers created Muppets that could interact with the human actors, specifically Oscar the Grouch and Big Bird, who became two of the show 's most enduring characters. In addition, the producers found that children watching the show did not find Saunders ' Gordon likable, and the character was recast with Matt Robinson, who was initially the show 's producer of filmed segments. These test episodes were directly responsible for what Gladwell called "the essence of Sesame Street -- the artful blend of fluffy monsters and earnest adults ''.
Two days before the show 's premiere, a thirty - minute preview entitled This Way to Sesame Street aired on NBC. The show was financed by a $50,000 grant from Xerox. Written by Stone and produced by CTW publicist Bob Hatch, it was taped the day before it aired. Newsday called the preview "a unique display of cooperation between commercial and noncommercial broadcasters ''.
Sesame Street premiered November 10, 1969. It was widely praised for its originality, and was well received by parents as well as children. The show reached only 67.6 % of the nation, but earned a 3.3 Nielsen rating, meaning 1.9 million households and 7 million children watched it each day. In Sesame Street 's first season, the ETS reported that children who watched the show scored higher in tests than less - frequent viewers.
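A household rating point is conventionally the percentage of all U.S. television households tuned to a program; under that standard definition, the 3.3 rating and 1.9 million viewing households quoted above imply a universe of roughly 58 million television households. The back - of - envelope check below is an inference from those two figures, not a number given in the source.

```python
# Back-of-envelope check of the first-season Nielsen figures quoted above,
# assuming rating = viewing households as a percentage of all TV households.

rating_points = 3.3
viewing_households = 1_900_000

implied_tv_households = viewing_households / (rating_points / 100)
print(f"implied U.S. TV households: ~{implied_tv_households:,.0f}")   # ~57.6 million
```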
In November 1970, the cover of Time magazine featured Big Bird, who had received more fan mail than any of the show 's human hosts. The magazine declared, "... It is not only the best children 's show in TV history, it is one of the best parents ' shows as well ''. David Frost, speaking about the versions of Sesame Street that were being produced in other countries, declared it was "a hit everywhere it goes ''. An executive at ABC, while recognizing that Sesame Street was not perfect, said the show "opened children 's TV to taste and wit and substance '' and "made the climate right for improvement ''. Other reviewers predicted commercial television would be forced to improve its children 's programming, something that did not substantially occur until the 1990s. Sesame Street won a Peabody Award, three Emmys, and the Prix Jeunesse award in 1970. President Richard Nixon sent Cooney a congratulatory letter, and Dr. Benjamin Spock predicted the program would result in "better - trained citizens, fewer unemployables in the next generation, fewer people on welfare, and smaller jail populations ''.
Sesame Street was not without its detractors; there was little criticism of the show in the months following its premiere, but it increased at the end of its first season and beginning of the second season. In May 1970, a state commission in Mississippi voted to not air the show on the state 's newly launched public television network. A member of the commission leaked the vote to The New York Times, stating that "Mississippi was not yet ready '' for the show 's integrated cast. Cooney called the ban "a tragedy for both the white and black children of Mississippi ''. The state commission reversed its decision after the vote made national news.
The producers of Sesame Street made a few changes in its second season. Segments that featured children became more spontaneous and allowed more impromptu dialogue, even when it meant cutting other segments. Since federal funds had been used to produce the show, more segments of the population insisted upon being represented on Sesame Street; for example, the show was criticized by Hispanic groups for the lack of Latino characters in the early years of production. A committee of Hispanic activists, commissioned by the CTW in 1970, called Sesame Street "racist '' and said the show 's bilingual aspects were of "poor quality and patronizing ''. The CTW responded to these critics by hiring Hispanic actors, production staff, and researchers. By the mid-70s, Morrow reported that "the show included Chicano and Puerto Rican cast members, films about Mexican holidays and foods, and cartoons that taught Spanish words ''.
While New York Magazine reported criticism of the presence of strong single women in the show, organizations like the National Organization for Women (NOW) expressed concerns that the show needed to be "less male - oriented ''. For example, members of NOW took exception to the character Susan, who was originally a housewife. They complained about the lack of, as Morrow put it, "credible female Muppets '' on the show; Morrow reported that Henson 's response was that "women might not be strong enough to hold the puppets over the long hours of taping ''. The show 's producers responded by making Susan a nurse and by hiring a female writer.
By the mid-1970s, Sesame Street, according to Davis, had become "an American institution ''. ETS conducted two "landmark '' studies of the show in 1970 and 1971 which demonstrated Sesame Street had a positive educational impact on its viewers. The results of these studies led to the producers securing funding for the show over the next several years, and provided the CTW with additional ways to promote it. By the second season, Sesame Street had become so popular that the design of ETS ' experiments to track the show 's educational outcomes had to be changed: instead of comparing viewers with a control group of non-viewers, the researchers studied the differences among levels of viewing. They found that children who watched Sesame Street more frequently had a higher comprehension of the material presented.
Producer Jon Stone was instrumental in guiding the show during these years. According to Davis, Stone "gave Sesame Street its soul ''; without him "there would not have been Sesame Street as we know it ''. Frank Oz regarded Stone as "the father of Sesame Street '', and Cooney considered Stone "the key creative talent on Sesame Street '' and "probably the most brilliant writer of children 's material in America ''. Stone was able to recognize and mentor talented people for his crew. He actively hired and promoted women during a time when few women earned top production jobs in television. His policies provided the show with a succession of female producers and writers, many of whom went on to lead the boom in children 's programming at Nickelodeon, the Disney Channel, and PBS in the 1990s and 2000s. One of these women was Dulcy Singer, who later became the first female executive producer of Sesame Street.
After the show 's initial success, its producers began to think about its survival beyond its development and first season and decided to explore other funding sources. The CTW had initially depended upon government agencies and private foundations to develop the show; this protected it from the financial pressures experienced by commercial networks, but created problems in finding continued support. This era in the show 's history was marked by conflicts between the CTW and the federal government; in 1978, the US Department of Education refused to deliver a $2 million check until the last day of the CTW 's fiscal year. As a result, the CTW decided to depend upon licensing arrangements, publishing, and international sales for its funding. Henson owned the trademarks to the Muppet characters: he was reluctant to market them at first, but agreed when the CTW promised that the profits from toys, books, and other products would be used exclusively to fund the CTW. The producers demanded complete control over all products and product decisions; any product line associated with the show had to be educational, inexpensive, and not advertised during its airings. The CTW approached Random House to establish and manage a non-broadcast materials division. Random House and the CTW named Christopher Cerf to assist the CTW in publishing books and other materials that emphasized the curriculum. In 1980, the CTW began to produce a touring stage production based upon the show, written by Connell and performed by the Ice Follies.
Shortly after the premiere of Sesame Street, the CTW was approached by producers, educators, and officials in other nations, requesting that a version of the show be aired in their countries. Former CBS executive Mike Dann left commercial television to become vice-president of the CTW and Cooney 's assistant; Dann began what Charlotte Cole, vice president for the CTW 's International Research department, called the "globalization '' of Sesame Street. A flexible model was developed, based upon the experiences of the creators and producers of the original show. The shows came to be called "co-productions '', and they contained original sets, characters, and curriculum goals. Depending upon each country 's needs and resources, different versions were produced, including dubbed versions of the original show and independent programs. By 2009, Sesame Street had expanded into 140 countries; The New York Times reported in 2005 that income from the CTW 's international co-productions of the show was $96 million.
Sesame Street 's cast expanded in the 1970s, better fulfilling the show 's original goal of greater diversity in both human and Muppet characters. The cast members who joined the show were Sonia Manzano (Maria), who also wrote for the show, Northern Calloway (David), Alaina Reed (Olivia), Emilio Delgado (Luis), Linda Bove (Linda), and Buffy Sainte - Marie (Buffy). In 1975, Roscoe Orman became the third actor to play Gordon, succeeding Hal Miller, who had briefly replaced Matt Robinson.
New Muppet characters were introduced during the 1970s. Count von Count was created and performed by Jerry Nelson, who also voiced Mr. Snuffleupagus, a large Muppet that required two puppeteers to operate. Richard Hunt, who, in Jon Stone 's words, joined the Muppets as a "wild - eyed 18 - year - old and grew into a master puppeteer and inspired teacher '', created Gladys the Cow, Forgetful Jones, Don Music, and the construction worker Sully. Telly Monster was performed by Brian Muehl; Marty Robinson took over the role in 1984. Frank Oz created Cookie Monster. Matt Robinson created the "controversial '' (as Davis called him) character Roosevelt Franklin. Fran Brill, the first female puppeteer for the Muppets, joined the Henson organization in 1970, and originated the character Prairie Dawn. In 1975, Henson created The Muppet Show, which was filmed and produced in London; Henson brought many of the Muppet performers with him, so opportunities opened up for new performers and puppets to appear on Sesame Street.
The CTW wanted to attract the best composers and lyricists for Sesame Street, so songwriters like Joe Raposo, the show 's music director, and writer Jeff Moss were allowed to retain the rights to the songs they wrote. The writers earned lucrative profits, and the show was able to sustain public interest. Raposo 's "I Love Trash '', written for Oscar the Grouch, was included on the first album of Sesame Street songs, The Sesame Street Book & Record, recorded in 1970. Moss ' "Rubber Duckie '', sung by Henson for Ernie, remained on the Top - 40 Billboard charts for seven weeks that same year. Another Henson song, written by Raposo for Kermit the Frog in 1970, "Bein ' Green '', which Davis called "Raposo 's best - regarded song for Sesame Street '', was later recorded by Frank Sinatra and Ray Charles. "Sing '', which became a hit for The Carpenters in 1973, and "Somebody Come and Play '', were also written by Raposo for Sesame Street.
In 1978, Stone and Singer produced and wrote the show 's first special, the "triumphant '' Christmas Eve on Sesame Street, which included an O. Henry - inspired storyline in which Bert and Ernie gave up their prized possessions -- Ernie his rubber ducky and Bert his paper clip collection -- to purchase each other Christmas gifts. Bert and Ernie were played by Frank Oz and Jim Henson, who in real life were, like the puppets they played, colleagues and friends; to Davis, this demonstrated the puppeteers ' remarkable ability to play "puppetry 's Odd Couple ''. In Singer 's opinion, the special -- which Stone also directed -- demonstrated Stone 's "soul '', and Sonia Manzano called it a good example of what Sesame Street was all about. The special won Emmys for Stone and Singer in 1979, beating, among others, an independently produced Sesame Street special for CBS.
By the show 's tenth anniversary in 1979, nine million American children under the age of six were watching Sesame Street daily. Four out of five children had watched it over a six - week period, and 90 % of children from low - income inner - city homes regularly viewed the show.
In 1984, the Federal Communications Commission (FCC) deregulated commercial restrictions on children 's television. Advertising during network children 's programs almost doubled, and deregulation resulted in an increase in commercially oriented programming. Sesame Street was successful during this era of deregulation even though the United States government terminated all federal funding of the CTW in 1981. By 1987, the show was earning $42 million per year from its magazine division, book royalties, product licensing, and foreign income -- enough to cover two - thirds of its expenses. Its remaining budget, plus a $6 million surplus, was covered by revenue from its PBS broadcasts.
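Taken together, the 1987 figures above imply a total budget of roughly $63 million, with about $27 million coming from PBS broadcast revenue. The short calculation below only restates that arithmetic; the derived totals are inferences, not figures given in the source.

```python
# Implied 1987 budget arithmetic from the figures quoted above.

self_generated_income = 42_000_000     # magazines, book royalties, licensing, foreign income
share_of_expenses_covered = 2 / 3

total_expenses = self_generated_income / share_of_expenses_covered   # ~$63 million
remaining_budget = total_expenses - self_generated_income            # ~$21 million
surplus = 6_000_000

pbs_broadcast_revenue = remaining_budget + surplus                    # ~$27 million
print(f"implied total expenses:        ~${total_expenses:,.0f}")
print(f"implied PBS broadcast revenue: ~${pbs_broadcast_revenue:,.0f}")
```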
According to Davis, Sesame Street 's second decade was spent "turning inward, expanding its young viewers ' world ''. The show 's curriculum grew to include more "affective '' teaching -- relationships, ethics, and positive and negative emotions. Many of the show 's storylines were taken from the experiences of its writing staff, cast, and crew. In 1982, Will Lee, who had played Mr. Hooper since the show 's premiere, died. For the 1983 season, the show 's producers and research staff decided they would explain Mr. Hooper 's death to their preschool audience, instead of recasting the role: the writer of that episode, Norman Stiles, said, "We felt we owed something to a man we respected and loved ''. They convened a group of psychologists, religious leaders, and other experts in the field of grief, loss, and separation. The research team conducted a series of studies before the episode to ascertain if children were able to understand the messages they wanted to convey about Mr. Hooper 's death; the research showed most children did understand. Parents ' reactions to the episode were, according to the CTW 's own reports, "overwhelmingly positive ''. The episode, which won an Emmy, aired on Thanksgiving Day in 1983 so parents could be home to discuss it with their children. Author David Borgenicht called the episode "poignant ''; Davis called it "a landmark broadcast '' and "a truly memorable episode, one of the show 's best ''. Caroll Spinney, who played Big Bird and who drew the caricatures prominently used in the episode, reported the cast and crew were moved to tears during filming.
In the mid-1980s, Americans were becoming more aware of the prevalence of child abuse, so Sesame Street 's researchers and producers decided to "reveal '' Mr. Snuffleupagus in 1985. "Snuffy '' had never been seen by any of the adults on the show and was considered Big Bird 's "imaginary friend ''. The show 's producers were concerned about the message being sent to children; "If children saw that the adults did n't believe what Big Bird said (even though it was true), they would be afraid to talk to adults about dramatic or disturbing things that happened to them ''.
For the 1988 and 1989 seasons, the topics of love, marriage, and childbirth were addressed when the show presented a storyline in which the characters Luis and Maria fall in love, marry, and have a child named Gabi. Sonia Manzano, the actress who played Maria, had married and become pregnant; according to the book Sesame Street Unpaved, published after the show 's thirtieth anniversary in 1999, Manzano 's real - life experiences gave the show 's writers and producers the idea. Before writing began, research was done to gain an understanding of what previous studies had revealed about preschoolers ' understanding of love, marriage, and family. The show 's staff found that, at the time, there was very little relevant research on children 's understanding of these topics, and no books for children had been written about them. Studies done after the episodes about Maria 's pregnancy aired showed that, as a result of watching these episodes, children 's understanding of pregnancy increased.
Davis called the 1990s a "time of transition on Sesame Street ''. Several people involved in the show from its beginnings died during this period: Jim Henson in 1990 at the age of 53 "from a runaway strep infection gone stubbornly, foolishly untreated ''; songwriter Joe Raposo from non-Hodgkin's lymphoma fifteen months earlier; long - time cast member Northern Calloway of cardiac arrest in January 1990; puppeteer Richard Hunt of AIDS in early 1992; CTW founder and producer David Connell of bladder cancer in 1995; director Jon Stone of amyotrophic lateral sclerosis in 1997; and writer Jeff Moss of colon cancer in 1998.
By the early 1990s, Sesame Street was, as Davis put it, "the undisputed heavyweight champion of preschool television ''. Entertainment Weekly reported in 1991 that the show 's music had been honored with eight Grammys. The show 's dominance, however, was soon challenged by another PBS television show for preschoolers, Barney & Friends, and Sesame Street 's ratings declined. The producers of Sesame Street responded, at the show 's twenty - fifth anniversary in 1993, by expanding and redesigning the show 's set, calling it "Around the Corner ''. New human and Muppet characters were introduced, including Zoe (performed by Fran Brill), baby Natasha and her parents Ingrid and Humphrey, and Ruthie (played by comedian Ruth Buzzi). The "Around the Corner '' set was dismantled in 1997. Zoe, one of the few characters that survived, was created to include another female Muppet on the show: her spunky and fearless personality was intended to break female stereotypes. According to Davis, she was the first character developed on the show by marketing and product development specialists, who worked with the researchers at the CTW. (The quest for a "break - out '' female Muppet character continued into 2006 with Abby Cadabby, who was created after nine months of research.) In 1998, for the first time in the show 's history, Sesame Street pursued funding by accepting corporate sponsorship. Consumer advocate Ralph Nader, who had been a guest on the show, urged parents to protest the move by boycotting the show.
For Sesame Street 's 30th anniversary in 1999, its producers researched the reasons for the show 's lower ratings. For the first time since the show debuted, the producers and a team of researchers analyzed Sesame Street 's content and structure during a series of two - week - long workshops. They also studied how children 's viewing habits had changed in the past thirty years. They found that although the show was produced for those between the ages of three and five, children began watching it at a younger age. Preschool television had become more competitive, and the CTW 's research showed the traditional magazine format was not the best way to attract young children 's attention. The growth of home videos during the ' 80s and the increase of thirty - minute children 's shows on cable had demonstrated that children 's attention could be sustained for longer periods of time, but the CTW 's researchers found that their viewers, especially the younger ones, lost attention in Sesame Street after 40 to 45 minutes.
Beginning in 1998, a new 15 - minute segment shown at the end of each episode, "Elmo 's World '', used traditional elements (animation, Muppets, music, and live - action film), but had a more sustained narrative. "Elmo 's World '' followed the same structure each episode, and depended heavily on repetition. Unlike the realism of the rest of the show, the segment took place in a stylized crayon - drawing universe as conceived by its host. Elmo, who represented the three - to four - year - old child, was chosen as host of the closing segment because he had always tested well with that part of the audience. He was created in 1980 and originally performed by Brian Muehl, and later by Richard Hunt, but did not become what his eventual portrayer, Kevin Clash, called a "phenomenon '' until Clash took over the role in 1985. Eventually, Elmo became, as Davis reported, "the embodiment '' of Sesame Street, and "the marketing wonder of our age '' when five million "Tickle Me Elmo '' dolls were sold in 1996. Clash believed the "Tickle Me Elmo '' phenomenon made Elmo a household name and led to the "Elmo 's World '' segment. Michael Jeter, who played Mr. Noodle 's brother, Mister Noodle, from 1999 to 2003, was a favorite with younger audiences.
In 2002, Sesame Street 's producers went further in changing the show to reflect its younger demographic, fundamentally restructuring the program, which had relied on "Street scenes '' interrupted by live - action videos and animation. The target age for Sesame Street shifted downward, from four years to three years, after the show 's 33rd season. As co-executive producer Arlene Sherman stated, "We basically deconstructed the show ''. The producers expanded upon the "Elmo 's World '' approach by changing from a magazine format to a narrative format, which made the show easier for young children to navigate. Sherman called the show 's new look "startlingly different ''. Following its tradition of addressing emotionally difficult topics, Sesame Street 's producers chose to address the attacks of 9 / 11 in the season 's premiere episode, which aired February 4, 2002. This episode, as well as a series of four episodes that aired after Hurricane Katrina in 2005, was used in Sesame Workshop 's Community Outreach program.
In 2006, the United States Department of State called Sesame Street "the most widely viewed children 's television show in the world ''. Over half of the show 's international co-productions were made after 2001; according to the 2006 documentary The World According to Sesame Street, the events of 9 / 11 inspired the producers of these co-productions. In 2003, Takalani Sesame, a South African co-production, elicited criticism in the United States when its producers created Kami, the first HIV - positive Muppet, whose purpose was educating children in South Africa about the epidemic of AIDS. The controversy, which surprised the Sesame Workshop, was short - lived and died down after Kofi Annan and Jerry Falwell praised the Workshop 's efforts. By 2006, Sesame Street had won more Emmy Awards than any other children 's show, including winning the outstanding children 's series award for twelve consecutive years -- every year the Emmys included the category. By 2009, the show had won 118 Emmys throughout its history, and was awarded the Outstanding Achievement Emmy for its 40 years on the air.
By Sesame Street 's 40th anniversary, the show was ranked the fifteenth most popular children 's show on television. When the show premiered in 1969, 130 episodes a year were produced; in 2009, because of rising costs, twenty - six episodes were made. In 2000, the Children 's Television Workshop, which had changed its name that June to the Sesame Workshop (SW) to better reflect its entry into non-television and interactive media, launched a website with a library of free video clips and free podcasts from throughout the show 's history. The 2008 -- 2009 recession, which led to budget cuts for many nonprofit arts organizations, severely affected Sesame Street; in spring 2009, the SW had to lay off 20 % of its staff.
Sesame Street 's 40th anniversary was commemorated by the 2008 publication of Street Gang: The Complete History of Sesame Street, by Michael Davis, which has been called "the definitive statement '' about the history of the show.
Starting in 2009, the producers of Sesame Street took steps to bring back older viewers, and by the end of the 40th season the show had also increased its viewership among 3 - to 5 - year - olds. In 2012, the show 's 43rd season, Elmo 's World was replaced with Elmo the Musical, which was targeted at the program 's older viewers. Subsequently, in September 2014, starting with the show 's 45th season, Sesame Workshop began distributing a half - hour version of the program to PBS member stations. The new version complements the existing hour - long broadcast and focuses more on interstitial segments, although certain segments such as Elmo the Musical and Abby 's Flying Fairy School are omitted from it. It was added because of increasing mobile and online viewing among children, growing competition for preschoolers on linear and online television, an increase in use of PBS Kids ' mobile video app during 2013, and decreasing broadcast viewership. The half - hour version airs weekday afternoons on PBS member television stations, with the hour - long version continuing to air in the morning, and was made available for streaming online and on mobile devices through PBS ' website, mobile app and Roku channel.
On August 13, 2015, as part of a five - year programming and development deal, Sesame Workshop announced that first - run episodes of Sesame Street would move to premium television service HBO (which had not aired any original children 's programming since 2005) in late 2015. Sesame Workshop sought the deal because of declining revenue from viewer donations, decreasing distribution fees paid by PBS member stations, and falling merchandise licensing income (particularly because of Sesame Workshop 's dependence upon revenue from DVD sales), with the intent of having the show remain on PBS in some fashion (HBO already had involvement in public television at the time of the deal, providing funding for the talk show Charlie Rose). The deal also came in the wake of changing viewing habits among American children over the previous ten years. HBO will hold first - run rights to all newer episodes of the series starting with season 46, which will then air on PBS member stations following a nine - month exclusivity window, with no charge to the stations for airing the content; however, HBO has not announced whether first - run episodes will air on the pay service 's main channel or its multiplex channel HBO Family. The agreement also gives HBO exclusive rights to stream past and future Sesame Street episodes on HBO Go and HBO Now -- assuming those rights from Amazon Video and Netflix. On August 14, Sesame Workshop announced that it would phase out its in - house subscription streaming service, Sesame Go, as a standalone service; rather than shutting it down entirely, it intends to scale back the service to either provide a reduced slate of free content or act as a portal for Sesame Street 's website.
In April 2017, Sesame Street introduced Julia, a new Muppet with autism, to the show.
who starred in the original king and i | The King and I - wikipedia
The King and I is the fifth musical by the team of composer Richard Rodgers and dramatist Oscar Hammerstein II. It is based on Margaret Landon 's novel, Anna and the King of Siam (1944), which is in turn derived from the memoirs of Anna Leonowens, governess to the children of King Mongkut of Siam in the early 1860s. The musical 's plot relates the experiences of Anna, a British schoolteacher hired as part of the King 's drive to modernize his country. The relationship between the King and Anna is marked by conflict through much of the piece, as well as by a love to which neither can admit. The musical premiered on March 29, 1951, at Broadway 's St. James Theatre. It ran for nearly three years, making it the fourth longest - running Broadway musical in history at the time, and has had many tours and revivals.
In 1950, theatrical attorney Fanny Holtzmann was looking for a part for her client, veteran leading lady Gertrude Lawrence. Holtzmann realized that Landon 's book would provide an ideal vehicle and contacted Rodgers and Hammerstein, who were initially reluctant but agreed to write the musical. The pair initially sought Rex Harrison to play the supporting part of the King, a role he had played in the 1946 film made from Landon 's book, but he was unavailable. They settled on the young actor and television director Yul Brynner.
The musical was an immediate hit, winning Tony Awards for Best Musical, Best Actress (for Lawrence) and Best Featured Actor (for Brynner). Lawrence died unexpectedly of cancer a year and a half after the opening, and the role of Anna was played by several actresses during the remainder of the Broadway run of 1,246 performances. A hit London run and U.S. national tour followed, together with a 1956 film for which Brynner won an Academy Award, and the musical was recorded several times. In later revivals, Brynner came to dominate his role and the musical, starring in a four - year national tour culminating in a 1985 Broadway run shortly before his death.
Christopher Renshaw directed major revivals on Broadway (1996), winning the Tony Award for Best Revival, and in the West End (2000). A 2015 Broadway revival won another Tony for Best Revival. Both professional and amateur revivals of The King and I continue to be staged regularly throughout the English - speaking world.
Mongkut, King of Siam, was about 57 years old in 1861. He had lived half his life as a Buddhist monk, was an able scholar, and founded a new order of Buddhism and a temple in Bangkok (paid for by his half - brother, King Nangklao). Through his decades of devotion, Mongkut acquired an ascetic lifestyle and a firm grasp of Western languages. When Nangklao died in 1850, Mongkut became king. At that time, various European countries were striving for dominance, and American traders sought greater influence in Southeast Asia. He ultimately succeeded in keeping Siam an independent nation, partly by familiarizing his heirs and harem with Western ways.
In 1861, Mongkut wrote to his Singapore agent, Tan Kim Ching, asking him to find a British lady to be governess to the royal children. At the time, the British community in Singapore was small, and the choice fell on a recent arrival there, Anna Leonowens (1831 -- 1915), who was running a small nursery school in the colony. Leonowens was the Anglo - Indian daughter of an Indian Army soldier and the widow of Thomas Owens, a clerk and hotel keeper. She had arrived in Singapore two years previously, claiming to be the genteel widow of an officer and explaining her dark complexion by stating that she was Welsh by birth. Her deception was not detected until long after her death, and had still not come to light when The King and I was written.
Upon receiving the King 's invitation, Leonowens sent her daughter, Avis, to school in England, to give Avis the social advantage of a prestigious British education, and traveled to Bangkok with her five - year - old son, Louis. King Mongkut had sought a Briton to teach his children and wives after trying local missionaries, who used the opportunity to proselytize. Leonowens initially asked for $150 in Singapore currency per month. Her additional request, to live in or near the missionary community to ensure she was not deprived of Western company, aroused suspicion in Mongkut, who cautioned in a letter, "we need not have teacher of Christianity as they are abundant here ''. King Mongkut and Leonowens came to an agreement: $100 per month and a residence near the royal palace. At a time when most transport in Bangkok was by boat, Mongkut did not wish to have to arrange for the teacher to get to work every day. Leonowens and Louis temporarily lived as guests of Mongkut 's prime minister, and after the first house offered was found to be unsuitable, the family moved into a brick residence (wooden structures decayed quickly in Bangkok 's climate) within walking distance of the palace.
In 1867, Leonowens took a six - month leave of absence to visit her daughter Avis in England, intending to deposit Louis at a school in Ireland and return to Siam with Avis. However, due to unexpected delays and opportunities for further travel, Leonowens was still abroad in late 1868, when Mongkut fell ill and died. Leonowens did not return to Siam, although she continued to correspond with her former pupil, the new king Chulalongkorn.
In 1950, British actress Gertrude Lawrence 's business manager and attorney, Fanny Holtzmann, was looking for a new vehicle for her client when the 1944 Margaret Landon novel Anna and the King of Siam (a fictionalized version of Leonowens ' experiences) was sent to her by Landon 's agent. According to Rodgers biographer Meryle Secrest, Holtzmann was worried that Lawrence 's career was fading. The 51 - year - old actress had appeared only in plays, not in musicals, since Lady in the Dark closed in 1943. Holtzmann agreed that a musical based on Anna and the King of Siam would be ideal for her client, who purchased the rights to adapt the novel for the stage.
Holtzmann initially wanted Cole Porter to write the score, but he declined. She was going to approach Noël Coward next, but happened to meet Dorothy Hammerstein (Oscar 's wife) in Manhattan. Holtzmann told Dorothy Hammerstein that she wanted Rodgers and Hammerstein to create a show for Lawrence, and asked her to see that her husband read a book that Holtzmann would send over. In fact, both Dorothy Rodgers and Dorothy Hammerstein had read the novel in 1944 and had urged their husbands to consider it as a possible subject for a musical. Dorothy Hammerstein had known Gertrude Lawrence since 1925, when they had both appeared in André Charlot 's London Revue of 1924 on Broadway and on tour in North America.
Rodgers and Hammerstein had disliked Landon 's novel as a basis for a musical when it was published, and their views still held. It consists of vignettes of life at the Siamese court, interspersed with descriptions of historical events unconnected with each other, except that the King creates most of the difficulties in the episodes, and Anna tries to resolve them. Rodgers and Hammerstein could see no coherent story from which a musical could be made until they saw the 1946 film adaptation, starring Irene Dunne and Rex Harrison, and how the screenplay united the episodes in the novel. Rodgers and Hammerstein were also concerned about writing a star vehicle. They had preferred to make stars rather than hire them, and engaging the legendary Gertrude Lawrence would be expensive. Lawrence 's voice was also a worry: her limited vocal range was diminishing with the years, while her tendency to sing flat was increasing. Lawrence 's temperament was another concern: though she could not sing like one, the star was known to be capable of diva - like behavior. In spite of this, they admired her acting -- what Hammerstein called her "magic light '', a compelling presence on stage -- and agreed to write the show. For her part, Lawrence committed to remaining in the show until June 1, 1953, and waived the star 's usual veto rights over cast and director, leaving control in the hands of the two authors.
Hammerstein found his "door in '' to the play in Landon 's account of a slave in Siam writing about Abraham Lincoln. This would eventually become the narrated dance, "The Small House of Uncle Thomas ''. Since a frank expression of romantic feelings between the King and Anna would be inappropriate in view of both parties ' upbringing and prevailing social mores, Hammerstein wrote love scenes for a secondary couple, Tuptim, a junior wife of the King, and Lun Tha, a scholar. In the Landon work, the relationship is between Tuptim and a priest, and is not romantic. The musical 's most radical change from the novel was to have the King die at the end of the play. Also, since Lawrence was not primarily a singer, the secondary couple gave Rodgers a chance to write his usual "soaring '' romantic melodies. In an interview for The New York Times, Hammerstein indicated that he wrote the first scene before leaving for London and the West End production of Carousel in mid-1950; he wrote a second scene while in the British capital.
The pair had to overcome the challenge of how to represent Thai speech and music. Rodgers, who had experimented with Asian music in his short - lived 1928 musical with Lorenz Hart titled Chee - chee, did not wish to use actual Thai music, which American audiences might not find accessible. Instead, he gave his music an exotic flavor, using open fifths and chords in unusual keys, in ways pleasant to Western ears. Hammerstein faced the problem of how to represent Thai speech; he and Rodgers chose to convey it by musical sounds, made by the orchestra. For the King 's style of speech, Hammerstein developed an abrupt, emphatic way of talking, which was mostly free of articles, as are many East Asian languages. The forceful style reflected the King 's personality and was maintained even when he sang, especially in his one solo, "A Puzzlement ''. Many of the King 's lines, including his first utterance, "Who? Who? Who? '', and much of the initial scene between him and Anna, are drawn from Landon 's version. Nevertheless, the King is presented more sympathetically in the musical than in the novel or the 1946 film, as the musical omits the torture and burning at the stake of Lady Tuptim and her partner.
With Rodgers laid up with back trouble, Hammerstein completed most of the musical 's book before many songs were set to music. Early on, Hammerstein contacted set designer Jo Mielziner and costume designer Irene Sharaff and asked them to begin work in coordination with each other. Sharaff communicated with Jim Thompson, an American who had revived the Thai silk industry after World War II. Thompson sent Sharaff samples of silk cloth from Thailand and pictures of local dress from the mid-19th century. One such picture, of a Thai woman in western dress, inspired the song "Western People Funny '', sung by the King 's chief wife, Lady Thiang, while dressed in western garb.
Producer Leland Hayward, who had worked with the duo on South Pacific, approached Jerome Robbins to choreograph a ballet for "The Small House of Uncle Thomas ''. Robbins was very enthusiastic about the project and asked to choreograph the other musical numbers as well, although Rodgers and Hammerstein had originally planned little other dancing. Robbins staged "The Small House of Uncle Thomas '' as an intimate performance, rather than a large production number. His choreography for the parade of the King 's children to meet their teacher ("March of the Royal Siamese Children '') drew great acclaim. Robert Russell Bennett provided the orchestrations, and Trude Rittmann arranged the ballet music.
The pair discussed having an Act 1 musical scene involving Anna and the King 's wives. The lyrics for that scene proved to be very difficult for Hammerstein to write. He first thought that Anna would simply tell the wives something about her past, and wrote such lyrics as "I was dazzled by the splendor / Of Calcutta and Bombay '' and "The celebrities were many / And the parties very gay / (I recall a curry dinner / And a certain Major Grey). '' Eventually, Hammerstein decided to write about how Anna felt, a song which would not only explain her past and her motivation for traveling with her son to the court of Siam, but also serve to establish a bond with Tuptim and lay the groundwork for the conflict that devastates Anna 's relationship with the King. "Hello, Young Lovers '', the resulting song, was the work of five exhausting weeks for Hammerstein. He finally sent the lyrics to Rodgers by messenger and awaited his reaction. Hammerstein considered the song his best work and was anxious to hear what Rodgers thought of it, but no comment came from Rodgers. Pride kept Hammerstein from asking. Finally, after four days, the two happened to be talking on the phone about other matters, and at the end of the conversation, Rodgers stated, very briefly, that the lyric was fine. Josh Logan, who had worked closely with Hammerstein on South Pacific, listened to the usually unflappable writer pour out his unhappy feelings. It was one of the few times that Hammerstein and Rodgers did not display a united front.
Although the part of the King was only a supporting role to Lawrence 's Anna, Hammerstein and Rodgers thought it essential that a well - known theatrical actor play it. The obvious choice was Rex Harrison, who had played the King in the movie, but he was booked, as was Noël Coward. Alfred Drake, the original Curly in Oklahoma!, made contractual demands which were deemed too high. With time running short before rehearsals, finding an actor to play the King became a major concern. Mary Martin, the original Nellie Forbush in South Pacific, suggested that her co-star in a 1946 musical set in China, Lute Song, try for the role. Rodgers recounted the audition of the Russian - American performer, Yul Brynner:
They told us the name of the first man and out he came with a bald head and sat cross-legged on the stage. He had a guitar and he hit his guitar one whack and gave out with this unearthly yell and sang some heathenish sort of thing, and Oscar and I looked at each other and said, "Well, that 's it. ''
Brynner termed Rodgers ' account "very picturesque, but totally inaccurate ''. He recalled that as an established television director (in CBS 's Starlight Theatre, for example), he was reluctant to go back on the stage. His wife, his agent and Martin finally convinced him to read Hammerstein 's working script, and once he did, he was fascinated by the character of the King and was eager to do the project. In any case, Brynner 's fierce, mercurial, dangerous, yet surprisingly sensitive King was an ideal foil for Lawrence 's strong - willed, yet vulnerable Anna, and when the two finally came together in "Shall We Dance? '', where the King hesitantly touches Anna 's waist, the chemistry was palpable.
Pre-rehearsal preparations began in late 1950. Hammerstein had wanted Logan to direct and co-write the book, as he had for South Pacific, but when Logan declined, Hammerstein decided to write the entire book himself. Instead of Logan, the duo hired as director John van Druten, who had worked with Lawrence years earlier. The costume designer, Sharaff, wryly pointed the press to the incongruity of a Victorian British governess in the midst of an exotic court: "The first - act finale of The King and I will feature Miss Lawrence, Mr. Brynner, and a pink satin ball gown. '' Mielziner 's set plan was the simplest of the four Rodgers and Hammerstein musicals he had worked on, with one main set (the throne room), a number of front - stage drops (for the ship and Anna 's room, for example) and the entire stage cleared for "The Small House of Uncle Thomas ''.
The show was budgeted at $250,000 (US $2,310,000 in 2016 dollars), making it the most expensive Rodgers and Hammerstein production to that point and prompting some mockery that the costs exceeded even those of their expensive flop Allegro. Investors included Hammerstein, Rodgers, Logan, Martin, Billy Rose and Hayward. The children who were cast as the young princes and princesses came from a wide range of ethnic backgrounds, including Puerto Rican and Italian, though none were Thai. Johnny Stewart was the original Prince Chulalongkorn but left the cast after only three months, replaced by Ronnie Lee. Sandy Kennedy was Louis, and Broadway veteran Larry Douglas played Lun Tha.
Shortly before rehearsals began in January 1951, Rodgers had the first Tuptim, Doretta Morrow, sing the entire score to Lawrence, including Lawrence 's own songs. Lawrence listened calmly, but when she met Rodgers and Hammerstein the following day, she treated Rodgers coldly, apparently seeing the composer 's actions as flaunting her vocal deficiencies. Hammerstein and Rodgers ' doubts about whether Lawrence could handle the part were assuaged by the sheer force of her acting. James Poling, a writer for Collier 's who was allowed to attend the rehearsals, wrote of Lawrence preparing "Shall I Tell You What I Think of You? '':
She took the center of the barren stage wearing, for practice, a dirty muslin hoop over her slacks, with an old jacket thrown over her shoulders for warmth. She began rather quietly on the note, "Your servant! Your servant! Indeed I 'm not your servant! '' Then she gradually built the scene, slowly but powerfully, until, in a great crescendo, she ended prone on the floor, pounding in fury, and screaming, "Toads! Toads! Toads! All of your people are toads. '' When she finished, the handful of professionals in the theatre burst into admiring applause.
At his first meeting with Sharaff, Brynner, who had only a fringe of hair, asked what he was to do about it. When told he was to shave it, Brynner was horror - struck and refused, convinced he would look terrible. He finally gave in during tryouts and put dark makeup on his shaved head. The effect was so well - received that it became Brynner 's trademark.
Lawrence 's health caused her to miss several rehearsals, though no one knew what was wrong with her. When the tryout opened in New Haven, Connecticut on February 27, 1951, the show was nearly four hours long. Lawrence, suffering from laryngitis, had missed the dress rehearsal but managed to make it through the first public performance. The Variety critic noted that despite her recent illness she "slinks, acts, cavorts, and in general exhibits exceedingly well her several facets for entertaining '', but the Philadelphia Bulletin printed that her "already thin voice is now starting to wear a great deal thinner ''. Leland Hayward came to see the show in New Haven and shocked Rodgers by advising him to close it before it went any further. Additionally, when the show left New Haven for Boston for more tryout performances, it was still at least 45 minutes too long. Gemze de Lappe, who was one of the dancers, recalled one cut that she regretted:
They took out a wonderful scene. Mrs. Anna 's first entrance into the palace comes with a song in which she sings, "Over half a year I have been waiting, waiting, waiting, waiting, waiting, waiting outside your door. '' At the end she points her umbrella at him, or something like that, and the King says "Off with her head '' or words to that effect, and the eunuchs pick her up and carry her off. The King says "Who, who, who? '' with great satisfaction, and finds out that he has just thrown out the English schoolteacher. So he says, "Bring her back! '' and she is ushered in... we all loved it.
This song, "Waiting '', was a trio for Anna, the King, and the Kralahome (the King 's prime minister). "Who Would Refuse? '', the Kralahome 's only solo, was also dropped. Left without a note to sing, Mervyn Vye abandoned the show and was replaced by John Juliano. "Now You Leave '', a song for Lady Thiang (played by Dorothy Sarnoff in the original production), was also cut. After the cuts, Rodgers and Hammerstein felt that the first act was lacking something. Lawrence suggested that they write a song for Anna and the children. Mary Martin reminded them of a song that had been cut from South Pacific, "Suddenly Lucky ''. Hammerstein wrote a new lyric for the melody, and the resulting song became "Getting to Know You ''. "Western People Funny '' and "I Have Dreamed '' were also added in Boston.
Brynner regretted that there were not more tryout performances, feeling that the schedule did not give him an adequate opportunity to develop the complex role of the King. When he told this to Hammerstein and Rodgers, they asked what sort of performance they would get from him, and he responded, "It will be good enough, it will get the reviews. ''
In 1862, a strong - willed, widowed schoolteacher, Anna Leonowens, arrives in Bangkok, Siam (later known as Thailand) at the request of the King of Siam to tutor his many children. Anna 's young son, Louis, fears the severe countenance of the King 's prime minister, the Kralahome, but Anna refuses to be intimidated ("I Whistle a Happy Tune ''). The Kralahome has come to escort them to the palace, where they are expected to live -- a violation of Anna 's contract, which calls for them to live in a separate house. She considers returning to Singapore aboard the vessel that brought them, but goes with her son and the Kralahome.
Several weeks pass, during which Anna and Louis are confined to their palace rooms. The King receives a gift from the king of Burma, a lovely slave girl named Tuptim, to be one of his many wives. She is escorted by Lun Tha, a scholar who has come to copy a design for a temple, and the two are secretly in love. Tuptim, left alone, declares that the King may own her, but not her heart ("My Lord and Master ''). The King gives Anna her first audience. The schoolteacher is a part of his plan for the modernization of Siam; he is impressed when she already knows this. She raises the issue of her house with him; he dismisses her protests and orders her to talk with his wives. They are interested in her, and she tells them of her late husband, Tom ("Hello, Young Lovers ''). The King presents her new pupils; Anna is to teach those of his children whose mothers are in favor with him -- several dozen -- and is to teach their mothers as well. The princes and princesses enter in procession ("March of the Royal Siamese Children ''). Anna is charmed by the children, and formality breaks down after the ceremony as they crowd around her.
Anna has not given up on the house, and teaches the children proverbs and songs extolling the virtues of home life, to the King 's irritation. The King has enough worries without battling the schoolteacher, and wonders why the world has become so complicated ("A Puzzlement ''). The children and wives are hard at work learning English ("The Royal Bangkok Academy ''). The children are surprised by a map showing how small Siam is compared with the rest of the world ("Getting to Know You ''). As the crown prince, Chulalongkorn, disputes the map, the King enters a chaotic schoolroom. He orders the pupils to believe the teacher but complains to Anna about her lessons about "home ''. Anna stands her ground and insists on the letter of her contract, threatening to leave Siam, much to the dismay of wives and children. The King orders her to obey as "my servant ''; she repudiates the term and hurries away. The King dismisses school, then leaves, uncertain of his next action. Meanwhile, Lun Tha comes upon Tuptim, and they muse about having to hide their relationship ("We Kiss in a Shadow '').
In her room, Anna replays the confrontation in her mind, her anger building ("Shall I Tell You What I Think of You? ''). Lady Thiang, the King 's head wife, tells Anna that the King is troubled by his portrayal in the West as a barbarian, as the British are being urged to take over Siam as a protectorate. Anna is shocked by the accusations -- the King is a polygamist, but he is no barbarian -- but she is reluctant to see him after their argument. Lady Thiang convinces her that the King is deserving of support ("Something Wonderful ''). Anna goes to him and finds him anxious for reconciliation. The King tells her that the British are sending an envoy to Bangkok to evaluate the situation. Anna "guesses '' -- the only guise in which the King will accept advice -- that the King will receive the envoy in European style, and that the wives will be dressed in Western fashion. Tuptim has been writing a play based on a book that Anna has lent her, Uncle Tom 's Cabin, that can be presented to the guests. News is brought to the King that the British are arriving much earlier than thought, and so Anna and the wives are to stay up all night to prepare. The King assembles his family for a Buddhist prayer for the success of the venture and also promises before Buddha that Anna will receive her own house "as provided in agreement, etc., etc. ''
The wives are dressed in their new European - style gowns, which they find confining ("Western People Funny ''). In the rush to prepare, the question of undergarments has been overlooked, and the wives have practically nothing on underneath their gowns. When the British envoy, Sir Edward Ramsay, arrives and gazes at them through a monocle, they are panicked by the "evil eye '' and lift their skirts over their heads as they flee. Sir Edward is diplomatic about the incident. When the King is called away, it emerges that Sir Edward is an old flame of Anna 's, and they dance in remembrance of old times, as Edward urges her to return to British society. The King returns and irritably reminds them that dancing is for after dinner.
As final preparations for the play are made, Tuptim steals a moment to meet with Lun Tha. He tells her he has an escape plan, and she should be ready to leave after the performance ("I Have Dreamed ''). Anna encounters them, and they confide in her ("Hello, Young Lovers '', reprise). The play ("Small House of Uncle Thomas '', narrated ballet) is presented in a Siamese ballet - inspired dance. Tuptim is the narrator, and she tells her audience of the evil King Simon of Legree and his pursuit of the runaway slave Eliza. Eliza is saved by Buddha, who miraculously freezes a river and conceals her in snow. Buddha then causes the river to melt, drowning King Simon and his hunting party. The anti-slavery message is blunt.
After the play, Sir Edward reveals that the British threat has receded, but the King is distracted by his displeasure at Tuptim 's rebellious message. After Sir Edward leaves, Anna and the King express their delight at how well the evening went, and he presents her with a ring. Secret police report that Tuptim is missing. The King realizes that Anna knows something; she parries his inquiry by asking why he should care: Tuptim is just another woman to him. He is delighted; she is at last understanding the Siamese perspective. Anna tries to explain to him the Western customs of courtship and tells him what it is like for a young woman at a formal dance ("Shall We Dance? ''). He demands that she teach him the dance. She does, and in that dance they experience and express a love for each other that they can never speak aloud. They are interrupted by the Kralahome. Tuptim has been captured, and a search is on for Lun Tha. The King resolves to punish Tuptim, though she denies she and Lun Tha were lovers. Anna tries to dissuade him, but he is determined that her influence shall not rule, and he takes the whip himself. He turns to lash Tuptim, but under Anna 's gaze is unable to swing the whip, and hurries away. Lun Tha is found dead, and Tuptim is dragged off, swearing to kill herself; nothing more is heard about her. Anna asks the Kralahome to give her ring back to the King; both schoolteacher and minister state their wish that she had never come to Siam.
Several months pass with no contact between Anna and the King. Anna is packed and ready to board a ship leaving Siam. Chulalongkorn arrives with a letter from the King, who has been unable to resolve the conflicts within himself and is dying. Anna hurries to the King 's bedside and they reconcile. The King persuades her to take back the ring and to stay and assist the next king, Chulalongkorn. The dying man tells Anna to take dictation from the prince, and instructs the boy to give orders as if he were King. The prince orders the end of the custom of kowtowing that Anna hated. The King grudgingly accepts this decision. As Chulalongkorn continues, prescribing a less arduous bow to show respect for the king, his father dies. Anna kneels by the late King, holding his hand and kissing it, as the wives and children bow or curtsey, a gesture of respect to old king and new.
The King and I opened on Broadway on March 29, 1951, with a wide expectation of a hit by the press and public. Both Hammerstein and Rodgers professed to be worried. The composer complained that most people were not concerned about whether the show was good, but whether it was better than South Pacific. Even the weather cooperated: heavy rain in New York stopped in time to allow the mostly wealthy or connected opening night audience to arrive dry at the St. James Theatre. Margaret Landon, author of the book on which the musical was based, was not invited to opening night.
Brynner turned in an outstanding performance that night, nearly stealing the show. Lawrence knew that the company was nervous because of her illnesses. The director, John van Druten, described how her opening night performance put all worries to rest: "She came on the stage with a new and dazzling quality, as if an extra power had been granted to the brilliance of her stage light. She was radiant and wonderful. '' The rave reviews in the newspapers lifted Lawrence 's spirits, and she expected a lengthy run as Anna, first on Broadway, then in London 's West End, and finally on film. Lawrence won a Tony Award for her leading role, while Brynner won the award for best featured actor. The show won the Tony for best musical, and designers Mielziner and Sharaff received awards in their categories.
De Lappe remembered the contrast between Lawrence 's indifferent singing voice and the force of her performance:
I used to listen to Gertrude Lawrence on the public address system every night in our dressing rooms, and she 'd get onto a note and sag down off of it. The night after I left the show to go into Paint Your Wagon, Yul Brynner gave me house seats and I saw her from the front and I was so taken by her. She had such a star quality, you did n't care if she sang off - key. She more than dominated the stage. Boy, was that a lesson to me.
Lawrence had not yet discovered that she was dying from liver cancer, and her weakened condition was exacerbated by the demands of her role. At the age of 52, she was required to wear dresses weighing 75 pounds (34 kg) while walking or dancing a total of 4 miles (6.4 km) during a 3½ - hour performance eight times a week. Lawrence found it hard to bear the heat in the theatre during the summer months. Understudy Constance Carpenter began replacing her in matinee performances. Later in the year Lawrence 's strength returned, and she resumed her full schedule, but by Christmas she was battling pleurisy and suffering from exhaustion. She entered the hospital for a full week of tests. Just nine months before her death, the cancer still was not detected. In February 1952, bronchitis felled her for another week, and her husband Richard Aldrich asked Rodgers and Hammerstein if they would consider closing the show for Easter week to give her a chance to recover fully. They denied his request, but agreed to replace her with the original Ado Annie from Oklahoma!, Celeste Holm, for six weeks during the summer. Meanwhile, Lawrence 's performances were deteriorating, prompting audiences to become audibly restive. Rodgers and Hammerstein prepared a letter, never delivered, advising her that "eight times a week you are losing the respect of 1,500 people ''. In late August, Lawrence fainted following a matinee and was admitted to the NewYork -- Presbyterian Hospital. She slipped into a coma and died on September 6, 1952, aged 54. Her autopsy revealed liver cancer. On the day of her funeral, the performance of The King and I was cancelled. The lights of Broadway and the West End were dimmed; she was buried in the ball gown she wore during Act 2.
Carpenter assumed the role of Anna and went on to play it for 620 performances. Other Annas during the run included Holm, Annamary Dickey and Patricia Morison. Although Brynner later boasted of never missing a show, he missed several, once when stagehands at the St. James Theatre accidentally struck him in the nose with a piece of scenery, another time due to appendicitis. Also, for three months in 1952 (and occasionally in 1953), Alfred Drake replaced Brynner. One young actor, Sal Mineo, began as an extra, then became an understudy for a younger prince, then an understudy and later a replacement for Crown Prince Chulalongkorn. Mineo began a close friendship and working relationship with Brynner which would last for more than a decade. Another replacement was Terry Saunders as Lady Thiang. She reprised the role in the 1956 film. The last of the production 's 1,246 performances was on March 20, 1954. The run was, at the time, the fourth longest ever for a Broadway musical. A U.S. national tour began on March 22, 1954, at the Community Theatre, Hershey, Pennsylvania, starring Brynner and Morison. The tour played in 30 cities, closing on December 17, 1955, at the Shubert Theatre, Philadelphia.
The original London production opened on October 8, 1953, at the Theatre Royal, Drury Lane, and was warmly received by both audiences and critics; it ran for 946 performances. The show was restaged by Jerome Whyte. The cast featured Valerie Hobson, in her last role, as Anna; Herbert Lom as the King; and Muriel Smith as Lady Thiang. Martin Benson played the Kralahome, a role he reprised in the film. Eve Lister was a replacement for Hobson, and George Pastell replaced Lom during the long run. The New York Times theatre columnist Brooks Atkinson saw the production with Lister and Pastell, and thought the cast commonplace, except for Smith, whom he praised both for her acting and her voice. Atkinson commented, "The King and I is a beautifully written musical drama on a high plane of human thinking. It can survive in a mediocre performance. ''
The musical was soon premiered in Australia, Japan, and throughout Europe.
The first revival of The King and I in New York was presented by the New York City Center Light Opera Company in April and May 1956 for three weeks, starring Jan Clayton and Zachary Scott, directed by John Fearnley, with Robbins ' choreography recreated by June Graham. Muriel Smith reprised her London role of Lady Thiang, and Patrick Adiarte repeated his film role, Chulalongkorn. This company presented the musical again in May 1960 with Barbara Cook and Farley Granger, again directed by Fearnley, in another three - week engagement. Atkinson admired the purity of Cook 's voice and thought that she portrayed Anna with "a cool dignity that gives a little more stature to the part than it has had before. '' He noted that Granger brought "a fresh point of view -- as well as a full head of hair ''. Joy Clements played Tuptim, and Anita Darian was Lady Thiang. City Center again presented the show in June 1963, starring Eileen Brennan and Manolo Fabregas, directed by Fearnley. Clements and Darian reprised Tuptim and Thiang. In the final City Center Light Opera production, Michael Kermoyan played the King opposite Constance Towers for three weeks in May 1968. Darian again played Lady Thiang. For all of these 1960s productions, Robbins ' choreography was reproduced by Yuriko, who had played the role of Eliza in the original Broadway production and reprised the role in the City Center productions.
The Music Theatre of Lincoln Center, with Rodgers as producer, presented the musical in mid-1964 at the New York State Theater, starring Risë Stevens and Darren McGavin, with Michael Kermoyan as the Kralahome. Lun Tha, Tuptim and Thiang were played by Frank Porretta, Lee Venora and Patricia Neway. Costumes were by Irene Sharaff, the designer for the original productions and the film adaptation. The director was Edward Greenberg, with the Robbins choreography again reproduced by Yuriko. This was Music Theatre 's debut production, a five - week limited engagement.
The King and I was revived at London 's Adelphi Theatre on October 10, 1973, running for 260 performances until May 25, 1974, starring Sally Ann Howes as Anna and Peter Wyngarde as the King. Roger Redfarn directed, and Sheila O'Neill choreographed. The production, which began in June 1973 with a tour of the English provinces, earned mixed to warm reviews. Michael Billington in The Guardian called the revival "well played and well sung ''. Although he was enthusiastic about Howes as Anna, Billington thought Wyngarde "too fragile to be capable of inspiring unholy terror ''. He praised Redfarn 's production -- "whipped along at a good pace and made a sumptuous eyeful out of the interpolated ballet on ' Uncle Tom 's Cabin '. '' Less favorably, Robert Cushman in The Observer thought the production "scenically and economically under - nourished ''. He liked Wyngarde 's King ("a dignified clown '') but thought Howes not formidable enough to stand up to him as Anna. He noted that "she sings beautifully and the songs are the evening 's real justification ''.
In early 1976, Brynner received an offer from impresarios Lee Gruber and Shelly Gross to star, in the role that he had created 25 years before, in a U.S. national tour and Broadway revival. The tour opened in Los Angeles on July 26, 1976, with Constance Towers reprising the role of Anna. On opening night, Brynner suffered so badly from laryngitis that he lip - synched, with his son Rock singing and speaking the role from the orchestra pit. The production traveled across the United States, selling out every city it appeared in and finally opening in New York at the Uris Theatre (today the Gershwin Theatre) on May 2, 1977. The production featured Martin Vidnovic as Lun Tha, and Susan Kikuchi danced the part of Eliza, recreating the role that her mother, Yuriko, had originated. Yuriko both directed the production and recreated the Robbins choreography. Sharaff again designed costumes, and Michael Kermoyan reprised the role of the Kralahome, while June Angela was Tuptim. The run lasted 696 performances, almost two years, during which each of the stars took off three weeks, with Angela Lansbury replacing Towers and Kermoyan replacing Brynner. The production was nominated for the Drama Desk Award for Outstanding Musical.
Brynner insisted on renovations to the Uris before he would play there, stating that the theatre resembled "a public toilet ''. He also insisted that dressing rooms on the tour and at the Uris be arranged to his satisfaction. According to his biographer Michelangelo Capua, for years afterwards, performers thanked Brynner for having backstage facilities across the country cleaned up. New York Times reviewer Clive Barnes said of the revival, "The cast is a good one. Mr. Brynner grinning fire and snorting charm is as near to the original as makes little difference '' and called Towers "piquantly ladylike and sweet without being dangerously saccharine ''. However, fellow Times critic Mel Gussow warned, later in the run, that "to a certain extent (Brynner) was coasting on his charisma ''.
The tour was extended in 1979, after the New York run, still starring Brynner and Towers. The production then opened in the West End, at the London Palladium, on June 12, 1979, and was reported to have the largest advance sale in English history. Brynner stated, "It is not a play, it is a happening. '' Virginia McKenna starred in London as Anna, winning an Olivier Award for her performance. June Angela again played Tuptim, and John Bennett was the Kralahome. It ran until September 27, 1980.
Brynner took only a few months off after the London run ended, which contributed to his third divorce; he returned to the road in early 1981 in an extended U.S. tour of the same production, which eventually ended on Broadway. Mitch Leigh produced and directed, and Robbins ' choreography was reproduced by Rebecca West, who also danced the role of Simon of Legree, which she had danced at the Uris in 1977. Patricia Marand played Anna, Michael Kermoyan was again the Kralahome, Patricia Welch was Tuptim. During 1981, Kate Hunter Brown took over as Anna, continuing in the role for at least a year and a half. By 1983, Mary Beth Peil was playing Anna. On September 13, 1983, in Los Angeles, Brynner celebrated his 4,000 th performance as the King; on the same day he was privately diagnosed with inoperable lung cancer, and the tour had to shut down for a few months while he received painful radiation therapy to shrink the tumor. The Washington Post reviewer saw Brynner 's "absolutely last farewell tour '' in December 1984 and wrote of the star:
When Brynner opened in the original production in 1951, he was the newcomer and Gertrude Lawrence the established star. Now, 33 years and 4,300 performances later, he is the king of the mountain as well as the show... The genius of his performance -- and it must be some sort of genius to maintain a character this long -- is its simplicity. There is not a superfluous expression nor a vague gesture. And if at times, the arms on hips posture, the shining dome and fierce expression remind one of Mr. Clean, it should be remembered that Brynner was there first.
The production reached New York in January 1985, running for 191 performances at the Broadway Theatre, with Brynner, Peil, Welch and West still playing their roles. The part of Eliza was played by the leading man 's fourth wife, Kathy Lee Brynner, and newcomer Jeffrey Bryan Davis played Louis. During the run, Brynner was unable to sing "A Puzzlement '', due to what was announced as a throat and ear infection, but he "projected bursting vitality to the top of the balcony. '' He received a special Tony Award for his role as the King and had come to dominate the musical to such an extent that Peil was nominated merely for a featured actress Tony as Anna. Leigh was nominated for a Tony for his direction. New York Times critic Frank Rich praised Brynner but was ambivalent about the production, which he called "sluggish '', writing that Brynner 's "high points included his fond, paternalistic joshing with his brood in ' The March of the Siamese Children, ' his dumb - show antics while attempting to force the English schoolteacher Anna to bow, and, of course, the death scene... The star aside, such showmanship is too often lacking in this King and I. '' The last performance was a special Sunday night show, on June 30, 1985, in honor of Brynner and his 4,625 th performance of the role. Brynner died less than four months later, on October 10, 1985.
From August 1989 to March 1990, Rudolf Nureyev played the King in a North American tour opposite Liz Robertson, with Kermoyan as the Kralahome, directed by Arthur Storch and with the original Robbins choreography. Reviews were uniformly critical, lamenting that Nureyev failed to embody the character, "a King who stands around like a sulky teenager who did n't ask to be invited to this party... Not even his one dance number... goes well... Rodgers and Hammerstein 's King (is) supposed to be a compelling personality (but Nureyev 's) bears no resemblance to the man described... in the "Something Wonderful '' number. The show therefore comes across as something of a charade... with everyone pretending to be dealing with a fearsome potentate who, in fact, is displaying very little personality at all. ''
The first major revival to break away from the original staging and interpretation was an Australian production directed by Christopher Renshaw, starring Hayley Mills as Anna, in 1991. Renshaw pointedly ignored the printed stage directions in the script when reshaping the piece into what he called "an authentic Thai experience ''. The production had a more sinister Siamese setting, a less elegant but more forceful Anna, and a younger King (Tony Marinyo). The attraction between Anna and the King was made explicit. Renshaw "cut a few lines and lyrics, and translated others into Thai to reinforce the atmosphere of a foreign land '', and all Asian roles were played by Asian actors. He also asked choreographers Lar Lubovitch and Jerome Robbins to create a "spiritual '' ballet, for the King 's entrance in Act 1, and a procession with a sacred white elephant in Act II. According to Renshaw, "The reds and golds were very much inspired by what we saw at the royal palace '', and set and costume elements reflected images, architecture and other designs in the palace and elsewhere in Bangkok. For example, the stage was framed by columns of elephant figures, a large emerald Buddha loomed over Act I, and hundreds of elephant images were woven into the set. Renshaw said, "The elephant is regarded as a very holy creature... they believe the spirit of Buddha often resides in the form of the elephant. ''
Stanley Green, in his Encyclopedia of the Musical Theatre, viewed the central theme of The King and I as "the importance of mutual understanding between people of differing ethnic and cultural backgrounds '', but Renshaw felt that the musical suffered from 1950s attitudes when "Orientalism was used as an exoticism rather than a real understanding of the particular culture. '' He stated that his production was informed by authentic Thai cultural, aesthetic and religious ideas that he learned from visiting Thailand. A feature in Playbill commented that the production focused on the "clash of ideologies and cultures, of East versus West ''. Theatre arts professor Eileen Blumenthal, however, called the production "a King and I for the age of political correctness ''. While she acknowledged that the musical 's treatment of Asian cultures had come to be understood as insensitive in the nearly half century since its premiere, she argued that Rodgers and Hammerstein 's script was more sensitive than most orientalist literature of its day, in that "West learns from East as well as the other way around '', and that, moreover, the musical 's treatment of its Asian subject is fantastical, not intended to be realistic. She concluded that the show is a documentary of "who we 've been '' in the West, and that a work like The King and I should not be suppressed, because it is "too good ''.
The production was reproduced on Broadway, opening on April 11, 1996 at the Neil Simon Theatre, starring Donna Murphy as Anna, who won a Tony Award for her performance, and Lou Diamond Phillips as the King, with Randall Duk Kim as the Kralahome, Jose Llana as Lun Tha, Joohee Choi as Tuptim and Taewon Yi Kim as Lady Thiang. Jenna Ushkowitz made her Broadway debut as one of the children. The production was nominated for eight Tony Awards, winning best revival and three others, with acting nominations for Phillips and Choi, who each won Theatre World Awards, and seven Drama Desk Awards, winning for Outstanding Revival of a Musical; Renshaw won for his direction. The production was praised for "lavish... sumptuous '' designs by Roger Kirk (costumes) and Brian Thomson (sets), who both won Tony and Drama Desk Awards for their work. Faith Prince played the role of Anna later in the run, followed by Marie Osmond. The revival ran on Broadway for 780 performances, and Kevin Gray replaced Phillips. The production then toured in the U.S. with Mills and Victor Talmadge. Other Annas on this tour included Osmond, Sandy Duncan, Stefanie Powers and Maureen McGovern, who ended the tour in Chicago in June 1998.
The production opened on May 3, 2000 at the London Palladium, directed by Renshaw and choreographed by Lubovitch, and using the Kirk and Thomson designs. It reportedly took in £ 8 million in advance ticket sales. The cast included Elaine Paige as Anna and Jason Scott Lee as the King, with Sean Ghazi as Lun Tha and Ho Yi as the Kralahome. Lady Thiang was, again, played by Taewon Yi Kim, of whom The Observer wrote, "Her ' Something Wonderful ' was just that. '' The show was nominated for an Olivier Award for outstanding musical. Later in the run, Lee was replaced as the King by Paul Nakauchi. The revival was generally well received. The Daily Mirror said: "The King and I waltzed back to the West End in triumph last night. '' The Daily Express noted, "Love it or loathe it, The King and I is an unstoppable smash. '' Variety, however, noted a lack of chemistry between the leads, commenting that "there 's something not entirely right in Siam when the greatest applause is reserved for Lady Thiang ''. Replacements included Josie Lawrence as Anna, Keo Woolford as the King and Saeed Jaffrey as the Kralahome. The show closed on January 5, 2002.
Another U.S. national tour began in mid-2004, directed by Baayork Lee (who appeared in the original production at age 5), with choreography by Susan Kikuchi, reproducing the Robbins original. Sandy Duncan again starred as Anna, while Martin Vidnovic played the King. He had played Lun Tha in the 1977 Broadway production and voiced the King in the 1999 animated film. Stefanie Powers took over for Duncan throughout 2005. Near the end of the tour in November 2005, Variety judged that Lee had successfully "harnessed the show 's physical beauty and its intrinsic exotic flavor. ''
Jeremy Sams directed, and Kikuchi choreographed, a limited engagement of the musical in June 2009 at the Royal Albert Hall in London. It starred Maria Friedman and Daniel Dae Kim. A U.K. national tour starred Ramon Tikaram as the King and Josefina Gabrielle as Anna, directed by Paul Kerryson, with choreography by David Needham. It opened in December 2011 in Edinburgh and continued into May 2012.
In June 2014, Théâtre du Châtelet in Paris presented an English - language production of The King and I directed by Lee Blakeley and starring Susan Graham, who was "close to perfection as Anna '', Lambert Wilson, "also excellent as the king '', and Lisa Milne as Lady Thiang. The New York Times called it "a grand new staging that has set French critics searching for superlatives. '' The Renshaw production was revived again in April 2014 by Opera Australia for performances in Sydney, Melbourne and Brisbane, directed by Renshaw and featuring Lisa McCune and Teddy Tahu Rhodes. Some critics questioned anew the portrayal of the Siamese court as barbaric and asked why a show where "the laughs come from the Thai people mis - understanding British... culture '' should be selected for revival.
A fourth Broadway revival began previews on March 12 and opened on April 16, 2015 at the Vivian Beaumont Theater. The production was directed by Bartlett Sher and starred Kelli O'Hara as Anna and Ken Watanabe, as the King, in his American stage debut. It featured Ruthie Ann Miles as Lady Thiang, Paul Nakauchi as the Kralahome, Ashley Park as Tuptim, Conrad Ricamora as Lun Tha, Jake Lucas as Louis Leonowens, and Edward Baker - Duly as Sir Edward Ramsey. Choreography by Christopher Gattelli was based on the original Jerome Robbins dances. The designers included Michael Yeargan (sets), Catherine Zuber (costumes) and Donald Holder (lighting). Reviews were uniformly glowing, with Ben Brantley of The New York Times calling it a "resplendent production '', praising the cast (especially O'Hara), direction, choreographer, designs and orchestra, and commenting that Sher "sheds a light (on the vintage material) that is n't harsh or misty but clarifying (and) balances epic sweep with intimate sensibility. '' The production was nominated for nine Tony Awards, winning four, including Best Revival of a Musical, Best Leading Actress (for O'Hara), Best Featured Actress (for Miles) and best costume design (for Zuber), and won the Drama Desk Award for Outstanding Revival. Replacements for the King included Jose Llana, Hoon Lee and Daniel Dae Kim. Replacements for Anna included Marin Mazzie. The revival closed on June 26, 2016 after 538 performances. A U.S. national tour of the production began in November 2016. The cast includes Laura Michelle Kelly as Anna, Llana as the King and Joan Almedilla as Lady Thiang.
The King and I continues to be a popular choice for productions by community theatres, school and university groups, summer camps and regional theatre companies.
The musical was filmed in 1956 with Brynner re-creating his role opposite Deborah Kerr. The film won five Academy Awards and was nominated for four more. Brynner won an Oscar as Best Actor for his portrayal, and Kerr was nominated as Best Actress. Sharaff won for best costume design. The film was directed by Walter Lang (who was also nominated for an Oscar) and choreographed by Robbins. Marni Nixon dubbed the singing voice of Anna, and Rita Moreno played Tuptim. Saunders as Thiang, Adiarte as Chulalongkorn and Benson as the Kralahome reprised their stage roles, as did dancers Yuriko and de Lappe. Alan Mowbray appeared in the new role of the British Ambassador, while Sir Edward Ramsey (demoted to the Ambassador 's aide) was played by Geoffrey Toone. The movie 's script was faithful to the stage version, although it cut a few songs; reviews were enthusiastic. Thomas Hischak, in his The Rodgers and Hammerstein Encyclopedia, states: "It is generally agreed that the (movie) is the finest film adaptation of any R & H musical ''. Thai officials judged the film offensive to their monarchy and banned both film and musical in 1956.
A non-musical 1972 TV comedy series, starring Brynner, was broadcast in the U.S. by CBS but was cancelled in mid-season after 13 episodes. It followed the main storyline of the musical, focusing on the relationship between the title characters. Samantha Eggar played "Anna Owens '', with Brian Tochi as Chulalongkorn, Keye Luke as the Kralahome, Eric Shea as Louis, Lisa Lu as Lady Thiang, and Rosalind Chao as Princess Serena. The first episode aired on September 17, 1972, and the last aired on December 31, 1972. Margaret Landon was unhappy with this series and charged the producers with "inaccurate and mutilated portrayals '' of her literary property; she unsuccessfully sued for copyright infringement.
Jerome Robbins ' Broadway was a Broadway revue, directed by Robbins, showcasing scenes from some of his most popular earlier works on Broadway. The show ran from February 1989 to September 1990 and won six Tony Awards, including best musical. It featured "Shall We Dance '' and "The Small House of Uncle Thomas '' ballet, with Kikuchi as Eliza. Yuriko was the choreographic "reconstruction assistant ''.
RichCrest Animation Studios and Morgan Creek Productions released a 1999 animated film adaptation of the musical. Except for using some of the songs and characters, the story is unrelated to the Rodgers and Hammerstein version. Geared towards children, the adaptation includes cuddly animals, including a dragon. Voices were provided by Miranda Richardson as Anna (speaking), Christiane Noll as Anna (singing), Martin Vidnovic as the King, Ian Richardson as the Kralahome and Adam Wylie as Louis. Hischak dislikes the film but praises the vocals, adding that one compensation of the film is hearing Barbra Streisand sing a medley of "I Have Dreamed '', "We Kiss in a Shadow '' and "Something Wonderful '', which is borrowed from Streisand 's 1985 The Broadway Album and played under the film 's closing credits. He expressed surprise "that the Rodgers & Hammerstein Organization allowed it to be made '' and noted that "children have enjoyed The King and I for five decades without relying on dancing dragons ''. Ted Chapin, president of that organization, has called the film his biggest mistake in granting permission for an adaptation.
Rodgers sought to give some of the score an Asian flavor. This is exhibited in the piercing major seconds that frame "A Puzzlement '', the flute melody in "We Kiss in a Shadow '', open fifths, the exotic 6 / 2 chords that shape "My Lord and Master '', and in some of the incidental music. The music for "The Small House of Uncle Thomas '' was for the most part written not by Rodgers, but by dance music arranger Trude Rittmann, though "Hello, Young Lovers '' and a snatch of "A Puzzlement '' are quoted within it.
Before Rodgers and Hammerstein began writing together, the AABA form for show tunes was standard, but many of the songs in The King and I vary from it. "I Have Dreamed '' is an almost continuous repetition of variations on the same theme, until the ending, when it is capped by another melody. The first five notes (an eighth note triplet and two half notes) of "Getting to Know You '' also carry the melody all the way through the refrain. According to Mordden, this refusal to accept conventional forms "is one reason why their frequently heard scores never lose their appeal. They attend to situation and they unveil character, but also, they surprise you. ''
According to Rodgers ' biographer William Hyland, the score for The King and I is much more closely tied to the action than that of South Pacific, "which had its share of purely entertaining songs ''. For example, the opening song, "I Whistle a Happy Tune '', establishes Anna 's fear upon entering a strange land with her small son, but the merry melody also expresses her determination to keep a stiff upper lip. Hyland calls "Hello, Young Lovers '' an archetypical Rodgers ballad: simple, with only two chords in the first eight bars, but moving in its directness.
The original cast recording was released by Decca Records in 1951. While John Kenrick admires it for the performances of the secondary couple, Larry Douglas and Doretta Morrow, and for the warmth of Lawrence 's performance, he notes that "Shall We Dance '' was abridged, and there are no children 's voices -- the chorus in "Getting to Know You '' is made up of adults. In 2000, the recording was inducted into the Grammy Hall of Fame. Hischak comments that in the London cast album, Valerie Hobson 's vocals were no stronger than Lawrence 's and that the highlight is Muriel Smith 's "Something Wonderful '' in a disc with too many cuts. He calls Anna 's songs "well served '' by Marni Nixon 's singing in the 1956 film soundtrack and judges the recording as vocally satisfying; Kenrick describes it as a "mixed bag '': he is pleased that it includes several songs cut from the film, and he praises Nixon 's vocals, but he dislikes the supporting cast and suggests watching the movie instead for its visual splendor.
Kenrick prefers the 1964 Lincoln Center cast recording to the earlier ones, especially approving of the performances of Risë Stevens as Anna and Patricia Neway as Lady Thiang. The inclusion on that recording, for the first time, of "The Small House of Uncle Thomas '', was notable because LP technology limited a single - disc album to about fifty minutes, and thus inclusion of the ballet required the exclusion of some of the other numbers. Kenrick finds the recording of the 1977 Broadway revival cast to be "(e) asily the most satisfying King & I on CD ''. He judges it to be Brynner 's best performance, calling Towers "great '' and Martin Vidnovic, June Angela and the rest of the supporting cast "fabulous '', though lamenting the omission of the ballet. Hischak, in contrast, says that some might prefer Brynner in his earlier recordings, when he was "more vibrant ''. Kenrick enjoys the 1992 Angel studio recording mostly for the Anna of Julie Andrews, who he says is "pure magic '' in a role she never performed on stage. Kenrick praises the performance of both stars on the 1996 Broadway revival recording, calling Lou Diamond Phillips "that rarity, a King who can stand free of Brynner 's shadow ''. Hischak finds the soundtrack to the 1999 animated film with Christiane Noll as Anna and Martin Vidnovic as the King, as well as Barbra Streisand singing on one track, more enjoyable than the movie itself, but Kenrick writes that his sole use for that CD is as a coaster.
Opening night reviews of the musical were strongly positive. Richard Watts in the New York Post termed it "(a) nother triumph for the masters ''. Critic John Mason Brown stated, "They have done it again. '' The New York Times drama critic Brooks Atkinson wrote: "This time Messrs. Rodgers and Hammerstein are not breaking any fresh trails, but they are accomplished artists of song and words in the theater; and The King and I is a beautiful and lovable musical play. '' Barely less enthusiastic was John Lardner in The New Yorker, who wrote, "Even those of us who find (the Rodgers and Hammerstein musicals) a little too unremittingly wholesome are bound to take pleasure in the high spirits and technical skill that their authors, and producers, have put into them. '' Otis Guernsey wrote for the New York Herald Tribune, "Musicals and leading men will never be the same after last night... Brynner set an example that will be hard to follow... Probably the best show of the decade. ''
The balance of opinion among the critics of the original London production was generally favorable, with a few reservations. In The Observer, Ivor Brown predicted that the piece would "settle down for some years at Drury Lane. '' The anonymous critic of The Times compared the work to Gilbert and Sullivan: "Mr. Rodgers charmingly echoes Sullivan in the king 's more topsy - turvy moments; and Mr. Hammerstein attends very skilfully to the lurking Gilbertian humour. '' Less favorably, in The Daily Express, John Barber called the work "this treacle - bin Mikado '', and declared that only one of the cast, Muriel Smith, could really sing.
In 1963, New York Times reviewer Lewis Funke said of the musical, "Mr. Hammerstein put all of his big heart into the simple story of a British woman 's adventures, heartaches, and triumphs... A man with a world - view, he seized the opportunity provided by (Landon 's book) to underscore his thoughts on the common destiny of humanity. '' Fourteen years later, another Times reviewer, Clive Barnes, called the musical "unsophisticated and untroubled. Even its shadows are lightened with a laugh or a sweetly sentimental tear... we can even be persuaded to take death as a happy ending ''.
The reworked 1996 Broadway production received mixed reviews. Vincent Canby of The New York Times disliked it: "This latest King and I might look like a million dollars as a regional production; on Broadway... it 's a disappointment. The score remains enchanting but, somewhere along the line, there has been a serious failure of the theatrical imagination. '' But Liz Smith enthused: "The King and I is perfect ''; and the Houston Chronicle wrote, of the subsequent tour, "The King and I is the essence of musical theater, an occasion when drama, music, dance and decor combine to take the audience on an unforgettable journey. '' Critic Richard Christiansen in the Chicago Tribune observed, of a 1998 tour stop at the Auditorium Theatre: "Written in a more leisurely and innocent and less politically correct period, (The King and I) can not escape the 1990s onus of its condescending attitude toward the pidgin English monarch and his people. And its story moves at a pace that 's a mite too slow for this more hurried day and age. '' When the production reached London in 2000, however, it received uniformly positive reviews; the Financial Times called it "a handsome, spectacular, strongly performed introduction to one of the truly great musicals ''.
The 2015 Broadway revival initially received uniformly glowing reviews. Ben Brantley of The New York Times called it a "resplendent production '' and commented:
(In the) 1996 production... (a) dark strain of sadomasochistic tension born of Victorian repression and Eastern sensuality was introduced into sunny Siam... Mr. Sher is no strong - armed revisionist. He works from within vintage material, coaxing shadowy emotional depths to churn up a surface that might otherwise seem shiny and slick... (T) he show is both panoramic and personal, balancing dazzling musical set pieces with sung introspective soliloquies. (The direction) enhances (scenes ') emotional weight. No one is merely a dancer or an extra or an archetype, which may be the greatest defense this show offers against what can come across as cute condescension toward the exotic East... (The) portrayal of the varied forms and content of love (and) some of (Rodgers and Hammerstein 's) lushest ballads... acquire freshening nuance and anchoring conviction ".
Marilyn Stasio, in Variety, termed the production "sumptuous '' and "absolutely stunning ''. She noted a "still pertinent theme: the dissonant dynamic when Western civilization tries to assert its values on ancient Eastern cultures. '' In USA Today, Elysa Gardner wrote of the grins and tears evoked by the production. "(W) atching these people from vastly different cultures carefully but joyfully reach for common ground... can be almost unbearably moving... (Rodgers and Hammerstein 's) textured humanity and appeals for tolerance, like their shimmering scores, only gain resonance as time passes. '' The production 's attempts to achieve historical accuracy and explore the work 's dark themes with a modern sensibility led some reviewers to conclude that it succeeds at converting the musical 's orientalism into "a modern critique of racism and sexism ''. Other commentators, however, such as composer Mohammed Fairouz, argued that an attempt at sensitivity in production can not compensate for "the inaccurate portrayal of the historic King Mongkut as a childlike tyrant and the infantilization of the entire Siamese population of the court '', which demonstrate a racist subtext in the piece, even in 1951 when it was written. Benjamin Ivry opined that "the Rodgers and Hammerstein organization should shelve the (musical) as a humanitarian gesture toward Southeast Asian history and art ''.
Fifty years after its premiere, Rodgers biographer Meryle Secrest summed up the musical:
The King and I is really a celebration of love in all its guises, from the love of Anna for her dead husband; the love of the King 's official wife, Lady Thiang, for a man she knows is flawed and also unfaithful; the desperation of forbidden love; and a love that is barely recognized and can never be acted upon.
|
when is season 6 of project mc2 coming out | Project Mc2 - Wikipedia
Project Mc2 (pronounced Project MC - squared) is an American web television series produced by DreamWorks Animation 's AwesomenessTV and MGA Entertainment for Netflix. The series was first released on August 7, 2015.
The series is set in a fictional city called Maywood Glen, California. It revolves around the fields of STEAM and the adventures of McKeyla McAlister and her best friends, who work for a government organization called NOV8 (pronounced "innovate ''), a highly secretive group of teenage female government operatives who are trying to protect the world.
The first season of the series, consisting of three episodes, was released on August 7, 2015. On April 6, 2016, Netflix announced that the series has been renewed for its second and third seasons. The second season was released on August 12, 2016, and the third season was released on October 14, 2016. Both consisted of six episodes. An extended 34 - minute Valentine 's Day special was released as the first and only episode of the fourth season on February 14, 2017. A fifth season consisting of five episodes was released on September 15, 2017. A sixth season, also consisting of five episodes, was released on November 7, 2017.
The series is filmed primarily in and around the San Fernando Valley area of Los Angeles, including Chatsworth, Woodland Hills, Van Nuys, and Northridge, as well as in Alhambra.
|
the weighting factor for government bonds in basel iii | Capital adequacy ratio - wikipedia
Capital Adequacy Ratio (CAR), also known as Capital to Risk (Weighted) Assets Ratio (CRAR), is the ratio of a bank 's capital to its risk. National regulators track a bank 's CAR to ensure that it can absorb a reasonable amount of loss and complies with statutory capital requirements.
It is a measure of a bank 's capital. It is expressed as a percentage of a bank 's risk weighted credit exposures.
This ratio is used to protect depositors and promote stability and efficiency of financial systems around the world.
Two types of capital are measured: tier one capital, which can absorb losses without a bank being required to cease trading, and tier two capital, which can absorb losses in the event of a winding - up and so provides a lesser degree of protection to depositors.
Capital adequacy ratios (CARs) are a measure of the amount of a bank 's core capital expressed as a percentage of its risk - weighted assets.
Capital adequacy ratio is defined as:
CAR = (Tier 1 capital + Tier 2 capital) / Risk-weighted assets
TIER 1 CAPITAL = (paid up capital + statutory reserves + disclosed free reserves) - (equity investments in subsidiary + intangible assets + current & brought - forward losses)
TIER 2 CAPITAL = A) Undisclosed Reserves + B) General Loss reserves + C) hybrid debt capital instruments and subordinated debts
where Risk can either be risk-weighted assets (a) or the respective national regulator 's minimum total capital requirement. If using risk-weighted assets,
CAR = (T1 + T2) / a ≥ 10 %.
The percent threshold varies from bank to bank (10 % in this case, a common requirement for regulators conforming to the Basel Accords) and is set by the national banking regulator of different countries.
Two types of capital are measured: tier one capital (T1 above), which can absorb losses without a bank being required to cease trading, and tier two capital (T2 above), which can absorb losses in the event of a winding - up and so provides a lesser degree of protection to depositors.
Capital adequacy ratio is the ratio which determines the bank 's capacity to meet its time liabilities and other risks such as credit risk, operational risk, etc. In the simplest formulation, a bank 's capital is the "cushion '' for potential losses, and protects the bank 's depositors and other lenders. Banking regulators in most countries define and monitor CAR to protect depositors, thereby maintaining confidence in the banking system.
CAR is similar to leverage; in the most basic formulation, it is comparable to the inverse of debt - to - equity leverage formulations (although CAR uses equity over assets instead of debt - to - equity; since assets are by definition equal to debt plus equity, a transformation is required). Unlike traditional leverage, however, CAR recognizes that assets can have different levels of risk.
Since different types of assets have different risk profiles, CAR primarily adjusts for assets that are less risky by allowing banks to "discount '' lower - risk assets. The specifics of CAR calculation vary from country to country, but general approaches tend to be similar for countries that apply the Basel Accords. In the most basic application, government debt is allowed a 0 % "risk weighting '' - that is, it is subtracted from total assets for purposes of calculating the CAR.
Risk weighted assets - Fund Based: Risk weighted assets mean fund based assets such as cash, loans, investments and other assets. Degrees of credit risk expressed as percentage weights have been assigned by the national regulator to each such asset.
Non-funded (Off - Balance sheet) Items: The credit risk exposure attached to off - balance sheet items has to be first calculated by multiplying the face amount of each of the off - balance sheet items by the Credit Conversion Factor. This will then have to be again multiplied by the relevant weightage.
Local regulations establish that cash and government bonds have a 0 % risk weighting, and residential mortgage loans have a 50 % risk weighting. All other types of assets (loans to customers) have a 100 % risk weighting.
Bank "A '' has assets totaling 100 units, consisting of:
Bank "A '' has debt of 95 units, all of which are deposits. By definition, equity is equal to assets minus debt, or 5 units.
Bank A 's risk - weighted assets are calculated as follows
Even though Bank A would appear to have a debt - to - equity ratio of 95: 5, or equity - to - assets of only 5 %, its CAR is substantially higher. It is considered less risky because some of its assets are less risky than others.
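The calculation can be sketched in a few lines of Python. Because the article's own breakdown of Bank A's assets is not reproduced above, the figures below are hypothetical, keeping the same overall proportions (100 units of assets, 5 units of Tier 1 equity) and the risk weights stated earlier: 0 % for cash and government bonds, 50 % for residential mortgages, 100 % for other loans.

```python
# Minimal sketch of the CAR calculation described above, using hypothetical
# figures. Risk weights follow the local rules stated earlier: 0% for cash and
# government bonds, 50% for residential mortgage loans, 100% for other loans.

RISK_WEIGHTS = {
    "cash": 0.00,
    "government_bonds": 0.00,
    "mortgage_loans": 0.50,
    "other_loans": 1.00,
}

def risk_weighted_assets(assets: dict) -> float:
    """Sum of each asset class multiplied by its regulatory risk weight."""
    return sum(amount * RISK_WEIGHTS[kind] for kind, amount in assets.items())

def capital_adequacy_ratio(tier1: float, tier2: float, assets: dict) -> float:
    """CAR = (Tier 1 capital + Tier 2 capital) / risk-weighted assets."""
    return (tier1 + tier2) / risk_weighted_assets(assets)

# Hypothetical bank: 100 units of assets, 5 units of equity (all Tier 1).
assets = {"cash": 10, "government_bonds": 15, "mortgage_loans": 25, "other_loans": 50}
rwa = risk_weighted_assets(assets)           # 0 + 0 + 12.5 + 50 = 62.5
car = capital_adequacy_ratio(5, 0, assets)   # 5 / 62.5 = 0.08, i.e. 8%
print(f"Risk-weighted assets: {rwa}, CAR: {car:.1%}")
```

Because only 62.5 of the 100 units carry risk weight in this hypothetical breakdown, the resulting CAR of 8 % comes out substantially higher than the simple 5 % equity-to-assets figure, which is the point the Bank A example illustrates.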
The Basel rules recognize that some types of equity are more important than others. To reflect this, different adjustments are made:
Different minimum CARs are applied. For example, the minimum Tier I equity allowed by statute for risk - weighted assets may be 6 %, while the minimum CAR when including Tier II capital may be 8 %.
There is usually a maximum of Tier II capital that may be "counted '' towards CAR, which varies by jurisdiction.
|
when did the nile river flood in ancient egypt | Flooding of the Nile - wikipedia
The flooding of the Nile (Arabic: عيد وفاء النيل ) has been an important natural cycle in Egypt since ancient times. It is celebrated by Egyptians as an annual holiday for two weeks starting August 15, known as Wafaa El - Nil. It is also celebrated in the Coptic Church by ceremonially throwing a martyr 's relic into the river, hence the name, Esba ` al - shahīd ("The Martyr 's Finger ''). Ancient Egyptians believed that the Nile flooded every year because of Isis 's tears of sorrow for her dead husband, Osiris.
The flooding of the Nile is the result of the yearly monsoon between May and August, which causes enormous precipitation on the Ethiopian Highlands, whose summits reach heights of up to 4550 m (14,928 ft). Most of this rainwater is carried into the Nile by the Blue Nile and the Atbarah River, while a smaller amount flows into the Nile through the Sobat and the White Nile. During this short period, those rivers contribute up to ninety percent of the water of the Nile and most of the sediment carried by it, but after the rainy season they dwindle to minor rivers.
These facts were unknown to the ancient Egyptians, who could only observe the rise and fall of the Nile waters. The flooding as such was foreseeable, but its exact dates and levels could only be forecast on a short-term basis by transmitting the gauge readings at Aswan to the lower parts of the kingdom, where the data had to be converted to local circumstances. What was not foreseeable, of course, was the size of the flood and its total discharge.
The Egyptian year was divided into the three seasons of Akhet (Inundation), Peret (Growth), and Shemu (Harvest). Akhet covered the Egyptian flood cycle. This cycle was so consistent that the Egyptians timed its onset using the heliacal rising of Sirius, the key event used to set their calendar.
The first indications of the rise of the river could be seen at the first of the cataracts of the Nile (at Aswan) as early as the beginning of June, and a steady increase went on until the middle of July, when the increase of water became very great. The Nile continued to rise until the beginning of September, when the level remained stationary for a period of about three weeks, sometimes a little less. In October it often rose again, and reached its highest level. From this period it began to subside, and usually sank steadily until the month of June when it reached its lowest level, again. Flooding reached Aswan about a week earlier than Cairo, and Luxor 5 -- 6 days earlier than Cairo. Typical heights of flood were 45 feet (13.7 metres) at Aswan, 38 feet (11.6 metres) at Luxor (and Thebes) and 25 feet (7.6 metres) at Cairo.
If it were not for the Nile River, Egyptian civilization could not have developed, as it is the only significant source of water in this desert region.
Its other importance was its function as their gateway to the unknown world. The Nile flows from south to north, to its delta on the Mediterranean Sea. The floods were seen as the annual coming of the god. Possibly Egyptian mythology was based on this understanding, creating stories of gods or nature to give added importance to the processes and cycles that sustained Egypt.
Whilst the earliest Egyptians simply worked those areas which were inundated by the floods, some 7000 years ago they started to develop the basin irrigation method. Agricultural land was divided into large fields surrounded by dams and dykes and equipped with intake and exit canals. The basins were flooded and then closed for about 45 days to saturate the soil with moisture and allow the silt to deposit. Then the water was discharged to lower fields or back into the Nile. Immediately thereafter, sowing started, and harvesting followed some three or four months later. In the dry season thereafter, farming was not possible. Thus, all crops had to fit into this tight scheme of irrigation and timing.
In case of a small flood, the upper basins could not be filled with water which would mean famine. If a flood was too large, it would damage villages, dykes and canals.
The basin irrigation method did not exact too much of the soils, and their fertility was sustained by the annual silt deposit. Salinisation did not occur, since in summer, the ground water level was well below the surface, and any salinity which might have accrued was washed away by the next flood.
It is estimated that by this method, ancient Egypt could sustain between some 2 million and a maximum of 12 million inhabitants. By the end of Late Antiquity, the methods and infrastructure slowly decayed, and the population diminished accordingly.
By 1800, Egypt had a population of some 2.5 million inhabitants.
Muhammad Ali Pasha (Khedive of Egypt, 1805 -- 1848) modernized various aspects of Egypt. He endeavoured to extend arable land and achieve additional revenue by introducing cotton cultivation, a crop with a longer growing season and requiring sufficient water at all times. To this end, the Delta Barrages and wide systems of new canals were built, changing the irrigation system from the traditional basin irrigation to perennial irrigation, whereby farmland could be irrigated throughout the year. Thereby, many crops could be harvested twice or even three times a year and agricultural output was increased dramatically.
In 1873, Isma'il Pasha built the Ibrahimiya Canal, thereby greatly extending perennial irrigation.
Although the British, during their first period in Egypt, improved and extended this system, it was not able to store large amounts of water and to fully retain the annual flooding. In order to further improve irrigation, Sir William Willcocks, in his role as director general of reservoirs for Egypt, planned and supervised the construction of the Aswan Low Dam, the first true storage reservoir, and the Assiut Barrage, both completed in 1902. However, they were still not able to retain sufficient water to cope with the driest summers, despite the Aswan Low Dam being raised twice, in 1907 -- 1912 and in 1929 -- 1933.
During the 1920s, the Sennar Dam was constructed on the Blue Nile as a reservoir in order to supply water to the huge Gezira Scheme on a regular basis. It was the first dam on the Nile to retain large amounts of sedimentation (and to divert a large quantity of it into the irrigation canals) and in spite of opening the sluice gates during the flooding in order to flush the sediments, the reservoir is assumed to have lost about a third of its storage capacity. In 1966, the Roseires Dam was added to help irrigating the Gezira Scheme. The Jebel Aulia Dam on the White Nile south of Khartoum was completed in 1937 in order to compensate for the Blue Nile 's low waters in winter, but it was still not possible to overcome a period of very low waters in the Nile and thus avoid occasional drought, which had plagued Egypt since ancient times.
In order to overcome these problems, Harold Edwin Hurst, a British hydrologist in the Egyptian Public Works from 1906 until many years after his retirement age, studied the fluctuations of the water levels in the Nile, and already in 1946 submitted an elaborate plan for how a "century storage '' could be achieved to cope with exceptional dry seasons occurring statistically once in a hundred years. His ideas of further reservoirs using Lake Victoria, Lake Albert and Lake Tana and of reducing the evaporation in the Sudd by digging the Jonglei Canal were opposed by the states concerned.
Eventually, Gamal Abdel Nasser, President of Egypt from 1956 to 1970, opted for the idea of the Aswan High Dam at Aswan in Egypt instead of having to deal with many foreign countries. The required size of the reservoir was calculated using Hurst 's figures and mathematical methods. In 1970, with the completion of the Aswan High Dam which was able to store the highest floods, the annual flooding cycle in Egypt came to an end in Lake Nasser.
In the meantime, Egyptian population has risen to 92.5 million inhabitants (2016 estimate).
|
who won football manager of the year 2018 | Premier League Manager of the season - wikipedia
The Premier League Manager of the Season is an annual association football award presented to managers in England. It recognises the most outstanding manager in the Premier League each season. The recipient is chosen by a panel assembled by the league 's sponsor (currently Barclays) and is announced in the second or third week of May. The award was established for the 1993 / 94 season by Premier League sponsor Carling. For sponsorship purposes, it was called the Carling Manager of the Year from 1994 to 2001, the Barclaycard Manager of the Year from 2001 to 2004, and has been known since 2004 as the Barclays Manager of the Season.
In 1994, the inaugural Manager of the Season award was given to Manchester United manager Alex Ferguson for retaining the league championship. The current holder of the award is Manchester City manager Pep Guardiola.
The greatest number of awards won by a single manager is eleven, achieved by Alex Ferguson between 1994 and his retirement in 2013; he accounted for more than half of the awards in that period. In 1998 Arsène Wenger became the first non-British manager to win the award, and has so far received it on two further occasions with Arsenal. José Mourinho is the only manager other than Ferguson and Wenger to have won the award on more than one occasion, and the only manager other than Ferguson to win the award in consecutive seasons.
Four managers have won the award without winning the Premier League trophy in the same season, reflecting the weight of their achievements: George Burley in 2000 -- 01, having guided Ipswich Town to fifth place in the league, after only securing the club 's promotion from the First Division the previous season; Harry Redknapp in 2009 -- 10, for steering Tottenham Hotspur to a top - four finish for the first time in twenty years, Alan Pardew in 2011 -- 12, having guided Newcastle United to their highest position in nine years and Tony Pulis in 2013 -- 14, for steering Crystal Palace from bottom of the league in November to an 11th - place finish.
The Premier League was formed in 1992, when the members of the First Division resigned from The Football League. These clubs set up a new commercially independent league that negotiated its own broadcast and sponsorship agreements. The inaugural season had no sponsor until Carling agreed to a four - year £ 12 million deal that started the following season. That same season, Carling introduced the Manager of the Month and Manager of the Season awards, in addition to the existing manager of the year award presented by the League Managers Association.
The first Manager of the Season award was presented to Alex Ferguson after winning the Premier League with Manchester United for the second consecutive season. Kenny Dalglish was awarded the accolade in the 1994 -- 95 season, having guided Blackburn Rovers to their first league title in 81 years. Despite losing to Liverpool on the final matchday, Blackburn secured the championship when Manchester United failed to beat West Ham United the same day. Manchester United regained the Premier League the following season, resisting Newcastle United 's threat, and successfully retained the championship in 1996 -- 97, ensuring that Ferguson became the first manager to win two consecutive awards.
Arsène Wenger was the first non -- British manager to receive the Manager of the Season award, having led Arsenal to the top of the Premier League in 1997 -- 98, his first full season at the club. This achievement was significant given that Arsenal were, at one stage, 12 points behind leaders Manchester United. After a climactic finish to the 1998 -- 99 season, Ferguson was presented with his fifth managerial award for winning the Premier League with Manchester United. The club beat Tottenham Hotspur on the last matchday to secure their fifth championship in seven years, and in the following week completed a treble of trophies consisting of the domestic league, FA Cup and UEFA Champions League. Ferguson received the accolade again in 1999 -- 2000, as Manchester United finished 18 points above second - placed Arsenal.
Ipswich Town manager George Burley was the winner in 2000 -- 01, the first time the award did not go to a league - winning manager. Ipswich Town, who won promotion to the Premier League from the First Division in the previous season, finished fifth and qualified for the UEFA Cup. Burley triumphed over Ferguson, who led Manchester United to their third consecutive championship title, and Liverpool manager Gérard Houllier, who guided his team to three trophies and a berth in the Champions League. Wenger was named the Manager of the Season for 2001 -- 02 after guiding Arsenal to thirteen consecutive wins towards the end of the season -- a run which ensured the club regained the Premier League trophy. For winning his eighth Premier League title with Manchester United, Ferguson was given the award in the 2002 -- 03 season. Wenger was the outstanding winner for the award in 2003 -- 04 as he managed Arsenal to an unprecedented achievement of winning the league without a single defeat. Reflecting on Wenger 's accomplishment, a Barclaycard Awards Panel spokesperson said "Arsène Wenger is a very worthy recipient of this accolade and has sent his team into the history books. Arsenal have played exciting attacking football throughout the season and finishing it unbeaten is a feat that may not be repeated for another 100 years. ''
Chelsea manager José Mourinho was chosen as the recipient for the 2004 -- 05 season for taking the club to its first league championship in 50 years. Chelsea finished the season with a league - record 95 points, 12 points ahead of runners - up Arsenal, scoring 72 goals and conceding 15 in the process. Mourinho won the award a second successive time the following season -- the first foreign manager to do so -- as Chelsea won their second Premier League title. Ferguson collected the award for the 2006 -- 07, 2007 -- 08 and 2008 -- 09 seasons, in a period when Manchester United regained the domestic title after a four - year drought and retained the trophy for a further two years. Tottenham manager Harry Redknapp was presented with the award at the end of the 2009 -- 10 season, having guided the club to fourth position and a spot in the following season 's Champions League at the expense of Manchester City. In May 2011, Ferguson picked up his tenth Manager of the Season award for leading Manchester United to a record 19th league title. In May 2012, Alan Pardew won his first Manager of the Season award after guiding Newcastle United to their highest position in nine years. In May 2013, Ferguson picked up his eleventh Manager of the Season award for leading Manchester United to a record 20th league title. Tony Pulis became the first Welsh recipient of the award in May 2014, for guiding Crystal Palace from bottom place to 11th.
|
when is it considered running a red light | Red light camera - wikipedia
A red light camera (short for red light running camera) is a type of traffic enforcement camera that captures an image of a vehicle which has entered an intersection in spite of the traffic signal indicating red (during the red phase). By automatically photographing vehicles that run red lights, the photo is evidence that assists authorities in their enforcement of traffic laws. Generally the camera is triggered when a vehicle enters the intersection (passes the stop - bar) after the traffic signal has turned red. Typically, a law enforcement official will review the photographic evidence and determine whether a violation occurred. A citation is then usually mailed to the owner of the vehicle found to be in violation of the law. These cameras are used worldwide, in countries including: Australia, New Zealand, Canada, the United Kingdom, Singapore and the United States. If a proper identification can not be made, instead of a ticket, some police departments send out a notice of violation to the owner of the vehicle, requesting identifying information so that a ticket may be issued later.
There is debate and ongoing research about the use of red light cameras. Authorities cite public safety as the primary reason that the cameras are installed, while opponents contend their use is more for financial gain. There have been concerns that red light cameras scare drivers (who want to avoid a ticket) into more sudden stops, which may increase the risk of rear - end collisions. The elevated incentive to stop may mitigate side collisions. Some traffic signals have an all red duration, allowing a grace period of a few seconds before the cross-direction turns green. Some studies have confirmed more rear - end collisions where red light cameras have been used, while side collisions decreased, but the overall collision rate has been mixed. In some areas, the length of the yellow phase has been increased to provide a longer warning to accompany the red - light - running - camera. There is also concern that the international standard formula used for setting the length of the yellow phase ignores the laws of physics, which may cause drivers to inadvertently run the red phase.
Red light cameras were first developed in the Netherlands by Gatso. Worldwide, red light cameras have been in use since the 1960s, and were used for traffic enforcement in Israel as early as 1969. The first red light camera system was introduced in 1965, using tubes stretched across the road to detect the violation and subsequently trigger the camera. One of the first developers of these red light camera systems was Gatsometer BV.
The cameras first received serious attention in the United States in the 1980s following a highly publicized crash in 1982, involving a red - light runner who collided with an 18 - month - old girl in a stroller (or "push - chair '') in New York City. Subsequently, a community group worked with the city 's Department of Transportation to research automated law - enforcement systems to identify and ticket drivers who run red lights. New York 's red - light camera program went into effect in 1993. From the 1980s onward, red light camera usage expanded worldwide, and one of the early camera system developers, Poltech International, supplied Australia, Britain, South Africa, Taiwan, the Netherlands and Hong Kong. American Traffic Systems (subsequently American Traffic Solutions) (ATS) and Redflex Traffic Systems emerged as the primary suppliers of red light camera systems in the US, while Jenoptik became the leading provider of red light cameras worldwide.
Initially, all red light camera systems used film, which was delivered to local law enforcement departments for review and approval. The first digital camera system was introduced in Canberra in December 2000, and digital cameras have increasingly replaced the older film cameras in other locations since then.
Red light cameras are typically installed in protective metal boxes attached to poles (different from the radar guns carried by police officers) at intersections, which are often specifically chosen due to high numbers of crashes and / or red - light - running violations. Red light camera systems typically employ two closely spaced inductive loops embedded in the pavement just before the limit line, to measure the speed of vehicles. Using the speed measured, the system predicts if a particular vehicle will not be able to stop before entering the intersection, and takes two photographs of the event. The first photo shows the vehicle just before it enters the intersection, with the light showing red, and the second photo, taken a second or two later, shows the vehicle when it is in the intersection.
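The stop-line prediction described above can be illustrated with a short sketch. This is not any vendor's actual algorithm; the loop spacing, assumed braking deceleration and example timings below are hypothetical values chosen only to show the logic of measuring speed from two loops and predicting whether the vehicle can stop in time.

```python
# Illustrative sketch of the loop-based trigger logic described above. The
# loop spacing, assumed braking deceleration and example timings are
# hypothetical; real systems use vendor- and site-specific calibration.

LOOP_SPACING_M = 3.0      # distance between the two inductive loops (assumed)
BRAKING_DECEL_MS2 = 3.0   # assumed achievable braking deceleration

def measured_speed(t_loop1: float, t_loop2: float) -> float:
    """Vehicle speed in m/s from the times it crosses the two loops."""
    return LOOP_SPACING_M / (t_loop2 - t_loop1)

def predicted_violation(speed_ms: float, distance_to_stop_line_m: float) -> bool:
    """True if the vehicle cannot stop before the stop line at this speed."""
    stopping_distance = speed_ms ** 2 / (2 * BRAKING_DECEL_MS2)
    return stopping_distance > distance_to_stop_line_m

# Example: the signal is already red, and a vehicle crosses the two loops
# 0.2 s apart while still 10 m short of the stop line.
speed = measured_speed(0.0, 0.2)   # 15 m/s, about 54 km/h
if predicted_violation(speed, distance_to_stop_line_m=10.0):
    print(f"Trigger both photographs: {speed:.1f} m/s, predicted to enter on red")
```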
Details that may be recorded by the camera system (and later presented to the vehicle owner) include: the date and time, the location, the vehicle speed, and the amount of time elapsed since the light turned red and the vehicle passed into the intersection. The event is captured as a series of photographs or a video clip, or both, depending on the technology used, which shows the vehicle before it enters the intersection on a red light signal and its progress through the intersection. The data and images, whether digital or developed from film, are sent to the relevant law enforcement agency. There, the information is typically reviewed by a law enforcement official or police department clerk, who determines if a violation occurred and, if so, approves issuing a citation to the vehicle owner, who may challenge the citation.
Studies have shown that 38 % of violations occur within 0.25 seconds of the light turning red and 79 % within one second. A few red light camera systems allow a "grace period '' of up to half a second for drivers who pass through the intersection just as the light turns red. Ohio and Georgia introduced a statute requiring that one second be added to the standard yellow time of any intersection that has a red light camera, which has led to an 80 % reduction in tickets since its introduction. New Jersey has the strictest yellow timing provisions in the country as a result of concerns that cameras would be used to generate revenue; they have a statute specifying that the yellow time for an intersection that has a red light camera must be based on the speed at which 85 % of the road 's traffic moves rather than be based on the road 's actual speed limit.
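One widely used kinematic formula for setting the yellow change interval is Y = t + v / (2a + 2Gg), where t is the perception-reaction time, v the approach speed, a the assumed deceleration, G the approach grade and g gravitational acceleration. The sketch below is illustrative only; its perception-reaction time and deceleration are typical textbook assumptions, not values taken from the Ohio, Georgia or New Jersey provisions described above.

```python
# Kinematic yellow change interval: Y = t + v / (2a + 2Gg).
# Parameter values below are typical textbook assumptions, not statutory figures.

G_GRAVITY = 9.81             # m/s^2
PERCEPTION_REACTION_S = 1.0  # assumed perception-reaction time
DECELERATION_MS2 = 3.0       # assumed comfortable braking rate

def yellow_interval(approach_speed_ms: float, grade: float = 0.0) -> float:
    """Yellow duration in seconds for a given approach speed (m/s) and grade."""
    return PERCEPTION_REACTION_S + approach_speed_ms / (
        2 * DECELERATION_MS2 + 2 * grade * G_GRAVITY
    )

# An 85th-percentile approach speed of 50 km/h on level ground gives about 3.3 s.
print(f"{yellow_interval(50 / 3.6):.1f} s")
```

Because the speed term sits in the numerator, basing it on the 85th-percentile speed of actual traffic rather than the posted limit, as the New Jersey statute requires, lengthens the yellow wherever prevailing traffic moves faster than the limit.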
Red light camera usage is widespread in a number of countries worldwide. Netherlands - based Gatso presented red light cameras to the market in 1965, and red light cameras were used for traffic enforcement in Israel as early as 1969. In the early 1970s, red light cameras were used for traffic enforcement in at least one jurisdiction in Europe. Australia began to use them on a wide scale in the 1980s. As of July 21, 2010, expansion of red light camera usage in Australia is ongoing. In some areas of Australia, where the red light cameras are used, there is an online system to check the photograph taken of your vehicle if you receive a ticket. Singapore also began use of red light cameras in the 1980s, and installed the first camera systems during five years, starting in August 1986. In Canada, by 1998, red light cameras were in use in British Columbia and due to be implemented in Manitoba. In Alberta, red light cameras were installed in 1999 in Edmonton and in 2001 in Calgary. The UK first installed cameras in the 1990s, with the earliest locations including eight rail crossings in Scotland where there was greatest demand for enforcement of traffic signals due to fatalities.
Red light camera usage is extensive in mainland China. As of 2007, approximately 700 intersections in Shenzhen were monitored for red light violations, speeding, or both.
In Hong Kong, where red light cameras are installed, signs are erected to warn drivers that cameras are present, with the aim of educating drivers to stop for signals. The number of red light cameras in Hong Kong doubled in May 2004, and digital red light cameras were introduced at intersections identified by the police and transport department as having the most violations and greatest risk. The digital cameras were introduced to further deter red - light running. As added assistance to drivers, some of the camera posts were painted orange so that drivers could see them more easily. By 2006, Hong Kong had 96 red light cameras in operation. By 2016 this number had risen to 195.
In the United Kingdom the authorities often refer to red - light cameras, along with speed cameras, as safety cameras. They were first used in the early 1990s, with initial deployment by the Department for Environment, Transport and the Regions. All costs were paid by the local authority in which the individual camera was placed, and revenues accrued from fines were paid to the Treasury Consolidated Fund. In 1998 the government handed the powers of collection to local road - safety partnerships, comprising "... local authorities, Magistrates ' Courts, the Highways Agency and the police. ''
In a report, published in December 2005, there were a total of 612 red light cameras in England alone, of which 225 were in London.
Since the early 1990s, red light cameras have been used in the United States in 26 U.S. states and the District of Columbia. Within some states, the cameras may only be permitted in certain areas. For example, in New York State, the Vehicle and Traffic Law permits red light cameras only within cities with a population above 1 million (i.e. New York City), Rochester, Buffalo, Yonkers, and Nassau and Suffolk Counties. In Florida, a state law went into effect on 1 July 2010, which allows all municipalities in the state to use red light cameras on all state - owned rights - of - way and fine drivers who run red lights, with the aim of enforcing safe driving, according to then - Governor Charlie Crist. The name given to the state law is the Mark Wandall Traffic Safety Act, named for a man who was killed in 2003 by a motorist who ran a red light. In addition to allowing the use of cameras, the law also standardizes driver fines. Major cities throughout the US that use red light cameras include Atlanta, Austin, Baltimore, Baton Rouge, Chicago, Dallas, Denver, Los Angeles, Memphis, New Orleans, New York City, Newark, Philadelphia, Phoenix, Raleigh, San Francisco, Seattle, Toledo, and Washington, D.C. Albuquerque has cameras, but in October 2011 local voters approved a ballot measure advising the city council to cease authorizing the red light camera program. The City of Albuquerque ended its red light program on 31 December 2011.
In March 2017, the city of Chicago changed the period of time between when the light turns red and when the red - light camera is triggered (and a ticket issued) from 0.1 seconds to 0.3 seconds. The "grace period '' in Chicago is now in line with other major American cities like New York City and Philadelphia.
Suppliers of red - light cameras in the US include: Affiliated Computer Services (ACS) State and Local Solutions, a Xerox company, of Dallas, Texas; American Traffic Solutions of Scottsdale, Arizona, 1 / 3 owned by Goldman Sachs; Brekford International Corp., of Hanover, Maryland; CMA Consulting Services, Inc. of Latham, New York; Gatso USA of Beverly, Massachusetts; iTraffic Safety LLC of Ridgeland, South Carolina; Optotraffic, of Lanham, Maryland; Redflex Traffic Systems of Phoenix, Arizona, with its parent company in Australia; RedSpeed - Illinois LLC, of Lombard, Illinois, whose parent company is in Worcestershire, England; SafeSpeed LLC, of Chicago, Illinois, and Sensys America Inc., of Miami, Florida.
Some states have chosen to prohibit the use of red light cameras. These include Arkansas, Maine, Michigan, Mississippi, Montana, Nevada, New Hampshire and West Virginia.
In February 2012, the red light camera ordinance in the city of St. Louis was officially declared void by St. Louis Circuit Court Judge Mark Neill. On 9 August 2012, the Cary, North Carolina town council voted to end their program. In February 2013, the San Diego mayor helped remove a red light camera to keep the campaign promise he made during the November 2012 election to eliminate these systems. New Jersey had to renew the Red Light law by the state legislature in early 2015 and did not do this, making the use of red light cameras illegal in the state afterwards.
In the United States, fines are not standardized and vary to a great degree, from $50 in New York City to approximately $500 in California. The cost in California can increase to approximately $600 if the motorist elects to attend traffic school in order to avoid having a demerit point added to his or her driving record.
In many California police departments, when a positive identification can not be made, the registered owner of the vehicle will be mailed a notice of traffic violation instead of a real ticket. Also known as "snitch tickets, '' these notices are used to request identifying information about the driver of the vehicle during the alleged violation. Because these notices have not been filed at court, they carry no legal weight and the registered owner is under no obligation to respond. In California, a genuine ticket will bear the name and address of the local branch of the Superior Court and direct the recipient to contact that court. In contrast, a notice of traffic violation generated by the police will omit court information, using statements like "This is not a notice to appear '' and "Do not forward this information to the Court. ''
In September 2014, a bill was proposed in New Jersey to disallow the state Motor Vehicle Commission from sharing license plate and driver information needed to cite New Jersey drivers accused of committing infractions in another state.
A report in 2003 by the National Cooperative Highway Research Program (NCHRP) examined studies from the previous 30 years in Australia, the UK, Singapore, and the US, and concluded that red light cameras "improve the overall safety of intersections where they are used. '' While the report states that evidence is not conclusive (partly due to flaws in the studies), the majority of studies show a reduction in angle crashes, a smaller increase in rear - end crashes, with some evidence of a "spillover '' effect of reduced red light running to other intersections within a jurisdiction. These findings are similar to a 2005 meta analysis, which compared the results of 10 controlled before - after studies of red light cameras in the US, Australia, and Singapore. The analysis stated that the studies showed a reduction in crashes (up to almost 30 %) in which there were injuries, however, evidence was less conclusive for a reduction in total collisions. Studies of red light cameras worldwide show a reduction of crashes involving injury by about 25 % to 30 %, taking into account increases in rear - end crashes, according to testimony from a meeting of the Virginia House of Delegates Militia, Police, and Public Safety Committee in 2003. These findings are supported by a review of more than 45 international studies carried out in 2010, which found that red light cameras reduce red light violation rates, crashes resulting from red light running, and usually reduce right - angle collisions.
Beyond the many researched safety benefits of installing red light cameras, a few studies have examined changes in driver behavior at camera-equipped intersections, showing that drivers there tended to react more quickly to a yellow light when stopping. A consequence of this change could be a slight decline in intersection capacity. In terms of location - specific studies, in Singapore a study from 2003 found that there was "a substantial drop '' in red light violations at intersections with red light cameras. In particular the study found that drivers were encouraged to stop more readily in areas with red light cameras in use. A report from civic administrators in Saskatchewan in 2001, when considering red light camera use, referred to studies in the Netherlands and Australia that found a 40 % decrease in red light violations and 32 % decrease in right - angle crashes where red light cameras were installed. Following the introduction of red light cameras in Western Australia, the number of serious right - angle crashes decreased by 40 %, according to an article from the Canberra Times. In an article from the Xinhua General News Service, the Hong Kong transport department reported that in 2006 the monthly average number of crashes due to red light violations fell 25 % and the number of people injured in these crashes decreased by 30 %, following an increase in the number of red light cameras in use.
In the U.S. and Canada, a number of studies have examined whether red light cameras produce a safety benefit. A 2005 study by the U.S. Federal Highway Administration (FHWA) suggests red light cameras reduce dangerous right - angle crashes. The FHWA study has been criticized as containing critical methodological and analytical flaws and failing to explain an increase in fatalities associated with red light camera use:
(...) the authors spotlight the statistical difficulties of including the cost of fatalities, while ignoring the practical implications of such events (...) assuming that each angle injury crash had a societal cost of $64,468, when in fact the cost was $82,816 before camera use and $100,176 after camera use (...)
IIHS research on the safety effects of red light cameras has also been criticized as biased and methodologically flawed.
Not all studies have been favorable to the use of red light cameras. A 2004 study of 17,271 crashes from North Carolina A & T University showed that the presence of red light cameras increased the overall number of crashes by 40 %. This research received no peer review and is considered flawed by the IIHS. A 2005 Virginia Department of Transportation study of the long - term effects of camera enforcement in the state found a decrease in the number of right - angle crashes with injuries, but an increase in rear - end crashes and an overall increase in the number of crashes causing injuries. In 2007, the department issued an updated report which showed that the overall number of crashes at intersections with red light cameras increased. This report concluded that the decision to install red light cameras should be made on an intersection - by - intersection basis as some intersections saw decreases in crashes and injuries that justified the use of red light cameras, while others saw increases in crashes, indicating that the cameras were not suitable in that location. This study, too, is considered flawed by the IIHS. Aurora, Colorado experienced mixed results with red light cameras; after starting camera enforcement at 4 intersections, crashes decreased by 60 % at one, increased 100 % at two, and increased 175 % at the fourth. According to the IIHS, most studies suggest the increase in rear - end collisions decreases once drivers have become accustomed to the new dynamics of the intersection. Some locations experience a decrease in rear - end collisions at intersections with red light cameras over time, for instance, in Los Angeles such collisions fell 4.7 % from 2008 to 2009. However, a 2010 analysis by the Los Angeles City Controller found L.A. 's red light cameras had n't demonstrated an improvement in safety, specifically that of the 32 intersections equipped with cameras, 12 saw more crashes than before the cameras were installed, 4 had the same number, and 16 had fewer crashes; also that factors other than the cameras may have been responsible for the reduced crashes at the 16 intersections. And in Winnipeg, Manitoba, crashes were found to have significantly increased in the years following the deployment of red light cameras. In 2010, Arizona completed a study of their statewide 76 photo enforcement cameras and decided they would not renew the program in 2011; lower revenue than expected, mixed public acceptance and mixed accident data were cited.
Nevertheless, the FHWA has concluded that the cameras yielded a positive overall cost benefit due to the reduction in more expensive right - angle injury collisions. Other studies have found a greater crash reduction. For example, a 2005 study of the Raleigh, North Carolina, red light camera program conducted by the Institute for Transportation Research and Education at North Carolina State University found right - angle crashes dropped by 42 %, rear - end crashes dropped by 25 % and total crashes dropped by 17 %. In 2010, the IIHS looked at results of a number of studies and found that red light cameras reduce total collisions and particularly reduce the type of crashes that are especially likely to cause injuries. A 2011 IIHS report concluded that the rate of fatal collisions involving red - light running in cities with a population of 200,000 or greater was 24 % lower with cameras than it would have been without cameras.
A 2009 Public Opinion Strategies poll which asked, "Do you support or oppose the use of red - light cameras to detect red - light runners and enforce traffic laws in your state 's most dangerous intersections? '' found 69 % support and 29 % oppose. A 2012 telephone survey of District of Columbia residents published in the journal Traffic Injury Prevention found that 87 % favored red light cameras.
The National Motorists Association opposes red light cameras on the grounds that the use of these devices raises legal issues and violates the privacy of citizens. It also argues that the use of red light cameras does not increase safety. In the US, AAA Auto Club South argued against the passage of a Florida state law to allow red light cameras, stating that use of red light cameras was primarily for raising money for state and local government coffers and would not increase road safety. There have also been allegations of corruption involving the shortening of the amber (yellow) phase to increase the number of tickets. The construction of speed breakers or road bumps was a conventional method of forcing motorists to lower speeds, but it was dropped at some locales in favor of cameras due to lobbying efforts.
In Norway, Spain, and the Netherlands, a postal survey in 2003 showed acceptance of the use of red light cameras for traffic enforcement. For some groups, the enforcement of traffic laws is considered the main reason for using the red light cameras. For example, a report from civic administrators in Canada 's Saskatoon in 2001 described the cameras as "simply an enforcement tool used to penalize motorists that fail to stop for red traffic signals. ''
As of December 2016 Arizona, Arkansas, Louisiana, Maine, Mississippi, Montana, Nevada, New Jersey, South Carolina, South Dakota, Utah, West Virginia, and Wisconsin have enacted various prohibitions on red light, speed or other photo enforcement camera uses. Restrictions or conditions exist in additional states; the New Mexico Department of Transportation, for example, has asserted the right to restrict or prohibit red light cameras on state highways. While red light cameras may not be prohibited in other regions, they may have some restrictions on their use. In some jurisdictions, the law says that the camera needs to obtain a photo of the driver 's face in order for the citation issued for running the red light to be valid. This is the case in California and Colorado where the red light cameras are set up to take a series of photographs, including one of the driver 's face. In California, state law assesses a demerit point against a driver who runs a red light, and the need to identify the actual violator has led to the creation of a unique investigatory tool, the fake "ticket. '' Groups opposing the use of red light cameras have argued that where the cameras are not set up to identify the vehicle driver, owner liability issues are raised. It is perceived by some that the owner of the vehicle is unfairly penalized by being considered liable for red - light violations although they may not have been the driver at the time of the offense. In most jurisdictions the liability for red light violations is a civil offense, rather than a criminal citation, issued upon the vehicle owner -- similar to a parking ticket. The issue of owner liability has been addressed in the US courts, with a ruling in the District of Columbia Court of Appeals in 2007, which agreed with a lower court when it found that the presumption of liability of the owners of vehicles issued citations does not violate due process rights. This ruling was supported by a 2009 7th US Circuit Court of Appeals ruling in which it was held that issuing citations to vehicle owners (or lessees) is constitutional. The court stated that it also encourages drivers to be cautious in lending their vehicles to others.
The argument that red light cameras violate the privacy of citizens has also been addressed in the US courts. According to a 2009 ruling by the 7th US Circuit Court of Appeals, "no one has a fundamental right to run a red light or avoid being seen by a camera on a public street. '' In addition, cameras only take photographs or video when a vehicle has run a red light and, in most states, the camera does not photograph the driver or the occupants of the vehicle.
In most areas, red light enforcement cameras are installed and maintained by private firms. Lawsuits have been raised challenging private companies ' rights to hand out citations, such as a December 2008 lawsuit challenging the city of Dallas ' red light camera program, which was dismissed in March 2009. In most cases, citations are issued by law enforcement officers using the evidence provided by the companies.
There have been many instances where cities in the US have been found to have too - short yellow - light intervals at some intersections where red light cameras have been installed. In Tennessee, 176 drivers were refunded for fines paid after it was discovered that the length of the yellow was too short for that location, and motorists were caught running the light in the first second of the red phase. In California, a combined total of 7603 tickets were refunded or dismissed by the cities of Bakersfield, Costa Mesa, East LA, San Carlos, and Union City because of too - short yellows. Although national guidelines addressing the length of traffic signals are available, traffic signal phase times are determined by the government employees of the city, county or state for that signalized location. While some states set jurisdiction - wide constant durations for yellow - light intervals, a new standard is taking hold. Since 2011, states have been required to adopt the 2009 National Manual on Uniform Traffic Control Devices (MUTCD) as their legal state standard for traffic - control devices, with a compliance deadline of 2014. These standards require engineering practices to be used to set yellow - light - timing durations at individual intersections and / or corridors. For guidance to state authorities, the MUTCD states that yellow lights should have a minimum duration of 3 seconds and a maximum duration of 6 seconds. In the US, if any part of a driver 's vehicle has already passed into the intersection when the signal turns red, a violation is not generated. A ticket is only issued if the vehicle enters the intersection while the light is red.
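The passage above does not spell out what those engineering practices are. As an illustration only, the sketch below applies one widely cited kinematic formula for the yellow change interval (perception - reaction time plus v / (2a + 2Gg)) and clamps the result to the 3 - to 6 - second range given as MUTCD guidance. The speed, reaction time, deceleration, and grade values are assumptions, not figures from the manual.

```python
# Illustrative sketch of a yellow-change-interval calculation.
# The kinematic formula (reaction time + v / (2a + 2*G*g)) is a common
# engineering-practice approach; the numeric defaults below are assumptions.

G_GRAVITY = 9.81  # gravitational acceleration, m/s^2

def yellow_interval(approach_speed_kmh: float,
                    reaction_time_s: float = 1.0,
                    deceleration_ms2: float = 3.0,
                    grade: float = 0.0) -> float:
    """Return a yellow-light duration in seconds, clamped to the 3-6 s
    range that the MUTCD gives as guidance (see text above)."""
    v = approach_speed_kmh / 3.6  # convert km/h to m/s
    y = reaction_time_s + v / (2 * deceleration_ms2 + 2 * grade * G_GRAVITY)
    return min(max(y, 3.0), 6.0)  # clamp to the 3-6 second guidance range

# Example: a flat 60 km/h approach works out to roughly 3.8 s of yellow.
print(round(yellow_interval(60), 1))  # 3.8
```

A longer yellow for faster or downhill approaches follows directly from the formula, which is one reason too - short, one - size - fits - all yellow intervals attract criticism at camera - equipped intersections.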
In 2014, a bill was introduced in the United States House of Representatives attempting to prohibit red light cameras on federally funded highways and in the District of Columbia.
In 2010, it was revealed that in the municipality of Segrate, Italy, two nearby traffic lights had been synchronized such that drivers were forced either to break the speed limit or to pass during the red light. This was investigated as a deliberate fraud intended to increase the income from tickets. It took months before the machines were eventually dismantled by the Guardia di Finanza.
A red light camera is not the only countermeasure against red - light running. Others include increasing the visibility distance and conspicuity of the traffic light so it is more likely to attract the driver 's attention in time for him or her to stop, re-timing lights so drivers will encounter fewer red ones, increasing the duration of the yellow light between the green and the red, and adding a "clearance '' phase to the intersection 's traffic signals, during which all directions have a red light. It has been posited that the regulatory minimum yellow duration has been decreased over the years, that this is a cause of the increase in red - light running, and that the latter countermeasures amount to a reversion to earlier, longer regulated yellow - light durations.
the prosperous african empire that facilitated early globalization was | History of the world - wikipedia
The history of the world is the history of humanity (or human history), as determined from archaeology, anthropology, genetics, linguistics, and other disciplines; and, for periods since the invention of writing, from recorded history and from secondary sources and studies.
Humanity 's written history was preceded by its prehistory, beginning with the Palaeolithic Era ("Early Stone Age ''), followed by the Neolithic Era ("New Stone Age ''). The Neolithic saw the Agricultural Revolution begin, between 8000 and 5000 BCE, in the Near East 's Fertile Crescent. The Agricultural Revolution marked a fundamental change in history, with humans beginning the systematic husbandry of plants and animals. As agriculture advanced, most humans transitioned from a nomadic to a settled lifestyle as farmers in permanent settlements. The relative security and increased productivity provided by farming allowed communities to expand into increasingly larger units, fostered by advances in transportation.
Whether in prehistoric or historic times, people always had to be near reliable sources of potable water. Cities developed on river banks as early as 3000 BCE, when some of the first well - developed settlements arose in Mesopotamia, on the banks of Egypt 's Nile River, in the Indus River valley, and along China 's rivers. As farming developed, grain agriculture became more sophisticated and prompted a division of labour to store food between growing seasons. Labour divisions led to the rise of a leisured upper class and the development of cities, which provided the foundation for civilization. The growing complexity of human societies necessitated systems of accounting and writing.
With civilizations flourishing, ancient history ("Antiquity, '' including the Classical Age, up to about 500 CE) saw the rise and fall of empires. Post-classical history (the "Middle Ages, '' c. 500 -- 1500 CE) witnessed the rise of Christianity, the Islamic Golden Age (c. 750 CE -- c. 1258 CE), and the early Italian Renaissance (from around 1300 CE). The Early Modern Period, sometimes referred to as the "European Age '', from about 1500 to 1800, included the Age of Enlightenment and the Age of Discovery. The mid-15th - century invention of modern printing, employing movable type, revolutionized communication and facilitated ever wider dissemination of information, helping end the Middle Ages and ushering in the Scientific Revolution. By the 18th century, the accumulation of knowledge and technology had reached a critical mass that brought about the Industrial Revolution and began the Late Modern Period, which starts around 1800 and includes the current day.
This scheme of historical periodization (dividing history into Antiquity, Post-Classical, Early Modern, and Late Modern periods) was developed for, and applies best to, the history of the Old World, particularly Europe and the Mediterranean. Outside this region, including ancient China and ancient India, historical timelines unfolded differently. However, by the 18th century, due to extensive world trade and colonization, the histories of most civilizations had become substantially intertwined. In the last quarter - millennium, the rates of growth of population, knowledge, technology, communications, commerce, weapons destructiveness, and environmental degradation have greatly accelerated, creating opportunities and perils that now confront the planet 's human communities.
Genetic measurements indicate that the ape lineage which would lead to Homo sapiens diverged from the lineage that would lead to the bonobo, the closest living relative of modern humans, around 4.6 to 6.2 million years ago. Anatomically modern humans arose in Africa about 200,000 years ago, and reached behavioural modernity about 50,000 years ago.
Modern humans spread rapidly from Africa into the frost - free zones of Europe and Asia around 60,000 years ago. The rapid expansion of humankind to North America and Oceania took place at the climax of the most recent ice age, when temperate regions of today were extremely inhospitable. Yet, humans had colonized nearly all the ice - free parts of the globe by the end of the Ice Age, some 12,000 years ago. Other hominids such as Homo erectus had been using simple wood and stone tools for millennia, but as time progressed, tools became far more refined and complex.
Perhaps as early as 1.8 million years ago, but certainly by 500,000 years ago, humans began using fire for heat and cooking. They also developed language in the Paleolithic period and a conceptual repertoire that included systematic burial of the dead and adornment of the living. Early artistic expression can be found in the form of cave paintings and sculptures made from ivory, stone, and bone, showing a spirituality generally interpreted as animism, or even shamanism. During this period, all humans lived as hunter - gatherers, and were generally nomadic. Archaeological and genetic data suggest that the source populations of Paleolithic hunter - gatherers survived in sparsely wooded areas and dispersed through areas of high primary productivity while avoiding dense forest cover.
The Neolithic Revolution, beginning around 10,000 BCE, saw the development of agriculture, which fundamentally changed the human lifestyle. Farming developed around 10,000 BCE in the Middle East, around 7000 BCE in what is now China, about 6000 BCE in the Indus Valley and Europe, and about 4000 BCE in the Americas. Cultivation of cereal crops and the domestication of animals occurred around 8500 BCE in the Middle East, where wheat and barley were the first crops and sheep and goats were domesticated. In the Indus Valley, crops were cultivated by 6000 BCE, along with domesticated cattle. The Yellow River valley in China cultivated millet and other cereal crops by about 7000 BCE, but the Yangtze River valley domesticated rice earlier, by at least 8000 BCE. In the Americas, sunflowers were cultivated by about 4000 BCE, and corn and beans were domesticated in Central America by 3500 BCE. Potatoes were first cultivated in the Andes Mountains of South America, where the llama was also domesticated. Metal - working, starting with copper around 6000 BCE, was first used for tools and ornaments. Gold soon followed, with its main use being for ornaments. The need for metal ores stimulated trade, as many of the areas of early human settlement were lacking in ores. Bronze, an alloy of copper and tin, is first known from about 2500 BCE, but did not become widely used until much later.
Though early "cities '' appeared at Jericho and Catal Huyuk around 6000 BCE, the first civilizations did not emerge until around 3000 BCE in Egypt and Mesopotamia. These cultures gave birth to the invention of the wheel, mathematics, bronze - working, sailing boats, the pottery wheel, woven cloth, construction of monumental buildings, and writing. Writing developed independently and at different times in five areas of the world: Egypt (c. 3200 BCE), India (c. 3200 BCE), Mesopotamia (c. 3000 BCE), China (c. 1600 BCE), and Mesoamerica (c. 600 BCE).
Farming permitted far denser populations, which in time organized into states. Agriculture also created food surpluses that could support people not directly engaged in food production. The development of agriculture permitted the creation of the first cities. These were centres of trade, manufacturing and political power. Cities established a symbiosis with their surrounding countrysides, absorbing agricultural products and providing, in return, manufactured goods and varying degrees of military control and protection.
The development of cities was synonymous with the rise of civilization. Early civilizations arose first in Lower Mesopotamia (3000 BCE), followed by Egyptian civilization along the Nile River (3000 BCE), the Harappan civilization in the Indus River Valley (in present - day India and Pakistan; 2500 BCE), and Chinese civilization along the Yellow and Yangtze Rivers (2200 BCE). These societies developed a number of unifying characteristics, including a central government, a complex economy and social structure, sophisticated language and writing systems, and distinct cultures and religions. Writing facilitated the administration of cities, the expression of ideas, and the preservation of information.
Entities such as the Sun, Moon, Earth, sky, and sea were often deified. Shrines developed, which evolved into temple establishments, complete with a complex hierarchy of priests and priestesses and other functionaries. Typical of the Neolithic was a tendency to worship anthropomorphic deities. Among the earliest surviving written religious scriptures are the Egyptian Pyramid Texts, the oldest of which date to between 2400 and 2300 BCE.
The Bronze Age is part of the three - age system (Stone Age, Bronze Age, Iron Age) that for some parts of the world describes effectively the early history of civilization. During this era the most fertile areas of the world saw city - states and the first civilizations develop. These were concentrated in fertile river valleys: the Tigris and Euphrates in Mesopotamia, the Nile in Egypt, the Indus in the Indian subcontinent, and the Yangtze and Yellow Rivers in China.
Sumer, located in Mesopotamia, is the first known complex civilization, developing the first city - states in the 4th millennium BCE. It was in these cities that the earliest known form of writing, cuneiform script, appeared around 3000 BCE. Cuneiform writing began as a system of pictographs. These pictorial representations eventually became simplified and more abstract. Cuneiform texts were written on clay tablets, on which symbols were drawn with a blunt reed used as a stylus. Writing made the administration of a large state far easier.
Transport was facilitated by waterways -- by rivers and seas. The Mediterranean Sea, at the juncture of three continents, fostered the projection of military power and the exchange of goods, ideas, and inventions. This era also saw new land technologies, such as horse - based cavalry and chariots, that allowed armies to move faster.
These developments led to the rise of territorial states and empires. In Mesopotamia there prevailed a pattern of independent warring city - states and of a loose hegemony shifting from one city to another. In Egypt, by contrast, an initial division into Upper and Lower Egypt was shortly followed by the unification of the entire valley around 3100 BCE, and by lasting pacification thereafter. In Crete the Minoan civilization had entered the Bronze Age by 2700 BCE and is regarded as the first civilization in Europe. Over the next millennia, other river valleys saw monarchical empires rise to power. In the 25th -- 21st centuries BCE, the empires of Akkad and Sumer arose in Mesopotamia.
Over the following millennia, civilizations developed across the world. Trade increasingly became a source of power as states with access to important resources or controlling important trade routes rose to dominance. By 1400 BCE, Mycenaean Greece began to develop. In India this era was the Vedic period, which laid the foundations of Hinduism and other cultural aspects of early Indian society, and ended in the 6th century BCE. From around 550 BCE, many independent kingdoms and republics known as the Mahajanapadas were established across the subcontinent.
As complex civilizations arose in the Eastern Hemisphere, the indigenous societies in the Americas remained relatively simple and fragmented into diverse regional cultures. During the formative stage in Mesoamerica (about 1500 BCE to 500 CE), more complex and centralized civilizations began to develop, mostly in what is now Mexico, Central America, and Peru. They included civilizations such as the Olmec, Maya, Zapotec, Moche, and Nazca. They developed agriculture, growing maize, chili peppers, cocoa, tomatoes, and potatoes, crops unique to the Americas, and creating distinct cultures and religions. These ancient indigenous societies would be greatly affected, for good and ill, by European contact during the early modern period.
Beginning in the 8th century BCE, the "Axial Age '' saw the development of a set of transformative philosophical and religious ideas, mostly independently, in many different places. Chinese Confucianism, Indian Buddhism and Jainism, and Jewish monotheism are all claimed by some scholars to have developed in the 6th century BCE. (Karl Jaspers ' Axial - Age theory also includes Persian Zoroastrianism, but other scholars dispute his timeline for Zoroastrianism.) In the 5th century BCE, Socrates and Plato made substantial advances in the development of ancient Greek philosophy.
In the East, three schools of thought would dominate Chinese thinking until the modern day. These were Taoism, Legalism, and Confucianism. The Confucian tradition, which would become particularly dominant, looked for political morality not to the force of law but to the power and example of tradition. Confucianism would later spread to the Korean Peninsula and toward Japan.
In the West, the Greek philosophical tradition, represented by Socrates, Plato, Aristotle, and other philosophers, along with accumulated science, technology, and culture, diffused throughout Europe, Egypt, the Middle East, and Northwest India, starting in the 4th century BCE after the conquests of Alexander III of Macedon (Alexander the Great).
The millennium from 500 BCE to 500 CE saw a series of empires of unprecedented size develop. Well - trained professional armies, unifying ideologies, and advanced bureaucracies created the possibility for emperors to rule over large domains whose populations could attain numbers upwards of tens of millions of subjects. The great empires depended on military annexation of territory and on the formation of defended settlements to become agricultural centres. The relative peace that the empires brought encouraged international trade, most notably the massive trade routes in the Mediterranean, the maritime trade web in the Indian Ocean, and the Silk Road. In southern Europe, the Greeks (and later the Romans), in an era known as "classical antiquity, '' established cultures whose practices, laws, and customs are considered the foundation of contemporary Western culture.
There were a number of regional empires during this period. The kingdom of the Medes helped to destroy the Assyrian Empire in tandem with the nomadic Scythians and the Babylonians. Nineveh, the capital of Assyria, was sacked by the Medes in 612 BCE. The Median Empire gave way to successive Iranian empires, including the Achaemenid Empire (550 -- 330 BCE) and the Sasanian Empire (224 -- 651 CE).
Several empires began in modern - day Greece. First was the Delian League (from 477 BCE) and the succeeding Athenian Empire (454 -- 404 BCE), centred in present - day Greece. Later, Alexander the Great (356 -- 323 BCE), of Macedon, founded an empire of conquest, extending from present - day Greece to present - day India. The empire divided shortly after his death, but the influence of his Hellenistic successors made for an extended Hellenistic period (323 -- 31 BCE) throughout the region.
In Asia, the Maurya Empire (322 -- 185 BCE) existed in present - day India; in the 3rd century BCE, most of South Asia was united into the Maurya Empire by Chandragupta Maurya and flourished under Ashoka the Great. From the 3rd century CE, the Gupta dynasty oversaw the period referred to as ancient India 's Golden Age. From the 4th to 6th centuries, northern India was ruled by the Gupta Empire. In southern India, three prominent Dravidian kingdoms emerged: the Cheras, Cholas, and Pandyas. The ensuing stability contributed to ushering in the golden age of Hindu culture in the 4th and 5th centuries.
In Europe, the Roman Empire, centred in present - day Italy, began in the 7th century BCE. Beginning in the 3rd century BCE, the Roman Republic began expanding its territory through conquest and alliances. By the time of Augustus (63 BCE -- 14 CE), who became the first Roman Emperor, Rome had already established dominion over most of the Mediterranean. The empire would continue to grow, controlling much of the land from England to Mesopotamia, reaching its greatest extent under the emperor Trajan (d. 117 CE). In the 3rd century CE, the empire would split into western and eastern regions, with (usually) separate emperors. The Western empire would fall, in 476 CE, to German influence under Odoacer. The eastern empire, now known as the Byzantine Empire, with its capital at Constantinople, would continue for another thousand years, until overthrown by the Ottoman Empire in 1453 CE.
In China, the Qin dynasty (221 -- 206 BCE), the first imperial dynasty of China, was followed by the Han Empire (206 BCE -- 220 CE). The Han Dynasty was comparable in power and influence to the Roman Empire that lay at the other end of the Silk Road. Han China developed advanced cartography, shipbuilding, and navigation. The Chinese invented blast furnaces, and created finely tuned copper instruments. As with other empires during the Classical Period, Han China advanced significantly in the areas of government, education, mathematics, astronomy, technology, and many others.
In Africa, the Kingdom of Aksum, centred in present - day Ethiopia, established itself by the 1st century CE as a major trading empire, dominating its neighbours in South Arabia and Kush and controlling the Red Sea trade. It minted its own currency and carved enormous monolithic steles such as the Obelisk of Axum to mark their emperors ' graves.
Successful regional empires were also established in the Americas, arising from cultures established as early as 2500 BCE. In Mesoamerica, vast pre-Columbian societies were built, the most notable being the Zapotec Empire (700 BCE -- 1521 CE), and the Maya civilization, which reached its highest state of development during the Mesoamerican Classic period (c. 250 -- 900 CE), but continued throughout the Post-Classic period until the arrival of the Spanish in the 16th century CE. Maya civilization arose as the Olmec mother culture gradually declined. The great Mayan city - states slowly rose in number and prominence, and Maya culture spread throughout the Yucatán and surrounding areas. The later empire of the Aztecs was built on neighbouring cultures and was influenced by conquered peoples such as the Toltecs.
Some areas experienced slow but steady technological advances, with important developments such as the stirrup and moldboard plough arriving every few centuries. There were, however, in some regions, periods of rapid technological progress. Most important, perhaps, was the Mediterranean area during the Hellenistic period, when hundreds of technologies were invented. Such periods were followed by periods of technological decay, as during the Roman Empire 's decline and fall and the ensuing early medieval period.
The empires faced common problems associated with maintaining huge armies and supporting a central bureaucracy. These costs fell most heavily on the peasantry, while land - owning magnates increasingly evaded centralized control and its costs. Barbarian pressure on the frontiers hastened internal dissolution. China 's Han dynasty fell into civil war in 220 CE, beginning the Three Kingdoms period, while its Roman counterpart became increasingly decentralized and divided about the same time in what is known as the Crisis of the Third Century. The great empires of Eurasia were all located on temperate and subtropical coastal plains. From the Central Asian steppes, horse - based nomads (mainly Mongols and Turks) dominated a large part of the continent. The development of the stirrup and the breeding of horses strong enough to carry a fully armed archer made the nomads a constant threat to the more settled civilizations.
The gradual break - up of the Roman Empire, spanning several centuries after the 2nd century CE, coincided with the spread of Christianity outward from the Middle East. The Western Roman Empire fell under the domination of Germanic tribes in the 5th century, and these polities gradually developed into a number of warring states, all associated in one way or another with the Catholic Church. The remaining part of the Roman Empire, in the eastern Mediterranean, continued as what came to be called the Byzantine Empire. Centuries later, a limited unity would be restored to western Europe through the establishment in 962 of a revived "Roman Empire '', later called the Holy Roman Empire, comprising a number of states in what is now Germany, Austria, Switzerland, Czech Republic, Belgium, Italy, and parts of France.
In China, dynasties would rise and fall, but, by sharp contrast to the Mediterranean - European world, dynastic unity would be restored. After the fall of the Eastern Han Dynasty and the demise of the Three Kingdoms, nomadic tribes from the north began to invade in the 4th century, eventually conquering areas of northern China and setting up many small kingdoms. The Sui Dynasty successfully reunified the whole of China in 581, and laid the foundations for a Chinese golden age under the Tang dynasty (618 -- 907).
The Post-classical Era, though deriving its name from the Eurocentric era of "Classical antiquity '', refers to a broader geographic sweep. The era is commonly dated from the 5th - century fall of the Western Roman Empire, which fragmented into many separate kingdoms, some of which would later be confederated under the Holy Roman Empire. The Eastern Roman, or Byzantine, Empire survived until late in the Post-classical, or Medieval, period. The Post-classical period also encompasses the Early Muslim conquests, the subsequent Islamic Golden Age, and the commencement and expansion of the Arab slave trade, followed by the Mongol invasions in the Middle East and Central Asia, and the founding around 1280 of the Ottoman Empire. South Asia saw a series of middle kingdoms of India, followed by the establishment of Islamic empires in India.
In western Africa, the Mali Empire and the Songhai Empire developed. On the southeast coast of Africa, Arabic ports were established where gold, spices, and other commodities were traded. This allowed Africa to join the Southeast Asia trading system, bringing it into contact with Asia; this, along with Muslim culture, resulted in the Swahili culture. The Chinese Empire experienced the successive Sui, Tang, Song, Yuan, and early Ming dynasties. Middle Eastern trade routes along the Indian Ocean, and the Silk Road through the Gobi Desert, provided limited economic and cultural contact between Asian and European civilizations. During the same period, civilizations in the Americas, such as the Inca, Maya, and Aztec, reached their height; all would be seriously compromised by contact with European colonists at the beginning of the Modern period.
Prior to the advent of Islam in the 7th century, the Middle East was dominated by the Byzantine Empire and the Persian Sasanian Empire, which constantly fought each other for control of several disputed regions. This was also a cultural battle, with the Byzantine Hellenistic and Christian culture competing against the Persian Iranian traditions and Zoroastrian religion. The formation of the Islamic religion created a new contender that quickly surpassed both of these empires. Islam greatly affected the political, economic, and military history of the Old World, especially the Middle East.
From their centre on the Arabian Peninsula, Muslims began their expansion during the early Postclassical Era. By 750 CE, they came to conquer most of the Near East, North Africa, and parts of Europe, ushering in an era of learning, science, and invention known as the Islamic Golden Age. The knowledge and skills of the ancient Near East, Greece, and Persia were preserved in the Postclassical Era by Muslims, who also added new and important innovations from outside, such as the manufacture of paper from China and decimal positional numbering from India.
Much of this learning and development can be linked to geography. Even prior to Islam 's presence, the city of Mecca had served as a centre of trade in Arabia, and the Islamic prophet Muhammad himself was a merchant. With the new Islamic tradition of the Hajj, the pilgrimage to Mecca, the city became even more a centre for exchanging goods and ideas. The influence held by Muslim merchants over African - Arabian and Arabian - Asian trade routes was tremendous. As a result, Islamic civilization grew and expanded on the basis of its merchant economy, in contrast to the Europeans, Indians, and Chinese, who based their societies on an agricultural landholding nobility. Merchants brought goods and their Islamic faith to China, India, southeast Asia, and the kingdoms of western Africa, and returned with new discoveries and inventions.
Motivated by religion and dreams of conquest, European kings launched a number of Crusades to try to roll back Muslim power and retake the Holy Land. The Crusades were ultimately unsuccessful and served more to weaken the Byzantine Empire, especially after the 1204 sack of Constantinople; the empire subsequently lost increasing amounts of territory to the Ottoman Turks. Arab domination of the region ended in the mid-11th century with the arrival of the Seljuq Turks, migrating south from the Turkic homelands in Central Asia. In the early 13th century, a new wave of invaders, the Mongol Empire 's armies, swept through the region but were eventually eclipsed by the Turks and the founding of the Ottoman Empire in modern - day Turkey around 1280.
Starting with the Sui Dynasty (581 -- 618), the Chinese began expanding into eastern Central Asia, and had to deal with Turkic nomads, who were becoming the most dominant ethnic group in Central Asia. Originally the relationship was largely cooperative, but in 630 the Tang dynasty began an offensive against the Turks, capturing areas of the Mongolian Ordos Desert. The Tang Empire competed with the Tibetan Empire for control of areas in Inner and Central Asia. In the 8th century, Islam began to penetrate the region and soon became the sole faith of most of the population, though Buddhism remained strong in the east. The desert nomads of Arabia could militarily match the nomads of the steppe, and the early Arab Empire gained control over parts of Central Asia.
The Hephthalites were the most powerful of the nomad groups in the 6th and 7th centuries, and controlled much of the region. In the 9th through 13th centuries the region was divided among several powerful states, including the Samanid dynasty, the Seljuq dynasty, and the Khwarezmid Empire. The most spectacular power to rise out of Central Asia developed when Genghis Khan united the tribes of Mongolia. The Mongol Empire spread to comprise all of Central Asia and China as well as large parts of Russia, and the Middle East. After Genghis Khan died in 1227, most of Central Asia continued to be dominated by the successor Chagatai Khanate. In 1369, Timur, a Turkic leader in the Mongol military tradition, conquered most of the region and founded the Timurid Empire. Timur 's large empire collapsed soon after his death, however. The region then became divided into a series of smaller khanates that were created by the Uzbeks. These included the Khanate of Khiva, the Khanate of Bukhara, and the Khanate of Kokand, all of whose capitals are located in present - day Uzbekistan.
North Africa saw the rise of polities formed by the Berbers, such as the Marinid dynasty in Morocco, the Zayyanid dynasty in Algeria, and the Hafsid dynasty in Tunisia. The region would later be called the Barbary Coast and would host pirates and privateers who used several North African ports for raids against the coastal towns of several European countries in search of slaves to be sold in North African markets as part of the Barbary slave trade.
Europe during the Early Middle Ages was characterized by depopulation, deurbanization, and barbarian invasion, all of which had begun in Late Antiquity. The barbarian invaders formed their own new kingdoms in the remains of the Western Roman Empire. In the 7th century, North Africa and the Middle East, once part of the Eastern Roman Empire, became part of the Caliphate after conquest by Muhammad 's successors. Although there were substantial changes in society and political structures, the break was not as extreme as once put forth by historians, with most of the new kingdoms incorporating as many of the existing Roman institutions as they could. Christianity expanded in western Europe, and monasteries were founded. In the 7th and 8th centuries the Franks, under the Carolingian dynasty, established an empire covering much of western Europe; it lasted until the 9th century, when it succumbed to pressure from new invaders -- the Vikings, Magyars, and Saracens.
During the High Middle Ages, which began after 1000, the population of Europe increased greatly as technological and agricultural innovations allowed trade to flourish and crop yields to increase. Manorialism -- the organization of peasants into villages that owed rents and labour service to nobles -- and feudalism -- a political structure whereby knights and lower - status nobles owed military service to their overlords in return for the right to rents from lands and manors -- were two of the ways of organizing medieval society that developed during the High Middle Ages. Kingdoms became more centralized after the decentralizing effects of the breakup of the Carolingian Empire. The Crusades, first preached in 1095, were an attempt by western Christians from countries such as the Kingdom of England, the Kingdom of France and the Holy Roman Empire to regain control of the Holy Land from the Muslims and succeeded long enough to establish some Christian states in the Near East. Also, merchants imported thousands of Armenians, Circassians, Georgians, Greeks and Slavs into Italy to work as household slaves and in processing sugar. Intellectual life was marked by scholasticism and the founding of universities, while the building of Gothic cathedrals was one of the outstanding artistic achievements of the age.
The Late Middle Ages were marked by difficulties and calamities. Famine, plague and war devastated the population of western Europe. The Black Death alone killed approximately 75 to 200 million people between 1347 and 1350. It was one of the deadliest pandemics in human history. Starting in Asia, the disease reached the Mediterranean and western Europe during the late 1340s, and killed tens of millions of Europeans in six years; between a third and a half of the population perished.
The Middle Ages witnessed the first sustained urbanization of northern and western Europe. Many modern European states owe their origins to events unfolding in the Middle Ages; present European political boundaries are, in many regards, the result of military and dynastic events during this tumultuous period. The Middle Ages lasted until the beginning of the Early modern period in the 16th century, marked by the rise of nation states, the division of Western Christianity in the Reformation, the rise of humanism in the Italian Renaissance, and the beginnings of European overseas expansion which allowed for the Columbian Exchange.
Medieval Sub-Saharan Africa was home to many different civilizations. The Kingdom of Aksum declined in the 7th century as Islam cut it off from its Christian allies and its people moved further into the Ethiopian Highlands for protection. They eventually gave way to the Zagwe dynasty who are famed for their rock cut architecture at Lalibela. The Zagwe would then fall to the Solomonic dynasty who claimed descent from the Aksumite emperors and would rule the country well into the 20th century. In the West African Sahel region, many Islamic empires rose, such as the Ghana Empire, the Mali Empire, the Songhai Empire, and the Kanem Empire. They controlled the trans - Saharan trade in gold, ivory, salt and slaves.
South of the Sahel, civilizations rose in the coastal forests where horses and camels could not survive. These include the Yoruba city of Ife, noted for its art, the Oyo Empire, the Benin Empire of the Edo people centred in Benin City, the Igbo Kingdom of Nri, which produced advanced bronze art at Igbo - Ukwu, and the Akan, who are noted for their intricate architecture.
Central Africa saw the birth of several states, including the Kingdom of Kongo. In what is now modern Zimbabwe various kingdoms such as the Kingdom of Mutapa descended from the Kingdom of Mapungubwe in modern South Africa. They flourished through trade with the Swahili people on the East African coast. They built large defensive stone structures without mortar such as Great Zimbabwe, capital of the Kingdom of Zimbabwe, Khami, capital of Kingdom of Butua, and Danangombe (Dhlo - Dhlo), capital of the Rozwi Empire. The Swahili people themselves were the inhabitants of the East African coast from Kenya to Mozambique who traded extensively with Asians and Arabs, who introduced them to Islam. They built many port cities such as Mombasa, Zanzibar and Kilwa, which were known to Chinese sailors under Zheng He and Islamic geographers.
In northern India, after the fall (550 CE) of the Gupta Empire, the region divided into a complex and fluid network of smaller kingly states. Early Muslim incursions began in the west in 712 CE, when the Arab Umayyad Caliphate annexed much of present - day Pakistan. Arab military advance was largely halted at that point, but Islam still spread in India, largely due to the influence of Arab merchants along the western coast. The Tripartite Struggle for control of northern India took place in the ninth century. The struggle was between the Pratihara Empire, the Pala Empire and the Rashtrakuta Empire. Some of the important states that emerged in India at this time included the Bahmani Sultanate and the Vijayanagara Empire. Post-classical dynasties in South India included those of the Chalukyas, the Rashtrakutas, the Hoysalas, the Cholas, the Islamic Mughals, the Marathas and the Mysores. Science, engineering, art, literature, astronomy, and philosophy flourished under the patronage of these kings.
After a period of relative disunity, the Sui dynasty reunified China in 581, and under the succeeding Tang dynasty (618 -- 907) China entered a Golden Age. The Tang dynasty eventually splintered, however, and after half a century of turmoil the Song Dynasty reunified China, when it was, according to William McNeill, the "richest, most skilled, and most populous country on earth ''. Pressure from nomadic empires to the north became increasingly urgent. By 1142, North China had been lost to the Jurchens in the Jin -- Song Wars, and the Mongol Empire conquered all of China in 1279, along with almost half of Eurasia 's landmass. After about a century of Mongol Yuan dynasty rule, the ethnic Chinese reasserted control with the founding of the Ming dynasty (1368).
In Japan, the imperial lineage had been established by this time, and during the Asuka period (538 -- 710) the Yamato Province developed into a clearly centralized state. Buddhism was introduced, and there was an emphasis on the adoption of elements of Chinese culture and Confucianism. The Nara period of the 8th century marked the emergence of a strong Japanese state and is often portrayed as a golden age. During this period, the imperial government undertook great public works, including government offices, temples, roads, and irrigation systems. The Heian period (794 to 1185) saw the peak of imperial power, followed by the rise of militarized clans, and the beginning of Japanese feudalism. The feudal period of Japanese history, dominated by powerful regional lords (daimyōs) and the military rule of warlords (shōguns) such as the Ashikaga shogunate and Tokugawa shogunate, stretched from 1185 to 1868. The emperor remained, but mostly as a figurehead, and the power of merchants was weak.
Postclassical Korea saw the end of the Three Kingdoms era, the three kingdoms being Goguryeo, Baekje and Silla. Silla conquered Baekje in 660, and Goguryeo in 668, marking the beginning of the North -- South States Period (남북 국 시대), with Unified Silla in the south and Balhae, a successor state to Goguryeo, in the north. In 892 CE, this arrangement reverted to the Later Three Kingdoms, with Goguryeo (then called Taebong and eventually named Goryeo) emerging as dominant, unifying the entire peninsula by 936. The founding Goryeo dynasty ruled until 1392, succeeded by the Joseon, which ruled for approximately 500 years.
The beginning of the Middle Ages in Southeast Asia saw the fall (550 CE) of the Kingdom of Funan to the Chenla Empire, which was then replaced by the Khmer Empire (802 CE). The Khmer 's capital city Angkor was the largest city in the world prior to the industrial age and contained over a thousand temples, the most famous being Angkor Wat. The Sukhothai (1238 CE) and Ayutthaya (1351 CE) kingdoms were major powers of the Thai people, who were influenced by the Khmer. Starting in the 9th century, the Pagan Kingdom rose to prominence in modern Myanmar. Other notable kingdoms of the period include the Srivijayan Empire and the Lavo Kingdom (both coming into prominence in the 7th century), the Champa and the Hariphunchai (both about 750), the Dai Viet (968), Lan Na (13th century), Majapahit (1293), Lan Xang (1354), and the Kingdom of Ava (1364). Taiwanese aborigines formed tribal alliances such as the Kingdom of Middag. It was also during this period that Islam spread to present - day Indonesia (beginning in the 13th century), and the Malay states began to emerge including the Malacca Sultanate, the Bruneian Empire and the Rajahnate of Maynila.
The Tuʻi Tonga Empire was founded in the 10th century CE and expanded between 1200 and 1500. Tongan culture, language, and hegemony spread widely throughout Eastern Melanesia, Micronesia and Central Polynesia during this period, influencing East ' Uvea, Rotuma, Futuna, Samoa and Niue, as well as specific islands / parts of Micronesia (Kiribati, Pohnpei, the Mariana Islands populated by the Chamorro people and miscellaneous outliers), Vanuatu, and New Caledonia (specifically, the Loyalty Islands, with the main island being predominantly populated by the Melanesian Kanak people and their cultures). At around the same time, a powerful thalassocracy appeared in Eastern Polynesia centred around the Society Islands, specifically on the sacred Taputapuatea marae, which drew in Eastern Polynesian colonists from places as far away as Hawai'i, New Zealand (Aotearoa), and the Tuamotu Islands for political, spiritual and economic reasons, until the unexplained collapse of regular long - distance voyaging in the Eastern Pacific a few centuries before Europeans began exploring the area. Indigenous written records from this period are virtually non-existent, as it seems that all Pacific Islanders, with the possible exception of the enigmatic Rapa Nui and their currently undecipherable Rongorongo script, had no writing systems of any kind until after their introduction by European colonists; however, some indigenous prehistories can be estimated and academically reconstructed through careful, judicious analysis of native oral traditions, colonial ethnography, archaeology, physical anthropology and linguistics research.
In North America, this period saw the rise of the Mississippian culture in the modern United States c. 800 CE, marked by the extensive 12th - century urban complex at Cahokia. The Ancestral Puebloans and their predecessors (9th -- 13th centuries) built extensive permanent settlements, including stone structures that would remain the largest buildings in North America until the 19th century. In Mesoamerica, the Teotihuacan civilization fell and the Classic Maya collapse occurred. The Aztec Empire came to dominate much of Mesoamerica in the 14th and 15th centuries. In South America, the 14th and 15th centuries saw the rise of the Inca. The Inca Empire of Tawantinsuyu, with its capital at Cusco, spanned the entire Andes Mountain Range, making it the most extensive Pre-Columbian civilization. The Inca were prosperous and advanced, known for an excellent road system and unrivaled masonry.
Modern history (the "modern period, '' the "modern era, '' "modern times '') is history of the period following the Middle Ages. "Contemporary history '' is history that only covers events from around 1945 to the present day.
"Early modern period '' is a term used by historians to refer to the period between the Middle Ages (Post-classical history) and the Industrial Revolution -- roughly 1500 to 1800. The Early Modern period is characterized by the rise of science, and by increasingly rapid technological progress, secularized civic politics, and the nation state. Capitalist economies began their rise, initially in northern Italian republics such as Genoa. The Early Modern period also saw the rise and dominance of the mercantilist economic theory. As such, the Early Modern period represents the decline and eventual disappearance, in much of the European sphere, of feudalism, serfdom and the power of the Catholic Church. The period includes the Protestant Reformation, the disastrous Thirty Years ' War, the Age of Discovery, European colonial expansion, the peak of European witch - hunting, the Scientific revolution, and the Age of Enlightenment.
Europe 's Renaissance, beginning in the 14th century and extending into the 16th, consisted of the rediscovery of the classical world 's scientific contributions, and of the economic and social rise of Europe. The Renaissance also engendered a culture of inquisitiveness which ultimately led to Humanism and the Scientific Revolution. Although it saw social and political upheaval and revolutions in many intellectual pursuits, the Renaissance is perhaps known best for its artistic developments and the contributions of such polymaths as Leonardo da Vinci and Michelangelo, who inspired the term "Renaissance man. ''
During this period, European powers came to dominate most of the world. Although the most developed regions of European classical civilization were more urbanized than any other region of the world, European civilization had undergone a lengthy period of gradual decline and collapse. During the Early Modern Period, Europe was able to regain its dominance; historians still debate the causes.
Europe 's success in this period stands in contrast to other regions. For example, one of the most advanced civilizations of the Middle Ages was China. It had developed an advanced monetary economy by 1,000 CE. China had a free peasantry who were no longer subsistence farmers, and could sell their produce and actively participate in the market. According to Adam Smith, writing in the 18th century, China had long been one of the richest, most fertile, best cultivated, most industrious, most urbanized, and most prosperous countries in the world. It enjoyed a technological advantage and had a monopoly in cast iron production, piston bellows, suspension bridge construction, printing, and the compass. However, it seemed to have long since stopped progressing. Marco Polo, who visited China in the 13th century, describes its cultivation, industry, and populousness almost in the same terms as travelers would in the 18th century.
One theory of Europe 's rise holds that Europe 's geography played an important role in its success. The Middle East, India and China are all ringed by mountains and oceans but, once past these outer barriers, are nearly flat. By contrast, the Pyrenees, Alps, Apennines, Carpathians and other mountain ranges run through Europe, and the continent is also divided by several seas. This gave Europe some degree of protection from the peril of Central Asian invaders. Before the era of firearms, these nomads were militarily superior to the agricultural states on the periphery of the Eurasian continent and, as they broke out into the plains of northern India or the valleys of China, were all but unstoppable. These invasions were often devastating. The Golden Age of Islam was ended by the Mongol sack of Baghdad in 1258. India and China were subject to periodic invasions, and Russia spent a couple of centuries under the Mongol - Tatar yoke. Central and western Europe, logistically more distant from the Central Asian heartland, proved less vulnerable to these threats.
Geography contributed to important geopolitical differences. For most of their histories, China, India, and the Middle East were each unified under a single dominant power that expanded until it reached the surrounding mountains and deserts. In 1600 the Ottoman Empire controlled almost all the Middle East, the Ming dynasty ruled China, and the Mughal Empire held sway over India. By contrast, Europe was almost always divided into a number of warring states. Pan-European empires, with the notable exception of the Roman Empire, tended to collapse soon after they arose. Another doubtless important geographic factor in the rise of Europe was the Mediterranean Sea, which, for millennia, had functioned as a maritime superhighway fostering the exchange of goods, people, ideas and inventions.
Nearly all the agricultural civilizations have been heavily constrained by their environments. Productivity remained low, and climatic changes easily instigated boom - and - bust cycles that brought about civilizations ' rise and fall. By about 1500, however, there was a qualitative change in world history. Technological advance and the wealth generated by trade gradually brought about a widening of possibilities.
Many have also argued that Europe 's institutions allowed it to expand, that property rights and free - market economics were stronger than elsewhere due to an ideal of freedom peculiar to Europe. In recent years, however, scholars such as Kenneth Pomeranz have challenged this view. Europe 's maritime expansion unsurprisingly -- given the continent 's geography -- was largely the work of its Atlantic states: Portugal, Spain, England, France, and the Netherlands. Initially the Portuguese and Spanish Empires were the predominant conquerors and sources of influence, and their union resulted in the Iberian Union, the first global empire on which the "sun never set ''. Soon the more northern English, French and Dutch began to dominate the Atlantic. In a series of wars fought in the 17th and 18th centuries, culminating with the Napoleonic Wars, Britain emerged as the new world power.
Persia came under the rule of the Safavid Empire in 1501, succeeded by the Afsharid Empire in 1736, and the Qajar Empire in 1796. Areas to the north and east were held by Uzbeks and Pashtuns. The Ottoman Empire, after taking Constantinople in 1453, quickly gained control of the Middle East, the Balkans, and most of North Africa.
In Africa, this period saw a decline in many civilizations and an advancement in others. The Swahili Coast declined after coming under Portuguese (and later Omani) control. In west Africa, the Songhai Empire fell to the Moroccans in 1591 when they invaded with guns. The southern African Kingdom of Zimbabwe gave way to smaller kingdoms such as Mutapa, Butua, and Rozwi. Ethiopia suffered from the 1531 invasion by the neighbouring Muslim Adal Sultanate, and in 1769 entered the Zemene Mesafint (Age of Princes) during which the Emperor became a figurehead and the country was ruled by warlords, though the royal line would later recover under Emperor Tewodros II. The Ajuran Empire, in the Horn of Africa, began to decline in the 17th century, succeeded by the Geledi Sultanate. Other civilizations in Africa advanced during this period. The Oyo Empire experienced its golden age, as did the Benin Empire. The Ashanti Empire rose to power in what is modern - day Ghana in 1670. The Kingdom of Kongo also thrived during this period. European exploration of Africa reached its zenith at this time.
In the Far East, the Chinese Ming Dynasty gave way (1644) to the Qing, the last Chinese imperial dynasty, which would rule until 1912. Japan experienced its Azuchi -- Momoyama period (1568 -- 1603), followed by the Edo period (1603 -- 1868). The Korean Joseon Dynasty (1392 -- 1910) ruled throughout this period, successfully repelling 16th - and 17th - century invasions from Japan and China. Japan and China were significantly affected during this period by expanded maritime trade with Europe, particularly the Portuguese in Japan. During the Edo period, Japan would pursue isolationist policies, to eliminate foreign influences.
On the Indian subcontinent, the Delhi Sultanate and the Deccan sultanates would give way, beginning in the 16th century, to the Mughal Empire. Starting in the northwest, the Mughal Empire would by the late 17th century come to rule the entire subcontinent, except for the southernmost Indian provinces, which would remain independent. Against the Muslim Mughal Empire, the Hindu Maratha Empire was founded on the west coast in 1674, gradually gaining territory -- a majority of present - day India -- from the Mughals over several decades, particularly in the Mughal -- Maratha Wars (1681 -- 1701). The Maratha Empire would in 1818 fall under the control of the British East India Company, with all former Maratha and Mughal authority devolving in 1858 to the British Raj.
In 1511 the Portuguese overthrew the Malacca Sultanate in present - day Malaysia and Indonesian Sumatra. The Portuguese held this important trading territory (and the valuable associated navigational strait) until overthrown by the Dutch in 1641. The Johor Sultanate, centred on the southern tip of the Malay Peninsula, became the dominant trading power in the region. European colonization expanded with the Dutch in the Netherlands East Indies, and the Spanish in the Philippines. Into the 19th century, European expansion would affect the whole of Southeast Asia, with the British in Myanmar and Malaysia, and the establishment of French Indochina. Only Thailand would successfully resist colonization.
The Pacific islands of Oceania would also be affected by European contact, starting with the circumnavigational voyage of Ferdinand Magellan, who landed on the Marianas and other islands in 1521. Also notable were the voyages (1642 -- 44) of Abel Tasman to present - day Australia, New Zealand and nearby islands, and the voyages (1768 -- 1779) of Captain James Cook, who made the first recorded European contact with Hawaii. Britain would found its first colony in Australia in 1788.
In the Americas, the western European powers vigorously colonized the newly discovered continents, largely displacing the indigenous populations, and destroying the advanced civilizations of the Aztecs and the Inca. Spain, Portugal, Britain, and France all made extensive territorial claims, and undertook large - scale settlement, including the importation of large numbers of African slaves. Portugal claimed Brazil. Spain claimed the rest of South America, Mesoamerica, and southern North America. Britain colonized the east coast of North America, and France colonized the central region of North America. Russia made incursions onto the northwest coast of North America, with a first colony in present - day Alaska in 1784, and the outpost of Fort Ross in present - day California in 1812. In 1762, in the midst of the Seven Years ' War, France secretly ceded most of its North American claims to Spain in the Treaty of Fontainebleau. Thirteen of the British colonies declared independence as the United States of America in 1776, ratified by the Treaty of Paris in 1783, ending the American Revolutionary War. Napoleon Bonaparte won France 's claims back from Spain in 1800, but sold them to the United States in 1803 as the Louisiana Purchase.
In Russia, Ivan the Terrible was crowned (1547) the first Tsar of Russia, and by annexing the Turkic Khanates in the east, transformed Russia into a regional power. The countries of western Europe, while expanding prodigiously through technological advancement and colonial conquest, competed with each other economically and militarily in a state of almost constant war. Often the wars had a religious dimension, either Catholic versus Protestant, or (primarily in eastern Europe) Christian versus Muslim. Wars of particular note include the Thirty Years ' War, the War of the Spanish Succession, the Seven Years ' War, and the French Revolutionary Wars. Napoleon came to power in France in 1799, an event foreshadowing the Napoleonic Wars of the early 19th century.
The Scientific Revolution changed humanity 's understanding of the world and led to the Industrial Revolution, a major transformation of the world 's economies. The Scientific Revolution in the 17th century had had little immediate effect on industrial technology; only in the second half of the 18th century did scientific advances begin to be applied substantially to practical invention. The Industrial Revolution began in Great Britain and used new modes of production -- the factory, mass production, and mechanization -- to manufacture a wide array of goods faster and using less labour than previously required. The Age of Enlightenment also led to the beginnings of modern democracy in the late - 18th century American and French Revolutions. Democracy and republicanism would grow to have a profound effect on world events and on quality of life.
After Europeans had achieved influence and control over the Americas, imperial activities turned to the lands of Asia and Oceania. In the 19th century the European states had social and technological advantages over Eastern lands. Britain gained control of the Indian subcontinent, Egypt and the Malay Peninsula; the French took Indochina; while the Dutch cemented their control over the Dutch East Indies. The British also colonized Australia, New Zealand and South Africa, with large numbers of British colonists emigrating to these colonies. Russia colonized large pre-agricultural areas of Siberia. In the late 19th century, the European powers divided the remaining areas of Africa. Within Europe, economic and military challenges created a system of nation states, and ethno - linguistic groupings began to identify themselves as distinctive nations with aspirations for cultural and political autonomy. This nationalism would become important to peoples across the world in the 20th century.
During the Second Industrial Revolution, the world economy became reliant on coal as a fuel, as new methods of transport, such as railways and steamships, effectively shrank the world. Meanwhile, industrial pollution and environmental damage, present since the discovery of fire and the beginning of civilization, accelerated drastically.
The advantages that Europe had developed by the mid-18th century were two: an entrepreneurial culture, and the wealth generated by the Atlantic trade (including the African slave trade). By the late 16th century, silver from the Americas had become a major source of the Spanish empire 's wealth. The profits of the slave trade and of West Indian plantations amounted to 5 % of the British economy at the time of the Industrial Revolution. While some historians conclude that, in 1750, labour productivity in the most developed regions of China was still on a par with that of Europe 's Atlantic economy, other historians like Angus Maddison hold that the per - capita productivity of western Europe had by the late Middle Ages surpassed that of all other regions.
The 20th century opened with Europe at an apex of wealth and power, and with much of the world under its direct colonial control or its indirect domination. Much of the rest of the world was influenced by heavily Europeanized nations: the United States and Japan. As the century unfolded, however, the global system dominated by rival powers was subjected to severe strains, and ultimately seemed to yield to a more fluid structure of independent nations organized on Western models.
This transformation was catalysed by wars of unparalleled scope and devastation. World War I destroyed many of Europe 's empires and monarchies, and weakened Britain and France. In its aftermath, powerful ideologies arose. The Russian Revolution of 1917 created the first communist state, while the 1920s and 1930s saw militaristic fascist dictatorships gain control in Italy, Germany, Spain and elsewhere.
Ongoing national rivalries, exacerbated by the economic turmoil of the Great Depression, helped precipitate World War II. The militaristic dictatorships of Europe and Japan pursued an ultimately doomed course of imperialist expansionism. Their defeat opened the way for the advance of communism into Central Europe, Yugoslavia, Bulgaria, Romania, Albania, China, North Vietnam, and North Korea.
When World War II ended in 1945, the United Nations was founded in the hope of preventing future wars, as the League of Nations had been formed following World War I. The war had left two countries, the United States and the Soviet Union, with principal power to influence international affairs. Each was suspicious of the other and feared a global spread of the other 's, respectively capitalist and communist, political - economic model. This led to the Cold War, a forty - five - year stand - off and arms race between the United States and its allies, on one hand, and the Soviet Union and its allies on the other. With the development of nuclear weapons during World War II, and with their subsequent proliferation, all of humanity was put at risk of nuclear war between the two superpowers, as demonstrated by many incidents, most prominently the October 1962 Cuban Missile Crisis. Such war being viewed as impractical, proxy wars were instead waged, at the expense of non-nuclear - armed Third World countries.
The Cold War ended in 1991, when the Soviet Union disintegrated, in part due to inability to compete economically with the United States and western Europe. However, the United States likewise began to show signs of slippage in its geopolitical influence, even as its private sector, now less inhibited by the claims of the public sector, increasingly sought private advantage to the prejudice of the public weal.
In the early postwar decades, the African and Asian colonies of the Belgian, British, Dutch, French, and other west European empires won their formal independence. But the newly independent countries faced challenges in the form of neocolonialism, sociopolitical disarray, poverty, illiteracy, and endemic tropical diseases.
Most Western European and Central European countries gradually formed a political and economic community, the European Union, which expanded eastward to include former Soviet - satellite countries. The European Union 's effectiveness was handicapped by the immaturity of its common economic and political institutions, somewhat comparable to the inadequacy of United States institutions under the Articles of Confederation prior to the adoption of the U.S. Constitution that came into force in 1789. Asian and African countries followed suit and began taking tentative steps toward forming their own respective continental associations.
Cold War preparations to deter or to fight a third world war accelerated advances in technologies that, though conceptualized before World War II, had been implemented for that war 's exigencies, such as jet aircraft, rocketry, and electronic computers. In the decades after World War II, these advances led to jet travel, artificial satellites with innumerable applications including global positioning systems (GPS), and the Internet -- inventions that have revolutionized the movement of people, ideas, and information.
However, not all scientific and technological advances in the second half of the 20th century required an initial military impetus. That period also saw ground - breaking developments such as the discovery of the structure of DNA, the consequent sequencing of the human genome, the worldwide eradication of smallpox, the discovery of plate tectonics, manned and unmanned exploration of space and of previously inaccessible parts of mankind 's home planet, and foundational discoveries in physics phenomena ranging from the smallest entities (particle physics) to the greatest entity (physical cosmology).
The century saw several global threats emerge or become more serious or more widely recognized, including nuclear proliferation, global climate change, deforestation, ocean acidification, overpopulation, deadly epidemics of microbial diseases, near - Earth asteroids and comets, supervolcano eruptions, lethal gamma - ray bursts, geomagnetic storms destroying all electronic equipment, and the dwindling of global natural resources (particularly fossil fuels).
Nuclear war, a man - made hazard to world survival that dominated concerns in the second half of the 20th century, continues to do so into the 21st. Countries ambitious to develop and deploy nuclear weapons are discouraged from doing so by countries that already possess them. At the same time, nuclear - armed countries have shown little urgency about honoring their 20th - century pledge to eventually eliminate all nuclear weapons. Such weapons continue to be as hazardous to their owners as to their potential targets.
The 21st century has been marked by growing economic globalization and integration, with consequent increased risk to interlinked economies; and by the expansion of communications with mobile phones and the Internet, which have caused fundamental societal changes in business, politics, and individuals ' personal lives.
The early 21st century saw escalating intra - and international strife in the Near East and Afghanistan, stimulated by vast economic disparities, by dissatisfaction with governments dominated by Western interests, by inter-ethnic and inter-sectarian feuds, and by the longest war in the history of the United States, the proximate cause for which was Osama bin Laden 's provocative 2001 destruction of New York City 's World Trade Center.
US military involvements in the Near East and Afghanistan drained US economic resources at a time when the US and other Western countries were experiencing mounting socioeconomic dislocations aggravated by the robotization of work and the exportation of industries to cheaper - workforce countries.
Meanwhile, ancient and populous Asian civilizations, India and especially China, have been emerging from centuries of relative scientific, technological, and economic languishment to become potential rivals for Western, chiefly European and United States, economic and political ascendance in the world.
Worldwide demand and competition for resources have risen due to growing populations and industrialization, especially in India, China, and Brazil (see List of countries by carbon dioxide emissions per capita). This increased demand is causing increased levels of environmental degradation and a fast - growing threat of disastrous global warming. That, together with the need for reliable energy supplies independent of politically volatile regions, has spurred the development of alternative, renewable sources of energy (notably solar energy and wind energy), proposals for cleaner fossil - fuel technologies and for expanded use of nuclear energy (somewhat dampened by nuclear - plant accidents), and, conversely, calls to eschew the indiscriminate large - scale employment of the "fissile - fossil complex '' of fissile (nuclear) and fossil - fuel (coal, petroleum, natural gas) energy generation.
In recognition that global warming caused by growing concentrations of man - made greenhouse gases was an existential threat to everyone on Earth, in December 2015, 195 countries signed the Paris Climate Agreement, scheduled to go into effect in 2020. Thus, with the exception of two non-signatory countries (and, in June 2017, after the fact, with the exception of the United States government), all the world 's countries explicitly recognized that this common existential threat required a common cooperative response. The transition to environmentally sustainable energy is being aided by the growing economic competitiveness of solar and wind energy vis - à - vis the fissile - fossil complex of nuclear and fossil - fuel energy.
|
what city is the clemson football team from | Clemson Tigers - wikipedia
The Clemson Tigers are the athletic teams that represent Clemson University. They compete as a member of the National Collegiate Athletic Association (NCAA) Division I level (Football Bowl Subdivision (FBS) sub-level for football), primarily competing in the Atlantic Coast Conference (ACC) for all sports since the 1953 - 54 season. Clemson competes for and has won multiple NCAA Division I national championships in various sports, including football, men 's soccer, and men 's golf.
In 1896, football coach Walter Riggs came to Clemson, then Clemson Agricultural College of South Carolina, from Auburn University. He had always admired the Princeton Tigers, and hence gave Clemson the Tiger mascot. The Clemson Tigers field seventeen athletic teams. Men 's sports are football, basketball, baseball, soccer, tennis, golf, track and field (indoor and outdoor) and cross-country. Women 's sports are basketball, soccer, tennis, golf, volleyball, track and field (indoor and outdoor), cross-country and rowing. The South Carolina Gamecocks are Clemson 's in - state athletic rival. The two institutions compete against each other in many sports, but the annual football game receives the most attention. Clemson 's main rivals within the Atlantic Coast Conference are Georgia Tech and Florida State.
The Tiger Paw logo was introduced at a press conference on July 21, 1970. It was created by John Antonio and developed by Helen Weaver of Henderson Advertising in Greenville, South Carolina, from a mold of a Bengal tiger sent to the agency by the Field Museum of Natural History in Chicago. The telltale hook at the bottom of the paw is a sign that this is the official licensed trademark for the university.
Clemson University sponsors teams in nine men 's and nine and a half * women 's NCAA sanctioned sports. Women 's diving completed its final season in 2017, and Clemson announced on March 14, 2017 that it would add college softball, targeting a 2020 start for the program.
The Tiger football program has won 59.1 % of its games through the 2010 season, placing it 34th on the all - time winning percentage list. Clemson also won two Southern Conference titles before joining the ACC. The program has participated in 33 bowl games over the years, winning 16. The 1981 squad, led by Head Coach Danny Ford, became the first athletic team in school history to win a national championship. Clemson defeated Nebraska 22 -- 15 in the Orange Bowl in Miami, Florida to win the 1981 NCAA Football National Championship. Stars of the game included Homer Jordan (QB) and Perry Tuttle (WR). Clemson finished the year 12 -- 0 and ranked # 1 in the Associated Press and Coaches polls.
Some of the most notable coaching names in Clemson football history are John Heisman (who also coached at Akron, Auburn, Georgia Tech, Penn, Washington & Jefferson, and Rice; the Heisman Trophy is named after him), Jess Neely, Frank Howard (whom the playing field at Death Valley is named after), and Danny Ford. After Tommy Bowden resigned midseason on October 13, 2008, Dabo Swinney took over as interim head coach. On December 1, 2008, Swinney was named head coach of the Clemson Tigers football team.
Before each home game, the team ends pre-game warm ups and proceeds to the locker room. With five minutes to go before game time, three buses leave the street behind the West Endzone carrying the Clemson football players. The buses pull to a stop at the gate in front of The Hill, and the Tigers gather at the top, where each player proceeds to rub "Howard 's Rock, '' which is an imported rock from Death Valley, California that was presented to Frank Howard in 1967. While Tiger Rag is played and a cannon sounds, the Tigers run down the hill onto the field in front of over 83,000 screaming fans. This tradition has been dubbed "The most exciting 25 seconds in college football '' by sportscaster Brent Musburger.
The Clemson men 's basketball team is coached by Brad Brownell, whose hiring was announced on April 13, 2010. Accomplishments include:
* vacated by NCAA
The Clemson women 's basketball team is currently coached by Audra Smith. Accomplishments include:
As of 2008, the Tiger baseball team has posted a combined 30 ACC regular season and tournament championships (the most in the conference), 34 NCAA Tournament appearances, 16 NCAA Regional Titles, 4 NCAA Super Regional Titles, and 12 College World Series appearances. Much of the baseball program 's success occurred under Bill Wilhelm during his 35 seasons as Clemson 's head coach. Monte Lee is the Tigers ' current head coach, having replaced Jack Leggett after the conclusion of the 2015 season.
* - recognized ACC championships. ACC tournament has decided conference champion since 1973 (except for 1979 due to academic conflicts)
§ - the ACC does not recognize Division Championships in baseball. Divisions serve the purpose of simplifying conference scheduling during the regular season. Winning percentages in regular season conference play are then used to determine seedings for the Conference Tournament.
The men 's soccer team was Clemson 's second sports program to win a national championship, winning the NCAA Tournament in 1984 and again in 1987. In their 26 appearances in the NCAA tournament, the men 's soccer team garnered runner - up finishes in 1979 and 2015, and has appeared in the NCAA Final Four eight times, with the 2015 squad being the most recent team to accomplish that feat. In addition to their NCAA titles, the men 's program has won 16 combined ACC regular season and tournament titles, with the last one coming in the 2014 ACC Tournament. The Tigers have known only four coaches in their history: Dr. I.M. Ibrahim (1967 -- 1994, 388 -- 100 -- 31 career record), Trevor Adair (1995 -- 2008, 50 -- 48 -- 10 record at Clemson), Phil Hindson (Interim coach in 2009, 6 - 12 - 1 record) and Mike Noonan. Famous former Tigers include Oguchi Onyewu, Stuart Holden and Paul Stalteri, all three of whom have been capped for their respective nations.
* - recognized ACC championships. ACC champion decided by tournament since 1987
Women 's soccer became a varsity sport at Clemson in 1994. The women 's soccer team has won the ACC regular season crown twice, and advanced to the NCAA tournament sixteen times. The team has never advanced past the quarterfinals of the NCAA tournament, though it has reached that round four times. The Tigers have known five coaches in their history: Tracey Leone (1994 - 1998, 89 - 39 - 4 career record), Ray Leone (1999 - 2000, 33 - 10 - 3 career record), Todd Bramble (2001 - 2007, 80 - 51 - 17 career record), Hershey Strosberg (2008 - 2010, 14 - 39 - 1 career record), and Eddie Radwanski (2011 - present).
The Tiger golf team have a tradition of being among the best in the ACC and the nation, having won several ACC titles and regularly qualifying for the NCAA Tournament. In 2003, Clemson defeated Oklahoma State to win its first National Championship in golf and the 4th overall for the school. In addition to that victory, Clemson also won the ACC and NCAA East Regional titles that year, making the Tigers the first program in NCAA history to win its conference, regional, and national championship tournaments in the same year. Clemson has also won seven regional titles since the NCAA adopted the regional tournament format in 1989. 2009 U.S. Open champion Lucas Glover played golf at Clemson.
In 2017, Logan Morris represented the US at the IAAF World Cross Country Championships in Uganda, finishing 45th out of 101 runners in the junior (under - 20) ranks.
* ACC Championship decided by tournament until 2004; regular season finish has determined the ACC champion since 2005 season.
* The Lady Tigers rowing team became the first team other than Virginia to win the ACC Championship since the ACC began sponsoring the women 's rowing championship in 2000.
* Clemson sponsored a women 's diving team from 2013 -- 2017.
Wrestling
Sammie Henson (1993, 1994)
Wrestling at Clemson University was discontinued in 1995, despite the success of the program, due to financial shortfalls in Tiger Athletics ' funding from the university. The wrestling program began in 1975 and won the ACC title as a team under coach Eddie Griffin in 1991. The Tiger wrestling program produced 8 wrestlers with All - American status, two NCAA Champions, and a finish at the NCAA Championships as high as 7th in 1994. Sammie Henson, one of the most accomplished Tiger wrestlers, won NCAA Championship titles in 1993 and 1994 and went on to earn a 2000 Olympic silver medal and become a 1998 world champion in freestyle wrestling.
Clemson University has five NCAA team national championships.
Clemson Rugby was founded in 1967. Although rugby is a club sport at Clemson, the team receives significant support from the university and from the Clemson Rugby Foundation, which was founded in 2007 by Clemson alumni. Clemson rugby has been led since 2010 by head coach Justin Hickey, who has also served as team manager for the U.S. national under - 20 team.
Clemson 's best season was 1996, when the team advanced to the national college rugby quarterfinals. Clemson also advanced to the round of 16 of the national playoffs for three consecutive years from 2005 - 2007. Clemson has played since 2011 in the Atlantic Coast Rugby League against its traditional ACC rivals. Clemson placed second in its conference in the spring 2012 season with a 6 - 1 conference record, narrowly missing out to Maryland for the conference title and a place in the national college rugby playoffs. Clemson again finished the spring 2013 season with a 6 - 1 conference record, and then defeated South Carolina 29 - 7 in the round of 16 national playoffs, before losing in the quarterfinals to Central Florida 20 - 24.
Baseball
Swimming
Tennis
Track
Wrestling
Clemson 's intra-conference football rivalries include Georgia Tech (GT leads 50 - 29 - 2), NC State (Clemson leads 56 - 28 - 1 in the Textile Bowl), Boston College (O'Rourke - McFadden Trophy, Clemson leads 15 - 9 - 2), and Florida State (FSU leads 20 - 9).
Clemson has a lesser rivalry with the University of Georgia, born of the two institutions ' close proximity (roughly 75 miles apart). Clemson and Georgia first met in 1897, only the second year the Tigers fielded a football team. The rivalry was at its height in the 1980s. The athletic departments recently added games to be played in 2013 at Clemson and 2014 in Athens. Georgia leads the football series 41 -- 18 -- 4, having won five consecutive meetings before losing to the Tigers in 2013.
Clemson 's fight song is the "Tiger Rag '', the "Song that Shakes the Southland '', a variation of the song originally recorded by the Original Dixieland Jazz Band. The song is played at all Clemson sporting events, particularly following scores or big plays, and during the "Most Exciting 25 Seconds in College Football ''. The song 's lyrics are not used, save for the spell - out of "Clemson '' at the end.
The most prominent of Clemson 's facilities is Memorial Stadium, Frank Howard Field, home to the Clemson University men 's football team. Memorial Stadium is also known by its nickname, "Death Valley. '' Memorial Stadium is also home to the WestZone, which was completed in 2006. With the completion of the first phase of the WestZone, the listed capacity for Memorial Stadium is 80,301. The WestZone holds many IPTAY offices, Clemson football coach 's offices, weight rooms, locker rooms, and a recruiting center.
The men 's and women 's basketball teams play at Littlejohn Coliseum, which has a listed capacity of 10,000 spectators. Littlejohn also acts as a venue for a variety of campus functions throughout the year, including concerts and graduation ceremonies.
Recently renovated Doug Kingsmore Stadium is home to Clemson 's men 's baseball team.
The men 's and women 's soccer teams play their home games at historic Riggs Field.
Other home venues for these sports are: Walker Golf Course, Hoke Sloan Tennis Center, Jervey Gym (volleyball), Rock Norman Track Complex, and McHugh Natatorium. Women 's rowing holds home events on nearby Lake Hartwell.
|
who plays mr fitz in pretty little liars | Ian Harding - wikipedia
Ian Harding (born 16 September 1986) is an American actor. He is known for his role as Ezra Fitz in the television series Pretty Little Liars.
Harding was born in Heidelberg, Germany, to an American military family. His family moved to Virginia a few years later, where he joined the drama club at his high school, Georgetown Preparatory School in North Bethesda, Maryland. He was selected by his class to give the commencement address. He later went on to pursue the Acting / Music Theater program at Carnegie Mellon University.
Harding has been working with the Lupus Foundation of America to raise funds and awareness for lupus research and education to support his mother, who has been living with lupus for more than 20 years.
Harding was cast in the role of Ezra Fitz on the ABC Family television drama series Pretty Little Liars in 2010. Harding has won seven Teen Choice Awards for his portrayal of Ezra Fitz.
In February 2017 Harding was cast in a starring role on the Fox television pilot Thin Ice, but the network passed on the pilot in May.
|
where is hong kong in relation to china | Hong Kong -- mainland China conflict - wikipedia
Tensions between people from Hong Kong and mainland China have developed since the transfer of sovereignty over Hong Kong to China in 1997, and in particular since the late 2000s and early 2010s. Various factors have contributed to the development of such tensions: these include a difference between the popular interpretation in Hong Kong of the "One country, two systems '' constitutional principle as against the Chinese government 's official interpretation; policies of the Hong Kong and central governments to encourage mainland visitors to Hong Kong; and changing economic environments in Hong Kong and mainland China. Increasingly, these tensions have resulted in a rising sentiment in Hong Kong of hostility to "mainlanders '' and resentment at a perceived trend towards assimilation and interference from the mainland and the central government, and at the same time a rising sentiment in mainland China of bewilderment and resentment at assertions that Hong Kong is, and should remain, different from the mainland in terms of political system, culture and language.
The sovereignty of Hong Kong was transferred from the United Kingdom to the People 's Republic of China in 1997. The terms agreed between the governments for the transfer included a series of guarantees for the maintenance of Hong Kong 's differing economic, political and legal systems after the transfer, and the further development of Hong Kong 's political system with a goal of democratic government. These guarantees were set out in the Sino - British Joint Declaration and enshrined in the semi-constitutional Basic Law of Hong Kong. Initially, many Hong Kongers were enthusiastic about Hong Kong 's return to China. However, tension has arisen between Hong Kong residents and the mainland, and in particular the central government, since 1997, and especially in the late 2000s and early 2010s. The Hong Kong government has implemented some controversial policies, for instance, the Individual Visit Scheme and the Guangzhou - Shenzhen - Hong Kong Express Rail Link. China (2011) argues that since the Hong Kong government failed to force through the legislation to implement Article 23 of the Basic Law, Beijing 's relatively hands - off approach to Hong Kong changed dramatically. The PRC 's strategy became aimed at trying to dissolve the city - state boundary of Hong Kong in the name of economic rejuvenation and ostensibly to strengthen socio - economic ties with the mainland. The central government has adopted increasingly strong rhetoric perceived to be attacking Hong Kong 's political and legal systems, such as releasing a report in 2014 that asserts that Hong Kong 's judiciary should be subordinate to, and not independent of, the government. The Basic Law and the Sino - British Joint Declaration guarantee the development of Hong Kong 's electoral system towards universal suffrage, but the electoral system offered to Hong Kong by the central government in 2014 - 2015 was widely perceived as falling short of genuinely democratic.
Hong Kong has more international cultural values from its past as a British colony and international city, and at the same time has retained many traditional Chinese cultural values, putting it in stark contrast to the culture of many parts of mainland China, where many international cultural values have never taken root and where many traditional cultural values have been lost. Hong Kong is also a multi-ethnic society with cultural values in relation to race, language and culture that differ from those held by the Chinese government and many mainland residents. As a highly developed economy with a high standard of living, Hong Kong has different values in relation to hygiene and social propriety compared to some parts of mainland China. The Hong Kong - mainland conflict is mainly attributed to the cultural differences between Hong Kong people and mainlanders, such as language, as well as the significant growth in the number of mainland visitors. Since the implementation of the Individual Visit Scheme on 28 July 2003, the number of mainland visitors increased from 6.83 million in 2002 to 40.7 million in 2013, according to the statistics provided by the Hong Kong Tourism Board. The conflict relates to issues regarding the allocation of resources between mainlanders and Hong Kong people in different sectors, such as healthcare and education.
In recent years, there have been a number of incidents reflecting the conflict between Hong Kongers and mainlanders.
On 5 February 2011, Lee Qiaozhen, a Hong Kong tour guide, had a quarrel with three mainland tourists. Lee verbally insulted the tourists for not buying at a jewellery store, referring to them as "dogs ''. The tourists were dissatisfied and this eventually turned into a fight. Lee and the three tourists were arrested by the police for physical assault.
On 5 January 2012, Apple Daily reported that only Hong Kong citizens had been prevented from taking pictures of Dolce & Gabbana window displays in both their Hong Kong fashion outlets, stirring anti-Mainlander sentiment. In particular staff and security personnel at their flagship store on Canton Road asserted the pavement area outside was private property where photography was forbidden. The actions sparked protests spanning several days and gained international news coverage on 8 January. Citing the case of Zhou Jiugeng (周久耕), a Nanjing official whose high - living lifestyle was identified by Chinese citizens using internet photographs, local news reports speculated that the Dolce & Gabbana photo ban may have been imposed at the request of some wealthy Chinese government officials who were shopping and who feared photographs of them in the store might circulate and fuel corruption allegations and investigations into the source of their wealth.
In early 2012, Kong Qingdong, a Peking University professor, publicly called Hong Kongers "old dogs '' in the aftermath of a controversy over mainland visitors urinating or defecating in public in Hong Kong. Kong 's strong language prompted protests in Hong Kong.
Since 2012, there has been a sharp increase in mainland parallel traders coming to the northern parts of Hong Kong to buy goods and export them back to the mainland. Products that are popular among these traders include infant formula and household products. As a result of prolonged shortages of milk powder in Hong Kong, the government imposed restrictions on the amount of milk powder that can be exported from Hong Kong: each person is currently allowed only 2 cans, or 1.2 kg, of milk powder per trip on the MTR and across the border. In addition, because northern towns such as Sheung Shui became transaction centres for the traders, nearby residents grew discontented.
In the years leading up to 2012, the number of anchor babies in Hong Kong had been increasing, as pregnant mainland women sought to give birth in Hong Kong specifically so that their children would gain the right of abode and access to social welfare in the city. Hong Kong citizens expressed concerns that the pregnant women and anchor babies put a heavier burden on Hong Kong 's medical system. Some of them even called mainlanders "locusts '' who take away Hong Kong 's resources from locals. There were over 170,000 births to parents who were both mainlanders between 2001 and 2011, of which 32,653 occurred in 2010. CY Leung 's first public policy announcement as Chief Executive - elect was to impose a ' zero ' quota on mainland mothers giving birth in Hong Kong. Leung further underlined that those who did so might not be able to secure the right of abode for their offspring in Hong Kong.
In 2015, the Chinese Football Association launched a series of posters relating to other Asian football teams. Among these, the poster relating to Hong Kong appeared to mock the multi-ethnic make - up of Hong Kong 's football team. In response, in subsequent matches between Hong Kong and Bhutan and the Maldives respectively, supporters of the Hong Kong team jeered when the Chinese national anthem was played for the Hong Kong team.
In April 2017, during a match in Hong Kong between Hong Kong club Eastern SC and Chinese club Guangzhou Evergrande, Guangzhou Evergrande fans displayed an "Annihilate British Dogs, Eradicate Hong Kong Independence Poison '' banner. This resulted in the club being fined US $22,500.
In July 2015, localists including Hong Kong Indigenous and Youngspiration marched to the Immigration Department to demand the deportation of an undocumented 12 - year - old mainland boy, Siu Yau - wai, who had lived in Hong Kong for nine years without identification. Siu, whose parents are alive and well in mainland China, stayed with his grandparents after having overstayed his two - way permit nine years earlier. Pro-Beijing Federation of Trade Unions lawmaker Chan Yuen - han advised and assisted the boy and his grandmother in obtaining a temporary ID and pleaded for compassion from the local community. Some called on the authorities to consider the case on a humanitarian basis and grant Siu permanent citizenship, while many others, afraid that the case would open the floodgates to appeals from other illegal immigrants, asked for the boy to be repatriated. The boy eventually gave up and returned to his parents in mainland China.
On November 19, 2015, an anti-mainlandisation motion was voted down, with 19 in favour and 34 opposing. The motion sought to defend local history and culture from the influence of mainland China. Supporters argued that mainlandisation leads to fakeness, rampant corruption and the abuse of power, and that Hong Kong risked becoming another mainland city. Opponents of the motion argued that it viewed different cultures from a narrow perspective and attempted to split the Chinese nation and create conflict.
In September 2017, tensions arose between mainland students, Hong Kong students, CUHK staff and CUHK student union staff members over the content of posters and banners put up on the Democracy Wall at the Chinese University of Hong Kong. Issues of vandalism, rule - breaking, freedom of speech, respect for different opinions and the display of hateful messages came into the spotlight, and similar incidents occurred at the ' Democracy Walls ' of other Hong Kong universities such as the Education University of Hong Kong, the University of Hong Kong and the Polytechnic University of Hong Kong. This also reignited the Hong Kong independence debate within Hong Kong society.
The conflict between Hong Kong people and mainlanders has had an immense impact on Hong Kong society.
The most significant effect has been the rise of a local self - identity. According to a survey conducted by a public opinion programme of the University of Hong Kong, the identity index of interviewees who regarded themselves as "Chinese '' plummeted between 2008 and 2014, from approximately 7.5 in 2008 to a continuous fluctuation between 6 and 7. The drop in the sense of national identity is believed to be the result of the aforementioned conflicts. The recent conflicts (anchor babies, the D&G crisis, and parallel trading) further contributed to the rise of local self - identity.
There are differences in culture and political background between people from Hong Kong and those from mainland China. Hong Kong was ruled by the British from the mid-19th century until 1997, whereas China has been under the control of the Chinese Communist Party since 1949. The education people received, their culture and their lifestyles were very different, which led to cultural conflicts.
Some Hong Kong people perceive mainlanders as rude, impolite and poorly educated, which contributes to locals ' non-acceptance of mainlanders, especially when they travel in Hong Kong. Travellers from the mainland have grown so numerous that their presence can influence the direction of government policy. The premise of various protests in the 2010s was that the Individual Visit Scheme was adversely affecting the daily lives of Hong Kongers. On the other hand, some mainlanders view Hong Kong as acting like a spoiled, ungrateful child despite all the (economic) support it is getting from China. Within China, Hong Kong is increasingly viewed as a place of traitors, British lapdogs and a nest of subversives, with Macau 's relationship to China pointed to as a role model.
The 2014 Hong Kong protests led to the birth of new political parties. The pan-democrats encouraged young people who participated in the Occupy movement to register and vote in the district council poll. The first wave of political newcomers, about 50 in number, many of whom were born around the new millennium, had political aspirations, were disillusioned with the political establishment and were influenced by the Umbrella Revolution, contested the 2015 district council elections. Pitted against seasoned politicians, and with support only from friends and family, they are popularly known as "Umbrella Soldiers ''.
In the 2016 Hong Kong legislative election, six localist groups which emerged after the 2014 Umbrella Revolution -- Youngspiration, Kowloon East Community, Tin Shui Wai New Force, Cheung Sha Wan Community Establishment Power, Tsz Wan Shan Constructive Power and Tuen Mun Community -- formed an electoral alliance under the name "ALLinHK '' to field candidates in four of the five geographical constituencies with the agenda of putting forward a referendum on Hong Kong 's self - determination, while Hong Kong Indigenous and another new pro-independence party, the Hong Kong National Party, also attempted to run in the election. The student leaders of the Umbrella Revolution, Joshua Wong, Oscar Lai and Agnes Chow of Scholarism and Nathan Law of the Hong Kong Federation of Students (HKFS), formed a new party called Demosistō. The new party calls for a referendum on Hong Kong 's future after 2047, when the one country, two systems arrangement is supposed to expire, and fielded candidates in Hong Kong Island and Kowloon East.
Due to recent tensions between mainland and Hong Kong people, along with the impact of the Umbrella Movement, different sectors of Hong Kong society have shifted their views on the development of democracy in Hong Kong.
Traditionally, the pan-democratic camp campaigned for democracy in both China and Hong Kong; after the Umbrella Movement, however, with the rise of localism, there were calls to make Hong Kong democratic first and China later, or to focus only on making Hong Kong democratic. In recent years, localism has been gaining popularity among Hong Kong youth, which has led to new political parties and organisations being formed. Some localist parties have taken the latter view of democracy, while others promote the notion of Hong Kong independence, believing that real democracy can be established only when Hong Kong is independent from mainland China.
Likewise, since the end of the Umbrella Movement, the pro-Beijing camp and mainland officials, along with CY Leung and Carrie Lam, have said that the development of democracy in Hong Kong is not a top priority and that the Hong Kong government should focus on livelihood issues first.
Since 1997, Hong Kong has been a part of China under the "one country, two systems '' approach, of which there are different views within Hong Kong society. Within the political spectrum, the pro-Beijing camp tends to focus on the "one country '' aspect, holding that Hong Kong should gradually integrate into China and that following and supporting the central government 's policies will bring stability and prosperity to Hong Kong. The pro-democracy camp, by contrast, focuses on the "two systems '' aspect, holding that Hong Kong, while part of China and co-operating with it, must develop more democratic institutions and preserve freedoms and human rights in order to achieve prosperity.
In recent years, there have been incidents of "mainlandisation '' that have left some sectors of society worried about Hong Kong 's changing environment. Mainlandisation, or the integration of Hong Kong, is the official policy of the Beijing government, and its supporters in Hong Kong actively help to promote this agenda, using their power to influence key decisions within Hong Kong society.
Under the Basic Law of Hong Kong, Mandarin was made an official language along with Cantonese and English. On paper the three languages were given equal status; in reality, Mandarin has increasingly been given more importance.
In recent years, Mandarin has been increasingly used in Hong Kong, which has led to fears of Cantonese being replaced. English proficiency in Hong Kong has also suffered a decline in standards. The promotion and increasing use of Mandarin over Cantonese and English in Hong Kong has led to questions about Hong Kong 's competitiveness in the global economy, its dependency on the mainland 's economy and its loss of a distinct cultural identity.
In 2003, the Closer Economic Partnership Arrangement was signed between the Hong Kong government and the Central Government. CEPA is a free trade agreement pursuant to which qualifying products, companies and residents of Hong Kong enjoy preferential access to the mainland Chinese market. It was seen as a free - trade agreement between China and Hong Kong, but at the same time it realigned Hong Kong 's economy to be more dependent on China 's economy.
Moral and national education (MNE, 德育 及 國民 教育; 德育 及 国民 教育) is a school curriculum proposed by the Education Bureau of Hong Kong, transformed from the current moral and civic education (MCE, 德育 及 公民 教育). The Hong Kong government attempted to introduce the curriculum in 2012, which led to protests. The subject was particularly controversial for praising the communist and nationalist ideology of China 's government on the one hand, and condemning democracy and republicanism on the other.
Since 2002, Hong Kong 's press freedom has declined, falling from 18th place in 2002 to 34th in 2011, 54th in 2012, 58th in 2013, 61st in 2014 and 70th in 2015 in the index compiled by Reporters Without Borders, which examines 180 countries and regions. In 2017, it ranked Hong Kong at 73, with China ranked at 176 and Taiwan at 45 -- the highest ranking among all Asian countries.
The Hong Kong Journalists Association attributes this to increasing self - censorship within the industry, with staff members not wanting to upset people in Beijing for fear of retaliation or loss of future opportunities. Jason Y. Ng, writing for the Hong Kong Free Press, remarks that "The post-handover era has witnessed a series of ownership changes in the media industry. Self - censorship can also take the form of personnel changes, including management reshuffling in the newsroom and discontinuation of influential columns. ''
In recent years, there have been many infrastructure projects and policies connecting Hong Kong to mainland China. The pro-democracy camp is suspicious of such projects, arguing that the mainland government is slowly gaining control and influence over Hong Kong and that integration will eventually turn Hong Kong into just another mainland city and strip it of its uniqueness. A key problem is the minimal or absent consultation of the Hong Kong people. Other concerns include the environmental impact of such projects and their high costs, with some projects going over budget at the local taxpayer 's expense. The pro-Beijing camp, however, argues that these projects help redevelop Hong Kong, help it maintain its competitiveness and provide new economic opportunities.
List of Integration infrastructure projects:
Since the handover, the one - way permit scheme, which allows 150 mainlanders a day to come to Hong Kong and Macau to reunite with their families, has been administered by Chinese authorities, with Hong Kong and Macau authorities having no say on who can come in. Most people on this quota end up going to Hong Kong. In recent years this quota has sparked intense debate on its positives, negatives and impacts on Hong Kong society. The Beijing government argues that the scheme is meant to prevent illegal immigration into Hong Kong and Macau.
The pro-Beijing camp argues that these new immigrants help combat an aging population and bring new talent into the city. The pro-democracy camp sees the one - way permit scheme as a tool for Beijing to gradually change the population mix in Hong Kong and integrate the city with China. A majority of immigrants from the mainland tend to cast their votes in favor of pro-Beijing politicians during elections for district councils and the legislature. Others point out that too many immigrants are taking resources away from local graduates, as there is more competition for jobs and housing. This has led to calls from the pro-democracy camp to change or modify the scheme to give the Hong Kong government a say in choosing which immigrants may come, or final approval over entries, while the localist camp advocates cancelling the scheme, saying this preferential treatment has put a strain on resources in Hong Kong and arguing that immigrants from the mainland should come and settle in Hong Kong like any other immigrants from around the world.
In 2015, the University of Hong Kong 's governing council rejected the recommended appointment of Johannes Chan (dean of the Faculty of Law, 2004 - 2014) to the post of pro-vice - chancellor in charge of staffing and resources. The council 's decision, the first time that a candidate selected by the committee had been rejected, is widely viewed as political retaliation for Chan 's involvement with pro-democratic figures. A majority of HKU Council members are not students or staff of the university, and many are directly appointed by HKSAR Chief Executive Leung Chun - ying, the majority of them holding pro-Beijing views. The decision received international condemnation, and is viewed as part of a Beijing - backed curtailing of academic freedoms that will damage Hong Kong 's academic reputation.
Since the end of the 2014 Hong Kong protests, professors and lecturers with pro-democracy views or sympathies have experienced media smear campaigns from pro-communist newspapers, harassment from paid pro-Beijing mobs, cyber-attacks, non-renewal of contracts, rejection for jobs or promotions, and demotions, and have been blocked from senior management positions by university councils, most of whose members are appointed by the Chief Executive and are loyal to Beijing.
The Hong Kong Bar Association has claimed that Beijing has undermined Hong Kong 's judicial independence and rule of law through the Standing Committee of the National People 's Congress (NPCSC) interpretations of the Hong Kong Basic Law. These controversial interpretations have led the legal sector of Hong Kong to stage rare silent protests; four have been held since 1997. It is feared that China wants Hong Kong 's judiciary to take on the same form and characteristics as the judiciary on the mainland.
The first march took place in 1999, when the NPCSC issued the first interpretation of the Basic Law relating to the issue of the right of abode of Chinese citizens with Hong Kong parents. The second was held in 2005 after the NPCSC interpreted a provision in the Hong Kong Basic Law regarding the chief executive 's term of office. The third was held in June 2014 over Beijing 's issuance of a white paper on the One Country, Two Systems policy, which stated that judges in Hong Kong should be "patriotic '' and are "administrators '' who are supposed to co-operate with the executive branch of Hong Kong, whereas many in Hong Kong believe the judiciary, executive and legislature should be independent of each other. The fourth march occurred in November 2016, over the Hong Kong Legislative Council oath - taking controversy, with over 3,000 lawyers and activists parading through Hong Kong in silence, dressed in black.
In late December 2017, in response to the co-location agreement in West Kowloon, the Hong Kong Bar Association issued the following statement: "The current co-location arrangement is in direct contravention of the Basic Law and if implemented would substantially damage the rule of law in Hong Kong. "The rule of law will be threatened and undermined if the clear meaning of the Basic Law can be twisted and the provisions of the Basic Law can be interpreted according to expediency and convenience. ''
|
is the missouri river young mature or old | Missouri River - wikipedia
The Missouri River is the longest river in North America. Rising in the Rocky Mountains of western Montana, the Missouri flows east and south for 2,341 miles (3,767 km) before entering the Mississippi River north of St. Louis, Missouri. The river takes drainage from a sparsely populated, semi-arid watershed of more than half a million square miles (1,300,000 km), which includes parts of ten U.S. states and two Canadian provinces. When combined with the lower Mississippi River, it forms the world 's fourth longest river system.
For over 12,000 years, people have depended on the Missouri River and its tributaries as a source of sustenance and transportation. More than ten major groups of Native Americans populated the watershed, most leading a nomadic lifestyle and dependent on enormous bison herds that once roamed through the Great Plains. The first Europeans encountered the river in the late seventeenth century, and the region passed through Spanish and French hands before finally becoming part of the United States through the Louisiana Purchase. The Missouri was long believed to be part of the Northwest Passage -- a water route from the Atlantic to the Pacific -- but when Lewis and Clark became the first to travel the river 's entire length, they confirmed the mythical pathway to be no more than a legend.
The Missouri River was one of the main routes for the westward expansion of the United States during the 19th century. The growth of the fur trade in the early 19th century laid much of the groundwork as trappers explored the region and blazed trails. Pioneers headed west en masse beginning in the 1830s, first by covered wagon, then by the growing numbers of steamboats entering service on the river. Former Native American lands in the watershed were taken over by settlers, leading to some of the most longstanding and violent wars against indigenous peoples in American history.
During the 20th century, the Missouri River basin was extensively developed for irrigation, flood control and the generation of hydroelectric power. Fifteen dams impound the main stem of the river, with hundreds more on tributaries. Meanders have been cut and the river channelized to improve navigation, reducing its length by almost 200 miles (320 km) from pre-development times. Although the lower Missouri valley is now a populous and highly productive agricultural and industrial region, heavy development has taken its toll on wildlife and fish populations as well as water quality.
From the Rocky Mountains of Montana and Wyoming, three streams rise to form the headwaters of the Missouri River: the Jefferson, the Madison, and the Gallatin.
The Missouri River officially starts at the confluence of the Jefferson and Madison in Missouri Headwaters State Park near Three Forks, Montana, and is joined by the Gallatin a mile (1.6 km) downstream. The Missouri then passes through Canyon Ferry Lake, a reservoir west of the Big Belt Mountains. Issuing from the mountains near Cascade, the river flows northeast to the city of Great Falls, where it drops over the Great Falls of the Missouri, a series of five substantial waterfalls. It then winds east through a scenic region of canyons and badlands known as the Missouri Breaks, receiving the Marias River from the west then widening into the Fort Peck Lake reservoir a few miles above the confluence with the Musselshell River. Farther on, the river passes through the Fort Peck Dam, and immediately downstream, the Milk River joins from the north.
Flowing eastwards through the plains of eastern Montana, the Missouri receives the Poplar River from the north before crossing into North Dakota where the Yellowstone River, its greatest tributary by volume, joins from the southwest. At the confluence, the Yellowstone is actually the larger river. The Missouri then meanders east past Williston and into Lake Sakakawea, the reservoir formed by Garrison Dam. Below the dam the Missouri receives the Knife River from the west and flows south to Bismarck, the capital of North Dakota, where the Heart River joins from the west. It slows into the Lake Oahe reservoir just before the Cannonball River confluence. While it continues south, eventually reaching Oahe Dam in South Dakota, the Grand, Moreau and Cheyenne Rivers all join the Missouri from the west.
The Missouri makes a bend to the southeast as it winds through the Great Plains, receiving the Niobrara River and many smaller tributaries from the southwest. It then proceeds to form the boundary of South Dakota and Nebraska, then after being joined by the James River from the north, forms the Iowa -- Nebraska boundary. At Sioux City the Big Sioux River comes in from the north. The Missouri flows south to the city of Omaha where it receives its longest tributary, the Platte River, from the west. Downstream, it begins to define the Nebraska -- Missouri border, then flows between Missouri and Kansas. The Missouri swings east at Kansas City, where the Kansas River enters from the west, and flows on into north - central Missouri. To the east of Kansas City, the Missouri receives, on the left side, the Grand River. It passes south of Columbia and receives the Osage and Gasconade Rivers from the south downstream of Jefferson City. The river then rounds the northern side of St. Louis to join the Mississippi River on the border between Missouri and Illinois.
With a drainage basin spanning 529,350 square miles (1,371,000 km²), the Missouri River 's catchment encompasses nearly one - sixth of the area of the United States or just over five percent of the continent of North America. Comparable to the size of the Canadian province of Quebec, the watershed encompasses most of the central Great Plains, stretching from the Rocky Mountains in the west to the Mississippi River Valley in the east and from the southern extreme of western Canada to the border of the Arkansas River watershed. Compared with the Mississippi River above their confluence, the Missouri is twice as long and drains an area three times as large. The Missouri accounts for 45 percent of the annual flow of the Mississippi past St. Louis, and as much as 70 percent in certain droughts.
In 1990, the Missouri River watershed was home to about 12 million people. This included the entire population of the U.S. state of Nebraska, parts of the U.S. states of Colorado, Iowa, Kansas, Minnesota, Missouri, Montana, North Dakota, South Dakota, and Wyoming, and small southern portions of the Canadian provinces of Alberta and Saskatchewan. The watershed 's largest city is Denver, Colorado, with a population of more than six hundred thousand. Denver is the main city of the Front Range Urban Corridor whose cities had a combined population of over four million in 2005, making it the largest metropolitan area in the Missouri River basin. Other major population centers -- mostly located in the southeastern portion of the watershed -- include Omaha, Nebraska, situated north of the confluence of the Missouri and Platte Rivers; Kansas City, Missouri -- Kansas City, Kansas, located at the confluence of the Missouri with the Kansas River; and the St. Louis metropolitan area, situated south of the Missouri River just below the latter 's mouth, on the Mississippi. In contrast, the northwestern part of the watershed is sparsely populated. However, many northwestern cities, such as Billings, Montana, are among the fastest growing in the Missouri basin.
With more than 170,000 square miles (440,000 km²) under the plow, the Missouri River watershed includes roughly one - fourth of all the agricultural land in the United States, providing more than a third of the country 's wheat, flax, barley and oats. However, only 11,000 square miles (28,000 km²) of farmland in the basin is irrigated. A further 281,000 square miles (730,000 km²) of the basin is devoted to the raising of livestock, mainly cattle. Forested areas of the watershed, mostly second - growth, total about 43,700 square miles (113,000 km²). Urban areas, on the other hand, comprise less than 13,000 square miles (34,000 km²) of land. Most built - up areas are located along the main stem and a few major tributaries, including the Platte and Yellowstone Rivers.
Elevations in the watershed vary widely, ranging from just over 400 feet (120 m) at the Missouri 's mouth to the 14,293 - foot (4,357 m) summit of Mount Lincoln in central Colorado. The river itself drops a total of 8,626 feet (2,629 m) from Brower 's Spring, the farthest source. Although the plains of the watershed have extremely little local vertical relief, the land rises about 10 feet per mile (1.9 m / km) from east to west. The elevation is less than 500 feet (150 m) at the eastern border of the watershed, but is over 3,000 feet (910 m) above sea level in many places at the base of the Rockies.
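The elevation and tilt figures above are simple unit conversions; the short Python sketch below reproduces them from the numbers quoted in this paragraph (the conversion factors are standard, and the variable names are illustrative only):

# Back-of-the-envelope check of the elevation figures quoted above (values from the text).
FT_PER_M = 3.28084

print(f"Mount Lincoln summit: {14293 / FT_PER_M:.0f} m")        # ~4,357 m, as stated
print(f"mouth of the Missouri: {400 / FT_PER_M:.0f} m")         # ~122 m, i.e. 'just over 400 feet (120 m)'

# The plains' eastward rise of about 10 feet per mile, expressed in metric units:
tilt_m_per_km = 10 / FT_PER_M / 1.609344                        # feet per mile -> metres per kilometre
print(f"plains tilt: {tilt_m_per_km:.1f} m/km")                 # ~1.9 m/km, as stated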
The Missouri 's drainage basin has highly variable weather and rainfall patterns. Overall, the watershed is defined by a continental climate with warm, wet summers and harsh, cold winters. Most of the watershed receives an average of 8 to 10 inches (200 to 250 mm) of precipitation each year. However, the westernmost portions of the basin in the Rockies as well as southeastern regions in Missouri may receive as much as 40 inches (1,000 mm). The vast majority of precipitation occurs in winter, although the upper basin is known for short - lived but intense summer thunderstorms such as the one which produced the 1972 Black Hills flood through Rapid City, South Dakota. Winter temperatures in Montana, Wyoming and Colorado may drop as low as − 60 ° F (− 51 ° C), while summer highs in Kansas and Missouri have reached 120 ° F (49 ° C) at times.
As one of the continent 's most significant river systems, the Missouri 's drainage basin borders on many other major watersheds of the United States and Canada. The Continental Divide, running along the spine of the Rocky Mountains, forms most of the western border of the Missouri watershed. The Clark Fork and Snake River, both part of the Columbia River basin, drain the area west of the Rockies in Montana, Idaho and western Wyoming. The Columbia, Missouri and Colorado River watersheds meet at Three Waters Mountain in Wyoming 's Wind River Range. South of there, the Missouri basin is bordered on the west by the drainage of the Green River, a tributary of the Colorado, then on the south by the mainstem of the Colorado. Both the Colorado and Columbia Rivers flow to the Pacific Ocean. However, a large endorheic drainage called the Great Divide Basin exists between the Missouri and Green watersheds in western Wyoming. This area is sometimes counted as part of the Missouri River watershed, even though its waters do not flow to either side of the Continental Divide.
To the north, the much lower Laurentian Divide separates the Missouri River watershed from those of the Oldman River, a tributary of the South Saskatchewan River, as well as the Souris, Sheyenne, and smaller tributaries of the Red River of the North. All of these streams are part of Canada 's Nelson River drainage basin, which empties into Hudson Bay. There are also several large endorheic basins between the Missouri and Nelson watersheds in southern Alberta and Saskatchewan. The Minnesota and Des Moines Rivers, tributaries of the upper Mississippi, drain most of the area bordering the eastern side of the Missouri River basin. Finally, on the south, the Ozark Mountains and other low divides through central Missouri, Kansas and Colorado separate the Missouri watershed from those of the White River and Arkansas River, also tributaries of the Mississippi River.
Over 95 significant tributaries and hundreds of smaller ones feed the Missouri River, with most of the larger ones coming in as the river draws close to the mouth. Most rivers and streams in the Missouri River basin flow from west to east, following the incline of the Great Plains; however, some eastern tributaries such as the James, Big Sioux and Grand River systems flow from north to south.
The Missouri 's largest tributaries by runoff are the Yellowstone in Montana and Wyoming, the Platte in Wyoming, Colorado, and Nebraska, and the Kansas -- Republican / Smoky Hill and Osage in Kansas and Missouri. Each of these tributaries drains an area greater than 50,000 square miles (130,000 km²), and has an average discharge greater than 5,000 cu ft / s (140 m³ / s). The Yellowstone River has the highest discharge, even though the Platte is longer and drains a larger area. In fact, the Yellowstone 's flow is about 13,800 cu ft / s (390 m³ / s) -- accounting for sixteen percent of total runoff in the Missouri basin and nearly double that of the Platte. On the other end of the scale is the tiny Roe River in Montana, which at 201 feet (61 m) long is one of the world 's shortest rivers.
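The tributary figures above can be cross-checked with a little arithmetic. The sketch below uses only the numbers quoted in this paragraph and derives the basin-wide runoff and Platte discharge they imply; the derived values are implications of those figures, not independently sourced statistics. The implied basin total of roughly 86,000 cu ft / s is consistent with the long-term average discharge at Hermann given later in the article.

# Internal-consistency sketch for the tributary figures quoted above (values from the text).
yellowstone_cfs = 13_800          # average discharge of the Yellowstone, cu ft/s
share_of_basin = 0.16             # "sixteen percent of total runoff in the Missouri basin"

implied_total_cfs = yellowstone_cfs / share_of_basin
print(f"implied basin-wide runoff: {implied_total_cfs:,.0f} cu ft/s")   # ~86,000 cu ft/s

# "nearly double that of the Platte" implies a Platte discharge of roughly:
print(f"implied Platte discharge: {yellowstone_cfs / 2:,.0f} cu ft/s")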
Lengths of the Missouri 's ten longest tributaries are measured to the hydrologic source, regardless of naming convention. The main stem of the Kansas River, for example, is 148 miles (238 km) long; however, including its longest headwaters tributaries, the 453 - mile (729 km) Republican River and the 156 - mile (251 km) Arikaree River, brings the total length to 749 miles (1,205 km). Similar naming issues are encountered with the Platte River, whose longest tributary, the North Platte River, is more than twice as long as its main stem.
The Missouri 's headwaters above Three Forks extend much farther upstream than the main stem. Measured to the farthest source at Brower 's Spring, the Jefferson River is 298 miles (480 km) long. Thus measured to its highest headwaters, the Missouri River stretches for 2,639 miles (4,247 km). When combined with the lower Mississippi, the Missouri and its headwaters form part of the fourth - longest river system in the world, at 3,745 miles (6,027 km).
By discharge, the Missouri is the ninth largest river of the United States, after the Mississippi, St. Lawrence, Ohio, Columbia, Niagara, Yukon, Detroit, and St. Clair. The latter two, however, are sometimes considered part of a strait between Lake Huron and Lake Erie. Among rivers of North America as a whole, the Missouri is thirteenth largest, after the Mississippi, Mackenzie, St. Lawrence, Ohio, Columbia, Niagara, Yukon, Detroit, St. Clair, Fraser, Slave, and Koksoak.
As the Missouri drains a predominantly semi-arid region, its discharge is much lower and more variable than other North American rivers of comparable length. Before the construction of dams, the river flooded twice each year -- once in the "April Rise '' or "Spring Fresh '', with the melting of snow on the plains of the watershed, and again in the "June Rise '', caused by snowmelt and summer rainstorms in the Rocky Mountains. The latter was far more destructive, with the river increasing to over ten times its normal discharge in some years. The Missouri 's discharge is affected by over 17,000 reservoirs with an aggregate capacity of some 141 million acre feet (174 km³). By providing flood control, the reservoirs dramatically reduce peak flows and increase low flows. Evaporation from reservoirs significantly reduces the river 's runoff, causing an annual loss of over 3 million acre feet (3.7 km³) from mainstem reservoirs alone.
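The acre-foot figures above convert to the quoted metric volumes as follows; the sketch applies the standard acre-foot-to-cubic-metre factor and is only a back-of-the-envelope check of the numbers in this paragraph:

# Converting the reservoir-storage figures quoted above (values from the text).
M3_PER_ACRE_FOOT = 1233.48          # cubic metres in one acre-foot
M3_PER_KM3 = 1e9

def acre_feet_to_km3(acre_feet):
    """Convert a volume in acre-feet to cubic kilometres."""
    return acre_feet * M3_PER_ACRE_FOOT / M3_PER_KM3

print(f"aggregate reservoir capacity: {acre_feet_to_km3(141e6):.0f} km^3")   # ~174 km^3
print(f"annual evaporation loss:      {acre_feet_to_km3(3e6):.1f} km^3")     # ~3.7 km^3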
The United States Geological Survey operates fifty - one stream gauges along the Missouri River. The river 's average discharge at Bismarck, 1,314.5 miles (2,115.5 km) from the mouth, is 21,920 cu ft / s (621 m³ / s). This is from a drainage area of 186,400 sq mi (483,000 km²), or 35 % of the total river basin. At Kansas City, 366.1 miles (589.2 km) from the mouth, the river 's average flow is 55,400 cu ft / s (1,570 m³ / s). The river here drains about 484,100 sq mi (1,254,000 km²), representing about 91 % of the entire basin.
The lowermost gage with a period of record greater than fifty years is at Hermann, Missouri -- 97.9 miles (157.6 km) upstream of the mouth of the Missouri -- where the average annual flow was 87,520 cu ft / s (2,478 m³ / s) from 1897 to 2010. About 522,500 sq mi (1,353,000 km²), or 98.7 % of the watershed, lies above Hermann. The highest annual mean was 181,800 cu ft / s (5,150 m³ / s) in 1993, and the lowest was 41,690 cu ft / s (1,181 m³ / s) in 2006. Extremes of the flow vary even further. The largest discharge ever recorded was over 750,000 cu ft / s (21,000 m³ / s) on July 31, 1993, during a historic flood. The lowest, a mere 602 cu ft / s (17.0 m³ / s) -- caused by the formation of an ice dam -- was measured on December 23, 1963.
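The metric equivalents quoted for the Hermann gage follow from the standard conversion between cubic feet per second and cubic metres per second; the sketch below reproduces them from the figures in this paragraph (small differences are rounding):

# Metric equivalents of the Hermann gage figures quoted above (values from the text).
M3S_PER_CFS = 0.0283168          # cubic metres per second in one cubic foot per second

flows_cfs = {
    "mean annual flow, 1897-2010": 87_520,
    "highest annual mean (1993)":  181_800,
    "lowest annual mean (2006)":   41_690,
    "record peak (July 31, 1993)": 750_000,
    "record low (Dec. 23, 1963)":  602,
}
for label, cfs in flows_cfs.items():
    print(f"{label}: {cfs * M3S_PER_CFS:,.0f} m^3/s")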
The Rocky Mountains of southwestern Montana at the headwaters of the Missouri River first rose in the Laramide Orogeny, a mountain - building episode that occurred from around 70 to 45 million years ago (the end of the Mesozoic through the early Cenozoic). This orogeny uplifted Cretaceous rocks along the western side of the Western Interior Seaway, a vast shallow sea that stretched from the Arctic Ocean to the Gulf of Mexico, and deposited the sediments that now underlie much of the drainage basin of the Missouri River. This Laramide uplift caused the sea to retreat and laid the framework for a vast drainage system of rivers flowing from the Rocky and Appalachian Mountains, the predecessor of the modern - day Mississippi watershed. The Laramide Orogeny is essential to modern Missouri River hydrology, as snow and ice melt from the Rockies provide the majority of the flow in the Missouri and its tributaries.
The Missouri and many of its tributaries cross the Great Plains, flowing over or cutting into the Ogallala Group and older mid-Cenozoic sedimentary rocks. The lowest major Cenozoic unit, the White River Formation, was deposited between roughly 35 and 29 million years ago and consists of claystone, sandstone, limestone, and conglomerate. Channel sandstones and finer - grained overbank deposits of the fluvial Arikaree Group were deposited between 29 and 19 million years ago. The Miocene - age Ogallala and the slightly younger Pliocene - age Broadwater Formation were deposited atop the Arikaree Group, and are formed from material eroded off the Rocky Mountains during a time of increased generation of topographic relief; these formations stretch from the Rocky Mountains nearly to the Iowa border and give the Great Plains much of their gentle but persistent eastward tilt, and also constitute a major aquifer.
Immediately before the Quaternary Ice Age, the Missouri River was likely split into three segments: an upper portion that drained northwards into Hudson Bay, and middle and lower sections that flowed eastward down the regional slope. As the Earth plunged into the Ice Age, a pre-Illinoian (or possibly the Illinoian) glaciation diverted the Missouri River southeastwards towards its present confluence with the Mississippi and caused it to integrate into a single river system that cuts across the regional slope. In western Montana, the Missouri River is thought to have once flowed north then east around the Bear Paw Mountains. Sapphires are found in some spots along the river in western Montana. Advances of the continental ice sheets diverted the river and its tributaries, causing them to pool up into large temporary lakes such as Glacial Lakes Great Falls, Musselshell and others. As the lakes rose, the water in them often spilled across adjacent local drainage divides, creating now - abandoned channels and coulees including the Shonkin Sag, 100 miles (160 km) long. When the glaciers retreated, the Missouri flowed in a new course along the south side of the Bearpaws, and the lower part of the Milk River tributary took over the original main channel.
The Missouri 's nickname, the "Big Muddy '', was inspired by its enormous loads of sediment or silt -- some of the largest of any North American river. In its pre-development state, the river transported some 175 to 320 million short tons (159 to 290 Mt) per year. The construction of dams and levees has drastically reduced this to 20 to 25 million short tons (18 to 23 Mt) in the present day. Much of this sediment is derived from the river 's floodplain, also called the meander belt; every time the river changed course, it would erode tons of soil and rocks from its banks. However, damming and channeling the river has kept it from reaching its natural sediment sources along most of its course. Reservoirs along the Missouri trap roughly 36.4 million short tons (33.0 Mt) of sediment each year. Despite this, the river still transports more than half the total silt that empties into the Gulf of Mexico; the Mississippi River Delta, formed by sediment deposits at the mouth of the Mississippi, is composed largely of sediments carried by the Missouri.
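The metric tonnages quoted above follow directly from the short-ton figures; the sketch below applies the standard short-ton-to-metric-ton factor to the numbers in this paragraph (labels are illustrative only):

# Short-ton to metric-ton conversions for the sediment loads quoted above (values from the text).
T_PER_SHORT_TON = 0.907185       # metric tons in one short ton

for label, million_short_tons in [
    ("pre-development load, low",  175),
    ("pre-development load, high", 320),
    ("present-day load, low",       20),
    ("present-day load, high",      25),
    ("trapped in reservoirs",      36.4),
]:
    print(f"{label}: {million_short_tons * T_PER_SHORT_TON:.1f} Mt per year")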
Archaeological evidence, especially in Missouri, suggests that human beings first inhabited the watershed of the Missouri River between 10,000 and 12,000 years ago at the end of the Pleistocene. During the end of the last glacial period, large migrations of humans were taking place, such as those via the Bering land bridge between the Americas and Eurasia. Over centuries, the Missouri River formed one of these main migration paths. Most migratory groups that passed through the area eventually settled in the Ohio Valley and the lower Mississippi River Valley, but many, including the Mound builders, stayed along the Missouri, becoming the ancestors of the later Indigenous peoples of the Great Plains.
Indigenous peoples of North America who have lived along the Missouri have historically had access to ample food, water, and shelter. Many migratory animals naturally inhabit the plains area. Before they were slaughtered by colonists, these animals, such as the buffalo, provided meat, clothing, and other everyday items; there were also great riparian areas in the river 's floodplain that provided habitat for herbs and other staple foods. No written records from the tribes and peoples of the pre-European contact period exist because they did not yet use writing. According to the writings of early colonists, some of the major tribes along the Missouri River included the Otoe, Missouria, Omaha, Ponca, Brulé, Lakota, Arikara, Hidatsa, Mandan, Assiniboine, Gros Ventres and Blackfeet.
In this pre-colonial and early - colonial era, the Missouri river was used as a path of trade and transport, and the river and its tributaries often formed territorial boundaries. Most of the Indigenous peoples in the region at that time had semi-nomadic cultures, with many tribes maintaining different summer and winter camps. However, the center of Native American wealth and trade lay along the Missouri River in the Dakotas region on its great bend south. A large cluster of walled Mandan, Hidatsa and Arikara villages situated on bluffs and islands of the river was home to thousands, and later served as a market and trading post used by early French and British explorers and fur traders. Following the introduction of horses to Missouri River tribes, possibly from feral European - introduced populations, Natives ' way of life changed dramatically. The use of the horse allowed them to travel greater distances, and thus facilitated hunting, communications and trade.
Once, tens of millions of American bison (commonly called buffalo), one of the keystone species of the Great Plains and the Ohio Valley, roamed the plains of the Missouri River basin. Most Native American nations in the basin relied heavily on the bison as a food source, and their hides and bones served to create other household items. In time, the species came to benefit from the indigenous peoples ' periodic controlled burnings of the grasslands surrounding the Missouri to clear out old and dead growth. The large bison population of the region gave rise to the term great bison belt, an area of rich annual grasslands that extended from Alaska to Mexico along the eastern flank of the Continental Divide. However, after the arrival of Europeans in North America, both the bison and the Native Americans saw a rapid decline in population. Massive over-hunting for sport by colonists eliminated bison populations east of the Mississippi River by 1833 and reduced the numbers in the Missouri basin to a mere few hundred. Foreign diseases brought by settlers, such as smallpox, raged across the land, decimating Native American populations. Left without their primary source of sustenance, many of the remaining indigenous people were forced onto resettlement areas and reservations, often at gunpoint.
In May 1673, the French explorers Louis Jolliet and Jacques Marquette left the settlement of St. Ignace on Lake Huron and traveled down the Wisconsin and Mississippi Rivers, aiming to reach the Pacific Ocean. In late June, Jolliet and Marquette became the first documented European discoverers of the Missouri River, which according to their journals was in full flood. "I never saw anything more terrific, '' Jolliet wrote, "a tangle of entire trees from the mouth of the Pekistanoui (Missouri) with such impetuosity that one could not attempt to cross it without great danger. The commotion was such that the water was made muddy by it and could not clear itself. '' They recorded Pekitanoui or Pekistanoui as the local name for the Missouri. However, the party never explored the Missouri beyond its mouth, nor did they linger in the area. In addition, they later learned that the Mississippi drained into the Gulf of Mexico and not the Pacific as they had originally presumed; the expedition turned back about 440 miles (710 km) short of the Gulf at the confluence of the Arkansas River with the Mississippi.
In 1682, France expanded its territorial claims in North America to include land on the western side of the Mississippi River, which included the lower portion of the Missouri. However, the Missouri itself remained formally unexplored until Étienne de Veniard, Sieur de Bourgmont commanded an expedition in 1714 that reached at least as far as the mouth of the Platte River. It is unclear exactly how far Bourgmont traveled beyond there; he described the blond - haired Mandans in his journals, so it is likely he reached as far as their villages in present - day North Dakota. Later that year, Bourgmont published The Route To Be Taken To Ascend The Missouri River, the first known document to use the name "Missouri River ''; many of the names he gave to tributaries, mostly for the native tribes that lived along them, are still in use today. The expedition 's discoveries eventually found their way to cartographer Guillaume Delisle, who used the information to create a map of the lower Missouri. In 1718, Jean - Baptiste Le Moyne, Sieur de Bienville requested that the French government bestow upon Bourgmont the Cross of St. Louis because of his "outstanding service to France ''.
Bourgmont had in fact been in trouble with the French colonial authorities since 1706, when he deserted his post as commandant of Fort Detroit after poorly handling an attack by the Ottawa that resulted in thirty - one deaths. However, his reputation was enhanced in 1720 when the Pawnee -- who had earlier been befriended by Bourgmont -- massacred the Spanish Villasur expedition near present - day Columbus, Nebraska on the Missouri River, temporarily ending Spanish encroachment on French Louisiana.
Bourgmont established Fort Orleans, the first European settlement of any kind on the Missouri River, near present - day Brunswick, Missouri, in 1723. The following year Bourgmont led an expedition to enlist Comanche support against the Spanish, who continued to show interest in taking over the Missouri. In 1725 Bourgmont brought the chiefs of several Missouri River tribes to visit France. There he was raised to the rank of nobility and did not accompany the chiefs back to North America. Fort Orleans was either abandoned or its small contingent massacred by Native Americans in 1726.
The French and Indian War erupted when territorial disputes between France and Great Britain in North America reached a head in 1754. By 1763, France was defeated by the much greater strength of the British army and was forced to cede its Canadian possessions to the English and Louisiana to the Spanish in the Treaty of Paris, amounting to most of its colonial holdings in North America. Initially, the Spanish did not extensively explore the Missouri and let French traders continue their activities under license. However, this ended after news of the British Hudson 's Bay Company incursions in the upper Missouri River watershed was brought back following an expedition by Jacques D'Eglise in the early 1790s. In 1795 the Spanish chartered the Company of Discoverers and Explorers of the Missouri, popularly referred to as the "Missouri Company '', and offered a reward for the first person to reach the Pacific Ocean via the Missouri. In 1794 and 1795 expeditions led by Jean Baptiste Truteau and Antoine Simon Lecuyer de la Jonchère did not even make it as far north as the Mandan villages in central North Dakota.
Arguably the most successful of the Missouri Company expeditions was that of James MacKay and John Evans. The two set out along the Missouri, and established Fort Charles about 20 miles (32 km) south of present - day Sioux City as a winter camp in 1795. At the Mandan villages in North Dakota, they expelled several British traders, and while talking to the populace they pinpointed the location of the Yellowstone River, which was called Roche Jaune ("Yellow Rock '') by the French. Although MacKay and Evans failed to accomplish their original goal of reaching the Pacific, they did create the first accurate map of the upper Missouri River.
In 1795, the young United States and Spain signed Pinckney 's Treaty, which recognized American rights to navigate the Mississippi River and store goods for export in New Orleans. Three years later, Spain revoked the treaty and in 1800 secretly returned Louisiana to Napoleonic France in the Third Treaty of San Ildefonso. This transfer was so secret that the Spanish continued to administer the territory. In 1801, Spain restored rights to use the Mississippi and New Orleans to the United States.
Fearing that American access to New Orleans could be cut off again, President Thomas Jefferson proposed to buy the port of New Orleans from France for $10 million. Instead, faced with a debt crisis, Napoleon offered to sell the entirety of Louisiana, including the Missouri River, for $15 million -- amounting to less than 3 ¢ per acre. The deal was signed in 1803, doubling the size of the United States with the acquisition of the Louisiana Territory. In 1803, Jefferson instructed Meriwether Lewis to explore the Missouri and search for a water route to the Pacific Ocean. By then, it had been discovered that the Columbia River system, which drains into the Pacific, had a similar latitude as the headwaters of the Missouri River, and it was widely believed that a connection or short portage existed between the two. However, Spain balked at the takeover, citing that they had never formally returned Louisiana to the French. Spanish authorities warned Lewis not to take the journey and forbade him from seeing the MacKay and Evans map of the Missouri, although Lewis eventually managed to gain access to it.
Meriwether Lewis and William Clark began their famed expedition in 1804 with a party of thirty - three people in three boats. Although they became the first Europeans to travel the entire length of the Missouri and reach the Pacific Ocean via the Columbia, they found no trace of the Northwest Passage. The maps made by Lewis and Clark, especially those of the Pacific Northwest region, provided a foundation for future explorers and emigrants. They also negotiated relations with numerous Native American tribes and wrote extensive reports on the climate, ecology and geology of the region. Many present - day names of geographic features in the upper Missouri basin originated from their expedition.
As early as the 18th century, fur trappers entered the extreme northern basin of the Missouri River in the hopes of finding populations of beaver and river otter, the sale of whose pelts drove the thriving North American fur trade. They came from many different places -- some from the Canadian fur corporations at Hudson Bay, some from the Pacific Northwest (see also: Maritime fur trade), and some from the midwestern United States. Most did not stay in the area for long, as they failed to find significant resources.
The first glowing reports of country rich with thousands of game animals came in 1806 when Meriwether Lewis and William Clark returned from their two - year expedition. Their journals described lands amply stocked with thousands of buffalo, beaver, and river otter; and also an abundant population of sea otters on the Pacific Northwest coast. In 1807, explorer Manuel Lisa organized an expedition which would lead to the explosive growth of the fur trade in the upper Missouri River country. Lisa and his crew traveled up the Missouri and Yellowstone Rivers, trading manufactured items in return for furs from local Native American tribes, and established a fort at the confluence of the Yellowstone and a tributary, the Bighorn, in southern Montana. Although the business started small, it quickly grew into a thriving trade.
Lisa 's men started construction of Fort Raymond, which sat on a bluff overlooking the confluence of the Yellowstone and Bighorn, in the fall of 1807. The fort would serve primarily as a trading post for bartering with the Native Americans for furs. This method was unlike that of the Pacific Northwest fur trade, which involved trappers hired by the various fur enterprises, namely Hudson 's Bay. Fort Raymond was later replaced by Fort Lisa at the confluence of the Missouri and Yellowstone in North Dakota; a second fort also called Fort Lisa was built downstream on the Missouri River in Nebraska. In 1809 the St. Louis Missouri Fur Company was founded by Lisa in conjunction with William Clark and Pierre Chouteau, among others. In 1828, the American Fur Company founded Fort Union at the confluence of the Missouri and Yellowstone Rivers. Fort Union gradually became the main headquarters for the fur trade in the upper Missouri basin.
Fur trapping activities in the early 19th century encompassed nearly all of the Rocky Mountains on both the eastern and western slopes. Trappers of the Hudson 's Bay Company, St. Louis Missouri Fur Company, American Fur Company, Rocky Mountain Fur Company, North West Company and other outfits worked thousands of streams in the Missouri watershed as well as the neighboring Columbia, Colorado, Arkansas, and Saskatchewan river systems. During this period, the trappers, also called mountain men, blazed trails through the wilderness that would later form the paths pioneers and settlers would travel by into the West. Transport of the thousands of beaver pelts required ships, providing one of the first large motives for river transport on the Missouri to start.
As the 1830s drew to a close, the fur industry slowly began to die as silk replaced beaver fur as a desirable clothing item. By this time, also, the beaver population of streams in the Rocky Mountains had been decimated by intense hunting. Furthermore, frequent Native American attacks on trading posts made it dangerous for employees of the fur companies. In some regions, the industry continued well into the 1840s, but in others such as the Platte River valley, declines of the beaver population contributed to an earlier demise. The fur trade finally disappeared in the Great Plains around 1850, with the primary center of industry shifting to the Mississippi Valley and central Canada. Despite the demise of the once - prosperous trade, however, its legacy led to the opening of the American West, and a flood of settlers, farmers, ranchers, adventurers, hopefuls, the financially bereft, and entrepreneurs took their place.
The river roughly defined the American frontier in the 19th century, particularly downstream from Kansas City, where it takes a sharp eastern turn into the heart of the state of Missouri, an area known as the Boonslick. As the first area settled by Europeans along the river, it was largely populated by slave - owning southerners following the Boone 's Lick Road. The major trails for the opening of the American West all have their starting points on the river, including the California, Mormon, Oregon, and Santa Fe trails. The first westward leg of the Pony Express was a ferry across the Missouri at St. Joseph, Missouri. Similarly, most emigrants arrived at the eastern terminus of the First Transcontinental Railroad via a ferry ride across the Missouri between Council Bluffs, Iowa, and Omaha. The Hannibal Bridge became the first bridge to cross the Missouri River in 1869, and its location was a major reason why Kansas City became the largest city on the river upstream from its mouth at St. Louis.
True to the then - ideal of Manifest Destiny, over 500,000 people set out from the river town of Independence, Missouri to their various destinations in the American West from the 1830s to the 1860s. These people had many reasons to embark on this strenuous year - long journey -- economic crisis, and later gold strikes including the California Gold Rush, for example. For most, the route took them up the Missouri to Omaha, Nebraska, where they would set out along the Platte River, which flows from the Rocky Mountains in Wyoming and Colorado eastwards through the Great Plains. An early expedition led by Robert Stuart from 1812 to 1813 proved the Platte impossible to navigate by the dugout canoes they used, let alone the large sidewheelers and sternwheelers that would later ply the Missouri in increasing numbers. One explorer remarked that the Platte was "too thick to drink, too thin to plow ''. Nevertheless, the Platte provided an abundant and reliable source of water for the pioneers as they headed west. Covered wagons, popularly referred to as prairie schooners, provided the primary means of transport until the beginning of regular boat service on the river in the 1850s.
During the 1860s, gold strikes in Montana, Colorado, Wyoming, and northern Utah attracted another wave of hopefuls to the region. Although some freight was hauled overland, most transport to and from the gold fields was done through the Missouri and Kansas Rivers, as well as the Snake River in western Wyoming and the Bear River in Utah, Idaho, and Wyoming. It is estimated that of the passengers and freight hauled from the Midwest to Montana, over 80 percent were transported by boat, a journey that took 150 days in the upstream direction. A route more directly west into Colorado lay along the Kansas River and its tributary the Republican River, as well as a pair of smaller Colorado streams, Big Sandy Creek and the South Platte River, to near Denver. The gold rushes precipitated the decline of the Bozeman Trail as a popular emigration route, as it passed through land held by often - hostile Native Americans. Safer paths were blazed to the Great Salt Lake near Corinne, Utah during the gold rush period, which led to the large - scale settlement of the Rocky Mountains region and eastern Great Basin.
As settlers expanded their holdings into the Great Plains, they ran into land conflicts with Native American tribes. This resulted in frequent raids, massacres and armed conflicts, leading to the federal government creating multiple treaties with the Plains tribes, which generally involved establishing borders and reserving lands for the indigenous peoples. As with many other treaties between the U.S. and Native Americans, they were soon broken, leading to further large - scale wars. Over 1,000 battles, big and small, were fought between the U.S. military and Native Americans before the tribes were forced out of their land onto reservations.
Conflicts between natives and settlers over the opening of the Bozeman Trail in the Dakotas, Wyoming and Montana led to Red Cloud 's War, in which the Lakota and Cheyenne fought against the U.S. Army. The fighting resulted in a complete Native American victory. In 1868, the Treaty of Fort Laramie was signed, which "guaranteed '' the use of the Black Hills, Powder River Country and other regions surrounding the northern Missouri River to Native Americans without white intervention. The Missouri River was also a significant landmark as it divides northeastern Kansas from western Missouri; pro-slavery forces from Missouri would cross the river into Kansas and spark mayhem during Bleeding Kansas, leading to continued tension and hostility even today between Kansas and Missouri. Another significant military engagement on the Missouri River during this period was the 1861 Battle of Boonville, which did not affect Native Americans but rather was a turning point in the American Civil War that allowed the Union to seize control of transport on the river, discouraging the state of Missouri from joining the Confederacy.
However, the peace and freedom of the Native Americans did not last for long. The Great Sioux War of 1876 -- 77 was sparked when American miners discovered gold in the Black Hills of western South Dakota and eastern Wyoming. These lands were originally set aside for Native American use by the Treaty of Fort Laramie. When the settlers intruded onto the lands, they were attacked by Native Americans. U.S. troops were sent to the area to protect the miners, and drive out the natives from the new settlements. During this bloody period, both the Native Americans and the U.S. military won victories in major battles, resulting in the loss of nearly a thousand lives. The war eventually ended in an American victory, and the Black Hills were opened to settlement. Native Americans of that region were relocated to reservations in Wyoming and southeastern Montana.
In the late 19th and early 20th centuries, a great number of dams were built along the course of the Missouri, transforming 35 percent of the river into a chain of reservoirs. River development was stimulated by a variety of factors, first by growing demand for electricity in the rural northwestern parts of the basin, and also by floods and droughts that plagued rapidly growing agricultural and urban areas along the lower Missouri River. Small, privately owned hydroelectric projects have existed since the 1890s, but the large flood - control and storage dams that characterize the middle reaches of the river today were not constructed until the 1950s.
Between 1890 and 1940, five dams were built in the vicinity of Great Falls to generate power from the Great Falls of the Missouri, a chain of giant waterfalls formed by the river in its path through western Montana. Black Eagle Dam, built in 1891 on Black Eagle Falls, was the first dam of the Missouri. Replaced in 1926 with a more modern structure, the dam was little more than a small weir atop Black Eagle Falls, diverting part of the Missouri 's flow into the Black Eagle power plant. The largest of the five dams, Ryan Dam, was built in 1913. The dam lies directly above the 87 - foot (27 m) Big Falls, the largest waterfall of the Missouri.
In the same period, several private establishments -- most notably the Montana Power Company -- began to develop the Missouri River above Great Falls and below Helena for power generation. A small run - of - the - river structure completed in 1898 near the present site of Canyon Ferry Dam became the second dam to be built on the Missouri. This rock - filled timber crib dam generated seven and a half megawatts of electricity for Helena and the surrounding countryside. The nearby steel Hauser Dam was finished in 1907, but failed in 1908 because of structural deficiencies, causing catastrophic flooding all the way downstream past Craig. At Great Falls, a section of the Black Eagle Dam was dynamited to save nearby factories from inundation. Hauser was rebuilt in 1910 as a concrete gravity structure, and stands to this day.
Holter Dam, about 45 miles (72 km) downstream of Helena, was the third hydroelectric dam built on this stretch of the Missouri River. When completed in 1918 by the Montana Power Company and the United Missouri River Power Company, its reservoir flooded the Gates of the Mountains, a limestone canyon which Meriwether Lewis described as "the most remarkable clifts that we have yet seen... the tow (er) ing and projecting rocks in many places seem ready to tumble on us. '' In 1949, the U.S. Bureau of Reclamation (USBR) began construction on the modern Canyon Ferry Dam to provide flood control to the Great Falls area. By 1954, the rising waters of Canyon Ferry Lake submerged the old 1898 dam, whose powerhouse still stands underwater about 1 1⁄2 miles (2.4 km) upstream of the present - day dam.
"(The Missouri 's temperament was) uncertain as the actions of a jury or the state of a woman 's mind. '' -- Sioux City Register, March 28, 1868
The Missouri basin suffered a series of catastrophic floods around the turn of the 20th century, most notably in 1844, 1881, and 1926 -- 1927. In 1940, as part of the Great Depression - era New Deal, the U.S. Army Corps of Engineers (USACE) completed Fort Peck Dam in Montana. Construction of this massive public works project provided jobs for more than 50,000 laborers during the Depression and was a major step in providing flood control to the lower half of the Missouri River. However, Fort Peck only controls runoff from 11 percent of the Missouri River watershed, and had little effect on a severe snowmelt flood that struck the lower basin three years later. This event was particularly destructive as it submerged manufacturing plants in Omaha and Kansas City, greatly delaying shipments of military supplies in World War II.
Flooding damages on the Mississippi -- Missouri river system were one of the primary reasons for which Congress passed the Flood Control Act of 1944, opening the way for the USACE to develop the Missouri on a massive scale. The 1944 act authorized the Pick -- Sloan Missouri Basin Program (Pick -- Sloan Plan), which was a composite of two widely varying proposals. The Pick plan, with an emphasis on flood control and hydroelectric power, called for the construction of large storage dams along the main stem of the Missouri. The Sloan plan, which stressed the development of local irrigation, included provisions for roughly 85 smaller dams on tributaries.
In the early stages of Pick -- Sloan development, tentative plans were made to build a low dam on the Missouri at Riverdale, North Dakota and 27 smaller dams on the Yellowstone River and its tributaries. This was met with controversy from inhabitants of the Yellowstone basin, and eventually the USBR proposed a solution: to greatly increase the size of the proposed dam at Riverdale -- today 's Garrison Dam, thus replacing the storage that would have been provided by the Yellowstone dams. Because of this decision, the Yellowstone is now the longest free - flowing river in the contiguous United States. In the 1950s, construction commenced on the five mainstem dams -- Garrison, Oahe, Big Bend, Fort Randall and Gavins Point -- proposed under the Pick - Sloan Plan. Along with Fort Peck, which was integrated as a unit of the Pick - Sloan Plan in the 1940s, these dams now form what is known as the Missouri River Mainstem System.
The flooding of lands along the Missouri River heavily impacted Native American groups whose reservations included fertile bottomlands and floodplains, especially in the arid Dakotas where it was some of the only good farmland they had. These consequences were pronounced in North Dakota 's Fort Berthold Indian Reservation, where 150,000 acres (61,000 ha) of land was taken by the construction of Garrison Dam. The Mandan, Hidatsa and Arikara / Sanish tribes sued the federal government on the basis of the 1851 Treaty of Fort Laramie which provided that reservation land could not be taken without the consent of both the tribes and Congress. After a lengthy legal battle the tribes were coerced in 1947 to accept a $5.1 million ($55 million today) settlement for the land, just $33 per acre. In 1949 this was increased to $12.6 million. The tribes were even denied the right to use the reservoir shore "for grazing, hunting, fishing, and other purposes. ''
The six dams of the Mainstem System, chiefly Fort Peck, Garrison and Oahe, are among the largest dams in the world by volume; their sprawling reservoirs also rank among the biggest in the nation. Holding up to 74.1 million acre feet (91.4 km³) in total, the six reservoirs can store more than three years ' worth of the river 's flow as measured below Gavins Point, the lowermost dam. This capacity makes it the largest reservoir system in the United States and one of the largest in North America. In addition to storing irrigation water, the system also includes an annual flood - control reservation of 16.3 million acre feet (20.1 km³). Mainstem power plants generate about 9.3 billion kWh annually -- equal to a constant output of almost 1,100 megawatts. Along with nearly 100 smaller dams on tributaries, namely the Bighorn, Platte, Kansas, and Osage Rivers, the system provides irrigation water to nearly 7,500 sq mi (19,000 km²) of land.
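The storage and power figures above can be cross-checked with simple arithmetic; the sketch below converts the acre-foot volumes to cubic kilometres and expresses the annual generation as a constant output, using only the numbers quoted in this paragraph:

# Checking the Mainstem System figures quoted above (values from the text).
M3_PER_ACRE_FOOT = 1233.48       # cubic metres in one acre-foot
HOURS_PER_YEAR = 8760

print(f"total storage:      {74.1e6 * M3_PER_ACRE_FOOT / 1e9:.1f} km^3")   # ~91.4 km^3
print(f"flood-control pool: {16.3e6 * M3_PER_ACRE_FOOT / 1e9:.1f} km^3")   # ~20.1 km^3

# 9.3 billion kWh per year expressed as a constant output:
avg_output_mw = 9.3e9 / HOURS_PER_YEAR / 1000    # kWh per year -> kW -> MW
print(f"average output:     {avg_output_mw:,.0f} MW")                      # ~1,060 MW, i.e. 'almost 1,100 megawatts'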
Of the fifteen dams on the Missouri River itself, many of the run - of - the - river dams form very small impoundments which may or may not have been given names. All of the dams are on the upper half of the river above Sioux City; the lower river is uninterrupted due to its longstanding use as a shipping channel.
"(Missouri River shipping) never achieved its expectations. Even under the very best of circumstances, it was never a huge industry. '' ~ Richard Opper, former Missouri River Basin Association executive director
Boat travel on the Missouri started with the wood - framed canoes and bull boats of the Native Americans, which were used for thousands of years before the introduction of larger craft to the river upon colonization of the Great Plains. The first steamboat on the Missouri was the Independence, which started running between St. Louis and Keytesville, Missouri around 1819. By the 1830s, large mail and freight - carrying vessels were running regularly between Kansas City and St. Louis, and many traveled even farther upstream. A handful, such as the Western Engineer and the Yellowstone, were able to make it up the river as far as eastern Montana.
During the early 19th century, at the height of the fur trade, steamboats and keelboats began traveling nearly the whole length of the Missouri from Montana 's rugged Missouri Breaks to the mouth, carrying beaver and buffalo furs to and from the areas that the trappers frequented. This resulted in the development of the Missouri River mackinaw, which specialized in carrying furs. Since these boats could only travel downriver, they were dismantled and sold for lumber upon their arrival at St. Louis.
Water transport increased through the 1850s with multiple craft ferrying pioneers, emigrants and miners; many of these runs were from St. Louis or Independence to near Omaha. There, most of these people would set out overland along the large but shallow and unnavigable Platte River, which was described by pioneers as "a mile wide and an inch deep '' and "the most magnificent and useless of rivers ''. Steamboat navigation peaked in 1858 with over 130 boats operating full - time on the Missouri, with many more smaller vessels. Many of the earlier vessels were built on the Ohio River before being transferred to the Missouri. Side - wheeler steamboats were preferred over the larger sternwheelers used on the Mississippi and Ohio because of their greater maneuverability.
The industry 's success, however, did not guarantee safety. In the early decades before the river 's flow was controlled by man, its unpredictable rises and falls and its massive amounts of sediment, which prevented a clear view of the bottom, wrecked some 300 vessels. Because of the dangers of navigating the Missouri River, the average ship 's lifespan was short, only about four years. The development of the Transcontinental and Northern Pacific Railroads marked the beginning of the end of steamboat commerce on the Missouri. Outcompeted by trains, the number of boats slowly dwindled, until there was almost nothing left by the 1890s. Transport of agricultural and mining products by barge, however, saw a revival in the early twentieth century.
Since the beginning of the 20th century, the Missouri River has been extensively engineered for water transport purposes, and about 32 percent of the river now flows through artificially straightened channels. In 1912, the USACE was authorized to maintain the Missouri to a depth of six feet (1.8 m) from the Port of Kansas City to the mouth, a distance of 368 miles (592 km). This was accomplished by constructing levees and wing dams to direct the river 's flow into a straight, narrow channel and prevent sedimentation. In 1925, the USACE began a project to widen the river 's navigation channel to 200 feet (61 m); two years later, they began dredging a deep - water channel from Kansas City to Sioux City. These modifications have reduced the river 's length from some 2,540 miles (4,090 km) in the late 19th century to 2,341 miles (3,767 km) in the present day.
Construction of dams on the Missouri under the Pick - Sloan Plan in the mid-twentieth century was the final step in aiding navigation. The large reservoirs of the Mainstem System help provide a dependable flow to maintain the navigation channel year - round, and are capable of halting most of the river 's annual freshets. However, high and low water cycles of the Missouri -- notably the protracted early - 21st - century drought in the Missouri River basin and historic floods in 1993 and 2011 -- are difficult for even the massive Mainstem System reservoirs to control.
In 1945, the USACE began the Missouri River Bank Stabilization and Navigation Project, which would permanently increase the river 's navigation channel to a width of 300 feet (91 m) and a depth of nine feet (2.7 m). During work that continues to this day, the 735 - mile (1,183 km) navigation channel from Sioux City to St. Louis has been controlled by building rock dikes to direct the river 's flow and scour out sediments, sealing and cutting off meanders and side channels, and dredging the riverbed. However, the Missouri has often resisted the efforts of the USACE to control its depth. In 2006, several U.S. Coast Guard boats ran aground in the Missouri River because the navigation channel had been severely silted. The USACE was blamed for failing to maintain the channel to the minimum depth.
In 1929, the Missouri River Navigation Commission estimated the total amount of goods shipped on the river annually at 15 million tons (13.6 million metric tons), generating widespread support for the creation of a navigation channel. However, shipping traffic has since been far lower than expected -- shipments of commodities including produce, manufactured items, lumber, and oil averaged only 683,000 tons (616,000 t) per year from 1994 to 2006.
By tonnage of transported material, Missouri is by far the largest user of the river, accounting for 83 percent of river traffic, while Kansas has 12 percent, Nebraska three percent and Iowa two percent. Almost all of the barge traffic on the Missouri River ships sand and gravel dredged from the lower 500 miles (800 km) of the river; the remaining portion of the shipping channel now sees little to no use by commercial vessels.
For navigation purposes, the Missouri River is divided into two main sections. The Upper Missouri River is north of Gavins Point Dam, the last hydroelectric dam of fifteen on the river, just upstream from Sioux City, Iowa. The Lower Missouri River is the 840 miles (1,350 km) of river below Gavins Point until it meets the Mississippi just above St. Louis. The Lower Missouri River has no hydroelectric dams or locks but it has a plethora of wing dams that enable barge traffic by directing the flow of the river into a 200 - foot - wide (61 m), 12 - foot - deep (3.7 m) channel. These wing dams have been put in place by and are maintained by the U.S. Army Corps of Engineers, and there currently are no plans to construct any locks to replace these wing dams on the Missouri River.
Tonnage of goods shipped by barges on the Missouri River has seen a serious decline from the 1960s to the present. In the 1960s, the USACE predicted an increase to 12 million short tons (11 Mt) per year by 2000, but instead the opposite has happened. The amount of goods plunged from 3.3 million short tons (3.0 Mt) in 1977 to just 1.3 million short tons (1.2 Mt) in 2000. One of the largest drops has been in agricultural products, especially wheat. Part of the reason is that irrigated land along the Missouri has only been developed to a fraction of its potential. In 2006, barges on the Missouri hauled only 200,000 short tons (180,000 t) of products, which is equal to the amount of daily freight traffic on the Mississippi.
Drought conditions in the early 21st century and competition from other modes of transport -- mainly railroads -- are the primary reason for decreasing river traffic on the Missouri. The failure of the USACE to consistently maintain the navigation channel has also hampered the industry. Currently, efforts are being made to revive the shipping industry on the Missouri River, because of the efficiency and low cost of river transport for hauling agricultural products, and the overcrowding of alternative transportation routes. Solutions such as expanding the navigation channel and releasing more water from reservoirs during the peak of the navigation season are being considered. Drought conditions lifted in 2010, when about 334,000 short tons (303,000 t) were barged on the Missouri, representing the first significant increase in shipments since 2000. However, flooding in 2011 closed record stretches of the river to boat traffic -- "wash (ing) away hopes for a bounce - back year. ''
There are no locks and dams on the lower Missouri River, but there are numerous wing dams that jut out into the river and make it harder for barges to navigate. In contrast, the upper Mississippi has 29 locks and dams and averaged 61.3 million tons of cargo annually from 2008 to 2011, and its locks are closed in the winter.
Historically, the thousands of square miles that comprised the floodplain of the Missouri River supported a wide range of plant and animal species. Biodiversity generally increased proceeding downstream from the cold, subalpine headwaters in Montana to the temperate, moist climate of Missouri. Today, the river 's riparian zone consists primarily of cottonwoods, willows and sycamores, with several other types of trees such as maple and ash. Average tree height generally increases farther from the riverbanks for a limited distance, as land adjacent to the river is vulnerable to soil erosion during floods. Because of its large sediment concentrations, the Missouri does not support many aquatic invertebrates. However, the basin does support about 300 species of birds and 150 species of fish, some of which are endangered such as the pallid sturgeon. The Missouri 's aquatic and riparian habitats also support several species of mammals, such as minks, river otters, beavers, muskrats, and raccoons.
The World Wide Fund For Nature divides the Missouri River watershed into three freshwater ecoregions: the Upper Missouri, Middle Missouri and Central Prairie. The Upper Missouri, roughly encompassing the area within Montana, Wyoming, southern Alberta and Saskatchewan, and North Dakota, comprises mainly semiarid shrub - steppe grasslands with sparse biodiversity because of Ice Age glaciations. There are no known endemic species within the region. Except for the headwaters in the Rockies, there is little precipitation in this part of the watershed. The Middle Missouri ecoregion, extending through Colorado, southwestern Minnesota, northern Kansas, Nebraska, and parts of Wyoming and Iowa, has greater rainfall and is characterized by temperate forests and grasslands. Plant life is more diverse in the Middle Missouri, which is also home to about twice as many animal species. Finally, the Central Prairie ecoregion is situated on the lower part of the Missouri, encompassing all or parts of Missouri, Kansas, Oklahoma and Arkansas. Despite large seasonal temperature fluctuations, this region has the greatest diversity of plants and animals of the three. Thirteen species of crayfish are endemic to the lower Missouri.
Since the beginning of river commerce and industrial development in the 1800s, the Missouri has been severely polluted and its water quality degraded by human activity. Most of the river 's floodplain habitat is long gone, replaced by irrigated agricultural land. Development of the floodplain has led to increasing numbers of people and infrastructure within areas at high risk of inundation. Levees have been constructed along more than a third of the river in order to keep floodwater within the channel, with the consequence of faster stream velocity and increased peak flows in downstream areas. Fertilizer runoff, which causes elevated levels of nitrogen and other nutrients, is a major problem along the Missouri River, especially in Iowa and Missouri. This form of pollution also heavily affects the upper Mississippi, Illinois and Ohio Rivers. Low oxygen levels in rivers and the vast Gulf of Mexico dead zone at the end of the Mississippi Delta are both results of high nutrient concentrations in the Missouri and other tributaries of the Mississippi.
Channelization of the lower Missouri has made the river narrower, deeper and less accessible to riparian flora and fauna. Numerous dams and bank stabilization projects have been constructed to facilitate the conversion of 300,000 acres (1,200 km2) of Missouri River floodplain to agricultural land. Channel control has significantly reduced the volume of sediment transported downstream by the river and eliminated critical habitat for fish, birds and amphibians. By the early 21st century, declines in populations of native species prompted the U.S. Fish and Wildlife Service to issue a biological opinion recommending restoration of river habitats for federally endangered bird and fish species.
The USACE began work on ecosystem restoration projects along the lower Missouri River in the early 21st century. Because of the low use of the shipping channel in the lower Missouri maintained by the USACE, it is now considered feasible to remove some of the levees, dikes, and wing dams that constrict the river 's flow, thus allowing it to naturally restore its banks. By 2001, there were 87,000 acres (350 km2) of riverside floodplain undergoing active restoration.
Restoration projects have re-mobilized some of the sediments that had been trapped behind bank stabilization structures, prompting concerns of exacerbated nutrient and sediment pollution locally and downstream in the northern Gulf of Mexico. A 2010 National Research Council report assessed the roles of sediment in the Missouri River, evaluating current habitat restoration strategies and alternative ways to manage sediment. The report found that a better understanding of sediment processes in the Missouri River, including the creation of a "sediment budget '' -- an accounting of sediment transport, erosion, and deposition volumes for the length of the Missouri River -- would provide a foundation for projects to improve water quality standards and protect endangered species.
With over 1,500 sq mi (3,900 km2) of open water, the six reservoirs of the Missouri River Mainstem System provide some of the main recreational areas within the basin. Visitation has increased from 10 million visitor - hours in the mid-1960s to over 60 million visitor - hours in 1990. Development of visitor facilities was spurred by the Federal Water Project Recreation Act of 1965, which required the USACE to build and maintain boat ramps, campgrounds and other public facilities along major reservoirs. Recreational use of Missouri River reservoirs is estimated to contribute $85 -- 100 million to the regional economy each year.
The Lewis and Clark National Historic Trail, some 3,700 miles (6,000 km) long, follows nearly the entire Missouri River from its mouth to its source, retracing the route of the Lewis and Clark Expedition. Extending from Wood River, Illinois, in the east, to Astoria, Oregon, in the west, it also follows portions of the Mississippi and Columbia Rivers. The trail, which crosses eleven U.S. states, is maintained by various federal and state government agencies; it passes through some 100 historic sites, notably archaeological locations including the Knife River Indian Villages National Historic Site.
Parts of the river itself are designated for recreational use or preservation. The Missouri National Recreational River consists of portions of the Missouri downstream from Fort Randall and Gavins Point Dams that total 98 miles (158 km). These reaches exhibit islands, meanders, sandbars, underwater rocks, riffles, snags, and other once - common features of the lower river that have now disappeared under reservoirs or have been destroyed by channelization. About forty - five steamboat wrecks are scattered along these reaches of the river.
Downstream from Great Falls, Montana, about 149 miles (240 km) of the river course through a rugged series of canyons and badlands known as the Missouri Breaks. This part of the river, designated a U.S. National Wild and Scenic River in 1976, flows within the Upper Missouri Breaks National Monument, a 375,000 - acre (1,520 km2) preserve comprising steep cliffs, deep gorges, arid plains, badlands, archaeological sites, and whitewater rapids on the Missouri itself. The preserve includes a wide variety of plant and animal life; recreational activities include boating, rafting, hiking and wildlife observation.
In north - central Montana, some 1,100,000 acres (4,500 km2) along over 125 miles (201 km) of the Missouri River, centering on Fort Peck Lake, comprise the Charles M. Russell National Wildlife Refuge. The wildlife refuge consists of a native northern Great Plains ecosystem that has not been heavily affected by human development, except for the construction of Fort Peck Dam. Although there are few designated trails, the whole preserve is open to hiking and camping.
Many U.S. national parks, such as Glacier National Park, Rocky Mountain National Park, Yellowstone National Park and Badlands National Park are, at least partially, in the watershed. Parts of other rivers in the basin are set aside for preservation and recreational use -- notably the Niobrara National Scenic River, which is a 76 - mile (122 km) protected stretch of the Niobrara River, one of the Missouri 's longest tributaries. The Missouri flows through or past many National Historic Landmarks, which include Three Forks of the Missouri, Fort Benton, Montana, Big Hidatsa Village Site, Fort Atkinson, Nebraska and Arrow Rock Historic District.
|
when did houston move to the national league | Houston Astros - wikipedia
The Houston Astros are an American professional baseball team based in Houston, Texas. The Astros compete in Major League Baseball (MLB) as a member club of the American League (AL) West division, having moved to the division in 2013 after spending their first 51 seasons in the National League (NL). The Astros have played their home games at Minute Maid Park since 2000.
The Astros were established as the Houston Colt. 45s and entered the National League as an expansion team in 1962 along with the New York Mets. The current name -- reflecting Houston 's role as the control center of the U.S. space program -- was adopted three years later, when they moved into the Astrodome, the world 's first domed sports stadium.
The Astros played in the NL from 1962 to 2012. They played in the West Division from 1969 to 1993, and the Central Division from 1994 to 2012. While a member of the NL, the Astros played in one World Series, in 2005, against the Chicago White Sox, in which they were swept in four games. In 2017, they became the first franchise in MLB history to have won a pennant in both the NL and the AL, when they defeated the New York Yankees in the ALCS. They subsequently won the 2017 World Series against the Los Angeles Dodgers, winning four games to three, earning the team, and Texas, its first World Series title.
From 1888 until 1961, Houston 's professional baseball club was the minor league Houston Buffaloes. Although expansion from the National League eventually brought an MLB team to Texas in 1962, Houston officials had been making efforts to do so for years prior. There were four men chiefly responsible for bringing Major League Baseball to Houston: George Kirksey and Craig Cullinan, who had led a futile attempt to purchase the St. Louis Cardinals in 1952; R.E. "Bob '' Smith, a prominent oilman and real estate magnate in Houston who was brought in for his financial resources; and Judge Roy Hofheinz, a former Mayor of Houston and Harris County Judge who was recruited for his salesmanship and political style. They formed the Houston Sports Association as their vehicle for attaining a big league franchise for the city of Houston.
Given MLB 's refusal to consider expansion, Kirksey, Cullinan, Smith, and Hofheinz joined forces with would - be owners from other cities and announced the formation of a new league to compete with the established National and American Leagues. They called the new league the Continental League. Wanting to protect potential new markets, both existing leagues chose to expand from eight teams to ten. However, plans eventually fell through for the Houston franchise after the Houston Buffaloes owner, Marty Marion, could not come to an agreement with the HSA to sell the team. To make matters worse, the Continental League as a whole folded in August 1960.
However, on October 17, 1960, the National League granted the Houston Sports Association an expansion franchise that would begin play in the 1962 season. According to the Major League Baseball Constitution, the Houston Sports Association was required to obtain territorial rights from the Houston Buffaloes in order to play in the Houston area, and again negotiations began to purchase the team. Eventually, the Houston Sports Association succeeded in purchasing the Houston Buffaloes, at this point majority - owned by William Hopkins, on January 17, 1961. The Buffs played one last minor league season as the top farm team of the Chicago Cubs in 1961 before being succeeded by the city 's NL club.
The new Houston team was named the Colt. 45s after a "Name The Team '' contest was won by William Irving Neder. The Colt. 45 was well known as "the gun that won the west. '' The colors selected were navy blue and orange. The first team was formed mostly through an expansion draft after the 1961 season. The Colt. 45s and their expansion cousins, the New York Mets, took turns choosing players left unprotected by the other National League franchises.
Many of those associated with the Houston Buffaloes organization were allowed by the ownership to continue in the major league. Manager Harry Craft, who had joined Houston in 1961, remained in the same position for the team until the end of the 1964 season. General manager Spec Richardson also continued with the organization as business manager, but was later promoted again to the same position with the Astros from 1967 until 1975. Although most players for the major league franchise were obtained through the 1961 Major League Baseball expansion draft, Buffs players J.C. Hartman, Pidge Browne, Jim Campbell, Ron Davis, Dave Giusti, and Dave Roberts were chosen to continue as major league ball players.
Similarly, the radio broadcasting team remained with the new Houston major league franchise. Loel Passe worked alongside Gene Elston as a color commentator until he retired from broadcasting in 1976. Elston continued with the Astros until 1986.
The Colt. 45s began their existence playing at Colt Stadium, a temporary venue built just north of the construction site of the indoor stadium.
The Colt. 45s started their inaugural season on April 10, 1962, against the Chicago Cubs with Harry Craft as the Colt. 45s ' manager. Bob Aspromonte scored the first run for the Colt. 45s on an Al Spangler triple in the first inning. They started the season with a three - game sweep of the Cubs but eventually finished eighth among the National League 's ten teams. The team 's best pitcher, Richard "Turk '' Farrell, lost 20 games despite an ERA of 3.02. A starter for the Colt. 45s, Farrell was primarily a relief pitcher prior to playing for Houston. He was selected to both All - Star Games in 1962.
The 1963 season saw more young talent mixed with seasoned veterans. Jimmy Wynn, Rusty Staub, and Joe Morgan all made their major league debuts in the 1963 season. However, Houston 's position in the standings did not improve. In fact, the Colt. 45s finished in ninth place with a 66 -- 96 record. The team was still building, trying to find that perfect mix to compete. The 1964 campaign began on a sad note, as relief pitcher Jim Umbricht died of cancer at the age of 33 on April 8, just before Opening Day. Umbricht was the only Colt. 45s pitcher to post a winning record in Houston 's first two seasons. He was so well liked by players and fans that the team retired his jersey number, 32, in 1965.
Just on the horizon, the structure of the new domed stadium was more prevalent and it would soon change the way that baseball was watched in Houston and around the league. On December 1, 1964, the team announced the name change from Colt. 45s to "Astros. ''
With Judge Roy Hofheinz now the sole owner of the franchise and the new venue complete, the renamed "Astros '' moved into their new domed stadium, the Astrodome, in 1965. The name honored Houston 's position as the center of the nation 's space program; NASA 's new Manned Spacecraft Center had recently opened southeast of the city. The Astrodome, coined the "Eighth Wonder of the World '', did little to improve the home team 's results on the field. While several "indoor '' firsts were accomplished, the team still finished ninth in the standings. The attendance was high not because of the team accomplishments, but because people came from miles around to see the Astrodome.
Just as the excitement was settling down over the Astrodome, the 1966 season found something new to put the domed stadium in the spotlight once again -- the field. Grass would not grow in the new park, since the roof panels had been painted to reduce the glare that was causing players on both the Astros and the visiting teams to miss routine pop flies. A new artificial turf was created called "AstroTurf '' and Houston would be involved in yet another change in the way the game was played.
With new manager Grady Hatton the Astros started the 1966 season strong. By May they were in second place in the National League and looked like a team that could contend. Joe Morgan was named as a starter on the All - Star Team. The success did not last, as they lost Jimmy Wynn for the season after he crashed into an outfield fence in Philadelphia and Morgan broke his kneecap. 1967 saw first baseman Eddie Mathews join the Astros. The slugger hit his 500th home run while in Houston. He would be traded late in the season and Doug Rader would be promoted to the big leagues. Rookie Don Wilson pitched a no - hitter on June 18. Wynn also provided some enthusiasm in 1967. The 5 ft 9 in Wynn was becoming known not only for how often he hit home runs, but also for how far he hit them. Wynn set club records with 37 home runs and 107 RBIs. It was also in 1967 that Wynn hit his famous home run onto Interstate 75 in Cincinnati. As the season came to a close, the Astros found themselves again in ninth place and with a winning percentage below. 500. The team looked good on paper, but could not make it work on the field.
April 15, 1968, saw a pitching duel for the ages, as the Astros ' Don Wilson and the Mets ' Tom Seaver faced each other in a game that lasted six hours. Seaver went ten innings, allowing no walks and just two hits. Wilson went nine innings, allowing five hits and three walks. After the starters exited, eleven relievers (seven for the Mets and four for the Astros) tried to end the game. The game finally ended in the 24th inning when Aspromonte hit a shot toward Mets shortstop Al Weis. Weis had been perfect all night at short, but he was not quick enough to make the play. The ball zipped into left field, allowing Norm Miller to score.
With baseball expansion and trades, the Astros had changed dramatically by 1969. Aspromonte was sent to the Braves and Staub was traded to the expansion Montreal Expos in exchange for outfielder Jesús Alou and first baseman Donn Clendenon. However, Clendenon refused to report to Houston, electing to retire and take a job with a pen manufacturing company. The Astros asked Commissioner Bowie Kuhn to void the trade, but he refused. Instead, he awarded Jack Billingham and a left - handed relief pitcher to the Astros to complete the trade. Mike Cuellar was traded to the Baltimore Orioles for Curt Blefary. Other new players included catcher Johnny Edwards, infielder Denis Menke, and pitcher Denny Lemaster. Wilson continued to pitch brilliantly and on May 1 threw the second no - hitter of his career. In that game, he struck out 18 batters, tying what was then the all - time single - game mark. He was just 24 years of age and was second only to Sandy Koufax in career no - hit wins. Wilson 's no - hitter lit the Astros ' fire after a miserable month of April, and six days later the team tied a major league record by turning seven double plays in a game. By May 's end the Astros had put together a ten - game winning streak. The Houston infield tandem of Menke and Joe Morgan continued to improve, providing power at the plate and great defense. Morgan had 15 homers and stole 49 bases while Menke led the Astros with 90 RBIs. The Menke / Morgan punch was beginning to come alive, and the team was responding to manager Harry Walker 's style. The Astros dominated the season series against their expansion twins, the New York Mets. In one game at New York, Denis Menke and Jimmy Wynn hit grand slams in the same inning, against a Mets team that would go on to win the World Series that same year. The Astros finished the 1969 season with a record of 81 wins, 81 losses, marking their first season of. 500 ball.
In 1970, the Astros were expected to be a serious threat in the National League West. In June, 19 - year - old César Cedeño was called up and immediately showed signs of being a superstar. The Dominican outfielder batted. 310 after being called up. Not to be outdone, Denis Menke batted. 304 and Jesús Alou batted. 306. The Astros ' batting average was up by 19 points compared to the season before. The team looked good, but the Astros ' ERA was up. Larry Dierker and Don Wilson had winning records, but the pitching staff as a whole had an off season. Houston finished in fourth place in 1970.
The fashion trends of the 1970s had started taking root in baseball. Long hair and loud colors were starting to appear on team uniforms, including the Astros '. In 1971 the Astros made some changes to their uniform: they kept the same style they had in previous seasons, but inverted the colors. What was navy blue was now orange and what was orange was now a lighter shade of blue. The players ' last names were added to the back of the jerseys. In 1972, the uniform fabric was also changed to what was at the time revolutionizing the industry -- polyester. Belts were replaced by elastic waistbands, and jerseys zipped up instead of having buttons. The uniforms became popular with fans, but would last only until 1975, when the Astros would shock baseball and the fashion world.
The uniforms were about the only thing that did change in 1971. The acquisition of Roger Metzger from the Chicago Cubs in the off - season moved Menke to first base and Bob Watson to the outfield. The Astros got off to a slow start and the pitching and hitting averages were down. Larry Dierker was selected to the All - Star game in 1971, but due to an arm injury he could not make it. César Cedeño led the club with 81 RBIs and the league with 40 doubles, but batted just. 264 and had 102 strikeouts in his second season with the Astros. Pitcher J.R. Richard made his debut in September of the 1971 season against the Giants.
In November 1971 the Astros and Cincinnati Reds made one of the biggest blockbuster trades in the history of the sport, and helped create The Big Red Machine of the 1970s, with the Reds getting the better end of the deal. Houston sent second baseman Joe Morgan, infielder Denis Menke, pitcher Jack Billingham, outfielder César Gerónimo and prospect Ed Armbrister to Cincinnati for first baseman Lee May, second baseman Tommy Helms and infielder Jimmy Stewart. The trade left Astros fans and the baseball world scratching their heads as to why General Manager Spec Richardson would give up so much for so little. The Reds, on the other hand, would shore up many problems. They had an off year in 1971, but were the National League Pennant winner in 1972.
The Astros ' acquisition of Lee May added more power to the lineup in 1972. May, Wynn, Rader and Cedeño all had 20 or more home runs and Watson hit 16. Cedeño also led the Astros with a. 320 batting average and 55 stolen bases, and made spectacular plays in the field. Cedeño made his first All - Star game in 1972 and became the first player in team history to hit for the cycle, in an August game against the Reds. The Astros finished the strike - shortened season at 84 - 69, their first winning season.
Astros fans had hoped for more of the same in 1973, but it was not to be. The Astros ' run production was down, even though the same five sluggers from the year before were still punching the ball out of the park. Lee May led the Astros with 28 home runs and Cesar Cedeño batted. 320 with 25 home runs. Bob Watson hit the. 312 mark and drove in 94 runs. Doug Rader and Jimmy Wynn both had 20 or more home runs. However, injuries to their pitching staff limited the Astros to an 82 -- 80 fourth - place finish. The Astros again finished in fourth place the next year under new manager Preston Gómez.
With the Astrodome $38 million in debt, control of the Astrodomain -- including the Astros -- passed from Judge Roy Hofheinz to GE Credit and Ford Motor Credit. The creditors were interested mainly in preserving the asset value of the team, so any money spent had to be found or saved somewhere else. Tal Smith returned to the Astros from the New York Yankees to find a team that needed a lot of work and did not have a lot of money. However, there would be some bright spots that would prove to be good investments in the near future.
The year started on a sad note. Pitcher Don Wilson was found dead in the passenger seat of his car on January 5, 1975. The cause of death was asphyxiation by carbon monoxide. Wilson was 29 years old. Wilson 's number 40 was retired on April 13, 1975.
The 1975 season saw the introduction of the Astros ' new uniforms. Many teams were going away from the traditional uniform and the Astros were no exception. The uniforms had multishade stripes of orange, red and yellow across the front and back, set behind a large dark blue star over the midsection. The same stripes ran down the pant legs. Players ' numbers appeared not only on the back of the jersey, but also on the pant leg. The bright stripes were meant to appear as a fiery trail like a rocket sweeping across the heavens. The uniforms were panned by critics, but the public liked them and versions started appearing at the high school and little league level. The uniform was so different from what other teams wore that the Astros wore it both at home and on the road until 1980.
Besides the bright new uniforms there were some other changes. Lee May was traded to Baltimore for much talked about rookie second baseman Rob Andrews and utility player Enos Cabell. In Baltimore, Cabell was stuck behind third baseman Brooks Robinson, but he took advantage of his opportunity in Houston and became their everyday third baseman. Cabell would go on to become a big part of the team 's success in later years. With May gone, Bob Watson was able to move to first base and was a bright spot in the line up, batting. 324 with 85 RBI.
The two biggest moves the Astros made in the offseason were the acquisitions of Joe Niekro and José Cruz. The Astros bought Niekro from the Braves for almost nothing. Niekro had bounced around the big leagues with minimal success. His older brother Phil Niekro had started teaching Joe how to throw his knuckleball and Joe was just starting to use it when he came to the Astros. Niekro won six games, saved four games and had an ERA of 3.07. José Cruz was also a steal, in retrospect, from the Cardinals. Cruz became a fixture in the Astros ' outfield for several years and would eventually have his number 25 retired.
Despite high expectations, the 1975 season was among the worst in franchise history. The Astros ' record of 64 -- 97 was far worse than even the expansion Colt. 45 's and would remain the worst in franchise history until 2011. It was the worst record in baseball, and manager Preston Gómez was fired late in the season and replaced by Bill Virdon. The Astros played. 500 ball under Virdon in the last 34 games of the season. With Virdon as the manager the Astros improved greatly in 1976, finishing in third place with an 80 -- 82 record. A healthy César Cedeño was a key reason for the Astros ' success in 1976. Bob Watson continued to show consistency and led the club with a. 313 average and 102 RBI. José Cruz became Houston 's everyday left fielder and hit. 303 with 28 stolen bases. 1976 saw the end of Larry Dierker 's playing career as an Astro, but before it was all over he would throw a no - hitter and win the 1,000th game in the Astrodome. The Astros finished in third place again in 1977 with a record of 81 -- 81.
One of the big problems the Astros had in the late 1970s was that they were unable to compete in the free agent market. Ford Motor Credit Company was still in control of the team and was looking to sell the Astros, but they were not going to spend money on better players. Most of the talent was either farm grown or bought on the cheap.
The 1979 season would prove to be a big turnaround in Astros history. During the offseason, the Astros made an effort to fix some of their problem areas. They traded Floyd Bannister to Seattle for shortstop Craig Reynolds and acquired catcher Alan Ashby from Toronto for pitcher Mark Lemongello. Reynolds and Ashby were both solid in their positions and gave Houston some much needed consistency. The season started with a boost from pitcher Ken Forsch, who threw a no - hitter against the Braves in the second game of the season. In May 1979, New Jersey shipping tycoon Dr. John McMullen agreed to buy the Astros. Now with an investor in charge, the Astros would be more likely to compete in the free agent market.
The Astros were playing great baseball throughout the season. José Cruz and Enos Cabell both stole 30 bases. Joe Niekro had a great year with 21 wins and a 3.00 ERA. J.R. Richard won 18 games and set a new personal strikeout record at 313. Joe Sambito came into his own with 22 saves as the Astros ' closer. Things were going as they should for a team that could win the West. The Astros and Reds battled through the final month of the season. The Reds pulled ahead of the Astros by a game and a half. Later that month the two teams split a pair and the Reds kept the lead. That is how it ended: the Astros finished with their best record to that point at 89 -- 73, 1 1⁄2 games behind the division - winning Reds.
With Dr. McMullen as sole owner of the Astros, the team could now benefit in ways that corporate ownership had not allowed. Rumors of the Astros moving out of Houston faded, and the Astros were now able to compete in the free - agent market. McMullen showed the city of Houston that he too wanted a winning team, signing nearby Alvin, Texas, native Nolan Ryan to baseball 's first million - dollar - a-year deal. Ryan already had four career no - hitters and had struck out 383 batters in a single season.
Joe Morgan returned in 1980. Now back in Houston with two MVP awards and two World Series rings, Morgan wanted to help make the Astros a pennant winner.
The 1980 pitching staff was one of the best Houston ever had, with the fastball of Ryan, the knuckleball of Joe Niekro and the terrifying 6 ft 8 in frame of J.R. Richard. Teams felt lucky to face Ken Forsch, who was a double - digit winner in the previous two seasons. Richard became the first Astros pitcher to start an All - Star game. After a medical examination three days later, Richard was told to rest his arm and he collapsed during a July 30 workout. He had suffered a stroke after a blood clot in the arm apparently moved to his neck and cut off blood flow to the brain. Surgery was done to save his life, but the Astros had lost their ace pitcher after a 10 -- 4 start with a stingy 1.89 ERA. Richard attempted a comeback, but would never again pitch a big league game.
After the loss of Richard and some offensive struggles, the Astros slipped to third place in the division behind the Dodgers and the Reds. They bounced back to first with a ten - game winning streak, but the Dodgers had regained a two - game lead when they arrived in Houston on September 9. The Astros won the first two games of that series and the two teams were tied for the division lead. The Astros then held a three - game lead with three games left in the season, all against the Dodgers in Los Angeles. The Dodgers swept the series, forcing a one - game playoff the next day, but the Astros won the playoff 7 - 1 to advance to the postseason for the first time in franchise history.
The team would face the Philadelphia Phillies in the 1980 National League Championship Series. The Phillies sent out Steve Carlton in game one of the NLCS after a six - hour flight the night before. The Phillies would win the opener after the Astros got out to a 1 - 0 third - inning lead. Ken Forsch pitched particularly strong fourth and fifth innings, but Greg Luzinski hit a sixth - inning two - run bomb to the 300 level seats of Veterans Stadium. The Phillies added an insurance run on the way to a 3 -- 1 win. Houston bounced back to win games two and three. Game four went into extra innings, with the Phillies taking the lead and the win in the tenth inning. Pete Rose started a rally with a one - out single, then Luzinski doubled off the left field wall and Rose bowled over catcher Bruce Bochy to score the go - ahead run. The Phillies got an insurance run on the way to tying the series.
Rookie Phillies pitcher Marty Bystrom was sent out by Philadelphia manager Dallas Green to face veteran Nolan Ryan in Game Five. Bystrom gave up a run in the first inning, then held the Astros at bay into the sixth. The Astros ' early lead had already disappeared when Bob Boone hit a two - out single in the second, but Houston tied the game in the sixth on an Alan Ashby single that scored Denny Walling. Houston took a 5 -- 2 lead in the seventh, but the Phillies came back with five runs in the eighth. The Astros answered against Tug McGraw with four singles and two two - out runs to tie the game again. In the tenth, Garry Maddox doubled in Del Unser with one out to give the Phillies an 8 -- 7 lead, and the Astros failed to score in the bottom of the inning.
A 1981 player strike ran between June 12 and August 10. Ultimately, the strike would help the Astros get into the playoffs. Nolan Ryan and Bob Knepper picked up steam in the second half of the season. Ryan threw his fifth no - hitter on September 26 and finished the season with a 1.69 ERA. Knepper finished with an ERA of 2.18. In the wake of the strike, Major League Baseball took the winners of each "half '' season and set up a best - of - five divisional playoff. The Reds won more games than any other team in the National League, but they won neither half of the strike - divided season. The Astros finished 61 -- 49 overall, which would have been third in the division behind the Reds and the Dodgers. Advancing to the playoffs as winners of the second half, Houston beat Los Angeles in their first two playoff games at home, but the Dodgers took the next three in Los Angeles to advance to the NLCS.
By 1982, only four players and three starting pitchers remained from the 1980 squad. The Astros were out of pennant contention by August and began rebuilding for the near future. Bill Virdon was fired as manager and replaced by original Colt. 45 Bob Lillis. Don Sutton asked to be traded and was sent to the Milwaukee Brewers for cash and three prospects, including Kevin Bass. Minor league player Bill Doran was called up in September. Bass also got a look in the outfield. The Astros finished fourth in the West, but new talent was starting to appear.
Before the 1983 season, the Astros traded Danny Heep to the Mets for pitcher Mike Scott, a 28 - year - old who had struggled with New York. Art Howe sat out the 1983 season with an injury, forcing Phil Garner to third and Ray Knight to first. Doran took over at second, becoming the everyday second baseman for the next seven seasons. The Astros finished third in the National League West. The 1984 season started off badly when shortstop Dickie Thon was hit in the head by a pitch and was lost for the season. In September, the Astros called up rookie Glenn Davis after he posted impressive numbers in AAA. The Astros finished in second place. In 1985, Mike Scott learned a new pitch, the split - finger fastball. Scott, who was coming off of a 5 -- 11 season, had found his new pitch and would become one of Houston 's most celebrated hurlers. In June, Davis made the starting lineup at first base, adding power to the team. In September, Joe Niekro was traded to the Yankees for two minor league pitchers and lefty Jim Deshaies. The Astros finished in fourth place in 1985.
After finishing fourth in 1985, the Astros fired general manager Al Rosen and manager Bob Lillis. The former was supplanted by Dick Wagner, the man whose Reds defeated the Astros to win the 1979 NL West title. The latter was replaced by Hal Lanier who, like his managerial mentor in St. Louis, Whitey Herzog, had a hard - nosed approach to managing and espoused a playing style that focused on pitching, defense, and speed rather than home runs to win games. This style of baseball, known as Whiteyball, took advantage of stadiums with deep fences and artificial turf, both of which were characteristics of the Astrodome. Lanier 's style of baseball took Houston by storm. Fans were accustomed to the team 's occasional slow starts, but under the new manager Houston got off to a hot start, winning 13 of its first 19 contests.
Prior to the start of the season, the Astros acquired outfielder Billy Hatcher from the Cubs for Jerry Mumphrey. Lanier also made a change in the pitching staff, going with a three - man rotation to start the season. This allowed Lanier to keep his three starters (Nolan Ryan, Bob Knepper, and Mike Scott) sharp and to slowly work in rookie hurler Jim Deshaies. Bill Doran and Glenn Davis held down the right side of the field, but Lanier rotated the left side. Denny Walling and Craig Reynolds faced the right - handed pitchers while Phil Garner and Dickie Thon batted against left - handers. Lanier knew the Astros had talent and he put it to work.
The Astrodome was host to the 1986 All - Star Game, in which Astros Mike Scott, Kevin Bass, Glenn Davis, and Dave Smith represented the host club. The Astros kept pace in the NL West after the All - Star break. They went on a streak of five straight come - from - behind wins. Houston swept a key 3 - game series over the San Francisco Giants in late September to clinch the division title. Mike Scott took the mound in the final game of the series and pitched a no - hitter -- the only time in MLB history that any division was clinched via a no - hitter. Scott would finish the season with an 18 -- 10 record and a Cy Young Award.
The 1986 National League Championship Series against the New York Mets was noted for great drama and is considered one of the best postseason series ever. In Game 3, the Astros were ahead at Shea Stadium, 5 -- 4, in the bottom of the 9th when closer Dave Smith gave up a two - run home run to Lenny Dykstra, giving the Mets a dramatic 6 -- 5 win.
However, the signature game of the series was Game 6. Needing a win to get to Mike Scott (who had been dominant in the series) in Game 7, the Astros jumped out to a 3 -- 0 lead in the first inning, but neither team scored again until the 9th. In the 9th, starting pitcher Bob Knepper gave up two runs, and once again the Astros looked to Dave Smith to close it out. However, Smith walked Gary Carter and Darryl Strawberry, then gave up a game - tying sacrifice fly to Ray Knight. With the go - ahead runs on base, Smith nonetheless escaped the inning without further damage.
There was no scoring until the 14th inning, when the Mets took the lead on a Wally Backman single and an error by left fielder Billy Hatcher. The Astros got the run back in the bottom of the 14th when Hatcher, in a classic goat - to - hero moment, hit one of the most dramatic home runs in NLCS history off the left field foul pole. In the 16th inning, Darryl Strawberry led off with a double and Ray Knight drove him home in the next at - bat. The Mets scored a total of three runs in the inning to take what appeared to be an insurmountable 7 -- 4 lead. With their season on the line, the Astros nonetheless rallied for two runs to come within 7 -- 6. Kevin Bass came up with the tying and winning runs on base, but Jesse Orosco struck him out, ending the game. At the time, the 16 - inning game was the longest in MLB postseason history. The Mets won the series, 4 - 2.
After the 1986 season, the team had difficulty finding success again. Several changes occurred. The "rainbow '' uniforms were phased out, the team electing to keep a five - stripe "rainbow '' design on the sleeves. From 1987 to 1993, the Astros wore the same uniform for both home and away games, making them the only team in Major League Baseball to do so during that period. Fan favorites Nolan Ryan and José Cruz moved on and the team entered a rebuilding phase. Craig Biggio debuted in June 1988, joining new prospects Ken Caminiti and Gerald Young. Biggio would become the everyday catcher by 1990. A trade acquiring Jeff Bagwell in exchange for Larry Andersen would become one of the biggest deals in Astros history. Glenn Davis was traded to Baltimore for Curt Schilling, Pete Harnisch and Steve Finley in January 1991.
1990 -- 1999: Fine tuning
The early 1990s were marked by the Astros ' growing discontent with their home, the Astrodome. After the Astrodome was renovated for the primary benefit of the NFL 's Houston Oilers (who had shared the Astrodome with the Astros since the 1960s), the Astros grew increasingly disenchanted with the facility. Faced with declining attendance at the Astrodome and the inability of management to obtain a new stadium, in the 1991 off - season Astros management announced its intention to sell the team and move the franchise to the Washington, D.C. area. However, the move was not approved by other National League owners, thus compelling the Astros to remain in Houston. Shortly thereafter, McMullen (who also owned the NHL 's New Jersey Devils) sold the team in 1993 to Texas businessman Drayton McLane, who committed to keeping it in Houston.
Shortly after McLane 's arrival, which coincided with the maturation of Bagwell and Biggio, the Astros began to show signs of consistent success. After finishing second in their division in 1994 (in a strike year), 1995, and 1996, the Astros won consecutive division titles in 1997, 1998, and 1999. In the 1998 season, the Astros set a team record with 102 victories. However, each of these titles was followed by a first - round playoff elimination, in 1998 by the San Diego Padres and in 1997 and 1999 against the Atlanta Braves. The manager of these title teams was Larry Dierker, who had previously been a broadcaster and pitcher for the Astros. During this period, Bagwell, Biggio, Derek Bell, and Sean Berry earned the collective nickname "The Killer Bs ''. In later seasons, the name came to include other Astros, especially Lance Berkman.
Coinciding with the change in ownership, the team switched uniforms and team colors after the 1993 season in order to go for a new, more serious image. The team 's trademark rainbow uniforms were retired, and the team 's colors changed to midnight blue and metallic gold. The "Astros '' font on the team logo was changed to a more aggressive one, and the team 's traditional star logo was changed to a stylized, "flying '' star with an open left end. It marked the first time since the team 's inception that orange was not part of the team 's colors. Despite general agreement that the rainbow uniforms identified with the team had become tired (and looked too much like a minor league team according to the new owners), the new uniforms and caps were never especially popular with many Astros fans.
Off the field, in 1994, the Astros hired one of the first African American general managers, former franchise player Bob Watson. Watson would leave the Astros after the 1995 season to become general manager of the New York Yankees, whom he helped lead to a World Series championship in 1996. He was replaced by Gerry Hunsicker, who until 2004 would continue to oversee the building of the Astros into one of the best and most consistent organizations in the Major Leagues.
However, in 1996, the Astros again nearly left Houston. By the mid-1990s, McLane (like McMullen before him) wanted his team out of the Astrodome and was asking the city to build the Astros a new stadium. When things did not progress quickly toward that end, he put the team up for sale. He had nearly finalized a deal to sell the team to businessman William Collins, who planned to move them to Northern Virginia. However, Collins was having difficulty finding a site for a stadium himself, so Major League owners stepped in and forced McLane to give Houston another chance to grant his stadium wish. Houston voters, having already lost the Houston Oilers in a similar situation, responded positively via a stadium referendum and the Astros stayed put.
The 2000 season saw a move to a new stadium. Originally to be named The Ballpark at Union Station because it was located on the site of Houston 's Union Station, it was renamed Enron Field by the season opener after the naming rights were sold to energy corporation Enron. The stadium featured a retractable roof, particularly useful given Houston 's unpredictable weather, and offered more intimate surroundings than the Astrodome. In 2002, after Enron went bankrupt, the naming rights were purchased by Houston - based Minute Maid. A locomotive moves across the outfield and whistles after home runs, paying homage to Houston 's railroad history; by 1860, eleven railroad company lines ran through the city. The ballpark previously contained quirks such as "Tal 's Hill '', a slope in deep center field on which a flagpole stood, all in fair territory. A similar feature had existed at Crosley Field, and over the years center fielders made many highlight - reel catches running up the hill. Tal 's Hill was removed in the 2016 -- 2017 offseason, and the wall was moved in to 409 feet, which the team hoped would generate more home runs. With the change in location also came a change in attire. Gone were the blue and gold uniforms of the 1990s in favor of a more "retro '' look with pinstripes, a traditional baseball font, and the colors of brick red, sand and black. These colors were chosen because ownership originally wanted to rename the team the Houston Diesels. The "shooting star '' logo was modified but still retained its definitive look.
After two fairly successful seasons without a playoff appearance, the Astros were early favorites to win the 2004 NL pennant. They added star pitcher Andy Pettitte to a roster that already included standouts like Lance Berkman and Jeff Kent as well as veterans Bagwell and Biggio. Roger Clemens, who had retired after the 2003 season with the New York Yankees, agreed to join former teammate Pettitte on the Astros for 2004. The one - year deal included unique conditions, such as the option for Clemens to stay home in Houston on select road trips when he was n't scheduled to pitch. Despite the early predictions for success, the Astros had a mediocre 44 -- 44 record at the All - Star break. A lack of run production and a poor record in close games were major issues. After being booed at the 2004 All - Star Game held in Houston, manager Jimy Williams was fired and replaced by Phil Garner, a star on the division - winning 1986 Astros. The Astros enjoyed a 46 -- 26 record in the second half of the season under Garner and earned the NL wild card spot. The Astros defeated the Braves 3 -- 2 in the Division Series, but would lose the National League Championship Series to the St. Louis Cardinals in seven games. Clemens earned a record seventh Cy Young Award in 2004. Additionally, the mid-season addition of Carlos Beltrán in a trade with the Kansas City Royals helped the Astros tremendously in their playoff run. Despite midseason trade rumors, Beltrán would prove instrumental to the team 's hopes, hitting eight home runs in the postseason. Though he had asserted a desire to remain with the Astros, Beltrán signed a long - term contract with the New York Mets on January 9, 2005.
In 2005, the Astros started poorly and found themselves with a 15 - 30 record in late May. The Houston Chronicle had written them off with a tombstone emblazoned with "RIP 2005 Astros ''. However, from that low point until the end of July, Houston went 42 -- 17 and found themselves in the lead for an NL wild card spot. July saw the best single month record in the club 's history at 22 - 7. Offensive production had increased greatly after a slow start in the first two months. The Astros had also developed an excellent pitching staff, anchored by Roy Oswalt (20 -- 12, 2.94), Andy Pettitte (17 -- 9, 2.39), and Roger Clemens (13 -- 8 with a league - low ERA of only 1.87). The contributions of the other starters -- Brandon Backe (10 -- 8, 4.76) and rookie starters Ezequiel Astacio (3 -- 6, 5.67) and Wandy Rodríguez (10 -- 10, 5.53) -- were less remarkable, but enough to push the Astros into position for a playoff run. The Astros won a wild card berth on the final day of the regular season, becoming the first team since the world champion 1914 Boston Braves to qualify for the postseason after being 15 games under. 500.
The Astros won the National League Division Series against the Atlanta Braves, 3 - 1, with a game four that set postseason records for most innings (18), most players used by a single team (23), and longest game time (5 hours and 50 minutes). With the Astros trailing 6 - 1, Lance Berkman hit an eighth - inning grand slam to narrow the score to 6 -- 5. In the bottom of the ninth, catcher Brad Ausmus hit a game - tying home run that sent the game to extra innings. In the bottom of the tenth inning, Luke Scott hit a blast to left field that had home run distance, but was inches foul. Neither team scored again until the eighteenth inning. In the top of the fifteenth inning, Roger Clemens made only his second career relief appearance, pitching three shutout innings and notably striking out Julio Franco, at 47 the oldest player in MLB at the time; Clemens was himself 43. In the bottom of the eighteenth inning, Clemens came to bat again, indicating that he would be pitching in the nineteenth inning, if it came to that. Clemens struck out, but the next batter, Chris Burke, hit a home run to left field for the Astros win, 7 -- 6. Oddly enough, a fan in the "Crawford Boxes '' in left field had previously caught Berkman 's grand slam and this same fan caught Burke 's home run. The National League Championship Series featured a rematch of the 2004 NLCS. The Astros lost the first game in St. Louis, but won the next three, with Roy Oswalt picking up one of the wins. Though the Astros were poised to close out the series in Game Five in Houston, Brad Lidge gave up a monstrous two - out three - run home run to Albert Pujols, forcing the series to a sixth game in St. Louis, where the Astros clinched a World Series appearance. Roy Oswalt was named NLCS MVP, having gone 2 -- 0 with a 1.29 ERA in the series. Current honorary NL President William Y. Giles presented the league champion Astros with the Warren C. Giles Trophy. Warren Giles, William 's father and President of the National League from 1951 to 1969, had awarded an MLB franchise to the city of Houston in 1960.
The Astros faced the Chicago White Sox in the World Series. Chicago had been considered the slight favorite and won all four games, the first two at U.S. Cellular Field in Chicago and the final two in Houston. Game 3 marked the first World Series game held in the state of Texas, and was the longest game in World Series history, lasting 5 hours and 41 minutes.
This World Series was marked by a controversy involving the Minute Maid Park roof. MLB Commissioner Bud Selig insisted that the Astros play with the roof open, which dampened the intensity and enthusiasm of the cheering Astros fans.
In the 2006 offseason, the team signed Preston Wilson and moved Berkman to first base, ending the long tenure of Jeff Bagwell. The Astros renewed Clemens ' contract and traded two minor league prospects to the Tampa Bay Devil Rays for left - handed hitter Aubrey Huff. By August, Preston Wilson was complaining about his playing time after the return of Luke Scott from AAA Round Rock. The Astros released Wilson, and he was signed by St. Louis. A dramatic end to the season included wins in 10 of their last 12 games, but the Astros missed a playoff appearance when they lost the final game of the season to the Atlanta Braves.
On October 31, the Astros declined a contract option on Jeff Bagwell for 2007, ending his 15 - year Astros career and leading to his retirement. Roger Clemens and Andy Pettitte filed for free agency. On December 12, the Astros traded Willy Taveras, Taylor Buchholz, and Jason Hirsh to the Colorado Rockies for Rockies pitchers Jason Jennings and Miguel Asencio. A trade with the White Sox, involving the same three Astros in exchange for Jon Garland, had been nixed a few days earlier when Buchholz reportedly failed a physical. In the end, Taveras continued to develop and Hirsh had a strong 2007 rookie campaign, while Jennings was often injured and generally ineffective.
On April 28, 2007, the Astros purchased the contract of top minor league prospect Hunter Pence. He debuted that night, getting a hit and scoring a run. By May 2007, the Astros had suffered one of their worst recent losing streaks (10 games). On June 28, second baseman Craig Biggio became the 27th MLB player to accrue 3,000 career hits. Biggio needed three hits to reach 3,000, and on that night he had five. That same night, Carlos Lee hit a towering walk - off grand slam in the eleventh inning; Lee later quipped to the news media that he had hit a walk - off grand slam and still got second billing, given Biggio 's achievement. On July 24, Biggio announced that he would retire at the end of the season. He hit a grand slam in that night 's game, which broke a 3 -- 3 tie and led to an Astros win. In Biggio 's last at bat, he grounded out to Chipper Jones of the Atlanta Braves.
On September 20, Ed Wade was named General Manager. In his first move, he traded Jason Lane to the Padres on September 24. On September 30, Craig Biggio retired after twenty years with the team. In November, the Astros traded RHP Brad Lidge and SS Eric Bruntlett to the Philadelphia Phillies for OF Michael Bourn, RHP Geoff Geary, and minor leaguer Mike Costanzo. Utility player Mark Loretta accepted Houston 's salary arbitration and Kazuo Matsui finalized a $16.5 million, three - year contract with the team. In December the Astros traded OF Luke Scott, RHP Matt Albers, RHP Dennis Sarfate, LHP Troy Patton, and minor - league 3B Mike Costanzo, to the Baltimore Orioles for SS Miguel Tejada. On December 14, they sent infielder Chris Burke, RHP Juan Gutiérrez, and RHP Chad Qualls to the Arizona Diamondbacks for RHP José Valverde. On December 27, the Astros came to terms on a deal with All - star, Gold Glove winner Darin Erstad.
In January and February 2008, the Astros signed Brandon Backe, Ty Wigginton, Dave Borkowski and Shawn Chacón to one - year deals. The starting rotation would feature Roy Oswalt and Brandon Backe as numbers one and two. Wandy Rodríguez, Chacón and Chris Sampson rounded out the bottom three slots in the rotation. Woody Williams had retired after a 0 -- 4 spring training and Jason Jennings was now with Texas. On the other side of the roster, the Astros would start without Kazuo Matsui, who was on a minor league rehab assignment after a spring training injury.
The Astros remained competitive in 2008, finishing 86 -- 75, but regressed to 74 -- 88 in 2009. Manager Cecil Cooper was fired after the 2009 season.
The 2010 season was the first season as Astros manager for Brad Mills, who was previously the bench coach of the Boston Red Sox. The Astros struggled throughout a season that was marked by trade - deadline deals that sent longtime Astros to other teams. On July 29, the Astros ' ace starting pitcher, Roy Oswalt, was dealt to the Philadelphia Phillies for J.A. Happ and two minor league players. On July 31, outfielder Lance Berkman was traded to the New York Yankees for minor leaguers Jimmy Paredes and Mark Melancon. The Astros finished with a record of 76 - 86.
On July 30, 2011, the Astros traded OF Hunter Pence, the team 's 2010 leader in home runs, to the Philadelphia Phillies. On July 31, they traded OF Michael Bourn to the Atlanta Braves. On September 17, the Astros clinched their first 100 - loss season in franchise history. On September 28, the Astros ended the season with an 8 -- 0 home loss to the St. Louis Cardinals. Cardinals pitcher Chris Carpenter threw a complete - game, two - hit shutout, enabling the Cardinals to claim the National League Wild Card; St. Louis went on to beat the Texas Rangers in the World Series, with Lance Berkman a key player in their championship run. The Astros finished with a record of 56 -- 106, the worst single - season record in franchise history (a record which would be broken the following season).
In November 2010, Drayton McLane announced that the Astros were being put up for sale. McLane stated that because the Astros were one of the few franchises in Major League Baseball owned by a single family, the sale was part of planning his estate; McLane was 75 years old as of November 2011. In March 2011, local Houston businessman Jim Crane emerged as the front - runner to purchase the franchise. In the 1980s, Crane founded an air freight business which later merged with CEVA Logistics, and he went on to found Crane Capital Group. McLane and Crane had reached a handshake agreement for the franchise in 2008, but Crane abruptly changed his mind and broke off discussions. Crane also attempted to buy the Chicago Cubs in 2008 and the Texas Rangers during their 2010 bankruptcy auction. Crane came under scrutiny because of previous allegations of discriminatory hiring practices regarding women and minorities, among other issues. This delayed MLB 's approval process. During the summer of 2011, a frustrated Crane hinted that the delays might threaten the deal. In October 2011, Crane met personally with MLB Commissioner Bud Selig, in a meeting that was described as "constructive ''.
On November 15, 2011, it was announced that Crane had agreed to move the franchise to the American League for the 2013 season. The move was part of an overall divisional realignment of MLB, with the National and American leagues each having 15 teams in three geographically balanced divisions. Crane was given a $70 million concession by MLB for agreeing to the switch; the move was a condition for the sale to the new ownership group. Two days later, the Astros were officially sold to Crane after the other owners unanimously voted in favor of the sale. It was also announced that 2012 would be the last season for the Astros in the NL. The move was unpopular with many Astros fans, who had followed the team in the National League for more than fifty years.
In 2012, the Astros were eliminated from playoff contention before September 5. On September 27, the Astros named Bo Porter their manager for the 2013 season.
On October 3, the Astros ended over 50 years of National League play with a 5 -- 4 loss to the Chicago Cubs and turned their attention to the move to the American League. Winning only 20 road games during the entire season, the Astros finished with a 55 -- 107 record, the worst in all of Major League Baseball for 2012 and a new franchise low, surpassing the 2011 season.
On November 2, 2012, the Astros unveiled their new look in preparation for their move to the American League for the 2013 season. The uniform is navy blue and orange, going back to the original 1960s team colors, as well as debuting a new version of the classic navy blue hat with a white "H '' over an orange star.
On November 6, 2012, the Astros hired former Cleveland Indians director of baseball operations David Stearns as the team 's new assistant general manager under general manager Jeff Luhnow, a former St. Louis Cardinals front office executive.
The Houston Astros played their first game as an American League team on March 31, 2013, where they were victorious over their in - state division competitor, the Texas Rangers, with a score of 8 -- 2.
On September 29, the Astros completed their first year in the American League, losing 5 -- 1 in a 14 - inning game to the New York Yankees. The Astros finished the season with a 51 -- 111 record (a new franchise worst), ending the year on a 15 - game losing streak and again surpassing the previous season for the worst record in franchise history. The team finished 45 games behind the division - winning Oakland Athletics. This marked three consecutive years in which the Astros had lost more than 100 games, and they became the first team to hold the first overall pick in the draft three years in a row. They improved in 2014, going 70 -- 92, finishing 28 games behind the division - winning Los Angeles Angels of Anaheim and placing fourth in the AL West, ahead of the Texas Rangers.
In 2015, after a slow start, the Astros took over first place in the AL West on April 19 and stayed there until shortly before the All - Star Break in mid-July. The Astros retook first place on July 29, but fell out of first on September 15.
In September 2015, four men who had been closely associated with the team died: Yogi Berra, Gene Elston, Milo Hamilton, and John McMullen. Elston died on September 5, McMullen and Hamilton on the same day, Thursday, September 17, and Berra on September 22. McMullen, previously a co-owner of the New York Yankees, purchased the Astros in 1979 and brought Nolan Ryan to the club. In 1980, the Astros played the Philadelphia Phillies for the NL pennant with Gene Elston on the radio. In 1985, McMullen brought in Yogi Berra as bench coach for new Astros manager Hal Lanier for the 1986 season, with Milo Hamilton on the radio.
Dallas Keuchel led the AL with 20 victories, going 15 -- 0 at home, an MLB record. Key additions to the team included Scott Kazmir and SS Carlos Correa, who was called up in June 2015 and hit 22 home runs. 2B José Altuve picked up where he left off as the star of the Astros ' offense. On July 30, the Astros acquired Mike Fiers and Carlos Gómez from the Milwaukee Brewers. Fiers threw the 11th no - hitter in Astros history on August 21 against the Los Angeles Dodgers. Houston claimed the final AL playoff spot and faced the Yankees in the Wild Card Game on October 6 in New York, defeating them 3 -- 0, but lost to the Kansas City Royals in the American League Division Series.
The Astros split the first two games of the best - of - five ALDS in Kansas City, then won the first game at Minute Maid Park to take a 2 -- 1 series lead. In Game 4, the Astros held a 6 -- 2 lead after seven innings, but in the top of the eighth, an inning that took about 45 minutes to complete, the Royals strung together consecutive base hits to take a 7 -- 6 lead. The Astros lost 9 -- 6 and the series was tied at 2 -- 2. The series then returned to Kansas City, where the Royals clinched it with a 7 -- 2 win in Game 5.
The Astros entered the 2016 season as the favorites to win the AL West after a promising 2015 season. After a bad start to their season, with Houston going just 7 -- 17 in April, the Astros bounced back and went on to have a winning record in their next four months, including an 18 -- 8 record in June. But after going 12 -- 15 in September, the Astros were eliminated from playoff contention. They finished in third place in the American League West Division with a final record of 84 -- 78.
On August 10, 2016, Gómez was designated for assignment by the Astros after a dreadful campaign with the team, and he was released on August 18. Gómez finished his Astros career with a career - low .210 batting average and only 5 home runs in 85 games with the club. He later signed with division rival Texas.
The season was also marked by the Astros ' 4 -- 15 record against their in - state rival and eventual division winner, the Texas Rangers. The Astros finished the 2016 season eleven games behind the Rangers.
In 2014, Sports Illustrated predicted that the Astros would win the 2017 World Series through their strategic rebuilding process. By June 9, 2017, the Astros were 41 -- 16, giving them a 13.5 - game lead over the rest of their division and comfortable possession of the best record in the league, the best start in the franchise 's 55 - year history. At the close of play on June 23, the Astros held an 11.5 - game lead over the rest of the division. They entered the All - Star Break with an American League - best 60 -- 29 record and a 16 - game division lead, although the Dodgers had edged ahead of them by one game for the best overall record in MLB shortly before the break.
With Hurricane Harvey causing massive flooding throughout Houston and southeast Texas, the Astros ' three - game series against the Texas Rangers on August 29 -- 31 was relocated to Tropicana Field, home of the Tampa Bay Rays, in St. Petersburg, Florida. The Astros greatly improved against the Rangers in 2017, going 12 -- 7 against them and winning the season series.
At the August 31 waiver - trade deadline, GM Jeff Luhnow acquired veteran starting pitcher and Cy Young Award winner Justin Verlander to bolster the starting rotation. Verlander won each of his five regular - season starts with the Astros, yielding only four runs over that stretch. He carried his success into the playoffs, posting a 4 -- 1 record in six starts and throwing a complete - game shutout in Game 2 of the ALCS. Verlander was named the 2017 ALCS MVP.
The Astros clinched their first division title as a member of the American League West and their first division title overall since 2001. They also became the first team in Major League history to win three different divisions: the National League West in 1980 and 1986, the National League Central from 1997 to 1999 and in 2001, and the American League West in 2017. On September 29, the Astros won their 100th game of the season, only the second time the franchise had finished a season with over 100 wins, the first being in 1998. They finished 101 -- 61, with a 21 - game lead in the division, and faced the Red Sox in the first round of the AL playoffs. The Astros defeated the Red Sox three games to one and advanced to the American League Championship Series against the New York Yankees. The Astros won the ALCS four games to three and advanced to the World Series against the Los Angeles Dodgers, whom they defeated in the deciding seventh game to win the first championship in franchise history.
The city of Houston celebrated the team 's accomplishment with a parade on the afternoon of November 3, 2017. The Houston Independent School District gave students and teachers the day off to watch the parade.
On November 16, 2017, José Altuve was named the American League Most Valuable Player, capping off a historic season in which he accumulated 200 hits for the fourth consecutive season, led the majors with a .346 batting average, and was the unquestioned clubhouse leader of the World Series champions.
Two awards in memory of former Astros pitcher Darryl Kile are presented each year, one to a Houston Astro and one to a St. Louis Cardinal, each of whom exemplifies Kile 's virtues of being "a good teammate, a great friend, a fine father and a humble man. '' Each winner is selected by the respective team 's local chapter of the Baseball Writers ' Association of America.
The number 42 is retired by Major League Baseball in honor of Jackie Robinson.
57: Has not been reissued since former Astros pitcher Darryl Kile died as an active player with the St. Louis Cardinals in 2002.
17: Has not been reissued since Lance Berkman was traded in 2010.
Baseball Hall of Famers associated with the Astros: Nellie Fox, Jeff Bagwell, Craig Biggio, Leo Durocher, Randy Johnson, Eddie Mathews, Joe Morgan, Robin Roberts, Iván Rodríguez, Nolan Ryan and Don Sutton.
Ford C. Frick Award honorees (broadcasters): Gene Elston, Milo Hamilton and Harry Kalas.
In December 2016, the Astros agreed to a 30 - year deal to field a Class - A Advanced team in Fayetteville, North Carolina, beginning in 2019. The team will play the 2017 and 2018 seasons in Buies Creek while a new stadium is built in Fayetteville.
As of 2013, the Astros ' flagship radio station is KBME, Sportstalk 790 AM (a Fox Sports Radio affiliate), after leaving KTRH, 740 AM, their partner since 1999 (both stations are owned by iHeartMedia). The change made it difficult for listeners outside of Houston itself to hear the Astros: KTRH runs 50 kilowatts of power day and night and is audible across much of Central, East, and South Texas, whereas KBME runs only five kilowatts and can only be heard in Houston, especially after dark. Milo Hamilton, a veteran voice who was on the call for Hank Aaron 's 715th career home run in 1974, retired at the end of the 2012 season after broadcasting play - by - play for the Astros since 1985. Dave Raymond and Brett Dolan had shared play - by - play duty for road games, with Raymond also working as Hamilton 's color analyst (Hamilton called home games only in his final few seasons before retiring); neither was retained, and the Astros instead brought in Robert Ford and Steve Sparks to begin broadcasting in 2013.
Spanish language radio play - by - play is handled by Francisco Romero, and his play - by - play partner is Alex Treviño, a former backup catcher for the club.
During the 2012 season, Astros games on television were announced by Bill Brown and Jim Deshaies. In the seven seasons before then, Astros games were broadcast by Fox Sports Houston, with select games shown on broadcast TV by KTXH. As part of a ten - year, $1 billion deal with Comcast that included a majority stake jointly held by the Astros and the Houston Rockets, Astros games moved to the new Comcast SportsNet Houston at the beginning of the 2013 season. On September 27, 2013, CSN Houston filed for Chapter 11 bankruptcy, surprising the Astros, who owned the largest stake in the network. After the channel was brought out of bankruptcy by DirecTV Sports Networks and AT&T, it was renamed Root Sports Southwest and later AT&T SportsNet Southwest.
The current television team consists of Todd Kalas and Geoff Blum.
Orbit is the Houston Astros ' mascot, a lime - green outer - space creature wearing an Astros jersey with antennae extending into baseballs. Orbit was the team 's official mascot from the 1990 season through 1999, when Junction Jack was introduced as the team 's mascot for the 2000 move from the Astrodome to then - Enron Field. Orbit returned on November 2, 2012, at the unveiling of the Astros ' new look for their 2013 debut in the American League. The name Orbit pays homage to Houston 's association with NASA and the city 's nickname, Space City.
The Astros had been represented by a trio of rabbit mascots named Junction Jack, Jesse and Julie from 2000 through 2012.
In April 1977, the Houston Astros introduced their first mascot, Chester Charge. Created by Ed Henderson, Chester Charge was a Texas cavalry soldier on a horse. Chester appeared on the field at the beginning of each home game and during the seventh - inning stretch, and ran around the bases at the conclusion of each win. At the blast of a bugle, the scoreboard would light up and the audience would yell, "Charge! ''
|
rita ora - your song (official video) | Your Song (Rita Ora Song) - wikipedia
"Your Song '' is a song by British singer Rita Ora. It was released by Atlantic Records on 26 May 2017 as the lead single from her upcoming second studio album. The song peaked at number seven on the UK Singles Chart, becoming Ora 's ninth single to reach the top 10 in the UK. It also reached the top twenty in more than fifteen countries, including Belgium, Netherlands, Germany, Switzerland, Australia and New Zealand.
"Your Song '' is performed in the key of B ♭ minor with a tempo of 118 beats per minute in common time. The song follows a chord progression of B ♭ m -- G ♭ -- D ♭ -- G ♭ -- D ♭ -- A ♭ -- B ♭ m, and Ora 's vocals span from A ♭ to D ♭. Written by Ed Sheeran (with Steve Mac who also produced it), the song also features Sheeran on backing vocals.
The audio for the song was uploaded to YouTube a day before its official release. On 22 June 2017, the official music video for "Your Song '' was released on Ora 's YouTube channel. The video, filmed in Vancouver, was directed by Michael Haussman.
Ora performed "Your Song '' for the first time on 25 May 2017 during an amfAR charity gala at the 2017 Cannes Film Festival. She reprised the performance for Radio 1 's Big Weekend in Hull on 28 May. On 23 June 2017, she gave the first televised performance of the song on The One Show on BBC One. Ora performed the song on the French television show Quotidien a week later, on 30 June. On 18 July 2017, Ora performed the song on The Tonight Show Starring Jimmy Fallon. She performed the song at the 2017 Teen Choice Awards on 13 August, and again on The Ellen DeGeneres Show on 3 October 2017. Ora sang a medley of "Anywhere '' and "Your Song '' at the 2017 MTV Europe Music Awards on 12 November 2017, and repeated the same medley at the 2017 Bambi Awards on 16 November. On 12 April 2018, Ora performed the song live during a medley with "Anywhere '' and "For You '' (duet with Liam Payne) at the German Echo Music Prize.
|
which is the best description for the lines m and k | Karl Marx - wikipedia
Karl Marx (/ mɑːrks /; German: (ˈkaɐ̯l ˈmaɐ̯ks); 5 May 1818 -- 14 March 1883) was a Prussian - born philosopher, economist, political theorist, sociologist, journalist and revolutionary socialist.
Born in Trier to a middle - class family, Marx later studied political economy and Hegelian philosophy. As an adult, Marx became stateless and spent much of his life in London, England, where he continued to develop his thought in collaboration with German thinker Friedrich Engels and published various works, of which the two most well - known are the 1848 pamphlet The Communist Manifesto and the three - volume Das Kapital. His work has since influenced subsequent intellectual, economic and political history.
Marx 's theories about society, economics and politics -- collectively understood as Marxism -- hold that human societies develop through class struggle. In capitalism, this manifests itself in the conflict between the ruling classes (known as the bourgeoisie) that control the means of production and working classes (known as the proletariat) that enable these means by selling their labour power in return for wages. Employing a critical approach known as historical materialism, Marx predicted that, like previous socioeconomic systems, capitalism produced internal tensions which would lead to its self - destruction and replacement by a new system: socialism. For Marx, class antagonisms under capitalism, owing in part to its instability and crisis - prone nature, would eventuate the working class ' development of class consciousness, leading to their conquest of political power and eventually the establishment of a classless, communist society constituted by a free association of producers. Marx actively fought for its implementation, arguing that the working class should carry out organised revolutionary action to topple capitalism and bring about socio - economic emancipation.
Marx has been described as one of the most influential figures in human history and his work has been both lauded and criticised. His work in economics laid the basis for much of the current understanding of labour and its relation to capital, and subsequent economic thought. Many intellectuals, labour unions, artists and political parties worldwide have been influenced by Marx 's work, with many modifying or adapting his ideas. Marx is typically cited as one of the principal architects of modern social science.
Marx was born on 5 May 1818 to Heinrich Marx (1777 -- 1838) and Henrietta Pressburg (1788 -- 1863). He was born at Brückengasse 664 in Trier, a town then part of the Kingdom of Prussia 's Province of the Lower Rhine. Marx was ancestrally Jewish as his maternal grandfather was a Dutch rabbi, while his paternal line had supplied Trier 's rabbis since 1723, a role taken by his grandfather Meier Halevi Marx. His father, as a child known as Herschel, was the first in the line to receive a secular education and he became a lawyer and lived a relatively wealthy and middle - class existence, with his family owning a number of Moselle vineyards. Prior to his son 's birth, and to escape the constraints of anti-semitic legislation, Herschel converted from Judaism to Lutheranism, the main Protestant denomination in Germany and Prussia at the time, taking on the German forename of Heinrich over the Yiddish Herschel. Marx was a third cousin once removed of German Romantic poet Heinrich Heine, also born to a German Jewish family in the Rhineland, with whom he became a frequent correspondent in later life.
Largely non-religious, Heinrich was a man of the Enlightenment, interested in the ideas of the philosophers Immanuel Kant and Voltaire. A classical liberal, he took part in agitation for a constitution and reforms in Prussia, then governed by an absolute monarchy. In 1815, Heinrich Marx began work as an attorney and in 1819 moved his family to a ten - room property near the Porta Nigra. His wife, a Dutch Jewish woman, Henriette Pressburg, was from a prosperous business family that later founded the company Philips Electronics: she was great - aunt to Anton and Gerard Philips, as well as great - great - aunt to Frits Philips. Her sister Sophie Pressburg (1797 -- 1854), Marx 's aunt, was married to Lion Philips (1794 -- 1866), who became Marx 's uncle through this marriage, and she was the grandmother of both Gerard and Anton Philips. Lion Philips was a wealthy Dutch tobacco manufacturer and industrialist, upon whom Karl and Jenny Marx would later often come to rely for loans while they were exiled in London.
Little is known of Marx 's childhood. The third of nine children, he became the oldest son when his brother Moritz died in 1819. Young Marx was baptised into the Lutheran Church in August 1824 along with his surviving siblings, Sophie, Hermann, Henriette, Louise, Emilie and Caroline as was their mother the following year. Young Marx was privately educated by his father until 1830, when he entered Trier High School, whose headmaster, Hugo Wyttenbach, was a friend of his father. By employing many liberal humanists as teachers, Wyttenbach incurred the anger of the local conservative government. Subsequently, police raided the school in 1832 and discovered that literature espousing political liberalism was being distributed among the students. Considering the distribution of such material a seditious act, the authorities instituted reforms and replaced several staff during Marx 's attendance.
In October 1835 at the age of 17, Marx travelled to the University of Bonn wishing to study philosophy and literature, but his father insisted on law as a more practical field. Due to a condition referred to as a "weak chest '', Marx was excused from military duty when he turned 18. While at the University at Bonn, Marx joined the Poets ' Club, a group containing political radicals that were monitored by the police. Marx also joined the Trier Tavern Club drinking society (Landsmannschaft der Treveraner), at one point serving as club co-president. Additionally, Marx was involved in certain disputes, some of which became serious: in August 1836 he took part in a duel with a member of the university 's Borussian Korps. Although his grades in the first term were good, they soon deteriorated, leading his father to force a transfer to the more serious and academic University of Berlin.
Spending summer and autumn 1836 in Trier, Marx became more serious about his studies and his life. He became engaged to Jenny von Westphalen, an educated baroness of the Prussian ruling class who had known Marx since childhood. As she had broken off her engagement with a young aristocrat to be with Marx, their relationship was socially controversial owing to the differences between their religious and class origins, but Marx befriended her father Ludwig von Westphalen (a liberal aristocrat) and later dedicated his doctoral thesis to him. Seven years after their engagement, on 19 June 1843 they got married in a Protestant church in Kreuznach.
In October 1836, Marx arrived in Berlin, matriculating in the university 's faculty of law and renting a room in the Mittelstrasse. Although studying law, he was fascinated by philosophy and looked for a way to combine the two, believing that "without philosophy nothing could be accomplished ''. Marx became interested in the recently deceased German philosopher G.W.F. Hegel, whose ideas were then widely debated among European philosophical circles. During a convalescence in Stralau, he joined the Doctor 's Club (Doktorklub), a student group which discussed Hegelian ideas and through them became involved with a group of radical thinkers known as the Young Hegelians in 1837. They gathered around Ludwig Feuerbach and Bruno Bauer, with Marx developing a particularly close friendship with Adolf Rutenberg. Like Marx, the Young Hegelians were critical of Hegel 's metaphysical assumptions, but adopted his dialectical method in order to criticise established society, politics and religion from a leftist perspective. Marx 's father died in May 1838, resulting in a diminished income for the family. Marx had been emotionally close to his father and treasured his memory after his death.
By 1837, Marx was writing both fiction and non-fiction, having completed a short novel, Scorpion and Felix, a drama, Oulanem, as well as a number of love poems dedicated to Jenny von Westphalen, though none of this early work was published during his lifetime. Marx soon abandoned fiction for other pursuits, including the study of both English and Italian, art history and the translation of Latin classics. He began co-operating with Bruno Bauer on editing Hegel 's Philosophy of Religion in 1840. Marx was also engaged in writing his doctoral thesis, The Difference Between the Democritean and Epicurean Philosophy of Nature, which he completed in 1841. It was described as "a daring and original piece of work in which Marx set out to show that theology must yield to the superior wisdom of philosophy ''. The essay was controversial, particularly among the conservative professors at the University of Berlin. Marx decided instead to submit his thesis to the more liberal University of Jena, whose faculty awarded him his PhD in April 1841. As Marx and Bauer were both atheists, in March 1841 they began plans for a journal entitled Archiv des Atheismus (Atheistic Archives), but it never came to fruition. In July, Marx and Bauer took a trip to Bonn from Berlin. There they scandalised their class by getting drunk, laughing in church and galloping through the streets on donkeys.
Marx was considering an academic career, but this path was barred by the government 's growing opposition to classical liberalism and the Young Hegelians. Marx moved to Cologne in 1842, where he became a journalist, writing for the radical newspaper Rheinische Zeitung (Rhineland News), expressing his early views on socialism and his developing interest in economics. Marx criticised both right - wing European governments as well as figures in the liberal and socialist movements whom he thought ineffective or counter-productive. The newspaper attracted the attention of the Prussian government censors, who checked every issue for seditious material before printing, as Marx lamented: "Our newspaper has to be presented to the police to be sniffed at, and if the police nose smells anything un-Christian or un-Prussian, the newspaper is not allowed to appear ''. After the Rheinische Zeitung published an article strongly criticising the Russian monarchy, Tsar Nicholas I requested it be banned and Prussia 's government complied in 1843.
In 1843, Marx became co-editor of a new, radical leftist Parisian newspaper, the Deutsch - Französische Jahrbücher (German - French Annals), then being set up by the German socialist Arnold Ruge to bring together German and French radicals and thus Marx and his wife moved to Paris in October 1843. Initially living with Ruge and his wife communally at 23 Rue Vaneau, they found the living conditions difficult, so moved out following the birth of their daughter Jenny in 1844. Although intended to attract writers from both France and the German states, the Jahrbücher was dominated by the latter and the only non-German writer was the exiled Russian anarchist collectivist Mikhail Bakunin. Marx contributed two essays to the paper, "Introduction to a Contribution to the Critique of Hegel 's Philosophy of Right '' and "On the Jewish Question '', the latter introducing his belief that the proletariat were a revolutionary force and marking his embrace of communism. Only one issue was published, but it was relatively successful, largely owing to the inclusion of Heinrich Heine 's satirical odes on King Ludwig of Bavaria, leading the German states to ban it and seize imported copies (Ruge nevertheless refused to fund the publication of further issues and his friendship with Marx broke down). After the paper 's collapse, Marx began writing for the only uncensored German - language radical newspaper left, Vorwärts! (Forward!). Based in Paris, the paper was connected to the League of the Just, a utopian socialist secret society of workers and artisans. Marx attended some of their meetings, but did not join. In Vorwärts!, Marx refined his views on socialism based upon Hegelian and Feuerbachian ideas of dialectical materialism, at the same time criticising liberals and other socialists operating in Europe.
On 28 August 1844, Marx met the German socialist Friedrich Engels at the Café de la Régence, beginning a lifelong friendship. Engels showed Marx his recently published The Condition of the Working Class in England in 1844, convincing Marx that the working class would be the agent and instrument of the final revolution in history. Soon, Marx and Engels were collaborating on a criticism of the philosophical ideas of Marx 's former friend, Bruno Bauer. This work was published in 1845 as The Holy Family. Although critical of Bauer, Marx was increasingly influenced by the ideas of the Young Hegelians Max Stirner and Ludwig Feuerbach, but eventually Marx and Engels abandoned Feuerbachian materialism as well.
During the time that he lived at 38 Rue Vanneau in Paris (from October 1843 until January 1845), Marx engaged in an intensive study of "political economy '' (Adam Smith, David Ricardo, James Mill, etc.), the French socialists (especially Claude Henri St. Simon and Charles Fourier) and the history of France. The study of political economy is a study that Marx would pursue for the rest of his life and would result in his major economic work -- the three - volume series called Capital. Marxism is based in large part on three influences: Hegel 's dialectics, French utopian socialism and English economics. Together with his earlier study of Hegel 's dialectics, the studying that Marx did during this time in Paris meant that all major components of "Marxism '' (or political economy as Marx called it) were in place by the autumn of 1844. Although Marx was constantly being pulled away from his study of political economy by the usual daily demands on his time that everyone faces and the additional special demands of editing a radical newspaper and later by the demands of organising and directing the efforts of a political party during years in which popular uprisings of the citizenry might at any moment become a revolution, Marx was always drawn back to his economic studies. Marx sought "to understand the inner workings of capitalism ''.
An outline of "Marxism '' had definitely formed in the mind of Karl Marx by late 1844. Indeed, many features of the Marxist view of the world 's political economy had been worked out in great detail, but Marx needed to write down all of the details of his economic world view to further clarify the new economic theory in his own mind. Accordingly, Marx wrote The Economic and Philosophical Manuscripts. These manuscripts covered numerous topics, detailing Marx 's concept of alienated labour. However, by the spring of 1845 his continued study of political economy, capital and capitalism had led Marx to the belief that the new political economic theory that he was espousing -- scientific socialism -- needed to be built on the base of a thoroughly developed materialistic view of the world.
The Economic and Philosophical Manuscripts of 1844 had been written between April and August 1844, but soon Marx recognised that the Manuscripts had been influenced by some inconsistent ideas of Ludwig Feuerbach. Accordingly, Marx recognised the need to break with Feuerbach 's philosophy in favour of historical materialism, thus a year later (in April 1845) after moving from Paris to Brussels, Marx wrote his eleven "Theses on Feuerbach ''. The "Theses on Feuerbach '' are best known for Thesis 11, which states that "philosophers have only interpreted the world in various ways, the point is to change it ''. This work contains Marx 's criticism of materialism (for being contemplative), idealism (for reducing practice to theory) overall, criticising philosophy for putting abstract reality above the physical world. It thus introduced the first glimpse at Marx 's historical materialism, an argument that the world is changed not by ideas but by actual, physical, material activity and practice. In 1845, after receiving a request from the Prussian king, the French government shut down Vorwärts!, with the interior minister, François Guizot, expelling Marx from France. At this point, Marx moved from Paris to Brussels, where Marx hoped to once again continue his study of capitalism and political economy.
Unable either to stay in France or to move to Germany, Marx decided to emigrate to Brussels in Belgium in February 1845. However, to stay in Belgium he had to pledge not to publish anything on the subject of contemporary politics. In Brussels, Marx associated with other exiled socialists from across Europe, including Moses Hess, Karl Heinzen and Joseph Weydemeyer. In April 1845, Engels moved from Barmen in Germany to Brussels to join Marx and the growing cadre of members of the League of the Just now seeking home in Brussels. Later, Mary Burns, Engels ' long - time companion, left Manchester, England to join Engels in Brussels.
In mid-July 1845, Marx and Engels left Brussels for England to visit the leaders of the Chartists, a socialist movement in Britain. This was Marx 's first trip to England and Engels was an ideal guide for the trip. Engels had already spent two years living in Manchester from November 1842 to August 1844. Not only did Engels already know the English language, he had also developed a close relationship with many Chartist leaders. Indeed, Engels was serving as a reporter for many Chartist and socialist English newspapers. Marx used the trip as an opportunity to examine the economic resources available for study in various libraries in London and Manchester.
In collaboration with Engels, Marx also set about writing a book which is often seen as his best treatment of the concept of historical materialism, The German Ideology. In this work, Marx broke with Ludwig Feuerbach, Bruno Bauer, Max Stirner and the rest of the Young Hegelians, while he also broke with Karl Grun and other "true socialists '' whose philosophies were still based in part on "idealism ''. In German Ideology, Marx and Engels finally completed their philosophy, which was based solely on materialism as the sole motor force in history. German Ideology is written in a humorously satirical form, but even this satirical form did not save the work from censorship. Like so many other early writings of his, German Ideology would not be published in Marx 's lifetime and would be published only in 1932.
After completing German Ideology, Marx turned to a work that was intended to clarify his own position regarding "the theory and tactics '' of a truly "revolutionary proletarian movement '' operating from the standpoint of a truly "scientific materialist '' philosophy. This work was intended to draw a distinction between the utopian socialists and Marx 's own scientific socialist philosophy. Whereas the utopians believed that people must be persuaded one person at a time to join the socialist movement, the way a person must be persuaded to adopt any different belief, Marx knew that people would tend on most occasions to act in accordance with their own economic interests, thus appealing to an entire class (the working class in this case) with a broad appeal to the class 's best material interest would be the best way to mobilise the broad mass of that class to make a revolution and change society. This was the intent of the new book that Marx was planning, but to get the manuscript past the government censors he called the book The Poverty of Philosophy (1847) and offered it as a response to the "petty bourgeois philosophy '' of the French anarchist socialist Pierre - Joseph Proudhon as expressed in his book The Philosophy of Poverty (1840).
These books laid the foundation for Marx and Engels 's most famous work, a political pamphlet that has since come to be commonly known as The Communist Manifesto. While residing in Brussels in 1846, Marx continued his association with the secret radical organisation League of the Just. As noted above, Marx thought the League to be just the sort of radical organisation that was needed to spur the working class of Europe toward the mass movement that would bring about a working class revolution. However, to organise the working class into a mass movement the League had to cease its "secret '' or "underground '' orientation and operate in the open as a political party. Members of the League eventually became persuaded in this regard. Accordingly, in June 1847 the League was reorganised by its membership into a new open "above ground '' political society that appealed directly to the working classes. This new open political society was called the Communist League. Both Marx and Engels participated in drawing the programme and organisational principles of the new Communist League.
In late 1847, Marx and Engels began writing what was to become their most famous work -- a programme of action for the Communist League. Written jointly by Marx and Engels from December 1847 to January 1848, The Communist Manifesto was first published on 21 February 1848. The Communist Manifesto laid out the beliefs of the new Communist League. No longer a secret society, the Communist League wanted to make aims and intentions clear to the general public rather than hiding its beliefs as the League of the Just had been doing. The opening lines of the pamphlet set forth the principal basis of Marxism: "The history of all hitherto existing society is the history of class struggles ''. It goes on to examine the antagonisms that Marx claimed were arising in the clashes of interest between the bourgeoisie (the wealthy capitalist class) and the proletariat (the industrial working class). Proceeding on from this, the Manifesto presents the argument for why the Communist League, as opposed to other socialist and liberal political parties and groups at the time, was truly acting in the interests of the proletariat to overthrow capitalist society and to replace it with socialism.
Later that year, Europe experienced a series of protests, rebellions and often violent upheavals that became known as the Revolutions of 1848. In France, a revolution led to the overthrow of the monarchy and the establishment of the French Second Republic. Marx was supportive of such activity, and having recently received a substantial inheritance of either 6,000 or 5,000 francs from his father (withheld by his uncle Lion Philips since his father 's death in 1838), he allegedly used a third of it to arm Belgian workers who were planning revolutionary action. Although the veracity of these allegations is disputed, the Belgian Ministry of Justice accused Marx of it and arrested him, and he was forced to flee back to France, where, with a new republican government in power, he believed that he would be safe.
Temporarily settling down in Paris, Marx transferred the Communist League executive headquarters to the city and also set up a German Workers ' Club with various German socialists living there. Hoping to see the revolution spread to Germany, in 1848 Marx moved back to Cologne where he began issuing a handbill entitled the Demands of the Communist Party in Germany, in which he argued for only four of the ten points of the Communist Manifesto, believing that in Germany at that time the bourgeoisie must overthrow the feudal monarchy and aristocracy before the proletariat could overthrow the bourgeoisie. On 1 June, Marx started publication of a daily newspaper, the Neue Rheinische Zeitung, which he helped to finance through his recent inheritance from his father. Designed to put forward news from across Europe with his own Marxist interpretation of events, the newspaper featured Marx as a primary writer and the dominant editorial influence. Despite contributions by fellow members of the Communist League, according to Friedrich Engels it remained "a simple dictatorship by Marx ''.
Whilst editor of the paper, Marx and the other revolutionary socialists were regularly harassed by the police and Marx was brought to trial on several occasions, facing various allegations including insulting the Chief Public Prosecutor, committing a press misdemeanor and inciting armed rebellion through tax boycotting, although each time he was acquitted. Meanwhile, the democratic parliament in Prussia collapsed and the king, Frederick William IV, introduced a new cabinet of his reactionary supporters, who implemented counter-revolutionary measures to expunge leftist and other revolutionary elements from the country. Consequently, the Neue Rheinische Zeitung was soon suppressed and Marx was ordered to leave the country on 16 May. Marx returned to Paris, which was then under the grip of both a reactionary counter-revolution and a cholera epidemic and was soon expelled by the city authorities, who considered him a political threat. With his wife Jenny expecting their fourth child and not able to move back to Germany or Belgium, in August 1849 he sought refuge in London.
Marx moved to London in early June 1849 and would remain based in the city for the rest of his life. The headquarters of the Communist League also moved to London. However, in the winter of 1849 -- 1850 a split within the ranks of the Communist League occurred when a faction within it led by August Willich and Karl Schapper began agitating for an immediate uprising. Willich and Schapper believed that once the Communist League had initiated the uprising, the entire working class from across Europe would rise "spontaneously '' to join it, thus creating revolution across Europe. Marx and Engels protested that such an unplanned uprising on the part of the Communist League was "adventuristic '' and would be suicide for the Communist League. Such an uprising as that recommended by the Schapper / Willich group would easily be crushed by the police and the armed forces of the reactionary governments of Europe. Marx maintained that this would spell doom for the Communist League itself, arguing that changes in society are not achieved overnight through the efforts and will power of a handful of men. They are instead brought about through a scientific analysis of economic conditions of society and by moving toward revolution through different stages of social development. In the present stage of development (circa 1850), following the defeat of the uprisings across Europe in 1848 he felt that the Communist League should encourage the working class to unite with progressive elements of the rising bourgeoisie to defeat the feudal aristocracy on issues involving demands for governmental reforms, such as a constitutional republic with freely elected assemblies and universal (male) suffrage. In other words, the working class must join with bourgeois and democratic forces to bring about the successful conclusion of the bourgeois revolution before stressing the working class agenda and a working class revolution.
After a long struggle which threatened to ruin the Communist League, Marx 's opinion prevailed and eventually the Willich / Schapper group left the Communist League. Meanwhile, Marx also became heavily involved with the socialist German Workers ' Educational Society. The Society held their meetings in Great Windmill Street, Soho, central London 's entertainment district. This organisation was also racked by an internal struggle between its members, some of whom followed Marx while others followed the Schapper / Willich faction. The issues in this internal split were the same issues raised in the internal split within the Communist League, but Marx lost the fight with the Schapper / Willich faction within the German Workers ' Educational Society and on 17 September 1850 resigned from the Society.
While in London, Marx devoted himself to the task of revolutionary organising of the working class. For the first few years, he and his family lived in extreme poverty. His main source of income was his colleague Engels, who derived much of his income from his family 's business. Later, Marx and Engels both began writing for six different newspapers around the world in England, the United States, Prussia, Austria and South Africa. However, most of Marx 's journalistic writing was as a European correspondent for the New York Daily Tribune. In earlier years, Marx had been able to communicate with the broad masses of the working class by editing his own newspaper or editing a newspaper financed by others sympathetic to his philosophy. Now in London, Marx was unable to finance his own newspaper and unable to put together financing from others, thus Marx sought to communicate with the public by writing articles for the New York Tribune and other "bourgeois '' newspapers. At first, Marx 's English - language articles were translated from German by Wilhelm Pieper (de), but eventually Marx learned English well enough to write without translation.
The New York Daily Tribune had been founded in New York City in the United States by Horace Greeley in April 1841. Marx 's main contact on the Tribune was Charles Dana. Later in 1868, Charles Dana would leave the Tribune to become the owner and editor - in - chief of the New York Sun, a competing newspaper in New York City. However, at this time Charles Dana served on the editorial board of the Tribune.
Several characteristics about the Tribune made the newspaper an excellent vehicle for Marx to reach a sympathetic public across the Atlantic Ocean. Since its founding, the Tribune had been an inexpensive newspaper -- two cents per copy. Accordingly, it was popular with the broad masses of the working class of the United States. With a run of about 50,000 issues, the Tribune was the most widely circulated journal in the United States. Editorially, the Tribune reflected Greeley 's anti-slavery opinions. The Tribune not only had a wide readership within the United States drawn largely from the working classes, but those readers also seemed to come from the progressive wing of the working class. Marx 's first article for the New York Tribune was on the British elections to Parliament and was published in the Tribune on 21 August 1852.
Marx was just one of the reporters in Europe that the New York Tribune employed. However, with the slavery crisis in the United States coming to a head in the late 1850s and with the outbreak of the American Civil War in 1861, the American public 's interest in European affairs declined. Thus Marx very early began to write on issues affecting the United States -- particularly the "slavery crisis '' and the "War Between the States ''.
From December 1851 to March 1852, Marx wrote The Eighteenth Brumaire of Louis Napoleon, a work on the French Revolution of 1848 in which he expanded upon his concepts of historical materialism, class struggle and the dictatorship of the proletariat, advancing the argument that victorious proletariat has to smash the bourgeois state.
The 1850s and 1860s also mark the line between what some scholars see as the idealistic, Hegelian young Marx from the more scientifically minded mature Marx writings of the later period. This distinction is usually associated with the structural Marxism school and not all scholars agree that it exists. The years of revolution from 1848 to 1849 had been a grand experience for both Marx and Engels. They both became sure that their economic view of the course of history was the only valid way that historic events like the revolutionary upsurge of 1848 could be adequately explained. For some time after 1848, Marx and Engels wondered if the entire revolutionary upsurge had completely played out. As time passed, they began to think that a new revolutionary upsurge would not occur until there was another economic downturn. The question of whether a recession would be necessary to create a new revolutionary situation in society became a point of contention between Marx and certain other revolutionaries. Marx accused these other revolutionaries of being "adventurists '' because of their belief that a revolutionary situation could be created out of thin air by the sheer "will power '' of the revolutionaries without regard to the economic realities of the current situation.
The downturn in the United States economy in 1852 led Marx and Engels to wonder if a revolutionary upsurge would soon occur. However, the United States ' economy was too new to play host to a classical revolution. The western frontier in America always provided a relief valve for the pent - up forces that might in other countries cause social unrest. Any economic crisis which began in the United States would not lead to revolution unless one of the older economies of Europe "caught the contagion '' from the United States. In other words, economies of the world were still seen as individual national systems which were contiguous with the national borders of each country. The Panic of 1857 broke the mould of all prior thinking on the world economy. Beginning in the United States, the Panic spread across the globe. Indeed, the Panic of 1857 was the first truly global economic crisis.
Marx longed to return to his economic studies, as he had left these studies in 1844 and had been preoccupied with other projects over the last thirteen years. By returning to his study of economics, Marx felt he would be able to understand more thoroughly what was occurring in the world.
Marx continued to write articles for the New York Daily Tribune as long as he was sure that the Tribune 's editorial policy was still progressive. However, the departure of Charles Dana from the paper in late 1861 and the resultant change in the editorial board brought about a new editorial policy. No longer was the Tribune to be a strong abolitionist paper dedicated to a complete Union victory. The new editorial board supported an immediate peace between the Union and the Confederacy in the Civil War in the United States with slavery left intact in the Confederacy. Marx strongly disagreed with this new political position and in 1863 was forced to withdraw as a writer for the Tribune.
In 1864, Marx became involved in the International Workingmen 's Association (also known as First International), to whose General Council he was elected at its inception in 1864. In that organisation, Marx was involved in the struggle against the anarchist wing centred on Mikhail Bakunin (1814 -- 1876). Although Marx won this contest, the transfer of the seat of the General Council from London to New York in 1872, which Marx supported, led to the decline of the International. The most important political event during the existence of the International was the Paris Commune of 1871, when the citizens of Paris rebelled against their government and held the city for two months. In response to the bloody suppression of this rebellion, Marx wrote one of his most famous pamphlets, "The Civil War in France '', a defence of the Commune.
Given the repeated failures and frustrations of workers ' revolutions and movements, Marx also sought to understand capitalism and spent a great deal of time in the reading room of the British Museum studying and reflecting on the works of political economists and on economic data. By 1857, Marx had accumulated over 800 pages of notes and short essays on capital, landed property, wage labour, the state and foreign trade and the world market, though this work did not appear in print until 1939 under the title Outlines of the Critique of Political Economy.
Finally in 1859, Marx published A Contribution to the Critique of Political Economy, his first serious economic work. This work was intended merely as a preview of his three - volume Das Kapital (English title: Capital: Critique of Political Economy), which he intended to publish at a later date. In A Contribution to the Critique of Political Economy, Marx accepts the labour theory of value as advocated by David Ricardo, but whereas Ricardo drew a distinction between use value and value in commodities, he had never been able to define the real relationship between the two. The reasoning Marx laid out in his book clearly delineated the true relationship between use value and value. He also produced a truly scientific theory of money and money circulation in the capitalist economy, and as a result A Contribution to the Critique of Political Economy created a storm of enthusiasm when it appeared in public, with the entire edition of the book selling out quickly.
The successful sales of A Contribution to the Critique of Political Economy stimulated Marx in the early 1860s to finish work on the three large volumes that would compose his major life 's work -- Das Kapital and the Theories of Surplus Value, which discussed the theoreticians of political economy, particularly Adam Smith and David Ricardo. Theories of Surplus Value is often referred to as the fourth volume book of Das Kapital and constitutes one of the first comprehensive treatises on the history of economic thought. In 1867, the first volume of Das Kapital was published, a work which analysed the capitalist process of production. Here Marx elaborated his labour theory of value, which had been influenced by Thomas Hodgskin. Marx acknowledged Hodgskin 's "admirable work '' Labour Defended against the Claims of Capital at more than one point in Capital. Indeed, Marx quoted Hodgskin as recognising the alienation of labour that occurred under modern capitalist production. No longer was there any "natural reward of individual labour. Each labourer produces only some part of a whole, and each part having no value or utility of itself, there is nothing on which the labourer can seize, and say: ' This is my product, this will I keep to myself ' ''. In this first volume of Capital, Marx outlined his conception of surplus value and exploitation, which he argued would ultimately lead to a falling rate of profit and the collapse of industrial capitalism. Demand for a Russian language edition of Capital soon led to the printing of 3,000 copies of the book in the Russian language, which was published on 27 March 1872. By the autumn of 1871, the entire first edition of the German language edition of Capital had been sold out and a second edition was published.
Volumes II and III of Capital remained mere manuscripts upon which Marx continued to work for the rest of his life, and both were published by Engels after Marx 's death. Volume II of Capital was prepared and published by Engels in July 1893 under the name Capital II: The Process of Circulation of Capital. Volume III of Capital was published a year later in October 1894 under the name Capital III: The Process of Capitalist Production as a Whole. Theories of Surplus Value was developed from the Economic Manuscripts of 1861 -- 1863, which comprise Volumes 30, 31, 32 and 33 of the Collected Works of Marx and Engels, and from the Economic Manuscripts of 1861 -- 1864, which comprise Volume 34 of the Collected Works. The parts of the Economic Manuscripts of 1861 -- 1863 that make up the Theories of Surplus Value are the last part of Volume 30, the whole of Volume 31 and the whole of Volume 32 of the Collected Works. A German - language abridged edition of Theories of Surplus Value was published in 1905 and 1910; this abridged edition was translated into English and published in London in 1951, but the complete unabridged edition of Theories of Surplus Value was published as the "fourth volume '' of Capital in Moscow in 1963 and 1971.
During the last decade of his life, Marx 's health declined and he became incapable of the sustained effort that had characterised his previous work. He did manage to comment substantially on contemporary politics, particularly in Germany and Russia. His Critique of the Gotha Programme opposed the tendency of his followers Wilhelm Liebknecht and August Bebel to compromise with the state socialism of Ferdinand Lassalle in the interests of a united socialist party. This work is also notable for another famous Marx quote: "From each according to his ability, to each according to his need ''.
In a letter to Vera Zasulich dated 8 March 1881, Marx contemplated the possibility of Russia 's bypassing the capitalist stage of development and building communism on the basis of the common ownership of land characteristic of the village mir. While admitting that Russia 's rural "commune is the fulcrum of social regeneration in Russia '', Marx also warned that in order for the mir to operate as a means for moving straight to the socialist stage without a preceding capitalist stage it "would first be necessary to eliminate the deleterious influences which are assailing it (the rural commune) from all sides ''. Given the elimination of these pernicious influences, Marx allowed that "normal conditions of spontaneous development '' of the rural commune could exist. However, in the same letter to Vera Zasulich he points out that "at the core of the capitalist system... lies the complete separation of the producer from the means of production ''. In one of the drafts of this letter, Marx reveals his growing passion for anthropology, motivated by his belief that future communism would be a return on a higher level to the communism of our prehistoric past. He wrote that "the historical trend of our age is the fatal crisis which capitalist production has undergone in the European and American countries where it has reached its highest peak, a crisis that will end in its destruction, in the return of modern society to a higher form of the most archaic type -- collective production and appropriation ''. He added that "the vitality of primitive communities was incomparably greater than that of Semitic, Greek, Roman, etc. societies, and, a fortiori, that of modern capitalist societies ''. Before he died, Marx asked Engels to write up these ideas, which were published in 1884 under the title The Origin of the Family, Private Property and the State.
Marx and von Westphalen had seven children together, but partly owing to the poor conditions in which they lived whilst in London, only three survived to adulthood. The children were: Jenny Caroline (m. Longuet; 1844 -- 1883); Jenny Laura (m. Lafargue; 1845 -- 1911); Edgar (1847 -- 1855); Henry Edward Guy ("Guido ''; 1849 -- 1850); Jenny Eveline Frances ("Franziska ''; 1851 -- 1852); Jenny Julia Eleanor (1855 -- 1898) and one more who died before being named (July 1857). There are allegations that Marx also fathered a son, Freddy, out of wedlock by his housekeeper, Helene Demuth.
Marx frequently used pseudonyms, often when renting a house or flat, apparently to make it harder for the authorities to track him down. While in Paris, he used that of "Monsieur Ramboz '', whilst in London he signed off his letters as "A. Williams ''. His friends referred to him as "Moor '', owing to his dark complexion and black curly hair, while he encouraged his children to call him "Old Nick '' and "Charley ''. He also bestowed nicknames and pseudonyms on his friends and family, referring to Friedrich Engels as "General '', his housekeeper Helene as "Lenchen '' or "Nym '', while one of his daughters, Jennychen, was referred to as "Qui Qui, Emperor of China '' and another, Laura, was known as "Kakadou '' or "the Hottentot ''.
Marx was afflicted by poor health (what he himself described as "the wretchedness of existence '') and various authors have sought to describe and explain it. His biographer Werner Blumenberg attributed it to liver and gall problems which Marx had in 1849 and from which he was never afterwards free, exacerbated by an unsuitable lifestyle. The attacks often came with headaches, eye inflammation, neuralgia in the head and rheumatic pains. A serious nervous disorder appeared in 1877 and protracted insomnia was a consequence, which Marx fought with narcotics. The illness was aggravated by excessive nocturnal work and faulty diet. Marx was fond of highly seasoned dishes, smoked fish, caviare, pickled cucumbers, "none of which are good for liver patients '', but he also liked wine and liqueurs and smoked an enormous amount "and since he had no money, it was usually bad - quality cigars ''. From 1863, Marx complained a lot about boils: "These are very frequent with liver patients and may be due to the same causes ''. The abscesses were so bad that Marx could neither sit nor work upright. According to Blumenberg, Marx 's irritability is often found in liver patients:
The illness emphasised certain traits in his character. He argued cuttingly, his biting satire did not shrink at insults, and his expressions could be rude and cruel. Though in general Marx had a blind faith in his closest friends, nevertheless he himself complained that he was sometimes too mistrustful and unjust even to them. His verdicts, not only about enemies but even about friends, were sometimes so harsh that even less sensitive people would take offence... There must have been few whom he did not criticize like this... not even Engels was an exception.
According to Princeton historian J.E. Seigel, in his late teens Marx may have had pneumonia or pleurisy, the effects of which led to his being exempted from Prussian military service. In later life whilst working on Capital (which he never completed), Marx suffered from a trio of afflictions. A liver ailment, probably hereditary, was aggravated by overwork, bad diet and lack of sleep. Inflammation of the eyes was induced by too much work at night. A third affliction, eruption of carbuncles or boils, "was probably brought on by general physical debility to which the various features of Marx 's style of life -- alcohol, tobacco, poor diet, and failure to sleep -- all contributed. Engels often exhorted Marx to alter this dangerous regime ''. In Professor Seigel 's thesis, what lay behind this punishing sacrifice of his health may have been guilt about self - involvement and egoism, originally induced in Karl Marx by his father.
In 2007, a retrodiagnosis of Marx 's skin disease was made by dermatologist Sam Shuster of Newcastle University and for Shuster the most probable explanation was that Marx suffered not from liver problems, but from hidradenitis suppurativa, a recurring infective condition arising from blockage of apocrine ducts opening into hair follicles. This condition, which was not described in the English medical literature until 1933 (hence would not have been known to Marx 's physicians), can produce joint pain (which could be misdiagnosed as rheumatic disorder) and painful eye conditions. To arrive at his retrodiagnosis, Shuster considered the primary material: the Marx correspondence published in the 50 volumes of the Marx / Engels Collected Works. There, "although the skin lesions were called ' furuncules ', ' boils ' and ' carbuncles ' by Marx, his wife and his physicians, they were too persistent, recurrent, destructive and site - specific for that diagnosis ''. The sites of the persistent ' carbuncles ' were noted repeatedly in the armpits, groins, perianal, genital (penis and scrotum) and suprapubic regions and inner thighs, "favoured sites of hidradenitis suppurativa ''. Professor Shuster claimed the diagnosis "can now be made definitively ''.
Shuster went on to consider the potential psychosocial effects of the disease, noting that the skin is an organ of communication and that hidradenitis suppurativa produces much psychological distress, including loathing and disgust and depression of self - image, mood and well - being, feelings for which Shuster found "much evidence '' in the Marx correspondence. Professor Shuster went on to ask himself whether the mental effects of the disease affected Marx 's work and even helped him to develop his theory of alienation.
Following the death of his wife Jenny in December 1881, Marx developed a catarrh that kept him in ill health for the last 15 months of his life. It eventually brought on the bronchitis and pleurisy that killed him in London on 14 March 1883 (age 64), dying a stateless person. Family and friends in London buried his body in Highgate Cemetery (East), Highgate, London on 17 March 1883 in an area reserved for agnostics and atheists (George Eliot 's grave is nearby). There were between nine and eleven mourners at his funeral.
Several of his closest friends spoke at his funeral, including Wilhelm Liebknecht and Friedrich Engels. Engels ' speech included the passage:
On the 14th of March, at a quarter to three in the afternoon, the greatest living thinker ceased to think. He had been left alone for scarcely two minutes, and when we came back we found him in his armchair, peacefully gone to sleep -- but forever.
Marx 's surviving daughters Eleanor and Laura, as well as Charles Longuet and Paul Lafargue, Marx 's two French socialist sons - in - law, were also in attendance. He had been predeceased by his wife and his eldest daughter, the latter dying a few months earlier in January 1883. Liebknecht, a founder and leader of the German Social Democratic Party, gave a speech in German and Longuet, a prominent figure in the French working - class movement, made a short statement in French. Two telegrams from workers ' parties in France and Spain were also read out. Together with Engels 's speech, this constituted the entire programme of the funeral. Non-relatives attending the funeral included three communist associates of Marx: Friedrich Lessner, imprisoned for three years after the Cologne communist trial of 1852; G. Lochner, whom Engels described as "an old member of the Communist League ''; and Carl Schorlemmer, a professor of chemistry in Manchester, a member of the Royal Society and a communist activist involved in the 1848 Baden revolution. Another attendee of the funeral was Ray Lankester, a British zoologist who would later become a prominent academic.
Upon his own death in 1895, Engels left Marx 's two surviving daughters a "significant portion '' of his $4.8 million estate.
Marx and his family were reburied on a new site nearby in November 1954. The memorial at the new site, unveiled on 14 March 1956, bears the carved message: "WORKERS OF ALL LANDS UNITE '', the final line of The Communist Manifesto; and from the 11th "Thesis on Feuerbach '' (edited by Engels): "The philosophers have only interpreted the world in various ways -- the point however is to change it ''. The Communist Party of Great Britain had the monument, with a portrait bust by Laurence Bradshaw, erected; Marx 's original tomb had had only humble adornment. In 1970, there was an unsuccessful attempt to destroy the monument using a homemade bomb.
The late Marxist historian Eric Hobsbawm remarked: "One can not say Marx died a failure '' because although he had not achieved a large following of disciples in Britain, his writings had already begun to make an impact on the leftist movements in Germany and Russia. Within 25 years of his death, the continental European socialist parties that acknowledged Marx 's influence on their politics were each gaining between 15 and 47 per cent in those countries with representative democratic elections.
Marx 's thought demonstrates influences from many earlier thinkers, including the philosophy of Hegel, the materialism of Ludwig Feuerbach, the classical political economy of Adam Smith and David Ricardo, and French socialist and sociological thought.
Marx 's view of history, which came to be called historical materialism (controversially adapted as the philosophy of dialectical materialism by Engels and Lenin), certainly shows the influence of Hegel 's claim that one should view reality (and history) dialectically. However, Hegel had thought in idealist terms, putting ideas in the forefront, whereas Marx sought to rewrite dialectics in materialist terms, arguing for the primacy of matter over idea. Where Hegel saw the "spirit '' as driving history, Marx saw this as an unnecessary mystification, obscuring the reality of humanity and its physical actions shaping the world. He wrote that Hegelianism stood the movement of reality on its head, and that one needed to set it upon its feet. Despite his dislike of mystical terms, Marx used Gothic language in several of his works and in Capital he refers to capital as "necromancy that surrounds the products of labour ''.
Though inspired by French socialist and sociological thought, Marx criticised utopian socialists, arguing that their favoured small - scale socialistic communities would be bound to marginalisation and poverty and that only a large - scale change in the economic system can bring about real change.
The other important contribution to Marx 's revision of Hegelianism came from Engels 's book, The Condition of the Working Class in England in 1844, which led Marx to conceive of the historical dialectic in terms of class conflict and to see the modern working class as the most progressive force for revolution.
Marx believed that he could study history and society scientifically and discern tendencies of history and the resulting outcome of social conflicts. Some followers of Marx therefore concluded that a communist revolution would inevitably occur. However, Marx famously asserted in the eleventh of his "Theses on Feuerbach '' that "philosophers have only interpreted the world, in various ways; the point however is to change it '' and he clearly dedicated himself to trying to alter the world.
Marx 's polemic with other thinkers often occurred through critique and thus he has been called "the first great user of critical method in social sciences ''. He criticised speculative philosophy, equating metaphysics with ideology. By adopting this approach, Marx attempted to separate key findings from ideological biases. This set him apart from many contemporary philosophers.
Like Tocqueville, who described a faceless and bureaucratic despotism with no identifiable despot, Marx also broke with classical thinkers who spoke of a single tyrant and with Montesquieu, who discussed the nature of the single despot. Instead, Marx set out to analyse "the despotism of capital ''. Fundamentally, Marx assumed that human history involves transforming human nature, which encompasses both human beings and material objects. Humans recognise that they possess both actual and potential selves. For both Marx and Hegel, self - development begins with an experience of internal alienation stemming from this recognition, followed by a realisation that the actual self, as a subjective agent, renders its potential counterpart an object to be apprehended. Marx further argues that by moulding nature in desired ways the subject takes the object as its own and thus permits the individual to be actualised as fully human. For Marx, the human nature -- Gattungswesen, or species - being -- exists as a function of human labour. Fundamental to Marx 's idea of meaningful labour is the proposition that in order for a subject to come to terms with its alienated object it must first exert influence upon literal, material objects in the subject 's world. Marx acknowledges that Hegel "grasps the nature of work and comprehends objective man, authentic because actual, as the result of his own work '', but characterises Hegelian self - development as unduly "spiritual '' and abstract. Marx thus departs from Hegel by insisting that "the fact that man is a corporeal, actual, sentient, objective being with natural capacities means that he has actual, sensuous objects for his nature as objects of his life - expression, or that he can only express his life in actual sensuous objects ''. Consequently, Marx revises Hegelian "work '' into material "labour '' and in the context of human capacity to transform nature the term "labour power ''.
The history of all hitherto existing society is the history of class struggles.
Marx had a special concern with how people relate to their own labour power. He wrote extensively about this in terms of the problem of alienation. As with the dialectic, Marx began with a Hegelian notion of alienation but developed a more materialist conception. Capitalism mediates social relationships of production (such as among workers or between workers and capitalists) through commodities, including labour, that are bought and sold on the market. For Marx, the possibility that one may give up ownership of one 's own labour -- one 's capacity to transform the world -- is tantamount to being alienated from one 's own nature and it is a spiritual loss. Marx described this loss as commodity fetishism, in which the things that people produce, commodities, appear to have a life and movement of their own to which humans and their behaviour merely adapt.
Commodity fetishism provides an example of what Engels called "false consciousness '', which relates closely to the understanding of ideology. By "ideology '', Marx and Engels meant ideas that reflect the interests of a particular class at a particular time in history, but which contemporaries see as universal and eternal. Marx and Engels 's point was not only that such beliefs are at best half - truths, but that they also serve an important political function. Put another way, the control that one class exercises over the means of production includes not only the production of food or manufactured goods; it includes the production of ideas as well (this provides one possible explanation for why members of a subordinate class may hold ideas contrary to their own interests). An example of this sort of analysis is Marx 's understanding of religion, summed up in a passage from the preface to his 1843 Contribution to the Critique of Hegel 's Philosophy of Right:
Religious suffering is, at one and the same time, the expression of real suffering and a protest against real suffering. Religion is the sigh of the oppressed creature, the heart of a heartless world, and the soul of soulless conditions. It is the opium of the people. The abolition of religion as the illusory happiness of the people is the demand for their real happiness. To call on them to give up their illusions about their condition is to call on them to give up a condition that requires illusions.
Whereas his senior thesis at the Gymnasium zu Trier argued that religion had as its primary social aim the promotion of solidarity, here Marx sees the social function of religion in terms of highlighting and preserving the political and economic status quo and its attendant inequality.
Marx 's thoughts on labour were related to the primacy he gave to the economic relation in determining society 's past, present and future (see also economic determinism). Accumulation of capital shapes the social system. For Marx, social change was about conflict between opposing interests, driven in the background by economic forces. This became the inspiration for the body of works known as conflict theory. In his evolutionary model of history, he argued that human history began with free, productive and creative work that was over time coerced and dehumanised, a trend most apparent under capitalism. Marx noted that this was not an intentional process; rather, no individual or even state can go against the forces of the economy.
The organisation of society depends on the means of production -- literally those things, like land, natural resources and technology, necessary for the production of material goods -- and on the relations of production, in other words the social relationships people enter into as they acquire and use the means of production. Together, these compose the mode of production, and Marx distinguished historical eras in terms of distinct modes of production. Marx differentiated between base and superstructure, with the base (or substructure) referring to the economic system and the superstructure to the cultural and political system. Marx regarded a mismatch between (economic) base and (social) superstructure as a major source of social disruption and conflict.
Despite Marx 's stress on critique of capitalism and discussion of the new communist society that should replace it, his explicit critique of capitalism is guarded, as he saw it as an improved society compared to past ones (slavery and feudalism). Marx also never clearly discussed issues of morality and justice, although scholars agree that his work contained implicit discussion of those concepts.
Marx 's view of capitalism was two - sided. On one hand, in the 19th century 's deepest critique of the dehumanising aspects of this system he noted that defining features of capitalism include alienation, exploitation and recurring, cyclical depressions leading to mass unemployment, while on the other hand capitalism is also characterised by "revolutionising, industrialising and universalising qualities of development, growth and progressivity '' (by which Marx meant industrialisation, urbanisation, technological progress, increased productivity and growth, rationality and scientific revolution) that are responsible for progress. Marx considered the capitalist class to be one of the most revolutionary in history because it constantly improved the means of production, more so than any other class in history and was responsible for the overthrow of feudalism and its transition to capitalism. Capitalism can stimulate considerable growth because the capitalist can and has an incentive to reinvest profits in new technologies and capital equipment.
According to Marx, capitalists take advantage of the difference between the labour market and the market for whatever commodity the capitalist can produce. Marx observed that in practically every successful industry, input unit - costs are lower than output unit - prices. Marx called the difference "surplus value '' and argued that this surplus value had its source in surplus labour, the difference between what it costs to keep workers alive and what they can produce. Marx 's dual view of capitalism can be seen in his description of the capitalists: he refers to them as vampires sucking workers ' blood, but at the same time he notes that drawing profit is "by no means an injustice '' and that capitalists simply can not go against the system. The true problem lies with the "cancerous cell '' of capital, understood not as property or equipment, but the relations between workers and owners -- the economic system in general.
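The logic of this account of surplus value, and of the tendency discussed in the next paragraph, can be sketched with the notation Marx himself uses in Capital -- constant capital c (plant and materials), variable capital v (wages) and surplus value s. The formalisation below is offered only as a reader's aid and is not quoted from the passages above. Writing the rate of surplus value as $s' = s/v$ and the rate of profit as

$$ r = \frac{s}{c + v} = \frac{s'}{c/v + 1}, $$

the claim that profit originates solely in surplus labour implies that as capitalists substitute machinery for labour -- raising $c/v$ while $s'$ remains bounded -- the rate of profit $r$ must tend to fall.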
At the same time, Marx stressed that capitalism was unstable and prone to periodic crises. He suggested that over time capitalists would invest more and more in new technologies and less and less in labour. Since Marx believed that surplus value appropriated from labour is the source of profits, he concluded that the rate of profit would fall even as the economy grew. Marx believed that increasingly severe crises would punctuate this cycle of growth, collapse and more growth. Moreover, he believed that in the long - term, this process would necessarily enrich and empower the capitalist class and impoverish the proletariat. In section one of The Communist Manifesto, Marx describes feudalism, capitalism and the role internal social contradictions play in the historical process:
We see then: the means of production and of exchange, on whose foundation the bourgeoisie built itself up, were generated in feudal society. At a certain stage in the development of these means of production and of exchange, the conditions under which feudal society produced and exchanged... the feudal relations of property became no longer compatible with the already developed productive forces; they became so many fetters. They had to be burst asunder; they were burst asunder. Into their place stepped free competition, accompanied by a social and political constitution adapted to it, and the economic and political sway of the bourgeois class. A similar movement is going on before our own eyes... The productive forces at the disposal of society no longer tend to further the development of the conditions of bourgeois property; on the contrary, they have become too powerful for these conditions, by which they are fettered, and so soon as they overcome these fetters, they bring disorder into the whole of bourgeois society, endanger the existence of bourgeois property.
Marx believed that those structural contradictions within capitalism necessitate its end, giving way to socialism, or a post-capitalistic, communist society:
The development of Modern Industry, therefore, cuts from under its feet the very foundation on which the bourgeoisie produces and appropriates products. What the bourgeoisie, therefore, produces, above all, are its own grave - diggers. Its fall and the victory of the proletariat are equally inevitable.
Thanks to various processes overseen by capitalism, such as urbanisation, the working class, the proletariat, should grow in numbers and develop class consciousness, in time realising that they have to and can change the system. Marx believed that if the proletariat were to seize the means of production, they would encourage social relations that would benefit everyone equally, abolishing the exploiting class and introducing a system of production less vulnerable to cyclical crises. Marx argued in The German Ideology that capitalism will end through the organised actions of an international working class:
Communism is for us not a state of affairs which is to be established, an ideal to which reality will have to adjust itself. We call communism the real movement which abolishes the present state of things. The conditions of this movement result from the premises now in existence.
In this new society, self - alienation would end and humans would be free to act without being bound by the labour market. It would be a democratic society, enfranchising the entire population. In such a utopian world there would also be little if any need for a state, whose goal was to enforce the alienation. He theorised that between capitalism and the establishment of a socialist / communist system, a dictatorship of the proletariat -- a period where the working class holds political power and forcibly socialises the means of production -- would exist. As he wrote in his Critique of the Gotha Program, "between capitalist and communist society there lies the period of the revolutionary transformation of the one into the other. Corresponding to this is also a political transition period in which the state can be nothing but the revolutionary dictatorship of the proletariat ''. While he allowed for the possibility of peaceful transition in some countries with strong democratic institutional structures (such as Britain, the United States and the Netherlands), he suggested that in other countries, in which workers can not "attain their goal by peaceful means '', the "lever of our revolution must be force ''.
Marx 's ideas have had a profound impact on world politics and intellectual thought. Followers of Marx have frequently debated amongst themselves over how to interpret Marx 's writings and apply his concepts to the modern world. The legacy of Marx 's thought has become contested between numerous tendencies, each of which sees itself as Marx 's most accurate interpreter. In the political realm, these tendencies include Leninism, Marxism -- Leninism, Trotskyism, Maoism, Luxemburgism and libertarian Marxism. Various currents have also developed in academic Marxism, often under influence of other views, resulting in structuralist Marxism, historical Marxism, phenomenological Marxism, analytical Marxism and Hegelian Marxism.
From an academic perspective, Marx 's work contributed to the birth of modern sociology. He has been cited as one of the nineteenth century 's three masters of the "school of suspicion '' alongside Friedrich Nietzsche and Sigmund Freud and as one of the three principal architects of modern social science along with Émile Durkheim and Max Weber. In contrast to other philosophers, Marx offered theories that could often be tested with the scientific method. Both Marx and Auguste Comte set out to develop scientifically justified ideologies in the wake of European secularisation and new developments in the philosophies of history and science. Working in the Hegelian tradition, Marx rejected Comtean sociological positivism in an attempt to develop a science of society. Karl Löwith considered Marx and Søren Kierkegaard to be the two greatest Hegelian philosophical successors. In modern sociological theory, Marxist sociology is recognised as one of the main classical perspectives. Isaiah Berlin considers Marx the true founder of modern sociology "in so far as anyone can claim the title ''. Beyond social science, he has also had a lasting legacy in philosophy, literature, the arts and the humanities.
In social theory, twentieth - and twenty - first - century thinkers have pursued two main strategies in response to Marx. One move has been to reduce his theory to its analytical core, known as analytical Marxism. Another, more common move has been to dilute the explanatory claims of Marx 's social theory and to emphasise the "relative autonomy '' of aspects of social and economic life not directly related to Marx 's central narrative of interaction between the development of the "forces of production '' and the succession of "modes of production ''. Such has been, for example, the neo-Marxist theorising adopted by historians inspired by Marx 's social theory, such as E.P. Thompson and Eric Hobsbawm. It has also been a line of thinking pursued by thinkers and activists like Antonio Gramsci who have sought to understand the opportunities and the difficulties of transformative political practice, seen in the light of Marxist social theory. Marx 's ideas would also have a profound influence on subsequent artists and art history, with avant - garde movements across literature, visual art, music, film and theatre.
Politically, Marx 's legacy is more complex. Throughout the twentieth century, revolutions in dozens of countries labelled themselves "Marxist '', most notably the Russian Revolution, which led to the founding of the Soviet Union. Major world leaders including Vladimir Lenin, Mao Zedong, Fidel Castro, Salvador Allende, Josip Broz Tito, Kwame Nkrumah and Thomas Sankara all cited Marx as an influence and his ideas informed political parties worldwide beyond those where Marxist revolutions took place. The deaths that occurred under some of these self - described Marxist governments have led political opponents to blame Marx for millions of deaths, but the fidelity of these varied revolutionaries, leaders and parties to Marx 's work is highly contested and rejected by many Marxists. It is now common to distinguish between the legacy and influence of Marx specifically and the legacy and influence of those who shaped his ideas for political purposes.
|
world trade center path station to battery park | World Trade Center station (PATH) - wikipedia
World Trade Center is a terminal station on the PATH system. Located within the World Trade Center in the Financial District neighborhood of Manhattan, New York City, it is served by the Newark -- World Trade Center line at all times and the Hoboken -- World Trade Center line on weekdays. World Trade Center serves as the eastern terminus of both lines.
The station was originally opened on July 19, 1909, as Hudson Terminal, but was torn down, rebuilt as World Trade Center, and re-opened on July 6, 1971. Following the September 11, 2001 attacks, a temporary station opened in 2003. The main station house, the Oculus, opened on March 3, 2016, and the terminal was renamed the World Trade Center Transportation Hub, or World Trade Center for short.
The station currently has five tracks, three island platforms and one side platform in a basement four stories underground. The new Platform A, next to tracks 1 and 2, opened as part of the Transportation Hub on February 25, 2014. Platform B between tracks 2 and 3 opened on May 7, 2015. The other two platforms opened on September 8, 2016.
The current station has a temporary entrance that has been open since the temporary station entered service in November 2003. With the redevelopment of the World Trade Center site, the entrances and size of the temporary station have changed over time. The most recent entrance to the station is located at Vesey Street, facing Greenwich Street and adjacent to 7 World Trade Center. The temporary entrance is a one - story building on the south side of Vesey Street with a Hudson News outlet and escalators extending into a lower - level mezzanine. A connection to Brookfield Place has been available since October 27, 2013, through a permanent passageway known as the West Concourse. On August 16, 2016, the Westfield World Trade Center entrance opened.
Hudson Terminal was built by the Hudson and Manhattan Railroad at the turn of the twentieth century and was located between Greenwich, Cortlandt, Church, and Fulton Streets. The Hudson Terminal included two 22 - story office buildings located above the station.
The terminal was an architectural and engineering marvel of its time, designed with ramps to allow pedestrian traffic to flow in and out of the station quickly and easily. The station was served by two single - track tubes connected by a loop to speed train movements. The loop included five tracks and three platforms (two center island and one side) and was somewhat similar to the current arrangement. By 1914, passenger volume at the Hudson Terminal had reached 30,535,500 annually. Volume nearly doubled by 1922, with 59,221,354 passengers that year.
Overall ridership on New Jersey 's Hudson and Manhattan Railroad declined substantially from a high of 113 million riders in 1927 to 26 million in 1958, after new automobile tunnels and bridges opened across the Hudson River. The State of New Jersey was interested in getting the Port Authority of New York and New Jersey to take over the railroad, but the Port Authority long viewed it as something unprofitable and had no interest in doing so. In the late 1950s, the Port Authority proposed to build a "world trade center '' in Lower Manhattan along the East River.
Because the Port Authority is a bi-state agency, its projects required approval from both the states of New Jersey and New York. Toward the end of 1961, negotiations with outgoing New Jersey Governor Robert B. Meyner regarding the World Trade Center project reached a stalemate. In December 1961, Port Authority executive director Austin J. Tobin met with newly elected New Jersey Governor Richard J. Hughes, and made a proposal to shift the World Trade Center project to the west side, where the Hudson Terminal was located.
In acquiring the Hudson & Manhattan Railroad, the Port Authority also acquired the Hudson Terminal and other buildings which were deemed obsolete. On January 22, 1962, the two states reached an agreement to allow the Port Authority to take over the railroad and build the World Trade Center on Manhattan 's lower west side. The shift in location for the World Trade Center to a site more convenient to New Jersey, together with Port Authority acquisition of the H&M Railroad, brought New Jersey to agreement in support of the World Trade Center project.
Groundbreaking on the World Trade Center took place in 1966. The site was on landfill, with bedrock located about 20 metres (65 ft) below the surface. A new method was used to construct a slurry wall to keep out water from the Hudson River. During excavation of the site and construction of the towers, the original Hudson Tubes remained in service as elevated tunnels. The Hudson Terminal was shut down in 1971 when a new Port Authority Trans - Hudson (PATH) station was completed. The new station cost $35 million to build. At the time, it had a passenger volume of 85,000 daily.
The new PATH railroad station opened on July 6, 1971, and was at a different location from the original Hudson Terminal. Larger balloon loops in the PATH station platform allowed 10 - car trains; the previous station with its tight loops could handle only 6 linked cars. While construction of the World Trade Center neared completion, a temporary corridor was provided to take passengers between the station and a temporary entrance on Church Street. When it opened, the station had nine high - speed escalators between the platform level and the mezzanine level. The WTC PATH station was served by Newark -- World Trade Center and Hoboken -- World Trade Center trains. The station was connected to the World Trade Center towers via an underground concourse and a shopping center. There were also underground connections to the New York City Subway (A C E trains at World Trade Center, and N R W trains at Cortlandt Street). By 2001, the volume of passengers using the WTC PATH station was approximately 25,000 daily.
The station did not sustain significant damage during the 1993 World Trade Center bombing, although a section of ceiling in the station collapsed and trapped dozens. Within a week, the Port Authority was able to resume PATH service to the World Trade Center.
On September 11, 2001, the station was shut down by the Port Authority after the first airplane hit the North Tower of the World Trade Center. In the minutes after the first plane hit at 8: 46 a.m., three PATH trains were pulling into the station. One was arriving from Newark and two others from Hoboken. The first Hoboken train turned around without stopping, not letting off passengers or opening any doors. The second Hoboken train arrived on track 3 at 8: 52 a.m. and was ordered evacuated, employees included. This train never left the World Trade Center complex and was found later in 2001 during the long recovery process. The train from Newark that came into the terminal at 8: 55 a.m. stopped only to pick up passengers. An empty train was sent to the station at 9: 10 a.m. to pick up a dozen PATH employees and a homeless individual, leaving the station empty.
With the station destroyed, service to Lower Manhattan was suspended for over two years. Exchange Place, the next station on the Newark -- World Trade Center line, also had to be closed because it could not operate as a terminal station. Instead, two uptown services (Newark -- 33rd Street, red on the official PATH map; and Hoboken -- 33rd Street, blue on the map) and one intrastate New Jersey service (Hoboken -- Journal Square, green on the map) were put into operation.
Cleanup of the Exchange Place station was needed after the attacks. In addition, the downtown Hudson tubes had been flooded, which destroyed the track infrastructure. Modifications to the tracks were also required since the Exchange Place station was not a terminal station. The Exchange Place station re-opened in June 2003. PATH service to Lower Manhattan was restored when a temporary station opened on November 23, 2003. The inaugural train was the same one that had been used for the evacuation.
The temporary PATH station was designed by Port Authority chief architect Robert I. Davidson and constructed at a cost of $323 million. The station featured a canopy entrance along Church Street and a 118 - by - 12 foot mosaic mural, "Iridescent Lightning, '' by Giulio Candussio of the Scuola Mosaicisti del Friuli in Spilimbergo, Italy. The station was also adorned with opaque panel walls inscribed with inspirational quotes attesting to the greatness and resilience of New York City. These panels partially shielded the World Trade Center site from view.
Some sections of the station, including the floor and the signage on the northeast corner, were only lightly damaged in the collapse of the World Trade Center. These sections were retained in the temporary station and remain in the new station, where they connect with the platforms for the 2 3 A C E trains. Following its reopening and the resumption of Newark -- World Trade Center and Hoboken -- World Trade Center services, the station quickly reclaimed its status as the busiest station in the PATH system.
The station was also home to a StoryCorps booth, which opened in 2005. Through this program, visitors could arrange to give oral recorded histories of the disaster. The booth closed in the spring of 2007 to make way for construction at the World Trade Center site. In June 2007, the street entrance to the temporary station was closed and demolished as part of the site construction. A set of new staircases was constructed several feet to the south, and a "tent '' structure was added to provide cover from the elements. The tent structure, by Voorsanger Architects and installed at a cost of $275,000, was designed to have an "aspiring quality, '' according to architect Bartholomew Voorsanger. That entrance on Church Street was closed in April 2008 when the entrance was relocated once again. On April 1, 2008, the third temporary entrance to the PATH station opened for commuters. The entrance was located on Vesey Street, adjacent to 7 World Trade Center, and served until the opening of the permanent station designed by Calatrava; it was also expected to be removed to make way for a performing arts center if the proposed building found approval.
The World Trade Center Transportation Hub is the Port Authority of New York and New Jersey 's formal name for the new PATH station and the associated transit and retail complex that opened on March 3, 2016. The station 's renaming took place when the station reopened. It was designed by Spanish architect Santiago Calatrava and composed of a train station with a large and open mezzanine under the National September 11 Memorial plaza. This mezzanine is connected to an aboveground head house structure called the Oculus -- located between 2 World Trade Center and 3 World Trade Center -- as well as to public concourses under the various towers in the World Trade Center complex.
In addition, the station was designed to connect the PATH to the New York City Subway system, and to facilitate a below - ground east - west passageway that connects to the various modes of transportation in Lower Manhattan, from the Fulton Center to the Battery Park City Ferry Terminal. Furthermore, to replace the lost retail space from the original mall at the World Trade Center, significant portions of the Hub are devoted to the new 365,000 - square - foot (33,900 m²) Westfield World Trade Center mall.
A large transit station was not part of the 2003 Memory Foundations master plan for the site by Daniel Libeskind, which called for a smaller station along the lines of the original subterranean station that existed beneath the World Trade Center. Libeskind 's design called for the Oculus space to be left open, forming a "Wedge of Light '' so that sun rays around the autumnal equinox would hit the World Trade Center footprints each September. In early 2004, the Port Authority, which owns the land, modified the Libeskind plan to include a large transportation station downtown, intended to rival Penn Station and Grand Central Terminal. In a nod to the Libeskind concept, the Oculus was built to maximize the effect of the autumnal equinox rays (coinciding with the skylight opening on or around September 11 every year).
Calatrava said that the Oculus resembles a bird being released from a child 's hand. The roof was originally designed to mechanically open to increase light and ventilation to the enclosed space. Herbert Muschamp, architecture critic of The New York Times, compared the design in 2004 to the Bethesda Terrace and Fountain in Central Park, and another New York Times critic, Michael Kimmelman, also reviewed the design favorably later that year.
However, Calatrava 's original soaring spike design was scaled back because of security issues. The New York Times observed in 2005:
Suggestions from independent engineers and architects that the Oculus be even smaller were rebuffed... Calatrava and his partners said that the impact and utility of the Oculus would be diminished if it were shrunken further, that the temporary station did not meet requirements for circulation of air and pedestrians, and that columns would interrupt visitors ' movement and provide a potential target for bombers.
The design was further modified in 2008 to eliminate the opening and closing roof mechanism because of budget and space constraints.
In 2014 The Atlantic 's CityLab criticized the emphasis placed on form over function, citing design flaws driven by aesthetic choices that detract from the station 's usability as a transit hub:
... the Port Authority 's new hub fails its customers, the PATH - riding public. One platform is already completed, and its design flaws are obvious. Staircases are too narrow to accommodate the morning crowds who come streaming out of the trains from Hoboken, Jersey City, and beyond, while the narrow platforms quickly fill with irate commuters. Anyone trying to catch a train back to the Garden State risks a stampede. The marble, bright and sterile, picks up any spill, and a drop of water creates dangerously slippery conditions until a Port Authority janitor scurries out of some unseen door, mop in hand. Passenger flow and comfort, two of the most important elements of terminal design, seem to be an afterthought. The PATH Hub is shaping up to be an example of design divorced from purpose.
Steve Cuozzo of the New York Post described the station in 2014 as it was being built as "a self - indulgent monstrosity '' and "a hideous waste of public money ''. Michael Kimmelman, architecture critic for The New York Times, referred to the structure as "a kitsch stegosaurus ''. New York magazine referred to it in 2015 as it neared completion as a "Glorious Boondoggle '' and, while withholding final judgement on the unfinished structure, did note the "Jurassic '' appearance. The New York Post editorial board also described the station when it opened in 2016 as the "world 's most obscenely overpriced commuter rail station -- and possibly its ugliest '', deeming the transit hub a "white elephant '' and "monstrosity '', comparing the Oculus to a "giant gray - white space insect ''.
Before the redesign, the station design had received critical acclaim. One writer for Curbed NY stated, "The platform 's mezzanine level... is kind of amazing and shows off the station 's soaring ribbed structure. '' A committee for Manhattan Community Board 1 called the station "beautiful. ''
The West Street pedestrian underpass (the West Concourse, formerly the "east - west connector '') links the WTC station mezzanine with Brookfield Place in Battery Park City, on the west side of the World Trade Center site. It opened on October 23, 2013. Access to One World Trade Center from the West Concourse became possible for employees when the tower opened on November 3, 2014. On May 29, 2015, the same day the tower 's observatory opened, the entrance to the observation deck opened.
The Transportation Hub has been dubbed "the world 's most expensive transportation hub '' for its massive reconstruction cost of $3.74 billion. By contrast, the proposed two - mile PATH extension connecting Newark Liberty International Airport to the Newark -- World Trade Center service is projected to cost $1.5 billion. The hub has also been criticized for being delayed almost 10 years. However, even at inception, the hub was projected to cost almost $2 billion. The high cost for a single station is attributed to the extravagant design, which stems from the Port Authority needing to convince the government of New Jersey to pay for a project situated entirely in New York.
Originally, the reconstruction was to be funded by the Federal Transit Administration, which gave approximately $1.9 billion to the project. The hub was still expensive, but it was expected to be finished on budget in 2009. In 2014 dollars, the cost of the hub and the adjacent Fulton Center, combined, was $5.1 billion. The hub cost twice as much in 2014 as it should have originally cost in 2004. A single hallway in the elegantly constructed hub cost $225 million and was billed as the "world 's most expensive hallway '', while construction, maintenance, and management alone cost $635 million; the Port Authority awarded several subcontracts, most of them costly. The fees of the main construction team took up almost a billion dollars, and utility installation around the entire World Trade Center site cost another $400 million. Over $500 million in cost savings was overlooked.
The design was costly, with Calatrava netting $80 million in design fees and the overall architectural design being another $405.8 million. Speeding up the pace of construction also contributed to the higher cost, with $100 million dedicated to building the National September 11 Memorial & Museum and another $24 million to speed up delivery of construction materials. The price of the station was further driven up by Calatrava 's architectural decisions. He wanted to import custom - made steel from a northern Italian factory, which cost $474 million, and have a columnless, aesthetically based design; skylights in the ground, instead of trees; and large, soaring "wings '', or rafters. Another $335 million was added to the cost overrun because the Port Authority of New York and New Jersey had to build around the New York City Subway 's IRT Broadway -- Seventh Avenue Line (carrying the 1 2 trains), since the Metropolitan Transportation Authority refused to close the line for fear of inconveniencing commuters from Staten Island taking the Staten Island Ferry. The line had to be supported on a bridge over the station instead of on columns through the station. In 2012, Hurricane Sandy damaged several hundred million dollars worth of materials. Some of the additional cost and delay was due to additions and modifications to the original plan by the Port Authority. Calatrava 's original entry pavilion was scaled back for security reasons, for instance.
The hub 's skyrocketing costs also attracted much controversy, with an editor at The New York Times saying that "Mr. Calatrava is amassing an unusually long list of projects marred by cost overruns, delays and litigation '', referring to his other projects around the world that were over budget. Especially because the current station has a ridership of only 46,000 daily passengers (compared to 250,000 at Grand Central Terminal), the renovation is sometimes depicted as overpriced and overstylized. In fact, one Esquire magazine writer stated that the hub 's massive expense came despite the fact that operating the PATH itself was a waste of money for the Port Authority.
On November 5, 2015, the opening was delayed to early 2016 by a leaking roof. When the hub actually opened, the director of the Port Authority, Pat Foye, declined to hold an event to celebrate the opening of the Hub, describing it as "symbol of excess '' and noting he was "troubled with the huge cost '' of the construction project.
Construction of the Oculus called for relocation of the landmark World Trade Center cross in April 2006. However, the Oculus 's construction did not begin in earnest until July 8, 2008, when the first prefabricated "ribs '' for the pedestrian walkway under Fulton Street were installed on the site. The mezzanine level of the station was undergoing major construction and work on the foundation was underway. By March 2011, over 225 of the 300 steel pieces that make up the roof of the station were installed. Later that month installation of the Vierendeel Truss, one of the hub 's key components, began with the installation of a 50 short tons (45 long tons) section of the 271 - short - ton (242 - long - ton) truss. The truss serves as the mezzanine roof and also acts as a support for the northeast corner of the WTC Memorial.
The Oculus 's $542 million construction contract was awarded to Skanska in summer 2010. The project included steel erection for the substructure; construction connecting to the Vehicle Security Center complex; the Greenwich Street corridor and the placing of steel and concrete under the subway box carrying the 1 2 trains; and preparations for installation of the east arch truss. By June 2013, 10 pieces of the exterior arches had been installed for the Oculus. The initial construction was expected to conclude at the end of 2014 or the beginning of 2015, and the internal work, including paint, turnstiles and ticket booths, was expected to be complete at the end of 2015, with an official opening scheduled for December 17, 2015. On October 23, 2013, the West Concourse opened with access to the World Financial Center, since renamed Brookfield Place. The storefronts were covered and not yet open, and the second floor was still closed for construction of One World Trade Center.
On February 25, 2014, half of the first platform of the new station, Platform A, was opened to the public with service to Hoboken. The new platform, an island platform, was fully modernized and contains new lighting, speakers, illuminated signs, escalators and elevators. The west side of the platform was walled off at the time. By June, nine wings had been installed, as well as the remaining eight ribs. The construction of the white rafters, or tips, was started. On November 3, the same day One World Trade Center opened, the West Concourse 's escalators, which had been closed off since October 23, 2013, opened with access to the second floor and to One World Trade Center. Eight days later, the Fulton Center and Dey Street Concourse opened to the public. The Dey Street Concourse entrance into the Transportation Hub was closed until the Hub opened. On November 22, the 114th and final rafter was installed. Also by this date, one crane had been disassembled and painting of the hub continued.
On May 7, 2015, Platform B and the remaining half of Platform A opened, and Platform C closed. A few weeks later on May 29, the entrance to the observation deck in the West Concourse was opened on the same day One World Observatory also opened. In late September, the temporary new West Walkway opened with access from the platforms to the West Concourse.
On March 3, 2016, the Oculus partially opened to the public, along with new entrances. Only the west end of the Oculus and the Westfield Mall corridor to Four World Trade Center was opened. Two months later, on May 26, 2016, PATH riders received a direct underground link to Fulton Center and Cortlandt Street through the Oculus. The PATH entrance into 2 World Trade Center opened on June 21, 2016, and the temporary PATH entrance to outside 7 World Trade Center closed five days later. Yet another entrance opened to the public when the Westfield World Trade Center mall was opened on August 16. On September 8, 2016, the restrooms were opened on the south side of the mezzanine. The last two station platforms in the hub, Platforms C and D, were also opened, as was the permanent West Concourse walkway. On December 19, the direct underground link to the New York City Subway 's World Trade Center station, at the northeast corner of the complex, reopened.
On October 13, 2016, the first birth inside the Oculus occurred. A woman was walking through the station with her husband when she went into labor, eventually giving birth on the floor of the station. (Another woman had previously given birth inside the old PATH station in August 2015.) A few months later, on February 11, 2017, the first death inside the Oculus also occurred. A woman from New Jersey fell to her death from the escalator. News reports initially said that she was trying to retrieve a hat dropped by her sister, but lost her balance and fell; it was later revealed that the woman "was pretending to be flying '' while lying on the escalator 's handrail. After the woman 's death, there was some focus on the height of the 3 - foot - 4 - inch (1.02 m) handrails, and how they could pose a safety risk.
The Transportation Hub is designed to connect the PATH system to the New York City Subway system. The line carrying the 1 2 services, which runs through the Transportation Hub, was reconstructed under this project to run above the PATH mezzanine, and the rebuilt Cortlandt Street IRT station will have direct access into the Hub. There is also direct access to the Chambers Street -- World Trade Center / Park Place station complex. The Cortlandt Street BMT station also has direct access into the Hub. In addition, the Dey Street Passageway along Dey Street connects the Transportation Hub east to the Fulton Center, providing access to the 2 3 4 5 A C J Z N R W services. A passageway, known as the West Concourse, connects west to Brookfield Place and the Battery Park City Ferry Terminal. A proposal for a connection to the Long Island Rail Road and John F. Kennedy International Airport via a new tunnel under the East River, the Lower Manhattan - Jamaica / JFK Transportation Project, was studied starting in 2004, but as of 2009 was a lower priority than other projects competing for funding.
Current services include the 2 3 A C E trains at the Chambers Street -- World Trade Center / Park Place station complex and the N R W trains at the Cortlandt Street station.
The Fulton Street station complex (where the Fulton Center opened in November 2014) is two blocks away to the east and is fully compliant with the Americans with Disabilities Act of 1990. It features the 2 3 4 5 A C J Z services.
At the northeast end of the station is the exit to the World Trade Center station of the New York City Subway. The doors and original ADA - accessible ramp, as well as the structure from the first World Trade Center leading into the station, survived the September 11 attacks. The station itself was not damaged, but it was covered by dust and was subsequently closed. The passageway reopened for a while to provide an ADA - connection from the temporary PATH station to the New York City Subway station, but was closed again when the temporary PATH station closed for a reconstruction. The passageway was then covered in plywood for preservation purposes. The renovated entrance, leading to the New York City Subway station from the Oculus headhouse and the Westfield World Trade Center, opened on December 19, 2016. The newly reopened passageway retained its pre-9 / 11 design, save for a door on display that has the words "MATF 1 / 9 13 '' spray - painted on it (a message from Urban Search and Rescue Massachusetts Task Force 1 of Beverly, Massachusetts, who searched the World Trade Center site on September 13, 2001). There is a plaque above the spray - painting, explaining the message on the door. PATH was required to preserve the passageway 's original design as per Section 106 of the National Historic Preservation Act, as a condition for getting funding to construct the Oculus and new stations. The passageway will not be made ADA - accessible again until 2017, as there are twenty - six steps down from the mezzanine to the Oculus headhouse 's lobby.
The M55 New York City Bus route runs northbound on Church Street and 6th Avenue to Midtown, and southbound to South Ferry on Broadway.
|
what is the theme of the book of mormon | The Book of Mormon (musical) - wikipedia
The Book of Mormon is a musical comedy about two young Mormon missionaries who travel to Uganda to preach the Mormon religion. First staged in 2011, the play mocks various Mormon beliefs and practices. The script, lyrics, and music were written by Trey Parker, Robert Lopez, and Matt Stone. Parker and Stone were best known for creating the animated comedy South Park; Lopez had co-written the music for the musical Avenue Q.
The Book of Mormon follows two Mormon missionaries as they attempt to share their scriptures with the inhabitants of a remote Ugandan village. The earnest young men are challenged by the lack of interest of the locals, who are preoccupied with more pressing troubles such as AIDS, famine, and oppression from the local warlord.
In 2003, after Parker and Stone saw Avenue Q, they met with Lopez and began developing the musical, meeting sporadically for several years. Parker and Stone grew up in Colorado, and references to The Church of Jesus Christ of Latter - day Saints had been commonplace in their previous works. For research, the trio took a trip to Salt Lake City to meet with current and former Mormon missionaries. Beginning in 2008, developmental workshops were staged. The show 's producers, Scott Rudin and Anne Garefino, opted to open the show directly on Broadway.
The show opened on Broadway in March 2011, after nearly seven years of development. The LDS Church issued a polite, measured response to the musical, and purchased advertising space in its playbill in later runs. The Book of Mormon garnered overwhelmingly positive critical responses, and set records in ticket sales for the Eugene O'Neill Theatre. The show was awarded nine Tony Awards, one of which was for Best Musical, and a Grammy Award for Best Musical Theater Album. The original Broadway cast recording became the highest - charting Broadway cast album in over four decades, reaching number three on the Billboard charts. In 2013, the musical premiered in the West End, followed by two US national tours. A production in Melbourne and the first non-English version, in Stockholm, both opened in January 2017. Productions in Oslo and Copenhagen followed.
The Book of Mormon has grossed over $500 million.
The Book of Mormon was conceived by Trey Parker, Matt Stone and Robert Lopez. Parker and Stone grew up in Colorado, and were familiar with The Church of Jesus Christ of Latter - day Saints (LDS Church) and its members. They became friends at the University of Colorado Boulder and collaborated on a musical film, Cannibal! The Musical (1993), their first experience with movie musicals. In 1997, they created the TV series South Park for Comedy Central and in 1999, the musical film South Park: Bigger, Longer & Uncut. The two had first thought of a fictionalized Joseph Smith, religious leader and founder of the Latter Day Saint movement, while working on an aborted Fox series about historical characters. Their 1997 film, Orgazmo, and a 2003 episode of South Park, "All About Mormons '', both gave comic treatment to Mormonism. Smith was also included as one of South Park 's "Super Best Friends '', a Justice League parody team of religious figures like Jesus and Buddha.
During the summer of 2003, Parker and Stone flew to New York City to discuss the script of their new film, Team America: World Police, with friend and producer Scott Rudin (who also produced South Park: Bigger, Longer & Uncut). Rudin advised the duo to see the musical Avenue Q on Broadway, finding the cast of marionettes in Team America similar to the puppets of Avenue Q. Parker and Stone went to see the production during that summer and the writer - composers of Avenue Q, Lopez and Jeff Marx, noticed them in the audience and introduced themselves. Lopez revealed that South Park: Bigger, Longer & Uncut was highly influential in the creation of Avenue Q. The quartet went for drinks afterwards, and soon found that each camp wanted to write something involving Joseph Smith. The four began working out details nearly immediately, with the idea to create a modern story formulated early on. For research purposes, the quartet took a field trip to Salt Lake City where they "interviewed a bunch of missionaries -- or ex-missionaries. '' They had to work around Parker and Stone 's South Park schedule.
In 2006, Parker and Stone flew to London where they spent three weeks with Lopez, who was working on the West End production of Avenue Q. There, the three wrote "four or five songs '' and came up with the basic idea of the story. After a disagreement between Parker and Marx, who felt he was not getting enough creative control, Marx was separated from the project. For the next few years, the remaining trio met frequently to develop what they initially called The Book of Mormon: The Musical of the Church of Jesus Christ of Latter - day Saints. "There was a lot of hopping back and forth between L.A. and New York, '' Parker recalled.
Numerous changes were made between the original script and the final production. A song named "Family Home Evening '', which was in early workshops of the show, was cut. The warlord in Uganda was called General Kony in previews but was later changed to General Butt Fucking Naked. The song "The Bible Is A Trilogy '' went through a major rewrite to become "All - American Prophet ''. The earlier version was built around the idea that the third movie in a trilogy is always the best one and sums everything up, which led to a recurring Matrix joke in which a Ugandan man said "I thought the third Matrix was the worst one ''; the line was changed to "I have maggots in my scrotum '' in the rewritten version. The song "Spooky Mormon Hell Dream '' was originally called "H-E Double Hockey Sticks. ''
Lopez pushed to "workshop '' the project, which baffled Parker and Stone, clueless about what he meant. Developmental workshops were directed by Jason Moore, and starred Cheyenne Jackson. Other actors in readings included Benjamin Walker and Daniel Reichard. The crew embarked on the first of a half - dozen workshops that would take place during the next four years, ranging from 30 - minute mini-performances for family and friends to much larger - scale renderings of the embryonic show. They spent hundreds of thousands of dollars of their own money, still unconvinced they would take it any further. In February 2008, a fully staged reading starred Walker and Josh Gad as Elders Price and Cunningham, respectively. Moore was originally set to direct, but left the production in June 2010. Other directors, including James Lapine, were optioned to join the creative team, but the producers recruited Casey Nicholaw. A final five - week workshop took place in August 2010, when Nicholaw came on board as choreographer and co-director with Parker.
Producers Scott Rudin and Anne Garefino originally planned to stage The Book of Mormon off - Broadway at the New York Theatre Workshop in summer 2010, but opted to premiere it directly on Broadway, "(s) ince the guys (Parker and Stone) work best when the stakes are highest. '' Rudin and Garefino booked the Eugene O'Neill Theatre and hired key players while sets were designed and built. The producers expected the production to cost $11 million, but it came in under budget at $9 million. Hundreds of actors auditioned and 28 were cast. The crew did four weeks of rehearsals, with an additional two weeks of technical rehearsals, and then went directly into previews. The producers first heard the musical with the full pit six days before the first paying audience.
The Book of Mormon premiered on Broadway at the Eugene O'Neill Theatre on March 24, 2011, following previews since February 24. The production is choreographed by Casey Nicholaw and co-directed by Nicholaw and Parker. Set design is by Scott Pask, with costumes by Ann Roth, lighting by Brian MacDevitt, and sound by Brian Ronan. Orchestrations were co-created by Larry Hochman and the show 's musical director and vocal arranger Stephen Oremus. The production was originally headlined by Josh Gad and Andrew Rannells in the two leading roles.
On April 25, 2011, the producers confirmed that "counterfeit tickets to the Broadway production had been sold to and presented by theatergoers on at least five different occasions ''. An article in The New York Times reported, "In each case, the tickets were purchased on Craigslist, and while a single seller is suspected, the ticket purchases have taken place in different locations each time... (T) he production 's management and Jujamcyn Theaters, which operates the O'Neill, had notified the New York Police Department ''.
The New York production of The Book of Mormon employed an innovative pricing strategy, similar to the ones used in the airline and hotel industries. The producers charged as much as $477 for the best seats for performances with particularly high demand. The strategy paid off handsomely. During its first year, the show was consistently one of the top five best - selling shows on Broadway and set 22 new weekly sales records for the Eugene O'Neill Theater. For the week of Thanksgiving 2011, the average paid admission was over $170 even though the highest - priced regular seat was listed at $155. High attendance coupled with aggressive pricing allowed the financial backers to recoup their investment of $11.4 million after just nine months of performances.
After Gad 's departure in June 2012, standby Jared Gertner played the role until June 26, when Cale Krise permanently took over as Gertner left to play Elder Cunningham in the First National Tour. Two days after Gad left (June 2012), original star Rannells was replaced by his standby Nic Rouleau. The same day, Samantha Marie Ware played Nabulungi on Broadway as the start of a 6 - week engagement (James was shooting a film) in preparation for her tour performance. Following Rouleau 's departure in November 2012 (to originate the role of Elder Price in Chicago), the role of Elder Price was taken over by Matt Doyle. In December 2012, Jon Bass joined as Elder Cunningham. Original cast member Rory O'Malley was replaced by Matt Loehr in January 2013. In April 2013, Stanley Wayne Mathis joined the cast as Mafala Hatimbi. In May 2013, Jon Bass left the role of Elder Cunningham, and was replaced by Cody Jamison Strand. After Doyle and Strand 's contracts finished in January 2014, Rouleau and Ben Platt (who had previously played the role of Elder Cunningham while in Chicago with Rouleau) joined the Broadway cast to reprise their roles as Elder Price and Elder Cunningham. On August 26, 2014 Grey Henson took over for Loehr as Elder McKinley. Henson had previously played the role on the First National Tour. Rouleau and Platt left Broadway in January 2015. They were replaced by Gavin Creel and Christopher John O'Neill, who had played the roles of Price and Cunningham (respectively) on the First National Tour. On January 3, 2016 Creel left the show after three and a half years with The Book of Mormon. He was replaced by Kyle Selig, former Second National Tour Elder Price standby, who was scheduled to play the role through February 21, 2016. On January 25, 2016, Christopher John O'Neill was temporarily replaced by longtime Elder Cunningham standby Nyk Bielak. Bielak had been a standby for Elder Cunningham in all three North American companies before becoming the Broadway Elder Cunningham. On February 17, 2016 Nic Rouleau announced via Twitter that he would be taking over the role of Elder Price starting on February 23, 2016. This was Rouleau 's third time playing the role on Broadway; he had also played the role in Chicago, on the Second National Tour and, most recently, in the West End. O'Neill and Rouleau 's first performance together was on February 23, 2016. August 21, 2016 was Grey Henson 's last performance as Elder McKinley. On August 23, 2016, Henson was replaced by Stephen Ashfield, who came over from the West End production. On November 7, 2016, Nikki Rene Daniels announced she was pregnant with her second child, and would be going on maternity leave. Later that week, Kim Exum took over the role of Nabulungi. On February 20, 2017 Chris O'Neill and Daniel Breaker had their final performances as Elder Cunningham and Mafala Hatimbi. O'Neill was replaced by Brian Sears, who came over from the London production. Breaker was replaced by Billy Eugene Jones. On February 18, 2018, after six and a half years with the show, original cast member Nic Rouleau played his final performance as Elder Price. Original cast member Brian Sears also left the production that day. Rouleau was replaced by Dave Thomas Brown. Sears was replaced by Cody Jamison Strand, a longtime Elder Cunningham on both Broadway and the 2nd National Tour. Other Broadway cast members include original Broadway cast member Lewis Cleale as Joseph Smith / the Mission President and other roles, and Derrick Williams as the General.
The first North American tour began previews on August 14, 2012 at the Denver Center for the Performing Arts in Denver, Colorado, before moving to the Pantages Theatre in Los Angeles beginning September 5, with the official opening night for the tour on September 12. Originally planned to begin in December 2012, production was pushed forward four months. Gavin Creel (Price) and Jared Gertner (Cunningham) led the cast until late December when West End performer Mark Evans and Christopher John O'Neill took over, allowing time for Creel and Gertner to begin rehearsals for their move to the West End production. After Evans left the show on June 30, 2014, Broadway Elder Price stand - by K.J. Hippensteel temporarily covered the role of Elder Price. Hippensteel returned to Broadway and Ryan Bondy (who was covering for Hippensteel as the Broadway Elder Price stand - by) took over the role of Elder Price. Bondy continued on as Elder Price until Creel returned from London later in the summer of 2014. When Creel and O'Neill left the touring production to join the Broadway production, Bondy again took over the role of Elder Price while Chad Burris took over for O'Neill as Elder Cunningham. The two were leads for only six weeks as they waited for replacements to come from the West End production. Billy Harrigan Tighe and A.J. Holmes moved over from the West End production to reprise their roles as Elders Price and Cunningham, respectively. Bondy and Burris then returned to the Second National Tour as stand - bys for Elder Price and Elder Cunningham.
As part of the tour, the musical was performed in Salt Lake City for the first time at the end of July and early August 2015.
The tour closed on May 1, 2016 in Honolulu, Hawaii.
The first replica sit - down production, separate from the tour, began previews on December 11, 2012, and officially opened on December 19 of that year, at the Bank of America Theatre in Chicago, Illinois as part of Broadway in Chicago. The limited engagement closed October 6, 2013 and became the second U.S. national tour. The cast included Nic Rouleau in the role of Price, along with Ben Platt as Cunningham.
A UK production debuted in the West End on February 25, 2013 at the Prince of Wales Theatre. Gavin Creel and Jared Gertner reprised their North American tour performances. The London cast members hosted a gala performance of the new musical on March 13, 2013, raising £ 200,000 for the British charity Comic Relief 's Red Nose Day. A typical London performance runs two hours and 30 minutes, including an interval of 15 minutes. In March 2014, The Book of Mormon was voted Funniest West End Show as part of the 2014 West End Frame Awards. On July 28, 2014, both Creel and Gertner left the production. Creel left the West End production to return to the 1st National Tour and was replaced by his stand - by, Billy Harrigan Tighe. Gertner was replaced by one of his stand - bys, A.J. Holmes, who had previously played Cunningham on both the National Tour and Broadway.
After February 2, 2015, Broadway actor Nic Rouleau, cast in the role of Elder Kevin Price, replaced Billy Harrigan Tighe, and Brian Sears, who also starred on Broadway (as an ensemble member), replaced A.J. Holmes as Elder Cunningham. Tighe and Holmes then joined the cast of the 1st National Tour, filling the vacancies left when Creel and O'Neill departed the tour to play the leads on Broadway. On January 25, 2016 Rouleau announced via Twitter that January 30, 2016 would be his last performance as Elder Price in the West End. On February 1, 2016, longtime Broadway stand - by K.J. Hippensteel officially took over the role of Elder Price in the West End cast. On August 6, 2016 Stephen Ashfield had his last performance as Elder McKinley, as he was transferring over to the Broadway production. On August 9, 2016 Steven Webb took over for Ashfield as Elder McKinley. On January 14, 2017 Brian Sears gave his last performance in the West End before leaving London to rejoin the Broadway company on February 20. Sears was replaced by longtime Second National Tour Elder Cunningham, Cody Jamison Strand, whose first performance was on January 30, 2017.
The new 2018 / 19 cast includes Dom Simpson as Elder Price and J. Michael Finley as Elder Cunningham. Finley comes to the West End production having recently been the stand - by for Elder Cunningham in the Broadway production.
After the Chicago production closed on October 6, 2013, the same production began touring the United States. Platt never went on tour with the production and Rouleau performed in only a few cities on the tour before they both moved to New York and started rehearsals in preparation for joining the Broadway production. David Larsen succeeded Nic Rouleau as Elder Price. A.J. Holmes succeeded Ben Platt as Elder Cunningham. Cody Jamison Strand then succeeded A.J. Holmes in the role. December 14, 2014 was Pierce Cassedy 's last performance as Elder McKinley. He was replaced by former Broadway swing Daxton Bloomquist. On January 3, 2016, Larsen completed his final show as Elder Price. Larsen was replaced by his stand - by, Ryan Bondy. Gabe Gibbs replaced Bondy as Elder Price in October 2016. Oge Agulué replaced David Aron Damane as the General in December 2016. On January 1, 2017 Cody Jamison Strand had his last performance as Elder Cunningham. Strand left the show to join the West End production. Strand was replaced by Conner Pierson on January 3, 2017. On October 24, 2017 longtime ensemble member Kevin Clay assumed the role of Elder Price. Clay had been with the tour since November 2015, and worked his way up from the ensemble to Elder Price understudy and then Elder Price standby before finally assuming the role. Bondy left the touring cast to take over the role of Elder Price in the Australia production. Other cast members include Kayla Pecchioni as Nabulungi, PJ Adzima as Elder McKinley, and Sterling Jarvis as Mafala Hatimbi. January 28, 2018 was PJ Adzima 's last performance as Elder McKinley. He was replaced by Andy Huntington Jones.
The Australian production of The Book of Mormon opened at Melbourne 's Princess Theatre on January 18, 2017. Auditions were held in January 2016 in Sydney and Melbourne; rehearsals began in November. In November 2016, it was announced that Ryan Bondy and A.J. Holmes would reprise their roles as Elder Price and Elder Cunningham, respectively. Zahra Newman would play Nabulungi, Bert Labonté would play Mafala, and Rowan Witt would play Elder McKinley. The production moved to the Sydney Lyric theatre on February 28, 2018.
The first non-English version of the musical opened at the Chinateatern in Stockholm, Sweden, in January 2017. A Norwegian production opened at Det Norske Teatret in Oslo, Norway, in September 2017 to favorable reviews, with demand crashing the ticketing website. The musical opened in Denmark at Copenhagen 's Det Ny Teater in January 2018.
At LDS Church Missionary Training Center, devout, supercilious missionary - to - be Elder Kevin Price leads his classmates in a demonstration of the door - to - door method to convert people to Mormonism ("Hello! ''). Price believes if he prays enough, he will be sent to Orlando, Florida for his two - year mission, but he and Elder Arnold Cunningham, an insecure, compulsive liar, are instead sent to Uganda as a pair ("Two By Two ''). Price is sure he is destined to do something incredible, while Cunningham is just happy to follow. ("You and Me (But Mostly Me) '').
Upon arrival in northern Uganda, the two are robbed by soldiers of a local warlord, General Butt - Fucking Naked (an allusion to the real General Butt Naked). They are welcomed to the village where a group of villagers share their daily reality of living in appalling conditions while being ruled by the General. To make their lives seem better, the villagers repeat a phrase that translates as "Fuck you, God! '' ("Hasa Diga Eebowai '').
Price and Cunningham are led to their living quarters by a young woman named Nabulungi, where they meet their fellow missionaries stationed in the area, who have been unable to convert anyone to Mormonism. Elder McKinley, the district leader, teaches Price and Cunningham a widely accepted method of dealing with negative and upsetting feelings ("Turn It Off ''). Though Price is riddled with anxiety, Cunningham reassures him that he will succeed and that, as his partner, Cunningham will be by his side no matter what ("I Am Here for You '').
Price is certain he can succeed where the other Mormon elders have failed, teaching the villagers about Joseph Smith through a song that begins as a tribute to Smith but eventually descends into a tribute by Price to himself ("All - American Prophet ''). The General arrives and announces his demand for the genital mutilation of all female villagers. After a villager protests, the General executes him. Safely hiding back at home, Nabulungi, moved by Price 's promise of an earthly paradise, dreams of a better life in a new land ("Sal Tlay Ka Siti '').
The Mission President has requested a progress report on their mission. Shocked by the execution and the reality of Africa, Price decides to abandon his mission and requests a transfer to Orlando, while Cunningham, ever loyal, assures Price he will follow him anywhere ("I Am Here For You (Reprise) ''). However, Price unceremoniously dumps him as mission companion. Cunningham is crushed and alone, but when Nabulungi comes to him, wanting to learn more about the Book of Mormon and having convinced the villagers to listen to him, Cunningham finds the courage to take control of the situation ("Man Up '').
When his audience begins to get frustrated and leave, Cunningham quickly makes up stories by combining what he knows of Mormon doctrines with pieces of science fiction and fantasy. Cunningham 's conscience (personified by his father, Joseph Smith, hobbits, Lt. Uhura, Darth Vader, and Yoda) admonishes him, but he rationalizes that if it helps people, it surely can not be wrong ("Making Things Up Again '').
Price joyfully arrives in Orlando but then realizes that he is dreaming. He is reminded of the nightmares of hell he had as a child and panics when his nightmare begins once again ("Spooky Mormon Hell Dream ''). Price awakens and decides to re-commit to his mission.
Cunningham announces several Ugandans are interested in the church. McKinley points out that unless the General is dealt with, no one will convert. Price, seeing the chance to prove his worth, sets off on the "mission he was born to do ''. After re-affirming his faith, he confronts the General determined to convert him ("I Believe ''). The General is unimpressed and drags Price away.
Cunningham concludes his preaching and the villagers are baptized, with Nabulungi and Cunningham sharing a tender moment as they do ("Baptize Me ''). The Mormon missionaries feel oneness with the people of Uganda and celebrate ("I Am Africa ''). Price is seen in the village doctor 's office, having the Book of Mormon removed from his rectum. Meanwhile, the General hears of the villagers ' conversion and resolves to kill them all.
Having lost his faith, Price drowns his sorrows in coffee. Cunningham finds Price and tells him they need to at least act like mission companions, as the Mission President is coming to visit the Ugandan mission. Price reflects on all the broken promises the Church, his parents, his friends and life in general made to him ("Orlando '').
Nabulungi and the villagers perform a pageant to "honor (them) with the story of Joseph Smith, the American Moses '' ("Joseph Smith American Moses ''), which reflects the distortions put forth by Cunningham, such as making love to a frog to cure their AIDS. The Mission President is appalled, orders all the missionaries to go home, and tells Nabulungi that she and her fellow villagers are not Mormons. Nabulungi, heartbroken at the thought that she will never reach paradise, curses God for forsaking her ("Hasa Diga Eebowai (Reprise) ''). Price has had an epiphany and realizes Cunningham was right all along: though scriptures are important, what is more important is getting the message across ("You and Me (But Mostly Me) (Reprise) '').
The General arrives, and Nabulungi is ready to submit to him, telling the villagers that the stories Cunningham told them are untrue. To her shock, they respond that they have always known that the stories were metaphors rather than the literal truth. Price rallies the Mormons and the Ugandans to work together to make this their paradise. In an imagined future, the newly minted Ugandan elders go door to door to evangelize "The Book of Arnold. '' ("Tomorrow Is a Latter Day '' / "Hello! (Reprise) '' / "Finale '').
The Book of Mormon uses a nine - member orchestra.
A cast recording of the original Broadway production was released on May 17, 2011, by Ghostlight Records. All of the songs featured on stage are present on the recording with the exception of "I Am Here For You '' (Reprise), "Orlando '' (Reprise), "Hasa Diga Eebowai '' (Reprise) and "You and Me (But Mostly Me) '' (Reprise). "Hello '' (Reprise) and the "Encore '' are attached to the end of the last track of the CD, titled, "Tomorrow Is a Latter Day ''. A free preview of the entire recording was released on NPR starting on May 9, 2011. Excerpts from the cast recording are featured in an extended Fresh Air interview.
During the first week of its iTunes Store release, the recording became "the fastest - selling Broadway cast album in iTunes history, '' according to representatives for the production, ranking No. 2 on its day of release on the iTunes Top 10 Chart. According to Playbill, "It 's a rare occurrence for a Broadway cast album to place among the iTunes best sellers. '' The record has received positive reviews, with Rolling Stone calling the recording an "outstanding album that highlights the wit of the lyrics and the incredible tunefulness of the songs while leaving you desperate to score tickets to see the actual show. '' Although the cast album had a respectable debut on the US Billboard 200 chart in its initial week of release, after the show 's success at the 2011 Tony Awards, the record rapidly ascended the chart to number three, making it the highest - charting Broadway cast album in over four decades.
A vinyl version is planned.
The principal cast members of all major productions of The Book of Mormon.
The Book of Mormon contains many religious themes, most notably those of faith and doubt. Although the musical satirizes organized religion and the literal credibility of the LDS Church, the Mormons in The Book of Mormon are portrayed as well - meaning and optimistic, if a little naïve and unworldly. In addition, while the musical presents many religious stories as rigid, out of touch, and silly, it ultimately comes to the conclusion that religion itself can do enormous good as long as it is taken metaphorically and not literally. Matt Stone, one of the show 's creators, described The Book of Mormon as "an atheist 's love letter to religion. ''
The opening scenes of Act I and II parody the Hill Cumorah Pageant.
The Book of Mormon received broad critical praise for the plot, score, actors ' performances, direction and choreography. Vogue Magazine called the show "the filthiest, most offensive, and -- surprise -- sweetest thing you 'll see on Broadway this year, and quite possibly the funniest musical ever. '' New York Post reported that audience members were "sore from laughing so hard ''. It praised the score, calling it "tuneful and very funny, '' and added that "the show has heart. It makes fun of organized religion, but the two Mormons are real people, not caricatures. ''
Ben Brantley of The New York Times compared the show favorably to Rodgers and Hammerstein 's The King and I and The Sound of Music but "rather than dealing with tyrannical, charismatic men with way too many children, our heroes... must confront a one - eyed, genocidal warlord with an unprintable name... That 's enough to test the faith of even the most optimistic gospel spreaders (not to mention songwriters). Yet in setting these dark elements to sunny melodies The Book of Mormon achieves something like a miracle. It both makes fun of and ardently embraces the all - American art form of the inspirational book musical. No Broadway show has so successfully had it both ways since Mel Brooks adapted his film The Producers for the stage a decade ago. '' Jon Stewart, host of The Daily Show, spent much of his interview with Parker and Stone on the March 10, 2011 episode praising the musical.
Charles McNulty of the Los Angeles Times praised the music, and stated: "The songs, often inspired lampoons of contemporary Broadway styles, are as catchy as they are clever. '' McNulty concluded by stating "Sure it 's crass, but the show is not without good intentions and, in any case, vindicates itself with musical panache. '' Peter Marks of the Washington Post wrote: "The marvel of The Book of Mormon is that even as it profanes some serious articles of faith, its spirit is anything but mean. The ardently devout and comedically challenged are sure to disagree. Anyone else should excitedly approach the altar of Parker, Stone and Lopez and expect to drink from a cup of some of the sweetest poison ever poured. '' Marks further describes the musical as "one of the most joyously acidic bundles Broadway has unwrapped in years. ''
However, The Wall Street Journal 's Terry Teachout called the show "slick and smutty: The Book of Mormon is the first musical to open on Broadway since La Cage aux Folles that has the smell of a send - in - the - tourists hit... The amateurish part relates mostly to the score, which is jointly credited to the three co-creators and is no better than what you might hear at a junior - varsity college show. The tunes are jingly - jangly, the lyrics embarrassingly ill - crafted. '' Other critics have called the show "crassly commercial '' as well as "dull '' and "derivative ''.
The show 's depiction of Africans has been called racist. NPR 's Janice Simpson notes that "the show does n't work unless the villagers are seen mainly as noble savages who need white people to show them the way to enlightenment. '' She further criticized the depiction of African doctors as well as the references to AIDS and female genital mutilation. Max Perry Mueller of Harvard writes that "The Book of Mormon producers worked so hard to get the ' Mormon thing ' right, while completely ignoring the Ugandan culture ''. The Aid Leap blog noted that "the gleeful depiction of traditional stereotypes about Africa (dead babies, warlord, HIV, etc.) reinforced rather than challenged general preconceptions '', and "the Africans are just a background to the emotional development of the Mormons ''.
The response of The Church of Jesus Christ of Latter - day Saints to the musical has been described as "measured. '' The church released an official response to inquiries regarding the musical, stating, "The production may attempt to entertain audiences for an evening, but the Book of Mormon as a volume of scripture will change people 's lives forever by bringing them closer to Christ. '' Michael Otterson, the head of Public Affairs for the church, followed in April 2011 with measured criticism. "Of course, parody is n't reality, and it 's the very distortion that makes it appealing and often funny. The danger is not when people laugh but when they take it seriously -- if they leave a theater believing that Mormons really do live in some kind of a surreal world of self - deception and illusion, '' Otterson wrote, outlining various humanitarian efforts achieved by Mormon missionaries in Africa since the early 2000s. Stone and Parker were unsurprised:
The official church response was something along the lines of "The Book of Mormon the musical might entertain you for a night, but the Book of Mormon, '' -- the book as scripture -- "will change your life through Jesus. '' Which we actually completely agree with. The Mormon church 's response to this musical is almost like our Q.E.D. at the end of it. That 's a cool, American response to a ribbing -- a big musical that 's done in their name. Before the church responded, a lot of people would ask us, "Are you afraid of what the church would say? '' And Trey and I were like, "They 're going to be cool. '' And they were like, "No, they 're not. There are going to be protests. '' And we were like, "Nope, they 're going to be cool. '' We were n't that surprised by the church 's response. We had faith in them.
The LDS Church has advertised in the playbills at many of the musical 's venues to encourage attendees to learn more about the Book of Mormon, with phrases like "you 've seen the play, now read the book '' and "the book is always better. ''
In Melbourne during the 2017 run, the Church advertised at Southern Cross railway station and elsewhere in the city, as well as on television with ads featuring prominent Australian Mormons, including rugby player Will Hopoate, stage actor Patrice Tipoki and ballet dancer Jake Mangakahia.
Mormons themselves have had varying responses to the musical. Richard Bushman, professor of Mormon studies, said of the musical, "Mormons experience the show like looking at themselves in a fun - house mirror. The reflection is hilarious but not really you. The nose is yours but swollen out of proportion. '' Bushman said that the musical was not meant to explain Mormon belief, and that many of the ideas in Elder Price 's "I Believe '' (like God living on a planet called Kolob), though having some roots in Mormon belief, are not doctrinally accurate.
When asked in January 2015 if he had met Mormons who disliked the musical, Gad stated "In the 1.5 years I did that show, I never got a single complaint from a practicing Mormon... To the contrary, I probably had a few people -- a dozen -- tell me they were so moved by the show that they took up the Mormon faith. ''
|
when does bloom trail go back to school | Bloom Trail High School - Wikipedia
Bloom Trail High School is a public high school in Chicago Heights, Illinois, United States. It is part of Bloom Township High School District 206. Originally the Bloom Township Freshman - Sophomore Division, in 1976 it became a four - year high school and was renamed Bloom Trail High School. Sports for both Bloom High School and Bloom Trail High School are combined. Football, volleyball, basketball, and track games and practices are held at Bloom Trail.
Bloom Trail High School offers a wide variety of classes, including AP and honors classes. The average ACT score is 18. The class of 2017 set a record for the number of students (33) who passed the PSAE, which allows those students to take college courses at the local community college, Prairie State College.
Bloom Trail High School serves the communities of Steger, Sauk Village, Crete, South Chicago Heights, Ford Heights, a small portion of Chicago Heights, and parts of Lynwood. The following schools feed into Bloom Trail High School: Columbia Central Middle School (Steger), Rickover Jr. High School (Sauk Village), Crete - Monee Sixth Grade Center Middle School (Crete) and Cottage Grove Middle School (Ford Heights).
|
the seat of the archbishop and the head of the english church was at | Church of England - wikipedia
The Church of England (C of E) is the state church of England. The Archbishop of Canterbury (currently Justin Welby) is the most senior cleric, although the monarch is the supreme governor. The Church of England is also the mother church of the international Anglican Communion. It traces its history to the Christian church recorded as existing in the Roman province of Britain by the third century, and to the 6th - century Gregorian mission to Kent led by Augustine of Canterbury.
The English church renounced papal authority when Henry VIII failed to secure an annulment of his marriage to Catherine of Aragon in 1534. The English Reformation accelerated under Edward VI 's regents, before a brief restoration of papal authority under Queen Mary I and King Philip. The Act of Supremacy 1558 renewed the breach and the Elizabethan Settlement charted a course enabling the English church to describe itself as both Catholic and Reformed.
In the earlier phase of the English Reformation there were both Catholic martyrs and radical Protestant martyrs. The later phases saw the Penal Laws punish Roman Catholic and nonconforming Protestants. In the 17th century, political and religious disputes raised the Puritan and Presbyterian faction to control of the church, but this ended with the Restoration. Papal recognition of George III in 1766 led to greater religious tolerance.
Since the English Reformation, the Church of England has used a liturgy in English. The church contains several doctrinal strands, the main three known as Anglo - Catholic, Evangelical and Broad Church. Tensions between theological conservatives and progressives find expression in debates over the ordination of women and homosexuality. The church includes both liberal and conservative clergy and members.
The governing structure of the church is based on dioceses, each presided over by a bishop. Within each diocese are local parishes. The General Synod of the Church of England is the legislative body for the church and comprises bishops, other clergy and laity. Its measures must be approved by both Houses of Parliament.
According to tradition, Christianity arrived in Britain in the 1st or 2nd century, during which time southern Britain became part of the Roman Empire. The earliest historical evidence of Christianity among the native Britons is found in the writings of such early Christian Fathers as Tertullian and Origen in the first years of the 3rd century. Three Romano - British bishops, including Restitutus, are known to have been present at the Council of Arles in 314. Others attended the Council of Serdica in 347 and that of Ariminum in 360, and a number of references to the church in Roman Britain are found in the writings of 4th century Christian fathers. Britain was the home of Pelagius, who opposed Augustine of Hippo 's doctrine of original sin.
While Christianity was long established as the religion of the Britons at the time of the Anglo - Saxon invasion, Christian Britons made little progress in converting the newcomers from their native paganism. Consequently, in 597, Pope Gregory I sent the prior of the Abbey of St Andrew 's (later canonised as Augustine of Canterbury) from Rome to evangelise the Angles. This event is known as the Gregorian mission and is the date the Church of England generally marks as the beginning of its formal history. With the help of Christians already residing in Kent, Augustine established his church at Canterbury, the capital of the Kingdom of Kent, and became the first in the series of Archbishops of Canterbury in 598. A later archbishop, the Greek Theodore of Tarsus, also contributed to the organisation of Christianity in England. The Church of England has been in continuous existence since the days of St Augustine, with the Archbishop of Canterbury as its episcopal head. Despite the various disruptions of the Reformation and the English Civil War, the Church of England considers itself to be the same church which was more formally organised by Augustine.
While some Celtic Christian practices were changed at the Synod of Whitby, the Christian church in the British Isles was under papal authority from earliest times. Queen Bertha of Kent was among the Christians in England who recognised papal authority before Augustine arrived, and Celtic Christians were carrying out missionary work with papal approval long before the Synod of Whitby.
The Synod of Whitby established the Roman date for Easter and the Roman style of monastic tonsure in England. This meeting between ecclesiastics who followed Roman customs and the local bishops was summoned in 664 at Saint Hilda 's double monastery of Streonshalh (Streanæshalch), later called Whitby Abbey. It was presided over by King Oswiu, who did not engage in the debate but made the final ruling.
In 1534, King Henry VIII separated the English Church from Rome. A theological separation had been foreshadowed by various movements within the English Church, such as Lollardy, but the English Reformation gained political support when Henry VIII wanted an annulment of his marriage to Catherine of Aragon so he could marry Anne Boleyn. Pope Clement VII, considering that the earlier marriage had been entered under a papal dispensation and how Catherine 's nephew, Emperor Charles V, might react to such a move, refused the annulment. Eventually, Henry, although theologically opposed to Protestantism, took the position of Supreme Head of the Church of England to ensure the annulment of his marriage. He was excommunicated by Pope Paul III.
In 1536 -- 40 Henry VIII engaged in the Dissolution of the Monasteries, which had controlled much of the richest land. He disbanded monasteries, priories, convents and friaries in England, Wales and Ireland, appropriated their income, disposed of their assets, and provided pensions for the former residents. The properties were sold to pay for the wars.
Henry maintained a strong preference for traditional Catholic practices and, during his reign, Protestant reformers were unable to make many changes to the practices of the Church of England. Indeed, this part of Henry 's reign saw the trial for heresy of Protestants as well as Roman Catholics.
Under his son, King Edward VI, more Protestant - influenced forms of worship were adopted. Under the leadership of the Archbishop of Canterbury, Thomas Cranmer, a more radical reformation proceeded. A new pattern of worship was set out in the Book of Common Prayer (1549 and 1552). These were based on the older liturgy but influenced by Protestant principles. The confession of the reformed Church of England was set out in the Forty - two Articles (later revised to thirty - nine). The reformation, however, was cut short by the death of the king. Queen Mary I, who succeeded him, returned England again to the authority of the papacy, thereby ending the first attempt at an independent Church of England. During her co-reign with her husband, King Philip, many leaders and common people were burnt for their refusal to recant their reformed faith. These are known as the Marian martyrs, and the persecution led to her nickname of "Bloody Mary ''.
Mary also died childless and so it was left to the new regime of her half - sister Elizabeth to resolve the direction of the church. The settlement under Queen Elizabeth I (from 1558), known as the Elizabethan Settlement, developed the via media (middle way) character of the Church of England, a church moderately Reformed in doctrine, as expressed in the Thirty - Nine Articles, but also emphasising continuity with the Catholic and Apostolic traditions of the Church Fathers. It was also an established church (constitutionally established by the state with the head of state as its supreme governor). The exact nature of the relationship between church and state would be a source of continued friction into the next century.
For the next century, through the reigns of James I, who ordered the creation of what became known as the King James Bible, and Charles I, culminating in the English Civil War and the Protectorate of Oliver Cromwell, there were significant swings back and forth between two factions: the Puritans (and other radicals) who sought more far - reaching Protestant reforms, and the more conservative churchmen who aimed to keep closer to traditional beliefs and Catholic practices. The failure of political and ecclesiastical authorities to submit to Puritan demands for more extensive reform was one of the causes of open warfare. By Continental standards, the level of violence over religion was not high, but the casualties included King Charles I and the Archbishop of Canterbury, William Laud. Under the Commonwealth and the Protectorate of England from 1649 to 1660, the bishops were dethroned and former practices were outlawed, and Presbyterian ecclesiology was introduced in place of the episcopate. The 39 Articles were replaced by the Westminster Confession, the Book of Common Prayer by the Directory of Public Worship. Despite this, about one quarter of English clergy refused to conform to this form of State Presbyterianism.
With the Restoration of Charles II, Parliament restored the Church of England to a form not far removed from the Elizabethan version. One difference was that the ideal of encompassing all the people of England in one religious organisation, taken for granted by the Tudors, had to be abandoned. The religious landscape of England assumed its present form, with the Anglican established church occupying the middle ground, and those Puritans and Protestants who dissented from the Anglican establishment, and Roman Catholics, too strong to be suppressed altogether, having to continue their existence outside the national church rather than controlling it. Continuing official suspicion and legal restrictions continued well into the 19th century.
By the Fifth Article of the Union with Ireland 1800, the Church of England and Church of Ireland were united into "one Protestant Episcopal church, to be called, the United Church of England and Ireland ''. Although this union was declared "an essential and fundamental Part of the Union '', the Irish Church Act 1869 separated the Irish part of the church again and disestablished it, the Act coming into effect on 1 January 1871.
As the British Empire expanded, British colonists and colonial administrators took the established church doctrines and practices together with ordained ministry and formed overseas branches of the Church of England. As they developed or, beginning with the United States of America, became sovereign or independent states, many of their churches became separate organisationally but remained linked to the Church of England through the Anglican Communion.
In Bermuda, the oldest remaining English colony (now designated a British Overseas Territory), the first Church of England services were performed by the Reverend Richard Buck, one of the survivors of the 1609 wreck of the Sea Venture which initiated Bermuda 's permanent settlement. The nine parishes of the Church of England in Bermuda, each with its own church and glebe land, rarely had more than a pair of ordained ministers to share between them until the Nineteenth Century. From 1825 to 1839, Bermuda 's parishes were attached to the See of Nova Scotia. Bermuda was then grouped into the new Diocese of Newfoundland and Bermuda from 1839. In 1879, the Synod of the Church of England in Bermuda was formed. At the same time, a Diocese of Bermuda became separate from the Diocese of Newfoundland, but both continued to be grouped under the Bishop of Newfoundland and Bermuda until 1919, when Newfoundland and Bermuda each received its own Bishop.
The Church of England in Bermuda was renamed in 1978 as the Anglican Church of Bermuda, which is an extra-provincial diocese, with both metropolitan and primatial authority coming directly from the Archbishop of Canterbury. Among its parish churches is St Peter 's Church in the UNESCO World Heritage Site of St George 's Town, which is both the oldest Anglican and the oldest non-Roman Catholic church in the New World.
Under the guidance of Rowan Williams, and with significant pressure from clergy union representatives, the ecclesiastical penalty of defrocking for convicted felons was set aside under the Clergy Discipline Measure 2003. The clergy union argued that the penalty was unfair to victims of hypothetical miscarriages of criminal justice, because the ecclesiastical penalty is considered irreversible. Although clerics can still be banned for life from ministry, they remain ordained as priests.
The archbishops of Canterbury and York warned in January 2015 that the Church of England will no longer be able to carry on in its current form unless the downward spiral in membership is somehow reversed as typical Sunday attendances have halved to 800,000 in the last 40 years:
The urgency of the challenge facing us is not in doubt. Attendance at Church of England services has declined at an average of one per cent per annum over recent decades and, in addition, the age profile of our membership has become significantly older than that of the population... Renewing and reforming aspects of our institutional life is a necessary but far from sufficient response to the challenges facing the Church of England... The age profile of our clergy has also been increasing. Around 40 per cent of parish clergy are due to retire over the next decade or so.
However, Sarah Mullally, the fourth woman chosen to become a bishop in the Church of England, insisted in June 2015 that declining numbers at services should not necessarily be a cause of despair for churches because people will still "encounter God '' without ever taking their place in a pew, saying that people might hear the Christian message through social media sites such as Facebook or in a café run as a community project. Additionally, the church 's own statistics reveal that 9.7 million people visit an Anglican church every year and 1 million students are educated at Anglican schools.
In 2015 the Church of England admitted that it was embarrassed to be paying staff under the living wage. The Church of England had previously campaigned for all employers to pay this minimum amount. The archbishop acknowledged it was not the only area where the church "fell short of its standards ''.
The canon law of the Church of England identifies the Christian scriptures as the source of its doctrine. In addition, doctrine is also derived from the teachings of the Church Fathers and ecumenical councils (as well as the ecumenical creeds) in so far as these agree with scripture. This doctrine is expressed in the Thirty - Nine Articles of Religion, the Book of Common Prayer, and the Ordinal containing the rites for the ordination of deacons, priests, and the consecration of bishops. Unlike other traditions, the Church of England has no single theologian that it can look to as a founder. However, Richard Hooker 's appeal to scripture, church tradition, and reason as sources of authority continue to inform Anglican identity.
The Church of England 's doctrinal character today is largely the result of the Elizabethan Settlement, which sought to establish a comprehensive middle way between Roman Catholicism and Protestantism. The Church of England affirms the Protestant Reformation principle that scripture contains all things necessary to salvation and is the final arbiter in doctrinal matters. The Thirty - nine Articles are the church 's only official confessional statement. Though not a complete system of doctrine, the articles highlight areas of agreement with Lutheran and Reformed positions, while differentiating Anglicanism from Roman Catholicism and Anabaptism.
While embracing some themes of the Protestant Reformation, the Church of England also maintains Catholic traditions of the ancient church and teachings of the Church Fathers, unless these are considered contrary to scripture. It accepts the decisions of the first four ecumenical councils concerning the Trinity and the Incarnation. The Church of England also preserves Catholic order by adhering to episcopal polity, with ordained orders of bishops, priests and deacons. There are differences of opinion within the Church of England over the necessity of episcopacy. Some consider it essential, while others feel it is needed for the proper ordering of the church.
The Church of England has, as one of its distinguishing marks, a breadth and "open - mindedness ''. This tolerance has allowed Anglicans who emphasise the Catholic tradition and others who emphasise the Reformed tradition to coexist. The three "parties '' (see Churchmanship) in the Church of England are sometimes called high church (or Anglo - Catholic), low church (or evangelical Anglican) and broad church (or liberal). The high church party places importance on the Church of England 's continuity with the pre-Reformation Catholic Church, adherence to ancient liturgical usages and the sacerdotal nature of the priesthood. As their name suggests, Anglo - Catholics maintain many traditional Catholic practices and liturgical forms. The low church party is more Protestant in both ceremony and theology. Historically, broad church has been used to describe those of middle - of - the - road ceremonial preferences who lean theologically towards liberal Protestantism. The balance between these strands of churchmanship is not static: in 2013, 40 % of Church of England worshippers attended evangelical churches (compared with 26 % in 1989), and 83 % of very large congregations were evangelical. Such churches were also reported to attract higher numbers of men and young adults than others.
The Church of England 's official book of liturgy as established in English Law is the Book of Common Prayer. In addition to this book the General Synod has also legislated for a modern liturgical book, Common Worship, dating from 2000, which can be used as an alternative to the BCP. Like its predecessor, the 1980 Alternative Service Book, it differs from the Book of Common Prayer in providing a range of alternative services, mostly in modern language, although it does include some BCP - based forms as well, for example Order Two for Holy Communion. (This is a revision of the BCP service, altering some words and allowing the insertion of some other liturgical texts such as the Agnus Dei before communion.) The Order One rite follows the pattern of more modern liturgical scholarship.
The liturgies are organised according to the traditional liturgical year and the calendar of saints. The sacraments of baptism and the Eucharist are generally thought necessary to salvation. Infant baptism is practised. At a later age, individuals baptised as infants receive confirmation by a bishop, at which time they reaffirm the baptismal promises made by their parents or sponsors. The Eucharist, consecrated by a thanksgiving prayer including Christ 's Words of Institution, is believed to be "a memorial of Christ 's once - for - all redemptive acts in which Christ is objectively present and effectually received in faith ''.
The use of hymns and music in the Church of England has changed dramatically over the centuries. Traditional Choral evensong is a staple of most cathedrals. The style of psalm chanting harks back to the Church of England 's pre-reformation roots. During the 18th century, clergy such as Charles Wesley introduced their own styles of worship with poetic hymns.
In the latter half of the 20th century, the influence of the Charismatic Movement significantly altered the worship traditions of numerous Church of England parishes, primarily affecting those of evangelical persuasion. These churches now adopt a contemporary style of worship with minimal liturgical or ritual elements and incorporate contemporary worship music.
Women were appointed as deaconesses from 1861 but they could not function fully as deacons and were not considered ordained clergy. Women have been lay readers for a long time. During the First World War, some women were appointed as lay readers, known as "bishop 's messengers '', who also led missions and ran churches in the absence of men. After that no more lay readers were appointed until 1969.
Legislation authorising the ordination of women as deacons was passed in 1986 and they were first ordained in 1987. The ordination of women as priests was passed by the General Synod in 1992 and began in 1994. In 2010, for the first time in the history of the Church of England, more women than men were ordained as priests (290 women and 273 men).
In July 2005, the synod voted to "set in train '' the process of allowing the consecration of women as bishops. In February 2006, the synod voted overwhelmingly for the "further exploration '' of possible arrangements for parishes that did not want to be directly under the authority of a bishop who is a woman. On 7 July 2008, the synod voted to approve the ordination of women as bishops and rejected moves for alternative episcopal oversight for those who do not accept the ministry of bishops who are women. Actual ordinations of women to the episcopate required further legislation, which was narrowly rejected in a vote at General Synod in November 2012.
On 20 November 2013, the General Synod voted overwhelmingly in support of a plan to allow the ordination of women as bishops, with 378 in favour, 8 against and 25 abstentions.
On 14 July 2014, the General Synod approved the ordination of women as bishops. The House of Bishops recorded 37 votes in favour, two against with one abstention. The House of Clergy had 162 in favour, 25 against and four abstentions. The House of Laity voted 152 for, 45 against with five abstentions. This legislation had to be approved by the Ecclesiastical Committee of the Parliament before it could be finally implemented at the November 2014 synod.
In December 2014, Libby Lane was announced as the first woman to become a bishop in the Church of England. She was consecrated as a bishop in January 2015.
In July 2015, Rachel Treweek was the first woman to become a diocesan bishop in the Church of England when she became the Bishop of Gloucester. She and Sarah Mullally, Bishop of Crediton, were the first women to be ordained as bishops at Canterbury Cathedral. Treweek later made headlines by calling for gender - inclusive language, saying that "God is not to be seen as male. God is God. ''
After the consecration of the first women as bishops, Women and the Church (WATCH), a group supporting the ministries of women in the Church of England, called for language referring to God as "Mother ''. This call for more gender - inclusive language has received the outspoken support of the Rt Rev Alan Wilson, the Bishop of Buckingham. In 2015, the Rev Jody Stowell, from WATCH, expressed her support for female images saying "we 're not restricted to understanding God with one gender. I would encourage people to explore those kinds of images. They 're wholly Biblical. ''
The Church of England has been discussing same - sex marriages and LGBT clergy. The church holds that marriage is a union of one man with one woman; however, "Same - sex relationships often embody genuine mutuality and fidelity. '' The "Church of England does not conduct Civil Partnership Ceremonies or Same Sex Marriages but individual churches can conduct a service of thanksgiving after a ceremony. '' Within guidelines, "the law prevents ministers of the Church of England from carrying out same - sex marriages. And although there are no authorised services for blessing a same - sex civil marriage, your local church can still support you with prayer. '' The Archbishops ' Council said that "clergy in the Church of England are permitted to offer prayers of support on a pastoral basis for people in same - sex relationships ''; as such, many Anglican churches, with clergy open to it, "already bless same - sex couples on an unofficial basis. ''
Civil Partnerships for clergy have been allowed since 2005. By 2010, the General Synod voted in favour of extending pensions and other employee rights to clergy in civil unions. In a missive to clergy, the church communicated that "there was a need for committed same - sex couples to be given recognition and ' compassionate attention ' from the Church, including special prayers. '' Some congregations have published "Prayers for a Same Sex Commitment. '' After same - sex marriage was legalised, the Archbishops ' Council asked for the government to continue to offer civil unions saying "The Church of England recognises that same - sex relationships often embody fidelity and mutuality. Civil partnerships enable these Christian virtues to be recognised socially and legally in a proper framework. ''
In 2014, the Bishops released guidelines that permit "more informal kind of prayer '' for couples. Some congregations invite same - sex couples to receive "services of thanksgiving '' after a civil marriage. In the guidelines, "gay couples who get married will be able to ask for special prayers in the Church of England after their wedding, the bishops have agreed. '' In 2016, The Bishop of Grantham, the Rt Rev Nicholas Chamberlain, announced he is gay, in a same - sex relationship and celibate; becoming the first bishop to do so in the church. The church had decided in 2013 that gay clergy in civil partnerships could become bishops. "The House (of Bishops) has confirmed that clergy in civil partnerships, and living in accordance with the teaching of the church on human sexuality, can be considered as candidates for the episcopate. ''
In 2017, the House of Clergy voted against the motion to ' take note ' of the Bishops ' report defining marriage as between a man and a woman. Because the motion required approval in all three houses, it was rejected. After General Synod rejected the motion, the Archbishops of Canterbury and York called for "radical new Christian inclusion '' that is "based on good, healthy, flourishing relationships, and in a proper 21st century understanding of being human and of being sexual. '' The Diocese of Hereford approved a motion calling for the church "to create a set of formal services and prayers to bless those who have had a same - sex marriage or civil partnership. ''
Regarding transgender issues, the 2017 General Synod voted in favour of a motion saying that transgender people should be "welcomed and affirmed in their parish church... '' The motion also asked the Bishops "to look into special services for transgender people. '' The Diocese of Blackburn has already begun recognising such services. Since 2000, the church has allowed priests to undergo gender transition and remain in office. The church has ordained openly transgender clergy since 2005.
Just as the Church of England has a large conservative or "traditionalist '' wing, it also has many liberal members and clergy. Approximately one third of clergy "doubt or disbelieve in the physical resurrection ''. Others, such as the Revd Giles Fraser, a contributor to The Guardian, have argued for an allegorical interpretation of the virgin birth of Jesus. The Independent reported in 2014 that, according to a YouGov survey of Church of England clergy, "as many as 16 per cent are unclear about God and two per cent think it is no more than a human construct. '' Moreover, many congregations are seeker - friendly environments. For example, one report from the Church Mission Society suggested that the church open up "a pagan church where Christianity (is) very much in the centre '' to reach out to spiritual people.
The Church of England is generally opposed to abortion but recognises that "there can be - strictly limited - conditions under which it may be morally preferable to any available alternative ''. The church also opposes euthanasia. Its official stance is that "While acknowledging the complexity of the issues involved in assisted dying / suicide and voluntary euthanasia, the Church of England is opposed to any change in the law or in medical practice that would make assisted dying / suicide or voluntary euthanasia permissible in law or acceptable in practice. '' It also states that "Equally, the Church shares the desire to alleviate physical and psychological suffering, but believes that assisted dying / suicide and voluntary euthanasia are not acceptable means of achieving these laudable goals. '' However, George Carey, a former Archbishop of Canterbury, announced that he had changed his stance on euthanasia in 2014 and now advocated legalising "assisted dying ''. On embryonic stem - cell research, the church has announced "cautious acceptance to the proposal to produce cytoplasmic hybrid embryos for research ''.
The Church of England set up the Church Urban Fund in the 1980s to tackle poverty and deprivation. They see poverty as trapping individuals and communities with some people in urgent need. This leads to dependency, homelessness, hunger, isolation, low income, mental health problems, social exclusion and violence. They feel that poverty reduces confidence and life expectancy and that people born in poor conditions have difficulty escaping their disadvantaged circumstances.
In parts of Liverpool, Manchester and Newcastle, two - thirds of babies are born into poverty and have poorer life chances, with a life expectancy 15 years lower than that of babies born in the most fortunate communities. South Shore, Blackpool, has the lowest life expectancy, at 66 years for men.
The deep - rooted unfairness in our society is highlighted by these stark statistics. Children being born in this country, just a few miles apart, could n't witness a more wildly differing start to life. In child poverty terms, we live in one of the most unequal countries in the western world. We want people to understand where their own community sits alongside neighbouring communities. The disparity is often shocking but it 's crucial that, through greater awareness, people from all backgrounds come together to think about what could be done to support those born into poverty. (Paul Hackwood, the Chair of Trustees at Church Urban Fund)
Many prominent people in the Church of England have spoken out against poverty and welfare cuts in the United Kingdom. Twenty - seven bishops are among 43 Christian leaders who signed a letter which urged David Cameron to make sure people have enough to eat.
We often hear talk of hard choices. Surely few can be harder than that faced by the tens of thousands of older people who must ' heat or eat ' each winter, harder than those faced by families whose wages have stayed flat while food prices have gone up 30 % in just five years. Yet beyond even this we must, as a society, face up to the fact that over half of people using food banks have been put in that situation by cutbacks to and failures in the benefit system, whether it be payment delays or punitive sanctions.
Benefit cuts, failures and "punitive sanctions '' force thousands of UK citizens to use food banks. The campaign to end hunger considers this "truly shocking '' and called for a national day of fasting on 4 April 2014.
Official figures from 2005 showed there were 25 million baptised Anglicans in England and Wales. Due to its status as the established church, in general, anyone may be married, have their children baptised or their funeral in their local parish church, regardless of whether they are baptised or regular churchgoers.
Between 1890 and 2001, churchgoing in the United Kingdom declined steadily. In the years 1968 to 1999, Anglican Sunday church attendances almost halved, from 3.5 per cent of the population to 1.9 per cent. By the year 2014, Sunday church attendances had declined further to 1.4 per cent of the population. One study published in 2008 suggested that if current trends were to continue, Sunday attendances could fall to 350,000 in 2030 and just 87,800 in 2050.
In 2011, the Church of England published statistics showing 1.7 million people attending at least one of its services each month, a level maintained since the turn of the millennium; approximately one million participating each Sunday and three million taking part in a Church of England service on Christmas Day or Christmas Eve. The church also claimed that 30 % attend Sunday worship at least once a year; more than 40 % attend a wedding in their local church and still more attend a funeral there. Nationally the Church of England baptises one child in ten (2011). In 2015, the church 's statistics showed that 2.6 million people attended a special Advent service, 2.4 million attended a Christmas service, 1.3 million attended an Easter service, and 980,000 attended service during an average week. In 2016, 2.6 million people attended a Christmas service, 1.2 million attended an Easter service, 1.1 million people attended a service in the Church of England each month, an average of 930,000 people attended a weekly service, an additional 180,000 attended a service for school each week, and an average of 740,000 people attended Sunday service.
The Church of England has 18,000 active ordained clergy and 10,000 licensed lay ministers. In 2009, 491 people were recommended for ordination training, maintaining the level at the turn of the millennium, and 564 new clergy (266 women and 298 men) were ordained. More than half of those ordained (193 men and 116 women) were appointed to full - time paid ministry. In 2011, 504 new clergy were ordained, including 264 to paid ministry, and 349 lay readers were admitted to ministry; and the mode age - range of those recommended for ordination training had remained 40 -- 49 since 1999.
Article XIX (' Of the Church ') of the 39 Articles defines the church as follows:
The visible Church of Christ is a congregation of faithful men, in which the pure Word of God is preached, and the sacraments be duly ministered according to Christ 's ordinance in all those things that of necessity are requisite to the same.
The British monarch has the constitutional title of Supreme Governor of the Church of England. The canon law of the Church of England states, "We acknowledge that the Queen 's most excellent Majesty, acting according to the laws of the realm, is the highest power under God in this kingdom, and has supreme authority over all persons in all causes, as well ecclesiastical as civil. '' In practice this power is often exercised through Parliament and the Prime Minister.
The Church of Ireland and the Church in Wales separated from the Church of England in 1869 and 1920 respectively and are autonomous churches in the Anglican Communion; Scotland 's national church, the Church of Scotland, is Presbyterian but the Scottish Episcopal Church is in the Anglican Communion.
In addition to England, the jurisdiction of the Church of England extends to the Isle of Man, the Channel Islands and a few parishes in Flintshire, Monmouthshire, Powys and Radnorshire in Wales which voted to remain with the Church of England rather than joining the Church in Wales. Expatriate congregations on the continent of Europe have become the Diocese of Gibraltar in Europe.
The church is structured as follows (from the lowest level upwards): the parish; the deanery; the archdeaconry; the diocese; and the province (Canterbury or York).
All rectors and vicars are appointed by patrons, who may be private individuals, corporate bodies such as cathedrals, colleges or trusts, or by the bishop or directly by the Crown. No clergy can be instituted and inducted into a parish without swearing the Oath of Allegiance to Her Majesty, and taking the Oath of Canonical Obedience "in all things lawful and honest '' to the bishop. Usually they are instituted to the benefice by the bishop and then inducted by the archdeacon into the possession of the benefice property -- church and parsonage. Curates (assistant clergy) are appointed by rectors and vicars, or if priests - in - charge by the bishop after consultation with the patron. Cathedral clergy (normally a dean and a varying number of residentiary canons who constitute the cathedral chapter) are appointed either by the Crown, the bishop, or by the dean and chapter themselves. Clergy officiate in a diocese either because they hold office as beneficed clergy or are licensed by the bishop when appointed, or simply with permission.
The most senior bishop of the Church of England is the Archbishop of Canterbury, who is the metropolitan of the southern province of England, the Province of Canterbury. He has the status of Primate of All England. He is the focus of unity for the worldwide Anglican Communion of independent national or regional churches. Justin Welby has been Archbishop of Canterbury since the confirmation of his election on 4 February 2013.
The second most senior bishop is the Archbishop of York, who is the metropolitan of the northern province of England, the Province of York. For historical reasons (relating to the time of York 's control by the Danes) he is referred to as the Primate of England. John Sentamu became Archbishop of York in 2005. The Bishop of London, the Bishop of Durham and the Bishop of Winchester are ranked in the next three positions.
The process of appointing diocesan bishops is complex, due to historical reasons balancing hierarchy against democracy, and is handled by the Crown Nominations Committee which submits names to the Prime Minister (acting on behalf of the Crown) for consideration.
The Church of England has a legislative body, the General Synod. Synod can create two types of legislation, measures and canons. Measures have to be approved but can not be amended by the British Parliament before receiving the Royal Assent and becoming part of the law of England. Although it is the established church in England only, its measures must be approved by both Houses of Parliament including the non-English members. Canons require Royal Licence and Royal Assent, but form the law of the church, rather than the law of the land.
Another assembly is the Convocation of the English Clergy, which is older than the General Synod and its predecessor the Church Assembly. By the 1969 Synodical Government Measure almost all of the Convocations ' functions were transferred to the General Synod. Additionally, there are Diocesan Synods and deanery synods, which are the governing bodies of the divisions of the Church.
Of the 42 diocesan archbishops and bishops in the Church of England, 26 are permitted to sit in the House of Lords. The Archbishops of Canterbury and York automatically have seats, as do the Bishops of London, Durham and Winchester. The remaining 21 seats are filled in order of seniority by consecration. It may take a diocesan bishop a number of years to reach the House of Lords, at which point he becomes a Lord Spiritual. The Bishop of Sodor and Man and the Bishop of Gibraltar in Europe are not eligible to sit in the House of Lords as their dioceses lie outside the United Kingdom.
Although they are not part of England or the United Kingdom, the Church of England is also the Established Church in the Crown dependencies of the Isle of Man, the Bailiwick of Jersey and the Bailiwick of Guernsey. The Isle of Man has its own diocese of Sodor and Man, and the Bishop of Sodor and Man is an ex officio member of the Legislative Council of the Tynwald on the island. The Channel Islands are part of the Diocese of Winchester, and in Jersey the Dean of Jersey is a non-voting member of the States of Jersey. In Guernsey the Church of England is the Established Church, although the Dean of Guernsey is not a member of the States of Guernsey.
The Archbishop of Canterbury, Justin Welby, has taken strong action in an effort to prevent complaints of sex abuse cases being covered up. Independent investigators are examining files as far back as the 1950s and Welby hopes this independence will prevent any possibility of a cover - up.
We will systematically bring those transparently and openly first of all working with the survivors where they are still alive and then seeing what they want. The rule is survivors come first, not our own interests, and however important the person was, however distinguished, however well - known, survivors come first. (Justin Welby)
The personal files of all Anglican clergy since the 1950s are being audited in an effort to ensure no cover - up. Welby emphasised repeatedly that no cover - up would be acceptable.
Despite such assurances there is concern that not enough may be done and historic abuse may still sometimes be covered up. Keith Porteous Wood of the National Secular Society stated:
The problem was n't that bishops were n't trained in such matters, it is the institutional culture of denial and the bullying of the abused and whistleblowers into silence. One report suggests that 13 bishops ignored letters written in the 1990s warning of abuse by Ball on behalf of a victim who later committed suicide. I have seen evidence that such bullying persists to this day. I hope that the Archbishop 's review into the case of Peter Ball will deal with such bullying and what appears to be the undue influence exerted on the police and CPS by the Church in dealing with this case. The total failure of procedures, outlined by Ian Elliott, echoes that revealed in the totally damning Cahill Report about the conduct of the Archbishop Hope of York in respect of Robert Waddington. The current Archbishop of York has decided that this report should remain in printed form rather than be more widely available on the web.
Bishop Peter Ball was convicted in October 2015 on several charges of indecent assault against young adult men. There are allegations of large - scale earlier cover - ups involving many British establishment figures which prevented Ball 's earlier prosecution. There have also been allegations of child sex abuse, for example Robert Waddington. A complainant, known only as "Joe '', tried for decades to have action taken over sadistic sex abuse which Garth Moore perpetrated against him in 1976 when "Joe '' was 15 years old. None of the high ranking clergy who "Joe '' spoke to recall being told about the abuse, which "Joe '' considers incredible. A representative of the solicitors firm representing "Joe '' said:
The Church of England wants to bury and discourage allegations of non-recent abuse. They know how difficult it is for survivors to come forward, and it appears from this case that the Church has a plan of making it hard for these vulnerable people to come forward. This survivor has had the courage to press his case. Most do not. Most harbour the psychological fallout in silence. We need to find a way to make the system more approachable for survivors.
Although an established church, the Church of England does not receive any direct government support. Donations comprise its largest source of income, and it also relies heavily on the income from its various historic endowments. In 2005, the Church of England had estimated total outgoings of around £ 900 million.
The Church of England manages an investment portfolio which is worth more than £ 8000 million.
The Church of England supports A Church Near You, an online directory of churches. A user - edited resource, it currently lists 16,400 churches and has 7,000 editors in 42 dioceses. The directory enables parishes to maintain accurate location, contact and event information which is shared with other websites and mobile apps. In 2012, the directory formed the data backbone of Christmas Near You and in 2014 was used to promote the church 's Harvest Near You initiative.
|
terrace house opening new doors part 1 cast | Terrace house: Opening new Doors - wikipedia
"... Ready for It? '' by Taylor Swift (Japan)
Terrace House (テラス ハウス): Opening New Doors is a Japanese reality television series in the Terrace House franchise set in Karuizawa in the Nagano prefecture of Japan. It premiered on Netflix Japan as a Netflix Original on December 19, 2017. It is a Netflix and Fuji co-production that is also broadcast on Fuji Television in Japan, first through Fuji on Demand (FOD) on January 16, 2018, with the on - air broadcast beginning on January 22, 2018.
|
which is the longest alpine glacier on earth | Glacier - Wikipedia
A glacier (US: / ˈɡleɪʃər / or UK: / ˈɡlæsiə /) is a persistent body of dense ice that is constantly moving under its own weight; it forms where the accumulation of snow exceeds its ablation (melting and sublimation) over many years, often centuries. Glaciers slowly deform and flow due to stresses induced by their weight, creating crevasses, seracs, and other distinguishing features. They also abrade rock and debris from their substrate to create landforms such as cirques and moraines. Glaciers form only on land and are distinct from the much thinner sea ice and lake ice that form on the surface of bodies of water.
On Earth, 99 % of glacial ice is contained within vast ice sheets in the polar regions, but glaciers may be found in mountain ranges on every continent including Oceania 's high - latitude oceanic islands such as New Zealand and Papua New Guinea. Between 35 ° N and 35 ° S, glaciers occur only in the Himalayas, Andes, Rocky Mountains, a few high mountains in East Africa, Mexico, New Guinea and on Zard Kuh in Iran. Glaciers cover about 10 percent of Earth 's land surface. Continental glaciers cover nearly 13,000,000 km² (5.0 × 10⁶ sq mi) or about 98 percent of Antarctica 's 13,200,000 km² (5.1 × 10⁶ sq mi), with an average thickness of 2,100 m (7,000 ft). Greenland and Patagonia also have huge expanses of continental glaciers.
Glacial ice is the largest reservoir of fresh water on Earth. Many glaciers from temperate, alpine and seasonal polar climates store water as ice during the colder seasons and release it later in the form of meltwater as warmer summer temperatures cause the glacier to melt, creating a water source that is especially important for plants, animals and human uses when other sources may be scant. Within high - altitude and Antarctic environments, the seasonal temperature difference is often not sufficient to release meltwater.
Because glacial mass is affected by long - term climatic changes, e.g., precipitation, mean temperature, and cloud cover, glacial mass changes are considered among the most sensitive indicators of climate change and are a major source of variations in sea level.
A large piece of compressed ice, or a glacier, appears blue, as large quantities of water appear blue. This is because water molecules absorb other colors more efficiently than blue. The other reason for the blue color of glaciers is the lack of air bubbles. Air bubbles, which give a white color to ice, are squeezed out by pressure increasing the density of the created ice.
The word glacier is a loanword from French and goes back, via Franco - Provençal, to the Vulgar Latin glaciārium, derived from the Late Latin glacia, and ultimately Latin glaciēs, meaning "ice ''. The processes and features caused by or related to glaciers are referred to as glacial. The process of glacier establishment, growth and flow is called glaciation. The corresponding area of study is called glaciology. Glaciers are important components of the global cryosphere.
Glaciers are categorized by their morphology, thermal characteristics, and behavior. Cirque glaciers form on the crests and slopes of mountains. A glacier that fills a valley is called a valley glacier, or alternatively an alpine glacier or mountain glacier. A large body of glacial ice astride a mountain, mountain range, or volcano is termed an ice cap or ice field. Ice caps have an area less than 50,000 km² (19,000 sq mi) by definition.
Glacial bodies larger than 50,000 km² (19,000 sq mi) are called ice sheets or continental glaciers. Several kilometers deep, they obscure the underlying topography. Only nunataks protrude from their surfaces. The only extant ice sheets are the two that cover most of Antarctica and Greenland. They contain vast quantities of fresh water, enough that if both melted, global sea levels would rise by over 70 m (230 ft). Portions of an ice sheet or cap that extend into water are called ice shelves; they tend to be thin with limited slopes and reduced velocities. Narrow, fast - moving sections of an ice sheet are called ice streams. In Antarctica, many ice streams drain into large ice shelves. Some drain directly into the sea, often with an ice tongue, like Mertz Glacier.
Tidewater glaciers are glaciers that terminate in the sea, including most glaciers flowing from Greenland, Antarctica, Baffin and Ellesmere Islands in Canada, Southeast Alaska, and the Northern and Southern Patagonian Ice Fields. As the ice reaches the sea, pieces break off, or calve, forming icebergs. Most tidewater glaciers calve above sea level, which often results in a tremendous impact as the iceberg strikes the water. Tidewater glaciers undergo centuries - long cycles of advance and retreat that are much less affected by the climate change than those of other glaciers.
Thermally, a temperate glacier is at melting point throughout the year, from its surface to its base. The ice of a polar glacier is always below the freezing point from the surface to its base, although the surface snowpack may experience seasonal melting. A sub-polar glacier includes both temperate and polar ice, depending on depth beneath the surface and position along the length of the glacier. In a similar way, the thermal regime of a glacier is often described by its basal temperature. A cold - based glacier is below freezing at the ice - ground interface, and is thus frozen to the underlying substrate. A warm - based glacier is above or at freezing at the interface, and is able to slide at this contact. This contrast is thought to a large extent to govern the ability of a glacier to effectively erode its bed, as sliding ice promotes plucking at rock from the surface below. Glaciers which are partly cold - based and partly warm - based are known as polythermal.
Glaciers form where the accumulation of snow and ice exceeds ablation. The area in which a glacier forms is called a cirque (corrie or cwm) -- a typically armchair - shaped geological feature (such as a depression between mountains enclosed by arêtes) -- which collects and compresses through gravity the snow that falls into it. This snow collects and is compacted by the weight of the snow falling above it, forming névé. Further crushing of the individual snowflakes and squeezing the air from the snow turns it into "glacial ice ''. This glacial ice will fill the cirque until it "overflows '' through a geological weakness or vacancy, such as the gap between two mountains. When the mass of snow and ice is sufficiently thick, it begins to move due to a combination of surface slope, gravity and pressure. On steeper slopes, this can occur with as little as 15 m (50 ft) of snow - ice.
In temperate glaciers, snow repeatedly freezes and thaws, changing into granular ice called firn. Under the pressure of the layers of ice and snow above it, this granular ice fuses into denser and denser firn. Over a period of years, layers of firn undergo further compaction and become glacial ice. Glacier ice is slightly less dense than ice formed from frozen water because it contains tiny trapped air bubbles.
Glacial ice has a distinctive blue tint because it absorbs some red light due to an overtone of the infrared OH stretching mode of the water molecule. Liquid water is blue for the same reason. The blue of glacier ice is sometimes misattributed to Rayleigh scattering due to bubbles in the ice.
A glacier originates at a location called its glacier head and terminates at its glacier foot, snout, or terminus.
Glaciers are broken into zones based on surface snowpack and melt conditions. The ablation zone is the region where there is a net loss in glacier mass. The equilibrium line separates the ablation zone and the accumulation zone; it is the altitude where the amount of new snow gained by accumulation is equal to the amount of ice lost through ablation. The upper part of a glacier, where accumulation exceeds ablation, is called the accumulation zone. In general, the accumulation zone accounts for 60 -- 70 % of the glacier 's surface area, more if the glacier calves icebergs. Ice in the accumulation zone is deep enough to exert a downward force that erodes underlying rock. After a glacier melts, it often leaves behind a bowl - or amphitheater - shaped depression that ranges in size from large basins like the Great Lakes to smaller mountain depressions known as cirques.
The accumulation zone can be subdivided based on its melt conditions.
The health of a glacier is usually assessed by determining the glacier mass balance or observing terminus behavior. Healthy glaciers have large accumulation zones, more than 60 % of their area snowcovered at the end of the melt season, and a terminus with vigorous flow.
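For readers who want the bookkeeping behind the term "mass balance '', the standard glaciological definition (not stated explicitly in this article, so treat the notation as a sketch) is:

$$ b = c - a, \qquad B_n = \int_A b \, \mathrm{d}A $$

where $b$ is the specific balance at a point on the glacier surface (accumulation $c$ minus ablation $a$, usually expressed in metres of water equivalent per year) and $B_n$ is the net balance integrated over the glacier area $A$. A glacier with a persistently negative $B_n$ is losing mass, which corresponds to the retreat described in the next paragraph.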
Following the Little Ice Age 's end around 1850, glaciers around the Earth have retreated substantially. A slight cooling led to the advance of many alpine glaciers between 1950 and 1985, but since 1985 glacier retreat and mass loss has become larger and increasingly ubiquitous.
Glaciers move, or flow, downhill due to gravity and the internal deformation of ice. Ice behaves like a brittle solid until its thickness exceeds about 50 m (160 ft). The pressure on ice deeper than 50 m causes plastic flow. At the molecular level, ice consists of stacked layers of molecules with relatively weak bonds between layers. When the stress on the layer above exceeds the inter-layer binding strength, it moves faster than the layer below.
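The plastic flow described above is conventionally modelled with Glen's flow law, a standard constitutive relation in glaciology that this article does not state; as a sketch:

$$ \dot{\varepsilon} = A \, \tau^{n}, \qquad n \approx 3 $$

where $\dot{\varepsilon}$ is the shear strain rate, $\tau$ the shear stress and $A$ a temperature - dependent softness parameter. Because stress grows with depth, the roughly cubic dependence concentrates deformation near the bed, which is consistent with the rigid upper "fracture zone '' discussed below.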
Glaciers also move through basal sliding. In this process, a glacier slides over the terrain on which it sits, lubricated by the presence of liquid water. The water is created from ice that melts under high pressure from frictional heating. Basal sliding is dominant in temperate, or warm - based glaciers.
Although evidence in favour of glacial flow was known by the early 19th century, other theories of glacial motion were advanced, such as the idea that melt water, refreezing inside glaciers, caused the glacier to dilate and extend its length. As it became clear that glaciers behaved to some degree as if the ice were a viscous fluid, it was argued that "regelation '', or the melting and refreezing of ice at a temperature lowered by the pressure on the ice inside the glacier, was what allowed the ice to deform and flow. James Forbes came up with the essentially correct explanation in the 1840s, although it was several decades before it was fully accepted.
The top 50 m (160 ft) of a glacier are rigid because they are under low pressure. This upper section is known as the fracture zone and moves mostly as a single unit over the plastically flowing lower section. When a glacier moves through irregular terrain, cracks called crevasses develop in the fracture zone. Crevasses form due to differences in glacier velocity. If two rigid sections of a glacier move at different speeds and directions, shear forces cause them to break apart, opening a crevasse. Crevasses are seldom more than 46 m (150 ft) deep but in some cases can be 300 m (1,000 ft) or even deeper. Beneath this point, the plasticity of the ice is too great for cracks to form. Intersecting crevasses can create isolated peaks in the ice, called seracs.
Crevasses can form in several different ways. Transverse crevasses are transverse to flow and form where steeper slopes cause a glacier to accelerate. Longitudinal crevasses form semi-parallel to flow where a glacier expands laterally. Marginal crevasses form from the edge of the glacier, due to the reduction in speed caused by friction of the valley walls. Marginal crevasses are usually largely transverse to flow. Moving glacier ice can sometimes separate from stagnant ice above, forming a bergschrund. Bergschrunds resemble crevasses but are singular features at a glacier 's margins.
Crevasses make travel over glaciers hazardous, especially when they are hidden by fragile snow bridges.
Below the equilibrium line, glacial meltwater is concentrated in stream channels. Meltwater can pool in proglacial lakes on top of a glacier or descend into the depths of a glacier via moulins. Streams within or beneath a glacier flow in englacial or sub-glacial tunnels. These tunnels sometimes reemerge at the glacier 's surface.
The speed of glacial displacement is partly determined by friction. Friction makes the ice at the bottom of the glacier move more slowly than ice at the top. In alpine glaciers, friction is also generated at the valley 's side walls, which slows the edges relative to the center.
Mean speeds vary greatly, but are typically around 1 m (3 ft) per day. There may be no motion in stagnant areas; for example, in parts of Alaska, trees can establish themselves on surface sediment deposits. In other cases, glaciers can move as fast as 20 -- 30 m (70 -- 100 ft) per day, such as in Greenland 's Jakobshavn Isbræ (Greenlandic: Sermeq Kujalleq). Velocity increases with increasing slope, increasing thickness, increasing snowfall, increasing longitudinal confinement, increasing basal temperature, increasing meltwater production and reduced bed hardness.
A few glaciers have periods of very rapid advancement called surges. These glaciers exhibit normal movement until suddenly they accelerate, then return to their previous state. During these surges, the glacier may reach velocities far greater than normal speed. These surges may be caused by failure of the underlying bedrock, the pooling of meltwater at the base of the glacier -- perhaps delivered from a supraglacial lake -- or the simple accumulation of mass beyond a critical "tipping point ''. Temporary rates up to 90 m (300 ft) per day have occurred when increased temperature or overlying pressure caused bottom ice to melt and water to accumulate beneath a glacier.
In glaciated areas where the glacier moves faster than one km per year, glacial earthquakes occur. These are large scale earthquakes that have seismic magnitudes as high as 6.1. The number of glacial earthquakes in Greenland peaks every year in July, August and September and is increasing over time. In a study using data from January 1993 through October 2005, more events were detected every year since 2002, and twice as many events were recorded in 2005 as there were in any other year. This increase in the numbers of glacial earthquakes in Greenland may be a response to global warming.
Ogives are alternating wave crests and valleys that appear as dark and light bands of ice on glacier surfaces. They are linked to seasonal motion of glaciers; the width of one dark and one light band generally equals the annual movement of the glacier. Ogives are formed when ice from an icefall is severely broken up, increasing ablation surface area during summer. This creates a swale and space for snow accumulation in the winter, which in turn creates a ridge. Sometimes ogives consist only of undulations or color bands and are described as wave ogives or band ogives.
Glaciers are present on every continent and approximately fifty countries, excluding those (Australia, South Africa) that have glaciers only on distant subantarctic island territories. Extensive glaciers are found in Antarctica, Chile, Canada, Alaska, Greenland and Iceland. Mountain glaciers are widespread, especially in the Andes, the Himalayas, the Rocky Mountains, the Caucasus, Scandinavian mountains and the Alps. Mainland Australia currently contains no glaciers, although a small glacier on Mount Kosciuszko was present in the last glacial period. In New Guinea, small, rapidly diminishing, glaciers are located on its highest summit massif of Puncak Jaya. Africa has glaciers on Mount Kilimanjaro in Tanzania, on Mount Kenya and in the Rwenzori Mountains. Oceanic islands with glaciers include Iceland, several of the islands off the coast of Norway including Svalbard and Jan Mayen to the far North, New Zealand and the subantarctic islands of Marion, Heard, Grande Terre (Kerguelen) and Bouvet. During glacial periods of the Quaternary, Taiwan, Hawaii on Mauna Kea and Tenerife also had large alpine glaciers, while the Faroe and Crozet Islands were completely glaciated.
The permanent snow cover necessary for glacier formation is affected by factors such as the degree of slope on the land, amount of snowfall and the winds. Glaciers can be found in all latitudes except from 20 ° to 27 ° north and south of the equator where the presence of the descending limb of the Hadley circulation lowers precipitation so much that with high insolation snow lines reach above 6,500 m (21,330 ft). Between 19 ° N and 19 ° S, however, precipitation is higher and the mountains above 5,000 m (16,400 ft) usually have permanent snow.
Even at high latitudes, glacier formation is not inevitable. Areas of the Arctic, such as Banks Island, and the McMurdo Dry Valleys in Antarctica are considered polar deserts where glaciers can not form because they receive little snowfall despite the bitter cold. Cold air, unlike warm air, is unable to transport much water vapor. Even during glacial periods of the Quaternary, Manchuria, lowland Siberia, and central and northern Alaska, though extraordinarily cold, had such light snowfall that glaciers could not form.
In addition to the dry, unglaciated polar regions, some mountains and volcanoes in Bolivia, Chile and Argentina are high (4,500 to 6,900 m or 14,800 to 22,600 ft) and cold, but the relative lack of precipitation prevents snow from accumulating into glaciers. This is because these peaks are located near or in the hyperarid Atacama Desert.
Glaciers erode terrain through two principal processes: abrasion and plucking.
As glaciers flow over bedrock, they loosen and lift blocks of rock into the ice. This process, called plucking, is caused by subglacial water that penetrates fractures in the bedrock and subsequently freezes and expands. This expansion causes the ice to act as a lever that loosens the rock by lifting it. Thus, sediments of all sizes become part of the glacier 's load. If a retreating glacier gains enough debris, it may become a rock glacier, like the Timpanogos Glacier in Utah.
Abrasion occurs when the ice and its load of rock fragments slide over bedrock and function as sandpaper, smoothing and polishing the bedrock below. The pulverized rock this process produces is called rock flour and is made up of rock grains between 0.002 and 0.00625 mm in size. Abrasion leads to steeper valley walls and mountain slopes in alpine settings, which can cause avalanches and rock slides, which add even more material to the glacier.
Glacial abrasion is commonly characterized by glacial striations. Glaciers produce these when they contain large boulders that carve long scratches in the bedrock. By mapping the direction of the striations, researchers can determine the direction of the glacier 's movement. Similar to striations are chatter marks, lines of crescent - shaped depressions in the rock underlying a glacier. They are formed by abrasion when boulders in the glacier are repeatedly caught and released as they are dragged along the bedrock.
The rate of glacier erosion varies. Six factors control the erosion rate: the velocity of glacial movement; the thickness of the ice; the shape, abundance and hardness of the rock fragments contained in the ice at the bottom of the glacier; the relative ease of erosion of the surface under the glacier; the thermal conditions at the glacier base; and the permeability and water pressure at the glacier base.
When the bedrock has frequent fractures on the surface, glacial erosion rates tend to increase as plucking is the main erosive force on the surface; when the bedrock has wide gaps between sporadic fractures, however, abrasion tends to be the dominant erosive form and glacial erosion rates become slow.
Glaciers in lower latitudes tend to be much more erosive than glaciers in higher latitudes, because they have more meltwater reaching the glacial base and facilitate sediment production and transport under the same moving speed and amount of ice.
Material that becomes incorporated in a glacier is typically carried as far as the zone of ablation before being deposited. Glacial deposits are of two distinct types: glacial till, material deposited directly by the ice, which is typically unsorted; and fluvial or outwash sediments, material deposited by meltwater, which is typically sorted by particle size.
Larger pieces of rock that are encrusted in till or deposited on the surface are called "glacial erratics ''. They range in size from pebbles to boulders, but as they are often moved great distances, they may be drastically different from the material upon which they are found. Patterns of glacial erratics hint at past glacial motions.
Glacial moraines are formed by the deposition of material from a glacier and are exposed after the glacier has retreated. They usually appear as linear mounds of till, a non-sorted mixture of rock, gravel and boulders within a matrix of a fine powdery material. Terminal or end moraines are formed at the foot or terminal end of a glacier. Lateral moraines are formed on the sides of the glacier. Medial moraines are formed when two different glaciers merge and the lateral moraines of each coalesce to form a moraine in the middle of the combined glacier. Less apparent are ground moraines, also called glacial drift, which often blankets the surface underneath the glacier downslope from the equilibrium line.
The term moraine is of French origin. It was coined by peasants to describe alluvial embankments and rims found near the margins of glaciers in the French Alps. In modern geology, the term is used more broadly, and is applied to a series of formations, all of which are composed of till. Moraines can also create moraine dammed lakes.
Drumlins are asymmetrical, canoe shaped hills made mainly of till. Their heights vary from 15 to 50 meters and they can reach a kilometer in length. The steepest side of the hill faces the direction from which the ice advanced (stoss), while a longer slope is left in the ice 's direction of movement (lee).
Drumlins are found in groups called drumlin fields or drumlin camps. One of these fields is found east of Rochester, New York; it is estimated to contain about 10,000 drumlins.
Although the process that forms drumlins is not fully understood, their shape implies that they are products of the plastic deformation zone of ancient glaciers. It is believed that many drumlins were formed when glaciers advanced over and altered the deposits of earlier glaciers.
Before glaciation, mountain valleys have a characteristic "V '' shape, produced by eroding water. During glaciation, these valleys are often widened, deepened and smoothed to form a "U '' - shaped glacial valley or glacial trough, as it is sometimes called. The erosion that creates glacial valleys truncates any spurs of rock or earth that may have earlier extended across the valley, creating broadly triangular - shaped cliffs called truncated spurs. Within glacial valleys, depressions created by plucking and abrasion can be filled by lakes, called paternoster lakes. If a glacial valley runs into a large body of water, it forms a fjord.
Typically glaciers deepen their valleys more than their smaller tributaries. Therefore, when glaciers recede, the valleys of the tributary glaciers remain above the main glacier 's depression and are called hanging valleys.
At the start of a classic valley glacier is a bowl - shaped cirque, which has escarped walls on three sides but is open on the side that descends into the valley. Cirques are where ice begins to accumulate in a glacier. Two glacial cirques may form back to back and erode their backwalls until only a narrow ridge, called an arête is left. This structure may result in a mountain pass. If multiple cirques encircle a single mountain, they create pointed pyramidal peaks; particularly steep examples are called horns.
Passage of glacial ice over an area of bedrock may cause the rock to be sculpted into a knoll called a roche moutonnée, or "sheepback '' rock. Roches moutonnées may be elongated, rounded and asymmetrical in shape. They range in length from less than a meter to several hundred meters long. Roches moutonnées have a gentle slope on their up - glacier sides and a steep to vertical face on their down - glacier sides. The glacier abrades the smooth slope on the upstream side as it flows along, but tears rock fragments loose and carries them away from the downstream side via plucking.
As the water that rises from the ablation zone moves away from the glacier, it carries fine eroded sediments with it. As the speed of the water decreases, so does its capacity to carry objects in suspension. The water thus gradually deposits the sediment as it runs, creating an alluvial plain. When this phenomenon occurs in a valley, it is called a valley train. When the deposition is in an estuary, the sediments are known as bay mud.
Outwash plains and valley trains are usually accompanied by basins known as "kettles ''. These are small lakes formed when large ice blocks that are trapped in alluvium melt and produce water - filled depressions. Kettle diameters range from 5 m to 13 km, with depths of up to 45 meters. Most are circular in shape because the blocks of ice that formed them were rounded as they melted.
When a glacier 's size shrinks below a critical point, its flow stops and it becomes stationary. Meanwhile, meltwater within and beneath the ice leaves stratified alluvial deposits. These deposits, in the forms of columns, terraces and clusters, remain after the glacier melts and are known as "glacial deposits ''.
Glacial deposits that take the shape of hills or mounds are called kames. Some kames form when meltwater deposits sediments through openings in the interior of the ice. Others are produced by fans or deltas created by meltwater. When the glacial ice occupies a valley, it can form terraces or kames along the sides of the valley.
Long, sinuous glacial deposits are called eskers. Eskers are composed of sand and gravel that was deposited by meltwater streams that flowed through ice tunnels within or beneath a glacier. They remain after the ice melts, with heights exceeding 100 meters and lengths of as long as 100 km.
Very fine glacial sediments or rock flour is often picked up by wind blowing over the bare surface and may be deposited great distances from the original fluvial deposition site. These eolian loess deposits may be very deep, even hundreds of meters, as in areas of China and the Midwestern United States of America. Katabatic winds can be important in this process.
Large masses, such as ice sheets or glaciers, can depress the crust of the Earth into the mantle. The depression usually totals a third of the ice sheet or glacier 's thickness. After the ice sheet or glacier melts, the mantle begins to flow back to its original position, pushing the crust back up. This post-glacial rebound, which proceeds very slowly after the melting of the ice sheet or glacier, is currently occurring in measurable amounts in Scandinavia and the Great Lakes region of North America.
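The "third of the ice sheet or glacier 's thickness '' figure follows from a simple isostasy estimate; the densities used below are standard reference values rather than numbers taken from this article:

$$ d \approx \frac{\rho_{\mathrm{ice}}}{\rho_{\mathrm{mantle}}} \, h \approx \frac{917}{3300} \, h \approx 0.28 \, h $$

so, for example, an ice sheet 3 km thick would depress the crust by roughly 0.8 -- 0.9 km.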
A geomorphological feature created by the same process on a smaller scale is known as dilation - faulting. It occurs where previously compressed rock is allowed to return to its original shape more rapidly than can be maintained without faulting. This leads to an effect similar to what would be seen if the rock were hit by a large hammer. Dilation faulting can be observed in recently de-glaciated parts of Iceland and Cumbria.
The polar ice caps of Mars show geologic evidence of glacial deposits. The south polar cap is especially comparable to glaciers on Earth. Topographical features and computer models indicate the existence of more glaciers in Mars ' past.
At mid-latitudes, between 35 ° and 65 ° north or south, Martian glaciers are affected by the thin Martian atmosphere. Because of the low atmospheric pressure, ablation near the surface is solely due to sublimation, not melting. As on Earth, many glaciers are covered with a layer of rocks which insulates the ice. A radar instrument on board the Mars Reconnaissance Orbiter found ice under a thin layer of rocks in formations called lobate debris aprons (LDAs).
The pictures below illustrate how landscape features on Mars closely resemble those on the Earth.
Romer Lake 's Elephant Foot Glacier in the Earth 's Arctic, as seen by Landsat 8. This picture shows several glaciers that have the same shape as many features on Mars that are believed to also be glaciers. The next three images from Mars show shapes similar to the Elephant Foot Glacier.
Mesa in Ismenius Lacus quadrangle, as seen by CTX. Mesa has several glaciers eroding it. One of the glaciers is seen in greater detail in the next two images from HiRISE. Image from Ismenius Lacus quadrangle.
Glacier as seen by HiRISE under the HiWish program. Area in rectangle is enlarged in the next photo. Zone of accumulation of snow at the top. Glacier is moving down valley, then spreading out on plain. Evidence for flow comes from the many lines on surface. Location is in Protonilus Mensae in Ismenius Lacus quadrangle.
Enlargement of area in rectangle of the previous image. On Earth the ridge would be called the terminal moraine of an alpine glacier. Picture taken with HiRISE under the HiWish program. Image from Ismenius Lacus quadrangle.
|
where is copper found in the earth's crust | Copper - wikipedia
Copper is a chemical element with symbol Cu (from Latin: cuprum) and atomic number 29. It is a soft, malleable, and ductile metal with very high thermal and electrical conductivity. A freshly exposed surface of pure copper has a reddish - orange color. Copper is used as a conductor of heat and electricity, as a building material, and as a constituent of various metal alloys, such as sterling silver used in jewelry, cupronickel used to make marine hardware and coins, and constantan used in strain gauges and thermocouples for temperature measurement.
Copper is one of the few metals that occur in nature in directly usable metallic form (native metals) as opposed to needing extraction from an ore. This led to very early human use, from c. 8000 BC. It was the first metal to be smelted from its ore, c. 5000 BC, the first metal to be cast into a shape in a mold, c. 4000 BC and the first metal to be purposefully alloyed with another metal, tin, to create bronze, c. 3500 BC.
In the Roman era, copper was principally mined on Cyprus, the origin of the name of the metal, from aes cyprium (metal of Cyprus), later corrupted to cuprum, from which the words copper (English), cuivre (French), cobre (Spanish), Koper (Dutch) and Kupfer (German) are all derived. The commonly encountered compounds are copper (II) salts, which often impart blue or green colors to such minerals as azurite, malachite, and turquoise, and have been used widely and historically as pigments. Copper used in buildings, usually for roofing, oxidizes to form a green verdigris (or patina). Copper is sometimes used in decorative art, both in its elemental metal form and in compounds as pigments. Copper compounds are used as bacteriostatic agents, fungicides, and wood preservatives.
Copper is essential to all living organisms as a trace dietary mineral because it is a key constituent of the respiratory enzyme complex cytochrome c oxidase. In molluscs and crustaceans, copper is a constituent of the blood pigment hemocyanin, replaced by the iron - complexed hemoglobin in fish and other vertebrates. In humans, copper is found mainly in the liver, muscle, and bone. The adult body contains between 1.4 and 2.1 mg of copper per kilogram of body weight.
Copper, silver, and gold are in group 11 of the periodic table; these three metals have one s - orbital electron on top of a filled d - electron shell and are characterized by high ductility, and electrical and thermal conductivity. The filled d - shells in these elements contribute little to interatomic interactions, which are dominated by the s - electrons through metallic bonds. Unlike metals with incomplete d - shells, metallic bonds in copper are lacking a covalent character and are relatively weak. This observation explains the low hardness and high ductility of single crystals of copper. At the macroscopic scale, introduction of extended defects to the crystal lattice, such as grain boundaries, hinders flow of the material under applied stress, thereby increasing its hardness. For this reason, copper is usually supplied in a fine - grained polycrystalline form, which has greater strength than monocrystalline forms.
The softness of copper partly explains its high electrical conductivity (59.6 × 10⁶ S / m) and high thermal conductivity, second highest (second only to silver) among pure metals at room temperature. This is because the resistivity to electron transport in metals at room temperature originates primarily from scattering of electrons on thermal vibrations of the lattice, which are relatively weak in a soft metal. The maximum permissible current density of copper in open air is approximately 3.1 × 10⁶ A / m² of cross-sectional area, above which it begins to heat excessively.
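To put the quoted maximum current density into context, a rough worked example for a familiar wire size (this uses only the 3.1 × 10⁶ A / m² figure above; practical ampacity ratings also depend on insulation and installation conditions):

$$ I_{\max} \approx J_{\max} \times A_{\text{cross}} = 3.1 \times 10^{6}\ \mathrm{A/m^2} \times 1 \times 10^{-6}\ \mathrm{m^2} \approx 3.1\ \mathrm{A} $$

for a bare conductor with a 1 mm² cross - section in open air.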
Copper is one of a few metallic elements with a natural color other than gray or silver. Pure copper is orange - red and acquires a reddish tarnish when exposed to air. The characteristic color of copper results from the electronic transitions between the filled 3d and half - empty 4s atomic shells -- the energy difference between these shells corresponds to orange light.
As with other metals, if copper is put in contact with another metal, galvanic corrosion will occur.
Copper does not react with water, but it does slowly react with atmospheric oxygen to form a layer of brown - black copper oxide which, unlike the rust that forms on iron in moist air, protects the underlying metal from further corrosion (passivation). A green layer of verdigris (copper carbonate) can often be seen on old copper structures, such as the roofing of many older buildings and the Statue of Liberty. Copper tarnishes when exposed to some sulfur compounds, with which it reacts to form various copper sulfides.
There are 29 isotopes of copper. ⁶³Cu and ⁶⁵Cu are stable, with ⁶³Cu comprising approximately 69 % of naturally occurring copper; both have a spin of 3⁄2. The other isotopes are radioactive, with the most stable being ⁶⁷Cu with a half - life of 61.83 hours. Seven metastable isotopes have been characterized; ⁶⁸ᵐCu is the longest - lived with a half - life of 3.8 minutes. Isotopes with a mass number above 64 decay by β⁻, whereas those with a mass number below 64 decay by β⁺. ⁶⁴Cu, which has a half - life of 12.7 hours, decays both ways.
⁶⁴Cu and ⁶²Cu have significant applications. ⁶²Cu is used in ⁶²Cu - PTSM as a radioactive tracer for positron emission tomography.
Copper is produced in massive stars and is present in the Earth 's crust in a proportion of about 50 parts per million (ppm). It occurs as native copper, in the copper sulfides chalcopyrite and chalcocite, in the copper carbonates azurite and malachite, and in the copper (I) oxide mineral cuprite. The largest mass of elemental copper discovered weighed 420 tonnes and was found in 1857 on the Keweenaw Peninsula in Michigan, US. Native copper is a polycrystal, with the largest single crystal ever described measuring 4.4 × 3.2 × 3.2 cm.
Most copper is mined or extracted as copper sulfides from large open pit mines in porphyry copper deposits that contain 0.4 to 1.0 % copper. Sites include Chuquicamata, in Chile, Bingham Canyon Mine, in Utah, United States, and El Chino Mine, in New Mexico, United States. According to the British Geological Survey, in 2005, Chile was the top producer of copper with at least one - third of the world share followed by the United States, Indonesia and Peru. Copper can also be recovered through the in - situ leach process. Several sites in the state of Arizona are considered prime candidates for this method. The amount of copper in use is increasing and the quantity available is barely sufficient to allow all countries to reach developed world levels of usage.
Copper has been in use at least 10,000 years, but more than 95 % of all copper ever mined and smelted has been extracted since 1900, and more than half was extracted in the last 24 years. As with many natural resources, the total amount of copper on Earth is vast, with around 10¹⁴ tons in the top kilometer of Earth 's crust, which is about 5 million years ' worth at the current rate of extraction. However, only a tiny fraction of these reserves is economically viable with present - day prices and technologies. Estimates of copper reserves available for mining vary from 25 to 60 years, depending on core assumptions such as the growth rate. Recycling is a major source of copper in the modern world. Because of these and other factors, the future of copper production and supply is the subject of much debate, including the concept of peak copper, analogous to peak oil.
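Those two figures are mutually consistent: 10¹⁴ tons spread over about 5 × 10⁶ years corresponds to roughly 2 × 10⁷ tons per year, which matches the order of magnitude of present - day world mine production (on the order of 20 million tonnes per year).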
The price of copper has historically been unstable, and its price increased from the 60 - year low of US $0.60 / lb (US $1.32 / kg) in June 1999 to $3.75 per pound ($8.27 / kg) in May 2006. It dropped to $2.40 / lb ($5.29 / kg) in February 2007, then rebounded to $3.50 / lb ($7.71 / kg) in April 2007. In February 2009, weakening global demand and a steep fall in commodity prices since the previous year 's highs left copper prices at $1.51 / lb ($3.32 / kg).
The concentration of copper in ores averages only 0.6 %, and most commercial ores are sulfides, especially chalcopyrite (CuFeS₂) and to a lesser extent chalcocite (Cu₂S). These minerals are concentrated from crushed ores to the level of 10 -- 15 % copper by froth flotation or bioleaching. Heating this material with silica in flash smelting removes much of the iron as slag. The process exploits the greater ease of converting iron sulfides into oxides, which in turn react with the silica to form the silicate slag that floats on top of the heated mass. The resulting copper matte, consisting of Cu₂S, is roasted to convert all sulfides into oxides:
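2 Cu₂S + 3 O₂ → 2 Cu₂O + 2 SO₂
This is the standard textbook form of the roasting step; in practice the feed also contains iron sulfides and other phases.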
The cuprous oxide is converted to blister copper upon heating:
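2 Cu₂O → 4 Cu + O₂
A fuller description of converter practice also includes the reaction of remaining Cu₂S with Cu₂O (Cu₂S + 2 Cu₂O → 6 Cu + SO₂); the one - line form above is the commonly quoted simplification.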
The Sudbury matte process converted only half the sulfide to oxide and then used this oxide to remove the rest of the sulfur as oxide. It was then electrolytically refined and the anode mud exploited for the platinum and gold it contained. This step exploits the relatively easy reduction of copper oxides to copper metal. Natural gas is blown across the blister to remove most of the remaining oxygen and electrorefining is performed on the resulting material to produce pure copper:
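A simplified sketch of the electrorefining half - reactions:
At the anode: Cu (impure) → Cu²⁺ + 2 e⁻
At the cathode: Cu²⁺ + 2 e⁻ → Cu (pure)
Noble impurities such as silver, gold and platinum do not dissolve at the anode and collect in the anode mud mentioned above.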
Like aluminium, copper is recyclable without any loss of quality, both from raw state and from manufactured products. In volume, copper is the third most recycled metal after iron and aluminium. An estimated 80 % of all copper ever mined is still in use today. According to the International Resource Panel 's Metal Stocks in Society report, the global per capita stock of copper in use in society is 35 -- 55 kg. Much of this is in more - developed countries (140 -- 300 kg per capita) rather than less - developed countries (30 -- 40 kg per capita).
The process of recycling copper is roughly the same as is used to extract copper but requires fewer steps. High - purity scrap copper is melted in a furnace and then reduced and cast into billets and ingots; lower - purity scrap is refined by electroplating in a bath of sulfuric acid.
Numerous copper alloys have been formulated, many with important uses. Brass is an alloy of copper and zinc. Bronze usually refers to copper - tin alloys, but can refer to any alloy of copper such as aluminium bronze. Copper is one of the most important constituents of silver and carat gold solders used in the jewelry industry, modifying the color, hardness and melting point of the resulting alloys. Some lead - free solders consist of tin alloyed with a small proportion of copper and other metals.
The alloy of copper and nickel, called cupronickel, is used in low - denomination coins, often for the outer cladding. The US five - cent coin (currently called a nickel) consists of 75 % copper and 25 % nickel in homogeneous composition. The alloy of 90 % copper and 10 % nickel, remarkable for its resistance to corrosion, is used for various objects exposed to seawater, though it is vulnerable to the sulfides sometimes found in polluted harbors and estuaries. Alloys of copper with aluminium (about 7 %) have a golden color and are used in decorations. Shakudō is a Japanese decorative alloy of copper containing a low percentage of gold, typically 4 -- 10 %, that can be patinated to a dark blue or black color.
Copper forms a rich variety of compounds, usually with oxidation states + 1 and + 2, which are often called cuprous and cupric, respectively.
As with other elements, the simplest compounds of copper are binary compounds, i.e. those containing only two elements, the principal examples being oxides, sulfides, and halides. Both cuprous and cupric oxides are known. Among the numerous copper sulfides, important examples include copper (I) sulfide and copper (II) sulfide.
Cuprous halides (with chlorine, bromine, and iodine) are known, as are cupric halides with fluorine, chlorine, and bromine. Attempts to prepare copper (II) iodide yield only cuprous iodide and iodine.
Copper forms coordination complexes with ligands. In aqueous solution, copper (II) exists as [Cu (H₂O)₆]²⁺. This complex exhibits the fastest water exchange rate (speed of water ligands attaching and detaching) for any transition metal aquo complex. Adding aqueous sodium hydroxide causes the precipitation of light blue solid copper (II) hydroxide. A simplified equation is:
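Cu²⁺ + 2 OH⁻ → Cu (OH)₂
Here the sodium counter-ions are omitted as spectators; this net ionic form is one common way of writing the precipitation.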
Aqueous ammonia results in the same precipitate. Upon adding excess ammonia, the precipitate dissolves, forming tetraamminecopper (II):
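Cu (OH)₂ + 4 NH₃ → [Cu (NH₃)₄]²⁺ + 2 OH⁻
This is one simplified way of writing the reaction; in solution the complex also retains water ligands and is often written [Cu (NH₃)₄ (H₂O)₂]²⁺.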
Many other oxyanions form complexes; these include copper (II) acetate, copper (II) nitrate, and copper (II) carbonate. Copper (II) sulfate forms a blue crystalline pentahydrate, the most familiar copper compound in the laboratory. It is used in a fungicide called the Bordeaux mixture.
Polyols, compounds containing more than one alcohol functional group, generally interact with cupric salts. For example, copper salts are used to test for reducing sugars. Specifically, using Benedict 's reagent and Fehling 's solution the presence of the sugar is signaled by a color change from blue Cu (II) to reddish copper (I) oxide. Schweizer 's reagent and related complexes with ethylenediamine and other amines dissolve cellulose. Amino acids form very stable chelate complexes with copper (II). Many wet - chemical tests for copper ions exist, one involving potassium ferrocyanide, which gives a brown precipitate with copper (II) salts.
Compounds that contain a carbon - copper bond are known as organocopper compounds. They are very reactive towards oxygen to form copper (I) oxide and have many uses in chemistry. They are synthesized by treating copper (I) compounds with Grignard reagents, terminal alkynes or organolithium reagents; in particular, the last reaction described produces a Gilman reagent. These can undergo substitution with alkyl halides to form coupling products; as such, they are important in the field of organic synthesis. Copper (I) acetylide is highly shock - sensitive but is an intermediate in reactions such as the Cadiot - Chodkiewicz coupling and the Sonogashira coupling. Conjugate addition to enones and carbocupration of alkynes can also be achieved with organocopper compounds. Copper (I) forms a variety of weak complexes with alkenes and carbon monoxide, especially in the presence of amine ligands.
Copper (III) is most often found in oxides. A simple example is potassium cuprate, KCuO₂, a blue - black solid. The most extensively studied copper (III) compounds are the cuprate superconductors. Yttrium barium copper oxide (YBa₂Cu₃O₇) consists of both Cu (II) and Cu (III) centres. Like oxide, fluoride is a highly basic anion and is known to stabilize metal ions in high oxidation states. Both copper (III) and even copper (IV) fluorides are known, K₃CuF₆ and Cs₂CuF₆, respectively.
Some copper proteins form oxo complexes, which also feature copper (III). With tetrapeptides, purple - colored copper (III) complexes are stabilized by the deprotonated amide ligands.
Complexes of copper (III) are also found as intermediates in reactions of organocopper compounds, for example in the Kharasch -- Sosnovsky reaction.
Copper occurs naturally as native metallic copper and was known to some of the oldest civilizations on record. The history of copper use dates to 9000 BC in the Middle East; a copper pendant was found in northern Iraq that dates to 8700 BC. Evidence suggests that gold and meteoric iron (but not iron smelting) were the only metals used by humans before copper. The history of copper metallurgy is thought to follow this sequence: First, cold working of native copper, then annealing, smelting, and, finally, lost - wax casting. In southeastern Anatolia, all four of these techniques appear more or less simultaneously at the beginning of the Neolithic c. 7500 BC.
Copper smelting was independently invented in different places. It was probably discovered in China before 2800 BC, in Central America around 600 AD, and in West Africa about the 9th or 10th century AD. Investment casting was invented in 4500 -- 4000 BC in Southeast Asia and carbon dating has established mining at Alderley Edge in Cheshire, UK, at 2280 to 1890 BC. Ötzi the Iceman, a male dated from 3300 -- 3200 BC, was found with an axe with a copper head 99.7 % pure; high levels of arsenic in his hair suggest an involvement in copper smelting. Experience with copper has assisted the development of other metals; in particular, copper smelting led to the discovery of iron smelting. Production in the Old Copper Complex in Michigan and Wisconsin is dated between 6000 and 3000 BC. Natural bronze, a type of copper made from ores rich in silicon, arsenic, and (rarely) tin, came into general use in the Balkans around 5500 BC.
Alloying copper with tin to make bronze was first practiced about 4000 years after the discovery of copper smelting, and about 2000 years after "natural bronze '' had come into general use. Bronze artifacts from the Vinča culture date to 4500 BC. Sumerian and Egyptian artifacts of copper and bronze alloys date to 3000 BC. The Bronze Age began in Southeastern Europe around 3700 -- 3300 BC, in Northwestern Europe about 2500 BC. It ended with the beginning of the Iron Age, 2000 -- 1000 BC in the Near East, and 600 BC in Northern Europe. The transition between the Neolithic period and the Bronze Age was formerly termed the Chalcolithic period (copper - stone), when copper tools were used with stone tools. The term has gradually fallen out of favor because in some parts of the world, the Chalcolithic and Neolithic are coterminous at both ends. Brass, an alloy of copper and zinc, is of much more recent origin. It was known to the Greeks, but became a significant supplement to bronze during the Roman Empire.
In Greece, copper was known by the name chalkos (χαλκός). It was an important resource for the Romans, Greeks and other ancient peoples. In Roman times, it was known as aes Cyprium, aes being the generic Latin term for copper alloys and Cyprium from Cyprus, where much copper was mined. The phrase was simplified to cuprum, hence the English copper. Aphrodite (Venus in Rome) represented copper in mythology and alchemy because of its lustrous beauty and its ancient use in producing mirrors; Cyprus was sacred to the goddess. The seven heavenly bodies known to the ancients were associated with the seven metals known in antiquity, and Venus was assigned to copper.
Copper was first used in ancient Britain in about the 3rd or 2nd Century BC. In North America, copper mining began with marginal workings by Native Americans. Native copper is known to have been extracted from sites on Isle Royale with primitive stone tools between 800 and 1600. Copper metallurgy was flourishing in South America, particularly in Peru around 1000 AD. Copper burial ornamentals from the 15th century have been uncovered, but the metal 's commercial production did not start until the early 20th century.
The cultural role of copper has been important, particularly in currency. Romans in the 6th through 3rd centuries BC used copper lumps as money. At first, the copper itself was valued, but gradually the shape and look of the copper became more important. Julius Caesar had his own coins made from brass, while Octavianus Augustus Caesar 's coins were made from Cu - Pb - Sn alloys. With an estimated annual output of around 15,000 t, Roman copper mining and smelting activities reached a scale unsurpassed until the time of the Industrial Revolution; the provinces most intensely mined were those of Hispania, Cyprus and in Central Europe.
The gates of the Temple of Jerusalem used Corinthian bronze treated with depletion gilding. The process was most prevalent in Alexandria, where alchemy is thought to have begun. In ancient India, copper was used in the holistic medical science Ayurveda for surgical instruments and other medical equipment. Ancient Egyptians (~ 2400 BC) used copper for sterilizing wounds and drinking water, and later to treat headaches, burns, and itching.
The Great Copper Mountain was a mine in Falun, Sweden, that operated from the 10th century to 1992. It satisfied two thirds of Europe 's copper consumption in the 17th century and helped fund many of Sweden 's wars during that time. It was referred to as the nation 's treasury; Sweden had a copper backed currency.
Copper is used in roofing, currency, and for photographic technology known as the daguerreotype. Copper was used in Renaissance sculpture, and was used to construct the Statue of Liberty; copper continues to be used in construction of various types. Copper plating and copper sheathing were widely used to protect the under - water hulls of ships, a technique pioneered by the British Admiralty in the 18th century. The Norddeutsche Affinerie in Hamburg was the first modern electroplating plant, starting its production in 1876. The German scientist Gottfried Osann invented powder metallurgy in 1830 while determining the metal 's atomic mass; around then it was discovered that the amount and type of alloying element (e.g., tin) to copper would affect bell tones. Flash smelting was developed by Outokumpu in Finland and first applied at Harjavalta in 1949; the energy - efficient process accounts for 50 % of the world 's primary copper production.
The Intergovernmental Council of Copper Exporting Countries, formed in 1967 by Chile, Peru, Zaire and Zambia, operated in the copper market as OPEC does in oil, though it never achieved the same influence, particularly because the second - largest producer, the United States, was never a member; it was dissolved in 1988.
The major applications of copper are electrical wire (60 %), roofing and plumbing (20 %), and industrial machinery (15 %). Copper is used mostly as a pure metal, but when greater hardness is required, it is put into such alloys as brass and bronze (5 % of total use). For more than two centuries, copper paint has been used on boat hulls to control the growth of plants and shellfish. A small part of the copper supply is used for nutritional supplements and fungicides in agriculture. Machining of copper is possible, although alloys are preferred for good machinability in creating intricate parts.
Despite competition from other materials, copper remains the preferred electrical conductor in nearly all categories of electrical wiring except overhead electric power transmission where aluminium is often preferred. Copper wire is used in power generation, power transmission, power distribution, telecommunications, electronics circuitry, and countless types of electrical equipment. Electrical wiring is the most important market for the copper industry. This includes structural power wiring, power distribution cable, appliance wire, communications cable, automotive wire and cable, and magnet wire. Roughly half of all copper mined is used for electrical wire and cable conductors. Many electrical devices rely on copper wiring because of its multitude of inherent beneficial properties, such as its high electrical conductivity, tensile strength, ductility, creep (deformation) resistance, corrosion resistance, low thermal expansion, high thermal conductivity, ease of soldering, malleability, and ease of installation.
For a short period from the late 1960s to the late 1970s, copper wiring was replaced by aluminium wiring in many housing construction projects in America. The new wiring was implicated in a number of house fires and the industry returned to copper.
Integrated circuits and printed circuit boards increasingly feature copper in place of aluminium because of its superior electrical conductivity; heat sinks and heat exchangers use copper because of its superior heat dissipation properties. Electromagnets, vacuum tubes, cathode ray tubes, and magnetrons in microwave ovens use copper, as do waveguides for microwave radiation.
Copper 's superior conductivity enhances the efficiency of electrical motors. This is important because motors and motor - driven systems account for 43 % -- 46 % of all global electricity consumption and 69 % of all electricity used by industry. Increasing the mass and cross section of copper in a coil increases the efficiency of the motor. Copper motor rotors, a new technology designed for motor applications where energy savings are prime design objectives, are enabling general - purpose induction motors to meet and exceed National Electrical Manufacturers Association (NEMA) premium efficiency standards.
Copper 's conductivity serves as the 100 % benchmark for electrical conductors (among pure metals only silver conducts better), and the material is malleable, ductile and responsive to precision tooling. These features coupled with the metal 's natural corrosion resistance to industrial atmospheres make it a commonly used material in custom metal stamping projects. Some copper automotive components include reel to reel terminals, conductive lead frames, grids and wire forms.
Copper has been used since ancient times as a durable, corrosion resistant, and weatherproof architectural material. Roofs, flashings, rain gutters, downspouts, domes, spires, vaults, and doors have been made from copper for hundreds or thousands of years. Copper 's architectural use has been expanded in modern times to include interior and exterior wall cladding, building expansion joints, radio frequency shielding, and antimicrobial and decorative indoor products such as attractive handrails, bathroom fixtures, and counter tops. Some of copper 's other important benefits as an architectural material include low thermal movement, light weight, lightning protection, and recyclability.
The metal 's distinctive natural green patina has long been coveted by architects and designers. The final patina is a particularly durable layer that is highly resistant to atmospheric corrosion, thereby protecting the underlying metal against further weathering. It can be a mixture of carbonate and sulfate compounds in various amounts, depending upon environmental conditions such as sulfur - containing acid rain. Architectural copper and its alloys can also be ' finished ' to take on a particular look, feel, or color. Finishes include mechanical surface treatments, chemical coloring, and coatings.
Copper has excellent brazing and soldering properties and can be welded; the best results are obtained with gas metal arc welding.
Copper is biostatic, meaning bacteria and many other forms of life will not grow on it. For this reason it has long been used to line parts of ships to protect against barnacles and mussels. It was originally used pure, but has since been superseded by Muntz metal and copper - based paint. Similarly, as discussed in copper alloys in aquaculture, copper alloys have become important netting materials in the aquaculture industry because they are antimicrobial and prevent biofouling, even in extreme conditions and have strong structural and corrosion - resistant properties in marine environments.
Copper - alloy touch surfaces have natural properties that destroy a wide range of microorganisms (e.g., E. coli O157: H7, methicillin - resistant Staphylococcus aureus (MRSA), Staphylococcus, Clostridium difficile, influenza A virus, adenovirus, and fungi). Some 355 copper alloys were proven to kill more than 99.9 % of disease - causing bacteria within just two hours when cleaned regularly. The United States Environmental Protection Agency (EPA) has approved the registrations of these copper alloys as "antimicrobial materials with public health benefits ''; that approval allows manufacturers to make legal claims to the public health benefits of products made of registered alloys. In addition, the EPA has approved a long list of antimicrobial copper products made from these alloys, such as bedrails, handrails, over-bed tables, sinks, faucets, door knobs, toilet hardware, computer keyboards, health club equipment, and shopping cart handles (for a comprehensive list, see: Antimicrobial copper - alloy touch surfaces # Approved products). Copper doorknobs are used by hospitals to reduce the transfer of disease, and Legionnaires ' disease is suppressed by copper tubing in plumbing systems. Antimicrobial copper alloy products are now being installed in healthcare facilities in the U.K., Ireland, Japan, Korea, France, Denmark, and Brazil and in the subway transit system in Santiago, Chile, where copper - zinc alloy handrails will be installed in some 30 stations between 2011 and 2014.
Copper is commonly used in jewelry, and according to some folklore, copper bracelets relieve arthritis symptoms. In one trial for osteoarthritis and one trial for rheumatoid arthritis, no difference was found between copper bracelets and control (non-copper) bracelets. No evidence shows that copper can be absorbed through the skin. If it were, it might lead to copper poisoning.
Recently, some compression clothing with inter-woven copper has been marketed with health claims similar to the folk medicine claims. Because compression clothing is a valid treatment for some ailments, the clothing may have that benefit, but the added copper may have no benefit beyond a placebo effect.
Solutions of copper compounds are used as a wood preservative, particularly in treating the original portion of structures during restoration of dry rot damage. Together with zinc, copper wires may be installed over non-conductive roofing materials to discourage the growth of moss. Textile fibers are blended with copper to create antimicrobial protective fabrics. Copper alloys are used in musical instruments, particularly: the body of brass instruments; circuitry for all those that are electronically amplified; the bodies of brass percussion such as gongs, bells, and kettle drums; tuning heads on guitars and other string instruments; string windings on harps, pianos, harpsichords, and string instruments; and the frame elements of pianos and harps. Copper is commonly used as a base on which other metals such as nickel are electroplated.
Copper is one of three metals, along with lead and silver, used in the museum materials testing procedure called the Oddy test to detect chlorides, oxides, and sulfur compounds.
Copper is used as the printing plate in etching, engraving and other forms of intaglio printmaking.
Copper oxide and carbonate are used to add color in stained glass works, in glassmaking, and in ceramic glazes to impart turquoise blue, green, and brown colors.
Copper is used to create stills for distilling spirits, for example to make whisky. Its malleability makes it easy to bend into the various shapes required and allows considerable flexibility in the shaping of the still and associated pipework; the metal also reacts with undesirable sulfur - containing components in the vapor and distillate making for a cleaner product.
Chromobacterium violaceum and Pseudomonas fluorescens can both mobilize solid copper as a cyanide compound. The ericoid mycorrhizal fungi associated with Calluna, Erica and Vaccinium can grow in metalliferous soils containing copper. The ectomycorrhizal fungus Suillus luteus protects young pine trees from copper toxicity. A sample of the fungus Aspergillus niger was found growing from gold mining solution and was found to contain cyano complexes of such metals as gold, silver, copper, iron, and zinc. The fungus also plays a role in the solubilization of heavy metal sulfides.
Copper proteins have diverse roles in biological electron transport and oxygen transportation, processes that exploit the easy interconversion of Cu (I) and Cu (II). Copper is essential in the aerobic respiration of all eukaryotes. In mitochondria, it is found in cytochrome c oxidase, which is the last protein in oxidative phosphorylation. Cytochrome c oxidase is the protein that binds O₂ between a copper and an iron; the protein transfers four electrons to the O₂ molecule to reduce it to two molecules of water. Copper is also found in many superoxide dismutases, proteins that catalyze the decomposition of superoxide by converting it (by disproportionation) to oxygen and hydrogen peroxide:
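2 O₂⁻ + 2 H⁺ → O₂ + H₂O₂
This is the net disproportionation; the copper centre of the enzyme cycles between Cu (I) and Cu (II) in two half - reactions.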
The protein hemocyanin is the oxygen carrier in most mollusks and some arthropods such as the horseshoe crab (Limulus polyphemus). Because hemocyanin is blue, these organisms have blue blood rather than the red blood of iron - based hemoglobin. Structurally related to hemocyanin are the laccases and tyrosinases. Instead of reversibly binding oxygen, these proteins hydroxylate substrates, illustrated by their role in the formation of lacquers. The biological role for copper commenced with the appearance of oxygen in earth 's atmosphere. Several copper proteins, such as the "blue copper proteins '', do not interact directly with substrates, hence they are not enzymes. These proteins relay electrons by the process called electron transfer.
A unique tetranuclear copper center has been found in nitrous - oxide reductase.
Copper is an essential trace element in plants and animals, but not all microorganisms. The human body contains copper at a level of about 1.4 to 2.1 mg per kg of body mass. Copper is absorbed in the gut, then transported to the liver bound to albumin. After processing in the liver, copper is distributed to other tissues in a second phase, which involves the protein ceruloplasmin, carrying the majority of copper in blood. Ceruloplasmin also carries the copper that is excreted in milk, and is particularly well - absorbed as a copper source. Copper in the body normally undergoes enterohepatic circulation (about 5 mg a day, vs. about 1 mg per day absorbed in the diet and excreted from the body), and the body is able to excrete some excess copper, if needed, via bile, which carries some copper out of the liver that is not then reabsorbed by the intestine.
The U.S. Institute of Medicine (IOM) updated the estimated average requirements (EARs) and recommended dietary allowances (RDAs) for copper in 2001. If there is not sufficient information to establish EARs and RDAs, an estimate designated Adequate Intake (AI) is used instead. The current EAR for copper for people age 14 and older is 0.7 mg / day. The RDA is 0.9 mg / day. RDAs are higher than EARs so as to identify amounts that will cover people with higher than average requirements. The RDA for pregnancy is 1.0 mg / day. RDA for lactation is 1.3 mg / day. For infants up to 12 months, AI is 0.22 mg / day. For children age 1 -- 13 years the RDA increases with age from 0.34 to 0.7 mg / day. As for safety, the IOM also sets Tolerable upper intake levels (ULs) for vitamins and minerals when evidence is sufficient. In the case of copper the UL is set at 10 mg / day. Collectively the EARs, RDAs, AIs and ULs are referred to as Dietary Reference Intakes.
The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the United States. For women and men ages 18 and older the AIs are set at 1.3 and 1.6 mg / day, respectively. AIs for pregnancy and lactation are 1.5 mg / day. For children ages 1 -- 17 years the AIs increase with age from 0.7 to 1.3 mg / day. These AIs are higher than the U.S. RDAs. The European Food Safety Authority reviewed the same safety question and set its UL at 5 mg / day, which is half the U.S. value.
For U.S. food and dietary supplement labeling purposes the amount in a serving is expressed as a percent of Daily Value (% DV). For copper labeling purposes 100 % of the Daily Value was 2.0 mg, but as of May 27, 2016 it was revised to 0.9 mg to bring it into agreement with the RDA. A table of the old and new adult Daily Values is provided at Reference Daily Intake. The original deadline to be in compliance was July 28, 2018, but on September 29, 2017 the FDA released a proposed rule that extended the deadline to January 1, 2020 for large companies and January 1, 2021 for small companies.
Because of its role in facilitating iron uptake, copper deficiency can produce anemia - like symptoms, neutropenia, bone abnormalities, hypopigmentation, impaired growth, increased incidence of infections, osteoporosis, hyperthyroidism, and abnormalities in glucose and cholesterol metabolism. Conversely, Wilson 's disease causes an accumulation of copper in body tissues.
Severe deficiency can be found by testing for low plasma or serum copper levels, low ceruloplasmin, and low red blood cell superoxide dismutase levels; these are not sensitive to marginal copper status. The "cytochrome c oxidase activity of leucocytes and platelets '' has been stated as another factor in deficiency, but the results have not been confirmed by replication.
Gram quantities of various copper salts have been taken in suicide attempts and produced acute copper toxicity in humans, possibly due to redox cycling and the generation of reactive oxygen species that damage DNA. Corresponding amounts of copper salts (30 mg / kg) are toxic in animals. A minimum dietary value for healthy growth in rabbits has been reported to be at least 3 ppm in the diet. However, higher concentrations of copper (100 ppm, 200 ppm, or 500 ppm) in the diet of rabbits may favorably influence feed conversion efficiency, growth rates, and carcass dressing percentages.
Chronic copper toxicity does not normally occur in humans because of transport systems that regulate absorption and excretion. Autosomal recessive mutations in copper transport proteins can disable these systems, leading to Wilson 's disease with copper accumulation and cirrhosis of the liver in persons who have inherited two defective genes.
Elevated copper levels have also been linked to worsening symptoms of Alzheimer 's disease.
In the US, the Occupational Safety and Health Administration (OSHA) has designated a permissible exposure limit (PEL) for copper dust and fumes in the workplace as a time - weighted average (TWA) of 1 mg / m³. The National Institute for Occupational Safety and Health (NIOSH) has set a Recommended exposure limit (REL) of 1 mg / m³, time - weighted average. The IDLH (immediately dangerous to life and health) value is 100 mg / m³.
Copper is a constituent of tobacco smoke. The tobacco plant readily absorbs and accumulates heavy metals, such as copper from the surrounding soil into its leaves. These are readily absorbed into the user 's body following smoke inhalation. The health implications are not clear.
|
describe the flora and fauna of the prairies | Prairie - wikipedia
Prairies are ecosystems considered part of the temperate grasslands, savannas, and shrublands biome by ecologists, based on similar temperate climates, moderate rainfall, and a composition of grasses, herbs, and shrubs, rather than trees, as the dominant vegetation type. Temperate grassland regions include the Pampas of Argentina, southern Brazil and Uruguay as well as the steppes of Eurasia. Lands typically referred to as "prairie '' tend to be in North America. The term encompasses the area referred to as the Interior Lowlands of Canada, the United States, and Mexico, which includes all of the Great Plains as well as the wetter, somewhat hillier land to the east.
In the U.S., the area is constituted by most or all of the states of North Dakota, South Dakota, Nebraska, Kansas, and Oklahoma, and sizable parts of the states of Montana, Wyoming, Colorado, New Mexico, Texas, Missouri, Iowa, Illinois, Indiana, Wisconsin, and western and southern Minnesota. The Central Valley of California is also a prairie. The Canadian Prairies occupy vast areas of Manitoba, Saskatchewan, and Alberta.
According to Theodore Roosevelt:
Prairie is the French word for meadow, but the ultimate root is the Latin pratum (same meaning).
The formation of the North American Prairies started with the upwelling of the Rocky Mountains near Alberta. The mountains created a rain shadow that resulted in lower precipitation rates downwind, creating an environment which most tree species will not tolerate.
The parent material of most prairie soil was distributed during the last glacial advance that began about 110,000 years ago. The glaciers expanding southward scraped the landscape, picking up geologic material and leveling the terrain. As the glaciers retreated about 10,000 years ago, they deposited this material in the form of till. Windborne loess deposits also form an important parent material for prairie soils.
Tallgrass Prairie evolved over tens of thousands of years with the disturbances of grazing and fire. Native ungulates such as bison, elk, and white - tailed deer, roamed the expansive, diverse, plentiful grassland before European colonization of the Americas. For 10,000 - 20,000 years native people used fire annually as a tool to assist in hunting, transportation and safety. Evidence of ignition sources of fire in the tallgrass prairie are overwhelmingly human as opposed to lightning. Humans, and grazing animals, were active participants in the process of prairie formation and the establishment of the diversity of graminoid and forbs species. Fire has the effect on prairies of removing trees, clearing dead plant matter, and changing the availability of certain nutrients in the soil from the ash produced. Fire kills the vascular tissue of trees, but not prairie, as up to 75 % (depending on the species) of the total plant biomass is below the soil surface and will re-grow from its deep (up to 6 feet) roots. Without disturbance, trees will encroach on a grassland, cast shade, which suppresses the understory. Prairie and widely spaced oak trees evolved to coexist in the oak savanna ecosystem.
In spite of long recurrent droughts and occasional torrential rains, the grasslands of the Great Plains were not subject to great soil erosion. The root systems of native prairie grasses firmly held the soil in place to prevent run - off of soil. When a plant died, fungi and bacteria returned its nutrients to the soil.
These deep roots also helped native prairie plants reach water in even the driest conditions. The native grasses suffered much less damage from dry conditions than the farm crops currently grown.
Prairie in North America is usually split into three groups: wet, mesic, and dry. They are generally characterized by tallgrass prairie, mixed, or shortgrass prairie, depending on the quality of soil and rainfall.
In wet prairies the soil is usually very moist, including during most of the growing season, because of poor water drainage. The resulting stagnant water is conducive to the formation of bogs and fens. Wet prairies have excellent farming soil. The average precipitation amount is 10 - 30 inches a year.
Mesic prairie has good drainage, but retains adequate soil moisture during the growing season. This type of prairie is the most often converted for agricultural usage; consequently, it is one of the more endangered types of prairies.
Dry prairie has somewhat wet to very dry soil during the growing season because of good drainage in the soil. Often, this prairie can be found on uplands or slopes. Dry soil usually does not support much vegetation due to lack of rain. This is the dominant biome in the Southern Canadian agricultural and climatic region known as Palliser 's Triangle. Once thought to be completely unarable, the Triangle is now one of the most important agricultural regions in Canada thanks to advances in irrigation technology. In addition to its very high local importance to Canada, Palliser 's Triangle is now also one of the most important sources of wheat in the world as a result of these improved methods of watering wheat fields (along with the rest of the Southern prairie provinces, which also grow wheat, canola and many other grains). Despite these advances in farming technology, the area is still very prone to extended periods of drought, which can be disastrous for the industry if significantly prolonged. An infamous example of this is the Dust Bowl of the 1930s, which also hit much of the United States Great Plains ecoregion, contributing greatly to the Great Depression.
Nomadic hunting has been the main human activity on the prairies for the majority of the archaeological record. This once included many now - extinct species of megafauna.
After these megafauna became extinct, the main hunted animal on the prairies was the plains bison. Using loud noises and waving large signals, Native peoples would drive bison into fenced pens called buffalo pounds to be killed with bows and arrows or spears, or drive them off a cliff (called a buffalo jump) to kill or injure the bison en masse. The introduction of the horse and the gun greatly expanded the killing power of the plains Natives. This was followed by a policy of indiscriminate killing by European Americans and Canadians, which caused a dramatic drop in bison numbers from millions to a few hundred within a century and almost caused their extinction.
The very dense soil plagued the first settlers who were using wooden plows, which were more suitable for loose forest soil. On the prairie the plows bounced around and the soil stuck to them. This problem was solved in 1837 by an Illinois blacksmith named John Deere who developed a steel moldboard plow that was stronger and cut the roots, making the fertile soils ready for farming.
The tallgrass prairie has been converted into one of the most intensive crop producing areas in North America. Less than one tenth of one percent (< 0.09 %) of the original landcover of the tallgrass prairie biome remains. States formerly with landcover in native tallgrass prairie such as Iowa, Illinois, Minnesota, Wisconsin, Nebraska, and Missouri have become valued for their highly productive soils and are included in the Corn Belt. As an example of this land use intensity, Illinois and Iowa for the United States, rank 49th and 50th out of 50 states in total uncultivated land remaining.
Drier shortgrass prairies were once used mostly for open - range ranching. But the development of barbed wire in the 1870s and improved irrigation techniques mean that this region has mostly been converted to cropland and small fenced pasture as well.
Research, by David Tilman, ecologist at the University of Minnesota, suggests that "biofuels made from high - diversity mixtures of prairie plants can reduce global warming by removing carbon dioxide from the atmosphere. Even when grown on infertile soils, they can provide a substantial portion of global energy needs, and leave fertile land for food production. '' Unlike corn and soybeans which are both directly and indirectly major food crops, including livestock feed, prairie grasses are not used for human consumption. Prairie grasses can be grown in infertile soil, eliminating the cost of adding nutrients to the soil. Tilman and his colleagues estimate that prairie grass biofuels would yield 51 percent more energy per acre than ethanol from corn grown on fertile land. Some plants commonly used are lupine, big bluestem (turkey foot), blazing star, switchgrass, and prairie clover.
Because rich and thick topsoil made the land well suited for agricultural use, only 1 % of tallgrass prairie remains in the U.S. today. Short grass prairie is more abundant.
Significant preserved areas of prairie include:
Virgin prairie refers to prairie land that has never been plowed. Small virgin prairies exist in the American Midwestern states and in Canada. Restored prairie refers to a prairie that has been reseeded after plowing or other disturbance.
A prairie garden is a garden primarily consisting of plants from a prairie.
The originally treeless prairies of the upper Mississippi basin began in Indiana and extended westward and north - westward until they merged with the drier region known as the Great Plains. An eastward extension of the same region, originally tree - covered, extended to central Ohio. Thus the prairies generally lie between the Ohio and Missouri rivers on the south and the Great Lakes on the north. The prairies are a contribution of the glacial period. They consist for the most part of glacial drift, deposited unconformably on an underlying rock surface of moderate or small relief. Here, the rocks are an extension of the same stratified Palaeozoic formations already described as occurring in the Appalachian region and around the Great Lakes. They are usually fine - textured limestones and shales, lying horizontal. The moderate or small relief that they were given by mature preglacial erosion is now buried under the drift.
The greatest area of the prairies, from Indiana to North Dakota, consists of till plains, that is, sheets of unstratified drift. These plains are 30, 50 or even 100 ft (up to 30 m) thick covering the underlying rock surface for thousands of square miles except where postglacial stream erosion has locally laid it bare. The plains have an extraordinarily even surface. The till is presumably made in part of preglacial soils, but it is more largely composed of rock waste mechanically transported by the creeping ice sheets. Although the crystalline rocks from Canada and some of the more resistant stratified rocks south of the Great Lakes occur as boulders and stones, a great part of the till has been crushed and ground to a clayey texture. The till plains, although sweeping in broad swells of slowly changing altitude, often appear level to the eye with a view stretching to the horizon. Here and there, faint depressions occur, occupied by marshy sloughs, or floored with a rich black soil of postglacial origin. It is thus by sub-glacial aggradation that the prairies have been levelled up to a smooth surface, in contrast to the higher and non-glaciated hilly country just to the south.
The great ice sheets formed terminal moraines around their border at various end stages. However, the morainic belts are of small relief in comparison to the great area of the ice. They rise gently from the till plains to a height of 50, 100 or more feet. They may be one, two or three miles (5 km) wide and their hilly surface, dotted over with boulders, contains many small lakes in basins or hollows, instead of streams in valleys. The morainic belts are arranged in groups of concentric loops, convex southward, because the ice sheets advanced in lobes along the lowlands of the Great Lakes. Neighboring morainic loops join each other in re-entrants (north - pointing cusps), where two adjacent glacial lobes came together and formed their moraines in largest volume. The moraines are of too small relief to be shown on any maps except of the largest scale. Small as they are, they are the chief relief of the prairie states, and, in association with the nearly imperceptible slopes of the till plains, they determine the course of many streams and rivers, which as a whole are consequent upon the surface form of the glacial deposits.
The complexity of the glacial period and its subdivision into several glacial epochs, separated by interglacial epochs of considerable length (certainly longer than the postglacial epoch) has a structural consequence in the superposition of successive till sheets, alternating with non-glacial deposits. It also has a physiographic consequence in the very different amount of normal postglacial erosion suffered by the different parts of the glacial deposits. The southernmost drift sheets, as in southern Iowa and northern Missouri, have lost their initially plain surface and are now maturely dissected into gracefully rolling forms. Here, the valleys of even the small streams are well opened and graded, and marshes and lakes are rare. These sheets are of early Pleistocene origin. Nearer the Great Lakes, the till sheets are trenched only by the narrow valleys of the large streams. Marshy sloughs still occupy the faint depressions in the till plains and the associated moraines have abundant small lakes in their undrained hollows. These drift sheets are of late Pleistocene origin.
When the ice sheets extended to the land sloping southward to the Ohio River, Mississippi River and Missouri River, the drift - laden streams flowed freely away from the ice border. As the streams escaped from their subglacial channels, they spread into broader channels and deposited some of their load and thus aggraded their courses. Local sheets or aprons of gravel and sand are spread more or less abundantly along the outer side of the morainic belts. Long trains of gravel and sands clog the valleys that lead southward from the glaciated to the non-glaciated area. Later, when the ice retreated farther and the unloaded streams returned to their earlier degrading habit, they more or less completely scoured out the valley deposits, the remains of which are now seen in terraces on either side of the present flood plains.
When the ice of the last glacial epoch had retreated so far that its front border lay on a northward slope, belonging to the drainage area of the Great Lakes, bodies of water accumulated in front of the ice margin, forming glacio - marginal lakes. The lakes were small at first, and each had its own outlet at the lowest depression of land to the south. As the ice melted further back, neighboring lakes became confluent at the level of the lowest outlet of the group. The outflowing streams grew in the same proportion and eroded a broad channel across the height of land and far down stream, while the lake waters built sand reefs or carved shore cliffs along their margin, and laid down sheets of clay on their floors. All of these features are easily recognized in the prairie region. The present site of Chicago was determined by an Indian portage or carry across the low divide between Lake Michigan and the headwaters of the Illinois River. This divide lies on the floor of the former outlet channel of the glacial Lake Michigan. Corresponding outlets are known for Lake Erie, Lake Huron and Lake Superior. A very large sheet of water, named Lake Agassiz, once overspread a broad till plain in northern Minnesota and North Dakota. The outlet of this glacial lake, called the River Warren, eroded a large channel in which the Minnesota River flows today. The Red River of the North flows northward through a plain formerly covered by Lake Agassiz.
Certain extraordinary features were produced when the retreat of the ice sheet had progressed so far as to open an eastward outlet for the marginal lakes. This outlet occurred along the depression between the northward slope of the Appalachian plateau in west - central New York and the southward slope of the melting ice sheet. When this eastward outlet came to be lower than the south - westward outlet across the height of land to the Ohio or Mississippi river, the discharge of the marginal lakes was changed from the Mississippi system to the Hudson system. Many well - defined channels, cutting across the north - sloping spurs of the plateau in the neighborhood of Syracuse, New York, mark the temporary paths of the ice - bordered outlet river. Successive channels are found at lower and lower levels on the plateau slope, indicating the successive courses taken by the lake outlet as the ice melted farther and farther back. On some of these channels, deep gorges were eroded heading in temporary cataracts which exceeded Niagara in height but not in breadth. The pools excavated by the plunging waters at the head of the gorges are now occupied by little lakes. The most significant stage in this series of changes occurred when the glacio - marginal lake waters were lowered so that the long escarpment of Niagara limestone was laid bare in western New York. The previously confluent waters were then divided into two lakes. The higher one, Lake Erie, supplied the outflowing Niagara River, which poured its waters down the escarpment to the lower, Lake Ontario. This gave rise to Niagara Falls. Lake Ontario 's outlet for a time ran down the Mohawk Valley to the Hudson River. At this higher elevation it was known as Lake Iroquois. When the ice melted from the northeastern end of the lake, it dropped to a lower level, and drained through the St. Lawrence area. This created a lower base level for the Niagara River, increasing its erosive capacity.
In certain districts, the subglacial till was not spread out in a smooth plain, but accumulated in elliptical mounds, 100 -- 200 feet high and 0.5 to 1 mile (0.80 to 1.61 kilometres) long, with axes parallel to the direction of the ice motion as indicated by striae on the underlying rock floor. These hills are known by the Irish name, drumlins, used for similar hills in north - western Ireland. The most remarkable groups of drumlins occur in western New York, where their number is estimated at over 6,000, and in southern Wisconsin, where it is placed at 5,000. They completely dominate the topography of their districts.
A curious deposit of an impalpably fine and unstratified silt, known by the German name Löss (or loess), lies on the older drift sheets near the larger river courses of the upper Mississippi basin. It attains a thickness of 20 ft (6.1 m) or more near the rivers and gradually fades away at a distance of ten or more miles (16 or more km) on either side. It contains land shells, and hence can not be attributed to marine or lacustrine submergence. The best explanation is that, during certain phases of the glacial period, it was carried as dust by the winds from the flood plains of aggrading rivers, and slowly deposited on the neighboring grass - covered plains. The glacial and eolian origin of this sediment is evidenced by the angularity of its grains (a bank of it will stand without slumping for years), whereas, if it had been transported significantly by water, the grains would have been rounded and polished. Loess is parent material for an extremely fertile, but droughty soil.
Southwestern Wisconsin and parts of the adjacent states of Illinois, Iowa and Minnesota are known as the driftless zone, because, although bordered by drift sheets and moraines, it is free from glacial deposits. It must therefore have been a sort of oasis, when the ice sheets from the north advanced past it on the east and west and joined around its southern border. The reason for this exemption from glaciation is the converse of that for the southward convexity of the morainic loops. For while they mark the paths of greatest glacial advance along lowland troughs (lake basins), the driftless zone is a district protected from ice invasion by reason of the obstruction which the highlands of northern Wisconsin and Michigan (part of the Superior upland) offered to glacial advance.
The course of the upper Mississippi River is largely consequent upon glacial deposits. Its sources are in the morainic lakes in northern Minnesota. The drift deposits thereabouts are so heavy that the present divides between the drainage basins of Hudson Bay, Lake Superior and the Gulf of Mexico evidently stand in no very definite relation to the preglacial divides. The course of the Mississippi through Minnesota is largely guided by the form of the drift cover. Several rapids and the Saint Anthony Falls (determining the site of Minneapolis) are signs of immaturity, resulting from superposition through the drift on the under rock. Farther south, as far as the entrance of the Ohio River, the Mississippi follows a rock - walled valley 300 to 400 ft (91 to 122 m) deep, with a flood - plain 2 to 4 mi (3.2 to 6.4 km) wide. This valley seems to represent the path of an enlarged early - glacial Mississippi, when much precipitation that is today discharged to Hudson Bay and the Gulf of St Lawrence was delivered to the Gulf of Mexico, for the curves of the present river are of distinctly smaller radii than the curves of the valley. Lake Pepin (30 mi (48 km) below St. Paul), a picturesque expansion of the river across its flood - plain, is due to the aggradation of the valley floor where the Chippewa River, coming from the northeast, brought an overload of fluvio - glacial drift. Hence even the father of waters, like so many other rivers in the Northern states, owes many of its features more or less directly to glacial action.
The fertility of the prairies is a natural consequence of their origin. During the mechanical transportation of the till no vegetation was present to remove the minerals essential to plant growth, as is the case in the soils of normally weathered and dissected peneplains. The soil is similar to the Appalachian piedmont which though not exhausted by the primeval forest cover, are by no means so rich as the till sheets of the prairies. Moreover, whatever the rocky understructure, the till soil has been averaged by a thorough mechanical mixture of rock grindings. Hence the prairies are continuously fertile for scores of miles together. The true prairies were once covered with a rich growth of natural grass and annual flowering plants, but today are covered with farms.
|
where is bile secreted from and what does it contain | Bile - wikipedia
Bile or gall is a dark green to yellowish brown fluid, produced by the liver of most vertebrates, that aids the digestion of lipids in the small intestine. In humans, bile is produced continuously by the liver (liver bile), and stored and concentrated in the gallbladder. After eating, this stored bile is discharged into the duodenum. The composition of gallbladder bile is 97 % water, 0.7 % bile salts, 0.2 % bilirubin, 0.51 % fats (cholesterol, fatty acids, and lecithin), and 200 meq / l inorganic salts.
Bile was the yellow bile in the four humor system of medicine, the standard of medical practice in Europe from around 500 BCE to the early 19th century. About 400 to 800 ml of bile is produced per day in adult human beings.
Bile or gall acts to some extent as a surfactant, helping to emulsify the lipids in food. Bile salt anions are hydrophilic on one side and hydrophobic on the other side; consequently, they tend to aggregate around droplets of lipids (triglycerides and phospholipids) to form micelles, with the hydrophobic sides towards the fat and hydrophilic sides facing outwards. The hydrophilic sides are negatively charged, and this charge prevents fat droplets coated with bile from re-aggregating into larger fat particles. Ordinarily, the micelles in the duodenum have a diameter around 14 -- 33 μm.
The dispersion of food fat into micelles provides a greatly increased surface area for the action of the enzyme pancreatic lipase, which actually digests the triglycerides, and is able to reach the fatty core through gaps between the bile salts. A triglyceride is broken down into two fatty acids and a monoglyceride, which are absorbed by the villi on the intestine walls. After being transferred across the intestinal membrane, the fatty acids are re-esterified into triglycerides before being absorbed into the lymphatic system through lacteals. Without bile salts, most of the lipids in food would be excreted in faeces, undigested.
Since bile increases the absorption of fats, it is an important part of the absorption of the fat - soluble substances, such as the vitamins A, D, E, and K.
Besides its digestive function, bile serves also as the route of excretion for bilirubin, a byproduct of the red blood cells recycled by the liver. Bilirubin is derived from the heme of hemoglobin and is conjugated by glucuronidation in the liver before being excreted in bile.
Bile tends to be alkaline on average. The pH of common duct bile (7.50 to 8.05) is higher than that of the corresponding gallbladder bile (6.80 to 7.65). Bile in the gallbladder becomes more acidic the longer a person goes without eating, though resting slows this fall in pH. Being alkaline, bile also serves to neutralize excess stomach acid before it enters the duodenum, the first section of the small intestine. Bile salts also act as bactericides, destroying many of the microbes that may be present in the food.
In the absence of bile, fats become indigestible and are instead excreted in feces, a condition called steatorrhea. Feces lack their characteristic brown color and instead are white or gray, and greasy. Steatorrhea can lead to deficiencies in essential fatty acids and fat - soluble vitamins. In addition, past the small intestine (which is normally responsible for absorbing fat from food) the gastrointestinal tract and gut flora are not adapted to processing fats, leading to problems in the large intestine.
The cholesterol contained in bile will occasionally accrete into lumps in the gallbladder, forming gallstones. Cholesterol gallstones are generally treated through surgical removal of the gallbladder. However, they can sometimes be dissolved by increasing the concentration of certain naturally occurring bile acids, such as chenodeoxycholic acid and ursodeoxycholic acid.
On an empty stomach -- after repeated vomiting, for example -- a person 's vomit may be green or dark yellow, and very bitter. The bitter and greenish component may be bile or normal digestive juices originating in the stomach. The color of bile is often likened to "fresh - cut grass '', unlike components in the stomach that look greenish yellow or dark yellow. Bile may be forced into the stomach secondary to a weakened valve (pylorus), the presence of certain drugs including alcohol, or powerful muscular contractions and duodenal spasms.
Biliary obstruction refers to a blockage of the bile ducts, which carry bile from the liver and gallbladder through the pancreas to the duodenum. Dietary factors contribute to most obstructions: high consumption of sugar, fat and processed foods promotes the formation of gallstones, a common cause of blockage. Much of the bile produced is released directly into the duodenum, the first part of the small intestine, while the remainder is stored in the gallbladder; after food is consumed, the stored bile is released to help with digestion and fat absorption.
In medical theories prevalent in the West from Classical Antiquity to the Middle Ages, the body 's health depended on the equilibrium of four "humors '', or vital fluids, two of which related to bile: blood, phlegm, "yellow bile '' (choler), and "black bile ''. These "humors '' are believed to have their roots in the appearance of a blood sedimentation test made in open air, which exhibits a dark clot at the bottom ("black bile ''), a layer of unclotted erythrocytes ("blood ''), a layer of white blood cells ("phlegm '') and a layer of clear yellow serum ("yellow bile '').
Excesses of yellow bile and black bile were thought to produce aggression and depression, respectively, and the Greek names for them gave rise to the English words cholera (from Greek kholé) and melancholia. In the former of those senses, the same theories explain the derivation of the English word bilious from bile, the meaning of gall in English as "exasperation '' or "impudence '', and the Latin word cholera, derived from the Greek kholé, which was passed along into some Romance languages as words connoting anger, such as colère (French) and cólera (Spanish).
Bile from dead mammals can be mixed with soap. This mixture, called bile soap, can be applied to textiles a few hours before washing and is a traditional and rather effective method for removing various kinds of tough stains.
"Pinapaitan '' is a dish in Philippine cuisine that uses bile as flavoring.
In regions such as Asia where bile products are a popular ingredient in traditional medicine, the abuse of bears from which bile is farmed is common.
Cholic acid
Chenodeoxycholic acid
Glycocholic acid
Taurocholic acid
Deoxycholic acid
Lithocholic acid
|
what state does not have an nfl team | National Football league - wikipedia
The National Football League (NFL) is a professional American football league consisting of 32 teams, divided equally between the National Football Conference (NFC) and the American Football Conference (AFC). The NFL is one of the four major professional sports leagues in North America, and the highest professional level of American football in the world. The NFL 's 17 - week regular season runs from early September to late December, with each team playing 16 games and having one bye week. Following the conclusion of the regular season, six teams from each conference (four division winners and two wild card teams) advance to the playoffs, a single - elimination tournament culminating in the Super Bowl, which is usually held on the first Sunday in February and is played between the champions of the NFC and AFC.
The NFL was formed in 1920 as the American Professional Football Association (APFA) before renaming itself the National Football League for the 1922 season. The NFL agreed to merge with the American Football League (AFL) in 1966, and the first Super Bowl was held at the end of that season; the merger was completed in 1970. Today, the NFL has the highest average attendance (67,591) of any professional sports league in the world and is the most popular sports league in the United States. The Super Bowl is among the biggest club sporting events in the world, and individual Super Bowl games account for many of the most watched television programs in American history, occupying the top five spots of Nielsen 's tally of the all - time most watched U.S. television broadcasts as of 2015. The NFL 's executive officer is the commissioner, who has broad authority in governing the league.
The team with the most NFL championships is the Green Bay Packers with thirteen (nine NFL titles before the Super Bowl era, and four Super Bowl championships afterwards); the team with the most Super Bowl championships is the Pittsburgh Steelers with six. The current NFL champions are the Philadelphia Eagles, who defeated the New England Patriots in Super Bowl LII, their first Super Bowl championship after winning three NFL titles before the Super Bowl era.
On August 20, 1920, a meeting was held by representatives of the Akron Pros, Canton Bulldogs, Cleveland Indians, Rock Island Independents and Dayton Triangles at the Jordan and Hupmobile auto showroom in Canton, Ohio. This meeting resulted in the formation of the American Professional Football Conference (APFC), a group who, according to the Canton Evening Repository, intended to "raise the standard of professional football in every way possible, to eliminate bidding for players between rival clubs and to secure cooperation in the formation of schedules ''. Another meeting held on September 17, 1920 resulted in the renaming of the league to the American Professional Football Association (APFA). The league hired Jim Thorpe as its first president, and consisted of 14 teams. Only two of these teams, the Decatur Staleys (now the Chicago Bears) and the Chicago Cardinals (now the Arizona Cardinals), remain.
Although the league did not maintain official standings for its 1920 inaugural season and teams played schedules that included non-league opponents, the APFA awarded the Akron Pros the championship by virtue of their 8 -- 0 -- 3 (8 wins, 0 losses, and 3 ties) record. The first event occurred on September 26, 1920 when the Rock Island Independents defeated the non-league St. Paul Ideals 48 -- 0 at Douglas Park. On October 3, 1920, the first full week of league play occurred. The following season resulted in the Chicago Staleys controversially winning the title over the Buffalo All - Americans. On June 24, 1922, the APFA changed its name to the National Football League (NFL).
In 1932, the season ended with the Chicago Bears (6 -- 1 -- 6) and the Portsmouth Spartans (6 -- 1 -- 4) tied for first in the league standings. At the time, teams were ranked on a single table and the team with the highest winning percentage (not including ties, which were not counted towards the standings) at the end of the season was declared the champion; the only tiebreaker was that in the event of a tie, if two teams played twice in a season, the result of the second game determined the title (the source of the 1921 controversy). This method had been used since the league 's creation in 1920, but no situation had been encountered where two teams were tied for first. The league quickly determined that a playoff game between Chicago and Portsmouth was needed to decide the league 's champion. The teams were originally scheduled to play the playoff game, officially a regular season game that would count towards the regular season standings, at Wrigley Field in Chicago, but a combination of heavy snow and extreme cold forced the game to be moved indoors to Chicago Stadium, which did not have a regulation - size football field. Playing with altered rules to accommodate the smaller playing field, the Bears won the game 9 -- 0 and thus won the championship. Fan interest in the de facto championship game led the NFL, beginning in 1933, to split into two divisions with a championship game to be played between the division champions. The 1934 season also marked the first of 12 seasons in which African Americans were absent from the league. The de facto ban was rescinded in 1946, following public pressure and coinciding with the removal of a similar ban in Major League Baseball.
The NFL was always the foremost professional football league in the United States; it nevertheless faced a large number of rival professional leagues through the 1930s and 1940s. Rival leagues included at least three separate American Football Leagues and the All - America Football Conference (AAFC), on top of various regional leagues of varying caliber. Three NFL teams trace their histories to these rival leagues, including the Los Angeles Rams (who came from a 1936 iteration of the American Football League), the Cleveland Browns and San Francisco 49ers (the last two of which came from the AAFC). By the 1950s, the NFL had an effective monopoly on professional football in the United States; its only competition in North America was the professional Canadian football circuit, which formally became the Canadian Football League (CFL) in 1958. With Canadian football being a different football code than the American game, the CFL established a niche market in Canada and still survives as an independent league.
A new professional league, the fourth American Football League (AFL), began play in 1960. The upstart AFL began to challenge the established NFL in popularity, gaining lucrative television contracts and engaging in a bidding war with the NFL for free agents and draft picks. The two leagues announced a merger on June 8, 1966, to take full effect in 1970. In the meantime, the leagues would hold a common draft and championship game. The game, the Super Bowl, was held four times before the merger, with the NFL winning Super Bowl I and Super Bowl II, and the AFL winning Super Bowl III and Super Bowl IV. After the league merged, it was reorganized into two conferences: the National Football Conference (NFC), consisting of most of the pre-merger NFL teams, and the American Football Conference (AFC), consisting of all of the AFL teams as well as three pre-merger NFL teams.
Today, the NFL is considered the most popular sports league in North America; much of its growth is attributed to former Commissioner Pete Rozelle, who led the league from 1960 to 1989. Overall annual attendance increased from three million at the beginning of his tenure to seventeen million by the end of his tenure, and 400 million viewers watched 1989 's Super Bowl XXIII. The NFL established NFL Properties in 1963. The league 's licensing wing, NFL Properties earns the league billions of dollars annually; Rozelle 's tenure also marked the creation of NFL Charities and a national partnership with United Way. Paul Tagliabue was elected as commissioner to succeed Rozelle; his seventeen - year tenure, which ended in 2006, was marked by large increases in television contracts and the addition of four expansion teams, as well as the introduction of league initiatives to increase the number of minorities in league and team management roles. The league 's current Commissioner, Roger Goodell, has focused on reducing the number of illegal hits and making the sport safer, mainly through fining or suspending players who break rules. These actions are among many the NFL is taking to reduce concussions and improve player safety.
From 1920 to 1934, the NFL did not have a set number of games for teams to play, instead setting a minimum. The league mandated a 12 - game regular season for each team beginning in 1935, later shortening this to 11 games in 1937 and 10 games in 1943, mainly due to World War II. After the war ended, the number of games returned to 11 games in 1946 and to 12 in 1947. The NFL went to a 14 - game schedule in 1961, which it retained until switching to the current 16 - game schedule in 1978. Proposals to increase the regular season to 18 games have been made, but have been rejected in labor negotiations with the National Football League Players Association (NFLPA).
The NFL operated in a two - conference system from 1933 to 1966, where the champions of each conference would meet in the NFL Championship Game. If two teams tied for the conference lead, they would meet in a one - game playoff to determine the conference champion. In 1967, the NFL expanded from 15 teams to 16 teams. Instead of just evening out the conferences by adding the expansion New Orleans Saints to the seven - member Western Conference, the NFL realigned the conferences and split each into two four - team divisions. The four division champions would meet in the NFL playoffs, a two - round playoff. The NFL also operated the Playoff Bowl (officially the Bert Bell Benefit Bowl) from 1960 to 1969. Effectively a third - place game, pitting the two conference runners - up against each other, the league considers Playoff Bowls to have been exhibitions rather than playoff games. The league discontinued the Playoff Bowl in 1970 due to its perception as a game for losers.
Following the addition of the former AFL teams into the NFL in 1970, the NFL split into two conferences with three divisions each. The expanded league, now with twenty - six teams, would also feature an expanded eight - team playoff, the participants being the three division champions from each conference as well as one ' wild card ' team (the non-division winner with the best win percentage) from each conference. In 1978, the league added a second wild card team from each conference, bringing the total number of playoff teams to ten, and a further two wild card teams were added in 1990 to bring the total to twelve. When the NFL expanded to 32 teams in 2002, the league realigned, changing the division structure from three divisions in each conference to four divisions in each conference. As each division champion gets a playoff bid, the number of wild card teams from each conference dropped from three to two.
At the corporate level, the National Football League considers itself a trade association made up of and financed by its 32 member teams. Up until 2015, the league was an unincorporated nonprofit 501 (c) (6) association. Section 501 (c) (6) of the Internal Revenue Code provides an exemption from federal income taxation for "Business leagues, chambers of commerce, real - estate boards, boards of trade, or professional football leagues (whether or not administering a pension fund for football players), not organized for profit and no part of the net earnings of which inures to the benefit of any private shareholder or individual ''. In contrast, each individual team (except the non-profit Green Bay Packers) is subject to tax because it makes a profit.
The NFL gave up the tax exempt status in 2015 following public criticism; in a letter to the club owners, Commissioner Roger Goodell labeled it a "distraction '', saying "the effects of the tax exempt status of the league office have been mischaracterized repeatedly in recent years... Every dollar of income generated through television rights fees, licensing agreements, sponsorships, ticket sales, and other means is earned by the 32 clubs and is taxable there. This will remain the case even when the league office and Management Council file returns as taxable entities, and the change in filing status will make no material difference to our business. '' As a result, the league office might owe around US $10 million in income taxes, but it is no longer required to disclose the salaries of its executive officers.
The league has three defined officers: the commissioner, secretary, and treasurer. Each conference has one defined officer, the president, which is essentially an honorary position with few powers and mostly ceremonial duties (such as awarding the conference championship trophy).
The commissioner is elected by affirmative vote of two - thirds or 18 (whichever is greater) of the members of the league, while the president of each conference is elected by an affirmative vote of three - fourths or ten of the conference members. The commissioner appoints the secretary and treasurer and has broad authority in disputes between clubs, players, coaches, and employees. He is the "principal executive officer '' of the NFL and also has authority in hiring league employees, negotiating television contracts, disciplining individuals that own part or all of an NFL team, clubs, or employed individuals of an NFL club if they have violated league bylaws or committed "conduct detrimental to the welfare of the League or professional football ''. The commissioner can, in the event of misconduct by a party associated with the league, suspend individuals, hand down a fine of up to US $500,000, cancel contracts with the league, and award or strip teams of draft picks.
In extreme cases, the commissioner can offer recommendations to the NFL 's Executive Committee up to and including the "cancellation or forfeiture '' of a club 's franchise or any other action he deems necessary. The commissioner can also issue sanctions up to and including a lifetime ban from the league if an individual connected to the NFL has bet on games or failed to notify the league of conspiracies or plans to bet on or fix games. The current Commissioner of the National Football League is Roger Goodell, who was elected in 2006 after Paul Tagliabue, the previous commissioner, retired.
According to economist Richard Wolff, the NFL redistributes its wealth to all NFL teams equally in contravention of the typical corporate structure. By redistributing profits to all teams the NFL is ensuring that one team will not dominate the league through excessive earnings.
The NFL consists of 32 clubs divided into two conferences of 16 teams in each. Each conference is divided into four divisions of four clubs in each. During the regular season, each team is allowed a maximum of 53 players on its roster; only 46 of these may be active (eligible to play) on game days. Each team can also have a 10 - player practice squad separate from its main roster, but the practice squad may only be composed of players who were not active for at least nine games in any of their seasons in the league. A player can only be on a practice squad for a maximum of three seasons.
Each NFL club is granted a franchise, the league 's authorization for the team to operate in its home city. This franchise covers ' Home Territory ' (the 75 miles surrounding the city limits, or, if the team is within 100 miles of another league city, half the distance between the two cities) and ' Home Marketing Area ' (Home Territory plus the rest of the state the club operates in, as well as the area the team operates its training camp in for the duration of the camp). Each NFL member has the exclusive right to host professional football games inside its Home Territory and the exclusive right to advertise, promote, and host events in its Home Marketing Area. There are several exceptions to this rule, mostly relating to teams in close proximity to each other: the San Francisco 49ers and Oakland Raiders only have exclusive rights in their cities and share rights outside of them; and teams that operate in the same city (e.g. New York City and Los Angeles) or the same state (e.g. California, Florida, and Texas) share the rights to the city 's Home Territory and the state 's Home Marketing Area, respectively.
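The territorial rule quoted above reduces to a simple calculation. The following Python sketch is illustrative only, assuming the 75-mile and 100-mile figures given in the paragraph; the function name and inputs are hypothetical.

```python
def home_territory_radius_miles(distance_to_nearest_league_city=None):
    """Sketch of the 'Home Territory' rule described above (hypothetical helper).

    A franchise's exclusive territory normally extends 75 miles around its city;
    if another league city lies within 100 miles, the territory is taken here to
    be half the distance between the two cities.
    """
    if distance_to_nearest_league_city is not None and distance_to_nearest_league_city < 100:
        return distance_to_nearest_league_city / 2
    return 75


# Example: two league cities 80 miles apart would each claim a 40-mile radius.
print(home_territory_radius_miles(80))   # 40.0
print(home_territory_radius_miles())     # 75
```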
Every NFL team is based in the contiguous United States. Although no team is based in a foreign country, the Jacksonville Jaguars began playing one home game a year at Wembley Stadium in London, England in 2013 as part of the NFL International Series. The Jaguars ' agreement with Wembley was originally set to expire in 2016, but has since been extended through 2020. The Buffalo Bills played one home game every season at Rogers Centre in Toronto, Ontario, Canada as part of the Bills Toronto Series from 2008 to 2013. Mexico also hosted an NFL regular - season game, a 2005 game between the San Francisco 49ers and Arizona Cardinals known as "Fútbol Americano '', and 39 international preseason games were played from 1986 to 2005 as part of the American Bowl series. The Raiders and Houston Texans played a game in Mexico City at Estadio Azteca on November 21, 2016.
According to Forbes, the Dallas Cowboys, at approximately US $4 billion, are the most valuable NFL franchise and the most valuable sports team in the world. Also, all 32 NFL teams rank among the Top 50 most valuable sports teams in the world; and 14 of the NFL 's owners are listed on the Forbes 400, the most of any sports league or organization.
The 32 teams are organized into eight geographic divisions of four teams each. These divisions are further organized into two conferences, the National Football Conference and the American Football Conference. The two - conference structure has its origins in a time when major American professional football was organized into two independent leagues, the National Football League and its younger rival, the American Football League. The leagues merged in the late 1960s, adopting the older league 's name and reorganizing slightly to assure the same number of teams in both conferences.
The NFL season format consists of a four - week preseason, a seventeen - week regular season (each team plays 16 games), and a twelve - team single - elimination playoff culminating in the Super Bowl, the league 's championship game.
The NFL preseason begins with the Pro Football Hall of Fame Game, played at Fawcett Stadium in Canton. Each NFL team is required to schedule four preseason games, two of which must be at its home stadium, but the teams involved in the Hall of Fame game, as well as any teams playing in an American Bowl game, play five preseason games. Preseason games are exhibition matches and do not count towards regular - season totals. Because the preseason does not count towards standings, teams generally do not focus on winning games; instead, they are used by coaches to evaluate their teams and by players to show their performance, both to their current team and to other teams if they get cut. The quality of preseason games has been criticized by some fans, who dislike having to pay full price for exhibition games, as well as by some players and coaches, who dislike the risk of injury the games have, while others have felt the preseason is a necessary part of the NFL season.
Currently, the thirteen opponents each team faces over the 16 - game regular season schedule are set using a pre-determined formula, described below. The league runs a seventeen - week, 256 - game regular season. Since 2001, the season has begun the week after Labor Day (first Monday in September) and concluded the week after Christmas. The opening game of the season is normally a home game on a Thursday for the league 's defending champion.
Most NFL games are played on Sundays, with a Monday night game typically held at least once a week and Thursday night games occurring on most weeks as well. NFL games are not normally played on Fridays or Saturdays until late in the regular season, as federal law prohibits professional football leagues from competing with college or high school football. Because high school and college teams typically play games on Friday and Saturday, respectively, the NFL can not hold games on those days until the third Friday in December. NFL games are rarely scheduled for Tuesday or Wednesday, and those days have only been used twice since 1948: in 2010, when a Sunday game was rescheduled to Tuesday due to a blizzard, and in 2012, when the Kickoff game was moved from Thursday to Wednesday to avoid conflict with the Democratic National Convention.
NFL regular season matchups are determined according to a scheduling formula. Within a division, all four teams play fourteen out of their sixteen games against common opponents -- two games (home and away) are played against the other three teams in the division, while one game is held against all the members of a division from the NFC and a division from the AFC as determined by a rotating cycle (three years for the conference the team is in, and four years in the conference they are not in). The other two games are intraconference games, determined by the standings of the previous year -- for example, if a team finishes first in its division, it will play two other first - place teams in its conference, while a team that finishes last would play two other last - place teams in the conference. In total, each team plays sixteen games and has one bye week, where they do not play any games.
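As a quick check on the arithmetic of the scheduling formula just described, the short Python sketch below tallies the four categories of opponents; the labels are informal rather than official NFL terminology.

```python
def regular_season_opponent_breakdown():
    """Minimal sketch tallying the 16-game scheduling formula described above."""
    breakdown = {
        "division rivals (home and away vs. three teams)": 3 * 2,
        "rotating same-conference division": 4,
        "rotating other-conference division": 4,
        "same-conference games set by prior-year standings": 2,
    }
    assert sum(breakdown.values()) == 16   # thirteen distinct opponents, sixteen games
    return breakdown
```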
Although the teams any given club will play are known by the end of the previous year 's regular season, the exact dates, times, and home / away status for NFL games are not determined until much later because the league has to account for, among other things, the Major League Baseball postseason and local events that could pose a scheduling conflict with NFL games. During the 2010 season, over 500,000 potential schedules were created by computers, 5,000 of which were considered "playable schedules '' and were reviewed by the NFL 's scheduling team. After arriving at what they felt was the best schedule out of the group, nearly 50 more potential schedules were developed to try and ensure that the chosen schedule would be the best possible one.
Following the conclusion of the regular season, a twelve - team single elimination tournament, the NFL Playoffs, is held. Six teams are selected from each conference: the winners of each of the four divisions as well as two wild card teams (the two remaining teams with the best overall record). These teams are seeded according to overall record, with the division champions always ranking higher than either of the wild card teams. The top two teams (seeded one and two) from each conference are awarded a bye week, while the remaining four teams (seeded 3 -- 6) from each conference compete in the first round of the playoffs, the Wild Card round, with the third seed competing against the sixth seed and the fourth seed competing against the fifth seed. The winners of the Wild Card round advance to the Divisional Round, which matches the lower seeded team against the first seed and the higher seeded team against the second seed. The winners of those games then compete in the Conference Championships, with the higher remaining seed hosting the lower remaining seed. The AFC and NFC champions then compete in the Super Bowl to determine the league champion.
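The seeding and Wild Card pairings described above can be sketched in a few lines of Python. The snippet below is a simplified illustration that assumes teams are supplied as (name, win percentage) pairs and ignores the NFL's detailed tie-breaking procedures.

```python
def seed_conference(division_winners, wild_cards):
    """Sketch of the six-team conference bracket described above.

    Division winners always seed above wild cards; within each group,
    teams are ordered by record. Real tie-breakers are ignored.
    """
    by_record = lambda group: sorted(group, key=lambda t: t[1], reverse=True)
    seeds = by_record(division_winners) + by_record(wild_cards)   # seeds 1 through 6

    byes = [seeds[0][0], seeds[1][0]]       # top two seeds skip the Wild Card round
    wild_card_round = [
        (seeds[2][0], seeds[5][0]),         # 3 seed hosts 6 seed
        (seeds[3][0], seeds[4][0]),         # 4 seed hosts 5 seed
    ]
    return byes, wild_card_round
```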
The only other postseason event hosted by the NFL is the Pro Bowl, the league 's all - star game. Since 2009, the Pro Bowl has been held the week before the Super Bowl; in previous years, the game was held the week following the Super Bowl, but in an effort to boost ratings, the game was moved to the week before. Because of this, players from the teams participating in the Super Bowl are exempt from participating in the game. The Pro Bowl is not considered as competitive as a regular - season game because the biggest concern of teams is to avoid injuries to the players.
The National Football League has used three different trophies to honor its champion over its existence. The first trophy, the Brunswick - Balke Collender Cup, was donated to the NFL (then APFA) in 1920 by the Brunswick - Balke Collender Corporation. The trophy, the appearance of which is only known by its description as a "silver loving cup '', was intended to be a traveling trophy and not to become permanent until a team had won at least three titles. The league awarded it to the Akron Pros, champions of the inaugural 1920 season; however, the trophy was discontinued and its current whereabouts are unknown.
A second trophy, the Ed Thorp Memorial Trophy, was issued by the NFL from 1934 to 1969. The trophy 's namesake, Ed Thorp, was a referee in the league and a friend to many early league owners; upon his death in 1934, the league created the trophy to honor him. In addition to the main trophy, which would be in the possession of the current league champion, the league issued a smaller replica trophy to each champion, who would maintain permanent control over it. The current location of the Ed Thorp Memorial Trophy, like that of its predecessor, is unknown. The predominant theory is that the Minnesota Vikings, the last team to be awarded the trophy, somehow misplaced it after the 1969 season.
The current trophy of the NFL is the Vince Lombardi Trophy. The Super Bowl trophy was officially renamed in 1970 after Vince Lombardi, who as head coach led the Green Bay Packers to victories in the first two Super Bowls. Unlike the previous trophies, a new Vince Lombardi Trophy is issued to each year 's champion, who maintains permanent control of it. Lombardi Trophies are made by Tiffany & Co. out of sterling silver and are worth anywhere from US $25,000 to US $300,000. Additionally, each player on the winning team as well as coaches and personnel are awarded Super Bowl rings to commemorate their victory. The winning team chooses the company that makes the rings; each ring design varies, with the NFL mandating certain ring specifications (which have a degree of room for deviation), in addition to requiring the Super Bowl logo be on at least one side of the ring. The losing team is also awarded rings, which must be no more than half as valuable as the winners ' rings, but these are almost never worn.
The conference champions receive trophies for their achievement. The champions of the NFC receive the George Halas Trophy, named after Chicago Bears founder George Halas, who is also considered one of the co-founders of the NFL. The AFC champions receive the Lamar Hunt Trophy, named after Lamar Hunt, the founder of the Kansas City Chiefs and the principal founder of the American Football League. Players on the winning team also receive a conference championship ring.
The NFL recognizes a number of awards for its players and coaches at its annual NFL Honors presentation. The most prestigious award is the AP Most Valuable Player (MVP) award. Other major awards include the AP Offensive Player of the Year, AP Defensive Player of the Year, AP Comeback Player of the Year, and the AP Offensive and Defensive Rookie of the Year awards. Another prestigious award is the Walter Payton Man of the Year Award, which recognizes a player 's off - field work in addition to his on - field performance. The NFL Coach of the Year award is the highest coaching award. The NFL also gives out weekly awards such as the FedEx Air & Ground NFL Players of the Week and the Pepsi MAX NFL Rookie of the Week awards.
In the United States, the National Football League has television contracts with four networks: CBS, ESPN, Fox, and NBC. Collectively, these contracts cover every regular season and postseason game. In general, CBS televises afternoon games in which the away team is an AFC team, and Fox carries afternoon games in which the away team belongs to the NFC. These afternoon games are not carried on all affiliates, as multiple games are being played at once; each network affiliate is assigned one game per time slot, according to a complicated set of rules. Since 2011, the league has reserved the right to give Sunday games that, under the contract, would normally air on one network to the other network (known as "flexible scheduling ''). The only way to legally watch a regionally televised game not being carried on the local network affiliates is to purchase NFL Sunday Ticket, the league 's out - of - market sports package, which is only available to subscribers to the DirecTV satellite service. The league also provides RedZone, an omnibus telecast that cuts to the most relevant plays in each game, live as they happen.
In addition to the regional games, the league also has packages of telecasts, mostly in prime time, that are carried nationwide. NBC broadcasts the primetime Sunday Night Football package, which includes the Thursday NFL Kickoff game that starts the regular season and a primetime Thanksgiving Day game. ESPN carries all Monday Night Football games. The NFL 's own network, NFL Network, broadcasts a series titled Thursday Night Football, which was originally exclusive to the network, but which in recent years has had several games simulcast on CBS (since 2014) and NBC (since 2016) (except the Thanksgiving and kickoff games, which remain exclusive to NBC). For the 2017 season, the NFL Network will broadcast 18 regular season games under its Thursday Night Football brand: 16 Thursday - evening contests (10 of which are simulcast on either NBC or CBS), one of the NFL International Series games on a Sunday morning, and one of the 2017 Christmas afternoon games. In addition, 10 of the Thursday night games will be streamed live on Amazon Prime. In 2017, NFL games commanded the top three rates for a 30 - second advertisement: $699,602 for Sunday Night Football, $550,709 for Thursday Night Football (NBC), and $549,791 for Thursday Night Football (CBS).
The Super Bowl television rights are rotated on a three - year basis between CBS, Fox, and NBC. In 2011, all four stations signed new nine - year contracts with the NFL, each running until 2022; CBS, Fox, and NBC are estimated by Forbes to pay a combined total of US $3 billion a year, while ESPN will pay US $1.9 billion a year. The league also has deals with Spanish - language broadcasters NBC Universo, Fox Deportes, and ESPN Deportes, which air Spanish language dubs of their respective English - language sister networks ' games. The league 's contracts do not cover preseason games, which individual teams are free to sell to local stations directly; a minority of preseason games are distributed among the league 's national television partners.
Through the 2014 season, the NFL had a blackout policy in which games were ' blacked out ' on local television in the home team 's area if the home stadium was not sold out. Clubs could elect to set this requirement at only 85 %, but they would have to give more ticket revenue to the visiting team; teams could also request a specific exemption from the NFL for the game. The vast majority of NFL games were not blacked out; only 6 % of games were blacked out during the 2011 season, and only two games were blacked out in 2013 and none in 2014. The NFL announced in March 2015 that it would suspend its blackout policy for at least the 2015 season. According to Nielsen, the NFL regular season since 2012 was watched by at least 200 million individuals, accounting for 80 % of all television households in the United States and 69 % of all potential viewers in the United States. NFL regular season games accounted for 31 out of the top 32 most - watched programs in the fall season and an NFL game ranked as the most - watched television show in all 17 weeks of the regular season. At the local level, NFL games were the highest - ranked shows in NFL markets 92 % of the time. Super Bowls account for the 22 most - watched programs (based on total audience) in US history, including a record 167 million people that watched Super Bowl XLVIII, the conclusion to the 2013 season.
In addition to radio networks run by each NFL team, select NFL games are broadcast nationally by Westwood One (known as Dial Global for the 2012 season). These games are broadcast on over 500 networks, giving all NFL markets access to each primetime game. The NFL 's deal with Westwood One was extended in 2012 and will run through 2017.
Some broadcasting innovations have either been introduced or popularized during NFL telecasts. Among them, the Skycam camera system was used for the first time in a live telecast, at a 1984 preseason NFL game in San Diego between the Chargers and 49ers, and televised by CBS. Commentator John Madden famously used a telestrator during games between the early 1980s to the mid-2000s, boosting the device 's popularity.
The NFL, as a one - time experiment, distributed the October 25, 2015 International Series game from Wembley Stadium in London between the Buffalo Bills and Jacksonville Jaguars. The game was live streamed on the Internet exclusively via Yahoo!, except for over-the - air broadcasts on the local CBS - TV affiliates in the Buffalo and Jacksonville markets.
In 2015, the NFL began sponsoring a series of public service announcements to bring attention to domestic abuse and sexual assault in response to what was seen as poor handling of incidents of violence by players.
Each April (excluding 2014 when it took place in May), the NFL holds a draft of college players. The draft consists of seven rounds, with each of the 32 clubs getting one pick in each round. The draft order for non-playoff teams is determined by regular - season record; among playoff teams, teams are first ranked by the furthest round of the playoffs they reached, and then are ranked by regular - season record. For example, any team that reached the divisional round will be given a higher pick than any team that reached the conference championships, but will be given a lower pick than any team that did not make the divisional round. The Super Bowl champion always drafts last, and the losing team from the Super Bowl always drafts next - to - last. All potential draftees must be at least three years removed from high school in order to be eligible for the draft. Underclassmen that have met that criterion to be eligible for the draft must write an application to the NFL by January 15 renouncing their remaining college eligibility. Clubs can trade away picks for future draft picks, but can not trade the rights to players they have selected in previous drafts.
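A rough Python sketch of the draft-order logic described above follows; the field names and the numeric codes for playoff rounds are conventions invented for the example, and official tie-breakers such as strength of schedule are omitted.

```python
def draft_order(teams):
    """Sketch of the first-round draft order described above.

    Each team is assumed to be a dict with 'name', 'win_pct' and
    'playoff_round' keys, where playoff_round is 0 for a team that missed
    the playoffs, 1-3 for wild card, divisional and conference
    championship exits, 4 for the Super Bowl loser and 5 for the champion.
    Teams eliminated earlier pick earlier, and within each group a worse
    record picks earlier, so the champion picks last and the Super Bowl
    loser next-to-last.
    """
    return [t["name"] for t in sorted(teams, key=lambda t: (t["playoff_round"], t["win_pct"]))]
```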
Aside from the 32 picks each club gets, compensatory draft picks are given to teams that have lost more compensatory free agents than they have gained. These are spread out from rounds 3 to 7, and a total of 32 are given. Clubs are required to make their selection within a certain period of time, the exact time depending on which round the pick is made in. If they fail to do so on time, the clubs behind them can begin to select their players in order, but they do not lose the pick outright. This happened in the 2003 draft, when the Minnesota Vikings failed to make their selection on time. The Jacksonville Jaguars and Carolina Panthers were able to make their picks before the Vikings were able to use theirs. Selected players are only allowed to negotiate contracts with the team that picked them, but if they choose not to sign they become eligible for the next year 's draft. Under the current collective bargaining contract, all contracts to drafted players must be four - year deals with a club option for a fifth. Contracts themselves are limited to a certain amount of money, depending on the exact draft pick the player was selected with. Players who were draft eligible but not picked in the draft are free to sign with any club.
The NFL operates several other drafts in addition to the NFL draft. The league holds a supplemental draft annually. Clubs submit emails to the league stating the player they wish to select and the round they will do so, and the team with the highest bid wins the rights to that player. The exact order is determined by a lottery held before the draft, and a successful bid for a player will result in the team forfeiting the rights to its pick in the equivalent round of the next NFL draft. Players are only eligible for the supplemental draft after being granted a petition for special eligibility. The league holds expansion drafts, the most recent happening in 2002 when the Houston Texans began play as an expansion team. Other drafts held by the league include an allocation draft in 1950 to allocate players from several teams that played in the dissolved All - America Football Conference and a supplemental draft in 1984 to give NFL teams the rights to players who had been eligible for the main draft but had not been drafted because they had signed contracts with the United States Football League or Canadian Football League.
Like the other major sports leagues in the United States, the NFL maintains protocol for a disaster draft. In the event of a ' near disaster ' (less than 15 players killed or disabled) that caused the club to lose a quarterback, they could draft one from a team with at least three quarterbacks. In the event of a ' disaster ' (15 or more players killed or disabled) that results in a club 's season being cancelled, a restocking draft would be held. Neither of these protocols has ever had to be implemented.
Free agents in the National Football League are divided into restricted free agents, who have three accrued seasons and whose current contract has expired, and unrestricted free agents, who have four or more accrued seasons and whose contract has expired. An accrued season is defined as "six or more regular - season games on a club 's active / inactive, reserved / injured or reserve / physically unable to perform lists ''. Restricted free agents are allowed to negotiate with other clubs besides their former club, but the former club has the right to match any offer. If they choose not to, they are compensated with draft picks. Unrestricted free agents are free to sign with any club, and no compensation is owed if they sign with a different club.
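The classification just described can be summarized in a short Python sketch; the helper name and the catch-all branch for players with fewer than three accrued seasons are illustrative assumptions rather than league terminology.

```python
def free_agent_status(accrued_seasons, contract_expired=True):
    """Sketch of the free-agency categories described above.

    An accrued season is one with six or more regular-season games on a
    club's active/inactive, reserve/injured or reserve/PUP lists.
    """
    if not contract_expired:
        return "under contract"
    if accrued_seasons >= 4:
        return "unrestricted free agent"   # may sign with any club, no compensation owed
    if accrued_seasons == 3:
        return "restricted free agent"     # former club may match offers or take draft-pick compensation
    return "fewer than three accrued seasons (not covered above)"
```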
Clubs are given one franchise tag to offer to any unrestricted free agent. The franchise tag is a one - year deal that pays the player 120 % of his previous contract or no less than the average of the five highest - paid players at his position, whichever is greater. There are two types of franchise tags: exclusive tags, which do not allow the player to negotiate with other clubs, and non-exclusive tags, which allow the player to negotiate with other clubs but gives his former club the right to match any offer and two first - round draft picks if they decline to match it.
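The franchise-tag figure described above reduces to a single comparison, sketched below in Python with hypothetical salary figures.

```python
def franchise_tag_salary(previous_salary, top_five_position_salaries):
    """Sketch of the franchise-tag amount described above.

    The tagged player receives the greater of 120% of his previous salary
    or the average of the five highest-paid players at his position; the
    figures used in the example are hypothetical.
    """
    return max(1.20 * previous_salary,
               sum(top_five_position_salaries) / len(top_five_position_salaries))


# A player who earned $10M at a position whose top five average $14M
# would be tagged at $14M rather than $12M.
print(franchise_tag_salary(10_000_000, [16e6, 15e6, 14e6, 13e6, 12e6]))   # 14000000.0
```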
Clubs also have the option to use a transition tag, which is similar to the non-exclusive franchise tag but offers no compensation if the former club refuses to match the offer. Due to that stipulation, the transition tag is rarely used, even with the removal of the "poison pill '' strategy (offering a contract with stipulations that the former club would be unable to match) that essentially ended the usage of the tag league - wide. Each club is subject to a salary cap, which is set at US $143.28 million for the 2015 season, US $10 million more than in 2014 and US $20 million more than in 2013. The salary cap for the 2016 NFL season was $155.27 million. The salary cap for the 2017 NFL season was about $167 million.
Members of clubs ' practice squads, despite being paid by and working for their respective clubs, are also simultaneously a kind of free agent and are able to sign to any other club 's active roster (provided their new club is not their previous club 's next opponent within a set number of days) without compensation to their previous club; practice squad players can not be signed to other clubs ' practice squads, however, unless released by their original club first.
|
andrew maxwell voice over ex on the beach | Andrew Maxwell - wikipedia
Andrew Maxwell (born 3 December 1974) is an Irish stand - up comedian raised in Kilbarrack, Dublin, and now resident in London. Maxwell is best known for being the narrator of the MTV reality television series, Ex on the Beach.
In 1992, at age 18, Maxwell tried stand - up comedy for the first time at a local club run by Ardal O'Hanlon, The Comedy Cellar at The International Bar in central Dublin. Two years later he began to play on the international stand - up circuit, mainly in Britain. Warm - up slots on British television shows hosted by Jonathan Ross and Johnny Vaughan, and on They Think It 's All Over, soon followed. From 1995, he made regular appearances on BBC Two 's Sunday Show, which was transmitted live from Manchester.
Recent credits for Maxwell include RI: SE as the United States correspondent interviewing the likes of Sandra Bullock, Val Kilmer and Jodie Foster; and a regular guest slot on a weekly topical comedy - style chat show The Panel, that ran in Ireland on RTÉ One from 2003 until 2011.
He appeared on the Secret Policeman 's Ball in 2006 (shown on Channel 4 in the UK). He has also appeared on the TV shows Never Mind the Buzzcocks and Mock The Week. He was voted the "King of Comedy '' on the Channel 4 reality TV show of the same name. A Funny Cuts special for E4, called Andrew Maxwell -- My Name Up In Lights and catching Maxwell in performance, aired on 28 July 2006.
In 2007 he was nominated for the if. comedy award for the best show at the Edinburgh Fringe.
Maxwell also hosts his own weekly late - night comedy gig Fullmooners, which mainly takes place in London but also tours around comedy festivals such as Edinburgh. It has featured comics like Russell Brand, Simon Pegg, Tommy Tiernan and Ed Byrne. It also features break - dancers and singers.
He became a regular panelist on Irish current events / comedy show The Panel on Raidió Teilifís Éireann (RTÉ). He recently appeared on Argumental and Celebrity Juice. He also had his own show, entitled Smoke and Mirrors.
He supports Scottish football team Hibernian F.C., mainly due to the club 's Irish heritage.
In 2014 Maxwell featured in Drunk History, shown on Comedy Central, a light - hearted show in which comedians get outrageously drunk before attempting to recount historical events.
Also in 2014, Maxwell became the narrator of Ex on the Beach, an MTV reality television series, and he has narrated every series since the show 's launch.
|
what did the principal do in 13 reasons why | 13 Reasons Why - Wikipedia
13 Reasons Why (stylized onscreen as TH1RTEEN R3ASONS WHY) is an American teen drama web television series developed for Netflix by Brian Yorkey, based on the 2007 novel Thirteen Reasons Why by Jay Asher. The series revolves around seventeen - year - old high school student, Clay Jensen, and his deceased friend Hannah Baker, who has killed herself after having to face a culture of gossip and sexual assault at her high school and a lack of support from her friends and her school. A box of cassette tapes recorded by Hannah in the lead up to her suicide detail thirteen reasons why she ended her life. The series is produced by July Moon Productions, Kicked to the Curb Productions, Anonymous Content and Paramount Television, with Yorkey and Diana Son serving as showrunners.
Dylan Minnette stars as Clay, while Katherine Langford plays Hannah. Christian Navarro, Alisha Boe, Brandon Flynn, Justin Prentice, Miles Heizer, Ross Butler, Devin Druid, Amy Hargreaves, Derek Luke, Kate Walsh, and Brian d'Arcy James also star. A film from Universal Pictures based on Thirteen Reasons Why began development in February 2011, with Selena Gomez set to star as Hannah, before being shelved in favor of a television series and Netflix ordering the show straight to series in October 2015, with Gomez instead serving as an executive producer.
The first season was released on Netflix on March 31, 2017. It received positive reviews from critics and audiences, who praised its subject matter and acting, particularly the performances of Minnette and Langford. For her performance, Langford received a Golden Globe Award nomination for Best Actress in a Drama Series. However, its graphic depiction of issues such as suicide and rape, along with other mature content, prompted concerns from mental health professionals. In response, Netflix added a warning card and, from March 2018, a video that plays at the start of each season warning viewers about its themes.
In May 2017, Netflix renewed 13 Reasons Why for a second season; filming began the next month and concluded that December. The second season was released on May 18, 2018, and received negative reviews from critics and mixed reviews from audiences. A third season was ordered in June 2018 and is set to be released in 2019. Critical and audience reaction to the series has been divided, with the program generating controversy between audiences and industry reviewers.
In season one, seventeen - year - old Clay Jensen returns home from school one day to find a mysterious box on his porch. Inside he discovers seven cassette tapes recorded by Hannah Baker, his deceased classmate and unrequited love, who killed herself two weeks earlier. On the tapes, Hannah unfolds an intensely emotional audio diary, detailing why she decided to end her life. Each person who receives the package of old - style tapes is implicated in one of the reasons she killed herself. Clay is not the first to receive the tapes; the recordings carry instructions for passing them on in a set order after listening, and an additional copy is held by an overseer should the plan go awry. Each tape refers to a different person in Hannah 's life, friend or enemy, who contributed to a reason for her suicide.
In season two, months after Hannah 's suicide, Clay and the other people mentioned on the tapes, as well as close friends and Hannah 's family members, become embroiled in a civil legal battle between Hannah 's parents and Liberty High School. Alleging negligence on the part of the school, Hannah 's mother pursues her perception of justice, while her reluctance to settle pre-trial and her personal circumstances eventually break up her marriage with Hannah 's father. The story unfolds through the testimony of those called to the stand at trial, each of whom recounts part of Hannah 's story.
Clay, who perceives himself as Hannah 's failed protector, embarks on an investigation using whatever evidence he can find in an effort to influence the civil case between Hannah 's parents and the school. Clay also endeavours to expose the corrupt culture of the high school and its favouring of wealthy, sports - savvy male students over the average student, a culture that especially compromises the safety and dignity of young women such as Hannah.
Throughout season two, Clay appears to be communicating with the ghost of Hannah as a plot narrative device.
Clay Jensen finds a box filled with audio cassette tapes anonymously left on his front doorstep. He plays the first in his father 's boombox and realizes they have been recorded by his recently deceased classmate Hannah Baker, before he accidentally drops and breaks the boombox when surprised by his mother. Clay steals his friend Tony 's Walkman to continue listening. Clay listens to the first tape, in which Hannah begins to relate the experiences that led to her suicide. She starts by sharing the story of her first kiss, with Justin Foley, who goes on to inadvertently spread a salacious rumor that begins the sequence of events leading to her suicide. Clay is revealed, through numerous short flashbacks, to have been in love with Hannah and to have worked with her at the local movie theater. It is revealed in this episode that Hannah has put her friend Tony in charge of the tapes.
Hannah reminisces about her friendship with two other new students: Jessica, who moves frequently because her father is in the Air Force, and Alex, whom they met at a coffee shop. Jessica and Alex eventually begin a relationship and stop spending time with Hannah. When Alex breaks up with Jessica, she very publicly blames Hannah. In the present, Hannah 's mother, Olivia, finds a note in her daughter 's textbook that leads her to believe Hannah was being bullied. Clay asks Jessica about the tapes, which results in Bryce Walker 's circle of peers meeting to discuss how Clay is listening to Hannah 's recordings.
As Clay attempts to pursue a romantic relationship with Hannah, her relationships are threatened by a "best / worst list '' made by Alex Standall, who has put a "target '' on Hannah. In the present, Hannah 's mother, Olivia Baker, seeks out the school principal about her suspicion of bullying and makes a disturbing discovery. In the midst of his investigation, Clay turns to Alex for answers, who not only feels regret for his actions on the tapes, but also warns Clay against trusting Tony, whom Clay later sees in a violent exchange with his brothers. As Justin tries to recuperate from his recent slump, Bryce strong - arms Clay and Alex into a drinking contest in an alleyway.
Hannah hears someone outside her window, and confesses to her friend, Courtney, that she has a stalker. Courtney offers to help her catch the offender in the act. While waiting for the stalker to arrive, they play an alcohol - fueled game of truth or dare that leads to the two of them kissing on Hannah 's bed. The stalker, school photographer Tyler Down, takes a photo of the girls and sends it around the school. This effectively ends Courtney and Hannah 's friendship as Courtney distances herself from Hannah to avoid being revealed as one of the people in the photograph. In the present, Clay goes to Hannah 's house and talks to her mother, though is unable to admit how close he and Hannah were. He also confronts Tony about the incident with his brothers. Tony responds that "people have to make their own justice '' and proves he has an extra set of tapes. Inspired by this, Clay takes a naked picture of Tyler and sends it around the school in revenge.
Courtney, afraid of her classmates finding out about her sexuality, spreads a rumor that the girls in the leaked photos are Hannah and Laura, an openly lesbian classmate. Courtney also adds to the rumor about Hannah and Justin, worsening Hannah 's poor reputation. In the present, Clay takes Courtney to visit Hannah 's grave. She leaves, not ready to face her involvement in the loss of her classmate or be more open about her sexuality. Tony arrives with Clay 's bike and gives him a tape with the song he and Hannah danced to at the Winter Formal. Later, Justin, Zach and Alex force Clay into the car with them by stealing his bike and scare him into silence about the tapes by driving over the speed limit. They are pulled over by the police but face no consequences as the officer is revealed to be Alex 's father. Clay denies knowing Hannah to his mother, who has been asked to represent the school in the lawsuit the Bakers are bringing.
Hannah 's date on Valentine 's Day with Marcus does not go as planned due to the rumors that she is promiscuous. In the present, Alex gets into a fight with Montgomery and they both must appear before the student honor board. Clay helps Sheri on an assignment, and they nearly hook up, but Sheri reveals she is only there because she is on the tapes and wants Clay to like her despite her role in Hannah 's death.
After Hannah refuses to go out with Zach, he gets revenge by sabotaging her emotionally during a class project. Zach removes compliments from Hannah 's box, affecting her self - confidence. In the present, Clay hears Zach 's tape and keys his car in an act of revenge, but things turn out to be different than they appeared. Clay is now having both auditory and visual hallucinations of Hannah during the day, including seeing her dead body on the floor of the basketball court during a game and hearing her tape playing over the school 's intercom system. He returns the tapes to Tony, unable to continue listening.
Hannah is touched by poetry recited by fellow student Ryan Shaver, and joins the Evergreen Poetry Club, a place where people write and perform their own poetry, and listen and critique others. Hannah presents some extremely revealing and confessional poetry at the poetry club after Ryan encourages her. Ryan betrays her by publishing the poem without her knowledge or consent in his school magazine. Almost everyone in school finds the poem hilarious, but Clay is both touched and disturbed by it, not realizing Hannah is the author. In the present day, Tony confides to Clay about the night of Hannah 's death, and Clay takes back the tapes. Clay later gives the poem to Hannah 's mother.
While hiding in Jessica 's room during a party, Hannah witnesses Bryce Walker raping an unconscious and intoxicated Jessica. In the present, Marcus warns Clay the worst is yet to come and again attempts to scare him into silence about the tapes, this time by planting drugs in his backpack to get him suspended from school. Clay finally admits to his mother that he and Hannah were close. After getting suspicious legal advice from his mother, he goes to Justin 's apartment to retrieve his bike and talk about getting justice for Jessica. Justin finally admits that what happened in the tapes is real, and claims it is better if Jessica does not know the truth.
After the party, Hannah gets a ride home from her classmate, cheerleader Sheri Holland. They have what appears to be a minor accident, knocking over a stop sign. While Hannah wants to call the police to report it, Sheri refuses to do so, because she is afraid she will get in trouble. While Hannah is on her way to find a phone to call the authorities, the downed stop sign causes a serious accident at that intersection, resulting in the death of Clay 's friend Jeff Atkins, which was incorrectly considered a drunk driving accident. When Hannah tries to tell Clay about the stop sign, he pushes her away, thinking she is being unnecessarily dramatic. In the present, Jessica 's behavior becomes more erratic. Clay finds out that Sheri is trying to make up for her mistake in her own way, and he tells Jeff 's parents that Jeff was sober when he died.
With Tony 's support, Clay finally listens to his tape and is overcome with guilt to the point of contemplating his own suicide because he feels he did not do enough to prevent Hannah 's death. Tony manages to calm him down. Justin finds out Jessica is at Bryce 's home. He confronts her there and admits that Bryce raped her on the night of the party, causing her to break up with him. Olivia Baker finds a list with the names of all the people on the tapes, although she does not know what the list means.
After accidentally losing her parents ' store 's earnings, a depressed Hannah stumbles upon a party being thrown by Bryce. The night ends in tragedy when she ends up alone with him, and he rapes her in his hot tub. This leads Hannah to create a list of people (the one that her mother found in the previous episode) who she feels were responsible for leading her to her current circumstances, which becomes the inspiration for the creation of the tapes. In the present, everyone on Hannah 's list is subpoenaed to testify in the lawsuit between the Bakers and the school. The subjects of the tapes disagree over what to do. Tyler eventually suggests they pin everything on Bryce, but Alex refuses and says they should tell the truth. Sheri turns herself in. Clay goes to Bryce 's house, on the pretext of buying marijuana, to confront him about the events of the night he raped Hannah. Clay provokes Bryce to attack him and is badly beaten. However, Clay has been secretly recording their conversation and gets Bryce to admit that he raped Hannah. An unknown teenager with a gunshot wound to the head is treated by paramedics.
Hannah begins to record the tapes and then visits Mr. Porter to tell him about her rape. Hannah secretly records the conversation, hoping he will help her. When he does not, she heads to a post office and mails the tapes to Justin Foley; she then goes home and takes her own life by slitting her wrists. In the present, Clay gives Tony the tape of his conversation with Bryce to copy. He confronts Mr. Porter about meeting with Hannah on her last day. He also hands over the tapes, including the additional tape containing Bryce 's confession. Clay tells Porter that he is the subject of the final tape. The depositions begin; Marcus and Courtney deny their involvement in Hannah 's death as much as possible, while Zach and Jessica admit their mistakes. Before his deposition, Tyler hides ammunition and guns in his room, and then reveals the existence of the tapes during his interview. Alex is revealed to have been the teenager with the gunshot wound; he is in critical condition at the hospital. Justin leaves town out of guilt, but not before telling Bryce about the tapes. Jessica finally tells her father about her rape. At school, Clay reaches out to Skye Miller, his former friend, to avoid repeating the same mistakes he made with Hannah.
Five months after the events of the first season, Hannah 's trial moves to court. Tyler is the first to testify in the trial and does so truthfully. Skye and Clay are dating, but Clay starts to have hallucinations of Hannah. Mr. Porter confronts Bryce in the bathroom about raping Hannah. Jessica returns to school, as does Alex who survived his suicide attempt, but has lost much of his memory from before it, including the contents of Hannah 's tapes. Tony is given the note Hannah left him the night she died, and is later seen burning it. Clay finds a Polaroid photograph in his locker, with a note saying "Hannah was n't the only one ''.
Courtney reveals that she is a lesbian and had feelings for Hannah during her testimony. A group of protesters gather at the court to demand justice for Hannah, but Jessica and Alex are both threatened to avoid revealing anything incriminating when they testify. Skye and Clay fight over her suspicion that Clay is still in love with Hannah, and Skye is hospitalized soon after leaving Clay 's house. Meanwhile, Tyler befriends a classmate named Cyrus.
Clay, riding home on his bicycle, is hit intentionally by a car, injuring him slightly. He visits Skye in hospital, but she breaks up with him. Clay and Alex try to encourage Jessica to reveal information about Bryce during her testimony, but she fails after seeing incriminating pictures of her stuck to the board in a classroom. Olivia asks her afterwards if she was the girl on the ninth tape, but Jessica does not answer. After discovering Jessica had been contacted by Justin, Clay finds him homeless in Oakland with Tony 's help. With no other option, Clay lets Justin stay in his bedroom with him. Skye 's parents move her to a psychiatric facility, and tell Clay not to contact her. Tyler meets the rest of Cyrus ' friends while Bryce is asked to testify.
Marcus lies about what happened with Hannah the night they went out on Valentine 's Day during his testimony (in order to protect his reputation) and briefly mentions Bryce, angering him. Cyrus and Tyler hear of Marcus ' lies and prank him, going to a nearby field afterwards to shoot guns. Clay finds out that Justin has been taking heroin and he and Sheri help him onto the path to sobriety. Jessica shows the threatening note she was left before her testimony to Mr Porter. Alex continues to be frustrated about not being able to remember anything and asks Clay for the tapes, who sends them to him. Jessica and Alex skip school and share a kiss. Clay also finds a second Polaroid photograph in his locker, which shows Bryce having sex with an unconscious girl, alongside a note saying "he wo n't stop ''.
Tyler is confronted by Mr. Porter, who suspects he was behind the pictures of Jessica found in the classroom before her testimony, but he denies involvement. Ryan testifies and talks about Hannah 's poems, saying they were written about Justin and that she and Justin maintained contact even after falling out. Afterwards, Olivia invites Ryan to help her decipher Hannah 's poems for additional clues, but Ryan soon leaves after Olivia mentions missing pages in Hannah 's journal, which Ryan had torn out. Clay realizes the Polaroid photos were taken at school and attempts to find out where. Chlöe meets with Bryce 's parents and his mother notices bruises on her. Jessica attends her first group therapy session. Mr. Porter finds a brick thrown through his car window, with a threatening note attached; he later confronts Justin 's mother and is arrested after a violent incident with her boyfriend.
Zach testifies and reveals that he and Hannah had a romantic relationship the summer before she died, but they kept it secret. After the testimony, Clay reacts angrily and confronts Zach, ignoring his apologies, while Bryce teases Zach about his relationship, prompting a small fight between them. Justin returns to school and talks to Jessica, but she asks him to leave. He then faints after seeing Bryce, and on his return to Clay 's house, has to hide as someone breaks in, at which point Clay 's parents find out he has been staying there, but allow it to continue.
During Clay 's testimony, he is forced to reveal he and Hannah did drugs at a small party one night and spent the night together, and Clay ignored a comment Hannah made the next morning about wanting to die. Alex 's birthday party is derailed after a number of arguments break out. When Clay leaves the birthday party, he finds a Polaroid photograph left on his car, with a note reading "The Clubhouse ''. After reading comments posted online about his testimony, Clay anonymously uploads Hannah 's tapes to the Internet. Meanwhile, Bryce is seen having sex with Chlöe without getting proper consent.
After the release of the tapes, Bryce returns to school to find his locker vandalized and his "confession tape '' Clay recorded being shared among students. After Marcus is blackmailed, he calls Bryce a rapist during a speech at a ceremony, in front of a large group of parents and students, in order to protect his own reputation. Clay finally contacts Skye again and meets with her at the psychiatric facility, but she tells him she is moving to a different state. Justin overdoses on heroin, but Alex saves his life -- he then returns to his mother 's home.
When testifying, Mr. Porter reveals that since Hannah 's death he has come to believe that Hannah was raped by Bryce. He then emotionally apologizes to Hannah 's mother for the part he played in her suicide. Justin steals money from his mother 's boyfriend and, when confronted by his mother, leaves her some of the money, suggesting she leave too in order to escape the relationship. Bryce confronts and threatens Clay under the assumption that it was Clay who blackmailed Marcus into publicly accusing Bryce of rape. Later, Clay is violently beaten at school by four masked students. He is then approached by Cyrus, who invites him to join him and Tyler in vandalizing the school that evening, but when he does, he sees a group of students entering a storage shed next to the baseball field, which he correctly guesses is the location of The Clubhouse. He texts Justin and they reconvene. Meanwhile, Olivia contacts a girl, Sarah, and her mother and asks them not to testify.
Tony is asked to testify, but chooses not to reveal that Hannah left him her tapes because he owed her a favor after she helped him evade arrest. During Sarah 's testimony, she reveals Hannah was part of a trio of girls who bullied her at another high school. After an argument between Tyler and Mackenzie, his friendship with Cyrus breaks down. Offering marijuana, Sheri tempts some male students into taking her to The Clubhouse, where Bryce takes a picture of her and two other boys on a Polaroid camera, placing the photograph in a box filled with many others. She learns the code to unlock the door and shares it with Clay and Justin. During a baseball game, Zach confronts Bryce, tells him he knows Hannah was not lying, and quits the game. He goes to The Clubhouse to find Clay and Justin there, and hands Clay the box of Polaroid photographs taken in the Clubhouse, confessing that it was him who had given Clay the first three photographs. Clay reviews the photographs at home with Justin and Sheri, and they find a pair of photographs which show Bryce raping Chlöe. Clay also finds a picture of Hannah.
While testifying, Bryce lies and claims that he and Hannah had a casual sexual relationship, and that she falsely accused him of rape after he brought an end to it. When Bryce returns to school, Justin attacks him and a fight breaks out, which evolves into a mass brawl. Jessica shows Chlöe the two pictures of Bryce and her in The Clubhouse, and Chlöe confesses that she posted the pictures of Jessica in the classroom before she testified. Olivia, her legal team, and Jessica ask Chlöe to testify, and she agrees, but on the stand she testifies that she remembers Bryce having sex with her and remembers consenting. The box of Polaroid photographs taken from The Clubhouse is stolen from Clay 's car, and Alex is sent a package containing a gun and a threatening letter. Bryce 's mother later asks him whether he was telling the truth in his testimony, and, after being pressed, he confesses to raping Hannah. Flashbacks reveal that Bryce wanted a relationship with Hannah and was rejected himself. Clay becomes mentally tormented by hallucinations of Hannah, to the point where he contemplates both murdering Bryce and killing himself, but Justin manages to calm him down.
Justin receives a death threat before going to testify, but he tells of Bryce raping Jessica during his testimony nonetheless. After Alex realizes that Montgomery is responsible for intimidating people during the trial, Alex, Clay, Justin, Tony, Zach, and Scott confront Montgomery and he admits to stealing the box of Polaroid photos. However, after Montgomery takes Alex to a deserted location to retrieve them, he reveals he was lying and escapes. As a result, Jessica is encouraged by her friends to report her case of sexual assault to the police. After the Baker trial concludes and the jury find the school district not responsible for Hannah 's death, both Bryce and Justin are arrested outside the courtroom for their involvement in Jessica 's rape. Mr. Porter is fired after a performance review, and Tyler is placed on a diversion program after one of his social media posts reveals it was him who vandalized the school.
Universal Studios purchased film rights to the novel on February 8, 2011, with Selena Gomez cast to play Hannah Baker. On October 29, 2015, it was announced that Netflix would be making a television adaptation of the book with Gomez instead serving as an executive producer. Tom McCarthy was hired to direct the first two episodes. The series is produced by Anonymous Content and Paramount Television with Gomez, McCarthy, Joy Gorman, Michael Sugar, Steve Golin, Mandy Teefey, and Kristel Laiblin serving as executive producers.
Filming for the show took place in the Northern Californian towns of Vallejo, Benicia, San Rafael, Crockett and Sebastopol during the summer of 2016. The 13 - episode first season and the special were released on Netflix on March 31, 2017.
Therapy dogs were present on set for the actors because of the intense and emotional content of the series.
On May 7, 2017, it was announced that Netflix had renewed the series for a second season. Filming for the second season began on June 12, 2017, but was briefly halted in October in response to the then - ongoing Northern California wildfires happening around the areas where the series was being filmed. Production on the second season wrapped in December 2017. The second season was released on May 18, 2018.
On June 6, 2018, Netflix renewed the series for a third season, which is set to be released in 2019.
The marketing analytics firm Jumpshot determined the first season was the second-most viewed Netflix season in the first 30 days after it premiered, garnering 48 % of the viewers that the second season of Daredevil received, which was the most viewed season according to Jumpshot. The series also showed an 18 % increase in week - over-week viewership from week one to week two. Jumpshot, which "analyzes click - stream data from an online panel of more than 100 million consumers '', looked at the viewing behavior and activity of the company 's U.S. members, factoring in the relative number of U.S. Netflix viewers who watched at least one episode of the season.
The first season has received positive reviews from critics, with much of the praise for the show being aimed at its acting, directing, story, visuals, improvements upon its source material, and mature approach to dark and adult subject matter. The review aggregator website Rotten Tomatoes reported an 80 % approval rating with an average rating of 7.2 / 10, based on 51 reviews. The website 's critical consensus reads, "13 Reasons Why complements its bestselling source material with a gripping look at adolescent grief whose narrative maturity belies its YA milieu. '' Metacritic, which uses a weighted average, assigned a score of 76 out of 100, based on 17 critics, indicating generally favorable reviews.
Jesse Schedeen of IGN praised 13 Reasons Why, giving it a 9.2 out of 10, "Amazing '', stating that the show is "a very powerful and hard - hitting series '' and "ranks among the best high school dramas of the 21st century ''. Matthew Gilbert of The Boston Globe gave a glowing review for the show, saying, "The drama is sensitive, consistently engaging, and, most importantly, unblinking. '' Maureen Ryan of Variety asserts that the show "is undoubtedly sincere, but it 's also, in many important ways, creatively successful '' and called it "simply essential viewing ''. Leah Greenblatt of Entertainment Weekly gave the entire season a score of B+, calling the show "a frank, authentically affecting portrait of what it feels like to be young, lost and too fragile for the world ''. Daniel Feinberg of The Hollywood Reporter also praised the show, calling it "an honorably mature piece of young - adult adaptation '', and citing its performances, direction, relevance and maturity as some of the show 's strongest points.
The acting, particularly Katherine Langford as Hannah and Dylan Minnette as Clay, was frequently mentioned and widely lauded in several reviews. Schedeen of IGN praised the cast, particularly Minnette and Langford, stating: "Langford shines in the lead role... (and) embodies that optimism and that profound sadness (of Hannah 's) as well. Minnette 's Clay is, by design, a much more stoic and reserved character... and does a fine job in what 's often a difficult role. '' Gilbert of The Boston Globe praised the chemistry of Langford and Minnette, saying that "watching these two young actors together is pure pleasure '', while Schedeen of IGN also agreed, saying that they are "often at their best together, channeling just the right sort of warm but awkward chemistry you 'd expect from two teens who ca n't quite admit to their feelings for one another ''. Feinberg of The Hollywood Reporter also praises both actors: "Langford 's heartbreaking openness makes you root for a fate you know is n't possible. The actress ' performance is full of dynamic range, setting it against Minnette 's often more complicated task in differentiating between moods that mostly go from uncomfortable to gloomy to red - eyed, hygiene - starved despair. ''
Ryan of Variety also gave praise to not only the two leads, but also the supporting cast of actors, particularly Kate Walsh 's performance as Hannah 's mother, which Ryan describes as "career - best work ''. Positive mentions from various critics, such as Ryan, Feinberg and Schedeen, were also given to the supporting cast of actors (most particularly Alisha Boe, Miles Heizer and Christian Navarro 's respective performances of Jessica, Alex and Tony). Liz Shannon Miller of Indiewire, who enjoyed the show and gave it a positive score of B+, gave praise to the racial, gender and complex diversity of its supporting cast of teens.
Another aspect frequently mentioned within reviews was the show 's mature and emotional approach to its dark and adult subject matter. Critics such as Miller of Indiewire reviewed this favorably; she wrote that "the adult edges to this story ring with honesty and truth. '' Miller and Feinberg of The Hollywood Reporter also stated that the show can be difficult to watch at times, while Schedeen of IGN states that it is "an often depressing and even uncomfortable show to watch... a pretty emotionally draining experience, particularly towards the end as the pieces really start to fall into place. ''
Numerous critics also praised several other aspects of the show. Feinberg highlighted the show 's directors, saying: "A Sundance - friendly gallery of directors including Tom McCarthy, Gregg Araki and Carl Franklin keeps the performances grounded and the extremes from feeling exploitative '', while Gilbert of The Boston Globe praised the storytelling: "The storytelling techniques are powerful... (as it) builds on the world established in the previous hour, as we continually encounter new facets of Hannah 's life and new characters. The background on the show keeps getting deeper, richer. ''
Conversely, the series has also received criticism over its portrayal of teen angst. Mike Hale of The New York Times wrote a critical review, writing, "the show does n't make (Hannah 's) downward progress convincing. It too often feels artificial, like a very long public service announcement. '' He also criticized the plot device that has Clay listening to the tapes one by one instead of all in one sitting like the other teens did, which Hale felt was unbelievable: "It makes no sense as anything but a plot device, and you 'll find yourself, like Clay 's antagonists, yelling at him to listen to the rest of tapes already. ''
Writing for The Guardian, Rebecca Nicholson praised some aspects of the show, including the performances from Minnette and Walsh, but was troubled by much of the plot, writing, "a storyline that suggests the love of a sweet boy might have sorted all this out added to an uneasy feeling that stayed with me ''. Nicholson was skeptical that the show would appeal to older viewers, unlike other series set in high school such as Freaks and Geeks and My So - Called Life: "It lacks the crossover wit of its forebears... It 's too tied up in conveying the message that terrible behaviour can have horrible consequences to deal in any subtleties or shades of feeling. It 's largely one - note -- and that note is horrifying. ' It has to get better, ' implores one student towards the end, but given its fairly open ending, an apparent season two setup, it does not seem as if there 's much chance of that happening. ''
Washington Post television critic Hank Stuever wrote a negative review, finding 13 Reasons Why "contrived '' and implausible: "There are 13 episodes lasting 13 super-sullen hours -- a passive - aggressive, implausibly meandering, poorly written and awkwardly acted effort that is mainly about miscommunication, delivering no more wisdom or insight about depression, bullying and suicide than one of those old ABC Afterschool Specials people now mock for being so corny. '' He also wrote that he found Hannah 's suicide tapes "a protracted example of the teenager who fantasizes how everyone will react when she 's gone. The story... strikes me as remarkably, even dangerously, naive in its understanding of suicide, up to and including a gruesome, penultimate scene of Hannah opening her wrists in a bathtub. ''
David Wiegand of the San Francisco Chronicle gave the series a tepid review, saying that it was plagued by character inconsistencies, particularly Hannah. He praised Langford 's "stunning performance '' but noted, "There are times when we simply do n't believe the characters, when what they do or say is n't consistent with who we 've been led to believe they are... At times, (Hannah) is self - possessed and indifferent at best to the behavior of the popular kids. At other times, though, relatively minor misperceived slights seem to send her into an emotional tailspin. No doubt, teenagers embody a constant whirl of conflicting emotions, but the script pushes the bounds of credibility here and there. '' He noted that overall, the series worked: "The structure is gimmicky and the characters inconsistent, but there are still at least 13 Reasons Why the series is worthy. ''
The second season received largely mixed to negative reviews from critics, with criticism aimed at the poor execution of its topics; many declared it unnecessary. Rotten Tomatoes reports an approval rating of 27 % with an average rating of 5.5 / 10, based on 25 reviews. The site 's critical consensus states, "By deviating from its source material, 13 Reasons Why can better explore its tenderly crafted characters; unfortunately, in the process, it loses track of what made the show so gripping in the first place. '' On Metacritic the season has an average score of 49 out of 100, based on 16 critics, indicating "mixed or average reviews ''.
Catherine Pearson from DigitalSpy wrote a negative review, calling the season "even more problematic '' than the first. She ends the review saying that, "Unrelenting depression seems to shroud the season, briefly lifted only to collapse back down as the show 's thirteenth episode, once again, delivers a deeply disturbing scene of suffering. '' Jordan Davidson from The Mighty wrote that he "felt sick '' after watching the final episode of the season.
A scene in which the character Tyler is attacked and sexually assaulted during the finale also caused controversy from fans and critics of the series, with some describing it as "unnecessary '' and "traumatizing ''. The series ' showrunner has defended the scene, saying that it was included in an attempt to "(tell) truthful stories about things that young people go through in as unflinching a way as we can ''.
The series has generated controversy over its portrayal of suicide and self - harm, prompting Netflix to add strong advisory warnings prior to the first, twelfth, and thirteenth episodes. School psychologists and educators expressed concern about the series.
The superintendent of Palm Beach County, Florida schools reportedly told parents that their schools had seen an increase in suicidal and self - harming behavior from students, and that some of those students "have articulated associations of their at - risk behavior to the 13 Reasons Why Netflix series ''.
The Australian youth mental health service for 12 -- 25 year - olds, Headspace, issued a warning in late April 2017 over the graphic content featured in the series, due to the increased number of calls to the service following the show 's release in the country. Netflix however, demonstrably complied with the Australian viewer ratings system, by branding the series as "MA15 + '' when streamed via its own interface. They accompanied its presentation with additional warnings and viewer advice, and ensured that counselling referrals were included and not easily skipped at the conclusion of each episode, even including an Australian accent in the voice over for those referrals every fifth episode. The "MA15 + '' rating in Australia requires a guardian to accompany a minor to an Australian cinema to allow admission of those under 18 years of age. "MA15 + '' also prohibits the admission of anyone under the age of 15, requiring proof - of - age identification.
In response to the graphic nature of the show and New Zealand 's high youth suicide rate, which was the highest among the 34 OECD countries during 2009 to 2012, the Office of Film & Literature Classification in the country created a new rating, "RP18 '', allowing individuals aged 18 and over to watch the series alone and those below having to watch it with supervision from a parent or guardian.
In April 2017, the National Association of School Psychologists (NASP) in the United States released a statement regarding the series, saying: "Research shows that exposure to another person 's suicide, or to graphic or sensationalized accounts of death, can be one of the many risk factors that youth struggling with mental health conditions cite as a reason they contemplate or attempt suicide. '' NASP sent a letter to school mental health professionals across the country about the series, reportedly a first for NASP in response to a television show. The following month, the United States Society of Clinical Child and Adolescent Psychology (SCCAP) released a statement also noting how strongly the show may serve as a trigger for self - injury among vulnerable youth. They lamented the depiction of mental health professionals as ineffective for youth who have experienced trauma and may have been considering suicide. The statement implored Netflix to add a tag following each episode with mental health resources, and a reminder that depression and suicide can be effectively treated by a qualified mental health professional, such as a clinical child psychologist, using evidence - based practice.
Similarly, clinical psychologists such as Daniel J. Reidenberg and Erika Martinez, as well as mental health advocate MollyKate Cline of Teen Vogue magazine, have expressed concerns regarding the risk of suicide contagion. However, Eric Beeson, a counselor at The Family Institute at Northwestern University noted that "it 's unlikely that one show alone could trigger someone to attempt suicide. '' Mental health professionals have also criticized the series ' depiction of suicide itself, much of which violates widely promulgated recommendations for reporting on actual suicides or not depicting them in fiction, in order to not encourage copycat suicides. The season finale, which depicts Hannah 's suicide in graphic detail, has been particularly criticized in this regard. Nic Sheff, a writer for the show, has defended it as intended to dispel the myth that suicides "quietly drift off '', and recalled how he himself was deterred from a suicide attempt by recalling a survivor 's account of how painful and horrifying it was.
The NASP statement also criticized the show 's suggestion that bullying alone led Hannah to take her life, noting that while it may be a contributing factor, suicidal ideations far more often result from the bullied person having a treatable mental illness without adequate coping mechanisms. Alex Moen, a school counselor in Minneapolis, took issue with the show 's entire plotline as "... essentially a fantasy of what someone who is considering suicide might have -- that once you commit suicide, you can still communicate with your loved ones, and people will suddenly realize everything that you were going through and the depth of your pain... That the cute, sensitive boy will fall in love with you and seek justice for you, and you 'll be able to orchestrate it, and in so doing kind of still be able to live. '' Other counselors criticized the depiction of Hannah 's attempt to reach out to Mr. Porter as dangerously misleading, since not only does he miss obvious signs of her suicidal ideations, but says he can not report her sexual assault to the police without her identifying the assailant. School counselors are often portrayed as ineffective or clueless in popular culture, Moen says, but Porter 's behavior in the series goes beyond that, to being unethical and possibly illegal. "It 's ridiculous! Counselors are not police. We do n't have to launch an investigation. We bring whatever information we do have to the police '', she told Slate.
In May 2017, the Canadian Mental Health Association (CMHA) along with the Centre for Suicide Prevention (CSP) released a statement with similar concerns to the ones raised by NASP. CMHA believed that the series may glamorize suicide, and that some content may lead to distress in viewers, particularly in younger viewers. Furthermore, the portrayal of Hannah 's suicide does not follow the media guidelines as set out by the Canadian Association for Suicide Prevention (CASP) and the American Association of Suicidology. CMHA and CASP did praise the show for raising awareness about "... this preventable health concern, '' adding that, "Raising awareness needs to be done in a safe and responsible manner. A large and growing body of Canadian and international research has found clear links between increases in suicide rates and harmful media portrayals of suicide. '' Ways in which the portrayals of suicide may cause harm, according to CMHA and CASP, include the following: "They may simplify suicide, such as, by suggesting that bullying alone is the cause; they may make suicide seem romantic, such as, by putting it in the context of a Hollywood plot line; they may portray suicide as a logical or viable option; they may display graphic representations of suicide which may be harmful to viewers, especially young ones; and / or they may advance the false notion that suicides are a way to teach others a lesson. ''
One study found the release of 13 Reasons Why corresponded with between 900,000 and 1,500,000 more suicide - related searches in the United States, including a 26 % increase in searches for "how to commit suicide, '' an 18 % increase for "commit suicide, '' and a 9 % increase for "how to kill yourself. '' A review, however, found that it is unclear if searching for information about suicide on the Internet relates to the risk of suicide.
With the release of the first season of the series, Netflix also released 13 Reasons Why: Beyond the Reasons, an aftershow documentary television film. The 29 - minute documentary featured cast and crew of the series, and mental health professionals discussing their experiences working on the show and dealing with difficult issues, including bullying, depression and sexual assault. A second Beyond the Reasons special was released with the second season of the series.
|
who played the female alien in galaxy quest | Missi Pyle - wikipedia
Andrea Kay "Missi '' Pyle (born November 16, 1972) is an American actress and singer. She has appeared in several films, including the award - winning films The Artist, Galaxy Quest, DodgeBall: A True Underdog Story, Big Fish, 50 First Dates, Charlie and the Chocolate Factory, Harold & Kumar Escape from Guantanamo Bay and Gone Girl.
She has also appeared in various television roles, on shows such as Mad About You, Friends, Heroes, Two and a Half Men, Frasier, My Name Is Earl, 2 Broke Girls, Jennifer Falls, and Mom. She is also half of Smith & Pyle, a desert country - rock band, with actress Shawnee Smith, and has appeared in the music video for Foo Fighters ' single "Run '' as a nurse.
Pyle, the youngest daughter of Linda and Frank Pyle, was born in Houston, Texas, and raised in Memphis, Tennessee. She has two older sisters, Debbie and Julie, two older brothers, Sam and Paul, one younger half - brother, Gordon, and a half - sister, Meredith. Pyle attended the North Carolina School of the Arts in Winston - Salem, North Carolina and graduated in 1995. For her achievements, Pyle was honored by the Poplar Pike Playhouse at her alma mater Germantown High School in Germantown, Tennessee.
Pyle has guest starred on many television shows, including Heroes, Mad About You, Boston Legal, Frasier, The Sarah Silverman Program, and 2 Broke Girls. She started her film career with a minor role in As Good as It Gets, starring Helen Hunt and Jack Nicholson. After her breakout role in Galaxy Quest, she had supporting roles in Bringing Down the House (for which she and Queen Latifah were nominated for an MTV Movie Award for Best Fight), Josie and the Pussycats, Home Alone 4, Exposed, Big Fish, Along Came Polly, Soul Plane, Stormbreaker, and Charlie and the Chocolate Factory. She was also the female lead in BachelorMan. She also appears in Dodgeball: A True Underdog Story. She also had a brief appearance in 50 First Dates and starred in A Cinderella Story: Once Upon a Song.
Pyle played Jake 's Elementary School teacher Ms. Pasternak on Two and a Half Men. The role was recast in 2009 for undisclosed reasons, but Pyle returned to the series in the same role in 2011 and again in the series finale "Of Course He 's Dead '' in 2015.
In 2008, Missi starred in the Broadway play Boeing - Boeing opposite Christine Baranski, Mark Rylance, Greg Germann, Paige Davis and Rebecca Gayheart. The play closed in January 2009.
Pyle was scheduled to ring the closing bell for the trading day at the New York Stock Exchange on September 29, 2008 to promote Boeing - Boeing, but backed out minutes before the close of trading in order to not associate herself with ending a calamitous day of financial turmoil which resulted in the Dow Jones Industrial Average losing 778 points, the largest point loss ever in a session in the wake of the global financial crisis of September -- October 2008. She watched from the floor as an NYSE staffer pushed the button and gaveled the end of trading instead.
She was part of a country music group with actress Shawnee Smith called Smith & Pyle. The two actresses met while filming an ABC comedy pilot titled Traveling in Packs. The band started after Smith invited Pyle to join her in attending the Coachella Valley Music and Arts Festival. While stuck in traffic, Pyle talked about her dream to be a rock star and Smith agreed to form a band with her. Their first album, "It 's OK to Be Happy '', was released digitally through iTunes and Amazon.com in July 2008. The debut album was recorded in Joshua Tree, California at Rancho de la Luna and was produced by Chris Goss. The two actresses have also become business partners and formed their own record label called Urban Prairie Records, under which "It 's OK to be Happy '' was released. According to an interview Smith did with pretty-scary.net in August 2008, a Smith & Pyle television or webisode series might be in the works. Smith also mentioned the idea of a series on Fangoria radio with Dee Snider. Currently there are 3 videos posted on YouTube that show the making of the record. They also have a 10 - minute "making of '' video on Vimeo called Smith & Pyle: Desert Sessions.
Pyle was married to author Antonio Sacre from 2000 to 2005.
Pyle married naturalist Casey Anderson on September 12, 2008. The wedding was country - western themed and took place in Montana. Wedding guests included bandmate Shawnee Smith and comedian Steve Agee. Anderson 's pet grizzly bear, Brutus, served as his best man in the ceremony. Pyle and Anderson appeared in "Grizzly Dogs '', the October 22, 2010 episode of Dog Whisperer, in which they sought the help of Cesar Millan to rehabilitate their dog. Pyle confirmed their breakup in 2013.
Pyle "married '' actress and bandmate Shawnee Smith in a faux ceremony at the All Love is Equal Launch Party in West Hollywood on November 18, 2009. The two actresses pretended to get married in support of repealing Prop 8 in California. Actor Hal Sparks dressed as a priest and performed the ceremony with them using rainbow - colored hula hoops as rings.
The drag queen "Pissi Myles '' is named after her.
|
where would muscle c most likely be found | Muscle fatigue - wikipedia
Muscle fatigue is the decline in ability of a muscle to generate force. It can be a result of vigorous exercise but abnormal fatigue may be caused by barriers to or interference with the different stages of muscle contraction. There are two main causes of muscle fatigue: the limitations of a nerve 's ability to generate a sustained signal (neural fatigue) and the reduced ability of the muscle fiber to contract (metabolic fatigue).
Muscle cells work by detecting a flow of electrical impulses from the brain which signals them to contract through the release of calcium by the sarcoplasmic reticulum. Fatigue (reduced ability to generate force) may occur due to the nerve, or within the muscle cells themselves.
Nerves are responsible for controlling the contraction of muscles, determining the number, sequence and force of muscular contraction. Most movements require a force far below what a muscle could potentially generate, so, barring pathology, nervous fatigue is seldom an issue. But in extremely powerful contractions that are close to the upper limit of a muscle 's ability to generate force, nervous fatigue (enervation), in which the nerve signal weakens, can be a limiting factor in untrained individuals.
In novice strength trainers, the muscle 's ability to generate force is most strongly limited by the nerve 's ability to sustain a high - frequency signal. After a period of maximum contraction, the nerve 's signal reduces in frequency and the force generated by the contraction diminishes. There is no sensation of pain or discomfort; the muscle appears to simply ' stop listening ' and gradually ceases to move, often going backwards. As there is insufficient stress on the muscles and tendons, there will often be no delayed onset muscle soreness following the workout.
Part of the process of strength training is increasing the nerve 's ability to generate sustained, high - frequency signals which allow a muscle to contract with its greatest force. It is this neural training that causes several weeks ' worth of rapid gains in strength, which level off once the nerve is generating maximum contractions and the muscle reaches its physiological limit. Past this point, training effects increase muscular strength through myofibrillar or sarcoplasmic hypertrophy, and metabolic fatigue becomes the factor limiting contractile force.
Though not universally used, ' metabolic fatigue ' is a common term for the reduction in contractile force due to the direct or indirect effects of two main factors:
Substrates within the muscle serve to power muscular contractions. They include molecules such as adenosine triphosphate (ATP), glycogen and creatine phosphate. ATP binds to the myosin head and causes the ' ratchetting ' that results in contraction according to the sliding filament model. Creatine phosphate stores energy so ATP can be rapidly regenerated within the muscle cells from adenosine diphosphate (ADP) and inorganic phosphate ions, allowing for sustained powerful contractions that last between 5 -- 7 seconds. Glycogen is the intramuscular storage form of glucose, used to generate energy quickly once intramuscular creatine stores are exhausted, producing lactic acid as a metabolic byproduct.
Substrate shortage is one of the causes of metabolic fatigue. Substrates are depleted during exercise, resulting in a lack of intracellular energy sources to fuel contractions. In essence, the muscle stops contracting because it lacks the energy to do so.
Metabolites are the substances (generally waste products) produced as a result of muscular contraction. They include chloride, potassium, lactic acid, ADP, magnesium (Mg), reactive oxygen species, and inorganic phosphate. Accumulation of metabolites can directly or indirectly produce metabolic fatigue within muscle fibers through interference with the release of calcium (Ca) from the sarcoplasmic reticulum or reduction of the sensitivity of contractile molecules actin and myosin to calcium.
Intracellular chloride partially inhibits the contraction of muscles. Namely, it prevents muscles from contracting due to "false alarms '', small stimuli which may cause them to contract (akin to myoclonus). This natural brake helps muscles respond solely to the conscious control or spinal reflexes but also has the effect of reducing the force of conscious contractions.
High concentrations of potassium (K) also cause the muscle cells to decrease in efficiency, causing cramping and fatigue. Potassium builds up in the t - tubule system and around the muscle fiber as a result of action potentials. The shift in K changes the membrane potential around the muscle fiber. The change in membrane potential causes a decrease in the release of calcium (Ca) from the sarcoplasmic reticulum.
It was once believed that lactic acid build - up was the cause of muscle fatigue. The assumption was that lactic acid had a "pickling '' effect on muscles, inhibiting their ability to contract. The impact of lactic acid on performance is now uncertain; it may assist or hinder muscle fatigue.
Produced as a by - product of fermentation, lactic acid can increase intracellular acidity of muscles. This can lower the sensitivity of contractile apparatus to Ca but also has the effect of increasing cytoplasmic Ca concentration through an inhibition of the chemical pump that actively transports calcium out of the cell. This counters inhibiting effects of potassium on muscular action potentials. Lactic acid also has a negating effect on the chloride ions in the muscles, reducing their inhibition of contraction and leaving potassium ions as the only restricting influence on muscle contractions, though the effects of potassium are much less than if there were no lactic acid to remove the chloride ions. Ultimately, it is uncertain if lactic acid reduces fatigue through increased intracellular calcium or increases fatigue through reduced sensitivity of contractile proteins to Ca.
Lactic acid is now used as a measure of endurance training effectiveness and VO2 max.
Muscle weakness may be due to problems with the nerve supply, neuromuscular disease (such as myasthenia gravis) or problems with muscle itself. The latter category includes polymyositis and other muscle disorders.
Muscle fatigue may be due to precise molecular changes that occur in vivo with sustained exercise. It has been found that the ryanodine receptor present in skeletal muscle undergoes a conformational change during exercise, resulting in "leaky '' channels that are deficient in calcium release. These "leaky '' channels may be a contributor to muscle fatigue and decreased exercise capacity.
Fatigue has been found to play a major role in limiting performance in virtually every individual and every sport. In research studies, fatigued participants showed reductions in voluntary force production (measured with concentric, eccentric, and isometric contractions), vertical jump height, other field tests of lower - body power, throwing velocity, kicking power and velocity, accuracy in throwing and shooting activities, endurance capacity, anaerobic capacity, anaerobic power, mental concentration, and many other performance parameters when sport - specific skills are examined.
Electromyography is a research technique that allows researchers to look at muscle recruitment in various conditions, by quantifying electrical signals sent to muscle fibers through motor neurons. In general, fatigue protocols have shown increases in EMG data over the course of a fatiguing protocol, but reduced recruitment of muscle fibers in tests of power in fatigued individuals. In most studies, this increase in recruitment during exercise correlated with a decrease in performance (as would be expected in a fatiguing individual).
Median power frequency is often used as a way to track fatigue using EMG. Using the median power frequency, raw EMG data is filtered to reduce noise and then relevant time windows are Fourier Transformed. In the case of fatigue in a 30 - second isometric contraction, the first window may be the first second, the second window might be at second 15, and the third window could be the last second of contraction (at second 30). Each window of data is analyzed and the median power frequency is found. Generally, the median power frequency decreases over time, demonstrating fatigue. Some reasons why fatigue is found are due to action potentials of motor units having a similar pattern of repolarization, fast motor units activating and then quickly deactivating while slower motor units remain, and conduction velocities of the nervous system decreasing over time.
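The windowed median power frequency procedure described above can be sketched in a few lines of Python. This is a minimal illustration rather than the method of any particular study: the synthetic signal, the 1000 Hz sampling rate, the 20 - 450 Hz band - pass range and the exact window positions are assumptions added for the example. With a real recording in place of the random trace, the printed value would typically fall from the first window to the last, which is the fatigue signature described above.

```python
# Minimal sketch of a median power frequency (MDF) analysis of a 30-second
# isometric contraction. The EMG trace here is synthetic; sampling rate,
# band-pass range and window placement are illustrative assumptions.
import numpy as np
from scipy import signal

def median_power_frequency(window, fs):
    """Return the frequency that splits the window's spectral power in half."""
    freqs, psd = signal.periodogram(window, fs)   # Fourier-based power spectrum
    cumulative = np.cumsum(psd)
    idx = np.searchsorted(cumulative, cumulative[-1] / 2.0)
    return freqs[idx]

fs = 1000                                         # samples per second (assumed)
t = np.arange(0, 30, 1 / fs)
emg = np.random.randn(t.size)                     # stand-in for a recorded EMG signal

# Filter the raw data to reduce noise outside the typical surface-EMG band.
b, a = signal.butter(4, [20, 450], btype="bandpass", fs=fs)
filtered = signal.filtfilt(b, a, emg)

# One-second analysis windows at the start, middle and end of the contraction.
for label, start in [("first second", 0), ("second 15", 14), ("last second", 29)]:
    window = filtered[start * fs:(start + 1) * fs]
    print(label, round(median_power_frequency(window, fs), 1), "Hz")
```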
|
how long did it take to build great pyramid of giza | Great pyramid of Giza - wikipedia
The Great Pyramid of Giza (also known as the Pyramid of Khufu or the Pyramid of Cheops) is the oldest and largest of the three pyramids in the Giza pyramid complex bordering what is now El Giza, Egypt. It is the oldest of the Seven Wonders of the Ancient World, and the only one to remain largely intact.
Based on a mark in an interior chamber naming the work gang and a reference to fourth dynasty Egyptian Pharaoh Khufu, Egyptologists believe that the pyramid was built as a tomb over a 10 to 20 - year period concluding around 2560 BC. Initially at 146.5 metres (481 feet), the Great Pyramid was the tallest man - made structure in the world for more than 3,800 years. Originally, the Great Pyramid was covered by casing stones that formed a smooth outer surface; what is seen today is the underlying core structure. Some of the casing stones that once covered the structure can still be seen around the base. There have been varying scientific and alternative theories about the Great Pyramid 's construction techniques. Most accepted construction hypotheses are based on the idea that it was built by moving huge stones from a quarry and dragging and lifting them into place.
There are three known chambers inside the Great Pyramid. The lowest chamber is cut into the bedrock upon which the pyramid was built and was unfinished. The so - called Queen 's Chamber and King 's Chamber are higher up within the pyramid structure. The main part of the Giza complex is a setting of buildings that included two mortuary temples in honour of Khufu (one close to the pyramid and one near the Nile), three smaller pyramids for Khufu 's wives, an even smaller "satellite '' pyramid, a raised causeway connecting the two temples, and small mastaba tombs surrounding the pyramid for nobles.
It is believed the pyramid was built as a tomb for Fourth Dynasty Egyptian pharaoh Khufu (often Hellenicised as "Cheops '') and was constructed over a 20 - year period. Khufu 's vizier, Hemiunu (also called Hemon), is believed by some to be the architect of the Great Pyramid. It is thought that, at construction, the Great Pyramid was originally 280 Egyptian cubits tall (146.5 metres (480.6 ft)), but with erosion and absence of its pyramidion, its present height is 138.8 metres (455.4 ft). Each base side was 440 cubits, 230.4 metres (755.9 ft) long. The mass of the pyramid is estimated at 5.9 million tonnes. The volume, including an internal hillock, is roughly 2,500,000 cubic metres (88,000,000 cu ft).
Based on these estimates, building the pyramid in 20 years would involve installing approximately 800 tonnes of stone every day. Additionally, since it consists of an estimated 2.3 million blocks, completing the building in 20 years would involve moving an average of more than 12 of the blocks into place each hour, day and night. The first precision measurements of the pyramid were made by Egyptologist Sir Flinders Petrie in 1880 -- 82 and published as The Pyramids and Temples of Gizeh. Almost all reports are based on his measurements. Many of the casing stones and inner chamber blocks of the Great Pyramid fit together with extremely high precision. Based on measurements taken on the northeastern casing stones, the mean opening of the joints is only 0.5 millimetre wide (1 / 50 of an inch).
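The installation - rate figures in the two preceding paragraphs follow from simple arithmetic on the quoted totals. The sketch below only rechecks that arithmetic; it assumes a continuous 20 - year build with 365 - day years and uses the standard volume formula for a square pyramid with the design dimensions given above.

```python
# Back-of-the-envelope check of the build-rate and volume figures quoted above.
mass_tonnes = 5.9e6                 # estimated total mass of the pyramid
blocks = 2.3e6                      # estimated number of blocks
days = 20 * 365                     # assumed 20-year build, worked continuously
hours = days * 24

print(round(mass_tonnes / days))    # ~808 tonnes of stone installed per day
print(round(blocks / hours, 1))     # ~13.1 blocks placed per hour, day and night

# Volume of a square pyramid: V = (1/3) * side**2 * height, using the
# original design dimensions of 230.4 m per side and 146.5 m height.
side, height = 230.4, 146.5
print(round(side ** 2 * height / 3))  # ~2,592,000 m^3, near the rough 2,500,000 m^3 estimate
```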
The pyramid remained the tallest man - made structure in the world for over 3,800 years, unsurpassed until the 160 - metre - tall (520 ft) spire of Lincoln Cathedral was completed c. 1300. The accuracy of the pyramid 's workmanship is such that the four sides of the base have an average error of only 58 millimetres in length. The base is horizontal and flat to within ± 15 mm (0.6 in). The sides of the square base are closely aligned to the four cardinal compass points (within four minutes of arc) based on true north, not magnetic north, and the finished base was squared to a mean corner error of only 12 seconds of arc.
The completed design dimensions, as suggested by Petrie 's survey and subsequent studies, are estimated to have originally been 280 royal cubits high by 440 cubits long at each of the four sides of its base. The ratio of the perimeter to height of 1760 / 280 royal cubits equates to 2 π to an accuracy of better than 0.05 % (corresponding to the well - known approximation of π as 22 / 7). Some Egyptologists consider this to have been the result of deliberate design proportion. Verner wrote, "We can conclude that although the ancient Egyptians could not precisely define the value of π, in practice they used it ''. Petrie, author of Pyramids and Temples of Gizeh, concluded: "but these relations of areas and of circular ratio are so systematic that we should grant that they were in the builder 's design ''. Others have argued that the Ancient Egyptians had no concept of pi and would not have thought to encode it in their monuments; they believe that the observed pyramid slope may be based on a simple seked slope choice alone, with no regard to the overall size and proportions of the finished building. In 2013, rolls of papyrus called the Diary of Merer were discovered; they were written by some of those who delivered stone and other construction materials to Khufu 's brother at Giza.
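Both readings discussed in this paragraph, the perimeter - to - height ratio and the seked interpretation, reduce to short arithmetic. The sketch below is only an illustration of that arithmetic; it assumes the usual convention that a seked of 5 1⁄2 palms means a horizontal run of 5.5 palms for every rise of 7 palms (one royal cubit), as with the casing stones described below.

```python
import math

# Perimeter-to-height ratio of the design dimensions compared with 2*pi.
perimeter_cubits = 4 * 440                        # 1760 royal cubits
height_cubits = 280
ratio = perimeter_cubits / height_cubits          # 1760 / 280 = 44 / 7 = 2 * (22 / 7)
error = abs(ratio - 2 * math.pi) / (2 * math.pi)
print(round(ratio, 4), round(100 * error, 3))     # 6.2857, ~0.04 % off 2*pi

# The same slope expressed as a seked: 7 palms of rise per 5.5 palms of run.
angle = math.degrees(math.atan(7 / 5.5))
print(round(angle, 2))                            # ~51.84 degrees, the pyramid's face slope
```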
The Great Pyramid consists of an estimated 2.3 million blocks, which most believe to have been transported from nearby quarries. The Tura limestone used for the casing was quarried across the river. The largest granite stones in the pyramid, found in the "King 's '' chamber, weigh 25 to 80 tonnes and were transported from Aswan, more than 800 km (500 mi) away. Traditionally, ancient Egyptians cut stone blocks by hammering wooden wedges into the stone and then soaking them with water. As the water was absorbed, the wedges expanded, causing the rock to crack. Once cut, the blocks were carried by boat either up or down the Nile River to the pyramid. It is estimated that 5.5 million tonnes of limestone, 8,000 tonnes of granite (imported from Aswan), and 500,000 tonnes of mortar were used in the construction of the Great Pyramid.
At completion, the Great Pyramid was surfaced by white "casing stones '' -- slant - faced, but flat - topped, blocks of highly polished white limestone. These were carefully cut to what is approximately a face slope with a seked of 51⁄2 palms to give the required dimensions. Visibly, all that remains is the underlying stepped core structure seen today. In AD 1303, a massive earthquake loosened many of the outer casing stones, which were then carted away by Bahri Sultan An - Nasir Nasir - ad - Din al - Hasan in 1356 to build mosques and fortresses in nearby Cairo. Many more casing stones were removed from the great pyramids by Muhammad Ali Pasha in the early 19th century to build the upper portion of his Alabaster Mosque in Cairo not far from Giza. These limestone casings can still be seen as parts of these structures. Later explorers reported massive piles of rubble at the base of the pyramids left over from the continuing collapse of the casing stones, which were subsequently cleared away during continuing excavations of the site.
Nevertheless, a few of the casing stones from the lowest course can be seen to this day in situ around the base of the Great Pyramid, and display the same workmanship and precision that has been reported for centuries. Petrie also found a different orientation in the core and in the casing measuring 193 centimetres ± 25 centimetres. He suggested a redetermination of north was made after the construction of the core, but a mistake was made, and the casing was built with a different orientation. Petrie related the precision of the casing stones as to being "equal to opticians ' work of the present day, but on a scale of acres '' and "to place such stones in exact contact would be careful work; but to do so with cement in the joints seems almost impossible ''. It has been suggested it was the mortar (Petrie 's "cement '') that made this seemingly impossible task possible, providing a level bed, which enabled the masons to set the stones exactly.
Many alternative, often contradictory, theories have been proposed regarding the pyramid 's construction techniques. Many disagree on whether the blocks were dragged, lifted, or even rolled into place. The Greeks believed that slave labour was used, but modern discoveries made at nearby workers ' camps associated with construction at Giza suggest that it was built instead by tens of thousands of skilled workers. Verner posited that the labour was organized into a hierarchy, consisting of two gangs of 100,000 men, divided into five zaa or phyle of 20,000 men each, which may have been further divided according to the skills of the workers.
One mystery of the pyramid 's construction is its planning. John Romer suggests that they used the same method that had been used for earlier and later constructions, laying out parts of the plan on the ground at a 1 - to - 1 scale. He writes that "such a working diagram would also serve to generate the architecture of the pyramid with precision unmatched by any other means ''. He also argues for a 14 - year time span for its construction. A modern construction management study, in association with Mark Lehner and other Egyptologists, estimated that the total project required an average workforce of 14,567 people and a peak workforce of roughly 40,000. Using critical path analysis methods, and assuming the builders worked without pulleys, wheels, or iron tools, the study suggests that the Great Pyramid was completed from start to finish in approximately 10 years.
The original entrance to the Great Pyramid is on the north, 17 metres (56 ft) vertically above ground level and 7.29 metres (23.9 ft) east of the centre line of the pyramid. From this original entrance, there is a Descending Passage 0.96 metres (3.1 ft) high and 1.04 metres (3.4 ft) wide, which goes down at an angle of 26 ° 31'23 '' through the masonry of the pyramid and then into the bedrock beneath it. After 105.23 metres (345.2 ft), the passage becomes level and continues for an additional 8.84 metres (29.0 ft) to the lower Chamber, which appears not to have been finished. There is a continuation of the horizontal passage in the south wall of the lower chamber; there is also a pit dug in the floor of the chamber. Some Egyptologists suggest that this Lower Chamber was intended to be the original burial chamber, but Pharaoh Khufu later changed his mind and wanted it to be higher up in the pyramid.
28.2 metres (93 ft) from the entrance is a square hole in the roof of the Descending Passage. Originally concealed with a slab of stone, this is the beginning of the Ascending Passage. The Ascending Passage is 39.3 metres (129 ft) long, as wide and high as the Descending Passage and slopes up at almost precisely the same angle to reach the Grand Gallery. The lower end of the Ascending Passage is closed by three huge blocks of granite, each about 1.5 metres (4.9 ft) long. One must use the Robbers ' Tunnel (see below) to access the Ascending Passage. At the start of the Grand Gallery on the right - hand side there is a hole cut in the wall. This is the start of a vertical shaft which follows an irregular path through the masonry of the pyramid to join the Descending Passage. Also at the start of the Grand Gallery there is the Horizontal Passage leading to the "Queen 's Chamber ''. The passage is 1.1 m (3'8 ") high for most of its length, but near the chamber there is a step in the floor, after which the passage is 1.73 metres (5.7 ft) high.
The "Queen 's Chamber '' is exactly halfway between the north and south faces of the pyramid and measures 5.75 metres (18.9 ft) north to south, 5.23 metres (17.2 ft) east to west, and has a pointed roof with an apex 6.23 metres (20.4 ft) above the floor. At the eastern end of the chamber there is a niche 4.67 metres (15.3 ft) high. The original depth of the niche was 1.04 metres (3.4 ft), but has since been deepened by treasure hunters.
In the north and south walls of the Queen 's Chamber there are shafts, which unlike those in the King 's Chamber that immediately slope upwards (see below), are horizontal for around 2 m (6.6 ft) before sloping upwards. The horizontal distance was cut in 1872 by a British engineer, Waynman Dixon, who believed a shaft similar to those in the King 's Chamber must also exist. He was proved right, but because the shafts are not connected to the outer faces of the pyramid or the Queen 's Chamber, their purpose is unknown. At the end of one of his shafts, Dixon discovered a ball of black diorite (a type of rock) and a bronze implement of unknown purpose. Both objects are currently in the British Museum.
The shafts in the Queen 's Chamber were explored in 1993 by the German engineer Rudolf Gantenbrink using a crawler robot he designed, Upuaut 2. After a climb of 65 m (213 ft), he discovered that one of the shafts was blocked by limestone "doors '' with two eroded copper "handles ''. Some years later the National Geographic Society created a similar robot which, in September 2002, drilled a small hole in the southern door, only to find another door behind it. The northern passage, which was difficult to navigate because of twists and turns, was also found to be blocked by a door.
Research continued in 2011 with the Djedi Project. Realizing the problem was that the National Geographic Society 's camera was only able to see straight ahead of it, they instead used a fiber - optic "micro snake camera '' that could see around corners. With this they were able to penetrate the first door of the southern shaft through the hole drilled in 2002, and view all the sides of the small chamber behind it. They discovered hieroglyphs written in red paint. They were also able to scrutinize the inside of the two copper "handles '' embedded in the door, and they now believe them to be for decorative purposes. They also found the reverse side of the "door '' to be finished and polished, which suggests that it was not put there just to block the shaft from debris, but rather for a more specific reason.
The Grand Gallery continues the slope of the Ascending Passage, but is 8.6 metres (28 ft) high and 46.68 metres (153.1 ft) long. At the base it is 2.06 metres (6.8 ft) wide, but after 2.29 metres (7.5 ft) the blocks of stone in the walls are corbelled inwards by 7.6 centimetres (3.0 in) on each side. There are seven of these steps, so, at the top, the Grand Gallery is only 1.04 metres (3.4 ft) wide. It is roofed by slabs of stone laid at a slightly steeper angle than the floor of the gallery, so that each stone fits into a slot cut in the top of the gallery like the teeth of a ratchet. The purpose was to have each block supported by the wall of the Gallery rather than resting on the block beneath it, which would have resulted in an unacceptable cumulative pressure at the lower end of the Gallery.
At the upper end of the Gallery on the right - hand side there is a hole near the roof that opens into a short tunnel by which access can be gained to the lowest of the Relieving Chambers. The other Relieving Chambers were discovered in 1837 - 1838 by Colonel Howard Vyse and J.S. Perring, who dug tunnels upwards using blasting powder.
The floor of the Grand Gallery consists of a shelf or step on either side, 51 centimetres (20 in) wide, leaving a lower ramp 1.04 metres (3.4 ft) wide between them. In the shelves there are 54 slots, 27 on each side matched by vertical and horizontal slots in the walls of the Gallery. These form a cross shape that rises out of the slot in the shelf. The purpose of these slots is not known, but the central gutter in the floor of the Gallery, which is the same width as the Ascending Passage, has led to speculation that the blocking stones were stored in the Grand Gallery and the slots held wooden beams to restrain them from sliding down the passage. This, in turn, has led to the proposal that originally many more than 3 blocking stones were intended, to completely fill the Ascending Passage.
At the top of the Grand Gallery, there is a step giving onto a horizontal passage some metres long and approximately 1.02 metres (3.3 ft) in height and width, in which can be detected four slots, three of which were probably intended to hold granite portcullises. Fragments of granite found by Petrie in the Descending Passage may have come from these now - vanished doors.
The "King 's Chamber '' is 20 cubits or 10.47 metres (34.4 ft) from east to west and 10 cubits or 5.234 metres (17.17 ft) north to south. It has a flat roof 11 cubits and 5 digits or 5.852 metres (19 feet 2 inches) above the floor. 0.91 m (3.0 ft) above the floor there are two narrow shafts in the north and south walls (one is now filled by an extractor fan in an attempt to circulate air inside the pyramid). The purpose of these shafts is not clear: they appear to be aligned towards stars or areas of the northern and southern skies, yet one of them follows a dog - leg course through the masonry, indicating no intention to directly sight stars through them. They were long believed by Egyptologists to be "air shafts '' for ventilation, but this idea has now been widely abandoned in favour of the shafts serving a ritualistic purpose associated with the ascension of the king 's spirit to the heavens.
The King 's Chamber is entirely faced with granite. Above the roof, which is formed of nine slabs of stone weighing in total about 400 tons, are five compartments known as Relieving Chambers. The first four, like the King 's Chamber, have flat roofs formed by the floor of the chamber above, but the final chamber has a pointed roof. Vyse suspected the presence of upper chambers when he found that he could push a long reed through a crack in the ceiling of the first chamber. From lower to upper, the chambers are known as "Davison 's Chamber '', "Wellington 's Chamber '', "Nelson 's Chamber '', "Lady Arbuthnot 's Chamber '', and "Campbell 's Chamber ''. It is believed that the compartments were intended to safeguard the King 's Chamber from the possibility of a roof collapsing under the weight of stone above the Chamber. As the chambers were not intended to be seen, they were not finished in any way and a few of the stones still retain masons ' marks painted on them. One of the stones in Campbell 's Chamber bears a mark, apparently the name of a work gang.
The only object in the King 's Chamber is a rectangular granite sarcophagus, one corner of which is broken. The sarcophagus is slightly larger than the Ascending Passage, which indicates that it must have been placed in the Chamber before the roof was put in place. Unlike the fine masonry of the walls of the Chamber, the sarcophagus is roughly finished, with saw marks visible in several places. This is in contrast with the finely finished and decorated sarcophagi found in other pyramids of the same period. Petrie suggested that such a sarcophagus was intended but was lost in the river on the way north from Aswan and a hurriedly made replacement was used instead.
Today tourists enter the Great Pyramid via the Robbers ' Tunnel, a tunnel purportedly created around AD 820 by Caliph al - Ma'mun 's workmen using a battering ram. The tunnel is cut straight through the masonry of the pyramid for approximately 27 metres (89 ft), then turns sharply left to encounter the blocking stones in the Ascending Passage. It is believed that their efforts dislodged the stone fitted in the ceiling of the Descending Passage to hide the entrance to the Ascending Passage and it was the noise of that stone falling and then sliding down the Descending Passage, which alerted them to the need to turn left. Unable to remove these stones, however, the workmen tunnelled up beside them through the softer limestone of the Pyramid until they reached the Ascending Passage. It is possible to enter the Descending Passage from this point, but access is usually forbidden.
The Great Pyramid is surrounded by a complex of several buildings including small pyramids. The Pyramid Temple, which stood on the east side of the pyramid and measured 52.2 metres (171 ft) north to south and 40 metres (130 ft) east to west, has almost entirely disappeared apart from the black basalt paving. There are only a few remnants of the causeway which linked the pyramid with the valley and the Valley Temple. The Valley Temple is buried beneath the village of Nazlet el - Samman; basalt paving and limestone walls have been found but the site has not been excavated. The basalt blocks show "clear evidence '' of having been cut with some kind of saw with an estimated cutting blade of 15 feet (4.6 m) in length, capable of cutting at a rate of 1.5 inches (38 mm) per minute. John Romer suggests that this "super saw '' may have had copper teeth and weighed up to 300 pounds (140 kg). He theorizes that such a saw could have been attached to a wooden trestle and possibly used in conjunction with vegetable oil, cutting sand, emery or pounded quartz to cut the blocks, which would have required the labour of at least a dozen men to operate it.
On the south side are the subsidiary pyramids, popularly known as Queens ' Pyramids. Three remain standing to nearly full height but the fourth was so ruined that its existence was not suspected until the recent discovery of the first course of stones and the remains of the capstone. Hidden beneath the paving around the pyramid was the tomb of Queen Hetepheres I, sister - wife of Sneferu and mother of Khufu. Discovered by accident by the Reisner expedition, the burial was intact, though the carefully sealed coffin proved to be empty.
The Giza pyramid complex, which includes among other structures the pyramids of Khufu, Khafre and Menkaure, is surrounded by a cyclopean stone wall, the Wall of the Crow. Mark Lehner has discovered a worker 's town outside of the wall, otherwise known as "The Lost City '', dated by pottery styles, seal impressions, and stratigraphy to have been constructed and occupied sometime during the reigns of Khafre (2520 -- 2494 BC) and Menkaure (2490 -- 2472 BC). Recent discoveries by Mark Lehner and his team at the town and nearby, including what appears to have been a thriving port, suggest the town and associated living quarters consisting of barracks called "galleries '' may not have been for the pyramid workers after all, but rather for the soldiers and sailors who utilized the port. In light of this new discovery, Lehner now suggests the alternative possibility that the pyramid workers may have camped on the ramps he believes were used to construct the pyramids, or possibly at nearby quarries.
In the early 1970s, the Australian archaeologist Karl Kromer excavated a mound in the South Field of the plateau. This mound contained artefacts including mudbrick seals of Khufu, which he identified with an artisans ' settlement. Mudbrick buildings just south of Khufu 's Valley Temple contained mud sealings of Khufu and have been suggested to be a settlement serving the cult of Khufu after his death. A worker 's cemetery used at least between Khufu 's reign and the end of the Fifth Dynasty was discovered south of the Wall of the Crow by Zahi Hawass in 1990.
There are three boat - shaped pits around the pyramid, of a size and shape to have held complete boats, though so shallow that any superstructure, if there ever was one, must have been removed or disassembled. In May 1954, the Egyptian archaeologist Kamal el - Mallakh discovered a fourth pit, a long, narrow rectangle, still covered with slabs of stone weighing up to 15 tons. Inside were 1,224 pieces of wood, the longest 23 metres (75 ft) long, the shortest 10 centimetres (0.33 ft). These were entrusted to a boat builder, Haj Ahmed Yusuf, who worked out how the pieces fit together. The entire process, including conservation and straightening of the warped wood, took fourteen years.
The result is a cedar - wood boat 43.6 metres (143 ft) long, its timbers held together by ropes, which is currently housed in a special boat - shaped, air - conditioned museum beside the pyramid. During construction of this museum, which stands above the boat pit, a second sealed boat pit was discovered. It was deliberately left unopened until 2011 when excavation began on the boat.
Although succeeding pyramids were smaller, pyramid building continued until the end of the Middle Kingdom. However, as authors Brier and Hobbs claim, "all the pyramids were robbed '' by the New Kingdom, when the construction of royal tombs in a desert valley, now known as the Valley of the Kings, began. Joyce Tyldesley states that the Great Pyramid itself "is known to have been opened and emptied by the Middle Kingdom '', before the Arab caliph Abdullah al - Mamun entered the pyramid around AD 820.
I.E.S. Edwards discusses Strabo 's mention that the pyramid "a little way up one side has a stone that may be taken out, which being raised up there is a sloping passage to the foundations ''. Edwards suggested that the pyramid was entered by robbers after the end of the Old Kingdom and sealed and then reopened more than once until Strabo 's door was added. He adds: "If this highly speculative surmise be correct, it is also necessary to assume either that the existence of the door was forgotten or that the entrance was again blocked with facing stones '', in order to explain why al - Ma'mun could not find the entrance.
He also discusses a story told by Herodotus. Herodotus visited Egypt in the 5th century BC and recounts a story that he was told concerning vaults under the pyramid built on an island where the body of Cheops lies. Edwards notes that the pyramid had "almost certainly been opened and its contents plundered long before the time of Herodotus '' and that it might have been closed again during the Twenty - sixth Dynasty of Egypt when other monuments were restored. He suggests that the story told to Herodotus could have been the result of almost two centuries of telling and retelling by Pyramid guides.
which of the following is not a feature of election system in india | Electoral system - wikipedia
An electoral system is a set of rules that determines how elections and referendums are conducted and how their results are determined. Political electoral systems are organized by governments, while non-political elections may take place in business, non-profit organisations and informal organisations.
Electoral systems consist of sets of rules that govern all aspects of the voting process: when elections occur, who is allowed to vote, who can stand as a candidate, how ballots are marked and cast, how the ballots are counted (electoral method), limits on campaign spending, and other factors that can affect the outcome. Political electoral systems are defined by constitutions and electoral laws, are typically conducted by election commissions, and can use multiple types of elections for different offices.
Some electoral systems elect a single winner to a unique position, such as prime minister, president or governor, while others elect multiple winners, such as members of parliament or boards of directors. There are a large number of variations in electoral systems, but the most common systems are first - past - the - post voting, the two - round (runoff) system, proportional representation and ranked or preferential voting. Some electoral systems, such as mixed systems, attempt to combine the benefits of non-proportional and proportional systems.
The study of formally defined electoral methods is called social choice theory or voting theory, and this study can take place within the field of political science, economics, or mathematics, and specifically within the subfields of game theory and mechanism design. Impossibility proofs such as Arrow 's impossibility theorem demonstrate that when voters have three or more alternatives, it is not possible to design a ranked electoral system that reflects the preferences of individuals in a global preference of the community, a limitation present in countries with proportional representation and plurality voting alike, such as the United Kingdom.
Plurality voting is a system in which the candidate (s) with the highest number of votes wins, with no requirement to obtain a majority of votes. In cases where there is a single position to be filled, it is known as first - past - the - post; this is the second most common electoral system for national legislatures, with 58 countries using it to elect their parliaments, the vast majority of which are current or former British or American colonies or territories. It is also the second most common system used for presidential elections, being used in 19 countries.
In cases where there are multiple positions to be elected, most commonly in cases of multi-member constituencies, plurality voting is referred to as bloc voting or plurality - at - large. This takes two main forms; in one form voters have as many votes as there are seats and can vote for any candidate, regardless of party -- this is used in eight countries. There are variations on this system such as limited voting, where voters are given fewer votes than there are seats to be elected (Gibraltar is the only territory where this system is in use) and single non-transferable vote (SNTV), in which voters are only able to vote for one candidate in a multi-member constituency, with the candidates receiving the most votes declared the winners; this system is used in Afghanistan, Kuwait, the Pitcairn Islands and Vanuatu. In the other main form of bloc voting, also known as party bloc voting, voters can only vote for the multiple candidates of a single party. This is used in five countries as part of mixed systems.
The Dowdall system, a multi-member constituency variation on the Borda count, is used in Nauru for parliamentary elections and sees voters rank the candidates depending on how many seats there are in their constituency. First preference votes are counted as whole numbers; the second preference votes divided by two, third preferences by three; this continues to the lowest possible ranking. The sums achieved by each candidate are then totalled to determine the winner.
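A minimal sketch of the Dowdall count described above, assuming every ballot is a complete ranking; the candidate names and ballots are hypothetical.

```python
from collections import defaultdict

def dowdall_scores(ballots):
    # A candidate ranked in position k on a ballot receives 1 / k points;
    # the candidates with the highest totals win the seats.
    scores = defaultdict(float)
    for ranking in ballots:                      # each ballot lists candidates, best first
        for position, candidate in enumerate(ranking, start=1):
            scores[candidate] += 1.0 / position
    return dict(scores)

ballots = [["A", "B", "C"], ["B", "A", "C"], ["A", "C", "B"]]
print(dowdall_scores(ballots))   # {'A': 2.5, 'B': 1.83..., 'C': 1.16...}
```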
Majoritarian voting is a system in which candidates have to receive a majority of the votes to be elected, although in some cases only a plurality is required in the last round of counting if no candidate can achieve a majority. There are two main forms of majoritarian systems, one using a single round of ranked voting and the other using two or more rounds. Both are primarily used for single - member constituencies.
Majoritarian voting can take place in a single round using instant - runoff voting (IRV), whereby voters rank candidates in order of preference; this system is used for parliamentary elections in Australia and Papua New Guinea. If no candidate receives a majority of the vote in the first round, the second preferences of the lowest - ranked candidate are then added to the totals. This is repeated until a candidate achieves over 50 % of the number of valid votes. If not all voters use all their preference votes, then the count may continue until two candidates remain, at which point the winner is the one with the most votes. A modified form of IRV is the contingent vote where voters do not rank all candidates, but have a limited number of preference votes. If no candidate has a majority in the first round, all candidates are excluded except the top two, with the highest remaining preference votes from the votes for the excluded candidates then added to the totals to determine the winner. This system is used in Sri Lankan presidential elections, with voters allowed to give three preferences.
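The elimination - and - transfer loop described above can be sketched roughly as follows; ties and some edge cases are ignored, and the candidates and ballots are invented for illustration.

```python
from collections import Counter

def instant_runoff(ballots):
    # Each ballot is a list of candidates in order of preference.
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        # Count every ballot for its highest-ranked surviving candidate.
        tallies = Counter()
        for ballot in ballots:
            for choice in ballot:
                if choice in candidates:
                    tallies[choice] += 1
                    break
        total = sum(tallies.values())
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > total or len(candidates) <= 2:
            return leader
        # Otherwise eliminate the lowest-placed candidate and repeat.
        candidates.remove(min(tallies, key=tallies.get))

ballots = [["A", "B"], ["A", "C"], ["B", "C"], ["C", "B"], ["C", "B"]]
print(instant_runoff(ballots))   # 'C': B is eliminated first and its ballot transfers to C
```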
The other main form of majoritarian system is the two - round system, which is the most common system used for presidential elections around the world, being used in 88 countries. It is also used in 20 countries for electing the legislature. If no candidate achieves a majority of votes in the first round of voting, a second round is held to determine the winner. In most cases the second round is limited to the top two candidates from the first round, although in some elections more than two candidates may choose to contest the second round; in these cases the second round is decided by plurality voting. Some countries use a modified form of the two - round system, such as Ecuador where a candidate in the presidential election is declared the winner if they receive 40 % of the vote and are 10 % ahead of their nearest rival, or Argentina (45 % plus 10 % ahead), where the system is known as ballotage.
An exhaustive ballot is not limited to two rounds, but sees the last - placed candidate eliminated after each round of voting. Due to the large potential number of rounds, this system is not used in any major popular elections, but is used to elect the Speakers of parliament in several countries and members of the Swiss Federal Council. In some formats there may be multiple rounds held without any candidates being removed until a candidate achieves a majority, a system used in the United States Electoral College.
Proportional representation is the most widely used electoral system for national legislatures, with the parliaments of over eighty countries elected by various forms of the system.
Party - list proportional representation is the single most common electoral system and is used by 80 countries, and involves voters voting for a list of candidates proposed by a party. In closed list systems voters do not have any influence over the candidates put forward by the party, but in open list systems voters are able to both vote for the party list and influence the order in which candidates will be assigned seats. In some countries, notably Israel and the Netherlands, elections are carried out using ' pure ' proportional representation, with the votes tallied on a national level before assigning seats to parties. However, in most cases several multi-member constituencies are used rather than a single nationwide constituency, giving an element of geographical representation. This can result in the distribution of seats not reflecting the national vote totals. As a result, some countries have leveling seats to award to parties whose seat totals are lower than their proportion of the national vote.
In addition to the electoral threshold, the minimum percentage of the vote that a party must obtain to win seats, there are several different methods for calculating seat allocation in proportional systems, typically broken down into the two main types; highest average and largest remainder. Highest average systems involve dividing the votes received by each party by a series of divisors, producing figures that determine seat allocation; examples include the d'Hondt method (of which there are variants including Hagenbach - Bischoff) or the Webster / Sainte - Laguë method. Under largest remainder systems, each party 's vote total is divided by the quota (obtained by dividing the total number of votes by the number of seats available). This usually leaves some seats unallocated, which are awarded to parties based on the largest fractions of seats that they have remaining. Examples of largest remainder systems include the Hare quota, Droop quota, the Imperiali quota and the Hagenbach - Bischoff quota.
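To make the two families concrete, here is a rough sketch of one highest average rule (d'Hondt) and one largest remainder rule (Hare quota); the party names and vote totals are invented for illustration, and real implementations also need tie - breaking rules and thresholds.

```python
def dhondt(votes, seats):
    # Highest average: give each seat in turn to the party with the
    # largest quotient votes / (seats already won + 1).
    won = {party: 0 for party in votes}
    for _ in range(seats):
        best = max(votes, key=lambda p: votes[p] / (won[p] + 1))
        won[best] += 1
    return won

def hare_largest_remainder(votes, seats):
    # Largest remainder with the Hare quota (total votes / seats).
    quota = sum(votes.values()) / seats
    won = {p: int(v // quota) for p, v in votes.items()}
    leftovers = sorted(votes, key=lambda p: votes[p] / quota - won[p], reverse=True)
    for p in leftovers[: seats - sum(won.values())]:
        won[p] += 1
    return won

votes = {"A": 100_000, "B": 80_000, "C": 30_000, "D": 20_000}
print(dhondt(votes, 7))                  # {'A': 3, 'B': 3, 'C': 1, 'D': 0}
print(hare_largest_remainder(votes, 7))  # {'A': 3, 'B': 2, 'C': 1, 'D': 1}
```

The differing results for the same votes illustrate why the choice of formula matters: d'Hondt tends to favour larger parties, while largest remainder methods tend to treat smaller parties more generously.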
Single transferable vote (STV) is another form of proportional representation, but is achieved by voters ranking candidates in a multi-member constituency by preference rather than voting for a party list; it is used in Malta and the Republic of Ireland. To be elected, candidates must pass a quota (the Droop quota being the most common). Candidates that pass the quota on the first count are elected. Votes are then reallocated from the least successful candidates until the number of candidates that have passed the quota is equal to the number of seats to be filled.
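The Droop quota mentioned above is the smallest whole number of votes that only as many candidates as there are seats can all reach; a short sketch with made - up numbers:

```python
def droop_quota(valid_votes, seats):
    # floor(valid votes / (seats + 1)) + 1
    return valid_votes // (seats + 1) + 1

print(droop_quota(100_000, 4))   # 20001: no more than four candidates can each reach this total
```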
In several countries, mixed systems are used to elect the legislature. These include parallel voting and mixed - member proportional representation.
In parallel voting systems, which are used in 20 countries, there are two methods by which members of a legislature are elected; part of the membership is elected by a plurality or majority vote in single - member constituencies and the other part by proportional representation. The results of the constituency vote have no effect on the outcome of the proportional vote.
Mixed - member proportional representation, in use in eight countries, also sees the membership of the legislature elected by constituency and proportional methods, but the results of the proportional vote are adjusted to balance the seats won in the constituency vote in order to ensure that parties have a number of seats proportional to their vote share. This may result in overhang seats, where parties win more seats in the constituency system than they would be entitled to based on their vote share. Variations of this include the Additional Member System and Alternative Vote Plus, in which part of the membership is elected in single - member constituencies (with voters ranking the constituency candidates under Alternative Vote Plus) and the other part from multi-member constituencies elected on a proportional party list basis. A form of mixed - member proportional representation, Scorporo, was used in Italy from 1993 until 2006.
Some electoral systems feature a majority bonus system to either ensure one party or coalition gains a majority in the legislature, or to give the party receiving the most votes a clear advantage in terms of the number of seats. In Greece the party receiving the most votes is given an additional 50 seats, while San Marino has a modified two - round system, which sees a second round of voting featuring the top two parties or coalitions if there is no majority in the first round. The winner of the second round is guaranteed 35 seats in the 60 - seat Grand and General Council.
In Uruguay, the President and members of the General Assembly are elected on a single ballot, known as the double simultaneous vote. Voters cast a single vote, voting for the presidential, Senatorial and Chamber of Deputies candidates of that party. This system was also previously used in Bolivia and the Dominican Republic.
Primary elections are a feature of some electoral systems, either as a formal part of the electoral system or informally by choice of individual political parties as a method of selecting candidates, as is the case in Italy. Primary elections limit the risk of vote splitting by ensuring a single party candidate. In Argentina they are a formal part of the electoral system and take place two months before the main elections; any party receiving less than 1.5 % of the vote is not permitted to contest the main elections. In the United States, there are both partisan and non-partisan primary elections.
Some elections feature an indirect electoral system, whereby there is either no popular vote, or the popular vote is only one stage of the election; in these systems the final vote is usually taken by an electoral college. In several countries, such as Mauritius or Trinidad and Tobago, the post of President is elected by the legislature. In others like India, the vote is taken by an electoral college consisting of the national legislature and state legislatures. In the United States, the president is indirectly elected using a two - stage process; a popular vote in each state elects members to the electoral college that in turn elects the President. This can result in a situation where a candidate who receives the most votes nationwide does not win the electoral college vote, as most recently happened in 2000 and 2016.
In addition to the various electoral systems in use in the political sphere, there are numerous others, some of which are proposals and some of which have been adopted for usage in business (such as electing corporate board members) or for organisations but not for public elections.
Ranked systems include Bucklin voting, the various Condorcet methods (Copeland 's, Dodgson 's, Kemeny - Young, Maximal lotteries, Minimax, Nanson 's, Ranked pairs, Schulze), the Coombs ' method and positional voting. There are also several variants of single transferable vote, including CPO - STV, Schulze STV and the Wright system. Dual - member proportional representation is a proposed system with two candidates elected in each constituency, one with the most votes and one to ensure proportionality of the combined results. Biproportional apportionment is a system whereby the total number of votes is used to calculate the number of seats each party is due, followed by a calculation of the constituencies in which the seats should be awarded in order to achieve the total due to them.
Cardinal electoral systems allow voters to score candidates independently. The complexity ranges from approval voting, where voters simply state whether they approve of a candidate or not, to range voting, where a candidate is scored from a set range of numbers. Other cardinal systems include proportional approval voting, sequential proportional approval voting, satisfaction approval voting and majority judgment.
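As a simple illustration of the two ballot types mentioned above, the sketch below tallies approval ballots (a set of approved candidates) and range ballots (numeric scores); all names and scores are invented.

```python
from collections import Counter

def approval_winner(ballots):
    # Each ballot is the set of candidates the voter approves of.
    tally = Counter()
    for approved in ballots:
        tally.update(approved)
    return tally.most_common(1)[0]

def range_winner(ballots):
    # Each ballot maps candidates to a score within an agreed range.
    totals = Counter()
    for scores in ballots:
        totals.update(scores)
    return totals.most_common(1)[0]

print(approval_winner([{"A", "B"}, {"B"}, {"B", "C"}]))            # ('B', 3)
print(range_winner([{"A": 5, "B": 3}, {"A": 1, "B": 4, "C": 2}]))  # ('B', 7)
```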
Historically, weighted voting systems were used in some countries. These allocated a greater weight to the votes of some voters than others, either indirectly by allocating more seats to certain groups (such as the Prussian three - class franchise), or by weighting the results of the vote. The latter system was used in colonial Rhodesia for the 1962 and 1965 elections. The elections featured two voter rolls (the ' A ' roll being largely European and the ' B ' roll largely African); the seats of the House Assembly were divided into 50 constituency seats and 15 district seats. Although all voters could vote for both types of seats, ' A ' roll votes were given greater weight for the constituency seats and ' B ' roll votes greater weight for the district seats. Weighted systems are still used in corporate elections, with votes weighted to reflect stock ownership.
In addition to the specific method of electing candidates, electoral systems are also characterised by their wider rules and regulations, which are usually set out in a country 's constitution or electoral law. Participatory rules determine candidate nomination and voter registration, in addition to the location of polling places and the availability of online voting, postal voting, and absentee voting. Other regulations include the selection of voting devices such as paper ballots, machine voting or open ballot systems, and consequently the type of vote counting systems, verification and auditing used.
Electoral rules place limits on suffrage and candidacy. Most countries ' electorates are characterised by universal suffrage, but there are differences on the age at which people are allowed to vote, with the youngest being 16 and the oldest 21 (although voters must be 25 to vote in Senate elections in Italy). People may be disenfranchised for a range of reasons, such as being a serving prisoner, being declared bankrupt, having committed certain crimes or being a serving member of the armed forces. Similar limits are placed on candidacy (also known as passive suffrage), and in many cases the age limit for candidates is higher than the voting age. A total of 21 countries have compulsory voting, although in some there is an upper age limit on enforcement of the law. Many countries also have the none of the above option on their ballot papers.
In systems that use constituencies, apportionment or districting defines the area covered by each constituency. Where constituency boundaries are drawn has a strong influence on the likely outcome of elections in the constituency due to the geographic distribution of voters. Political parties may seek to gain an advantage during redistricting by ensuring their voter base has a majority in as many constituencies as possible, a process known as gerrymandering. Historically rotten and pocket boroughs, constituencies with unusually small populations, were used by wealthy families to gain parliamentary representation.
Some countries have minimum turnout requirements for elections to be valid. In Serbia this rule caused multiple re-runs of presidential elections, with the 1997 election re-run once and the 2002 elections re-run three times due to insufficient turnout in the first, second and third attempts to run the election. The turnout requirement was scrapped prior to the fourth vote in 2004. Similar problems in Belarus led to the 1995 parliamentary elections going to a fourth round of voting before enough parliamentarians were elected to make a quorum.
Reserved seats are used in many countries to ensure representation for ethnic minorities, women, young people or the disabled. These seats are separate from general seats, and may be elected separately (such as in Morocco where a separate ballot is used to elect the 60 seats reserved for women and 30 seats reserved for young people in the House of Representatives), or be allocated to parties based on the results of the election; in Jordan the reserved seats for women are given to the female candidates who failed to win constituency seats but with the highest number of votes, whilst in Kenya the Senate seats reserved for women, young people and the disabled are allocated to parties based on how many seats they won in the general vote. Some countries achieve minority representation by other means, including requirements for a certain proportion of candidates to be women, or by exempting minority parties from the electoral threshold, as is done in Poland, Romania and Serbia.
Voting has been used as a feature of democracy since the 6th century BC, when it was introduced by the Athenian democracy. However, in Athenian democracy, voting was seen as the least democratic among methods used for selecting public officials, and was little used, because elections were believed to inherently favor the wealthy and well - known over average citizens. Viewed as more democratic were assemblies open to all citizens, and selection by lot (known as sortition), as well as rotation of office. One of the earliest recorded elections in Athens was a plurality vote that it was undesirable to win; in the process called ostracism, voters chose the citizen they most wanted to exile for ten years. Most elections in the early history of democracy were held using plurality voting or some variant, but as an exception, the state of Venice in the 13th century adopted approval voting to elect their Great Council.
The Venetians ' method for electing the Doge was a particularly convoluted process, consisting of five rounds of drawing lots (sortition) and five rounds of approval voting. By drawing lots, a body of 30 electors was chosen, which was further reduced to nine electors by drawing lots again. An electoral college of nine members elected 40 people by approval voting; those 40 were reduced to form a second electoral college of 12 members by drawing lots again. The second electoral college elected 25 people by approval voting, which were reduced to form a third electoral college of nine members by drawing lots. The third electoral college elected 45 people, which were reduced to form a fourth electoral college of 11 by drawing lots. They in turn elected a final electoral body of 41 members, who ultimately elected the Doge. Despite its complexity, the method had certain desirable properties such as being hard to game and ensuring that the winner reflected the opinions of both majority and minority factions. This process, with slight modifications, was central to the politics of the Republic of Venice throughout its remarkable lifespan of over 500 years, from 1268 to 1797.
Jean - Charles de Borda proposed the Borda count in 1770 as a method for electing members to the French Academy of Sciences. His method was opposed by the Marquis de Condorcet, who proposed instead the method of pairwise comparison that he had devised. Implementations of this method are known as Condorcet methods. He also wrote about the Condorcet paradox, which he called the intransitivity of majority preferences. However, recent research has shown that the philosopher Ramon Llull devised both the Borda count and a pairwise method that satisfied the Condorcet criterion in the 13th century. The manuscripts in which he described these methods had been lost to history until they were rediscovered in 2001.
Later in the 18th century, apportionment methods came to prominence due to the United States Constitution, which mandated that seats in the United States House of Representatives had to be allocated among the states proportionally to their population, but did not specify how to do so. A variety of methods were proposed by statesmen such as Alexander Hamilton, Thomas Jefferson, and Daniel Webster. Some of the apportionment methods devised in the United States were in a sense rediscovered in Europe in the 19th century, as seat allocation methods for the newly proposed method of party - list proportional representation. The result is that many apportionment methods have two names; Jefferson 's method is equivalent to the d'Hondt method, as is Webster 's method to the Sainte - Laguë method, while Hamilton 's method is identical to the Hare largest remainder method.
The single transferable vote (STV) method was devised by Carl Andræ in Denmark in 1855 and in the United Kingdom by Thomas Hare in 1857. STV elections were first held in Denmark in 1856, and in Tasmania in 1896 after its use was promoted by Andrew Inglis Clark. Party - list proportional representation began to be used to elect European legislatures in the early 20th century, with Belgium the first to implement it for its 1900 general elections. Since then, proportional and semi-proportional methods have come to be used in almost all democratic countries, with most exceptions being former British colonies.
Perhaps influenced by the rapid development of multiple - winner electoral systems, theorists began to publish new findings about single - winner methods in the late 19th century. This began around 1870, when William Robert Ware proposed applying STV to single - winner elections, yielding instant - runoff voting (IRV). Soon, mathematicians began to revisit Condorcet 's ideas and invent new methods for Condorcet completion; Edward J. Nanson combined the newly described instant runoff voting with the Borda count to yield a new Condorcet method called Nanson 's method. Charles Dodgson, better known as Lewis Carroll, proposed the straightforward Condorcet method known as Dodgson 's method as well as a proportional multiwinner method based on proxy voting.
Ranked voting electoral systems eventually gathered enough support to be adopted for use in government elections. In Australia, IRV was first adopted in 1893, and continues to be used along with STV today. In the United States in the early - 20th - century progressive era, some municipalities began to use Bucklin voting, although this is no longer used in any government elections, and has even been declared unconstitutional in Minnesota.
The use of game theory to analyze electoral systems led to discoveries about the effects of certain methods. Earlier developments such as Arrow 's impossibility theorem had already shown the issues with ranked voting systems. Research led Steven Brams and Peter Fishburn to formally define and promote the use of approval voting in 1977. Political scientists of the 20th century published many studies on the effects that the electoral systems have on voters ' choices and political parties, and on political stability. A few scholars also studied which effects caused a nation to switch to a particular electoral system. One prominent current voting theorist is Nicolaus Tideman, who formalized concepts such as strategic nomination and the spoiler effect in the independence of clones criterion. Tideman also devised the ranked pairs method, a Condorcet method that is not susceptible to clones.
The study of electoral systems influenced a new push for electoral reform beginning around the 1990s, with proposals being made to replace plurality voting in governmental elections with other methods. New Zealand adopted mixed - member proportional representation for the 1993 general elections and STV for some local elections in 2004. After plurality voting was a key factor in the contested results of the 2000 presidential elections in the United States, various municipalities in the United States began to adopt IRV, although some of them subsequently returned to their prior method. However, attempts at introducing more proportional systems were not always successful; in Canada there were two referendums in British Columbia in 2005 and 2009 on adopting an STV method, both of which failed. In the United Kingdom, a 2011 referendum on adopting Instant - runoff voting saw the proposal rejected.
In other countries there were calls for the restoration of plurality or majoritarian systems or their establishment where they have never been used; a referendum was held in Ecuador in 1994 on the adoption of the two - round system, but the idea was rejected. In Romania a proposal to switch to a two - round system for parliamentary elections failed only because voter turnout in the referendum was too low. Attempts to reintroduce single - member constituencies in Poland (2015) and a two - round system in Bulgaria (2016) via referendums both also failed due to low turnout.
Electoral systems can be compared by different means. Attitudes towards systems are highly influenced by the systems ' impact on groups that one supports or opposes, which can make the objective comparison of voting systems difficult. There are several ways to address this problem:
One approach is to define criteria mathematically, such that any electoral system either passes or fails. This gives perfectly objective results, but their practical relevance is still arguable.
Another approach is to define ideal criteria that no electoral system passes perfectly, and then see how often or how close to passing various methods are over a large sample of simulated elections. This gives results which are practically relevant, but the method of generating the sample of simulated elections can still be arguably biased.
A final approach is to create imprecisely defined criteria, and then assign a neutral body to evaluate each method according to these criteria. This approach can look at aspects of electoral systems which the other two approaches miss, but both the definitions of these criteria and the evaluations of the methods are still inevitably subjective.
Arrow 's and Gibbard 's theorems prove that no system using ranked voting, as opposed to cardinal voting, can meet all such criteria simultaneously. Instead of debating the importance of different criteria, another method is to simulate many elections with different electoral systems, and estimate the typical overall happiness of the population with the results, their vulnerability to strategic voting, their likelihood of electing the candidate closest to the average voter, etc.
how many of each gender in the world | Human sex ratio - wikipedia
In anthropology and demography, the human sex ratio is the ratio of males to females in a population. More data are available for humans than for any other species, and the human sex ratio is more studied than that of any other species, but interpreting these statistics can be difficult.
Like most sexual species, the sex ratio in humans is approximately 1: 1. Due to higher female fetal mortality, the sex ratio at birth worldwide is commonly thought to be 107 boys to 100 girls, although this value is subject to debate in the scientific community. The sex ratio for the entire world population is 101 males to 100 females. Depending upon which definition is used, between 0.1 % and 1.7 % of live births are intersex.
Gender imbalance may arise as a consequence of various factors including natural factors, exposure to pesticides and environmental contaminants, war casualties, gender - selective abortions and infanticides, aging, and deliberate gendercide.
Human sex ratios, either at birth or in the population as a whole, are reported in any of four ways: the ratio of males to females, the ratio of females to males, the proportion of males, or the proportion of females. If there are 108,000 males and 100,000 females the ratio of males to females is 1.080 and the proportion of males is 51.9 %. Scientific literature often uses the proportion of males. This article uses the ratio of males to females, unless specified otherwise.
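The example in the paragraph above works out as follows; this is just a worked restatement of the two conventions, not additional data.

```python
males, females = 108_000, 100_000

ratio_m_to_f = males / females                # 1.08, i.e. 108 males per 100 females
proportion_male = males / (males + females)   # 0.5192..., i.e. 51.9 %

print(round(ratio_m_to_f, 3), f"{proportion_male:.1%}")   # 1.08 51.9%
```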
In a study around 2002, the natural sex ratio at birth was estimated to be close to 1.06 males / female.
Infant mortality is higher in boys than girls in most parts of the world. Recent studies have found that numerous preconception or prenatal environmental factors affect the probability of a baby being conceived male or female. It has been proposed that these environmental factors also explain sex differences in mortality. In most populations, adult males tend to have higher death rates than adult females of the same age (even after allowing for causes specific to females such as death in childbirth), both due to natural causes such as heart attacks and strokes, which account for by far the majority of deaths, and also to violent causes, such as homicide and warfare, resulting in higher life expectancy of females. For example, in the United States, as of 2006, an adult non-elderly male is 3 to 6 times more likely to become a victim of a homicide and 2.5 to 3.5 times more likely to die in an accident than a female of the same age. Consequently, the sex ratio tends to reduce as age increases and among the elderly there is usually a greater proportion of females. For example, the male to female ratio falls from 1.05 for the group aged 15 to 65 to 0.70 for the group over 65 in Germany, from 1.00 to 0.72 in the United States, from 1.06 to 0.91 in mainland China, and from 1.07 to 1.02 in India.
In the United States, the sex ratios at birth over the period 1970 -- 2002 were 1.05 for the white non-Hispanic population, 1.04 for Mexican Americans, 1.03 for African Americans and Indians, and 1.07 for mothers of Chinese or Filipino ethnicity. Among Western European countries c. 2001, the ratios ranged from 1.04 in Belgium to 1.07 in Switzerland, Italy, Ireland and Portugal. In the aggregated results of 56 Demographic and Health Surveys in African countries, the ratio is 1.03, albeit with considerable country - to - country variation.
Even in the absence of sex selection practices, a range of "normal '' sex ratios at birth of between 103 and 108 boys per 100 girls has been observed in different economically developed countries, and among different ethnic and racial groups within a given country.
In an extensive study, carried out around 2005, of sex ratio at birth in the United States from 1940 over 62 years, statistical evidence suggested the following:
Fisher 's principle is an explanation of why the sex ratio of most species is approximately 1: 1. Outlined by Ronald Fisher in his 1930 book, it is an argument in terms of parental expenditure. Essentially he argues that the 1: 1 ratio is the evolutionarily stable strategy.
The natural factors that affect the human sex ratio are an active area of scientific research. Over 1000 articles have been published in various journals. Two of the often cited reviews of scientific studies on human sex ratio are by W.H. James. The scientific studies are based on extensive birth and death records in Africa, the Americas, Asia, Australia, and Europe. A few of these studies extend to over 100 years of yearly human sex ratio data for some countries. These studies suggest that the human sex ratio, both at birth and as a population matures, can vary significantly according to a large number of factors, such as paternal age, maternal age, plural birth, birth order, gestation weeks, race, parent 's health history, and parent 's psychological stress. Remarkably, the trends in human sex ratio are not consistent across countries at a given time, or over time for a given country. In economically developed countries, as well as developing countries, these scientific studies have found that the human sex ratio at birth has historically varied between 0.94 and 1.15 for natural reasons.
In a scientific paper published in 2008, James states that conventional assumptions have been:
James cautions that available scientific evidence stands against the above assumptions and conclusions. He reports that there is an excess of males at birth in almost all human populations, and the natural sex ratio at birth is usually between 1.02 and 1.08. However the ratio may deviate significantly from this range for natural reasons.
A 1999 scientific paper published by Jacobsen reported the sex ratio for 815,891 children born in Denmark between 1980 and 1993. They studied the birth records to identify the effects of multiple birth, birth order, age of parents and the sexes of preceding siblings on the proportion of males using contingency tables, chi - squared tests and regression analysis. The secondary sex ratio decreased with increased number of children per plural birth and with paternal age, whereas no significant independent effect was observed for maternal age, birth order, or other natural factors.
A 2009 research paper published by Branum et al. reports the sex ratio derived from data in United States birth records over a 25 - year period (1981 -- 2006). This paper reports that the sex ratio at birth for the white ethnic group in the United States was 1.04 when the gestational age was 33 -- 36 weeks, but 1.15 for gestational ages of less than 28 weeks, 28 -- 32 weeks, and 37 or more weeks. This study also found that the sex ratios at birth in the United States, between 1981 and 2006, were lower in both black and Hispanic ethnic groups when compared with the white ethnic group.
The relationship between natural factors and human sex ratio at birth, and with aging, remains an active area of scientific research.
Various scientists have examined the question of whether human birth sex ratios have historically been affected by environmental stressors such as climate change and global warming. Catalano et al. report that cold weather is an environmental stressor, and women subjected to colder weather abort frail male fetuses in greater proportion, thereby lowering birth sex ratios. Cold weather stressors simultaneously extend male longevity, thereby raising the human sex ratio in its older age bracket. The Catalano team finds that a 1 ° C increase in annual temperature predicts one more male than expected for every 1,000 females born in a year.
Helle et al. have studied 138 years ' worth of human birth sex ratio data, from 1865 to 2003. They find an increased excess of male births during periods of exogenous stress (World War II) and during warm years. In the warmest period over the 138 years, the birth sex ratio peaked at about 1.08 in northern Europe.
Causes of stress during gestation, such as maternal malnutrition generally appear to increase fetal deaths particularly among males, resulting in a lower boy to girl ratio at birth. Also, higher incidence of Hepatitis B virus in populations is believed to increase the male to female sex ratio, while some unexplained environmental health hazards are thought to have the opposite effect.
The effects of gestational environment on human sex ratio are complicated and unclear, with numerous conflicting reports. For example, Oster et al. examined a data set of 67,000 births in China, 15 percent of whom were Hepatitis B carriers. They found no effect on birth sex ratio from Hepatitis B presence in either the mothers or fathers.
A 2007 survey by the Arctic Monitoring and Assessment Program noted abnormally low sex ratios in Russian Arctic villages and Inuit villages in Greenland and Canada, and attributed this imbalance to high levels of endocrine disruptors in the blood of inhabitants, including PCBs and DDT. These chemicals are believed to have accumulated in the tissues of fish and animals that make up the bulk of these populations ' diets. However, as noted in the Social factors section below, it is important to exclude alternative explanations, including social ones, when examining large human populations whose composition by ethnicity and race may be changing.
A 2008 report provides further evidence of effects of feminizing chemicals on male development in each class of vertebrate species as a worldwide phenomenon, possibly leading to a decline in the sex ratio in humans and a possible decline in sperm counts. Out of over 100,000 recently introduced chemicals, 99 % are poorly regulated.
Other factors that could possibly affect the sex ratio include:
Other scientific studies suggest that environmental effects on human sex ratio at birth are either limited or not properly understood. For example, a research paper published in 1999, by scientists from Finland 's National Public Health Institute, reports the effect of environmental chemicals and changes in sex ratio over 250 years in Finland. This scientific team evaluated whether Finnish long - term data are compatible with the hypothesis that the decrease in the ratio of male to female births in industrial countries is caused by environmental factors. They analyzed the sex ratio of births from the files of Statistics Finland and all live births in Finland from 1751 to 1997. They found an increase in the proportion of males from 1751 to 1920; this was followed by a decrease and interrupted by peaks in births of males during and after World War I and World War II. None of the natural factors such as paternal age, maternal age, age difference of parents, birth order could explain the time trends. The scientists found that the peak ratio of male proportion precedes the period of industrialization or the introduction of pesticides or hormonal drugs, rendering a causal association between environmental chemicals and human sex ratio at birth unlikely. Moreover, these scientists claim that the trends they found in Finland are similar to those observed in other countries with worse pollution and much greater pesticide use.
Some studies have found that certain kinds of environmental pollution, in particular dioxins, lead to higher rates of female births. Other pollutants, such as PCBs, can cause an increase in the male ratio.
Sex - selective abortion and infanticide are thought to significantly skew the naturally occurring ratio in some populations, such as China, where the introduction of ultrasound scans in the late 1980s has led to a birth sex ratio (males to females) of 1.181 (2010 official census data for China). The 2011 India census reports India 's sex ratio in 0 -- 6 age bracket at 1.088. The 2011 birth sex ratios for China and India are significantly above the mean ratio recorded in the United States from 1940 through 2002 (1.051); however, their birth sex ratios are within the 0.98 -- 1.14 range observed in the United States for significant ethnic groups over the same time period. Along with Asian countries, a number of European, Middle East, and Latin American countries have recently reported high birth sex ratios in the 1.06 to 1.14 range. High birth sex ratios, some claim, may be caused in part by social factors.
Reported sex ratios at birth, outside the typical range of 1.03 to 1.07, thus call for an explanation of some kind.
Another hypothesis has been inspired by the recent and persistent high birth sex ratios observed in Georgia and Armenia -- both predominantly Orthodox Christian societies -- and Azerbaijan, a predominantly Muslim society. Since their independence from the Soviet Union, the birth sex ratio in these Caucasus countries has risen sharply to between 1.11 and 1.20, some of the world's highest. Mesle et al. consider the hypothesis that the high birth sex ratio may be because of the social trend of more than two children per family, and that birth order possibly affects the sex ratio in this region of the world. They also consider the hypothesis that sons are preferred in these Caucasus countries, that ultrasound scanning has become widespread, and that sex-selective abortion is practiced; however, the scientists admit that they do not have definitive proof that sex-selective abortion is actually happening or that there are no natural reasons for the persistently high birth sex ratios.
In all such research, it is important to consider plausible alternative explanations. For example, in some populations that have experienced declining sex ratios, researchers have suggested that ecological factors may be at work.
As an example of how the social composition of a human population may produce unusual changes in sex ratios, we can consider a study in several counties of California where declining sex ratios had been observed. Smith and Von Behren observe that: "In the raw data, the male birth proportion is indeed declining. However, during this period, there were also shifts in demographics that influence the sex ratio. Controlling for birth order, parents ' age, and race / ethnicity, different trends emerged. White births (which account for over 80 %) continued to show a statistically significant decline, while other racial groups showed non-statistically significant declines (Japanese, Native American, other), little or no change (black), or an increase (Chinese). Finally, when the white births were divided into Hispanic and non-Hispanic (possible since 1982), it was found that both white subgroups suggest an increase in male births. '' They concluded "that the decline in male births in California is largely attributable to changes in demographics. ''
Several studies have examined human birth sex ratio data to determine if there is a natural relationship between the age of mother or father to the birth sex ratio. For example, Ruder has studied 1.67 million births in 33 states in the United States to discern effect of parent 's age and birth sex ratios. Similarly, Jacobsen et al. have studied 0.82 million births in Denmark with the same goal. These scientists find that maternal age has no statistically significant role on human birth sex ratio. However, they report a significant effect of paternal age. Significantly more male babies were born per 1000 female babies to younger fathers than to older fathers. These studies suggest that social factors such as early marriage and quickly fertile couples may play a role in raising birth sex ratios in certain societies.
Reported sex ratios at birth for some human populations may be influenced not only by cultural preferences and social practices that favor the birth or survival of one sex over the other but also by incomplete or inaccurate reporting or recording of the births or the survival of infants. Even what constitutes a live birth or infant death may vary from one population to another. For example, for most of the 20th century in Russia (and the Soviet Union), extremely premature newborns (less than 28 weeks gestational age, or less than 1000 grams in weight, or less than 35 centimeters in length) were not counted as a live birth until they had survived 7 days; and if that infant died in those first 168 hours it would not be counted as an infant death. This led to serious underreporting of the Infant mortality rate (by 22 % to 25 %) relative to standards recommended by the World Health Organization.
When unusual sex ratios at birth (or any other age) are observed, it is important to consider misreporting, misrecording, or underregistration of births or deaths as possible reasons. Some researchers have in part attributed the high male to female sex ratios reported in mainland China in the last 25 years to the underreporting of the births of female children after the implementation of the one - child policy, though alternative explanations are now generally more widely accepted, including above all the use of ultrasound technology and sex - selective abortion of female fetuses and, probably to a more limited degree, neglect or in some cases infanticide of females. In the case of China, because of deficiencies in the vital statistics registration system, studies of sex ratios at birth have relied either on special fertility surveys, whose accuracy depends on full reporting of births and survival of both male and female infants, or on the national population census from which both birth rates and death rates are calculated from the household 's reporting of births and deaths in the 18 months preceding the census. To the extent that this underreporting of births or deaths is sex - selective, both fertility surveys and censuses may inaccurately reflect the actual sex ratios at birth.
Catalano has examined the hypothesis that population stress induced by a declining economy reduces the human sex ratio. He compared the sex ratio in East and West Germany for the years 1946 to 1999, two genetically similar populations. The population stressors theory predicts that the East German sex ratio should have been lower in 1991, when East Germany's economy collapsed, than expected from its previous years. Furthermore, the theory suggests that East German birth sex ratios should generally be lower than the observed sex ratio in West Germany for the same years, over time. According to Catalano's study, the birth sex ratio data from East Germany and West Germany over 45 years support the hypothesis. The sex ratio in East Germany was also at its lowest in 1991. According to Catalano's study, assuming women in East Germany did not opt to abort males more than females, the best hypothesis is that a collapsing economy lowers the human birth sex ratio, while a booming economy raises it. Catalano notes that these trends may be related to the observed trend of elevated incidences of very low birth weight from maternal stress during certain macroeconomic circumstances.
A research group led by Ein-Mor reported that sex ratio does not seem to change significantly with either maternal or paternal age. Neither gravidity nor parity seems to affect the male-to-female ratio. However, there is a significant association of sex ratio with the length of gestation. These Ein-Mor conclusions have been disputed. For example, James suggested that the Ein-Mor results are based on a few demographic variables and a small data set, and that a broader study of variables and a larger population set suggests the human sex ratio shows substantial variation for various reasons, and different trend effects of length of gestation than those reported by Ein-Mor. In another study, James has offered the hypothesis that human sex ratios, and mammalian sex ratios in general, are causally related to the hormone levels of both parents at the time of conception. This hypothesis is yet to be tested and proven true or false over large population sets.
Gender imbalance is a disparity between males and females in a population. As stated above, males usually exceed females at birth but subsequently experience different mortality rates due to many possible causes such as differential natural death rates, war casualties, and deliberate gender control.
According to Nicholas Kristof and Sheryl WuDunn, two Pulitzer Prize - winning reporters for the New York Times, violence against women is causing gender imbalances in many developing countries. They detail sex - selective infanticide in the developing world, particularly in China, India and Pakistan.
Commonly, countries with gender imbalances have three characteristics in common. The first is a rapid decline in fertility, either because of preference for smaller families or to comply with their nation 's population control measures. Second, there is pressure for women to give birth to sons, often because of cultural preferences for male heirs. Third, families have widespread access to technology to selectively abort female fetuses.
As a contributing measure to gender imbalance in developing countries, Kristof and WuDunn 's best estimate is that a girl in India, from 1 to 5 years of age, dies from discrimination every four minutes (132,000 deaths per year); that 39,000 girls in China die annually, within the first year of life, because parents did not give girls the same medical care and attention that boys received. The authors describe similar gender discrimination and gendercide in Congo, Kenya, Pakistan, Iraq, Bahrain, Thailand and many other developing countries.
Some of the factors suggested as causes of the gender imbalance are warfare (excess of females, notably in the wake of WWI in western Europe, and WWII, particularly in the Soviet Union); sex - selective abortion and infanticide (excess of males, notably in China as a result of the one - child policy, or in India); and large - scale migration, such as that by male labourers unable to bring their families with them (as in Qatar and other Gulf countries). Gender imbalance may result in the threat of social unrest, especially in the case of an excess of low - status young males unable to find spouses, and being recruited into the service of militaristic political factions. Economic factors such as male - majority industries and activities like the petrochemical, agriculture, engineering, military, and technology also have created a male gender imbalance in some areas dependent on one of these industries. Conversely, the entertainment, banking, tourism, fashion, and service industries may have resulted in a female - majority gender imbalance in some areas dependent on them.
One study found that the male - to - female sex ratio in the German state of Bavaria fell as low as 0.60 after the end of World War II for the most severely affected age cohort (those between 21 and 23 years old in 1946). This same study found that out - of - wedlock births spiked from approximately 10 -- 15 % during the inter-war years up to 22 % at the end of the war. This increase in out - of - wedlock births was attributed to a change in the marriage market caused by the decline in the sex ratio.
The Northern Mariana Islands have the highest female ratio with 0.77 males per female. Qatar has the highest male ratio, with 2.87 males / female. For the group aged below 15, Sierra Leone has the highest female ratio with 0.96 males / female, and the Republic of Georgia and the People 's Republic of China are tied for the highest male ratio with 1.13 males / female (according to the 2006 CIA World Factbook).
The value for the entire world population is 1.01 males / female, with 1.07 at birth, 1.06 for those under 15, 1.02 for those between 15 and 64, and 0.78 for those over 65.
The "First World '' G7 members all have a gender ratio in the range of 0.95 -- 0.98 for the total population, of 1.05 -- 1.07 at birth, of 1.05 -- 1.06 for the group below 15, of 1.00 -- 1.04 for the group aged 15 -- 64, and of 0.70 -- 0.75 for those over 65.
Countries on the Arabian peninsula tend to have a ' natural ' ratio of about 1.05 at birth but a very high ratio of males for those over 65 (Saudi Arabia 1.13, Arab Emirates 2.73, Qatar 2.84), indicating either an above-average mortality rate for females or a below-average mortality for males, or, more likely in this case, a large population of aging male guest workers. Conversely, countries of Northern and Eastern Europe (the Baltic states, Belarus, Ukraine, Russia) tend to have a ' normal ' ratio at birth but a very low ratio of males among those over 65 (Russia 0.46, Latvia 0.48, Ukraine 0.52); similarly, Armenia has a far above average male ratio at birth (1.17), and a below-average male ratio above 65 (0.67). This effect may be caused by emigration and higher male mortality as a result of higher Soviet-era deaths; it may also be related to the enormous (by western standards) rate of alcoholism in the former Soviet states. Another possible contributory factor is an aging population, with a higher than normal proportion of relatively elderly people: recall that due to higher differential mortality rates the ratio of males to females falls for each year of age.
In the evolutionary biology of sexual reproduction, the operational sex ratio (OSR), is the ratio of sexually competing males that are ready to mate to sexually competing females that are ready to mate, or alternatively the local ratio of fertilizable females to sexually active males at any given time. Its difference from the physical sex ratio, is that it does not take into account sexually inactive or non-competitive individuals (individuals that do not compete for mates).
There are several social consequences of an imbalanced sex ratio. High ratios of males make it easier for women to marry, but harder for men. In parts of China and India, there is a 12 -- 15 % excess of young men. These men will remain single and will be unable to have families, in societies where marriage is regarded as virtually universal and social status and acceptance depend, in large part, on being married and creating a new family. Analyses of how sex ratio imbalances affect personal consumption and intra-household distribution were pioneered by Gary Becker, Shoshana Grossbard - Shechtman, and Marcia Guttentag and Paul Secord. High ratios of males have a positive effect on marital fertility and women 's share of household consumption and negative effects on non-marital cohabitation and fertility and women 's labor supply. It has been shown that variation in sex ratio over time is inversely related to married women 's labor supply in the U.S.
An additional problem is that many of these men are of low socioeconomic class with limited education. When there is a shortage of women in the marriage market, the women can "marry up '', inevitably leaving the least desirable men with no marriage prospects. In many communities today, there are growing numbers of young men who come from lower classes who are marginalized because of lack of family prospects and the fact that they have little outlet for sexual energy. There is evidence that this situation will lead to increased levels of antisocial behavior and violence and will ultimately present a threat to the stability and security of society.
Countries:
|
which is the largest indoor stadium in india | List of Indoor arenas in India - wikipedia
This is a list of indoor arenas in India that have been used for major indoor matches.
|
when does a population reach the carrying capacity of the environment | Carrying capacity - wikipedia
The carrying capacity of a biological species in an environment is the maximum population size of the species that the environment can sustain indefinitely, given the food, habitat, water, and other necessities available in the environment. In population biology, carrying capacity is defined as the environment 's maximal load, which is different from the concept of population equilibrium. Its effect on population dynamics may be approximated in a logistic model, although this simplification ignores the possibility of overshoot which real systems may exhibit.
Carrying capacity was originally used to determine the number of animals that could graze on a segment of land without destroying it. Later, the idea was expanded to more complex populations, like humans. For the human population, more complex variables such as sanitation and medical care are sometimes considered as part of the necessary establishment. As population density increases, birth rate often increases and death rate typically decreases. The difference between the birth rate and the death rate is the "natural increase ''. The carrying capacity could support a positive natural increase or could require a negative natural increase. Thus, the carrying capacity is the number of individuals an environment can support without significant negative impacts to the given organism and its environment. Below carrying capacity, populations typically increase, while above, they typically decrease. A factor that keeps population size at equilibrium is known as a regulating factor. Population size decreases above carrying capacity due to a range of factors depending on the species concerned, but can include insufficient space, food supply, or sunlight. The carrying capacity of an environment may vary for different species and may change over time due to a variety of factors including: food availability, water supply, environmental conditions and living space. The origins of the term "carrying capacity '' are uncertain, with researchers variously stating that it was used "in the context of international shipping '' or that it was first used during 19th - century laboratory experiments with micro-organisms. A recent review finds the first use of the term in an 1845 report by the US Secretary of State to the US Senate.
Several estimates of the carrying capacity have been made with a wide range of population numbers. A 2001 UN report said that two - thirds of the estimates fall in the range of 4 billion to 16 billion with unspecified standard errors, with a median of about 10 billion. More recent estimates are much lower, particularly if non-renewable resource depletion and increased consumption are considered. Changes in habitat quality or human behavior at any time might increase or reduce carrying capacity. In the view of Paul and Anne Ehrlich, "for earth as a whole (including those parts of it we call Australia and the United States), human beings are far above carrying capacity today. ''
The application of the concept of carrying capacity for the human population has been criticized for not successfully capturing the multi-layered processes between humans and the environment, which have a nature of fluidity and non-equilibrium, and for sometimes being employed in a blame - the - victim framework.
Supporters of the concept argue that the idea of a limited carrying capacity is just as valid applied to humans as when applied to any other species. Animal population size, living standards, and resource depletion vary, but the concept of carrying capacity still applies. The number of people is not the only factor in the carrying capacity of Earth. Waste and over-consumption, especially by wealthy and near - wealthy people and nations, are also putting significant strain on the environment together with human overpopulation. Population and consumption together appear to be at the core of many human problems. Some of these issues have been studied by computer simulation models such as World3. When scientists talk of global change today, they are usually referring to human - caused changes in the environment of sufficient magnitude eventually to reduce the carrying capacity of much of Earth (as opposed to local or regional areas) to support organisms, especially Homo sapiens.
Some aspects of a system 's carrying capacity may involve matters such as available supplies of food, water, raw materials, and / or other similar resources. In addition, there are other factors that govern carrying capacity which may be less instinctive or less intuitive in nature, such as ever - increasing and / or ever - accumulating levels of wastes, damage, and / or eradication of essential components of any complex functioning system. Eradication of, for example, large or critical portions of any complex system (envision a space vehicle, for instance, or an airplane, or an automobile, or computer code, or the body components of a living vertebrate) can interrupt essential processes and dynamics in ways that induce systems failures or unexpected collapse. (As an example of these latter factors, the "carrying capacity '' of a complex system such an airplane is more than a matter of available food, or water, or available seating, but also reflects total weight carried and presumes that its passengers do not damage, destroy, or eradicate parts, doors, windows, wings, engine parts, fuel, and oil, and so forth.) Thus, on a global scale, food and similar resources may affect planetary carrying capacity to some extent so long as Earth 's human passengers do not dismantle, eradicate, or otherwise destroy critical biospheric life - support capacities for essential processes of self - maintenance, self - perpetuation, and self - repair.
Thus, carrying capacity interpretations that focus solely on resource limitations (such as food) may neglect wider functional factors. As a simplified food-only illustration: if the members of a population neither gain nor lose weight in the long term, their total food requirement is roughly constant, and if the quantity of food available is invariably equal to that required amount "Y '', carrying capacity has been reached. Humans, with the need to enhance their reproductive success (see Richard Dawkins' The Selfish Gene), understand that food supply can vary and also that other factors in the environment can alter humans' need for food. A house, for example, might mean that one does not need to eat as much to stay warm as one otherwise would. Over time, monetary transactions have replaced barter and local production, and consequently modified local human carrying capacity. However, purchases also impact regions thousands of miles away. For example, carbon dioxide from an automobile travels to the upper atmosphere. This led Paul R. Ehrlich to develop the I = PAT equation.
where I is environmental impact, P is population, A is affluence (consumption per person), and T is technology (impact per unit of consumption).
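A minimal sketch of the arithmetic behind the I = PAT identity is shown below; the input values are hypothetical and chosen only to illustrate the calculation, not taken from Ehrlich's work.

    # I = PAT: Impact = Population x Affluence x Technology.
    # All input values below are hypothetical, for illustration only.
    def environmental_impact(population, affluence, technology):
        """Return I = P * A * T.

        population -- number of people
        affluence  -- consumption per person (e.g. GDP per capita)
        technology -- impact per unit of consumption (e.g. tonnes CO2 per dollar)
        """
        return population * affluence * technology

    # Hypothetical example: 1 billion people, 10,000 units of consumption per
    # person, 0.0005 tonnes of CO2 per unit of consumption.
    impact = environmental_impact(1e9, 10_000, 0.0005)
    print(f"Illustrative impact: {impact:.3e} tonnes CO2")  # 5.000e+09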
An important model related to carrying capacity (K) is the logistic growth curve. The logistic growth curve depicts a more realistic version of how population growth rate, available resources, and the carrying capacity are inter-connected. As illustrated by the logistic growth model, when the population size is small and there are many resources available, the population increases over time and so does the growth rate. However, as the population size nears the carrying capacity and resources become limited, the growth rate decreases and the population starts to level out at K. This model is based on the assumption that carrying capacity does not change. One thing to keep in mind, however, is that the carrying capacity of a population can increase or decrease, and there are various factors that affect it. For instance, an increase in population growth can lead to over-exploitation of necessary natural resources and therefore decrease the overall carrying capacity of that environment.
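The sketch below integrates the standard logistic equation dN/dt = rN(1 - N/K) with a simple Euler step to show the levelling-off described above; the parameter values are hypothetical and the code is an illustration, not part of any cited study.

    # Logistic growth toward a fixed carrying capacity K, integrated with a
    # simple Euler step. Parameter values are hypothetical.
    def logistic_trajectory(n0, r, k, dt=0.1, steps=1000):
        """Return a list of population sizes over time for dN/dt = r*N*(1 - N/K)."""
        n = n0
        trajectory = [n]
        for _ in range(steps):
            n += r * n * (1 - n / k) * dt
            trajectory.append(n)
        return trajectory

    pop = logistic_trajectory(n0=10, r=0.5, k=1000)
    # Early values grow quickly; later values level out near K = 1000.
    print(round(pop[0]), round(pop[100]), round(pop[-1]))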
Technology can play a role in the dynamics of carrying capacity, and while this can sometimes be positive, in other cases its influence can be problematic. For example, it has been suggested that in the past the Neolithic revolution increased the carrying capacity of the world relative to humans through the invention of agriculture. In a similar way, viewed from the perspective of food, the use of fossil fuels has been alleged to artificially increase the carrying capacity of the world through the use of stored sunlight, even though such food production does not guarantee the capacity of the Earth's climatic and biospheric life-support systems to withstand the damage and wastes arising from those fossil fuels. However, such interpretations presume the continued and uninterrupted functioning of all other critical components of the global system. It has also been suggested that other technological advances that have increased the carrying capacity of the world relative to humans are: polders, fertilizer, composting, greenhouses, land reclamation, and fish farming. In an adverse way, however, many technologies enable economic entities and individual humans to inflict far more damage and eradication, far more quickly and efficiently, and on a wider scale than ever. Examples of such problematic outcomes of technology include machine guns, chainsaws, earth-movers, and the capacity of industrialized fishing fleets to capture and harvest targeted fish species faster than the fish themselves can reproduce.
Agricultural capability on Earth expanded in the last quarter of the 20th century. But now there are many projections of a continuation of the decline in world agricultural capability (and hence carrying capacity) which began in the 1990s. Most conspicuously, China 's food production is forecast to decline by 37 % by the last half of the 21st century, placing a strain on the entire carrying capacity of the world, as China 's population could expand to about 1.5 billion people by the year 2050. This reduction in China 's agricultural capability (as in other world regions) is largely due to the world water crisis and especially due to mining groundwater beyond sustainable yield, which has been happening in China since the mid-20th century.
Lester Brown of the Earth Policy Institute, has said: "It would take 1.5 Earths to sustain our present level of consumption. Environmentally, the world is in an overshoot mode. ''
One way to estimate human demand compared to ecosystem 's carrying capacity is "ecological footprint '' accounting. Rather than speculating about future possibilities and limitations imposed by carrying capacity constraints, Ecological Footprint accounting provides empirical, non-speculative assessments of the past. It compares historic regeneration rates, biocapacity, against historical human demand, ecological footprint, in the same year. One result shows that humanity 's demand footprint in 1999 exceeded the planet 's bio-capacity by > 20 %. However, this measurement does not take into account the depletion of the actual fossil fuels, "which would result in a carbon Footprint many hundreds of times higher than the current calculation. ''
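As a toy illustration of the footprint-versus-biocapacity comparison described above (the figures below are placeholders, not the study's data):

    # Ecological-footprint accounting compares demand (footprint) with
    # regeneration (biocapacity) in the same year. Values are placeholders.
    def overshoot_ratio(footprint_gha, biocapacity_gha):
        """Return demand as a multiple of regeneration; values above 1.0 mean overshoot."""
        return footprint_gha / biocapacity_gha

    # A footprint 20% above biocapacity, expressed in global hectares (gha):
    print(overshoot_ratio(14.4e9, 12.0e9))  # 1.2 -> demand exceeds regeneration by 20%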
There is also concern about the ability of countries around the globe to decrease and maintain their ecological footprints. Holden and Linnerud, scholars working to provide a better framework for judging sustainable development in policy making, plotted the positions of different countries around the world, showing a roughly linear relation between GDP (PPP) and ecological footprint per capita in 2007. Such comparisons offer one way to ask where individual countries stand in attempting to reach sustainability and to identify ways of reducing their ecological footprints. According to their comparison, the United States had among the largest ecological footprints per capita, along with Norway, Sweden, and Austria, in contrast to Cuba, Bangladesh, and Korea.
|
what is meant by functions in excel explain any four general functions of excel | Microsoft Excel - Wikipedia
Microsoft Excel is a spreadsheet developed by Microsoft for Windows, macOS, Android and iOS. It features calculation, graphing tools, pivot tables, and a macro programming language called Visual Basic for Applications. It has been a very widely applied spreadsheet for these platforms, especially since version 5 in 1993, and it has replaced Lotus 1 - 2 - 3 as the industry standard for spreadsheets. Excel forms part of Microsoft Office.
Microsoft Excel has the basic features of all spreadsheets, using a grid of cells arranged in numbered rows and letter - named columns to organize data manipulations like arithmetic operations. It has a battery of supplied functions to answer statistical, engineering and financial needs. In addition, it can display data as line graphs, histograms and charts, and with a very limited three - dimensional graphical display. It allows sectioning of data to view its dependencies on various factors for different perspectives (using pivot tables and the scenario manager). It has a programming aspect, Visual Basic for Applications, allowing the user to employ a wide variety of numerical methods, for example, for solving differential equations of mathematical physics, and then reporting the results back to the spreadsheet. It also has a variety of interactive features allowing user interfaces that can completely hide the spreadsheet from the user, so the spreadsheet presents itself as a so - called application, or decision support system (DSS), via a custom - designed user interface, for example, a stock analyzer, or in general, as a design tool that asks the user questions and provides answers and reports. In a more elaborate realization, an Excel application can automatically poll external databases and measuring instruments using an update schedule, analyze the results, make a Word report or PowerPoint slide show, and e-mail these presentations on a regular basis to a list of participants. Excel was not designed to be used as a database.
Microsoft allows for a number of optional command - line switches to control the manner in which Excel starts.
The Windows version of Excel supports programming through Microsoft 's Visual Basic for Applications (VBA), which is a dialect of Visual Basic. Programming with VBA allows spreadsheet manipulation that is awkward or impossible with standard spreadsheet techniques. Programmers may write code directly using the Visual Basic Editor (VBE), which includes a window for writing code, debugging code, and code module organization environment. The user can implement numerical methods as well as automating tasks such as formatting or data organization in VBA and guide the calculation using any desired intermediate results reported back to the spreadsheet.
VBA was removed from Mac Excel 2008, as the developers did not believe that a timely release would allow porting the VBA engine natively to Mac OS X. VBA was restored in the next version, Mac Excel 2011, although the build lacks support for ActiveX objects, impacting some high level developer tools.
A common and easy way to generate VBA code is by using the Macro Recorder. The Macro Recorder records actions of the user and generates VBA code in the form of a macro. These actions can then be repeated automatically by running the macro. The macros can also be linked to different trigger types like keyboard shortcuts, a command button or a graphic. The actions in the macro can be executed from these trigger types or from the generic toolbar options. The VBA code of the macro can also be edited in the VBE. Certain features such as loop functions and screen prompts by their own properties, and some graphical display items, can not be recorded, but must be entered into the VBA module directly by the programmer. Advanced users can employ user prompts to create an interactive program, or react to events such as sheets being loaded or changed.
Macro Recorded code may not be compatible between Excel versions. Some code that is used in Excel 2010 can not be used in Excel 2003. Making a Macro that changes the cell colors and making changes to other aspects of cells may not be backward compatible.
VBA code interacts with the spreadsheet through the Excel Object Model, a vocabulary identifying spreadsheet objects, and a set of supplied functions or methods that enable reading and writing to the spreadsheet and interaction with its users (for example, through custom toolbars or command bars and message boxes). User - created VBA subroutines execute these actions and operate like macros generated using the macro recorder, but are more flexible and efficient.
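For readers who want to drive the same Excel Object Model from outside VBA, COM automation offers one route. The sketch below assumes a Windows machine with Excel and the third-party pywin32 package installed; it is an illustration of the object model rather than anything specific to this article, and the output path is hypothetical.

    # Drive the Excel Object Model from Python via COM automation (Windows only).
    # Requires Excel and the pywin32 package; treat this as an illustrative sketch.
    import win32com.client

    excel = win32com.client.Dispatch("Excel.Application")
    excel.Visible = False
    wb = excel.Workbooks.Add()            # new workbook
    ws = wb.Worksheets(1)                 # first worksheet
    ws.Cells(1, 1).Value = "answer"       # write to A1
    ws.Cells(1, 2).Value = 42             # write to B1
    ws.Cells(1, 3).Formula = "=B1*2"      # Excel evaluates the formula itself
    wb.SaveAs(r"C:\temp\demo.xlsx")       # hypothetical output path
    excel.Quit()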
From its first version Excel supported end user programming of macros (automation of repetitive tasks) and user defined functions (extension of Excel 's built - in function library). In early versions of Excel these programs were written in a macro language whose statements had formula syntax and resided in the cells of special purpose macro sheets (stored with file extension. XLM in Windows.) XLM was the default macro language for Excel through Excel 4.0. Beginning with version 5.0 Excel recorded macros in VBA by default but with version 5.0 XLM recording was still allowed as an option. After version 5.0 that option was discontinued. All versions of Excel, including Excel 2010 are capable of running an XLM macro, though Microsoft discourages their use.
Excel supports charts, graphs, or histograms generated from specified groups of cells. The generated graphic component can either be embedded within the current sheet, or added as a separate object.
These displays are dynamically updated if the content of cells change. For example, suppose that the important design requirements are displayed visually; then, in response to a user 's change in trial values for parameters, the curves describing the design change shape, and their points of intersection shift, assisting the selection of the best design.
Versions of Excel up to 7.0 had a limitation in the size of their data sets of 16K (2^14 = 16,384) rows. Versions 8.0 through 11.0 could handle 64K (2^16 = 65,536) rows and 256 columns (2^8, with the last column labelled 'IV'). Version 12.0 can handle 1M (2^20 = 1,048,576) rows, and 16,384 (2^14, with the last column labelled 'XFD') columns.
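The column labels mentioned above ('IV' for column 256, 'XFD' for column 16,384) follow a bijective base-26 scheme. A small sketch of that conversion (not part of Excel itself, just an illustration of how the labels are derived):

    # Convert a 1-based column number to an Excel-style column label
    # (bijective base 26: A..Z, AA..AZ, ..., XFD).
    def column_label(n):
        label = ""
        while n > 0:
            n, rem = divmod(n - 1, 26)
            label = chr(ord("A") + rem) + label
        return label

    print(column_label(256))    # IV  (last column up to Excel 2003)
    print(column_label(16384))  # XFD (last column from Excel 2007 onwards)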
Microsoft Excel up until 2007 version used a proprietary binary file format called Excel Binary File Format (. XLS) as its primary format. Excel 2007 uses Office Open XML as its primary file format, an XML - based format that followed after a previous XML - based format called "XML Spreadsheet '' ("XMLSS ''), first introduced in Excel 2002.
Although supporting and encouraging the use of new XML - based formats as replacements, Excel 2007 remained backwards - compatible with the traditional, binary formats. In addition, most versions of Microsoft Excel can read CSV, DBF, SYLK, DIF, and other legacy formats. Support for some older file formats was removed in Excel 2007. The file formats were mainly from DOS - based programs.
OpenOffice.org has created documentation of the Excel format. Since then Microsoft made the Excel binary format specification available to freely download.
The XML Spreadsheet format introduced in Excel 2002 is a simple, XML-based format missing some more advanced features like storage of VBA macros. Though the intended file extension for this format is .xml, the program also correctly handles XML files with a .xls extension. This feature is widely used by third-party applications (e.g. MySQL Query Browser) to offer "export to Excel '' capabilities without implementing the binary file format. A minimal document in this format will be opened correctly by Excel whether it is saved as Book1.xml or Book1.xls.
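The original article's example listing is not reproduced here; as a stand-in, the sketch below generates a minimal XMLSS document from Python. The Workbook/Worksheet/Table/Row/Cell/Data hierarchy and the urn:schemas-microsoft-com:office:spreadsheet namespace reflect the format as commonly documented, but any further details should be treated as illustrative assumptions.

    # Write a minimal XML Spreadsheet 2003 (XMLSS) document that Excel can open.
    # The structure is an illustrative reconstruction, not the article's original listing.
    XMLSS = """<?xml version="1.0"?>
    <Workbook xmlns="urn:schemas-microsoft-com:office:spreadsheet"
              xmlns:ss="urn:schemas-microsoft-com:office:spreadsheet">
     <Worksheet ss:Name="Sheet1">
      <Table>
       <Row>
        <Cell><Data ss:Type="String">Hello</Data></Cell>
        <Cell><Data ss:Type="Number">42</Data></Cell>
       </Row>
      </Table>
     </Worksheet>
    </Workbook>
    """

    with open("Book1.xml", "w", encoding="utf-8") as f:
        f.write(XMLSS)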
Microsoft Excel 2007, along with the other products in the Microsoft Office 2007 suite, introduced new file formats. The first of these (. xlsx) is defined in the Office Open XML (OOXML) specification.
Windows applications such as Microsoft Access and Microsoft Word, as well as Excel, can communicate with each other and use each other's capabilities. The most common mechanism is Dynamic Data Exchange: although strongly deprecated by Microsoft, this is a common method to send data between applications running on Windows, with official MS publications referring to it as "the protocol from hell ''. As the name suggests, it allows applications to supply data to others for calculation and display. It is very common in financial markets, being used to connect to important financial data services such as Bloomberg and Reuters.
OLE Object Linking and Embedding: allows a Windows application to control another to enable it to format or calculate data. This may take on the form of "embedding '' where an application uses another to handle a task that it is more suited to, for example a PowerPoint presentation may be embedded in an Excel spreadsheet or vice versa.
Excel users can access external data sources via Microsoft Office features such as (for example). odc connections built with the Office Data Connection file format. Excel files themselves may be updated using a Microsoft supplied ODBC driver.
Excel can accept data in real time through several programming interfaces, which allow it to communicate with many data sources such as Bloomberg and Reuters (through addins such as Power Plus Pro).
Alternatively, Microsoft Query provides ODBC - based browsing within Microsoft Excel.
Programmers have produced APIs to open Excel spreadsheets in a variety of applications and environments other than Microsoft Excel. These include opening Excel documents on the web using either ActiveX controls, or plugins like the Adobe Flash Player. The Apache POI opensource project provides Java libraries for reading and writing Excel spreadsheet files. ExcelPackage is another open - source project that provides server - side generation of Microsoft Excel 2007 spreadsheets. PHPExcel is a PHP library that converts Excel5, Excel 2003, and Excel 2007 formats into objects for reading and writing within a web application. Excel Services is a current. NET developer tool that can enhance Excel 's capabilities. Excel spreadsheets can be accessed from Python with xlrd and openpyxl. js - xlsx and js - xls can open Excel spreadsheets from JS.
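As a short illustration of the Python route mentioned above, the sketch below writes and then re-reads a workbook with openpyxl (the file name is arbitrary):

    # Read and write .xlsx files from Python with openpyxl.
    from openpyxl import Workbook, load_workbook

    wb = Workbook()
    ws = wb.active
    ws["A1"] = "total"
    ws["B1"] = 123.45
    ws.append(["row2-a", "row2-b"])       # append a whole row below the used range
    wb.save("demo.xlsx")                  # arbitrary file name

    wb2 = load_workbook("demo.xlsx")
    print(wb2.active["B1"].value)         # 123.45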
Microsoft Excel protection offers several types of passwords: a password to open a document, a password to modify it, a password to unprotect a worksheet, a password to protect the workbook, and a password to protect the sharing of a workbook.
All passwords except the password to open a document can be removed instantly, regardless of the Microsoft Excel version used to create the document. These types of passwords are used primarily for shared work on a document. Such password-protected documents are not encrypted, and data derived from a set password is saved in a document's header. The password to protect the workbook is an exception -- when it is set, a document is encrypted with the standard password "VelvetSweatshop '', but since this is publicly known, it actually does not add any extra protection to the document. The only type of password that can prevent a trespasser from gaining access to a document is the password to open a document. The cryptographic strength of this kind of protection depends strongly on the Microsoft Excel version that was used to create the document.
In Microsoft Excel 95 and earlier versions, password to open is converted to a 16 - bit key that can be instantly cracked. In Excel 97 / 2000 the password is converted to a 40 - bit key, which can also be cracked very quickly using modern equipment. As regards services which use rainbow tables (e.g. Password - Find), it takes up to several seconds to remove protection. In addition, password - cracking programs can brute - force attack passwords at a rate of hundreds of thousands of passwords a second, which not only lets them decrypt a document, but also find the original password.
In Excel 2003 / XP the encryption is slightly better -- a user can choose any encryption algorithm that is available in the system (see Cryptographic Service Provider). Due to the CSP, an Excel file can't be decrypted, and thus the password to open can't be removed, though the brute-force attack speed remains quite high. Nevertheless, the older Excel 97 / 2000 algorithm is set by default. Therefore, users who did not change the default settings lack reliable protection of their documents.
The situation changed fundamentally in Excel 2007, where the modern AES algorithm with a key of 128 bits started being used for encryption, and a 50,000-fold use of the hash function SHA1 reduced the speed of brute-force attacks down to hundreds of passwords per second. In Excel 2010, the strength of the default protection was doubled through the use of a 100,000-fold SHA1 to convert a password to a key.
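The security gain from repeated hashing can be illustrated with a simplified key-stretching sketch; this shows only the general idea, not the actual Office key-derivation algorithm, and the salt and iteration count below are placeholders.

    # Simplified key stretching by repeated hashing: every extra iteration
    # multiplies the work an attacker must do per password guess.
    # This is NOT the exact Office algorithm, only an illustration of the idea.
    import hashlib

    def stretched_key(password, salt, iterations):
        digest = hashlib.sha1(salt + password.encode("utf-8")).digest()
        for _ in range(iterations):
            digest = hashlib.sha1(digest).digest()
        return digest

    key = stretched_key("correct horse", b"placeholder-salt", 100_000)
    print(key.hex())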
Microsoft Excel Viewer is a freeware program for viewing and printing spreadsheet documents created by Excel. Excel Viewer is similar to Microsoft Word Viewer in functionality. (There is not a current version for the Mac.) Excel Viewer is available for Microsoft Windows and Windows CE handheld PCs, such as the NEC MobilePro. It is also possible to open Excel files using certain online tools and services. Online Excel viewers do not require users to have Microsoft Excel installed.
Other errors specific to Excel include misleading statistics functions, mod function errors, date limitations and the Excel 2007 error.
The accuracy and convenience of statistical tools in Excel have been criticized: for mishandling missing data, for returning incorrect values due to inept handling of round-off and large numbers, for only selectively updating calculations on a spreadsheet when some cell values are changed, and for having a limited set of statistical tools. Microsoft has announced that some of these issues were addressed in Excel 2010.
Excel has issues with modulo operations. In the case of excessively large results, Excel will return the error warning #NUM! instead of an answer.
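Excel's MOD function is commonly described as being evaluated through the identity MOD(n, d) = n - d*INT(n/d), whose intermediate quotient can misbehave for very large arguments; that description of Excel's internals is a commonly reported assumption rather than something taken from this article. The sketch below contrasts that formulation with Python's native % operator.

    # Two ways to compute a modulo. The n - d*INT(n/d) form has an intermediate
    # quotient that loses precision for very large n, mirroring the #NUM!
    # behaviour described above; the native % operator avoids that intermediate.
    import math

    def mod_via_int(n, d):
        """Modulo computed through the n - d*INT(n/d) identity (illustrative)."""
        return n - d * math.floor(n / d)

    print(mod_via_int(22, 7))             # 1
    print(22 % 7)                         # 1
    print((2 ** 80) % 7)                  # exact integer arithmetic: 4
    print(mod_via_int(float(2 ** 80), 7)) # inaccurate: 2**80 / 7 is not representable exactly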
Excel includes February 29, 1900, incorrectly treating 1900 as a leap year, even though e.g. 2100 is correctly treated as a regular year. The bug originated from Lotus 1 - 2 - 3 (deliberately implemented to save computer memory), and was also purposely implemented in Excel, for the purpose of bug compatibility. This legacy has later been carried over into Office Open XML file format.
Thus a (not necessarily whole) number greater than or equal to 61 interpreted as a date and time is the (real) number of days after December 30, 1899, 0:00; a non-negative number less than 60 is the number of days after December 31, 1899, 0:00; and numbers with whole part 60 represent the fictional day.
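A minimal sketch of that conversion rule, considering serial day numbers only (the time-of-day fraction is ignored):

    # Convert an Excel serial day number to a calendar date, following the rule
    # above: values below 60 count days from 31 December 1899, values above 60
    # count days from 30 December 1899, and 60 itself is the fictitious
    # 29 February 1900.
    from datetime import date, timedelta

    def excel_serial_to_date(serial):
        if serial == 60:
            raise ValueError("serial 60 is the fictitious 29 February 1900")
        epoch = date(1899, 12, 31) if serial < 60 else date(1899, 12, 30)
        return epoch + timedelta(days=serial)

    print(excel_serial_to_date(1))      # 1900-01-01
    print(excel_serial_to_date(59))     # 1900-02-28
    print(excel_serial_to_date(61))     # 1900-03-01
    print(excel_serial_to_date(43831))  # 2020-01-01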
Excel supports dates with years in the range 1900 - 9999, except that December 31, 1899 can be entered as 0 and is displayed as 0 - jan - 1900.
Converting a fraction of a day into hours, minutes and days by treating it as a moment on the day January 1, 1900, does not work for a negative fraction.
Entering text that happens to be in a form that is interpreted as a date, the text can be unintentionally changed to a standard date format. A similar problem occurs when a text happens to be in the form of a floating point notation of a number. In these cases the original exact text can not be recovered from the result. In the case of entering gene names this is a well known problem in the analysis of DNA, for example in bioinformatics. The problem was first described in 2004.
Microsoft Excel will not open two documents with the same name; instead, it displays an error message explaining that two workbooks with the same name cannot be open at the same time.
The reason is calculation ambiguity with linked cells. If there is a cell containing the formula ='[Book1.xlsx]Sheet1'!$G$33, and there are two books named "Book1 '' open, there is no way to tell which one the user means.
Despite the use of 15 - figure precision, Excel can display many more figures (up to thirty) upon user request. But the displayed figures are not those actually used in its computations, and so, for example, the difference of two numbers may differ from the difference of their displayed values. Although such departures are usually beyond the 15th decimal, exceptions do occur, especially for very large or very small numbers. Serious errors can occur if decisions are made based upon automated comparisons of numbers (for example, using the Excel If function), as equality of two numbers can be unpredictable.
In the figure the fraction 1 / 9000 is displayed in Excel. Although this number has a decimal representation that is an infinite string of ones, Excel displays only the leading 15 figures. In the second line, the number one is added to the fraction, and again Excel displays only 15 figures. In the third line, one is subtracted from the sum using Excel. Because the sum in the second line has only eleven 1 's after the decimal, the difference when 1 is subtracted from this displayed value is three 0 's followed by a string of eleven 1 's. However, the difference reported by Excel in the third line is three 0 's followed by a string of thirteen 1 's and two extra erroneous digits. This is because Excel calculates with about half a digit more than it displays.
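The same double-precision effect can be reproduced outside Excel; the sketch below uses Python floats, which are also IEEE 754 doubles (Excel additionally rounds its display to 15 figures, so the printed digits differ slightly, but the underlying behaviour is the same):

    # Reproduce the 1/9000 example in IEEE 754 double precision: adding 1 and
    # subtracting it again does not return the original value exactly.
    x = 1 / 9000
    y = (x + 1) - 1
    print(f"{x:.20f}")   # 0.000111111111111111...
    print(f"{y:.20f}")   # the trailing digits differ from x
    print(x == y)        # False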
Excel works with a modified 1985 version of the IEEE 754 specification. Excel 's implementation involves conversions between binary and decimal representations, leading to accuracy that is on average better than one would expect from simple fifteen digit precision, but that can be worse. See the main article for details.
Besides accuracy in user computations, the question of accuracy in Excel - provided functions may be raised. Particularly in the arena of statistical functions, Excel has been criticized for sacrificing accuracy for speed of calculation.
As many calculations in Excel are executed using VBA, an additional issue is the accuracy of VBA, which varies with variable type and user - requested precision.
In 2004 scientists reported automatic (and inadvertent) conversion of gene nomenclature into dates. A follow-up study in 2016 found that many peer-reviewed scientific journal papers had been affected, reporting that "Of the selected journals, the proportion of published articles with Excel files containing gene lists that are affected by gene name errors is 19.6 % ''. Excel parses copied and pasted data and sometimes changes it depending on what it thinks it is. For example, MARCH1 (Membrane Associated Ring-CH-type finger 1) gets converted to the date March 1 (1-Mar) and SEPT2 (Septin 2) is converted into September 2 (2-Sep), etc. While some secondary news sources reported this as a fault with Excel, the original authors of the 2016 paper placed the blame with the researchers misusing Excel.
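One common mitigation, offered here as an illustration rather than a recommendation from the article, is to write gene identifiers straight into an .xlsx file as text cells (for example with openpyxl) instead of round-tripping them through CSV, where Excel's import applies its automatic type guessing.

    # Write gene symbols as explicit text cells so that opening the file in
    # Excel does not reinterpret them as dates. A mitigation sketch only.
    from openpyxl import Workbook

    genes = ["MARCH1", "SEPT2", "DEC1"]

    wb = Workbook()
    ws = wb.active
    for row, gene in enumerate(genes, start=1):
        cell = ws.cell(row=row, column=1, value=gene)
        cell.number_format = "@"       # "@" is Excel's Text number format
    wb.save("gene_list.xlsx")          # hypothetical output file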
Microsoft originally marketed a spreadsheet program called Multiplan in 1982. Multiplan became very popular on CP / M systems, but on MS - DOS systems it lost popularity to Lotus 1 - 2 - 3. Microsoft released the first version of Excel for the Macintosh on September 30, 1985, and the first Windows version was 2.05 (to synchronize with the Macintosh version 2.2) in November 1987. Lotus was slow to bring 1 - 2 - 3 to Windows and by the early 1990s Excel had started to outsell 1 - 2 - 3 and helped Microsoft achieve its position as a leading PC software developer. This accomplishment solidified Microsoft as a valid competitor and showed its future of developing GUI software. Microsoft maintained its advantage with regular new releases, every two years or so.
Excel 2.0 is the first version of Excel for Intel platform. Versions prior to 2.0 were only available on the Apple Macintosh.
The first Windows version was labeled "2 '' to correspond to the Mac version. This included a run - time version of Windows.
BYTE in 1989 listed Excel for Windows as among the "Distinction '' winners of the BYTE Awards. The magazine stated that the port of the "extraordinary '' Macintosh version "shines '', with a user interface as good as or better than the original.
Included toolbars, drawing capabilities, outlining, add - in support, 3D charts, and many more new features.
Introduced auto - fill.
Also, an easter egg in Excel 4.0 reveals a hidden animation of a dancing set of numbers 1 through 3, representing Lotus 1 - 2 - 3, which was then crushed by an Excel logo.
With version 5.0, Excel has included Visual Basic for Applications (VBA), a programming language based on Visual Basic which adds the ability to automate tasks in Excel and to provide user - defined functions (UDF) for use in worksheets. VBA is a powerful addition to the application and includes a fully featured integrated development environment (IDE). Macro recording can produce VBA code replicating user actions, thus allowing simple automation of regular tasks. VBA allows the creation of forms and in ‐ worksheet controls to communicate with the user. The language supports use (but not creation) of ActiveX (COM) DLL 's; later versions add support for class modules allowing the use of basic object - oriented programming techniques.
The automation functionality provided by VBA made Excel a target for macro viruses. This caused serious problems until antivirus products began to detect these viruses. Microsoft belatedly took steps to prevent the misuse by adding the ability to disable macros completely, to enable macros when opening a workbook or to trust all macros signed using a trusted certificate.
Versions 5.0 to 9.0 of Excel contain various Easter eggs, including a "Hall of Tortured Souls '', although since version 10 Microsoft has taken measures to eliminate such undocumented features from their products.
Released in 1995 with Microsoft Office for Windows 95, this is the first major version after Excel 5.0, as there is no Excel 6.0.
Internal rewrite to 32 - bits. Almost no external changes, but faster and more stable.
Included in Office 97 (for x86 and Alpha). This was a major upgrade that introduced the paper clip office assistant and featured standard VBA used instead of internal Excel Basic. It introduced the now - removed Natural Language labels.
This version of Excel includes a flight simulator as an Easter Egg.
Included in Office 2000. This was a minor upgrade, but introduced the upgrade to the clipboard where it can hold multiple objects at once. The Office Assistant, whose frequent unsolicited appearance in Excel 97 had annoyed many users, became less intrusive.
Included in Office XP. Very minor enhancements.
Included in Office 2003. Minor enhancements, most significant being the new Tables.
Included in Office 2007. This release was a major upgrade from the previous version. Similar to other updated Office products, Excel in 2007 used the new Ribbon menu system. This was different from what users were used to, and was met with mixed reactions. One study reported fairly good acceptance by users except highly experienced users and users of word processing applications with a classical WIMP interface, but was less convinced in terms of efficiency and organisation. However, an online survey reported that a majority of respondents had a negative opinion of the change, with advanced users being "somewhat more negative '' than intermediate users, and users reporting a self - estimated reduction in productivity.
Added functionality included the SmartArt set of editable business diagrams. Also added was an improved management of named variables through the Name Manager, and much improved flexibility in formatting graphs, which allow (x, y) coordinate labeling and lines of arbitrary weight. Several improvements to pivot tables were introduced.
Also like other office products, the Office Open XML file formats were introduced, including. xlsm for a workbook with macros and. xlsx for a workbook without macros.
Specifically, many of the size limitations of previous versions were greatly increased. To illustrate, the number of rows was now 1,048,576 (2^20) and the number of columns was 16,384 (2^14; the far-right column is XFD). This changes what is a valid A1 reference versus a named range. This version made more extensive use of multiple cores for the calculation of spreadsheets; however, VBA macros are not handled in parallel and XLL add-ins were only executed in parallel if they were thread-safe and this was indicated at registration.
Included in Office 2010, this is the next major version after v12. 0, as version number 13 was skipped.
Minor enhancements and 64 - bit support, including the following:
Included in Office 2013, along with a lot of new tools included in this release:
Included in Office 2016, along with a lot of new tools included in this release:
Office 2016 for Mac brings the Mac version much closer to parity with its Windows cousin, harmonising many of the reporting and high - level developer functions, while bringing the ribbon and styling into line with its PC counterpart.
Excel offers many user interface tweaks over the earliest electronic spreadsheets; however, the essence remains the same as in the original spreadsheet software, VisiCalc: the program displays cells organized in rows and columns, and each cell may contain data or a formula, with relative or absolute references to other cells.
Excel 2.0 for Windows, which was modeled after its Mac GUI - based counterpart, indirectly expanded the installed base of the then - nascent Windows environment. Excel 2.0 was released a month before Windows 2.0, and the installed base of Windows was so low at that point in 1987 that Microsoft had to bundle a run - time version of Windows 1.0 with Excel 2.0. Unlike Microsoft Word, there never was a DOS version of Excel.
Excel became the first spreadsheet to allow the user to define the appearance of spreadsheets (fonts, character attributes and cell appearance). It also introduced intelligent cell recomputation, where only cells dependent on the cell being modified are updated (previous spreadsheet programs recomputed everything all the time or waited for a specific user command). Excel introduced auto - fill, the ability to drag and expand the selection box to automatically copy cell or row contents to adjacent cells or rows, adjusting the copies intelligently by automatically incrementing cell references or contents. Excel also introduced extensive graphing capabilities.
Because Excel is widely used, it has been attacked by hackers. While Excel is not directly exposed to the Internet, if an attacker can get a victim to open a file in Excel, and there is an appropriate security bug in Excel, then the attacker can get control of the victim's computer. UK's GCHQ has a tool named TORNADO ALLEY for this purpose.
|
the best that we can do is fall in love | Arthur 's Theme (Best That You Can Do) - wikipedia
"Arthur 's Theme (Best That You Can Do) '' is a song performed and co-written by American singer - songwriter Christopher Cross, which was the main theme for the 1981 film Arthur starring Dudley Moore and Liza Minnelli. The song won the Oscar for Best Original Song in 1981. In the US, it reached number one on the Billboard Hot 100 and on the Hot Adult Contemporary charts during October 1981, remaining at the top on the Hot 100 for three consecutive weeks. Overseas, it also went to number one on the VG - lista chart in Norway, and was a top ten hit in several other countries. The song became the second and last American number one hit by Christopher Cross. It was included as a bonus track only on the CD & Cassette versions of his second album Another Page, released in 1983.
The B-side of the record, "Minstrel Gigolo, '' was the same song used on the back of Cross's debut single, "Ride Like the Wind. ''
Indie pop band Fitz and the Tantrums recorded a cover of this song for the soundtrack to the 2011 remake of the film.
The song was written in collaboration between Cross, pop music composer Burt Bacharach, and Bacharach 's frequent writing partner Carole Bayer Sager. A fourth writing credit went to Minnelli 's ex-husband and Australian songwriter Peter Allen, also a frequent collaborator with Bayer Sager: the line "When you get caught between the moon and New York City '' from the chorus was taken from an unreleased song Allen and Bayer Sager had previously written together. Allen came up with the line while his plane was in a holding pattern during a night arrival at John F. Kennedy International Airport.
The song won the 1981 Academy Award for Best Original Song, and the Golden Globe Award for Best Original Song. In 2004 it finished at # 79 in AFI 's 100 Years... 100 Songs survey of the top tunes in American cinema. In 2008 Barry Manilow released a cover version.
The music video consisted of two acts, edited together with fade-outs: in one, Christopher Cross performs the song with studio musicians in a recording studio, while the other depicts the story the song illustrates.
|
who has won the 2017 laureus world sportsman of the year and comeback of the year awards | Laureus World Sports Awards - Wikipedia
The Laureus World Sports Awards is an annual award ceremony honouring individuals and teams from the world of sports along with sporting achievements throughout the year. It was established in 1999 by Laureus Sport for Good Foundation founding patrons Daimler and Richemont and supported by its global partners Mercedes - Benz and IWC Schaffhausen. The awards support the work of Laureus Sport for Good, which supports over 100 community projects in around 40 countries. These projects aim to use the power of sport to end violence, discrimination and disadvantage, and prove that sport has the power to change the world. The name "Laureus '' is derived from the Greek word for laurel, considered a traditional symbol of victory in athletics.
The first ceremony was held on 25 May 2000 in Monte Carlo at which Nelson Mandela gave the keynote speech. As of 2018, awards are made annually in eight categories, with a number of discretionary categories irregularly recognised. The recipient of each award is presented with a Laureus statuette, created by Cartier, at an annual ceremony held in various locations around the world. As of 2018, the ceremonies have been held in eleven cities around the world, and are broadcast in at least 160 countries.
Swiss tennis player Roger Federer holds the record for the most awards with six. A number of awards have been rescinded, namely those presented to American cyclist Lance Armstrong, American sprinter Marion Jones and Canadian amputee sprinter Earle Connor, all of whom were subsequently found to have doped.
South African businessman Johann Rupert, chairman of Richemont, proposed that an organisation be created "based on the principle that sport can bridge the gaps in society and change the way people look at the world. '' The organisation, established in 1998 by a partnership of Richemont and Daimler, became known as "Laureus '', its name being derived from the Greek word for laurel, considered a traditional symbol of victory in athletics. The first Laureus World Sports Awards ceremony was held two years later, at which the patron, Nelson Mandela, delivered a speech which Edwin Moses has described as "iconic ''.
Awards were made in seven regular categories and two discretionary categories at the inaugural ceremony, hosted by the American actors Jeff Bridges and Dylan McDermott. Two of those awards would later be rescinded: both the American cyclist Lance Armstrong and the American track athlete Marion Jones were found to have used performance - enhancing drugs and had their accolades removed. The American amputee sprinter Earle Connor 's Laureus World Sportsperson of the Year with a Disability Award, which he won in 2004, was also later rescinded.
The awards are considered highly prestigious and are frequently referred to as the sporting equivalent of an "Oscar ''. Despite this, the awards have come in for some criticism, particularly in the manner in which athletes are selected for inclusion.
In order to determine the winners of the Awards, the Laureus Nominations Panel, composed of professional sports editors, writers and broadcasters from more than 100 countries, votes to create a shortlist of six nominations in five categories:
The nominees of two categories are chosen by specialist panels:
The Laureus World Sports Academy is an association of 65 retired sportspeople who volunteer their time to support the work of the Laureus Sport for Good Foundation. They also vote each year to decide the winners of the Laureus World Sports Awards. As of 2018, the chairman of the Academy is New Zealand former rugby player Sean Fitzpatrick. The members of the Academy select the winners by voting in a secret ballot.
One category is voted for by the public:
The Academy also makes discretionary awards, including:
The Laureus World Sports Awards ceremony is held annually at venues around the world. The inaugural ceremony took place at the Sporting Club in Monaco on 25 May 2000. As of 2018, the ceremonies have been held in eleven cities around the world, and are broadcast in at least 160 countries. Each Laureus World Sports Award winner receives a Cartier Laureus statuette which features a "representation of the striving human form ''. The award weighs approximately 2.5 kilograms (5.5 lb) (with 670 grams (24 oz) of solid silver and a 650 - gram (23 oz) gold - finish base) and is 30 centimetres (12 in) tall.
Prior to 2007, this award was called Newcomer of the Year.
Prior to 2007, this award was called Alternative Sportsperson of the Year.
The Best Sporting Moment Award, inaugurated in 2017, and voted for by the public, was won by the FC Barcelona under - 12 (Infantil - B) side for their sportsmanship in consoling a defeated opposition team. The 2018 award was won by fans of the Iowa Hawkeyes football team, who at the end of the first quarter of each home game turn toward the children 's hospital that overlooks the playing field and wave to patients watching the game.
Since 2000, the Laureus World Sports Awards have included a number of accolades given by the Academy at their discretion. At the first ceremony in 2000, Brazilian footballer Pelé became the first recipient of the Lifetime Achievement Award, while American Eunice Kennedy Shriver, founder of the Special Olympics was presented with the inaugural Laureus Sport for Good Award. The first Spirit of Sport award was presented in 2005 to the Boston Red Sox who had won the World Series for the first time in 86 years. In 2013, American swimmer Michael Phelps became the first recipient of the Exceptional Achievement Award. As of 2018, Chinese tennis player Li Na (2015) and Italian footballer Francesco Totti (2018) are the only other people to be honoured with the award. In 2017, the Refugee Olympic Team, comprising ten athletes from Syria, Congo, Ethiopia and South Sudan, was awarded the first Sporting Inspiration Award. The following year, the award was presented to the American footballer J.J. Watt whose "exceptional humanitarian efforts '' raised more than $37 million for those impacted by Hurricane Harvey.
|
who played davenport in last of the summer wine | List of Last of the Summer Wine characters - Wikipedia
The following is a list of characters in the BBC sitcom Last of the Summer Wine. The series focused primarily on a trio of old men and their interaction with other characters in the town. Due to the longevity of the series it was often necessary to replace key characters due to an actor 's death, illness, or unavailability for other reasons. Many characters were first seen in "one - off '' appearances and were popular enough or felt to have enough potential for them to be brought back as regulars, in some instances replacing previous members of the cast.
(1973 -- 1975) The first "third man '', and the most childishly argumentative and snobbish, Blamire was the contrast to Compo. Blamire was fired up by displays of youthful enthusiasm, energetic gusto, or any sign of the British spirit. He served as a corporal in the British Army in the Royal Signals regiment during "The Great Fight for Freedom '' as a "supply wallah '' (a storeman) in India and retains his military bearing. He was a Tory and a self - important know - it - all with upper - class aspirations, who often dissociated himself from the other two, especially Compo, as he considered himself superior to them. Because of his sophisticated interests and insistence on table manners, Compo liked to refer to him as a "poof '' (in turn, Cyril would often use insults such as "grotty little herbert '' to Compo). Cyril would often reprimand Compo whenever he addressed him by his given name, as he preferred the "more rounded tone of Mr. Blamire '' and would say that Compo had to touch his "tatty cap '' whenever he did so. Out of all of the third men, Blamire tolerated Compo 's antics the least (though sometimes when he got caught up in them he would join in, such as backchatting Miss Probert on one occasion). In spite of this, Compo and Blamire were close, as shown by Compo 's misery in the episodes immediately after he left. Despite his snobby nature Blamire had more commonsense than most of his successors. Bates left the cast in 1975 due to cancer that left him too ill to perform in any long - term projects aside from It Ai n't Half Hot Mum. So Blamire was written out of the series; it was said that he had left to get married. The last we hear of him is a very organised letter, instructing Clegg and Compo to meet their old classmate, Foggy Dewhurst.
(1976 -- 1985, 1990 -- 1997) The successor to Blamire, Foggy was a former soldier who liked to boast of his military exploits in Burma during the Second World War. In fact, he had been a signwriter, and unlike Blamire 's, many of his old military stories were untrue.
Although he considered himself very regimental and heroic, when confronted Foggy was generally meek and incompetent. Like the previous third man -- and all subsequent third men -- he considered himself the leader of the trio, and frequently took charge of Compo and Clegg. Foggy was infamous for trying to figure out a solution to the trio 's everyday problems, only to make them much worse. In earlier years Foggy wore a scarf with regimental colours on it. When Wilde left the series in 1985 to star in his own sitcom and to pursue other TV work, it was explained that Foggy had moved to Bridlington to take over his family 's egg - painting business.
Returning in 1990 after the sudden departure of Michael Aldridge, he claimed he had tired of egg painting, and wanted to return to his old life. A regular skit from this period included Foggy crossing paths with a stranger and then rambling about his supposed military career, typically boring each stranger to death. At other times he would try and recreate scenarios from his military days which also confused and bored passing strangers. He would often explain that he was a trained killer, which would inevitably lead to him getting into trouble and on the odd occasion being arrested (stupidly, he could never understand why people always found this explanation strange). During his second stint Foggy was shown to have mellowed somewhat and he did not argue with Compo as much as he had done previously. In 1997, when Wilde 's illness stopped him taking part, he was written out of the series in the Special, "There Goes the Groom '', in which the character was only seen in brief, non-face shots, played by a double; this episode also introduced his successor, Truly.
An unconscious, hung - over Foggy was swept off to Blackpool by the local postmistress. There he inadvertently proposed to her in a verbal slip - up over the wedding rings of which he had taken charge "for safe keeping '' (out of the dubious care of Best Man, Barry). But he must at least have liked her, as he was never heard from again after that. Foggy 's real first name was revealed to be Walter (with the middle initial "C ''); "Foggy '' is a nickname, derived from the traditional song "The Foggy Foggy Dew ''; perhaps also because, in his earlier episodes, he would occasionally "blank out '' everything around him to help him concentrate, particularly when he was thinking up new ideas or finding solutions to problems. This is particularly noticeable in the episode "The Man from Oswestry ''. In one of his earlier episodes, his name is hinted to be Oliver when Clegg finds one of his old army trunks with the initials ' COD ' (the "C '' standing for Corporal, as he had been a corporal in the army). Due to his dislike of Compo 's attire and nature, he would often voice his disgust to Clegg, and often addressed Compo as "him '' or "that man ''.
The episode in which Foggy is written out of the series, ' There Goes The Groom ', does not feature Brian Wilde and only features Foggy in brief sequences in shots where he is seen from behind or in far - off shots. Colin Harris acted as Foggy 's body double for these shots.
(1986 -- 1990) The first successor to Foggy. A snobbish inventor, Edie 's and Ros 's brother Seymour always felt it was his duty to educate the masses, and in particular, Compo and Clegg, to whom he was reintroduced by his brother - in - law, Wesley Pegden (who often called him a pillock), shortly before the wedding of Wesley 's daughter. Seymour went to school with Clegg and Compo but lost touch when he went to grammar school (in Series 10, episode 5, ' Downhill Racer ', Nora Batty undermines Edie 's bragging about Seymour 's intellect by pointing out that their grandmother was on the Education Committee). Whereas Cyril and Foggy tried to solve the problems of the residents of Holmfirth, when Seymour was around he always liked to invent, but the resulting inventions invariably led to disaster -- especially for Compo, who was always the reluctant test subject and called him a twit whenever anything went disastrously wrong. Despite this, he was well - liked by the other two and was more willing to play along with their childish antics than his predecessors. He did have occasional bouts of bravery: in series 9, episode 6 ("The Ice - Cream Man Cometh '') he contradicted Pearl, Ivy and Nora Batty in one sitting for which Clegg, Compo and a random passer - by heartily congratulated him.
Seymour usually blamed the failure of his inventions on divine punishment for his once having had an affair with a barmaid. Seymour 's house, outside the town, was modified into a laboratory, filled with new devices and contraptions that seldom, if ever, worked properly. His sister Edie always spoke very highly of him and how he was ' educated ', refusing to take into account his continual failed inventions (though she would secretly be embarrassed by his involvement in the antics of the other two). Because Seymour 's inventions were always built poorly he would normally get Wesley to fix them (or he would just get Wesley to build them in the first place, much to the latter 's annoyance). Seymour had previously been the headmaster of a school, although it is not entirely clear how successful he was in running it. When Compo and Clegg were in his home Seymour would often put on his old headmaster 's gown and treat the two of them like schoolchildren when trying to explain a new invention. He sometimes appeared to take an unhealthy delight in corporal punishment, and was appalled to hear that it has been prohibited.
When Aldridge left the series in 1990 for personal reasons, Seymour was last seen leaving on a bus to take up a new job as interim headmaster at a private school -- just as previous third man Foggy returned.
There were allegedly plans for Seymour to make a comeback, but Michael Aldridge died in 1994. The character was never alluded to again.
(1997 -- 2010) The second (and last) successor to Foggy. A retired policeman, Truly was initially played with a pompous self - importance in all things criminal. However, this aspect of the character was fairly quickly softened, and Truly is more relaxed and fun - loving, and can be more of an equal match at the local pub than his predecessors as third man. He can also be a bit more devious with practical jokes or witty schemes. Likewise, he can be equally sly in getting people out of a scrape or just helping out a friend. He is divorced, and makes disparaging comments about "the former Mrs. Truelove '' (who evidently feels the same way about him, judging by the reaction of her new husband, who appears in one episode, to Truly). The former Mrs Truelove is an unseen character. Because of his previous job in the police, Truly refers to himself as "Truly of the Yard '', he was also once misheard and thought to have said he was "Trudy '' (of the Yard). In early appearances he was initially shown as snobbish and pompous, like his predecessors (sometimes taking out his police notebook in unnecessary situations) but he gradually became a more likeable character and made less snide remarks over Compo 's attire. He also appeared to be more respected than his predecessors by the other regular characters such as Wesley and Howard as well as the local ladies. In the two final series he is demoted to a secondary character along with Norman Clegg, so his role as third man was filled by Hobbo.
(1999, 2000, 2001 -- 2006) Billy Hardcastle was first introduced in the 1999 series as a guest star and also appeared in the 2000 New Year 's special and a guest role in the 2000 series. Because of his popularity, he was made a regular character in the 2001 series.
Billy believes he is a direct descendant of Robin Hood. His first appearance on the show showed him attempting to recruit a band of Merry Men to go with him while he robs from the rich to give to the poor. At the end of the 21st series, Billy moves next door to Truly and is teamed as the third member of the trio. When Billy joined with Clegg and Truly, much of the humour Compo previously brought to the series returned in Billy 's childlike demeanour, although an element of physical humour was still lacking in the series. On his first appearance, Nora was shown to be attracted to him in his Robin Hood costume, which made Compo so jealous that he decided to dress up as Robin Hood himself. Much of his dialogue bemoaned the domestic presence of "the wife '' or "the wife 's sister '' (two other characters who are never seen, only referred to). Billy was last seen at the end of the 27th series following the departure of Keith Clifford from the show, and the character was never alluded to again.
(2003 -- 2010) Alvin Smedley was introduced in the 2003 series as Nora Batty 's new next door neighbour following the death of Compo. When Tom 's former acquaintance, Mrs. Avery, gives up the lease she owns on Compo 's old house, Alvin purchases it. Although he publicly claims to hate Nora Batty, he feels it is his duty to try to bring some joy to her life, often in the form of practical jokes similar to those Compo once played on her. In the 2005 series he joined the main trio, making them a quartet (largely to compensate for Clegg 's decreasing role), but after the 2006 series, following Billy Hardcastle 's departure, the quartet once again became a trio, although in the 2007 and 2008 series he was mostly teamed up with Entwistle. His arrival in the main trio brought a sense of physical humour that had been missing since Compo 's death. Despite his childlike personality, he was shown to be more level - headed than his predecessors. In the final two series he and Entwistle teamed up with Hobbo, thus making a new trio.
(2002 -- 2010) Electrician and fortune - teller from the land of eastern wisdom, Hull. His original surname was McIntyre, but he changed it so that people would n't mistake him for a Scotsman. When Wesley died, Entwistle took over his job of shuttling the others across the countryside, in a battered red Toyota Hilux pick - up truck, and occasionally constructing the various contraptions the main trio produce. He also seemed to be taking over a character version of Auntie Wainwright, although he mainly sold second - hand washing machines.
Following the departure of Billy Hardcastle in series 28, Entwistle was often paired with Alvin, with many stories revolving around their dealings with Howard or Barry. During this period his role increased and he often hung around with the main trio (sometimes to compensate for Clegg 's decreasing role.) In series 30 and 31, Entwistle became the second man (officially taking over from Clegg) in a new trio when Hobbo arrived and recruited Alvin and Entwistle to form a band of volunteers to respond to emergencies in the village.
(2008 -- 2010) Hobbo is a former milkman with ties to MI5 who was first introduced in the 2008 New Years special, to set up his role in the 30th series. He is Clegg 's new next door neighbour. Upon first arriving in the village, Hobbo recruits Alvin and Entwistle to form a small band of volunteers who will react to any emergency that arises in the village, thus forming a new trio (with Hobbo taking Truly 's role in the trio). Hobbo is incredibly cautious, and always on the lookout for enemy attack. He fondly remembers his time spent with MI5, when he used to leap from aeroplanes ("Holding crates of milk? '' asks Entwistle) and dive for cover from enemy fire. Throughout his time on the show Hobbo is convinced that Nelly is his mother and he frequently bothers her (or uses other people) for attention, much to her annoyance. Clegg and Truly recall that Hobbo was never much of a milkman but was exemplary at needlework. He was also one of the last new characters to be introduced to the series.
(1975, 1976, 1977 -- 1987) Nora 's perennially shell - shocked husband and Compo 's next - door neighbour, Wally Batty was a short quiet man, kept on a short leash by his wife. His relationship with Nora stood in stark contrast to Compo 's unrequited lust after her; in fact, he often welcomed the prospect of Compo running off with her. Initially mentioned but not seen, he first appeared on screen in 1975. He was generally seen doing chores or stealing a quick moment away from Nora at the pub. Despite being dominated by his wife, Wally had an acerbic wit (like that of Norman Clegg) and was often quick to reply with a sharp - tongued comment when Nora told him off (though this often caused more trouble for him from Nora). Wally had a passion for racing pigeons and owned a motorbike and sidecar, occasionally taking Nora for a spin around the countryside. When Joe Gladwin died in 1987, Wally died off screen, but he is still occasionally mentioned. (Note: in the pilot episode of the series, which was part of the Comedy Playhouse strand, Nora referred to her unseen husband as Harold, not Wally.) Joe Gladwin last appeared in series 9. He died just days before the broadcast of his final appearance.
(2008 -- 2010) Stella is Nora 's sister. She first appeared in the 2008 New Year special, I Was A Hitman for Primrose Dairies, to compensate for the absence of actress Kathy Staff, who was unable to continue her role as Nora owing to ill health and her subsequent death.
With Nora having departed for Australia, Stella moved in to house - sit for her sister, and became a new member of the elder women 's talking circle. She is a former pub landlady and appears to take a more free - spirited approach to life than Nora, as evidenced by her brighter wardrobe and hair. The storyline in her first episode saw her trying to give up smoking, and her yearning for a cigarette has continued unabated into subsequent episodes. Despite this, she was just as annoyed as Nora by the pranks that Alvin played on her.
Stella has four siblings: Madge, Billy, Clara and Nora (See Nora Batty).
In the episode "Get Out of That, Then '' Young wore a brown wig and played the part of Florrie, wife of Barry 's cousin Lenny (Bobby Ball).
(1973 -- 2010) joint owner of the tea - shop with husband Sid, with whom she would often have blazing rows in the kitchen, until his death. She later ran it alone. Physically formidable, she viciously scolded anyone who dared misbehave or criticize the food, throwing them out of the cafe or hitting them on the head with a tray. Generally the wisest and most level - headed of the show 's female social circle, she was also on occasion a target of the (unwanted) affection of Compo, who often said that if it was n't for Nora Batty, he 'd be all over her. This regularly resulted in Compo, along with the others (sometimes including Sid), being thrown out or being on the receiving end of her anger in other ways. In earlier episodes she was shown to tolerate the main trio more when they visited the cafe. Eventually she became more strict with them (especially after Sid 's death), although after Compo 's death she became kinder to the trio. When taking into account Kathy Staff 's brief exit from the show in 2001 and later absence from series 30 to series 31 (see above), Jane Freeman as Ivy is the only character other than Clegg (Peter Sallis) to have been present throughout the course of the series (274 episodes, although Clegg is the only one to have appeared in every single episode). In some episodes she seemed to have a sort of rivalry with Nora, participating in what Nora once called a "slanging match '': Ivy would often criticize Nora 's taste in hats, and Nora in turn once said Ivy 's pastry was n't light enough, which succeeded in bringing Ivy to the verge of tears. On the other hand, Nora and Ivy were normally best friends, like their husbands, and the two were regularly seen having a coffee together, chatting about the downsides of men and life in general. Nora would also sometimes help Ivy and Sid with the café (on one occasion Wally was also shown helping them).
(1973 -- 1983), bluff tea - shop owner, who featured prominently for the first ten years, before Comer 's death in 1984. Ivy remembers him fondly, and often mentions him in conversation. Sid was one of the few characters who actually seemed to enjoy getting involved in the misadventures of the three central characters, and often saw them as an excuse to get out of the cafe for a few hours. However, occasionally he was shown to be extremely irritated by some of their schemes and antics (most notably in the episode "Getting on Sidney 's wire '' where he gets angry with Foggy for ruining his attempts to fit a new doorbell to the cafe and subsequently throws him out). Like Wally Batty he often welcomed Compo 's affection for his wife. In one episode he remarks that he "ca n't help admiring (Compo 's) nerve ''.
Ivy and Sid often shouted and argued with each other (and Ivy was never shy about bringing up Sid 's infidelity), but, as with many of the show 's couples, there was little doubt that they loved each other. Throughout his time in the series Sid and Wally were shown to be best friends and the two of them often joined each other in trying to sneak away from their wives to the pub or any other activity (often involving the main trio). Another long running gag during his time on the show were ongoing rumours of his supposed affair with a local unseen bus conductress (likely to be an early structure for Howard and Marina). Ivy was aware of this and often accused him of being unfaithful. Although Sid once admitted to the trio he was friends with the conductress, he always flatly denied the rumours and despite the odd verbal hint very little evidence of this was ever seen onscreen.
For John Comer 's last ever appearance, in the 1983 feature - length Christmas special, ' Getting Sam Home ', illness caused by cancer affected his speech, and so his lines were dubbed over by another actor, Tony Melody. Comer died two months later in February 1984. Sid 's death was not referred to until "Uncle of the Bride '' on New Year 's Day 1986. It is hinted (after his death) by Ivy that Sid was a supporter of Manchester United (though this was not mentioned when he was alive).
In the 2000 episode "Just a Small Funeral '' as Ivy is getting ready for Compo 's funeral, she finds a photo of Sid in her handbag.
(1984, 1985 -- 1987) Sid and Ivy 's giant, lumbering and very strong nephew, who looked like a younger version of his own late uncle.
The character was first introduced in 1984, following the death of John Comer (who played Sid in the series). Crusher helped his widowed auntie Ivy out in the cafe for 3½ years. His real name was Milburn, but he insisted on being called "Crusher ''. He was influenced by the Rock and Rollers of the 1950s and was into heavy metal music. Well - meaning but not overly bright, he was rather easily led. Crusher was first seen in the touring stage show around 1984 before being introduced into the 8th series. In the 1988 Christmas Special "Crums '' he was shown to have a girlfriend named Fran (played by Yvette Fielding) who, according to Ivy, was as daft as he was (though Crusher himself did not appear in this episode, as Jonathan Linsley had left the show by then). In his early episodes, he seemed to have a crush on Marina, much to Ivy 's displeasure. This stemmed from the fact that Ivy told him to find "some poor lass that 's had a hard time ''.
However Crusher did not return in the tenth series, as Jonathan Linsley left the show to work on other TV projects: most of the character 's humour came from the contrast between his menacing size and his total harmlessness. Following his departure in early 1988 (after the 1987 Christmas special), Ivy ran the cafe alone (with occasional help from Nora Batty).
(Tony Haygarth, 1973) A forgotten character, Chip was Compo 's nephew, seen in the first series episode 3, ' Pate and Chips '. Chip and his wife Connie (played by Margaret Nolan), with their children and dog, take the Yorkshire trio to a large country home for a ' bit of culture ', with a cramped van for transport (much to Cyril 's disgust). When they arrive at the country home, Cyril points out that Chip has n't renewed his Road Fund License since 1967.
(Paul Luty, 1976) Big Malcolm is revealed in "The Man From Oswestry '' to be Compo 's cousin; he appeared in just two 1976 episodes. Within hours of his arrival in "The Man from Oswestry '', Foggy is unfortunate enough to let Big Malcolm overhear him in a pub, saying he will fight to the death anyone who mocks his regimental scarf. Foggy is taken outdoors by Big Malcolm and returns the worse for wear. Several episodes later, Malcolm is one of the family guests in "Going to Gordon 's Wedding ''.
(Philip Jackson, 1976) An oft - forgotten character, Gordon was Compo 's gormless fishing - obsessed nephew, and appeared in a few 1976 episodes, joining the trio on a Bank Holiday trip to Scarborough. He became friendly with a young woman named Josie whilst in Scarborough, and married her in a later episode. In some ways he was a prototype of Barry, who was introduced in the mid-1980s. When he is married, it is revealed he has a sister, Julie. In the same episode his mother states to him that he 's "queer '', much to his annoyance, as he reveals that he knows there 's been some rumors to that effect.
(Liz Goulding, 1976) Another forgotten character, Josie met Compo 's nephew Gordon in the trio 's Bank Holiday trip to Scarborough. She and Gordon go back to Gordon 's room at the Guest House at which they are staying, and start a game of chess. In a later episode, she and Gordon marry, but as the wedding turns more and more disastrous, she turns more and more into her rather foreboding and complaining mother, Madge (Joan Scott). It is unknown what became of her and Gordon after the wedding.
(Margaret Burton, 1976) Compo 's sister - in - law, and mother of Gordon and his sister Julie. She is flamboyant in her dress, and screeches instead of shouts. She is the object of both Big Malcolm 's and Eric 's affections, and hits both Malcolm and Eric with her handbag when they attempt to drag her to two different seats at once. It is revealed that her husband, Compo 's brother, left her with Julie and Gordon a few years back. Like Gordon and Josie, it is unknown what happened to her after Gordon 's wedding.
(Barry Hart, 1976) Eric 's exact relationship to Compo is unknown, and he appears in only one episode, but he is shown to have feelings for Gordon 's mother Dolly and to drink a lot, and he almost gets into a fight with Big Malcolm. In his only appearance, he is a guest at Gordon 's wedding. Eric is also referred to, but not seen, in the first series episode "Short Back and Palais Glide ''. When the trio are in the police station whilst looking for Mr Wainwright, the desk sergeant asks Compo, "How 's your Eric? ''
(Tom Owen, 2000 -- 2010) Compo 's long - lost son, arriving just after his father 's death, Tom is played by Bill Owen 's real - life son. Tom is a layabout like Compo but seems a bit more enterprising in his attempts to maintain his slothful lifestyle. Originally it was planned that Tom would fill the gap in the three - man line - up left by his father, but it was soon felt that this line - up did not quite work. For most of his time in the series, he was paired with Smiler working for Auntie Wainwright, and also, in one episode, goes to live with Smiler (though it 's not clear if this continued). Of the duo, he designates himself the ' leader ' and the planner (often leaving Smiler to struggle with Auntie Wainwright 's antiquated hand - cart while he strolls on ahead), although in truth, he is not particularly bright himself. After Smiler was written out of the series, Tom continued to work for Aunty Wainwright until the conclusion of the show 's run. Clegg and Truly often take advantage of his desire to live up to his father 's reputation in order to convince him to do rather stupid things. After the death of Compo, Nora feels somewhat maternal towards Tom, and often showers him with affection -- much to the embarrassment of Tom. He also has a scruffy puppet dog called Waldo which he aspires to use in an unconvincing ventriloquist act. When not working for Auntie Wainwright, Tom can usually be found in his allotment shed, avoiding the repo man (he rarely, if ever used his allotment to grow vegetables). When he first arrived in the series, Tom also had a tatty old yellow Renault van, but this was seen in only a couple of his early appearances. (Note: For some years before joining the series as Tom Simmonite, Tom Owen sometimes appeared in small walk - on parts on the show (for instance appearing on the 1991 Christmas special), sometimes with no dialogue, and not always credited.)
(Julie T. Wallace, 2000 -- 2001) Tom 's live - in "associate ''; much larger than him, and something of a battle - axe, yet rather easily manipulated. Although Tom always insisted that she was merely an acquaintance, Mrs Avery always wanted more, and was under the impression that Tom had promised to marry her. After a brief spell of living in the pair 's bus, they moved into the deceased Compo 's home, next - door to Nora Batty. During her stay at Compo 's home, she began a rivalry with Nora, often copying each other (cleaning their windows or vacuuming their rugs). This was not to last; she threw him out and disappeared from the series after only a year on the show.
(Helen Turaya, 2000) Babs, Mrs. Avery 's niece, arrived along with Tom Simmonite and was involved in a couple of schemes. The character was so unpopular that she was axed after just three episodes without explanation.
Seymour Utterthwaite was the third man of the trio from 1986 to 1990. He left the series in 1990 when Foggy Dewhurst returned to the show, but his family had gained so much popularity themselves that they remained on the show.
(1982, 1984, 1985 -- 2002) Edie 's husband, who spent all his time in his workshop / garage.
In one of the most popular and often reused scenes in the series, Edie would call Wesley in from his garage (after much shouting, first in gentle ' posh ' tones, before ending up in harsh yelling) and would lay down a trail of newspaper for him to stand on -- often putting one on the wall just in time as he leaned against it. Wesley generally kept out of Edie 's way in his garage, restoring old motors.
The character first appeared in the 1982 episode ' Car and Garter ' in a cameo role. The writer and producers liked him so much they brought him back for the 1984 Christmas Special ' The Loxley Lozenge ' and again twice in the 1985 episode ' Who 's Looking After The Cafe Then? '. He reappeared in the 1985 feature - length Christmas special ' Uncle Of The Bride ', in which he was established as Edie 's husband, at which point both became regulars from this special thereafter.
Mechanic Wesley was often called upon by the main trio to construct the many bizarre creations they came up with, and to drive them into the hills for test runs. One recurring theme is the occasional explosion caused by projects in Wesley 's shed accompanied by billows of white smoke. On some occasions, Wesley 's hat is also smouldering and smoking. In his early years in the series, Wesley seemed to have a love of loud rock music, which led to the trio desperately trying to call over it to get his attention on a number of occasions. Though he was clearly a very skilled builder and mechanic, much of his projects were poorly and hastily built and he would get easily embarrassed and annoyed by anyone managing to fix something he ca n't (notably, Compo once managed to rewire Edie 's car correctly, much to Wesley 's annoyance). Unlike Edie, Wesley did not speak highly of Seymour (Wesley calling him a pillock) and was often annoyed by Seymour 's requests to construct the latter 's ridiculous inventions as well as Seymour 's pompous school headmaster nature. His attitude towards Foggy was similar to that of Seymour but during later years when Truly was introduced on the show he was shown to be more willing to help the trio out in their schemes. Sometimes Wesley would be extremely secretive about his inventions (largely down to his fear of other people copying them) but they were often exposed by the main trio or Edie and would go to extreme lengths to hide what he was building (on one occasion he kept a guard dog in his shed that chased Barry away).
When Gordon Wharmby died in 2002, the character is said to have also died. Although he was not formally written out, subsequent references to him were in the past tense.
(1986, 1987, 1988 -- 2003), a highly opinionated older woman, sister of Seymour Utterthwaite (who called her Edith) and Wesley 's wife, she was the house - proud hostess of the women 's coffee mornings. She was introduced, along with Seymour, daughter Glenda and son - in - law Barry, in the 1986 New Year 's Day special episode "Uncle of the Bride '' (husband Wesley had been introduced in 1982, 4 years before).
The ladies ' tea parties, where they would sit and discuss life (particularly the shortcomings of men), became a popular staple of the show from the 1990s onwards; they were usually held in Edie 's front room. Wesley restored a convertible car for her to drive, although she was a terrible driver, and was always accusing Wesley of moving things (particularly the gear lever) around. The other ladies (including Glenda) often accompanied her on the roads and as a result of Edie 's poor driving, they would be fearing for their lives. Another running gag was Edie making a big performance of locking the front door, repeatedly pushing it to check that it was locked properly. When her brother Seymour was around Edie would speak very highly of him and his inventions (refusing to count his numerous failed ones) despite the other ladies thinking he is just as daft as the rest of the trio (although when Seymour 's antics became extreme she would secretly be annoyed and embarrassed).
In later years Hird, who was still in the series at the age of 90, suffered poor health, which affected her ability to stand. To cover this, she was often seen sitting down, or, when standing, had something to hold on to (often out of camera shot). For driving and distance shots, her double, Amy Shaw, was used.
When Thora Hird died in 2003, Edie was also said to have died. As with her husband Wesley previously, it was not immediately made obvious, but later references to the character indicated that she had died. In the final three series, a framed photo of Edie can be seen on Barry and Glenda 's mantelpiece.
In one episode Barry talks about ghosts and Glenda asks if he has seen her mother. Barry 's response in the negative includes immense relief, since she scared him enough when she was alive.
For the first few series in which she appeared, Edie was extremely concerned with her reputation in the neighbourhood: whenever there was company, Edie would try to put on a posh, educated voice -- which would suddenly vanish when she was shouting for (or at) Wesley. This aspect of Edie 's character was a prototype for Hyacinth Bucket in Keeping Up Appearances (also written by Roy Clarke). Once the latter series was created, this aspect of Edie 's personality was toned down a bit (although not completely) in order to differentiate the two characters.
(Sarah Thomas: 1986, 1987, 1988 -- 1990, 1991 -- 2010) daughter of Edie and Wesley. The other women in the group consider that she is somewhat naive, despite her being middle - aged. When her mother was alive, if she attempted to join in a mature conversation, Edie would snap "Drink your coffee! '' She speaks glowingly of her husband Barry, but is often insecure and unsatisfied with him at home, often because of the pressure of her mother and other ladies in the group. She often comes to the defence of men when other women in the group speak the worst about them and does not believe that all men are evil, as they do. Likewise she is generally shown to be kinder to the main trio than the other ladies (particularly when her uncle Seymour was with them and notably in the episode "The McDonaghs of Jamieson Street '' she lends Billy a skirt after his trousers are mauled by a vicious dog). She appears, like her husband, to have a very meek demeanour, but under duress she has proven to be quite a force to be reckoned with. In the very last episode of the programme, Glenda clearly seems to have joined the bossy Yorkshire women 's brigade in her suggestions to Barry and Morton that are, in Barry 's words "not optional ''. Although the rest of the ladies (particularly Pearl) disliked the flirtatious Marina, Glenda was seen to strike up friendship with her on a number of occasions (although this role was generally taken by Miss Davenport in the later series).
(1986 -- 1990, 1996 -- 2010) meek and mild husband of Glenda. Dull and ineffectual, accountant Barry strives for adventure but seems destined for paperwork and domesticity. His one pride is his shiny new car, which he was always trying to keep away from father - in - law Wesley, who could not resist tinkering under the bonnet (although in one episode, he did completely dismantle the engine).
Barry is often trying out new hobbies in an attempt to stop his life being humdrum; and in more recent years, has made a number of attempts to fit in at a local golf club, often upsetting the golf captain "the Major ''. Though he clearly loved his wife he was afraid to kiss her in public, out of fear of being judged by the neighbours. He was also afraid of his mother in law Edie, largely because she (along with the other ladies) would often judge Barry or accuse him of being guilty. In later series Barry became more regularly involved in the schemes of the main trio and in series 28 -- 29 was often involved in schemes with Alvin, Entwistle and Howard. After being introduced in the feature - length "Uncle of the Bride '' in 1986, which centres around Barry and Glenda 's wedding, Barry was much - mentioned but not seen for around six years when Mike Grady originally left to pursue several other television projects, before returning as a regular from 1996 thereafter. He is one of the few characters to have left the series but returned in later series.
(2000, 2001 -- 2005) Edie 's and Seymour 's sister, who has always been more romantically adventurous, to Edie 's unending shame. She often speaks of past flings, frequently with married men. She was often paired with Pearl Sibshaw. Ros was last seen at the end of the 26th series following the departure of Dora Bryan owing to ill health. Her role of being paired with Pearl was replaced by June Whitfield 's character Nelly.
Before Ros actually appeared in the series, she had never been mentioned and it was not known that Edie and Seymour even had a sister.
(1985, 1986 -- 2010) Howard is the shy, beady - eyed, constantly conniving, simpering, henpecked husband of Pearl. Doubtless owing to his wife 's domineering nature, Howard often tries to escape from her. Most episodes involve Howard dating peroxide blonde, Marina, behind his wife 's back. In most episodes, Marina would simper, "Oh Howard. '', followed by Howard 's "Oh Marina. '' - sometimes the order was reversed, He is a creative but unconvincing liar. He and Pearl live next door to Clegg, and, much to the annoyance of the latter, Howard is always pestering him for aid in his various schemes to escape Pearl and be with Marina. Over the years he has come up with countless disguises, cover stories and hideaways to allow him to see Marina, all of which have ultimately been doomed or exposed by Pearl. In their earlier appearances, they were frequently shown in disguise with Howard saying, "I think we 've really cracked it this time ''. However, he tends to ignore Marina when he 's out with her, partly out of fear of his wife Pearl, and partly because he gets so deeply caught up in fabricating charades to cover up his affair. As a result, their relationship does not appear to have gone beyond hand - holding and gazing into each other 's eyes (much to the annoyance of Marina), and the occasional kiss in a field, haystack, or mobile hut somewhere, and it is hinted that if Howard ever did get the chance, he would be too cowardly to go through with it anyway. It has also been suggested that Howard loves Pearl underneath it all. In later series Howard was shown to be out of the house more regularly (despite Pearl knowing about his attempted affair with Marina) and eventually became more involved in the schemes of the main trio. Howard first appeared in the Bournemouth summer season show of the series, and was popular enough and felt to have enough potential that he was soon made a regular character. At first, he, Pearl and Marina were used semi-regularly, but as time passed and their popularity grew, they appeared in every episode (particularly after Wally Batty died). Howard and Pearl 's surname was given as Sibshaw in Roy Clarke 's novel ' The Moonbather ' in 1987, but only mentioned once in the entire TV series, in one of the last episodes, when Glenda refers to Howard as Mr. Sibshaw.
(1985, 1986 -- 2010) Howard 's wife, a bit of a shrew and always one step ahead of his crafty schemes, she is often shown to know about his (attempted) affair with Marina, but is almost gleefully obsessed with exposing Howard 's philandering and generally tormenting him. Although she has a fearsome reputation, she, like Nora, occasionally surprises Norman Clegg and others (not including Howard) with displays of kindness, especially after Compo died. She was also shocked when, after seeing Howard in the appropriate uniform, she believed he had joined the French Foreign Legion, and she outright fainted in a Christmas Special when Compo casually remarked that Howard was in Wesley 's hearse.
When she was first introduced on the show, Pearl was somewhat naive, especially towards Howard 's affair with Marina. Nora, Ivy, and Edie integrated her into the ladies ' tea group and, over time, her demeanour hardened.
(1985, 1986 -- 2010) Busty but over-age, Howard 's love interest Marina works in the local supermarket. Despite her carefree appearance, Marina is a long - suffering type, having to deal with the disapproval of the prominent village women, the indirect wrath of Pearl, and timorous and neglectful romancing by Howard. She is often thought of as a "tart '', and not without reason. She seems to have a soft spot for Clegg (often referring to him as "Norman Clegg that was '' implying that they have a past), and occasionally briefly leaves Howard for other men. In the episode "A Double For Howard '', she is also content for Eli to kiss her when he impersonates Howard. Marina works as a check - out girl at the local Co-op (although in more recent series, the store 's name has been seen as Lodges); Howard often sneaks there to pass or receive notes from her (or more often sends Norman Clegg in his place; leading on several occasions for Marina to believe mistakenly that Clegg is interested in her romantically). In A Sidecar Named Desire Clegg reveals that he was once trapped in a lift with Marina and she cuddled him for warmth, much to Howard 's ire and jealousy. Though she perceived it to be a romantic incident, it left Clegg terrified of her. Clegg always strongly denies any romantic interest in her. Marina first appeared in the spin - off 1984 Eastbourne summer season show, and soon became a regular character.
(1988, 1989, 1992 -- 2010) Howard 's aunt. A sly and grasping bric - a-brac shop owner. Whilst she and her nephew both have a general predisposition towards sneakiness, Auntie Wainwright is much more adept at applying it.
Clegg is reluctant to go into her shop, since she always sells him something he does n't want, but she usually finds ways to trick him into entering. She is extremely mean, and pretends to be cheated when she gives the slightest discount. At Compo 's funeral, she grabbed Eli by the arm and pretended to be blind in order to avoid giving money to a collection outside the church. Whenever customers entered the shop she would surprise them by talking through a loudspeaker, saying things like "Stay where you are! '' or "Do n't touch anything or you will be electrocuted ''. Though she was largely based in her usual junk shop, she was occasionally shown to own (or be the tenant of) other shops and even junkyards, much to the shock of the trio and other characters. She was also extremely security conscious (even pointing a shotgun at the trio on one occasion).
As with several other characters, she was originally seen in a "one - off '' appearance in the 1988 Christmas Special Crums. However she became so popular that she was brought back for a second appearance at Christmas 1989, eventually becoming a regular from 1992 thereafter.
She may have had a sister called Elsie -- this is the name of Howard 's mother.
Note: Auntie Wainwright is no relation to Mr Wainwright from the library. (See Below)
(Stephen Lewis: 1988, 1990, 1991 -- 2007) eternally miserable and none - too - bright comic foil, similar to Lewis ' character Inspector Cyril "Blakey '' Blake in LWT 's hit comedy On The Buses (some episodes of which he co-wrote) from 1969 to 1973. Smiler was first seen as a one - off character in 1988 's ' That Certain Smile ', in which the trio had to sneak a hospitalised Smiler 's beloved dog Bess in to see him. During his first appearance he was almost entirely referred to by everyone else as his real name "Clem ''. The character was popular enough to be brought back on a semi-regular basis, and was a regular throughout the 1990s and most of the 2000s (although his dog died between his first and second appearances). In some early appearances, he was a lollipop man, but for much of his time on the show worked for Auntie Wainwright, with whom he seems to be suffering some sort of indentured servitude. In early appearances, Smiler was also a lodger with Nora Batty, which enraged the jealous Compo. Smiler once described that working for Nora Batty was like being in the Army again, and always on Jankers. He also described it akin to jail at Stalag 14. Smiler also owned a big, but rather beaten up and poorly maintained, white convertible 1972 Chevrolet Impala, in which he sometimes drove around with Tom, and which on occasion has been used in various promotions for Auntie Wainwright. The trio would often cross paths with Smiler and use him for whatever scheme or activity they were doing (largely because of his tall height and gormless nature). Smiler was last seen in the series 28 episode "Sinclair and the Wormley Witches ''. Lewis left the show at the end of series 28 because of ill health. He was last mentioned in the series 29 episode "Of Passion and Pizza '' by Tom 's saying that Smiler had disappeared.
(Blake Butler, 1973, 1976) The rather timid head of the local library, which the trio visited a lot in the show 's early days -- Compo nicknamed him ' Old Shagnasty '. Mr Wainwright left at the same time as Mrs Partridge 's departure (see below), but was "transferred back '' to the area in the third series, featuring in two episodes where he was once again romancing his new assistant, Miss Moody. It is shown in Series 1 he, unlike Miss Probert, approves of the books with four - letter words. (Note: Mr Wainwright is not related to Auntie Wainwright.)
(Rosemary Martin, 1973) A librarian at the same library, who was engaged in an affair with Mr Wainwright which they mistakenly believed was secret. The characters never really caught on, and disappeared as the library was written out as a favourite haunt of the main trio. However, a few years later, the storyline was resurrected and occasionally used for Howard and Marina. The library was also brought back for Foggy to get thrown out of all the time. As of "Short Back and Palais Glide '', she has a twelve - year - old son.
(June Watson, 1975) One of the librarians who briefly replaced Wainwright and Partridge during the second series. Miss Probert is a radical "feminist '', who is always railing against men to the more timid Miss Jones. Miss Probert has two missions in life; one is discouraging the lending out of books she considers "filthy ''; the other is making a misandrist out of Miss Jones, in whom she seems to take a more than professional interest. Her disappearance from the series is unexplained, and it is presumed she went back to wherever she worked before.
(Janet Davies, 1975) The other librarian who replaced Mr. Wainwright and Mrs. Partridge in the second series. Miss Jones is a quiet, timid female who is overshadowed by Miss Probert. She previously worked in a children 's library, which she frequently says she wants to return to. She has a pair of pink fly - away glasses that are on a chain around her neck. She does n't like working at the Holmfirth library, because of the four - letter words. She always does what Miss Probert asks her, always without question or protest. Like Miss Probert, her disappearance is unexplained, and it is believed she returned to the children 's library. This is most likely due to the remark she made to Miss Probert about wanting to go back where "Puss in Boots means just that and not like that awful magazine ''.
(Kate Brown, 1976) The librarian who replaced Mrs. Partridge on Mr. Wainwright 's return. She appeared in only two episodes, and is shown to share Mr. Wainwright 's dreams about revolution. She is the first woman to suffer the sight of Compo 's matchbox. Although middle - aged, she is attractive, and she and Mr. Wainwright are believed to be one of the original structures for Howard and Marina.
(Josephine Tewson, 2003 -- 2010) After many years of the library setting seldom being used, Miss Davenport was introduced as the new librarian in 2003. A very emotional woman haunted by a string of past rejections, she first appeared as a guest, driving Gavin Hinchcliff around while he skied on the van roof. Originally, Glenda took up the cause of socializing her and tried to fit her in with the coffee - drinker circle of Nora, Ivy, Pearl, and co. They did not take too well to each other; in more recent episodes, she 's bonded with Marina instead, with the pair of them both longing for love in their individual ways. In the episode: "In Which Howard Remembers Where He Left His Bicycle Pump '', it is revealed that Miss Davenport 's first name is "Lucinda ''.
(1987 -- 2002) An extremely long - sighted bumbler, Eli maintained a highly cheerful, friendly attitude despite not having a clue what was going on around him. He generally made only brief cameo appearances, walking into a scene and commenting on his long - sighted misinterpretation of the action, and then walking off again. He was occasionally seen on a bicycle.
On occasion, his long - sightedness caused him to walk into slapstick (and carefully choreographed) mishaps such as walking into the back of a lorry and over the tops of cars, or falling into a skip. For much of his time in the series, Eli also had a Jack Russell dog (which once disappeared, leading Eli to mistake a sheep for the dog). Despite his long - sightedness, Eli is eternally cheerful and optimistic, and glad to see anyone who stops to talk to him. In one episode, a passing comment by Compo seemed to suggest that Eli was a sniper during the Second World War.
In the 1995 New Year Special episode featuring Sir Norman Wisdom, ' The Man Who Nearly Knew Pavarotti ', Eli is the conductor of the Holme Silver Band. Originally brought in as a friend of Wally Batty, the character was so popular that Eli remained on the show after the death of actor Joe Gladwin. Eli and Wally appeared together in the series 9 episode, "Jaws '', in 1987.
Eli never appeared again following the death of O'Dea, though the character was not explicitly killed off. He was replaced by two drunks (who had also appeared in earlier episodes of the series, sometimes credited as Villagers), but they featured in only a few episodes.
In the 1988 episode "The Pig Man Cometh '' of All Creatures Great and Small O'Dea played the character Rupe who, like Eli, had defective vision, clearly alluding to his role in Last of the Summer Wine.
(2005, 2006 -- 2010) A more recent addition to the ladies ' coffee - drinking set, and Pearl 's comrade - in - arms. Nelly 's never - seen husband Travis needs constant attention, which Nelly generally administers over her mobile phone. Nelly occasionally provides more "sophisticated '' viewpoints as a result of having lived further south for some time, but even she regards them with some befuddlement. June Whitfield previously made a "one off '' appearance in the series as a different character, Delphi Potts, in the 2001 Christmas Special, ' Potts in Pole Position ', married to Lother, the character of Warren Mitchell, a couple of years before she became a regular as Nelly. In Series 30, she became the object of Hobbo 's obsession when he became convinced that she was his long - lost mother, much to her annoyance. She was one of the only two regular characters (the other being Ivy) not to appear in the final episode.
(2001, 2002, 2003, 2004 -- 2005, 2007, 2008 -- 2010) He first appeared as Herman Teesdale, "the Repo man '', who is always pursuing Tom Simmonite, claiming that he owes money. (May be related to character Jack Harry Teesdale, who appeared in two episodes.)
He is determined but gullible, and Tom always evades him. From 2005 on, he has not only been mentioned by name, but also calls on Barry for social visits, with Barry not being too thrilled at this newfound friendship. In certain episodes in 2005, it is clear that he still repossesses belongings, which Glenda suggests is the reason none of his friendships lasted: he kept repossessing his friends ' goods.
The character returned in a 2007 episode of the show, and again in the 2008 New Year special, saying that he has retired from debt collecting and changed his name to Morton Beemish in order to start a new life for himself. He seeks out the friendship of his former nemesis, Tom (though Tom remained suspicious of him and would often hide whenever he caught sight of him).
In the final two series (30 -- 31) the character practically lives next door to Barry and Glenda as a near - lodger with Toby Mulberry (aka The Captain).
(1992, 2001, 2002 -- 03, 2004, 2005 -- 06, 2008 -- 2010) The Captain of the local golf club where Barry is often trying to fit in as a member; but, despite his best efforts to impress him, Barry always manages to annoy or offend the Captain, either by becoming involved with some escapade with the main trio, or by some other social faux pas.
Trevor Bannister is best known for playing Mr Lucas in another comedy favourite, Are You Being Served?, with Frank Thornton from 1972 to 1979, and also starred with Brian Wilde in the short - lived Wyatt 's Watchdogs in 1988.
He had previously played a tailor in the 1992 episode ' Who 's Got Rhythm? '. The Captain returned for the 2008 New Year Special, I Was A Hitman For Primrose Dairies, where he received a name, Toby, for the first time. In series 30 he moves in next door to Barry and Glenda and shortly afterwards gains Morton Beemish (aka Herman Teesdale), the former repo man, as a near - lodger, since he 's always there doing tasks around the house. During this time his relationship with Barry appeared to improve, and the two (along with Glenda) would often bond over their shared annoyance with Morton.
(1995, 1996, 2001, 2002 and 2004) Like a number of other characters, Norman Wisdom was originally intended to make one guest appearance in the show, and ended up as a recurring character. He originally played the hapless Billy Ingleton in the 1995 New Year special ' The Man Who Nearly Knew Pavarotti '. He proved so popular that like Auntie Wainwright before him, he was asked to appear in the following year 's special (' Extra! Extra! '). From then on, much - loved comedian Norman Wisdom occasionally pops up, sometimes for the storyline of an episode, at other times in smaller appearances. He is not always credited for smaller appearances.
Local policemen often witness the bizarre goings - on, usually related to the main trio, and watch in bemusement. They are generally seen parked up around the moors and trying not to get involved with anything, instead eating (they have even been seen to have a roadside barbecue on occasion) or drinking tea. They would often be in deep conversation, speculating about bizarre scenarios and whether or not they would come true, only for them to come true, largely thanks to the main trio along with other characters.
(1983, 1988 -- 2010) Kitson first appeared in the 1983 Christmas special ' Getting Sam Home ' and made 2 further guest appearances before becoming a semi-regular character from series 12 onwards. In series 29 he was finally given the name PC Cooper. Cooper tends to be the bigger - headed of the two, but he has many ingenious ways of dealing with petty crimes with minimal disruption to his relaxation. In his first episode he is shown to be a friend of Sid 's (which was the latter 's last appearance on the show before his death).
(1988, 1989, 2004 -- 2010) Emerick first appeared alongside Kitson in ' Downhill Racer '. He made one more appearance in the next series, in the episode ' Three Men and a Mangle ', and later reappeared in 2004 to partner Kitson after Tony Capstick 's death. In series 29 he was finally given the name PC Walsh. Walsh is more level - headed than Cooper and enjoys "taking the mickey '', but he tends to be a little more naïve.
(1987, 1990 -- 2004) Capstick made his first appearance in the 1987 special ' Big Day at Dream Acres ', before becoming a semi-regular alongside Kitson from series 12 in 1990, up to his death in late 2003. His last appearance was the episode ' Yours Truly -- If You 're Not Careful '. Capstick 's character was spacey and less intelligent even than the often - oblivious Cooper.
|
when was delhi designated the national capital territory | Delhi - Wikipedia
Delhi (/ ˈdɛli /, Hindustani pronunciation: (d̪ɪlliː) Dilli), officially the National Capital Territory of Delhi (NCT), is a city and a union territory of India. It is bordered by Haryana on three sides and by Uttar Pradesh to the east. The NCT covers an area of 1,484 square kilometres (573 sq mi). According to the 2011 census, the population of Delhi city proper was over 11 million, the second highest in India after Mumbai, while the population of the whole NCT was about 16.8 million. Delhi 's urban area is now considered to extend beyond the NCT boundary and to include an estimated population of over 26 million people, making it the world 's second largest urban area. As of 2016, estimates of the economy of its urban area ranked Delhi as either the most productive or the second most productive metro area of India. Delhi is the second wealthiest city in India after Mumbai, with a total wealth of $450 billion, and is home to 18 billionaires and 23,000 millionaires.
Delhi has been continuously inhabited since the 6th century BC. Through most of its history, Delhi has served as a capital of various kingdoms and empires. It has been captured, ransacked and rebuilt several times, particularly during the medieval period, and modern Delhi is a cluster of a number of cities spread across the metropolitan region. A union territory, the political administration of the NCT of Delhi today more closely resembles that of a state of India, with its own legislature, high court and an executive council of ministers headed by a Chief Minister. New Delhi is jointly administered by the federal government of India and the local government of Delhi, and is the capital of the NCT of Delhi. Delhi hosted the first and ninth Asian Games in 1951 and 1982 respectively, 1983 NAM Summit, 2010 Men 's Hockey World Cup, 2010 Commonwealth Games, 2012 BRICS Summit and was one of the major host cities of the 2011 Cricket World Cup.
Delhi is also the centre of the National Capital Region (NCR), which is a unique ' interstate regional planning ' area created by the National Capital Region Planning Board Act of 1985.
There are a number of myths and legends associated with the origin of the name Delhi. One of them is derived from Dhillu or Dilu, a king who built a city at this location in 50 BC and named it after himself. Another legend holds that the name of the city is based on the Hindi / Prakrit word dhili (loose) and that it was used by the Tomaras to refer to the city because the Iron Pillar of Delhi had a weak foundation and had to be moved. The coins in circulation in the region under the Tomaras were called dehliwal. According to the Bhavishya Purana, King Prithiviraja of Indraprastha built a new fort in the modern - day Purana Qila area for the convenience of all four castes in his kingdom. He ordered the construction of a gateway to the fort and later named the fort dehali. Some historians believe that the name is derived from Dilli, a corruption of the Hindustani words dehleez or dehali -- both terms meaning ' threshold ' or ' gateway ' -- and symbolic of the city as a gateway to the Gangetic Plain. Another theory suggests that the city 's original name was Dhillika.
The people of Delhi are referred to as Delhiites or Dilliwalas. The city is referenced in various idioms of the Northern Indo - Aryan languages. Examples include:
The area around Delhi was probably inhabited before the second millennium BC and there is evidence of continuous inhabitation since at least the 6th century BC. The city is believed to be the site of Indraprastha, the legendary capital of the Pandavas in the Indian epic Mahabharata. According to Mahabharata, this land was initially a huge mass of forests called ' Khandavaprastha ' which was burnt down to build the city of Indraprastha. The earliest architectural relics date back to the Maurya period (c. 300 BC); in 1966, an inscription of the Mauryan Emperor Ashoka (273 -- 235 BC) was discovered near Srinivaspuri. Remains of eight major cities have been discovered in Delhi. The first five cities were in the southern part of present - day Delhi. King Anang Pal of the Tomara dynasty founded the city of Lal Kot in AD 736. Prithviraj Chauhan conquered Lal Kot in 1178 and renamed it Qila Rai Pithora.
The king Prithviraj Chauhan was defeated in 1192 by Muhammad Ghori, a Muslim invader from Afghanistan, who made a concerted effort to conquer northern India. By 1200, native Hindu resistance had begun to crumble, and the dominance of foreign Turkic Muslim dynasties in north India was to last for the next five centuries. Ghori 's slave general, Qutb - ud - din Aibak, was given the responsibility of governing the conquered territories of India, and Ghori then returned to his capital, Ghor. Ghori died in 1206 AD, and as he had no heirs, his generals declared themselves independent in different parts of his empire. Qutb - ud - din assumed control of Ghori 's Indian possessions. He laid the foundation of the Delhi Sultanate and the Mamluk Dynasty, and began construction of the Qutb Minar and the Quwwat - al - Islam (Might of Islam) mosque, the earliest extant mosque in India. Qutb - ud - din faced widespread Hindu rebellions because he demolished several ancient temples to acquire wealth and material to build mosques and other monuments. It was his successor, Iltutmish (1211 -- 36), who consolidated the Turkic conquest of northern India. Razia Sultan, daughter of Iltutmish, succeeded him as the Sultan of Delhi. She is the first and only woman to have ruled over Delhi.
For the next three hundred years, Delhi was ruled by a succession of Turkic dynasties and, lastly, the Afghan Lodhi dynasty. They built several forts and townships that are part of the seven cities of Delhi. Delhi was a major centre of Sufism during this period. The Mamluk Sultanate (Delhi) was overthrown in 1290 by Jalal ud din Firuz Khalji (1290 -- 1320). Under the second Khalji ruler, Ala - ud - din Khalji, the Delhi sultanate extended its control south of the Narmada River in the Deccan. The Delhi sultanate reached its greatest extent during the reign of Muhammad bin Tughluq (1325 -- 1351). In an attempt to bring the whole of the Deccan under control, he moved his capital to Daulatabad, Maharashtra, in central India. However, by moving away from Delhi he lost control of the north and was forced to return to Delhi to restore order; the southern provinces then broke away. In the years following the reign of Firoz Shah Tughlaq (1351 -- 1388), the Delhi sultanate rapidly began to lose its hold over its northern provinces. Delhi was captured and sacked by Timur Lenk in 1398, who massacred 100,000 captives. Delhi 's decline continued under the Sayyid dynasty (1414 -- 1451), until the sultanate was reduced to Delhi and its hinterland. Under the Afghan Lodhi dynasty (1451 -- 1526), the Delhi sultanate recovered control of the Punjab and the Gangetic plain to once again achieve domination over Northern India. However, the recovery was short - lived and the sultanate was destroyed in 1526 by Babur, founder of the Mughal dynasty.
Babur was a descendant of Genghis Khan and Timur, from the Fergana Valley in modern - day Uzbekistan. In 1526, he invaded India, defeated the last Lodhi sultan in the First Battle of Panipat and founded the Mughal Empire, which ruled from Delhi and Agra. The Mughal dynasty ruled Delhi for more than three centuries, with a sixteen - year hiatus during the reigns of Sher Shah Suri and Hemu from 1540 to 1556. In 1553, the Hindu king Hemu acceded to the throne of Delhi by defeating the forces of Mughal Emperor Humayun at Agra and Delhi. However, the Mughals re-established their rule after Akbar 's army defeated Hemu during the Second Battle of Panipat in 1556. Shah Jahan built the seventh city of Delhi that bears his name, Shahjahanabad, which served as the capital of the Mughal Empire from 1638 and is today known as the Old City or Old Delhi.
After the death of Aurangzeb in 1707, the Mughal Empire 's influence declined rapidly as the Hindu Maratha Empire from the Deccan Plateau rose to prominence. In 1737, Maratha forces sacked Delhi following their victory against the Mughals in the First Battle of Delhi. In 1739, the Mughal Empire lost the huge Battle of Karnal in less than three hours to the numerically smaller but militarily superior Persian army led by Nader Shah of Persia. After his invasion, he completely sacked and looted Delhi, carrying away immense wealth including the Peacock Throne, the Daria - i - Noor and the Koh - i - Noor. The Mughals, severely weakened, could never overcome this crushing defeat and humiliation, which also left the way open for more invaders to come, including eventually the British. Nader eventually agreed to leave the city and India after forcing the Mughal emperor Muhammad Shah I to beg him for mercy and to grant him the keys of the city and the royal treasury. A treaty signed in 1752 made the Marathas the protectors of the Mughal throne in Delhi.
In 1757, the Afghan ruler Ahmad Shah Durrani sacked Delhi. He returned to Afghanistan leaving a Mughal puppet ruler in nominal control. The Marathas again occupied Delhi in 1758, and were in control until their defeat in 1761 at the Third Battle of Panipat, when the city was captured again by Ahmad Shah. However, in 1771, the Marathas established a protectorate over Delhi when the Maratha ruler Mahadji Shinde recaptured Delhi, and the Mughal Emperor Shah Alam II was installed as a puppet ruler in 1772. In 1783, Sikhs under Baghel Singh captured Delhi and the Red Fort, but due to the treaty signed, the Sikhs withdrew from the Red Fort and agreed to restore Shah Alam II as the emperor. In 1803, during the Second Anglo - Maratha War, the forces of the British East India Company defeated the Maratha forces in the Battle of Delhi.
During the Indian Rebellion of 1857, Delhi fell to the forces of the East India Company after a bloody fight known as the Siege of Delhi. The city came under the direct control of the British Government in 1858 and was made a district province of the Punjab. In 1911, it was announced that the capital of British - held territories in India was to be transferred from Calcutta to Delhi. The name "New Delhi '' was given in 1927, and the new capital was inaugurated on 13 February 1931. New Delhi, also known as Lutyens ' Delhi, was officially declared the capital of the Union of India after the country gained independence on 15 August 1947. During the partition of India, thousands of Hindu and Sikh refugees, mainly from West Punjab, fled to Delhi, while many Muslim residents of the city migrated to Pakistan. Migration to Delhi from the rest of India continues (as of 2013), contributing more to the rise of Delhi 's population than the birth rate, which is declining.
The States Reorganisation Act, 1956 created the Union Territory of Delhi from its predecessor, the Chief Commissioner 's Province of Delhi. The Constitution (Sixty - ninth Amendment) Act, 1991 declared the Union Territory of Delhi to be formally known as the National Capital Territory of Delhi. The Act gave Delhi its own legislative assembly, seated in the Civil Lines area, though with limited powers.
In December 2001, the Parliament of India building in New Delhi was attacked by armed militants, killing six security personnel. India suspected Pakistan - based militant groups were behind the attack, which caused a major diplomatic crisis between the two countries. There were further terrorist attacks in Delhi in October 2005 and September 2008, resulting in a total of 103 deaths.
Delhi is located at 28 ° 37 ′ N 77 ° 14 ′ E / 28.61 ° N 77.23 ° E / 28.61; 77.23, and lies in Northern India. It borders the Indian states of Haryana on the north, west and south and Uttar Pradesh (UP) to the east. Two prominent features of the geography of Delhi are the Yamuna flood plains and the Delhi ridge. The Yamuna river was the historical boundary between Punjab and UP, and its flood plains provide fertile alluvial soil suitable for agriculture but are prone to recurrent floods. The Yamuna, a sacred river in Hinduism, is the only major river flowing through Delhi. The Hindon River separates Ghaziabad from the eastern part of Delhi. The Delhi ridge originates from the Aravalli Range in the south and encircles the west, north - east and north - west parts of the city. It reaches a height of 318 m (1,043 ft) and is a dominant feature of the region.
The National Capital Territory of Delhi covers an area of 1,484 km² (573 sq mi), of which 783 km² (302 sq mi) is designated rural and 700 km² (270 sq mi) urban, making it the largest city in terms of area in the country. It has a length of 51.9 km (32 mi) and a width of 48.48 km (30 mi).
Delhi is included in India 's seismic zone - IV, indicating its vulnerability to major earthquakes.
Delhi features an atypical version of the humid subtropical climate (Köppen Cwa) bordering a hot semi-arid climate (Köppen BSh). The warm season lasts from 21 March to 15 June with an average daily high temperature above 39 ° C (102 ° F). The hottest day of the year is 22 May, with an average high of 46 ° C (115 ° F) and low of 30 ° C (86 ° F). The cold season lasts from 26 November to 9 February with an average daily high temperature below 20 ° C (68 ° F). The coldest day of the year is 4 January, with an average low of 2 ° C (36 ° F) and high of 14 ° C (57 ° F). In early March, the wind direction changes from north - westerly to south - westerly. From April to October the weather is hot. The monsoon arrives at the end of June, along with an increase in humidity. The brief, mild winter starts in late November, peaks in January and heavy fog often occurs.
Temperatures in Delhi usually range from 2 to 47 ° C (35.6 to 116.6 ° F), with the lowest and highest temperatures ever recorded being − 2.2 and 48.4 ° C (28.0 and 119.1 ° F) respectively. The annual mean temperature is 25 ° C (77 ° F); monthly mean temperatures range from 13 to 32 ° C (55 to 90 ° F). The highest temperature recorded in July was 45 ° C (113 ° F) in 1931. The average annual rainfall is approximately 886 mm (34.9 in), most of which falls during the monsoon in July and August. The average date of the advent of monsoon winds in Delhi is 29 June.
According to the World Health Organization (WHO), Delhi was the most polluted city in the world in 2014. In 2016, the WHO downgraded Delhi to eleventh - worst in its urban air quality database. According to one estimate, air pollution causes the death of about 10,500 people in Delhi every year. During 2013 -- 14, peak levels of fine particulate matter (PM) in Delhi increased by about 44 %, primarily due to high vehicular and industrial emissions, construction work and crop burning in adjoining states. Delhi has the highest level of the airborne particulate matter PM2.5, considered most harmful to health, at 153 micrograms per cubic metre. The rising air pollution level has significantly increased lung - related ailments (especially asthma and lung cancer) among Delhi 's children and women. The dense smog in Delhi during the winter season results in major air and rail traffic disruptions every year. According to Indian meteorologists, the average maximum temperature in Delhi during winters has declined notably since 1998 due to rising air pollution.
Environmentalists have criticised the Delhi government for not doing enough to curb air pollution and to inform people about air quality issues. Most of Delhi 's residents are unaware of alarming levels of air pollution in the city and the health risks associated with it; however, as of 2015, awareness, particularly among the foreign diplomatic community and high - income Indians, was noticeably increasing. Since the mid-1990s, Delhi has undertaken some measures to curb air pollution -- Delhi has the third highest quantity of trees among Indian cities and the Delhi Transport Corporation operates the world 's largest fleet of environmentally friendly compressed natural gas (CNG) buses. In 1996, the Centre for Science and Environment (CSE) started a public interest litigation in the Supreme Court of India that ordered the conversion of Delhi 's fleet of buses and taxis to run on compressed natural gas (CNG) and banned the use of leaded petrol in 1998. In 2003, Delhi won the United States Department of Energy 's first ' Clean Cities International Partner of the Year ' award for its "bold efforts to curb air pollution and support alternative fuel initiatives ''. The Delhi Metro has also been credited for significantly reducing air pollutants in the city.
However, according to several authors, most of these gains have been lost, especially due to stubble burning, a rise in the market share of diesel cars and a considerable decline in bus ridership. According to CSE and System of Air Quality Weather Forecasting and Research (SAFAR), burning of agricultural waste in nearby Punjab, Haryana and Uttar Pradesh regions results in severe intensification of smog over Delhi. The state government of Uttar Pradesh is considering imposing a ban on crop burning to reduce pollution in Delhi NCR and an environmental panel has appealed to India 's Supreme Court to impose a 30 % cess on diesel cars.
The Circles of Sustainability assessment of Delhi gives a marginally more favourable impression of the ecological sustainability of the city only because it is based on a more comprehensive series of measures than only air pollution. Part of the reason that the city remains assessed at basic sustainability is because of the low resource - use and carbon emissions of its poorer neighbourhoods.
As of July 2007, the National Capital Territory of Delhi comprises nine districts, 27 tehsils, 59 census towns, 300 villages, and three statutory towns: the Municipal Corporation of Delhi (MCD) -- 1,397.3 km² or 540 sq mi, the New Delhi Municipal Council (NDMC) -- 42.7 km² or 16 sq mi, and the Delhi Cantonment Board (DCB) -- 43 km² or 17 sq mi.
Since the trifurcation of the MCD at the start of 2012, Delhi has been run by five local municipal bodies: the North Delhi, South Delhi and East Delhi Municipal Corporations, the New Delhi Municipal Council and the Delhi Cantonment Board. In July of that year, shortly after the MCD trifurcation, the Delhi Government increased the number of districts in Delhi from nine to eleven.
Delhi (civic administration) was ranked 5th out of 21 cities for best governance and administrative practices in India in 2014. It scored 3.6 out of 10, compared to the national average of 3.3.
Delhi houses the Supreme Court of India and the regional Delhi High Court, along with the Small Causes Court for civil cases and the Magistrate Court and the Sessions Court for criminal cases, all of which have jurisdiction over Delhi. The city is administratively divided into eleven police zones, which are subdivided into 95 local police stations.
As a first - level administrative division, the National Capital Territory of Delhi has its own Legislative Assembly, Lieutenant Governor, council of ministers and Chief Minister. Members of the legislative assembly are directly elected from territorial constituencies in the NCT. The legislative assembly was abolished in 1956, after which direct federal control was implemented until it was re-established in 1993. The Municipal Corporation handles civic administration for the city as part of the Panchayati Raj Act. The Government of India and the Government of National Capital Territory of Delhi jointly administer New Delhi, where both bodies are located. The Parliament of India, the Rashtrapati Bhavan (Presidential Palace), the Cabinet Secretariat and the Supreme Court of India are located in the municipal district of New Delhi. There are 70 assembly constituencies and seven Lok Sabha (Indian parliament 's lower house) constituencies in Delhi. The Indian National Congress (Congress) formed all the governments in Delhi until the 1990s, when the Bharatiya Janata Party (BJP), led by Madan Lal Khurana, came to power. In 1998, the Congress returned to power under the leadership of Sheila Dikshit, who was subsequently re-elected for three consecutive terms. But in 2013, the Congress was ousted from power by the newly formed Aam Aadmi Party (AAP), led by Arvind Kejriwal, which formed the government with outside support from the Congress. However, that government was short - lived, collapsing after only 49 days. Delhi was then under President 's rule till February 2015. On 10 February 2015, the Aam Aadmi Party returned to power after a landslide victory, winning 67 out of the 70 seats in the Delhi Legislative Assembly.
Since 2011, Delhi has had three municipal bodies.
In 2017, the BJP was victorious in all three corporations.
Delhi is the largest commercial centre in northern India. As of 2016, estimates of the economy of the Delhi urban area ranged from $167 to $370 billion (PPP metro GDP), ranking it either the most or second-most productive metro area of India. The nominal GSDP of the NCT of Delhi for 2016 -- 17 was estimated at ₹ 6,224 billion (US $98 billion), 13 % higher than in 2015 -- 16.
As per the Economic survey of Delhi (2005 -- 2006), the tertiary sector contributes 70.95 % of Delhi 's gross SDP followed by secondary and primary sectors with 25.20 % and 3.85 % contributions respectively. Delhi 's workforce constitutes 32.82 % of the population, and increased by 52.52 % between 1991 and 2001. Delhi 's unemployment rate decreased from 12.57 % in 1999 -- 2000 to 4.63 % in 2003. In December 2004, 636,000 people were registered with various employment exchange programmes in Delhi. In 2001 the total workforce in national and state governments and the quasi-government sector was 620,000, and the private sector employed 219,000. Key service industries are information technology, telecommunications, hotels, banking, media and tourism. Construction, power, health and community services and real estate are also important to the city 's economy. Delhi has one of India 's largest and fastest growing retail industries. Manufacturing also grew considerably as consumer goods companies established manufacturing units and headquarters in the city. Delhi 's large consumer market and the availability of skilled labour has also attracted foreign investment. In 2001, the manufacturing sector employed 1,440,000 workers and the city had 129,000 industrial units.
Delhi 's municipal water supply is managed by the Delhi Jal Board (DJB). As of June 2005, it supplied 650 million gallons per day (MGD), whereas the estimated consumption requirement is 963 MGD. The shortfall is met by private and public tube wells and hand pumps. At 240 MGD, the Bhakra storage is DJB 's largest water source, followed by the Yamuna and Ganges rivers. Delhi 's groundwater level is falling and its population density is increasing, so residents often encounter acute water shortage. Research on Delhi suggests that up to half of the city 's water use is unofficial groundwater. In Delhi, daily domestic solid waste production is 8000 tonnes which is dumped at three landfill locations by MCD. The daily domestic waste water production is 470 MGD and industrial waste water is 70 MGD. A large portion of the sewage flows untreated into the Yamuna river.
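A minimal arithmetic sketch (not from the source) of the supply gap implied by the figures above; the variable names are illustrative and the values are simply the MGD numbers quoted in this paragraph.

    # Illustrative check only: values are the MGD figures quoted above (June 2005).
    supply_mgd = 650     # water supplied by the Delhi Jal Board, million gallons per day
    demand_mgd = 963     # estimated consumption requirement, million gallons per day

    shortfall_mgd = demand_mgd - supply_mgd
    shortfall_share = shortfall_mgd / demand_mgd
    print(f"Shortfall: {shortfall_mgd} MGD ({shortfall_share:.0%} of estimated demand)")
    # Shortfall: 313 MGD (33% of estimated demand)

This is the roughly one - third gap that, as the paragraph notes, is met by private and public tube wells and hand pumps.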
The city 's electricity consumption is about 1,265 kWh per capita, but the actual demand is higher. Power distribution in Delhi has been managed by Tata Power Distribution and BSES Yamuna & Rajdhani since 2002. The Delhi Fire Service runs 43 fire stations that attend about 15,000 fire and rescue calls per year. The state - owned Mahanagar Telephone Nigam Limited (MTNL) and private enterprises such as Vodafone, Airtel, Idea Cellular, Reliance Infocomm, Aircel, Reliance Jio and Tata Docomo provide telephone and cell phone services to the city. Cellular coverage is available in GSM, CDMA, 3G and 4G.
Indira Gandhi International Airport, situated to the southwest of Delhi, is the main gateway for the city 's domestic and international civilian air traffic. In 2015 -- 16, the airport handled more than 48 million passengers, making it the busiest airport in India and South Asia. Terminal 3, which cost ₹ 96.8 billion (US $1.5 billion) to construct between 2007 and 2010, handles an additional 37 million passengers annually.
The Delhi Flying Club, established in 1928 with two de Havilland Moth aircraft named Delhi and Roshanara, was based at Safdarjung Airport, which started operations in 1929, when it was Delhi 's only airport and the second in India. The airport functioned until 2001; however, in January 2002 the government closed the airport for flying activities because of security concerns following the New York attacks in September 2001. Since then, the club only carries out aircraft maintenance courses and is used for helicopter rides to Indira Gandhi International Airport for VIPs, including the president and the prime minister.
A second airport open for commercial flights has been suggested either by expansion of Meerut Airport or construction of a new airport in Greater Noida.
Delhi has the highest road density in India, at 2,103 km of road per 100 km².
Buses are the most popular means of road transport catering to about 60 % of Delhi 's total demand. Delhi has one of India 's largest bus transport systems. Buses are operated by the state - owned Delhi Transport Corporation (DTC), which owns the largest fleet of compressed natural gas (CNG) - fueled buses in the world. Personal vehicles especially cars also form a major chunk of vehicles plying on Delhi roads. Delhi has the highest number of registered cars compared to any other metropolitan city in India. Taxis, auto rickshaws, and cycle rickshaws also ply on Delhi roads in large numbers.
Important Roads in Delhi
Some roads and expressways serve as important pillars of Delhi 's road infrastructure:
National Highways Passing Through Delhi
Delhi is connected by Road to various parts of the country through several National Highways:
Delhi is a major junction in the Indian railway network and is the headquarters of the Northern Railway. The five main railway stations are New Delhi railway station, Old Delhi Railway Station, Hazrat Nizamuddin Railway Station, Anand Vihar Railway Terminal and Sarai Rohilla. The Delhi Metro, a mass rapid transit system built and operated by Delhi Metro Rail Corporation (DMRC), serves many parts of Delhi and the neighbouring cities Faridabad, Gurgaon, Noida and Ghaziabad. As of August 2011, the metro consists of six operational lines with a total length of 189 km (117 mi) and 146 stations, and several other lines are under construction. The Phase - I was built at a cost of US $2.3 billion and the Phase - II was expected to cost an additional ₹ 216 billion (US $3.4 billion). Phase - II has a total length of 128 km and was completed by 2010. Delhi Metro completed 10 years of operation on 25 December 2012. It carries millions of passengers every day. In addition to the Delhi Metro, a suburban railway, the Delhi Suburban Railway exists.
The Delhi Metro is a rapid transit system serving Delhi, Faridabad, Gurgaon, Noida and Ghaziabad in the National Capital Region of India. Delhi Metro is the world 's 10th largest metro system in terms of length. Delhi Metro was India 's second modern public transportation system, which has revolutionised travel by providing a fast, reliable, safe, and comfortable means of transport. The network consists of six lines with a total length of 189.63 kilometres (117.83 miles) with 142 stations, of which 35 are underground, five are at - grade, and the remainder are elevated. All stations have escalators, lifts, and tactile tiles to guide the visually impaired from station entrances to trains. There are 18 designated parking sites at Metro stations to further encourage use of the system. In March 2010, DMRC partnered with Google India (through Google Transit) to provide train schedule and route information to mobile devices with Google Maps. It has a combination of elevated, at - grade, and underground lines, and uses both broad gauge and standard gauge rolling stock. Four types of rolling stock are used: Mitsubishi - ROTEM Broad gauge, Bombardier MOVIA, Mitsubishi - ROTEM Standard gauge, and CAF Beasain Standard gauge.
Delhi Metro is being built and operated by the Delhi Metro Rail Corporation Limited (DMRC), a state - owned company with equal equity participation from Government of India and Government of National Capital Territory of Delhi. However, the organisation is under the administrative control of Ministry of Urban Development, Government of India. Besides construction and operation of Delhi Metro, DMRC is also involved in the planning and implementation of metro rail, monorail, and high - speed rail projects in India and providing consultancy services to other metro projects in the country as well as abroad. The Delhi Metro project was spearheaded by Padma Vibhushan E. Sreedharan, the Managing Director of DMRC and popularly known as the "Metro Man '' of India. He famously resigned from DMRC taking moral responsibility for a metro bridge collapse, which took five lives. Sreedharan was awarded the prestigious Legion of Honour by the French Government for his contribution to Delhi Metro.
Metro services are being extended to important hubs in the cities that are close to offices, colleges, and tourist spots. This will facilitate easy conveyance for the citizens, who otherwise have to rely on public buses that are heavily crowded and are often stuck in traffic jams.
Eight RRTS corridors have been proposed by the National Capital Region Planning Board (NCRPB) to facilitate travel from nearby cities in the NCR to Delhi. The three main corridors in the first phase, which are expected to become operational before 2019, are as follows:
The remaining five corridors have also been approved by the National Capital Region Planning Board but are planned for the second phase.
As of 2007, private vehicles account for 30 % of the total demand for transport. Delhi has 1,922.32 km of road length per 100 km², one of the highest road densities in India. It is connected to other parts of India by five National Highways: NH 1, 2, 8, 10 and 24. The city 's road network is maintained by MCD, NDMC, the Delhi Cantonment Board, the Public Works Department (PWD) and the Delhi Development Authority. The Delhi - Gurgaon Expressway connects Delhi with Gurgaon and the international airport. The Delhi - Faridabad Skyway connects Delhi with the neighbouring industrial town of Faridabad. The DND Flyway and the Noida - Greater Noida Expressway connect Delhi with the suburbs of Noida and Greater Noida. Delhi 's rapid rate of economic development and population growth has resulted in an increasing demand for transport, creating excessive pressure on the city 's transport infrastructure. As of 2008, the number of vehicles in the metropolitan region, Delhi NCR, was 11.2 million. In 2008, there were 85 cars in Delhi for every 1,000 of its residents.
To meet the transport demand, the State and Union government constructed a mass rapid transit system, including the Delhi Metro. In 1998, the Supreme Court of India ordered that all public transport vehicles in Delhi must be fuelled by compressed natural gas (CNG). Buses are the most popular means of public transport, catering to about 60 % of the total demand. The state - owned Delhi Transport Corporation (DTC) is a major bus service provider which operates the world 's largest fleet of CNG - fuelled buses. Delhi Bus Rapid Transit System runs between Ambedkar Nagar and Delhi Gate.
According to the 2011 census of India, the population of the NCT of Delhi is 16,753,235. The corresponding population density was 11,297 persons per km², with a sex ratio of 866 women per 1000 men and a literacy rate of 86.34 %. In 2004, the birth rate, death rate and infant mortality rate per 1000 population were 20.03, 5.59 and 13.08 respectively. In 2001, the population of Delhi increased by 285,000 as a result of migration and by 215,000 as a result of natural population growth, which made Delhi one of the fastest growing cities in the world. Dwarka Sub City, Asia 's largest planned residential area, is located within the National Capital Territory of Delhi. Urban expansion has resulted in Delhi 's urban area now being considered as extending beyond the NCT boundaries to incorporate towns and cities of neighbouring states, including Gurgaon and Faridabad of Haryana, and Ghaziabad and Noida of Uttar Pradesh, with the total population estimated by the United Nations at over 26 million. According to the UN this makes the Delhi urban area the world 's second largest, after Tokyo, although Demographia declares the Jakarta urban area to be the second largest. The 2011 census provided two figures for urban area population: 16,314,838 within the NCT boundary, and 21,753,486 for the Extended Urban Area.
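As a quick cross - check (an illustration, not part of the census source), the quoted density follows from dividing the census population by the NCT area given earlier in the article; the small difference from the quoted 11,297 reflects rounding of the area figure.

    # Illustrative check only, using figures quoted in this article.
    population_nct = 16_753_235   # 2011 census population of the NCT of Delhi
    area_km2 = 1_484              # NCT area in square kilometres

    density = population_nct / area_km2
    print(f"{density:,.0f} persons per km²")
    # 11,289 persons per km² (the article quotes 11,297)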
Hinduism is Delhi 's predominant religious faith, with 81.68 % of Delhi 's population, followed by Islam (12.86 %), Sikhism (3.40 %), Jainism (0.99 %), Christianity (0.87 %), and Buddhism (0.11 %). Other minority religions include Zoroastrianism, Baha'ism and Judaism.
According to the 50th report of the commissioner for linguistic minorities in India, which was submitted in 2014, Hindi is Delhi 's most spoken language, with 80.94 % speakers, followed by Punjabi (7.14 %) and Urdu (6.31 %). Hindi is also the official language of Delhi, while Urdu and Punjabi have been declared additional official languages. The remaining 5.61 % of Delhiites speak other languages.
Around 22 % of the population of Delhi lives in slum areas with "inadequate provision of basic services ''. The majority of these slums have inadequate provision of basic facilities, and according to a DUSIB report, 16 % of the people do not use toilets and almost 22 % practise open defecation.
Delhi 's culture has been influenced by its lengthy history and historic association as the capital of India. Although a strong Punjabi influence can be seen in the language, dress and cuisine brought by the large number of refugees who came following the partition in 1947, the recent migration from other parts of India has made it a melting pot. This is exemplified by many significant monuments in the city. Delhi is also identified as the location of Indraprastha, the ancient capital of the Pandavas. The Archaeological Survey of India recognises 1200 heritage buildings and 175 monuments as national heritage sites. In the Old City, the Mughals and the Turkic rulers constructed several architecturally significant buildings, such as the Jama Masjid -- India 's largest mosque, built in 1656 -- and the Red Fort. Three World Heritage Sites -- the Red Fort, Qutab Minar and Humayun 's Tomb -- are located in Delhi. Other monuments include the India Gate, the Jantar Mantar -- an 18th - century astronomical observatory -- and the Purana Qila -- a 16th - century fortress. The Laxminarayan temple, Akshardham temple, Bangla Sahib, the Bahá'í Lotus temple and the ISKCON temple are examples of modern architecture. Raj Ghat and associated memorials house memorials of Mahatma Gandhi and other notable personalities. New Delhi houses several government buildings and official residences reminiscent of British colonial architecture, including the Rashtrapati Bhavan, the Secretariat, Rajpath, the Parliament of India and Vijay Chowk. Safdarjung 's Tomb is an example of the Mughal gardens style. Some regal havelis (palatial residences) are in the Old City.
The Lotus Temple is a Bahá'í House of Worship completed in 1986. Notable for its flowerlike shape, it serves as the Mother Temple of the Indian subcontinent and has become a prominent attraction in the city. The Lotus Temple has won numerous architectural awards and been featured in hundreds of newspaper and magazine articles. Like all other Bahá'í Houses of Worship, it is open to all regardless of religion or any other distinction, as emphasised in Bahá'í texts. The Bahá'í laws emphasise that the spirit of the House of Worship be that of a gathering place where people of all religions may worship God without denominational restrictions. The Bahá'í laws also stipulate that only the holy scriptures of the Bahá'í Faith and other religions can be read or chanted inside, in any language; while readings and prayers can be set to music by choirs, no musical instruments can be played inside. Furthermore, no sermons can be delivered, and there can be no ritualistic ceremonies practised.
Chandni Chowk, a 17th - century market, is one of the most popular shopping areas in Delhi for jewellery and Zari saris. Delhi 's arts and crafts include Zardozi -- an embroidery done with gold thread -- and Meenakari -- the art of enamelling.
Delhi 's association and geographic proximity to the capital, New Delhi, has amplified the importance of national events and holidays like Republic Day, Independence Day (15 August) and Gandhi Jayanti. On Independence Day, the Prime Minister addresses the nation from the Red Fort. Most Delhiites celebrate the day by flying kites, which are considered a symbol of freedom. The Republic Day Parade is a large cultural and military parade showcasing India 's cultural diversity and military strength. Over the centuries, Delhi has become known for its composite culture, and a festival that symbolises this is the Phool Walon Ki Sair, which takes place in September. Flowers and pankhe -- fans embroidered with flowers -- are offered to the shrine of the 13th - century Sufi saint Khwaja Bakhtiyar Kaki and the Yogmaya temple, both situated in Mehrauli.
Religious festivals include Diwali (the festival of lights), Mahavir Jayanti, Guru Nanak 's Birthday, Raksha Bandhan, Durga Puja, Holi, Lohri, Chauth, Krishna Janmastami, Maha Shivratri, Eid ul - Fitr, Moharram and Buddha Jayanti. The Qutub Festival is a cultural event during which performances of musicians and dancers from all over India are showcased at night, with the Qutub Minar as a backdrop. Other events such as Kite Flying Festival, International Mango Festival and Vasant Panchami (the Spring Festival) are held every year in Delhi. The Auto Expo, Asia 's largest auto show, is held in Delhi biennially. The New Delhi World Book Fair, held biennially at the Pragati Maidan, is the second largest exhibition of books in the world. Delhi is often regarded as the "Book Capital '' of India because of high readership. India International Trade Fair (IITF), organised by ITPO is the biggest cultural and shopping fair of Delhi which takes place in November each year and is visited by more than 15 lakh people.
As India 's national capital and the centuries - old Mughal capital, Delhi influenced the food habits of its residents and is where Mughlai cuisine originated. Along with Indian cuisine, a variety of international cuisines are popular among the residents. The blending of food habits among the city 's residents created a unique style of cooking which became popular throughout the world, with dishes such as kebab, biryani and tandoori. The city 's classic dishes include butter chicken, dal makhani, shahi paneer, aloo chaat, chaat, dahi bhalla, kachori, gol gappe, samosa, chole bhature, chole kulche, gulab jamun, jalebi and lassi.
The fast - paced lifestyle of Delhi 's people has motivated the growth of street food outlets. A trend of dining at local dhabas is popular among the residents. High - profile restaurants have gained popularity in recent years; among the popular restaurants are the Karim Hotel, the Punjab Grill and Bukhara. The Gali Paranthe Wali (the street of fried bread) is a street in Chandni Chowk known particularly for its food eateries since the 1870s. Almost the entire street is occupied by fast food stalls or street vendors. It has nearly become a tradition that almost every prime minister of India has visited the street to eat paratha at least once. Other Indian cuisines are also available in this area even though the street specialises in north Indian food.
According to Euromonitor International, Delhi was ranked the 28th most visited city in the world and the first in India by foreign visitors in 2015. There are numerous tourist attractions in Delhi, both historic and modern. The three UNESCO World Heritage Sites in Delhi -- the Qutb Complex, the Red Fort and Humayun 's Tomb -- are among the finest examples of Indo - Islamic architecture. Another prominent landmark of Delhi is India Gate, a war memorial built in 1931 to soldiers of the British Indian Army who died during the First World War. Delhi has several famous places of worship of various religions. One of the largest Hindu temple complexes in the world, Akshardham is a major tourist attraction in the city. Other famous religious sites include the Laxminarayan Temple, Gurudwara Bangla Sahib, the Lotus Temple, Jama Masjid and the ISKCON Temple. Delhi is also a hub for shopping of all kinds. Connaught Place, Chandni Chowk, Khan Market and Dilli Haat are some of the major retail markets in Delhi. Major shopping malls include Select Citywalk, DLF Promenade, DLF Emporio, Metro Walk and Ansal Plaza.
Private schools in Delhi -- which use either English or Hindi as the language of instruction -- are affiliated to one of three administering bodies, the Council for the Indian School Certificate Examinations (CISCE), the Central Board for Secondary Education (CBSE) or the National Institute of Open Schooling (NIOS). In 2004 -- 05, approximately 15.29 lakh (1.529 million) students were enrolled in primary schools, 8.22 lakh (0.822 million) in middle schools and 6.69 lakh (0.669 million) in secondary schools across Delhi. Female students represented 49 % of the total enrolment. The same year, the Delhi government spent between 1.58 % and 1.95 % of its gross state domestic product on education.
Schools and higher educational institutions in Delhi are administered either by the Directorate of Education, the NCT government or private organisations. In 2006, Delhi had 165 colleges, five medical colleges and eight engineering colleges, seven major universities and nine deemed universities.
The premier management colleges of Delhi such as Faculty of Management Studies (Delhi) and Indian Institute of Foreign Trade rank the best in India. All India Institute of Medical Sciences Delhi is a premier medical school for treatment and research. National Law University, Delhi is a prominent law school and is affiliated to the Bar Council of India.
Delhi Technological University (formerly Delhi College of Engineering), Indraprastha Institute of Information Technology, Netaji Subhas Institute of Technology, Guru Gobind Singh Indraprastha University and National Law University, Delhi are the only state universities. University of Delhi, Jawaharlal Nehru University and Jamia Millia Islamia are the central universities, and Indira Gandhi National Open University is for distance education. As of 2008, about 16 % of all Delhi residents possessed at least a college graduate degree.
As the capital of India, Delhi is the focus of political reportage, including regular television broadcasts of Parliament sessions. Many national media agencies, including the state - owned Press Trust of India, Media Trust of India and Doordarshan, are based in the city. Television programming includes two free terrestrial television channels offered by Doordarshan, and several Hindi, English, and regional - language cable channels offered by multi system operators. Satellite television has yet to gain a large quantity of subscribers in the city.
Print journalism remains a popular news medium in Delhi. The city 's Hindi newspapers include Navbharat Times, Hindustan Dainik, Punjab Kesari, Pavitra Bharat, Dainik Jagran, Dainik Bhaskar, Amar Ujala and Dainik Desbandhu. Amongst the English language newspapers, The Hindustan Times, with a daily circulation of over a million copies, is the single largest daily. Other major English newspapers include Times of India, The Hindu, Indian Express, Business Standard, The Pioneer, The Statesman, and The Asian Age. Regional language newspapers include the Malayalam daily Malayala Manorama and the Tamil dailies Dinamalar and Dinakaran.
Radio is a less popular mass medium in Delhi, although FM radio has gained popularity since the inauguration of several new stations in 2006. A number of state - owned and private radio stations broadcast from Delhi.
Delhi has hosted many major international sporting events, including the first and also the ninth Asian Games, the 2010 Hockey World Cup, the 2010 Commonwealth Games and the 2011 Cricket World Cup. Delhi lost bidding for the 2014 Asian Games, and considered making a bid for the 2020 Summer Olympics. However, sports minister Manohar Singh Gill later stated that funding infrastructure would come before a 2020 bid. There are indications of a possible 2028 bid.
The 2010 Commonwealth Games, which ran from 3 to 14 October 2010, was one of the largest sporting events held in India. The opening ceremony of the 2010 Commonwealth Games was held at the Jawaharlal Nehru Stadium, the main stadium of the event, in New Delhi at 7: 00 pm Indian Standard Time on 3 October 2010. The ceremony featured over 8,000 performers and lasted for two and a half hours. It is estimated that ₹ 3.5 billion (US $55 million) were spent to produce the ceremony. Events took place at 12 competition venues. 20 training venues were used in the Games, including seven venues within Delhi University. The rugby stadium in Delhi University North Campus hosted rugby games for the Commonwealth Games. The mess left behind after the Commonwealth Games prompted Prime Minister Manmohan Singh to replace Sports and Youth Affairs minister Manohar Singh Gill with Ajay Maken in a Cabinet reshuffle on 19 January 2011.
Cricket and football are the most popular sports in Delhi. There are several cricket grounds, or maidans, located across the city. The Feroz Shah Kotla Ground (known commonly as the Kotla) is one of the oldest cricket grounds in India and is a venue for international cricket matches. It is the home ground of the Delhi cricket team, which represents the city in the Ranji Trophy, the premier Indian domestic first - class cricket championship. The Delhi cricket team has produced several world - class international cricketers such as Virender Sehwag, Virat Kohli, Gautam Gambhir, Madan Lal, Chetan Chauhan, Ishant Sharma and Bishan Singh Bedi to name a few. The Railways and Services cricket teams in the Ranji Trophy also play their home matches in Delhi, in the Karnail Singh Stadium and the Harbax Singh Stadium respectively. The city is also home to the Indian Premier League team Delhi Daredevils, who play their home matches at the Kotla, and was the home to the Delhi Giants team (previously Delhi Jets) of the now defunct Indian Cricket League.
Ambedkar Stadium, a football stadium in Delhi which holds 21,000 people, was the venue for the Indian football team 's World Cup qualifier against UAE on 28 July 2012. Delhi hosted the Nehru Cup in 2007 and 2009, in both of which India defeated Syria 1 -- 0. In the Elite Football League of India, Delhi 's first professional American football franchise, the Delhi Defenders played its first season in Pune. Buddh International Circuit in Greater Noida, a suburb of Delhi, formerly hosted the Formula 1 Indian Grand Prix. The Indira Gandhi Arena is also in Delhi.
Delhi is a member of the Asian Network of Major Cities 21.
Current Regional and Professional Sports Teams from Delhi
Irani Trophy
Vijay Hazare Trophy
Former Regional and Professional Sports Teams from Delhi
In February 2014, the Government of India approved Delhi 's bid for World Heritage City status. The historical city of Shahjahanabad and Lutyens ' Bungalow Zone in New Delhi were cited in the bid. A team from UNESCO was scheduled to visit Delhi in September 2014 to validate its claims. INTACH acted as the nodal agency for the bid. The announcement of accepted cities was to be made in June 2015. However, the Government of India withdrew its nomination on 21 May 2015.
|
what are 4 areas of social responsibility issues | Social responsibility - wikipedia
Social responsibility is an ethical framework which suggests that an entity, be it an organization or an individual, has an obligation to act for the benefit of society at large. Social responsibility is a duty every individual has to perform so as to maintain a balance between the economy and the ecosystems. A trade - off may exist between economic development, in the material sense, and the welfare of the society and environment, though this has been challenged by many reports over the past decade. Social responsibility means sustaining the equilibrium between the two. It pertains not only to business organizations but also to everyone whose actions impact the environment. This responsibility can be passive, by avoiding engaging in socially harmful acts, or active, by performing activities that directly advance social goals. Social responsibility must be intergenerational, since the actions of one generation have consequences for those following.
Businesses can use ethical decision making to secure their businesses by making decisions that allow government agencies to minimize their involvement with the corporation. For instance, if a company follows the United States Environmental Protection Agency (EPA) guidelines for emissions of dangerous pollutants, and even goes an extra step to get involved in the community and address the concerns that the public might have, it would be less likely to have the EPA investigate it for environmental concerns. "A significant element of current thinking about privacy, however, stresses "self - regulation '' rather than market or government mechanisms for protecting personal information ''. According to some experts, most rules and regulations are formed due to public outcry, which threatens profit maximization and therefore the well - being of the shareholder, and if there is no outcry there often will be limited regulation.
Some critics argue that corporate social responsibility (CSR) distracts from the fundamental economic role of businesses; others argue that it is nothing more than superficial window - dressing, or "greenwashing ''; others argue that it is an attempt to pre-empt the role of governments as a watchdog over powerful corporations though there is no systematic evidence to support these criticisms. A significant number of studies have shown no negative influence on shareholder results from CSR but rather a slightly negative correlation with improved shareholder returns.
Student social responsibility is the responsibility of every student for their actions. It is morally binding, and suggests that each individual act in such a way that minimizes the adverse effect on those immediately around them.
Corporate social responsibility, or CSR, has been defined by Lord Holme and Richard Watts in the World Business Council for Sustainable Development 's publication "Making Good Business Sense '' as "... the continuing commitment by business to behave ethically and contribute to economic development while improving the quality of life of the workforce and their families as well as the local community and society at large. '' CSR is one of the newest management strategies whereby companies try to create a positive impact on society while doing business. Evidence suggests that CSR taken on voluntarily by companies will be much more effective than CSR mandated by governments. There is no clear - cut definition of what CSR comprises. Every company has different CSR objectives, though the main motive is the same. All companies have a two - point agenda -- to improve qualitatively (the management of people and processes) and quantitatively (the impact on society). The second is as important as the first, and stakeholders of every company are increasingly taking an interest in "the outer circle '' -- the activities of the company and how these are impacting the environment and society. The other motive behind this is that companies should not be focused only on maximization of profits.
While many corporations include social responsibility in their operations, it is still important for those procuring the goods and services to ensure the products are socially sustainable. Verification tools are available from a multitude of entities internationally, such as the Underwriters Laboratories environmental standards, BIFMA, BioPreferred, and Green Seal. These resources help corporations and their consumers identify potential risks associated with a product 's lifecycle and enable end users to confirm the corporation 's practices adhere to social responsibility ideals.
One common view is that scientists and engineers are morally responsible for the negative consequences which result from the various applications of their knowledge and inventions. After all, if scientists and engineers take personal pride in the many positive achievements of science and technology, why should they be allowed to escape responsibility for the negative consequences related to the use or abuse of scientific knowledge and technological innovations? Furthermore, scientists and engineers have a collective responsibility for the choice and conduct of their work. Committees of scientists and engineers are often involved in the planning of governmental and corporate research programs, including those devoted to the development of military technologies and weaponry. Many professional societies and national organizations, such as the National Academy of Science and the National Academy of Engineering in the United States, have ethical guidelines (see Engineering ethics and Research ethics for the conduct of scientific research and engineering). Clearly, there is recognition that scientists and engineers, both individually and collectively, have a special and much greater responsibility than average citizens with respect to the generation and use of scientific knowledge.
Unfortunately, it has been pointed out that the situation is not that simple and scientists and engineers should not be blamed for all the evils created by new scientific knowledge and technological innovations. First, there is the common problem of fragmentation and diffusion of responsibility. Because of the intellectual and physical division of labor, the resulting fragmentation of knowledge, the high degree of specialization, and the complex and hierarchical decision - making process within corporations and government research laboratories, it is exceedingly difficult for individual scientists and engineers to control the applications of their innovations. This fragmentation of both work and decision - making results in fragmented moral accountability, often to the point where "everybody involved was responsible but none could be held responsible. ''
Another problem is ignorance. The scientists and engineers can not predict how their newly generated knowledge and technological innovations may be abused or misused for destructive purposes in the near or distant future. While the excuse of ignorance is somewhat acceptable for those scientists involved in very basic and fundamental research where potential applications can not be even envisioned, the excuse of ignorance is much weaker for scientists and engineers involved in applied scientific research and technological innovation since the work objectives are well known. For example, most corporations conduct research on specific products or services that promise to yield the greatest possible profit for share - holders. Similarly, most of the research funded by governments is mission - oriented, such as protecting the environment, developing new drugs, or designing more lethal weapons. In all cases where the application of scientific knowledge and technological innovation is well known a priori, it is impossible for a scientist or engineer to escape responsibility for research and technological innovation that is morally dubious. As John Forge writes in Moral Responsibility and the Ignorant Scientist: "Ignorance is not an excuse precisely because scientists can be blamed for being ignorant. ''
Another point of view is that responsibility falls on those who provide the funding for the research and technological developments, which in most cases are corporations and government agencies. Furthermore, because taxpayers provide indirectly the funds for government - sponsored research, they and the politicians that represent them, i.e., society at large, should be held accountable for the uses and abuses of science. Compared to earlier times when scientists could often conduct their own research independently, today 's experimental research requires expensive laboratories and instrumentation, making scientists dependent on those who pay for their studies.
Quasi-legal instruments, or soft law principles, have received some normative status in relation to private and public corporations in the United Nations Educational, Scientific and Cultural Organization (UNESCO) Universal Declaration on Bioethics and Human Rights, developed by the UNESCO International Bioethics Committee, particularly in relation to child and maternal welfare (Faunce and Nasu 2009). The International Organization for Standardization will "encourage voluntary commitment to social responsibility and will lead to common guidance on concepts, definitions and methods of evaluation. ''
|
2001 a space odyssey what does the ending mean | Interpretations of 2001: a space Odyssey - wikipedia
Since its premiere in 1968, the film 2001: A Space Odyssey has been analysed and interpreted by numerous people, ranging from professional movie critics to amateur writers and science fiction fans. The director of the film, Stanley Kubrick, and the writer, Arthur C. Clarke, wanted to leave the film open to philosophical and allegorical interpretation, purposely presenting the final sequences of the film without the underlying thread being apparent; a concept illustrated by the final shot of the film, which contains the image of the embryonic "Starchild ''.
Kubrick encouraged people to explore their own interpretations of the film, and refused to offer an explanation of "what really happened '' in the movie, preferring instead to let audiences embrace their own ideas and theories. In a 1968 interview with Playboy, Kubrick stated:
You 're free to speculate as you wish about the philosophical and allegorical meaning of the film -- and such speculation is one indication that it has succeeded in gripping the audience at a deep level -- but I do n't want to spell out a verbal road map for 2001 that every viewer will feel obligated to pursue or else fear he 's missed the point.
Neither of the two creators equated openness to interpretation with meaninglessness, although it might seem that Clarke implied as much when he stated, shortly after the film 's release, "If anyone understands it on the first viewing, we 've failed in our intention. '' When told of the comment, Kubrick said "I believe he made it (the comment) facetiously. The very nature of the visual experience in 2001 is to give the viewer an instantaneous, visceral reaction that does not -- and should not -- require further amplification. '' When told that Kubrick had called his comment ' facetious ', Clarke responded
I still stand by this remark, which does not mean one ca n't enjoy the movie completely the first time around. What I meant was, of course, that because we were dealing with the mystery of the universe, and with powers and forces greater than man 's comprehension, then by definition they could not be totally understandable. Yet there is at least one logical structure -- and sometimes more than one -- behind everything that happens on the screen in "2001 '', and the ending does not consist of random enigmas, some critics to the contrary.
In a subsequent discussion of the film with Joseph Gelmis, Kubrick said his main aim was to avoid "intellectual verbalization '' and reach "the viewer 's subconscious ''. He said he did not deliberately strive for ambiguity, that it was simply an inevitable outcome of making the film non-verbal, though he acknowledged that this ambiguity was an invaluable asset to the film. He was willing then to give a fairly straightforward explanation of the plot on what he called the "simplest level '', but unwilling to discuss the metaphysical interpretation of the film which he felt should be left up to the individual viewer.
Arthur C. Clarke 's novel of the same name was developed simultaneously with the film, though published after its release. It seems to explain the ending of the film more clearly. Clarke 's novel explicitly identifies the monolith as a tool created by extraterrestrials that has been through many stages of evolution, moving from organic forms, through biomechanics, and finally has achieved a state of pure energy. The book explains the monolith much more specifically than the movie, depicting the first (on Earth) as a device capable of inducing a higher level of consciousness by directly interacting with the brain of pre-humans approaching it, the second (on the Moon) as an alarm signal designed to alert its creators that humanity had reached a sufficient technological level for space travel, and the third (near Jupiter in the movie but on a satellite of Saturn in the novel) as a gateway or portal to allow travel to other parts of the galaxy. It depicts Bowman traveling through some kind of interstellar switching station which the book refers to as "Grand Central, '' in which travelers go into a central hub and then are routed to their individual destinations. The book also depicts a crucial utterance by Bowman when he enters the portal via the monolith; his last statement is "Oh my God -- it 's full of stars! '' This statement is not shown in the movie, but becomes crucial in the film based on the sequel, 2010: The Year We Make Contact.
The book reveals that these aliens travel the cosmos assisting lesser species to take evolutionary steps. Bowman explores the hotel room methodically, and deduces that it is a kind of zoo created by aliens -- fabricated from information derived from television transmissions from Earth intercepted by the TMA - 1 monolith -- in which he is being studied by the invisible alien entities. He examines some food items provided for him, and notes that they are edible, yet clearly not made of any familiar substance from Earth. Kubrick 's film leaves all this unstated.
Physicist Freeman Dyson urged those baffled by the film to read Clarke 's novel:
After seeing Space Odyssey, I read Arthur Clarke 's book. I found the book gripping and intellectually satisfying, full of the tension and clarity which the movie lacks. All the parts of the movie that are vague and unintelligible, especially the beginning and the end, become clear and convincing in the book. So I recommend to my middle - aged friends who find the movie bewildering that they should read the book; their teenage kids do n't need to.
Clarke himself used to recommend reading the book, saying "I always used to tell people, ' Read the book, see the film, and repeat the dose as often as necessary ' '', although, as his biographer Neil McAleer points out, he was promoting sales of his book at the time. Elsewhere he said, "You will find my interpretation in the novel; it is not necessarily Kubrick 's. Nor is his necessarily the ' right ' one -- whatever that means. ''
Film critic Penelope Houston noted in 1971 that the novel differs in many key respects from the film, and as such perhaps should not be regarded as the skeleton key to unlock it.
Stanley Kubrick was less inclined to cite the book as a definitive interpretation of the film, but he also frequently refused to discuss any possible deeper meanings during interviews. During an interview with Joseph Gelmis in 1969 Kubrick explained:
It 's a totally different kind of experience, of course, and there are a number of differences between the book and the movie. The novel, for example, attempts to explain things much more explicitly than the film does, which is inevitable in a verbal medium. The novel came about after we did a 130 - page prose treatment of the film at the very outset. This initial treatment was subsequently changed in the screenplay, and the screenplay in turn was altered during the making of the film. But Arthur took all the existing material, plus an impression of some of the rushes, and wrote the novel. As a result, there 's a difference between the novel and the film... I think that the divergencies between the two works are interesting. Actually, it was an unprecedented situation for someone to do an essentially original literary work based on glimpses and segments of a film he had not yet seen in its entirety.
Author Vincent LoBrutto, in Stanley Kubrick: A Biography, was inclined to note creative differences leading to a separation of meaning for book and film:
The film took on its own life as it was being made, and Clarke became increasingly irrelevant. Kubrick could probably have shot 2001 from a treatment, since most of what Clarke wrote, in particular some windy voice - overs which explained the level of intelligence reached by the ape men, the geological state of the world at the dawn of man, the problems of life on the Discovery and much more, was discarded during the last days of editing, along with the explanation of HAL 's breakdown.
In an interview for Rolling Stone magazine, Kubrick said "On the deepest psychological level the film 's plot symbolizes the search for God, and it finally postulates what is little less than a scientific definition of God... The film revolves around this metaphysical conception, and the realistic hardware and the documentary feelings about everything were necessary in order to undermine your built - in resistance to the poetical concept. ''
When asked by Eric Nordern in Kubrick 's interview with Playboy if 2001: A Space Odyssey was a religious film, Kubrick elaborated:
I will say that the God concept is at the heart of 2001 but not any traditional, anthropomorphic image of God. I do n't believe in any of Earth 's monotheistic religions, but I do believe that one can construct an intriguing scientific definition of God, once you accept the fact that there are approximately 100 billion stars in our galaxy alone, that each star is a life - giving sun and that there are approximately 100 billion galaxies in just the visible universe. Given a planet in a stable orbit, not too hot and not too cold, and given a few billion years of chance chemical reactions created by the interaction of a sun 's energy on the planet 's chemicals, it 's fairly certain that life in one form or another will eventually emerge. It 's reasonable to assume that there must be, in fact, countless billions of such planets where biological life has arisen, and the odds of some proportion of such life developing intelligence are high. Now, the Sun is by no means an old star, and its planets are mere children in cosmic age, so it seems likely that there are billions of planets in the universe not only where intelligent life is on a lower scale than man but other billions where it is approximately equal and others still where it is hundreds of thousands of millions of years in advance of us. When you think of the giant technological strides that man has made in a few millennia -- less than a microsecond in the chronology of the universe -- can you imagine the evolutionary development that much older life forms have taken? They may have progressed from biological species, which are fragile shells for the mind at best, into immortal machine entities -- and then, over innumerable eons, they could emerge from the chrysalis of matter transformed into beings of pure energy and spirit. Their potentialities would be limitless and their intelligence ungraspable by humans.
In the same interview, he also blames the poor critical reaction to 2001 as follows:
Perhaps there is a certain element of the lumpen literati that is so dogmatically atheist and materialist and Earth - bound that it finds the grandeur of space and the myriad mysteries of cosmic intelligence anathema.
The film has been seen by many people not only as a literal story about evolution and space adventures, but as an allegorical representation of aspects of philosophical, religious or literary concepts.
Friedrich Nietzsche 's philosophical tract Thus Spoke Zarathustra, about the potential of mankind, is directly referred to by the use of Richard Strauss 's musical piece of the same name. Nietzsche writes that man is a bridge between the ape and the Übermensch. In an interview in the New York Times, Kubrick gave credence to interpretations of 2001 based on Zarathustra when he said: "Somebody said man is the missing link between primitive apes and civilized human beings. You might say that is inherent in the story too. We are semicivilized, capable of cooperation and affection, but needing some sort of transfiguration into a higher form of life. Man is really in a very unstable condition. '' Moreover, in the chapter Of the Three Metamorphoses, Nietzsche identifies the child as the last step before the Uberman (after the camel and the lion), lending further support to this interpretation in light of the ' star - child ' who appears in the final scenes of the movie.
Donald MacGregor has analysed the film in terms of a different work, The Birth of Tragedy, in which Nietzsche refers to the human conflict between the Apollonian and Dionysian modes of being. The Apollonian side of man is rational, scientific, sober, and self - controlled. For Nietzsche a purely Apollonian mode of existence is problematic, since it undercuts the instinctual side of man. The Apollonian man lacks a sense of wholeness, immediacy, and primal joy. It is not good for a culture to be either wholly Apollonian or Dionysian. While the world of the apes at the beginning of 2001 is Dionysian, the world of travel to the moon is wholly Apollonian, and HAL is an entirely Apollonian entity. Kubrick 's film came out just a year before the Woodstock rock festival, a wholly Dionysian affair. MacGregor argues that David Bowman in his transformation has regained his Dionysian side.
The conflict between humanity 's internal Dionysus and Apollo has been used as a lens through which to view many other Kubrick films especially A Clockwork Orange, Dr. Strangelove, Lolita, and Eyes Wide Shut.
2001 has also been described as an allegory of human conception, birth, and death. In part, this can be seen through the final moments of the film, which are defined by the image of the "star child '', an in utero fetus that draws on the work of Lennart Nilsson. The star child signifies a "great new beginning '', and is depicted naked and ungirded, but with its eyes wide open.
New Zealand journalist Scott MacLeod sees parallels between the spaceship 's journey and the physical act of conception. We have the long, bulb - headed spaceship as a sperm, and the destination planet Jupiter (or the monolith floating near it) as the egg, and the meeting of the two as the trigger for the growth of a new race of man (the "star child ''). The lengthy pyrotechnic light show witnessed by David Bowman, which has puzzled many reviewers, is seen by MacLeod as Kubrick 's attempt at visually depicting the moment of conception, when the "star child '' comes into being.
Taking the allegory further, MacLeod argues that the final scenes in which Bowman appears to see a rapidly ageing version of himself through a "time warp '' is actually Bowman witnessing the withering and death of his own species. The old race of man is about to be replaced by the "star child '', which was conceived by the meeting of the spaceship and Jupiter. MacLeod also sees irony in man as a creator (of HAL) on the brink of being usurped by his own creation. By destroying HAL, man symbolically rejects his role as creator and steps back from the brink of his own destruction.
Similarly, in his book, The Making of Kubrick 's 2001, author Jerome Agel puts forward the interpretation that Discovery One represents both a body (with vertebrae) and a sperm cell, with Bowman being the "life '' in the cell which is passed on. In this interpretation, Jupiter represents both a female and an ovum.
An extremely complex three - level allegory is proposed by Leonard F. Wheat in his book, Kubrick 's 2001: A Triple Allegory. Wheat states that, "Most... misconceptions (of the film) can be traced to a failure to recognize that 2001 is an allegory -- a surface story whose characters, events, and other elements symbolically tell a hidden story... In 2001 's case, the surface story actually does something unprecedented in film or literature: it embodies three allegories. '' According to Wheat, the three allegories are:
Wheat uses acronyms, as evidence to support his theories. For example, of the name Heywood R. Floyd, he writes "He suggests Helen -- Helen of Troy. Wood suggests wooden horse -- the Trojan Horse. And oy suggests Troy. '' Of the remaining letters, he suggests "Y is Spanish for and. R, F, and L, in turn, are in ReFLect. '' Finally, noting that D can stand for downfall, Wheat concludes that Floyd 's name has a hidden meaning: "Helen and Wooden Horse Reflect Troy 's Downfall ''.
As with many elements of the film, the iconic monolith has been subject to countless interpretations, including religious, alchemical, historical, and evolutionary. To some extent, the very way in which it appears and is presented allows the viewer to project onto it all manner of ideas relating to the film. The monolith in the movie seems to represent, and even trigger, epic transitions in the history of human evolution -- the evolution of man from ape - like beings to civilised people -- hence the odyssey of mankind.
Vincent LoBrutto 's biography of Kubrick notes that for many, Clarke 's novel is the key to understanding the monolith. Similarly, Geduld observes that "the monolith... has a very simple explanation in Clarke 's novel '', though she later asserts that even the novel does not fully explain the ending.
Rolling Stone reviewer Bob McClay sees the film as a four - movement symphony, its story told with "deliberate realism ''. Carolyn Geduld believes that what "structurally unites all four episodes of the film '' is the monolith, the film 's largest and most unresolvable enigma. Each time the monolith is shown, man transcends to a different level of cognition, linking the primeval, futuristic and mystic segments of the film. McClay 's Rolling Stone review notes a parallelism between the monolith 's first appearance in which tool usage is imparted to the apes and the completion of "another evolution '' in the fourth and final encounter with the monolith. In a similar vein, Tim Dirks ends his synopsis saying "The cyclical evolution from ape to man to spaceman to angel - starchild - superman is complete ''.
The monolith appears four times in 2001: A Space Odyssey: on the African savanna, on the moon, in space orbiting Jupiter, and near Bowman 's bed before his transformation. After the first encounter with the monolith, we see the leader of the apes have a quick flashback to the monolith after which he picks up a bone and uses it to smash other bones. Its usage as a weapon enables his tribe to defeat the other tribe of apes occupying the water hole who have not learned how to use bones as weapons. After this victory, the ape - leader throws his bone into the air, after which the scene shifts to an orbiting weapon four million years later, implying that the discovery of the bone as a weapon inaugurated human evolution, hence the much more advanced orbiting weapon 4 million years later.
The first and second encounters of humanity with the monolith have visual elements in common; both apes, and later astronauts, touch the monolith gingerly with their hands, and both sequences conclude with near - identical images of the sun appearing directly over the monolith (the first with a crescent moon adjacent to it in the sky, the second with a near - identical crescent Earth in the same position), both echoing the sun -- earth -- moon alignment seen at the very beginning of the film. The second encounter also suggests the triggering of the monolith 's radio signal to Jupiter by the presence of humans, echoing the premise of Clarke 's source story "The Sentinel ''.
In the most literal narrative sense, as found in the concurrently written novel, the Monolith is a tool, an artifact of an alien civilisation. It comes in many sizes and appears in many places, always in the purpose of advancing intelligent life. Arthur C. Clarke has referred to it as "the alien Swiss Army Knife ''; or as Heywood Floyd speculates in 2010, "an emissary for an intelligence beyond ours. A shape of some kind for something that has no shape. ''
The fact that the first tool used by the protohumans is a weapon to commit murder is only one of the challenging evolutionary and philosophic questions posed by the film. The tool 's link to the present day is made by the famous graphic match from the bone / tool flying into the air, to a weapon orbiting the earth. At the time of the movie 's making, the space race was in full swing, and the use of space and technology for war and destruction was seen as a great challenge of the future.
But the use of tools also allowed mankind to survive and flourish over the next 4 million years, at which point the monolith makes its second appearance, this time on the Moon. Upon excavation, after remaining buried beneath the lunar surface for 4 million years, the monolith is examined by humans for the first time, and it emits a powerful radio signal -- the target of which becomes Discovery One 's mission.
In reading Clarke or Kubrick 's comments, this is the most straightforward of the monolith 's appearances. It is "calling home '' to say, in effect, "they 're here! '' Some species visited long ago has not only evolved intelligence, but intelligence sufficient to achieve space travel. Humanity has left its cradle, and is ready for the next step. This is the point of connection with Clarke 's earlier short story, "The Sentinel '', originally cited as the basis for the entire film.
The third appearance of a monolith marks the beginning of the film 's most cryptic and psychedelic sequence, and interpretations of the last two monolith appearances are as varied as the film 's viewers. Is it a "star gate, '' some giant cosmic router or transporter? Are all of these visions happening inside Bowman 's mind? And why does he wind up in some cosmic hotel suite at the end of it?
According to Michael Hollister in his book Hollyworld, the path beyond the infinite is introduced by the vertical alignment of planets and moons with a perpendicular monolith forming a cross, as if the astronaut is about to become a new saviour. Bowman lives out his years alone in a neoclassical room, brightly lit from underneath, that evokes the Age of Enlightenment, decorated with classical art.
As Bowman 's life quickly passes in this neoclassical room, the monolith makes its final appearance: standing at the foot of his bed as he approaches death. He raises a finger toward the monolith, a gesture that alludes to the Michelangelo painting of The Creation of Adam, with the monolith representing God.
The monolith is the subject of the film 's final line of dialogue (spoken at the end of the "Jupiter Mission '' segment): "Its origin and purpose still a total mystery ''. Reviewers McClay and Roger Ebert have noted that the monolith is the main element of mystery in the film, Ebert writing of "The shock of the monolith 's straight edges and square corners among the weathered rocks '', and describing the apes warily circling it as prefiguring man reaching "for the stars ''. Patrick Webster suggests the final line relates to how the film should be approached as a whole, noting "The line appends not merely to the discovery of the monolith on the moon, but to our understanding of the film in the light of the ultimate questions it raises about the mystery of the universe. ''
Gerard Loughlin claimed in a 2003 book that the monolith is Kubrick 's representation of the cinema screen itself: "it is a cinematic conceit, for turn the monolith on its side and one has the letterbox of the cinemascope screen, the blank rectangle on which the star - child appears, as does the entirety of Kubrick 's film. '' The internet - based film critic Rob Ager later produced a video essay also espousing this theory. The academic Dan Leberg complained that Ager had not credited Loughlin.
The HAL 9000 has been compared to Frankenstein 's monster. HAL is an artificial intelligence, a sentient, synthetic, life form. According to John Thurman, HAL 's very existence is an abomination, much like Frankenstein 's monster. "While perhaps not overtly monstrous, HAL 's true character is hinted at by his physical ' deformity '. Like a Cyclops he relies upon a single eye, examples of which are installed throughout the ship. The eye 's warped wide - angle point - of - view is shown several times -- notably in the drawings of hibernating astronauts (all of whom HAL will later murder). ''
Kubrick underscores the Frankenstein connection with a scene that virtually reproduces the style and content of a scene from James Whale 's 1931 Frankenstein. The scene in which Frankenstein 's monster is first shown on the loose is borrowed to depict the first murder by HAL of a member of Discovery One 's crew -- the empty pod, under HAL 's control, extends its arms and "hands '', and goes on a "rampage '' directed towards astronaut Poole. In each case, it is the first time the truly odious nature of the "monster '' can be recognised as such, and only appears about halfway through the film.
Clarke has suggested in interviews, in his original novel, and in a rough draft of the shooting script that HAL 's orders to lie to the astronauts (more specifically, concealing the true nature of the mission) drove him "insane ''. The novel does include the phrase "He (HAL) had been living a lie '' -- a difficult situation for an entity programmed to be as reliable as possible, or at least as desirable as possible, given his programming to "only win 50 % of the time '' at chess so that the human astronauts would feel competitive. Clarke also gives an explanation of the ill effects of HAL being ordered to lie in computer terms as well as psychological terms, stating HAL is caught in a "Mobius feedback loop. ''
While the film remains ambiguous, one can see evidence in it that HAL was instructed to deceive the mission astronauts as to the actual nature of the mission, and that this deception opened a Pandora 's box of possibilities. During a game of chess, although easily victorious over Frank Poole, HAL makes a subtle mistake in the use of descriptive notation to describe a move, and when describing a forced mate, fails to mention moves that Poole could make to delay defeat. Poole is seen to be mouthing his moves to himself during the game, and it is later revealed that HAL can lip read. HAL 's conversation with Dave Bowman just before the diagnostic error of the AE - 35 unit that communicates with Earth is an almost paranoid question and answer session ("Surely one could not be unaware of the strange stories circulating... rumors about something being dug up on the moon... '') in which HAL skirts very close to the pivotal issue about which he is concealing information. When Dave states "You 're working up your crew psychology report, '' HAL takes a few seconds to respond in the affirmative. Immediately following this exchange, he errs in diagnosing the antenna unit. HAL has been introduced to the unique and alien concept of human dishonesty. He does not have a sufficiently layered understanding of human motives to grasp the need for this, and, trudging through the tangled web of complications that lying creates, he falls prey to human error.
The follow - up film 2010 further elaborates Clarke 's explanation of HAL 's breakdown. While HAL was under orders to deny the true mission with the crew, he was programmed at a deep level to be completely accurate and infallible. This conflict between two key directives led to him taking any measures to prevent Bowman and Poole finding out about this deception. Once Poole had been killed, others were eliminated to remove any witnesses to his failure to complete the mission.
One interesting aspect of HAL 's plight, noted by Roger Ebert, is that this supposedly perfect computer actually behaves in the most human fashion of all of the characters. He has reached human intelligence levels, and seems to have developed human traits of paranoia, jealousy and other emotions. By contrast, the human characters act like machines, coolly performing their tasks in a mechanical fashion, whether they are mundane tasks of operating their craft or even under extreme duress as Dave must be following HAL 's murder of Frank. For instance, Frank Poole watches a birthday transmission from his parents with what appears to be complete apathy.
Although the film leaves it mysterious, early script drafts made clear that HAL 's breakdown is triggered by authorities on Earth who order him to withhold information from the astronauts about the purpose of the mission (this is also explained in the film 's sequel 2010). Frederick Ordway, Kubrick 's science advisor and technical consultant, stated that in an earlier script Poole tells HAL there is "... something about this mission that we were n't told. Something the rest of the crew knows and that you know. We would like to know whether this is true '', to which HAL responds: "I 'm sorry, Frank, but I do n't think I can answer that question without knowing everything that all of you know. '' HAL then falsely predicts a failure of the hardware maintaining radio contact with Earth (the source of HAL 's difficult orders) during the broadcast of Frank Poole 's birthday greetings from his parents.
The final script removed this explanation, but it is hinted at when HAL asks David Bowman if Bowman is bothered by the "oddities '' and "tight security '' surrounding the mission. After Bowman concludes that HAL is dutifully drawing up the "crew psychology report '', the computer makes his false prediction of hardware failure. Another hint occurs at the moment of HAL 's deactivation when a video reveals the purpose of the mission.
Stanley Kubrick originally intended, when the film does its famous match - cut from prehistoric bone - weapon to orbiting satellite, that the latter and the three additional satellites seen be established as orbiting nuclear weapons by a voice - over narrator talking about nuclear stalemate. Further, Kubrick intended that the Star Child would detonate the weapons at the end of the film. Over time, Kubrick decided that this would create too many associations with his previous film Dr. Strangelove and he decided not to make it so obvious that they were "war machines ''. Kubrick was also confronted with the fact that, during the production of the film, the US and USSR had agreed not to put any nuclear weapons into outer space by signing the Outer Space Treaty.
Alexander Walker in a book he wrote with Kubrick 's assistance and authorisation, states that Kubrick eventually decided that as nuclear weapons the bombs had "no place at all in the film 's thematic development '', now being an "orbiting red herring '' which would "merely have raised irrelevant questions to suggest this as a reality of the twenty - first century ''.
In the Canadian TV documentary 2001 and Beyond, Clarke stated that not only was the military purpose of the satellites "not spelled out in the film, there is no need for it to be '', repeating later in this documentary that "Stanley did n't want to have anything to do with bombs after Dr. Strangelove ''.
In a New York Times interview in 1968, Kubrick merely referred to the satellites as "spacecraft '', as does the interviewer, but he observed that the match - cut from bone to spacecraft shows they evolved from "bone - as - weapon '', stating "It 's simply an observable fact that all of man 's technology grew out of his discovery of the weapon - tool ''.
Nothing in the film calls attention to the purpose of the satellites. James John Griffith, in a footnote in his book Adaptations As Imitations: Films from Novels, wrote "I would wonder, for instance, how several critics, commenting on the match - cut that links humanity 's prehistory and future, can identify -- without reference to Clarke 's novel -- the satellite as a nuclear weapon ''.
Arthur C. Clarke, in the TV documentary 2001: The Making of a Myth, described the bone - to - satellite sequence in the film, saying "The bone goes up and turns into what is supposed to be an orbiting space bomb, a weapon in space. Well, that is n't made clear, we just assume it 's some kind of space vehicle in a three - million - year jump cut ''. Former NASA research assistant Steven Pietrobon wrote "The orbital craft seen as we make the leap from the Dawn of Man to contemporary times are supposed to be weapons platforms carrying nuclear devices, though the movie does not make this clear. ''
The vast majority of film critics, including noted Kubrick authority Michel Ciment, interpreted the satellites as generic spacecraft (possibly Moon bound).
The perception that the satellites are nuclear weapons persists in the minds of some viewers (and some space scientists), due partly to their appearance and partly to statements by members of the production staff who still refer to them as weapons. Walker, in his book Stanley Kubrick, Director, noted that although the bombs no longer fit in with Kubrick 's revised thematic concerns, "nevertheless from the national markings still visible on the first and second space vehicles we see, we can surmise that they are the Russian and American bombs. ''
Similarly, Walker in a later essay stated that two of the spacecraft seen circling Earth were meant to be nuclear weapons, after asserting that early scenes of the film "imply '' nuclear stalemate. Pietrobon, who was a consultant on 2001 to the Web site Starship Modeler regarding the film 's props, observes small details on the satellites such as Air Force insignia and "cannons ''.
In the film, US Air Force insignia and flag insignia of China and Germany (including what appears to be an Iron Cross) can be seen on three of the satellites, which correspond to three of the bombs ' stated countries of origin in a widely circulated early draft of the script.
Production staff who continue to refer to "bombs '' (in addition to Clarke) include production designer Harry Lange (previously a space industry illustrator), who has since the film 's release shown his original production sketches for all of the spacecraft to Simon Atkinson, who refers to seeing "the orbiting bombs ''. Fred Ordway, the film 's science consultant, sent a memo to Kubrick after the film 's release listing suggested changes to the film, mostly complaining about missing narration and shortened scenes. One entry reads: "Without warning, we cut to the orbiting bombs. And to a short, introductory narration, missing in the present version ''. Multiple production staff aided in the writing of Jerome Agel 's 1970 book on the making of the film, in which captions describe the objects as "orbiting satellites carrying nuclear weapons ''. Actor Gary Lockwood (astronaut Frank Poole), in the audio DVD commentary, says the first satellite is an armed weapon, making the famous match - cut from bone to satellite a "weapon - to - weapon cut ''. Several recent reviews of the film, mostly of the DVD release, refer to armed satellites, possibly influenced by Gary Lockwood 's audio commentary.
A few published works by scientists on the subject of space exploration or space weapons tangentially discuss 2001: A Space Odyssey and assume at least some of the orbiting satellites are space weapons. Indeed, details worked out with input from space industry experts, such as the structure on the first satellite that Pietrobon refers to as a "conning tower '', match the original concept sketch drawn for the nuclear bomb platform. Modelers label them in diverse ways. On the one hand, the 2001 exhibit (given in that year) at the Tech Museum in San Jose and now online (for a subscription) referred merely to "satellites '', while a special modelling exhibition at the exhibition hall at Porte de Versailles in Paris also held in 2001 (called 2001 l'odyssée des maquettes (2001: A Modeler 's Odyssey)) overtly described their reconstructions of the first satellite as the "US Orbiting Weapons Platform ''. Some, but not all, space model manufacturers or amateur model builders refer to these entities as bombs.
The perception that the satellites are bombs persists in the mind of some but by no means all commentators on the film. This may affect one 's reading of the film as a whole. Noted Kubrick authority Michel Ciment, in discussing Kubrick 's attitude toward human aggression and instinct, observes "The bone cast into the air by the ape (now become a man) is transformed at the other extreme of civilization, by one of those abrupt ellipses characteristic of the director, into a spacecraft on its way to the moon. '' In contrast to Ciment 's reading of a cut to a serene "other extreme of civilization '', science fiction novelist Robert Sawyer, speaking in the Canadian documentary 2001 and Beyond, sees it as a cut from a bone to a nuclear weapons platform, explaining that "what we see is not how far we 've leaped ahead, what we see is that today, ' 2001 ', and four million years ago on the African veldt, it 's exactly the same -- the power of mankind is the power of its weapons. It 's a continuation, not a discontinuity in that jump. ''
Kubrick, notoriously reluctant to provide any explanation of his work, never publicly stated the intended functions of the orbiting satellites, preferring instead to let the viewer surmise what their purpose might be.
|
who played violet in saved by the bell | List of Saved by the Bell characters - wikipedia
The American television sitcom Saved by the Bell, that aired on NBC from 1989 to 1993, follows a group of six high school students and their principal, Mr. Belding.
Kelly Kapowski (portrayed by Tiffani Amber Thiessen) is the most popular girl in school and is head cheerleader and captain of the volleyball, swim, and softball teams. Though a good student and role model, Kelly did find herself sentenced to detention on a couple of occasions. She is a big fan of George Michael. She is also the love interest and, later, the wife of Zack Morris.
By the start of her freshman year, Zack had been trying to go out with Kelly for as long as she could remember. For a while, a feud raged between Zack and fellow student A.C. Slater over who would be her boyfriend, which caused her great stress but was also a lot of fun at the same time. Kelly blushed numerous times when both Zack and Slater would hit on her, especially when it was done in front of other students. During her sophomore year, Slater conceded defeat to Zack; she and Zack began dating, and Slater thereupon pursued his interest in Jessie Spano.
The following school year marked the end of their relationship. Zack wanted to go steady with Kelly, but she was not sure at first. She thought it over, but by the time she decided to accept Zack 's offer, he had become infatuated with a young school nurse. That turned out to be a dead - end, but when he tried to apologize to Kelly and get her back, she brushed him off. However, they were back together in the following episode ("Breaking Up is Hard to Undo ''). Kelly began working at The Max as a waitress and fell for her boss, Jeff Hunter (Patrick Muldoon), much to Zack 's disappointment. This caused her and Zack to break up. Kelly dated Jeff for a while, until he was caught at an 18 - and - over club (The Attic) with another girl.
Subsequently, Kelly and Zack became closer friends, though they dated other people. While vacationing in Palm Springs for Jessie Spano 's father 's wedding, Zack and Kelly flirted with the possibility of getting together again but ended up remaining friends.
In her senior year, she is once again pursued by Zack to accompany him to the prom. She agrees, and they get back together. As graduation approached, Kelly stated she could not afford to go to an out - of - state university and instead would attend community college. She and her friends graduated from Bayside High in 1993 and went their separate ways.
In Saved by the Bell: The College Years, Kelly is accepted into California University, joining Zack, Slater, and Screech as suitemates. She also became the new roommate of Leslie Burke and Alex Taber. While at Cal. U., Kelly engaged in an affair with her anthropology professor, Jeremiah Lasky. Zack makes a final attempt to win her over when she decides to go on a "Semester on the Sea '' program through the Mediterranean for three months. Zack proposed to Kelly, and she accepted.
Without much support from their families, Zack and Kelly had planned to get married in Las Vegas in Saved by the Bell: Wedding in Las Vegas. The couple managed to bring Slater, Screech, and Lisa Turtle with them for the event. Just prior to the exchange of vows at the ceremony at a cheap wedding chapel, Zack 's parents showed up and stopped the wedding. They told Zack and Kelly that they would give them the wedding of their dreams. A few days later, Zack and Kelly had an elaborate outdoor wedding in the Las Vegas area. Lisa and Jessie stood as bridesmaids while Screech and Slater stood as groomsmen. The newlyweds then went on a honeymoon.
When Zack and Slater made guest appearances on Saved by the Bell: The New Class a few years later, Slater asked Zack how "the Mrs. '' (meaning Kelly) was doing, and Zack replied, "Good. ''
Samuel "Screech '' Powers (portrayed by Dustin Diamond) is one of only two characters to appear as a series regular from the beginning of the franchise to its end. He appeared in Good Morning, Miss Bliss; Saved by the Bell; Saved by the Bell: The College Years; and Saved by the Bell: The New Class. Screech is seen as the geek of his peer group. While he is clearly intelligent and a high academic achiever, he lacks common sense and social skills. His friends, who are more conventionally attractive and "cooler '' than he is, like him anyway and always include him in their plans.
In Good Morning, Miss Bliss, Screech is much like he is in all of his other incarnations, only younger. He mentions he has a brother in one episode, but this was written out of all subsequent continuity.
In Saved by the Bell, Screech is the only child of an unseen father and an Elvis - obsessed mother (guest star Ruth Buzzi). In addition, he has a dog named Hound Dog, and a robot pal named Kevin, whom he built and programmed himself and who exhibits artificial intelligence. He often exhibits low self - esteem when it comes to girls, in contrast to his more popular friends.
During his years at Bayside High, Screech frequently pursues classmate Lisa Turtle and is consistently turned down by her. However, she does agree to date him in one episode, only to spoil the date by talking through the movie. She also attends senior prom with him, because she is both moved by his honesty about how hurt he is that no one wants to go to the prom with him and impressed that he will stand by an earlier promise not to ask her to the prom.
It is revealed in one episode that Screech is of Italian ancestry.
Screech does eventually end up with a girlfriend, Violet Anne Bickerstaff (played by the then unknown Tori Spelling), and dates her for several episodes, even managing to win back the support of her upper - class parents after losing it on a disastrous dinner date. Violet is then never seen again, without any explanation.
Screech is frequently roped into scams by his best friend, Zack. As a running gag, he often unwittingly sabotages them. In spite of all his faults, Screech is well liked by his friends.
Upon graduating, Screech and his friends Zack, Slater, and Kelly, attend California University in Saved by the Bell: The College Years. Screech remains there until his sophomore year, when he begins a work - study program at his alma mater, Bayside High, alongside Principal Richard Belding.
In Saved by the Bell: Wedding in Las Vegas, Screech attends Zack and Kelly 's wedding. He served as Zack 's best man in the wedding along with Slater. This is mentioned in an episode of Saved by the Bell: The New Class, which aired around the time of the movie 's release.
From season two of Saved by the Bell: The New Class, Screech initially takes on the role of Belding 's administrative assistant as part of a work - study program. Screech retains the same bumbling tendencies he had as a teenager, often irritating Belding and giving rise to his new what - did - I - do catchphrase "Zoinks! '' (an utterance originally made famous by Shaggy Rogers from Scooby - Doo). Screech remains at Bayside in this capacity until the end of the series. Mr. Belding leaves to pursue a new job at the University of Chattanooga at the end of the series. It 's implied that Screech will take over as Principal, but the series ends without showing this happening.
Albert Clifford "A.C. '' Slater (portrayed by Mario Lopez) is a popular jock who excels in most sports (particularly football and amateur wrestling). An Army brat, he is an outsider, having transferred to Bayside in the first filmed episode (which aired later as a flashback). A.C. mentions that he has been to Bolivia, Italy, Iceland, and Berlin, among other places. He becomes the school 's star athlete, excelling as a wrestler and the quarterback of the football team, but does not excel in the classroom.
His father, Martin (a Major in the U.S. Army), appears in two episodes. His mother, Lorraine, however, is not in any episodes. He also has a younger sister named J.B. whom Zack briefly began dating, much to Slater 's chagrin. In the season four episode "Love Machine, '' his ex-girlfriend from Berlin visits and calls him by his real name, "Albert Clifford ".
Much like Voorhies, Lopez was able to win over the show 's producers, who cast him in a role that was originally written to be Caucasian. This issue of Slater 's ethnicity is addressed in "Slater 's War '' from The College Years, where it is revealed that 25 years earlier A.C. 's father changed his last name from Sanchez to Slater so he could get into the military academy. Although the episode reveals that A.C. does not fluently understand Spanish, A.C. is seen speaking broken Spanish to the kitchen staff of the Malibu Sands Beach club.
At the start of freshman year in Saved by the Bell, Slater arrives as a transfer student, immediately making an enemy of Zack Morris by attempting to make a move on Kelly Kapowski. Both boys end up in detention after schemes to be with Kelly backfire on both of them. While in detention, Slater reveals to Zack that he had transferred in and out of various schools throughout his life due to his father being a Major in the army. Their mutual understanding of a father putting something else before his son would serve as the ice breaker between the two (Zack 's own father was too busy with his job and never paid attention to him or his academics). By the end of the first season, Zack and Slater find themselves bonding over similar social standing, popularity among women, and athletic ability. During Season 2, the rivalry between the two is toned down, and they begin to become friends. Slater grew into the habit of calling Zack "Preppie '' which he at first intended as an ongoing insult, but once they became friends, Slater used the term as an affectionate nickname for Zack. Slater goes on to date feminist Jessie Spano on and off for the remainder of High School.
Throughout the course of the show, Slater and Zack compete for many other girls which often puts a strain on their friendship (one instance culminated in a brawl between the two during their senior year). In the end though, their friendship is more important to them than any girl that would possibly come between them.
In Saved by the Bell: The College Years, Slater is accepted into California University, along with Zack, Screech, and Kelly as suite mates and begins dating Alex Taber, one of Kelly 's roommates. While there, Slater continues his passion for wrestling and works in the University 's main restaurant. Along with Screech, Slater serves as Zack 's best man at his wedding to Kelly in Las Vegas in Saved by the Bell: Wedding in Las Vegas. Slater makes two return guest appearances in Saved by the Bell: The New Class, firstly in "Goodbye, Bayside '' Part 2, along with Zack and Lisa when a millionaire alumnus J. Walter McMillan plans to buy the school and tear it down for the purpose of constructing a condo, and in "Fire at the Max: Part 2 '', where he reminisces on fond memories he had with his friends at the diner, after it was accidentally burned down by student Ryan Parker, who forgot to turn off the Christmas lights.
Jessica "Jessie '' Myrtle Spano (portrayed by Elizabeth Berkley) is a lifelong friend of Zack, Screech, Lisa and Kelly. She and Zack live next door to each other, and Zack regularly visits Jessie by climbing through her window. Her parents are divorced. She lives with her mother, who remarried in the third season, providing Jessie with a stepbrother, Eric (played by Joshua Hoffman) from New York City, seen in only two episodes. Jessie 's father is the owner and manager of the Marriott Desert Sands resort. He also becomes remarried in the hour - long "Palm Springs Weekend '' episode, to a much younger woman with whom Jessie did not initially get along and was against marrying. She eventually accepted Leslie for who she was and becomes friends with her (Jessie 's father is played by George McDaniel, and her new stepmother, Leslie, is played by Barbra Brighton).
Jessie is portrayed as a liberal (she critiqued then - president George H.W. Bush), with strong feminist views. She is often the first to speak up when she feels something is unjust. Although seen as intelligent, Jessie has a somewhat neurotic streak.
Jessie is the class president. In a close election, she initially lost to Zack. However, Zack used underhanded techniques to win the election in order to get a free trip to Washington, D.C., but upon realizing that being class president would be a much better fit for his friend Jessie, Zack stepped down from office, naming Jessie the president. Jessie remains president for the remainder of her time at Bayside.
From sophomore year until the end of the series, Jessie dates athlete A.C. Slater in an "opposites attract '' relationship, which causes friction between the two of them. Slater 's pet name for Jessie is "Mama. '' Jessie and Slater 's relationship is put to the test several times over the course of the series. Jessie also has a brief romantic encounter with Zack when the two are forced to endure a kissing scene in a school play. Additionally, she has a crush on one - time character Graham (played by David Kreigel), with whom she spends cut day. (This last encounter ultimately leads to her and Slater 's breakup).
Most notable is Jessie 's struggle with addiction to over-the - counter caffeine pills, in the episode entitled "Jessie 's Song. '' Pressure from school and social obligations, and her desire to be accepted into Stanford University, drive Jessie to take caffeine pills in order to stay awake. Eventually she gets hooked on the pills and has a breakdown as Zack comes to her rescue.
Another ongoing issue is Jessie 's height. She often considers herself too tall, and has therefore developed a bit of an insecurity complex, especially when it comes to shorter boys, and sometimes superficially judges men based on their height. Additionally, she is known as Jessie "Legs '' Spano to some. Jessie would later stop judging men based on their height when she begins dating Slater.
In an early episode, Jessie reveals that she attended dance camp as a child. As such, she is an experienced dancer. Jessie is also on the swim team. Jessie later joins Kelly and Lisa as cheerleaders (because, according to Jessie, it looks good to colleges).
Before graduation, Jessie has her hopes pinned on becoming valedictorian, but ultimately loses out to Screech. However, Screech realizes how important it is to her, and concedes the honor to Jessie. Jessie soon realizes just how much valedictorian meant to Screech and gives it back to him.
After graduation, Jessie goes off to attend Columbia University.
In Saved by the Bell: Wedding in Las Vegas, Jessie, with Lisa, returns as one of Kelly 's bridesmaids at Zack and Kelly 's wedding in Las Vegas. This was her final appearance to date in the show.
Lisa Marie Turtle (portrayed by Lark Voorhies) is the rich girl of the group, whose parents both worked as physicians. She is frequently seen wearing designer clothing.
In her first appearance in Good Morning, Miss Bliss, Lisa plays a more secondary role than she would in later incarnations of the series. She attends John F. Kennedy Junior High with Zack Morris and Samuel "Screech '' Powers.
Throughout her high school years in Saved by the Bell, Lisa spends most of her time fighting off the affections of Screech, who is relentless in pursuing her (and eventually wins at least one date with her) and has been doing so since they were in kindergarten. Despite her irritation with Screech, she values him as a friend, and eventually comes to admire Screech 's nobility when he abdicates his position as the class valedictorian in favor of Jessie Spano, who had a slightly lower grade point average than he did, but valued the position more. Lisa even goes to the senior prom with Screech: feeling sorry that no one will go with him, and after he promises to respect her comfort zone and not ask her, she decides that she wants him to be her prom date. Most of Lisa 's time is spent in the company of schoolmates and fellow cheerleaders Kelly Kapowski and Jessie Spano.
Unlike the rest of her friends, Lisa does not have a long - term relationship during high school (despite having countless dates), although she has a short term romance with Zack Morris in the episode "The Bayside Triangle, '' much to the chagrin of Screech. To keep his friendship with Screech, Zack must break up with Lisa. However, this is forgotten about in following episodes. "Screech 's Spaghetti Sauce '' involves Screech on a date with a girl who is impressed by him, causing Lisa to be in a state of disbelief.
Upon graduating, Lisa is accepted into the Fashion Institute of Technology in New York City. Lisa only makes one guest appearance in Saved by the Bell: The College Years when she visits her former classmates Zack, Screech, Kelly, and A.C. Slater at California University in the episode "Wedding Plans '', after hearing of Zack and Kelly 's engagement. In Saved by the Bell: Wedding in Las Vegas, Lisa and Jessie were Kelly 's bridesmaids for her wedding to Zack. Lisa returns to Bayside High along with Zack and Slater in "Goodbye Bayside: Part 2 '' (from Saved by the Bell: The New Class), where they are reunited with Screech (now working at Bayside on a work - study program) and their principal Mr. Belding when the school is under threat of being torn down by an alumnus.
Mr. Richard Belding (portrayed by Dennis Haskins) is the principal of Bayside High School. In his first appearance in the pilot of Good Morning, Miss Bliss, Mr. Belding 's first name was Gerald, and the character was played by Oliver Clark. Other than Miss Bliss, the character of Mr. Belding was the only one carried over when the show began production. The character also appeared in Saved by the Bell: The New Class and guest starred in "A Thanksgiving Story '' (from Saved by the Bell: The College Years). He is one of only two characters to remain from the beginning of the franchise until the end (the other being Samuel "Screech '' Powers). Mr. Belding 's name is a pun referencing the onomatopoeia "ding '' that comes from a bell (a "bell ding ''). His signature catchphrase is "Hey, hey, hey, hey! What is going on here? '', usually uttered after finding out about something ridiculous occurring at the school.
Mr. Belding 's most frequent nemesis is Zack Morris; however, throughout the series, it is abundantly clear the two like and respect each other, arguably because in "Save the Max '' it is revealed that when Belding was a student himself he was a disc jockey and one who would rebel against school authority, prompting the comment that he was "the Zack Morris of the 1960s ''. On at least one occasion, Belding has even called Zack Morris "the son (he) never had ''. A running joke is that Belding is open about his past failures with women prior to his marriage, such as when he was in the U.S. Army and fell in love with a North Vietnamese girl; as he put it, "even she was with the enemy! '' Belding 's family has appeared in the series on many occasions. The most frequent appearances were by his infant son, named in honor of Zack. When Belding 's wife (Becky) went into labor in an elevator, Zack Morris, along with classmate Tori Scott, helped deliver him.
Much of Belding 's humor comes from his attempts to make jokes or sing, and from being generally tricked and scammed by Zack. Despite Zack 's ideas that he knew Belding inside and out, occasionally Belding would reveal that he had other ideas to deal with issues, thus outfoxing Zack. He was also against the Bayside - Valley High prank war and, in spite of his best efforts to call a truce with Valley High 's principal, the war continued until the end of the cheerleading competition, when one of the students from Valley was exposed.
Belding also has a small rivalry with fellow teacher Mr. Tuttle, who was also in line for the position of Principal at Bayside. He also had a friendly rivalry with Brandon Tartikoff over courtship of Belding 's wife, but Tartikoff conceded that he got the short end of the stick and accepted the eventual presidency of NBC (thus implicitly breaking the fourth wall, since Tartikoff actually was president of NBC at the time) while Mr. Belding was "happily married and got to be principal of a school of great kids. ''
Belding remains the one constant between all the series and in the second season of The New Class, he is reunited with Bayside High alumnus Samuel "Screech '' Powers, who is studying to become a teacher. Screech becomes Mr. Belding 's administrative assistant until the end of the series, when Belding decides to take a job as dean of a college in Tennessee.
In 2012, Dennis Haskins reprised his role as Mr. Belding on the hit Nickelodeon series Victorious in the episode "April Fools Blank ''.
Tori Scott (portrayed by Leanna Creel) arrives at Bayside during senior year, making an immediate enemy of Zack Morris by parking her motorcycle in his usual parking space on her first day. Eventually, things thaw between Tori and Zack and they later begin dating, but they break up in the episode "The Will '' due to Zack 's sexist behavior. After the episode "School Song, '' Tori is never seen or mentioned again.
Mrs. Culpepper is the school 's de facto art teacher (although she is seen teaching history during Cut Day) and made appearances in both junior year and senior year. She is always involved in comical situations in which she bumps into lockers and mistakes inanimate objects for living things due to her bad eyesight. At one point she even accidentally makes Mr. Belding think she is in love with him; when he talks to her about her "crush '' she becomes completely flustered, and later punches him in the face for continuing to "reassure '' her that he has no romantic designs towards her.
Mr. Dewey (Patrick Thomas O'Brien) is a math teacher who also oversaw detention. He represents the archetypal down - on - his - luck, more - bored - than - the - kids, depressed teacher, but seems to be respected by the kids on the grounds that nothing they did bothered or registered very much with him. He speaks in a monotonous voice, is sarcastic, always wears glasses, and once auditioned for American Gladiators. Mr. Dewey also appeared in The New Class.
Dr. Mertz (Avery Schreiber) is a science teacher who seems to want the students to enjoy his class, giving out awards such as a molecule hat to Screech. His science test stresses Kelly because she has concert tickets that she wo n't get to use if she fails. Zack, hoping to get on Kelly 's good side, arranges for Screech to tutor Kelly. The plan backfires when Kelly develops romantic feelings for Screech. Kelly manages to pass Dr. Mertz 's test but finds that without the need for a tutor, she and Screech really do not have much in common.
Played by Raf Mauro, Dickerson is a history teacher known for his unfriendly demeanor and an impossible midterm that no student had passed in three years, something he seems to be very proud of. In the episode "The Fabulous Belding Boys, '' Dickerson has a nervous breakdown and is replaced by the very popular Rod Belding.
Mr. Heimlich (appeared in episode: "Earthquake! '') is the physics teacher. Since the episode revolves primarily around Becky Belding 's baby and the earthquake, he did not have a very large part. He had a German accent and cheerfully assured the class that an upcoming exam would be impossible to pass, but Zack set up a quick in - school baby shower for Becky and they were able to defer that exam to another day.
Nurse Jennifer (played by Nancy Valen) is a school nurse who shows up for only one episode during sophomore year to replace Nurse "Blind - as - a-bat '' Butcher. Zack decides to scrap his plans for a steady relationship with Kelly in order to pursue a romance with Jennifer. However, Jennifer is aware of the plot and manages to frighten Zack away by coming on to him and saying that she needs to escape her violent husband, who is a professional wrestler (which may or may not have been true).
Coach Rizzo, one of the many coaches at Bayside, made his one appearance during the episode "Slater 's Friend '' as a substitute for an English class; his teaching this class is supposed to be funny because he spoke with a heavy Italian accent. He stood up at Artie the Lizard 's funeral and said some kind words about the reptile.
Mrs. Simpson (Pamela Kosh) is the nearly deaf, British - accented teacher who taught English class. She also appeared at the beginning of junior year and embarrassed Kelly and Zack by referring to them as "Bayside 's Most Beloved Couple '' just a short while after they broke up. She said she did n't like Zack, and once wore a hearing aid that she discarded because the titular bell caused painful sound waves to assault her. She also made an appearance in the pilot episode of The New Class.
Coach Sonski (played by comedian Monty Hoffman) is the wrestling coach at Bayside High. He is overweight with a testosterone - laden personality, and has been shown to be a sexist (mainly in "Hold Me Tight ''). He depends on Slater as his star athlete.
Coach Sonski is also shown as Bayside 's auto shop instructor in the episode where Jessie 's new stepbrother comes to California. He retains his sense of humor as well as his macho attitude throughout the episodes, though he once confessed to watching Oprah because he is "sensitive to dames ''.
Tuttle (portrayed by Jack Angeles) is an enthusiastic, overweight teacher (perhaps a foil for the thin, dour Mr. Dewey). He is well - liked by the students but had a mutually unfriendly relationship with Mr. Belding, stemming largely from Tuttle having been runner - up to Belding when Bayside had chosen a new principal. He also served as a leader in the local teachers ' union, a driver 's education teacher, and a music teacher.
Mr. Testaverde (portrayed by John Moschitta) is a history teacher (in the episode "The Gift ''). He is nicknamed "Terrible '' Testaverde by the students because he is one of the hardest teachers at Bayside. Not only is he tough on grading and exams, but he gave lectures in "speed talk '', causing Jessie to almost set fire to the papers she wrote on, and Screech to write with both hands. Both Jessie and Screech actually tried to keep up, but Zack totally ignored the lecture and listened to music instead.
Ms. Wentworth (portrayed by Carol Lawrence) is the Social Studies teacher during sophomore year who piqued Zack 's interest in his family heritage by assigning a family research project. She also taught the class about subliminal advertising, which led to a predictably harebrained scheme from Zack. Ms. Wentworth is one of Bayside 's more enthusiastic and unorthodox teachers. Like Mr. Tuttle, she is one of the few teachers who genuinely seems to like Zack, and seems to be popular with her students, because she is serious about her job but treats them nicely and has a good sense of humor.
Kristy Barnes (Krystee Clark) is a female wrestler (in "Hold Me Tight '' (3.20)) who joins the team despite much protest from Coach Sonski. She and Zack date, and Zack is embarrassed when Kristy beats up a Valley wrestler who is trying to fight with Zack. Although Kristy and Slater 's relationship is purely wrestling - related, Jessie ends up getting jealous as the two spend time together in the gym in close physical contact. After Jessie insults Kristy, she realizes she is wrong about who the female wrestler likes. Jessie joins forces with Zack to convince Kristy to rejoin the team, and Zack resumes dating her. She does not appear at all on the show after this episode, and Zack goes back to his mix of dating adventures and pining for a reunion with Kelly Kapowski.
Violet (Tori Spelling) is a nerdy character (appearing in 3 episodes in seasons 2 & 3) who was originally dating Maxwell Nerdstrom, but she dumped him in favor of Screech (whom she commonly addresses by his real name "Samuel ''), as Maxwell had treated her poorly. In her first appearance, she knocks over Screech 's mother 's statue of Elvis Presley. This leads to a moneymaking scheme by the gang to get $250 for a new statue. Violet comes from an upper class family who disapproves of her relationship with Screech, mainly due to the fact he is from a lower class, despite the fact he and Violet share many common interests. In another episode, Violet is shown to have a beautiful singing voice, but is too shy to sing solo for Bayside 's abysmal school choir until Screech supports her. She then quits the choir after Screech embarrasses her at a disastrous dinner with her folks, but when he comes out and sings opposite her, saving the day for her, it convinces the Bickerstaffs that he is a good guy, and they say they no longer object to him dating their daughter.
Heather (portrayed by Christine Taylor) is a brief love interest of Zack 's (in "S.A.T. 's '' (3.17)). Zack sits behind her while taking the test and invites her over to study when she shows interest in him after he scored 1502 on his S.A.T. 's, although things go downhill for Zack when she invites her boyfriend Bob over for some study help too.
Ginger (portrayed by Bridgette Wilson) is a pretty but vacuous blonde (appearing in 5 season 4 episodes) who dates Zack a few times, but annoys him because she has little to say beyond asking him if she has lipstick on her teeth.
Graham (portrayed by David Kriegel) is a student (appearing in "Cut Day '' (3.23)) who has progressive views that dovetail with Jessie 's, earning her attention when he points out that the U.S. has never had a female President while other countries have (he mentions Indira Gandhi and Golda Meir). During the Junior Cut Day, Jessie and Graham stay behind in school to protest the delivery of a package of Styrofoam cups. It is Graham 's presence, and the fact that Slater sees her having fun hanging out with him, that leads Jessie and Slater to consider dating other people.
Jennifer (portrayed by Stephanie Furst, appearing in "Love Machine '' (4.11)) is revealed to be Slater 's ex-girlfriend from his days as an army brat in West Germany. Zack first dates her at Slater 's behest (Slater did not want Jessie to know he was still attracted to Jennifer), but Jennifer finds out about Slater 's ruse from Screech and dances closely with Zack at the Max, leading Slater to get angry and later to call Jennifer "his girlfriend. '' Jessie storms away in a rage before Kelly tells her she has to let Slater go and see if there is anything to hold onto. While Jessie sadly allows Slater the chance to date his ex to see if the flame is still there, it turns out that the two (Slater and Jennifer) are now incompatible: time has changed them both. Jennifer then asks Zack out and he says he 'd love to go out with her. She does not appear at all on the show after her lone episode.
Joanna (Shana Furlow) is a transfer student (in "The Fight '') who becomes the object of affection for both Zack and Slater, resulting in strife that culminates in a physical fight, in which Zack and Slater duke it out for her affection. Joanna is so disgusted with the boys ' actions that she refuses to have anything to do with either of them.
Rhonda (portrayed by Kirsten Holmquist) is Bayside 's resident tomboy. She is a tall, lanky girl with disheveled blond hair (appearing in 3 season 1 episodes). She and Zack have one date, during which she forcibly plants a kiss on his lips. Rhonda also tries to join the cheerleading squad in "Save That Tiger '' and tries out for the team when one of the regulars is injured right before the big competition against Valley. Her audition is clumsy and the girls eventually talk Jessie into taking the spot.
Robin (portrayed by Soleil Moon Frye, also known as TV 's Punky Brewster) is a stuck - up girl (appearing in "Screech 's Spaghetti Sauce '' (4.3)) who takes advantage of Screech 's affections toward her because of his rapidly growing spaghetti sauce business in order to con money and gifts from him. Screech 's friends are apparently aware of her true nature and try to warn him about Robin. Although Screech showers her with expensive gifts, including several articles of jewelry, she informs him at the end that she believed him to be a geek and a loser. She receives her comeuppance, however, in the form of a bogus buyback deal for Screech 's spaghetti sauce recipe after the gang learns from a lawyer that Screech 's recipe was from the Betsy Crocker Cookbook.
Pete Stonebreaker (Bryan Cooper) is one of the tallest nerds at Bayside (appearing in 2 episodes in seasons 3 & 4) who made his first appearance in an episode during the Malibu Sands storyline as a volleyball player when Screech was trying to recruit some members to replace Gary, an injured volleyball player. He seems to be a leader amongst the nerds, and has been able to fit in socially at times with more popular students.
Bryan Watkins (portrayed by Patrick J. Dancy) is one of Bayside 's more intellectual students and Student Council Vice President (in "Date Auction '' (3.15)). Lisa wins him at the date auction, but he is repulsed by her perceived shallowness and frivolous personality. Although Lisa puts on an intelligent air in order to impress Bryan (e.g. discussing Anna Karenina and the nature of art), she eventually gets fed up with his snobby behavior and habit of putting down her friends and tells him off.
Louise (Lara Lyon) is a nerdy Bayside student with blonde hair in pigtails and horn - rimmed glasses (appearing in 3 episodes in seasons 2 & 4) who is sometimes shown as a friend of Screech 's. She enrolls in the Army JROTC program at Bayside after Zack convinces her that in the US Army men outnumber women. She is placed on Zack 's "nerd team '' in a physical competition to compete against Slater 's better qualified team. Zack learns a lesson about leadership when he must encourage his team to realize their potential, which included telling Louise she could climb a rope. She returns in the episode "1 - 900 - Crushed '', where she confides she has feelings for the jock "Moose '', whose feelings are mutual, but she worries about the mismatched relationship. She is the queen bee of the school 's nerd clique, and Zack goes out on a date with her during his campaign to earn the nod for writing the new school song, knowing she would convince Bayside 's geeks and dweebs to vote for him. Slater tells another high - ranking geek that Zack 's "date '' with Louise could set a precedent where other popular guys begin dating geek girls, which inspired the geeks to cast their votes in the song contest for Screech 's entry.
Moose (portrayed by Mark Clayman) is a very dumb and childlike jock, much like his fellow football teammate Ox (appearing in 3 episodes in seasons 2 & 4). He is seen dating Louise in the episode "1 - 900 - Crushed '', and also has a minor speaking role in the episode "Student - Teacher Week. ''
Nerdstrom (portrayed by Jeffrey Asch) is a rich nerd (appearing in 2 episodes in seasons 2 & 4) who is Violet Bickerstaff 's boyfriend and treats her rather poorly, although he does buy her a gold - plated pocket protector. His poor treatment of her is one of the factors in her becoming Screech 's girlfriend. While most of the nerds comport themselves with a bumbling dignity at most, Nerdstrom goes above and beyond, behaving as a pompous, stuck - up geek. He defeats Zack in a game of poker, but gets his comeuppance in the end. Perhaps Maxwell 's most noted accomplishment at Bayside is mistakenly kissing Screech 's dog, "Hound Dog '', believing him to be Jessie, to everybody 's delight. He is later humiliated when Screech points out that Maxwell kissed his dog and tells Maxwell to stay out of Violet 's life.
Ox (Troy Fromin) is a member of the Bayside football team and an archetypal "dumb jock '' (appearing in 9 episodes in seasons 3 & 4) who pals around with Slater when Slater is without his Core - 6 friends. Despite being a jock, he dates a female nerd, and is often shown to be gentler and more sensitive than his size and oafish behavior would suggest. He is among the students who get drunk senior year at the toga party. A similar character named "Moose '' also appeared on the show.
Wendy (Judy Carmen) is an overweight girl who serves as Student Council treasurer (in "Date Auction '') who bids $100 on Zack to win a date with him to the dance. He is mortified and his superficial nature causes him to craft a multitude of lies in order to avoid a date with the Rubenesque Wendy. However, when she finds out the truth, Wendy calls Zack out for hurting her feelings and calls off the date. He realizes his mistake and ends up taking Wendy to the dance and later to the Max (though this is not shown).
Rod Belding (portrayed by Ed Blatchford) is Mr. Belding 's brother and steps in as a substitute teacher during junior year (Screech: "A building with two Beldings, one of whom is balding! ''), after Mr. Dickerson has a mental breakdown. He initially makes a hip impression on the students because of his happy - go - lucky attitude, world - weary demeanor, and tales of defying authority and schoolwork. Rod arranges to take the students on a rafting trip for their annual class trip (ruining Mr. Belding 's plans to visit Yosemite Park) and attempts to teach them about it. Mr. Belding is overshadowed by his brother 's connection with the students, which is evident in a minor confrontation they have when Mr. Belding tells his brother he is not to be teaching the kids whitewater rafting on official class time. After Zack accuses Mr. Belding of being jealous of Rod, Zack soon finds out what Rod is really like on the inside. During a confrontation between the brothers, Rod tries to skip out of the trip just to meet a stewardess named Inga. Mr. Belding is furious that Rod would abandon his commitment to the students and orders him never to come back to Bayside, saying, "Get out of my school. '' Rod leaves as Zack ducks behind some lockers to avoid his sight. Not wanting to upset his students, Mr. Belding claims that Rod is ill. However, when he offers to chaperone the trip instead, the students happily and gratefully accept. Zack, who had secretly overheard the incident, tells his principal that "we got the better Belding. ''
Becky Belding, Mr. Belding 's wife (portrayed by Louan Gideon), is featured in the episode "The Earthquake '', in which she, Zack and Tori get stuck in an elevator after a minor earthquake. During the time in the elevator she goes into labor and gives birth to the Beldings ' son with the help of Zack, who becomes the baby 's namesake.
Richard Belding 's niece (portrayed by Jodi Peterson) is blonde and bubbly, but no one wants to date her as she is related to the Principal. During Zack 's sophomore year at Bayside, he earns himself a Saturday school detention. In order to be released from serving time and to be able to attend Kelly 's upcoming party, he signs a treaty with Mr. Belding agreeing to take Penny out on a date on Friday in lieu of serving his sentence. After making this agreement, Kelly informs Zack that her birthday party would also be that Friday, since the Max was booked on Saturday. In order to make Kelly 's party, Zack trains Screech to imitate him, so that he could take Penny out on the date instead; this works out well because Penny is attracted to Screech, but she angers Kelly when she says that she is hot for "Zack, '' thinking that was who Screech was.
Richard and Becky Belding 's son, born when Zack, Mrs. Belding, and Tori get stuck in an elevator. He also appears in two episodes of Saved by the Bell: The New Class.
Derek Morris (portrayed by John Sanderford) is the father of Zack. He is a businessman who spends little time with his son. He is very strict and at first opposes Zack 's marriage to Kelly in the TV movie Saved by the Bell: Wedding in Las Vegas.
Melanie Rhonda Morris (portrayed by Melody Rogers) is the mother of Zack. She is more easygoing and less strict than her husband Derek. She adores Zack, but she still knows some of her son 's tricks. In the TV movie Saved by the Bell: Wedding in Las Vegas, she supports Zack 's marriage but, out of respect for her husband, can not do much to aid him.
Frank Kapowski (portrayed by John Mansfield) is the father of Kelly and her six siblings. Kind and loving, he worked at a defense plant, but is laid off in the season two premiere "The Prom ''. He is also seen at Zack and Kelly 's wedding in Saved by the Bell: Wedding in Las Vegas giving Kelly away, and at the reception with his wife and two of Kelly 's younger brothers.
Billy Kapowski is Kelly 's baby brother, whom she places in the care of her friends in the episode "The Babysitters '' so that she could have her cheerleading picture taken for the school yearbook. Zack is often left alone with Billy as the others make excuses or have other commitments to attend to. Because of this, Billy and Zack bond quickly to the point where Zack distrusts anyone else to care for the baby. He is forced, however, to allow Jessie and Lisa to look after him in their Home Economics / Parenthood class. They lose track of Billy when Screech goes to fetch Billy for Zack but mistakes a doll for the baby. The gang eventually find him in Mr. Belding 's office, and at the end of the episode he speaks his first word: "Zack. ''
Kyle, another brother of Kelly 's, disrupts the end of one of Kelly and Zack 's dates. He later dumps water on Zack from the second floor of their house after Kelly decides against going steady.
Nicki Kapowski (portrayed by Laura Mooney) is Kelly 's tomboyish little sister who develops a crush on Zack. She becomes convinced that Zack feels the same way after he mixes up her phone call with Kelly 's (while running the "Teen Line ''). Although she is only thirteen and in the seventh grade, she nonetheless shows up at Bayside High to visit Zack, sporting a more feminine look and demanding a kiss. After trying various ploys to turn her off (including dressing up like a geek and trying to gross her out with a pet spider), Zack finally has to tell her the truth: he is n't interested in her; he is in love with Kelly. More angry than hurt, she insults Zack for trying to scare her away instead of having the courage to tell a thirteen - year - old girl how he really feels.
Harry Bannister (portrayed by Dean Jones) is Kelly 's grandfather from the movie Saved by the Bell: Hawaiian Style. He invites the youngsters to stay at "The Hawaiian Hideaway '', a rustic hotel in Hawaii. However, someone else is out to buy his land and build a hotel / resort complex, and the gang has to save it.
Major Martin Slater (Gerald Castillo) is A.C. 's strict but loving father, who is frequently transferred due to Army obligations. Although he wants his son to attend West Point and enter the military as he himself had done, he eventually changes his views when he sees that Slater wants to attend college on a wrestling scholarship and figure things out from there. He appears more frequently in the first season than in any of the others. It is later revealed in "Saved by the Bell: The College Years '' that Martin 's last name was actually "Sanchez, '' causing an embarrassed A.C. Slater to realize his father concealed his Latino heritage (presumably made easier by his decidedly non-Latino appearance) to avoid rampant bigotry as he entered the military.
J.B. is Slater 's younger sister. She shows up suddenly to visit A.C., and Zack develops a crush on her. This makes A.C. angry, as he knows how big a womanizer Zack is, and he forbids them from seeing each other. Once J.B. learns of this, she confronts Slater angrily and tells him to butt out of her love life. He eventually accepts that her dating his best friend would n't be so bad. He still warns Zack to "treat her right ''. She does n't appear in any other episodes besides "Slater 's Sister. ''
Leslie (Barbra Brighton) is the aerobics instructor (in the 2 - part season 3 episode "Palm Springs Weekend '') as well as Jessie 's future step - mom. Although Zack makes a pass at her in the beginning of the episode, the gang later finds out Leslie is marrying Jessie 's dad. Jessie refers to her future step - mom as an "aerobics bimbo '', insults and lies to her, and asks her dad flat - out not to marry her. Her father firmly says he loves Leslie and is marrying her, period. Jessie then refuses to attend the wedding. At the end of the episode, however, Zack convinces Jessie to do the right thing and they arrive (though slightly late) at the wedding on a golf cart, where Jessie apologizes and Leslie forgives her.
Eric Tramer (Josh Hoffman) is Jessie 's stepbrother from New York (in the 2 - part season 3 episode "The Wicked Step - Brother ''). At first excited to have a new stepbrother, Jessie quickly grows to dislike his rough, confrontational and offensive personality. Eric causes trouble and conflict at Bayside, particularly with Zack who feels threatened by the new prankster bad - boy, even going so far as to offer up Mr. Belding 's car for auto shop class dissection to get Zack into trouble. However, after a punch and a speech from Jessie (after he said she was "just a chick ''), he reforms his ways, entirely reassembles Belding 's car, and apologizes to all. He was planning to return to NYC, but the gang, particularly Lisa, who had developed feelings for him, convinces him to remain at Bayside.
Mr. Turtle (Henry Brown) appears in the episode "The Lisa Card ''. He is Lisa 's father, who gives Lisa her first credit card for ' emergencies only '; however, she spends nearly $400 on clothes with the credit card. At the end of the episode, she confesses that she spent the emergency credit card balance on clothes, and Mr. Turtle forces her to pay the credit card off herself.
Lisa 's mother Judy Turtle (Susan Beaubian) appeared in two episodes of the series, "Operation Zack '' (Season 3) and "Drinking and Driving '' (Season 4). In "Operation Zack '', she is Zack 's doctor after he accidentally breaks his leg when Mr. Belding bumps into him in the locker room. In "Drinking and Driving '', Lisa crashes her car after getting drunk at a party, and has to deal with the repercussions of her actions, including telling Judy what she did to her car.
Roberta Powers (Ruth Buzzi) appeared in "House Party '' (Season 2). She is Screech 's mom, who is a huge Elvis fan. In the episode, she trusts Screech to be home alone for a weekend and gives him a long list of rules to abide by. Her favorite Elvis statue gets broken by Screech 's girlfriend, Violet, and the gang has to throw a party to raise money for the statue replacement.
Kimberly (who was uncredited) is Screech 's cousin who appeared in the episode "The Aftermath '' (Season 3). She was one of Zack 's blind dates, which was intended to cheer Zack up after his rough breakup with Kelly. Slater, Jessie, and Lisa start wondering how Screech is related to her, due to the strong dissimilarities in their appearances, until Screech tells them she is adopted.
Stacey Carosi (portrayed by Leah Remini) appears in 6 episodes of season 3. She works for her father, Leon Carosi, at the Malibu Sands beach resort as a staff manager, in charge of all of the other staff members. Zack Morris is quickly smitten with her, and is shocked and disappointed to learn that she is the daughter of his boss. Throughout the summer, she and Zack progress from flirtation to hostility to eventual romance, marked by their kiss under the fireworks during the Fourth of July celebration.
Leon Carosi (portrayed by Ernie Sabella) is Stacey 's father, who takes a strong dislike to Zack due to his constant snarky remarks and attempted courting of his daughter. He is also the gang 's greedy boss for the summer. He is also very narcissistic and thinks highly of himself and his daughter. However, despite his dislike of Zack, he eventually warms up to him towards the end of Zack 's summer employment and says he approves of Zack and Stacey dating. He is similar to Mr. Belding, who is always trying to get rid of Zack.
Cynthia (portrayed by Denise Richards -- uncredited), is Slater 's secret admirer while he worked at the Malibu Sands Beach Resort (in "The Last Weekend '' (3.12)), eventually revealing herself by pretending to drown. Although she and Slater seem to be a couple at the end of the Malibu Sands series, and Slater even informs the gang that Cynthia is coming to Bayside the following school year, she never again appears on the show.
Craig Strand (Benjamin King) is Stacey Carosi 's preppy boyfriend from back east in Boston, a sophomore at Yale (in "My Boyfriend 's Back '' (3.8)), who shows up at the beach club and interrupts Zack and Stacey 's fling, even giving Stacey his fraternity pin. Craig defeats Zack in an ATV race after bumping Zack 's ATV on the last turn. Ultimately, Stacey sends Craig back home and confides her love for Zack.
Max is the owner of The Max (played by Ed Alonzo, in 20 episodes during the first 2 seasons). He often advises the students of Bayside on their choices and dilemmas, and reinforces his points with magic tricks. He is also the gang 's friend.
Danielle (Julie St. Claire) is a girl that Zack dates for one episode ("Fake I.D. 's ''). Zack meets Danielle at the Max after she gets a flat tire. She attempts to use the occupied pay phone, so Zack allows her to use his cell phone. He then changes the flat tire after the tow truck company says it would take an hour. Danielle is a student at USC, so Zack lies and tells her he was also a Trojan. He continues to lie, telling her that he is a Photojournalism major, and the reason that she never saw him on campus is because he is always in the dark room working on a project. However, Danielle ditches Zack upon learning he lied to her.
Jeff Hunter (played by Patrick Muldoon) has a brief tenure (in three episodes of season 3) as the Max 's manager who likes Kelly Kapowski. As a result, Zack and Kelly break up, allowing Jeff and Kelly to begin a relationship. Unfortunately, very soon afterwards Jeff 's infidelity is revealed when he is seen dancing with and kissing a girl at an over-18 club known as The Attic. Kelly breaks up with him as a result.
James (portrayed by Mark Blankfield) is an out - of - work actor who takes a job as a waiter at the Max (appearing in 2 episodes in seasons 2 & 3). He claims that he was a student at acting school. Most of the work he seems to get as an actor on the show is in the employ of Zack. In the season 3 episode "SATs, '' Zack hires James to do a double role as Stanley Alan Taylor (S.A.T.), who orders Mr. Belding to lighten up Zack 's workload after his grades have dropped, then to play an admissions officer for Harvard who acts snooty and elitist to everyone. In the season 2 episode "Rent - a-Pop, '' Zack hires James to impersonate his father after Belding demands a meeting with Zack 's father to report on Zack 's failing grades as a condition of Zack going on a ski trip; Zack is convinced his real father would not let him go if he knew the truth. Zack then has to hire James again, this time to play Mr. Belding, after his real father receives a letter asking to meet Belding once more, and James falsely reports Zack 's grades as spectacular. James the Actor also made two appearances on Saved By the Bell: The New Class.
Johnny Dakota (portrayed by Eddie Garcia) is a movie star who chooses Bayside as the site for filming an anti-drug PSA for NBC in the 1991 episode "No Hope with Dope ''. Zack thinks this is a great idea, seeing this as a chance for stylish parties and to get close to Hollywood babes. However, during a party at his house, the gang discovers that Johnny is a marijuana user. Zack attempts to get him to call off filming the PSA, feeling it would be hypocritical for Johnny to tell millions of young people not to use drugs when he himself is a drug user, but Johnny refuses. When the gang refuses to participate as a result of his drug habit, Johnny angrily leaves Bayside without filming. The gang then films an anti-drug PSA with the help of Mr. Belding 's friend, Brandon Tartikoff.
Brian Fate (portrayed by Nick Brooks) is a British record producer who picks up Zack Attack and makes them famous in the episode "Rockumentary '' (3.22). Although he has confidence in the band and takes them to the top, he can not stop the inevitable breakup that occurs as a result of Mindy, the band 's publicist. Since the entire episode is just a dream, he may not have existed in the "real world '' of the show.
Linda (portrayed by Bryn Erin -- uncredited) has a very brief role in the "Rockumentary '' episode. After the band splits up and each member goes their separate way, Slater has a race car accident, leading to the band meeting again. Screech shows up (after seeking wisdom from the high geek) to the hospital with a cheerleader, Linda. She takes regular sentences and makes cheers from them, eventually leading Lisa to threaten her: "if ya do n't get ridda Linda, I'ma throw her out the winda '' "B-Y-E BYE! '', after which she leaves the room. Like Brian Fate, she was part of the entire fantasy of the episode, and thus probably did not exist in the real world.
Brandon Tartikoff, a television executive for NBC, appeared as himself in the episode "No Hope With Dope '' (3.21). After Johnny Dakota was revealed to be a pot - smoker and subsequently left Bayside without finishing the anti-drug commercial being filmed there, Mr. Belding phoned Tartikoff, whom he had been childhood friends with, at NBC to see if he could help them. Tartikoff ended up assisting the teens in finishing the commercial. Afterwards, impressed with the rapport Belding had with his students, he suggested the idea of a TV show about "a group of kids and their principal '' before shaking his head and saying that it "would never work '' (an obvious meta - joke on SBTB itself).
Cousins and Valley students (Dan is portrayed by C.W. Hemingway and Stan is portrayed by Mark Clayman) responsible for the Bayside - Valley prank war in the episode "Save That Tiger '' (1.16). After Zack and Slater steal the Valley mascot, Dan and Stan retaliate by kidnapping Screech (who wears the tiger costume and is the Bayside High mascot). However, Dan 's misstep in trying to pose as the tiger mascot (along with Zack and Slater secretly putting fleas inside the costume) makes Valley High lose the cheerleading competition, along with the prank war in the end. This earns him a severe reprimand from his principal. Mark Clayman also played Bayside student "Moose. ''
Allison Fox (portrayed by Hilary Hayes) poses as a reporter from "Chessboy '' Magazine (in "Check Your Mate '' (3.7)). After rejecting Zack 's advances, she claims to be profiling Screech for the state chess championship against Valley. She is later revealed to be the girlfriend of Guy Guy, a student at Valley High with whom Zack had made a $300 bet on the match. After flirting with Screech, she steals the lucky beret given to him by Violet, without which he thinks he can not win the match. However, with a burst of good luck from Violet, Screech wins the match and Allison is exposed for her scam.
Marvin Nedick (Gino De Mauro) is Valley 's star wrestler and a major threat to AC Slater 's undefeated wrestling record (in "Pinned to the Mat '' (1.9)). After career day, Slater takes up cooking and quits the wrestling team, leaving Screech to wrestle Nedick. As Screech prepares to wrestle him, Slater appears. When asked why he changed his mind, Slater says that he burned his quiche. Slater quickly defeats Nedick in the wrestling match.
Lieutenant Chet Adams (portrayed by Cylk Cozart, in the season two episode "Zack 's War '') is the head of Bayside 's California Cadet Corps program. Lisa shows an infatuation towards Lt. Adams (even once inadvertently calling him "Lt. Hot '') and joins the program in an attempt to gain his favor. He then reminds her that he 's married. At the beginning of their training, Adams tells Zack that he can divide the cadets into teams, which prompts Zack to clump together all the "nerds '' into one team and all the "jocks '' onto another team, thinking all the jocks will be on his team. Adams replies by saying, "You get to pick the teams, but I get to pick the captains '' and puts the nerds on Zack 's team and the jocks on Slater 's team. Jessie 's feminist attitude clashes with Adams ' military style, as she asks him to revise his motto "separating the men from the boys '' to "separating the persons from the persons. ''
Frank and Laura Benton (Stephen Mendel and Jennifer McComb) are a homeless father and daughter, respectively (in the 2 - part season 3 episode "Home for Christmas ''), who catch the eye of Zack and the gang around Christmas time. Laura, who works in a department store with Kelly, seems to be a tentative love interest for Zack; she is very shy around Zack about her homelessness. Zack, Slater and Screech get Frank to a hospital when he passes out at the mall from hunger, and then they find out he is Laura 's father. Laura 's boss accuses her of stealing a suit jacket that Kelly set aside to buy, so Frank would have an outfit that he could wear to job interviews. Later, Laura 's boss apologizes and gives Laura the jacket for free. Eventually, the Morris family takes in the Bentons until they can "get back on their feet. ''
Gem Diamond (Gary Beach) is a con artist (in "Class Rings '' (4.12)) who sells Zack fake class rings, which turn the wearer 's finger green. Desperate to replace the bunk rings with the genuine article, Zack and Slater make a plan in which Screech threatens Gem with karate unless he replaces the rings. Gem concedes, and the student body is satisfied with their new jewelry. He is last seen running out of the Max, being driven away by his mother in an attempt to escape from Screech 's moves. Gem also appears in the California Dreams episode "Honest Sly '' as a used car dealer.
Chief Henry (portrayed by Dehl Berti) is an American Indian (in "Running Zack '' (2.13)) with whom Zack bonds during his family history project. Zack initially tries to skip the project because he has a big track meet coming up, and later gives an insulting presentation that leads the teacher to refer him to the Chief. Although they quickly become good friends, Henry dies suddenly, with Zack lamenting that they had hardly had time to get to know each other. Later, Henry 's ghost appears to Zack, giving him a gift and encouraging him to run at the school track meet.
Kevin the Robot (operated and voiced by Mike Lavelle) was a creation of Screech 's with artificial intelligence (appearing in 3 episodes during seasons 1 & 2), who lives in Screech 's room and usually dishes out advice and witty remarks. His appearances include serving as Screech 's assistant when Screech is doing a magic show, and as assistant hall monitor when Screech is nominated for the post by his classmates to make up for forgetting Screech 's birthday.
|
how far is new orleans louisiana from here | New Orleans - Wikipedia
New Orleans (/ njuː ˈɔːrliənz, - ˈɔːrˈliːnz, - ˈɔːrlənz /, or / ˈnɔːrlənz /; French: La Nouvelle - Orléans (la nuvɛl ɔʁleɑ̃)) is a major United States port and the largest city and metropolitan area in the state of Louisiana.
The population of the city was 343,829 as of the 2010 U.S. Census. The New Orleans metropolitan area (New Orleans -- Metairie -- Kenner Metropolitan Statistical Area) had a population of 1,167,764 in 2010 and was the 46th largest in the United States. The New Orleans -- Metairie -- Bogalusa Combined Statistical Area, a larger trading area, had a 2010 population of 1,452,502. Before Hurricane Katrina, Orleans Parish was the most populous parish in Louisiana. As of 2015, it ranks third in population, trailing neighboring Jefferson Parish and East Baton Rouge Parish.
It is well known for its distinct French and Spanish Creole architecture, as well as its cross-cultural and multilingual heritage. New Orleans is also famous for its cuisine, music (particularly as the birthplace of jazz), and its annual celebrations and festivals, most notably Mardi Gras, dating to French colonial times. The city is often referred to as the "most unique '' in the United States.
New Orleans is located in southeastern Louisiana, and developed on both sides of the Mississippi River. The heart of the city and French Quarter is on the north side of the river as it curves through this area. The city and Orleans Parish (French: paroisse d'Orléans) are coterminous. The city and parish are bounded by the parishes of St. Tammany to the north, St. Bernard to the east, Plaquemines to the south, and Jefferson to the south and west. Lake Pontchartrain, part of which is included in the city limits, lies to the north and Lake Borgne lies to the east.
The city is named after the Duke of Orleans, who reigned as Regent for Louis XV from 1715 to 1723, as it was established by French colonists and strongly influenced by their European culture. It also has a number of illustrative nicknames.
La Nouvelle - Orléans (New Orleans) was founded May 7, 1718, by the French Mississippi Company, under the direction of Jean - Baptiste Le Moyne de Bienville, on land inhabited by the Chitimacha. It was named for Philippe II, Duke of Orléans, who was Regent of the Kingdom of France at the time. His title came from the French city of Orléans.
The French colony was ceded to the Spanish Empire in the Treaty of Paris (1763). During the American Revolutionary War, New Orleans was an important port for smuggling aid to the rebels, transporting military equipment and supplies up the Mississippi River. Bernardo de Gálvez y Madrid, Count of Gálvez successfully launched a southern campaign against the British from the city in 1779. New Orleans (Spanish: Nueva Orleans) remained under Spanish control until 1803, when it reverted briefly to French oversight. Nearly all of the surviving 18th - century architecture of the Vieux Carré (French Quarter) dates from the Spanish period, the most notable exception being the Old Ursuline Convent.
Napoleon sold Louisiana (New France) to the United States in the Louisiana Purchase in 1803. Thereafter, the city grew rapidly with influxes of Americans, French, Creoles, and Africans. Later immigrants were Irish, Germans, and Italians. Major commodity crops of sugar and cotton were cultivated with slave labor on large plantations outside the city.
The Haitian Revolution ended in 1804 and established the second republic in the Western Hemisphere and the first republic led by black people. It had occurred over several years in what was then the French colony of Saint - Domingue. Thousands of refugees from the violent revolution, both whites and free people of color (affranchis or gens de couleur libres), arrived in New Orleans, often bringing slaves of African descent with them. While Governor Claiborne and other officials wanted to keep out additional free black men, the French Creoles wanted to increase the French - speaking population. As more refugees were allowed into the Territory of Orleans, Haitian émigrés who had first gone to Cuba also arrived. Many of the white Francophones had been deported by officials in Cuba in retaliation for Bonapartist schemes in Spain.
Nearly 90 percent of these immigrants settled in New Orleans. The 1809 migration brought 2,731 whites; 3,102 free persons of African descent; and 3,226 enslaved persons of African descent, doubling the city 's population. The city became 63 percent black in population, a greater proportion than Charleston, South Carolina 's 53 percent.
During the final campaign of the War of 1812, the British sent a force of 11,000 soldiers, marines, and sailors in an attempt to capture New Orleans. Despite great challenges, General Andrew Jackson, with support from the U.S. Navy on the river, successfully cobbled together a motley military force of militia from Louisiana and Mississippi (including free men of color), U.S. Army regulars, a large contingent of Tennessee state militia, Kentucky riflemen, Choctaw fighters, and local privateers (the latter led by the pirate Jean Lafitte), to decisively defeat the British troops, led by Sir Edward Pakenham, in the Battle of New Orleans on January 8, 1815. The armies had not learned of the Treaty of Ghent, which had been signed on December 24, 1814. (However, the treaty did not call for cessation of hostilities until after both governments had ratified the treaty, and the US government did not ratify it until February 16, 1815.) The fighting in Louisiana had begun in December 1814 and did not end until late January, after the Americans held off the British Navy during a ten - day siege of Fort St. Philip. (The Royal Navy went on to capture Fort Bowyer near Mobile, before the commanders received news of the peace treaty.)
As a principal port, New Orleans played a major role during the antebellum era in the Atlantic slave trade. Its port also handled huge quantities of commodities for export from the interior and imported goods from other countries, which were warehoused and transferred in New Orleans to smaller vessels and distributed the length and breadth of the vast Mississippi River watershed. The river in front of the city was filled with steamboats, flatboats, and sailing ships. Despite its role in the slave trade, New Orleans at the same time had the largest and most prosperous community of free persons of color in the nation, who were often educated and middle - class property owners.
Dwarfing in population the other cities in the antebellum South, New Orleans had the largest slave market in the domestic slave trade, which expanded after the United States ' ending of the international trade in 1808. Two - thirds of the more than one million slaves brought to the Deep South arrived via the forced migration of the domestic slave trade. The money generated by the sale of slaves in the Upper South has been estimated at 15 percent of the value of the staple crop economy. The slaves represented half a billion dollars in property. An ancillary economy grew up around the trade in slaves -- for transportation, housing and clothing, fees, etc., estimated at 13.5 percent of the price per person. All of this amounted to tens of billions of dollars (2005 dollars, adjusted for inflation) during the antebellum period, with New Orleans as a prime beneficiary.
According to the historian Paul Lachance,
After the Louisiana Purchase, numerous Anglo - Americans migrated to the city. The population of the city doubled in the 1830s and by 1840, New Orleans had become the wealthiest and the third-most populous city in the nation. Large numbers of German and Irish immigrants began arriving in the 1840s, working as laborers in the busy port. In this period, the state legislature passed more restrictions on manumissions of slaves, and virtually ended the practice in 1852.
In the 1850s, white Francophones remained an intact and vibrant community; they maintained instruction in French in two of the city 's four school districts (all were white). In 1860, the city had 13,000 free people of color (gens de couleur libres), the class of free, mostly mixed - race people that developed during French and Spanish rule. The census recorded 81 percent as mulatto, a term used to cover all degrees of mixed race. Mostly part of the Francophone group, they constituted the artisan, educated and professional class of African Americans. Most blacks were still enslaved, working at the port, in domestic service, in crafts, and mostly on the many large, surrounding sugar cane plantations.
After growing by 45 percent in the 1850s, by 1860, the city had nearly 170,000 people. The city was a destination for immigrants. It had grown in wealth, with a "per capita income (that) was second in the nation and the highest in the South. '' The city had a role as the "primary commercial gateway for the nation 's booming mid-section. '' The port was the third largest in the nation in terms of tonnage of imported goods, after Boston and New York, handling 659,000 tons in 1859.
As the French Creole elite feared, during the Civil War their world changed. In 1862, following the occupation by the Navy after the Battle of Forts Jackson and St. Philip, Northern forces under Gen. Benjamin F. Butler, a respected state lawyer and general of the Massachusetts militia, occupied the city. New Orleans residents later nicknamed him "Beast '' Butler, because of a military order he issued. After his troops had been assaulted and harassed in the streets by Southern women, his order warned that future such occurrences would result in his men treating such "ladies '' as those "plying their avocation in the streets '', implying that they would treat the women like prostitutes. Accounts of this spread like wildfire across the South and the North. He also came to be called "Spoons '' Butler because of the alleged looting that his troops did while occupying New Orleans.
Butler abolished French language instruction in city schools; statewide measures in 1864 and, after the war, 1868 further strengthened English - only policy imposed by federal representatives. With the predominance of English speakers in the city and state, that language had already become dominant in business and government. By the end of the 19th century, French usage in the city had faded significantly; it was also under pressure from new immigrants: English speakers such as the Irish, and other Europeans, such as the Italians and Germans. However, as late as 1902 "one - fourth of the population of the city spoke French in ordinary daily intercourse, while another two - fourths was able to understand the language perfectly, '' and as late as 1945, one still encountered elderly Creole women who spoke no English. The last major French language newspaper in New Orleans, L'Abeille de la Nouvelle - Orléans (New Orleans Bee), ceased publication on December 27, 1923, after ninety - six years. According to some sources, Le Courrier de la Nouvelle Orleans continued until 1955.
As the city was captured and occupied early in the war, it was spared the destruction through warfare suffered by many other cities of the American South. The Union Army eventually extended its control north along the Mississippi River and along the coastal areas of the State. As a result, most of the southern portion of Louisiana was originally exempted from the liberating provisions of the 1863 "Emancipation Proclamation '' issued by President Abraham Lincoln. Large numbers of rural ex-slaves and some free people of color from the city volunteered for the first regiments of Black troops in the War. Led by Brig. Gen. Daniel Ullman (1810 -- 1892), of the 78th Regiment of New York State Volunteers Militia, they were known as the "Corps d'Afrique. '' While that name had been used by a militia before the war, that group was composed of free people of color. The new group was made up mostly of former slaves. They were supplemented in the last two years of the War by newly organized United States Colored Troops, who played an increasingly important part in the war.
Violence throughout the South, especially the Memphis Riots of 1866 followed by the New Orleans Riot in July of that year, led Congress to pass the Reconstruction Act and the Fourteenth Amendment, extending the protections of full citizenship to freedmen and free people of color. Louisiana and Texas were put under the authority of the "Fifth Military District" of the United States during Reconstruction. Louisiana was readmitted to the Union in 1868; its Constitution of 1868 granted universal manhood suffrage and established universal public education. Both blacks and whites were elected to local and state offices. In 1872, Lieutenant Governor P.B.S. Pinchback, who was of mixed race, succeeded Henry Clay Warmoth for a brief period as Republican governor of Louisiana, becoming the first governor of African descent of an American state. (The next African American to serve as governor of an American state was Douglas Wilder, elected in Virginia in 1989.) New Orleans even operated a racially integrated public school system during this period.
Wartime damage to levees and cities along the Mississippi River adversely affected southern crops and trade for the port city for some time. The federal government contributed to restoring infrastructure, but it took time. The nationwide financial recession and Panic of 1873 also adversely affected businesses and slowed economic recovery.
From 1868, elections in Louisiana were marked by violence, as white insurgents tried to suppress black voting and disrupt Republican gatherings. Violence continued around elections. The disputed 1872 gubernatorial election resulted in conflicts that ran for years. The "White League '', an insurgent paramilitary group that supported the Democratic Party, was organized in 1874 and operated in the open, violently suppressing the black vote and running off Republican officeholders. In 1874, in the Battle of Liberty Place, 5,000 members of the White League fought with city police to take over the state offices for the Democratic candidate for governor, holding them for three days. By 1876, such tactics resulted in the white Democrats, the so - called Redeemers, regaining political control of the state legislature. The federal government gave up and withdrew its troops in 1877, ending Reconstruction.
White Democrats passed Jim Crow laws, establishing racial segregation in public facilities. In 1889, the legislature passed a constitutional amendment incorporating a "grandfather clause" that effectively disfranchised freedmen as well as propertied people of color who had been free before the war. Unable to vote, African Americans could not serve on juries or in local office, and were closed out of formal politics for several generations; the state was ruled by the white-dominated Democratic Party. Public schools were racially segregated and remained so until 1960.
New Orleans' large community of well-educated, often French-speaking free persons of color (gens de couleur libres), who had been free prior to the Civil War, sought to fight back against Jim Crow. They organized the Comité des Citoyens (Citizens Committee) to work for civil rights. As part of their legal campaign, they recruited one of their own, Homer Plessy, to test whether Louisiana's newly enacted Separate Car Act was constitutional. Plessy boarded a commuter train departing New Orleans for Covington, Louisiana, sat in the car reserved for whites only, and was arrested. The case resulting from this incident, Plessy v. Ferguson, was heard by the U.S. Supreme Court in 1896. The court ruled that "separate but equal" accommodations were constitutional, effectively upholding Jim Crow measures. In practice, African-American public schools and facilities were underfunded in Louisiana and across the South. The Supreme Court ruling contributed to what historians have called the nadir of race relations in the United States. The rate of lynchings of black men was high across the South, as other states also disfranchised blacks and sought to impose Jim Crow to establish white supremacy. Anti-Italian sentiment in 1891 contributed to the lynching of 11 Italians, some of whom had been acquitted of the murder of the police chief; some were shot and killed in the jail where they were being held. It was the largest mass lynching in U.S. history. In July 1900 the city was swept by white mobs rioting after Robert Charles, a young African American, killed a policeman and temporarily escaped. The mobs killed him and an estimated 20 other blacks; seven whites died in the conflict, which lasted several days until a state militia suppressed it.
Throughout New Orleans ' history, until the early 20th century when medical and scientific advances ameliorated the situation, the city suffered repeated epidemics of yellow fever and other tropical and infectious diseases.
New Orleans ' zenith as an economic and population center, in relation to other American cities, occurred in the decades prior to 1860. At this time New Orleans was the nation 's fifth - largest city and was significantly larger than all other American South population centers. New Orleans continued to increase in population from the mid-19th century onward, but rapid economic growth shifted to other areas of the country, meaning that New Orleans ' relative importance steadily declined. First to emerge in importance were the new industrial and railroad hubs of the Midwest, then the rapidly growing metropolises of the Pacific Coast in the decades before and after the turn of the 20th century. Construction of railways and highways decreased river traffic, diverting goods to other transportation corridors and markets. Thousands of the most ambitious blacks and people of color left New Orleans and the state in the Great Migration around World War II and after, many for West Coast destinations. In the post-war period, other Sun Belt cities in the South and West surpassed New Orleans in population.
From the late 1800s, most U.S. censuses recorded New Orleans ' slipping rank among American cities. Reminded every ten years of its declining relative importance, New Orleans would periodically mount attempts to regain its economic vigor and pre-eminence, with varying degrees of success.
By the mid-20th century, New Orleanians recognized that their city was being surpassed as the leading urban area in the South. By 1950, Houston, Dallas, and Atlanta exceeded New Orleans in size, and in 1960 Miami eclipsed New Orleans, even as the latter 's population reached what would be its historic peak that year. As with other older American cities in the postwar period, highway construction and suburban development drew residents from the center city to newer housing outside. The 1970 census recorded the first absolute decline in the city 's population since it joined the United States. The New Orleans metropolitan area continued expanding in population, however, just not as rapidly as other major cities in the Sun Belt. While the port remained one of the largest in the nation, automation and containerization resulted in significant job losses. The city 's relative fall in stature meant that its former role as banker to the South was inexorably supplanted by competing companies in larger peer cities. New Orleans ' economy had always been based more on trade and financial services than on manufacturing, but the city 's relatively small manufacturing sector also shrank in the post -- World War II period. Despite some economic development successes under the administrations of DeLesseps "Chep '' Morrison (1946 -- 1961) and Victor "Vic '' Schiro (1961 -- 1970), metropolitan New Orleans ' growth rate consistently lagged behind more vigorous cities.
During the later years of Morrison 's administration, and for the entirety of Schiro 's, the city was a center of the Civil Rights Movement. The Southern Christian Leadership Conference was founded in the city, and lunch counter sit - ins were held in Canal Street department stores. A prominent and violent series of confrontations occurred in 1960 when the city attempted school desegregation, following the Supreme Court ruling in Brown v. Board of Education (1954). When six - year - old Ruby Bridges integrated William Frantz Elementary School in the city 's Ninth Ward, she was the first child of color to attend a previously all - white school in the South.
The Civil Rights Movement's success in gaining federal passage of the Civil Rights Act of 1964 and the Voting Rights Act of 1965 provided enforcement of constitutional rights, including restored voting rights for blacks. Together, these resulted in the most far-reaching changes in New Orleans' 20th-century history. Though legal and civil equality were re-established by the end of the 1960s, a large gap in income levels and educational attainment persisted between the city's white and African-American communities. As the middle class and wealthier members of both races left the center city, its population's income level dropped, and it became proportionately more African American. Since 1980, the African-American majority has primarily elected officials from its own community, who have struggled to narrow the gap by creating conditions conducive to the economic uplift of the African-American community.
New Orleans became increasingly dependent on tourism as an economic mainstay during the administrations of Sidney Barthelemy (1986 -- 1994) and Marc Morial (1994 -- 2002). Relatively low levels of educational attainment, high rates of household poverty, and rising crime threatened the prosperity of the city in the later decades of the century. The negative effects of these socioeconomic conditions contrasted with the changes to the economy of the United States, which were based on a post-industrial, knowledge - based paradigm in which mental skills and education were far more important to advancement than manual skills.
In the 20th century, New Orleans ' government and business leaders believed they needed to drain and develop outlying areas to provide for the city 's expansion. The most ambitious development during this period was a drainage plan devised by engineer and inventor A. Baldwin Wood, designed to break the surrounding swamp 's stranglehold on the city 's geographic expansion. Until then, urban development in New Orleans was largely limited to higher ground along the natural river levees and bayous.
Wood 's pump system allowed the city to drain huge tracts of swamp and marshland and expand into low - lying areas. Over the 20th century, rapid subsidence, both natural and human - induced, resulted in these newly populated areas declining to several feet below sea level.
New Orleans was vulnerable to flooding even before the city 's footprint departed from the natural high ground near the Mississippi River. In the late 20th century, however, scientists and New Orleans residents gradually became aware of the city 's increased vulnerability. In 1965, flooding from Hurricane Betsy killed dozens of residents, although the majority of the city remained dry. The rain - induced flood of May 8, 1995, demonstrated the weakness of the pumping system. After that event, measures were undertaken to dramatically upgrade pumping capacity. By the 1980s and 1990s, scientists observed that extensive, rapid, and ongoing erosion of the marshlands and swamp surrounding New Orleans, especially that related to the Mississippi River -- Gulf Outlet Canal, had the unintended result of leaving the city more vulnerable to hurricane - induced catastrophic storm surges than earlier in its history.
New Orleans was catastrophically affected by what the University of California Berkeley 's Dr. Raymond B. Seed called "the worst engineering disaster in the world since Chernobyl '', when the Federal levee system failed during Hurricane Katrina in 2005. By the time the hurricane approached the city at the end of August 2005, most residents had evacuated. As the hurricane passed through the Gulf Coast region, the city 's federal flood protection system failed, resulting in the worst civil engineering disaster in American history. Floodwalls and levees constructed by the United States Army Corps of Engineers failed below design specifications and 80 % of the city flooded. Tens of thousands of residents who had remained in the city were rescued or otherwise made their way to shelters of last resort at the Louisiana Superdome or the New Orleans Morial Convention Center. More than 1,500 people were recorded as having died in Louisiana, most in New Orleans, and others are still unaccounted for. Before Hurricane Katrina, the city called for the first mandatory evacuation in its history, to be followed by another mandatory evacuation three years later with Hurricane Gustav.
The city was declared off - limits to residents while efforts to clean up after Hurricane Katrina began. The approach of Hurricane Rita in September 2005 caused repopulation efforts to be postponed, and the Lower Ninth Ward was reflooded by Rita 's storm surge.
Because of the scale of the damage, many people settled permanently in the areas to which they had evacuated, such as Houston. Federal, state, and local efforts have been directed at recovery and rebuilding in severely damaged neighborhoods. The Census Bureau in July 2006 estimated the population of New Orleans to be 223,000; a subsequent study estimated that 32,000 additional residents had moved to the city as of March 2007, bringing the estimated population to 255,000, approximately 56% of the pre-Katrina level. Another estimate, based on utility usage data from July 2007, put the population at approximately 274,000, or 60% of the pre-Katrina level. These estimates are somewhat smaller than a third estimate, based on mail delivery records, from the Greater New Orleans Community Data Center in June 2007, which indicated that the city had regained approximately two-thirds of its pre-Katrina population. In 2008, the Census Bureau revised its population estimate for the city upward, to 336,644. By 2010, estimates showed that neighborhoods that did not flood were near, and in some cases above, their pre-Katrina populations.
Several major tourist events and other sources of revenue for the city have returned. Large conventions are being held again, such as those of the American Library Association and the American College of Cardiology. College football events such as the Bayou Classic, New Orleans Bowl, and Sugar Bowl returned for the 2006-2007 season. The New Orleans Saints returned that season as well, following speculation of a move. The New Orleans Hornets (now named the Pelicans) returned to the city fully for the 2007-2008 season, having partially spent the 2006-2007 season in Oklahoma City. New Orleans successfully hosted the 2008 NBA All-Star Game and the 2008 BCS National Championship Game. The city hosted the first and second rounds of the 2007 NCAA Men's Division I Basketball Tournament, and New Orleans and Tulane University hosted the Final Four in 2012. Additionally, the city hosted Super Bowl XLVII on February 3, 2013, at the Mercedes-Benz Superdome.
Major annual events such as Mardi Gras and the Jazz & Heritage Festival were never displaced or canceled. Also, an entirely new annual festival, "The Running of the Bulls New Orleans '', was created in 2007.
On February 7, 2017, a large EF3 wedge tornado hit parts of the eastern side of the city, causing severe damage to homes and other buildings and destroying a mobile home park. At least 25 people were injured.
New Orleans is located at 29°57′53″N 90°4′14″W, on the banks of the Mississippi River, approximately 105 miles (169 km) upriver from the Gulf of Mexico. According to the U.S. Census Bureau, the city has a total area of 350 square miles (910 km²), of which 169 square miles (440 km²) is land and 181 square miles (470 km²), or 52%, is water. Orleans Parish is the smallest parish by land area in Louisiana.
The city is located in the Mississippi River Delta on the east and west banks of the Mississippi River and south of Lake Pontchartrain. The area along the river is characterized by ridges and hollows.
New Orleans was originally settled on the natural levees or high ground, along the Mississippi River. After the Flood Control Act of 1965, the US Army Corps of Engineers built floodwalls and man - made levees around a much larger geographic footprint that included previous marshland and swamp. Over time, pumping of nearby marshland allowed for development into lower elevation areas. Today, a large portion of New Orleans is at or below local mean sea level and evidence suggests that portions of the city may be dropping in elevation due to subsidence.
A 2007 study by Tulane and Xavier University suggested that "51%... of the contiguous urbanized portions of Orleans, Jefferson, and St. Bernard parishes lie at or above sea level," with the more densely populated areas generally on higher ground. The average elevation of the city is currently between 1 foot (0.30 m) and 2 feet (0.61 m) below sea level, with some portions of the city as high as 20 feet (6 m) at the base of the river levee in Uptown and others as low as 7 feet (2 m) below sea level in the farthest reaches of Eastern New Orleans. A study published in the ASCE Journal of Hydrologic Engineering in 2016, however, offered a differing assessment.
The magnitude of subsidence potentially caused by the draining of natural marsh in the New Orleans area and southeast Louisiana is a topic of debate. A 2006 study published in Geology by an associate professor at Tulane University addressed the question, although it noted that its results did not necessarily apply to the Mississippi River Delta or the New Orleans metropolitan area proper. On the other hand, a report by the American Society of Civil Engineers claims that "New Orleans is subsiding (sinking)."
In May 2016, NASA published a study which suggested that most areas of New Orleans were, in fact, experiencing subsidence at a "highly variable rate '' which was "generally consistent with, but somewhat higher than, previous studies. ''
The Central Business District of New Orleans is located immediately north and west of the Mississippi River and was historically called the "American Quarter" or "American Sector," having developed later than the older heart of French and Spanish settlement. It includes Lafayette Square. Most streets in this area fan out from a central point in the city. Major streets of the area include Canal Street, Poydras Street, Tulane Avenue, and Loyola Avenue. Canal Street divides the traditional "downtown" area from the "uptown" area.
Every street crossing Canal Street between the Mississippi River and Rampart Street, which is the northern edge of the French Quarter, has a different name for the "uptown" and "downtown" portions. For example, St. Charles Avenue, known for its streetcar line, is called Royal Street below Canal Street, though where it traverses the Central Business District between Canal Street and Lee Circle, it is properly called St. Charles Street. Elsewhere in the city, Canal Street serves as the dividing point between the "South" and "North" portions of various streets. In the local parlance, downtown means "downriver from Canal Street," while uptown means "upriver from Canal Street." Downtown neighborhoods include the French Quarter, Tremé, the 7th Ward, Faubourg Marigny, Bywater (the Upper Ninth Ward), and the Lower Ninth Ward. Uptown neighborhoods include the Warehouse District, the Lower Garden District, the Garden District, the Irish Channel, the University District, Carrollton, Gert Town, Fontainebleau, and Broadmoor. However, the Warehouse District and the Central Business District, despite being above Canal Street, are frequently called "Downtown" as a specific region, as in the Downtown Development District.
Other major districts within the city include Bayou St. John, Mid-City, Gentilly, Lakeview, Lakefront, New Orleans East, and Algiers.
New Orleans is world - famous for its abundance of unique architectural styles which reflect the city 's historical roots and multicultural heritage. Though New Orleans possesses numerous structures of national architectural significance, it is equally, if not more, revered for its enormous, largely intact (even post-Katrina) historic built environment. Twenty National Register Historic Districts have been established, and fourteen local historic districts aid in the preservation of this tout ensemble. Thirteen of the local historic districts are administered by the New Orleans Historic District Landmarks Commission (HDLC), while one -- the French Quarter -- is administered by the Vieux Carre Commission (VCC). Additionally, both the National Park Service, via the National Register of Historic Places, and the HDLC have landmarked individual buildings, many of which lie outside the boundaries of existing historic districts.
Many styles of housing exist in the city, including the shotgun house and the bungalow style. Creole townhouses, notable for their large courtyards and intricate iron balconies, line the streets of the French Quarter. Throughout the city, there are many other historic housing styles: Creole cottages, American townhouses, double - gallery houses, and Raised Center - Hall Cottages. St. Charles Avenue is famed for its large antebellum homes. Its mansions are in various styles, such as Greek Revival, American Colonial and the Victorian styles of Queen Anne and Italianate architecture. New Orleans is also noted for its large, European - style Catholic cemeteries, which can be found throughout the city.
For much of its history, New Orleans ' skyline consisted of only low - and mid - rise structures. The soft soils of New Orleans are susceptible to subsidence, and there was doubt about the feasibility of constructing large high rises in such an environment. Developments in engineering throughout the twentieth century eventually made it possible to build sturdy foundations to support high rise structures in the city, and in the 1960s, the World Trade Center New Orleans and Plaza Tower were built, demonstrating the viability of tall skyscrapers in New Orleans. One Shell Square took its place as the city 's tallest building in 1972. The oil boom of the 1970s and early 1980s redefined New Orleans ' skyline with the development of the Poydras Street corridor. Today, most of New Orleans ' tallest buildings are clustered along Canal Street and Poydras Street in the Central Business District.
The climate of New Orleans is humid subtropical (Köppen climate classification Cfa), with short, generally mild winters and hot, humid summers; most suburbs and parts of Wards 9 and 15 fall in USDA Plant Hardiness Zone 9a, while the city 's other 15 wards are rated 9b in whole. The monthly daily average temperature ranges from 53.4 ° F (11.9 ° C) in January to 83.3 ° F (28.5 ° C) in July and August. Officially, as measured at New Orleans International Airport, temperature records range from 11 to 102 ° F (− 12 to 39 ° C) on December 23, 1989 and August 22, 1980, respectively; Audubon Park has recorded temperatures ranging from 6 ° F (− 14 ° C) on February 13, 1899 up to 104 ° F (40 ° C) on June 24, 2009. Dewpoints in the summer months (June -- August) are relatively high, ranging from 71.1 to 73.4 ° F (21.7 to 23.0 ° C).
The average precipitation is 62.5 inches (1,590 mm) annually; the summer months are the wettest, while October is the driest month. Precipitation in winter usually accompanies the passing of a cold front. On average, there are 77 days of 90 ° F (32 ° C) + highs, 8.1 days per winter where the high does not exceed 50 ° F (10 ° C), and 8.0 nights with freezing lows annually. It is rare for the temperature to reach 20 or 100 ° F (− 7 or 38 ° C), with the last occurrence of each being February 5, 1996 and June 26, 2016, respectively.
New Orleans experiences snowfall only on rare occasions. A small amount of snow fell during the 2004 Christmas Eve Snowstorm and again on Christmas (December 25) when a combination of rain, sleet, and snow fell on the city, leaving some bridges icy. The New Year 's Eve 1963 snowstorm affected New Orleans and brought 4.5 inches (11 cm). Snow fell again on December 22, 1989, when most of the city received 1 -- 2 inches (2.5 -- 5.1 cm).
The last significant snowfall in New Orleans was on the morning of December 11, 2008.
Hurricanes pose a severe threat to the area, and the city is particularly at risk because of its low elevation; because it is surrounded by water from the north, east, and south; and because of Louisiana 's sinking coast. According to the Federal Emergency Management Agency, New Orleans is the nation 's most vulnerable city to hurricanes. Indeed, portions of Greater New Orleans have been flooded by: the Grand Isle Hurricane of 1909, the New Orleans Hurricane of 1915, 1947 Fort Lauderdale Hurricane, Hurricane Flossy in 1956, Hurricane Betsy in 1965, Hurricane Georges in 1998, Hurricanes Katrina and Rita in 2005, and Hurricane Gustav in 2008, with the flooding in Betsy being significant and in a few neighborhoods severe, and that in Katrina being disastrous in the majority of the city.
In 2005, storm surge from Hurricane Katrina caused catastrophic failure of the federally designed and built levees, flooding 80 % of the city. A report by the American Society of Civil Engineers says that "had the levees and floodwalls not failed and had the pump stations operated, nearly two - thirds of the deaths would not have occurred ''.
New Orleans has always had to consider the risk of hurricanes, but the risks are dramatically greater today because of coastal erosion from human interference. Since the beginning of the 20th century, Louisiana is estimated to have lost 2,000 square miles (5,000 km²) of coast, including many of its barrier islands, which once protected New Orleans against storm surge. Following Hurricane Katrina, the Army Corps of Engineers instituted massive levee repair and hurricane protection measures to protect the city.
In 2006, Louisiana voters overwhelmingly adopted an amendment to the state 's constitution to dedicate all revenues from off - shore drilling to restore Louisiana 's eroding coast line. Congress has allocated $7 billion to bolster New Orleans ' flood protection.
According to a study by the National Academy of Engineering and the National Research Council, levees and floodwalls surrounding New Orleans -- no matter how large or sturdy -- cannot provide absolute protection against overtopping or failure in extreme events. Levees and floodwalls should be viewed as a way to reduce risks from hurricanes and storm surges, not as measures that completely eliminate risk. For structures in hazardous areas and for residents who do not relocate, the committee recommended major floodproofing measures, such as elevating the first floor of buildings to at least the 100-year flood level.
According to the 2010 Census, 343,829 people and 189,896 households were in New Orleans. The racial and ethnic makeup of the city was 60.2 % African American, 33.0 % White, 2.9 % Asian (1.7 % Vietnamese, 0.3 % Indian, 0.3 % Chinese, 0.1 % Filipino, 0.1 % Korean), 0.0 % Pacific Islander, and 1.7 % were people of two or more races. People of Hispanic or Latino origin made up 5.3 % of the population; 1.3 % of New Orleans is Mexican, 1.3 % Honduran, 0.4 % Cuban, 0.3 % Puerto Rican, and 0.3 % Nicaraguan.
The last population estimate before Hurricane Katrina was 454,865, as of July 1, 2005. A population analysis released in August 2007 estimated the population to be 273,000, 60 % of the pre-Katrina population and an increase of about 50,000 since July 2006. A September 2007 report by The Greater New Orleans Community Data Center, which tracks population based on U.S. Postal Service figures, found that in August 2007, just over 137,000 households received mail. That compares with about 198,000 households in July 2005, representing about 70 % of pre-Katrina population. More recently, the Census Bureau revised upward its 2008 population estimate for the city, to 336,644 inhabitants. In 2010, estimates showed that neighborhoods that did not flood were near 100 % of their pre-Katrina populations, and in some cases, exceeded 100 % of their pre-Katrina populations.
A 2006 study by researchers at Tulane University and the University of California, Berkeley determined that there are as many as 10,000 to 14,000 undocumented immigrants, many from Mexico, currently residing in New Orleans. Janet Murguía, president and chief executive officer of the National Council of La Raza, stated that there could be up to 120,000 Hispanic workers in New Orleans. In June 2007, one study stated that the Hispanic population had risen from 15,000, pre-Katrina, to over 50,000.
The Times-Picayune reported in January 2009 that the metropolitan area had gained 5,300 households in the latter half of 2008, bringing the total to around 469,605 households, or 88.1% of the pre-Katrina level. While the area's population had been on an upward trajectory since the storm, much of that growth was attributed to residents returning after Katrina. Many observers predicted that growth would taper off, but the data center's analysis suggested that New Orleans and the surrounding parishes were benefiting from an economic migration resulting from the global financial crisis of 2008-2009.
As of 2010, 90.31 % of New Orleans residents age 5 and older spoke English at home as a primary language, while 4.84 % spoke Spanish, 1.87 % Vietnamese, and 1.05 % spoke French. In total, 9.69 % of New Orleans 's population age 5 and older spoke a mother language other than English.
New Orleans' colonial history of French and Spanish settlement has resulted in a strong Roman Catholic tradition. Catholic missions ministered to slaves and free people of color and established schools for them. In addition, many late 19th- and early 20th-century European immigrants, such as the Irish, some Germans, and Italians, were Catholic. In New Orleans and the surrounding Louisiana Gulf Coast area, the predominant religion is Catholicism; within the Archdiocese of New Orleans, which includes the surrounding parishes as well as the city, 35.9 percent of the population is Roman Catholic. The influence of Catholicism is reflected in the city's French and Spanish cultural traditions, including its many parochial schools, street names, architecture, and festivals, including Mardi Gras.
New Orleans notably has a distinctive variety of Louisiana Voodoo, due in part to syncretism with African and Afro - Caribbean Roman Catholic beliefs. The fame of the voodoo practitioner Marie Laveau contributed to this, as did New Orleans ' distinctly Caribbean cultural influences. Although the tourism industry has strongly associated Voodoo with the city, only a small number of people are serious adherents to the religion.
Jewish settlers, primarily Sephardim, settled in New Orleans from the early nineteenth century. Some migrated from the communities established in the colonial years in Charleston, South Carolina and Savannah, Georgia. The merchant Abraham Cohen Labatt helped found the first Jewish congregation in New Orleans and Louisiana in the 1830s, which became known as the Portuguese Jewish Nefutzot Yehudah congregation (he and some other members were Sephardic Jews, whose ancestors had lived in Portugal and Spain). Ashkenazi Jews from eastern Europe came as immigrants in the late 19th and 20th centuries. By the 21st century, there were 10,000 Jews in New Orleans. This number dropped to 7,000 after the disruption of Hurricane Katrina.
In the wake of Katrina, all New Orleans synagogues lost members, but most re-opened in their original locations. The exception was Congregation Beth Israel, the oldest and most prominent Orthodox synagogue in the New Orleans region. Beth Israel 's building in Lakeview was destroyed by flooding. After seven years of holding services in temporary quarters, the congregation consecrated a synagogue in August 2012 on land purchased from its new neighbor, the Reform Congregation Gates of Prayer in Metairie.
As of 2011 there had been increases in the Hispanic population in the New Orleans area, including in Kenner, central Metairie, and Terrytown in Jefferson Parish and eastern New Orleans and Mid-City in New Orleans proper.
Prior to Hurricane Katrina there were few persons of Brazilian origin in the city, but after Katrina and by 2008 a population emerged. Portuguese speakers were the second most numerous group to take English as a second language classes in the Roman Catholic Archdiocese of New Orleans, after Spanish speakers. Many Brazilians worked in skilled trades such as tile and flooring; fewer worked as day laborers than did Latinos. Many had moved to New Orleans from Brazilian communities in the Northeastern United States, Florida, and Georgia. Brazilians settled throughout the New Orleans metropolitan area. Most of the Brazilians were undocumented immigrants. In January 2008 Bruce Nolan of the Houston Chronicle stated that estimates of the New Orleans Brazilian population had a mid-range of 3,000, but no entity had determined exactly how many Brazilians resided in the city. By 2008 Brazilians had opened many small churches, shops, and restaurants catering to their community.
The city of New Orleans faced a decreasing population before and after Hurricane Katrina. Beginning in 1960, the population of the city decreased for several reasons: jobs and population followed the cycles of oil and tourism, the city's population declined as suburbanization increased (in common with many US cities), and jobs migrated to surrounding parishes. This economic and population decline resulted in high levels of poverty among city residents: the city's poverty rate was the fifth-highest of all US cities in 1960, and in 2005 it was almost twice the national average, at 24.5%. New Orleans experienced an increase in residential segregation from 1900 to 1980, leaving the poor, who were disproportionately African American, in older, low-lying locations within the city's core. These areas were especially susceptible to flood and storm damage.
Hurricane Katrina, which displaced 800,000 people in total, contributed significantly to the continued decline of New Orleans ' population. As of 2010, the population of New Orleans was at 76 % of what it was in 2005. African Americans, renters, the elderly, and people with low income were disproportionately affected by Hurricane Katrina, compared to affluent and white residents. In the aftermath of Katrina, city government commissioned groups such as Bring New Orleans Back Commission, the New Orleans Neighborhood Rebuilding Plan, the Unified New Orleans Plan, and the Office of Recovery Management to contribute to plans addressing depopulation. Their ideas included shrinking the city 's footprint from before the storm, incorporating community voices into development plans, and creating green spaces, some of which incited controversy.
From 2010 to 2014 the city grew by 12 %, adding an average of more than 10,000 new residents each year following the 2010 Census.
New Orleans has one of the largest and busiest ports in the world, and metropolitan New Orleans is a center of maritime industry. The New Orleans region also accounts for a significant portion of the nation 's oil refining and petrochemical production, and serves as a white - collar corporate base for onshore and offshore petroleum and natural gas production.
New Orleans is a center for higher learning, with over 50,000 students enrolled in the region 's eleven two - and four - year degree granting institutions. A top - 50 research university, Tulane University, is located in New Orleans ' Uptown neighborhood. Metropolitan New Orleans is a major regional hub for the health care industry and boasts a small, globally competitive manufacturing sector. The center city possesses a rapidly growing, entrepreneurial creative industries sector, and is renowned for its cultural tourism. Greater New Orleans, Inc. (GNO, Inc.) acts as the first point - of - contact for regional economic development, coordinating between Louisiana 's Department of Economic Development and the various parochial business development agencies.
New Orleans was developed as a strategically located trading entrepôt, and it remains, above all, a crucial transportation hub and distribution center for waterborne commerce. The Port of New Orleans is the 5th - largest port in the United States based on volume of cargo handled, and second - largest in the state after the Port of South Louisiana. It is the 12th - largest in the U.S. based on value of cargo. The Port of South Louisiana, also based in the New Orleans area, is the world 's busiest in terms of bulk tonnage. When combined with the Port of New Orleans, it forms the 4th - largest port system in volume handled. Many shipbuilding, shipping, logistics, freight forwarding and commodity brokerage firms either are based in metropolitan New Orleans or maintain a large local presence. Examples include Intermarine, Bisso Towboat, Northrop Grumman Ship Systems, Trinity Yachts, Expeditors International, Bollinger Shipyards, IMTT, International Coffee Corp, Boasso America, Transoceanic Shipping, Transportation Consultants Inc., Dupuy Storage & Forwarding and Silocaf. The largest coffee - roasting plant in the world, operated by Folgers, is located in New Orleans East.
Like Houston, New Orleans is located in proximity to the Gulf of Mexico and the many oil rigs that lie just offshore. Louisiana ranks fifth among states in oil production and eighth in reserves in the United States. It has two of the four Strategic Petroleum Reserve (SPR) storage facilities: West Hackberry in Cameron Parish and Bayou Choctaw in Iberville Parish. Other infrastructure includes 17 petroleum refineries, with a combined crude oil distillation capacity of nearly 2.8 million barrels per day (450,000 m³/d), the second highest in the nation after Texas. Louisiana's numerous ports include the Louisiana Offshore Oil Port (LOOP), which is capable of receiving ultra large oil tankers. Given the quantity of oil imported, Louisiana is home to many major pipelines supplying the nation: Crude Oil (Exxon, Chevron, BP, Texaco, Shell, Scurloch-Permian, Mid-Valley, Calumet, Conoco, Koch Industries, Unocal, U.S. Dept. of Energy, Locap); Product (TEPPCO Partners, Colonial, Plantation, Explorer, Texaco, Collins); and Liquefied Petroleum Gas (Dixie, TEPPCO, Black Lake, Koch, Chevron, Dynegy, Kinder Morgan Energy Partners, Dow Chemical Company, Bridgeline, FMP, Tejas, Texaco, UTP). Several major energy companies have regional headquarters in the city or its suburbs, including Royal Dutch Shell, Eni, and Chevron. Numerous other energy producers and oilfield services companies are also headquartered in the city or region, and the sector supports a large professional services base of specialized engineering and design firms, as well as a term office for the federal government's Minerals Management Service.
The city is the home to a single Fortune 500 company: Entergy, a power generation utility and nuclear powerplant operations specialist. In the wake of Hurricane Katrina, the city lost its other Fortune 500 company, Freeport - McMoRan, when it merged its copper and gold exploration unit with an Arizona company and relocated that division to Phoenix, Arizona. Its McMoRan Exploration affiliate remains headquartered in New Orleans. Other companies either headquartered or with significant operations in New Orleans include: Pan American Life Insurance, Pool Corp, Rolls - Royce, Newpark Resources, AT&T, TurboSquid, iSeatz, IBM, Navtech, Superior Energy Services, Textron Marine & Land Systems, McDermott International, Pellerin Milnor, Lockheed Martin, Imperial Trading, Laitram, Harrah 's Entertainment, Stewart Enterprises, Edison Chouest Offshore, Zatarain 's, Waldemar S. Nelson & Co., Whitney National Bank, Capital One, Tidewater Marine, Popeyes Chicken & Biscuits, Parsons Brinckerhoff, MWH Global, CH2M HILL, Energy Partners Ltd, The Receivables Exchange, GE Capital, and Smoothie King.
Tourism is another staple of the city 's economy. Perhaps more visible than any other sector, New Orleans ' tourist and convention industry is a $5.5 billion juggernaut that accounts for 40 percent of New Orleans ' tax revenues. In 2004, the hospitality industry employed 85,000 people, making it New Orleans ' top economic sector as measured by employment totals. The city also hosts the World Cultural Economic Forum (WCEF). The forum, held annually at the New Orleans Morial Convention Center, is directed toward promoting cultural and economic development opportunities through the strategic convening of cultural ambassadors and leaders from around the world. The first WCEF took place in October 2008.
Federal agencies and the Armed forces have significant facilities in the area. The U.S. Fifth Circuit Court of Appeals operates at the US Courthouse downtown. NASA 's Michoud rocket factory is located in New Orleans East and is operated by Lockheed Martin. It is a huge manufacturing facility that produced the external fuel tanks for the space shuttles, and is now used for the construction of NASA 's Space Launch System. The rocket factory lies within the enormous New Orleans Regional Business Park, also home to the National Finance Center, operated by the United States Department of Agriculture (USDA), and the Crescent Crown distribution center. Other large governmental installations include the U.S. Navy 's Space and Naval Warfare (SPAWAR) Systems Command, located within the University of New Orleans Research and Technology Park in Gentilly, Naval Air Station Joint Reserve Base New Orleans; and the headquarters for the Marine Force Reserves in Federal City in Algiers.
The city's top employers are listed in its 2008 Comprehensive Annual Financial Report.
New Orleans has many visitor attractions, from the world - renowned French Quarter; to St. Charles Avenue, (home of Tulane and Loyola Universities, the historic Pontchartrain Hotel, and many 19th - century mansions); to Magazine Street, with its boutique stores and antique shops.
According to current travel guides, New Orleans is one of the top ten most - visited cities in the United States; 10.1 million visitors came to New Orleans in 2004. Prior to Hurricane Katrina (2005), there were 265 hotels with 38,338 rooms in the Greater New Orleans Area. In May 2007, there were over 140 hotels and motels in operation with over 31,000 rooms.
A 2009 Travel + Leisure poll of "America's Favorite Cities" ranked New Orleans first in ten categories, the most first-place rankings of the 30 cities included. According to the poll, New Orleans is the best U.S. city as a spring break destination and for "wild weekends," stylish boutique hotels, cocktail hours, singles/bar scenes, live music/concerts and bands, antique and vintage shops, cafés/coffee bars, neighborhood restaurants, and people watching. The city also ranked second for the following: friendliness (behind Charleston, South Carolina), gay-friendliness (behind San Francisco), bed and breakfast hotels/inns, and ethnic food. However, the city was voted last in terms of active residents, and it placed near the bottom in cleanliness, safety, and as a family destination.
The French Quarter (known locally as "the Quarter '' or Vieux Carré), which was the colonial - era city and is bounded by the Mississippi River, Rampart Street, Canal Street, and Esplanade Avenue, contains many popular hotels, bars, and nightclubs. Notable tourist attractions in the Quarter include Bourbon Street, Jackson Square, St. Louis Cathedral, the French Market (including Café du Monde, famous for café au lait and beignets), and Preservation Hall. Also in the French Quarter is the old New Orleans Mint, a former branch of the United States Mint which now operates as a museum, and The Historic New Orleans Collection, a museum and research center housing art and artifacts relating to the history of New Orleans and the Gulf South.
Close to the Quarter is the Tremé community, which contains the New Orleans Jazz National Historical Park and the New Orleans African American Museum -- a site which is listed on the Louisiana African American Heritage Trail.
To tour the port, one can ride the Natchez, an authentic steamboat with a calliope, which cruises the Mississippi the length of the city twice daily. Unlike most other places in the United States, New Orleans has become widely known for its element of elegant decay. The city 's historic cemeteries and their distinct above - ground tombs are attractions in themselves, the oldest and most famous of which, Saint Louis Cemetery, greatly resembles Père Lachaise Cemetery in Paris.
The National WWII Museum, opened in the Warehouse District in 2000 as the "National D - Day Museum, '' has undergone a major expansion. Nearby, Confederate Memorial Hall, the oldest continually - operating museum in Louisiana (although under renovation since Katrina), contains the second - largest collection of Confederate Civil War memorabilia in the world. Art museums in the city include the Contemporary Arts Center, the New Orleans Museum of Art (NOMA) in City Park, and the Ogden Museum of Southern Art.
New Orleans also boasts a decidedly natural side. It is home to the Audubon Nature Institute (which consists of Audubon Park, the Audubon Zoo, the Aquarium of the Americas, and the Audubon Insectarium), and home to gardens which include Longue Vue House and Gardens and the New Orleans Botanical Garden. City Park, one of the country 's most expansive and visited urban parks, has one of the largest stands (if not the largest stand) of oak trees in the world.
There are also various points of interest in the surrounding areas. Many wetlands are found in close proximity to the city, including Honey Island Swamp and Barataria Preserve. Chalmette Battlefield and National Cemetery, located just south of the city, is the site of the 1815 Battle of New Orleans.
In 2009, New Orleans ranked No. 7 on Newsmax magazine 's list of the "Top 25 Most Uniquely American Cities and Towns '', a piece written by current CBS News travel editor Peter Greenberg. In determining his ranking, Greenberg cited the city 's rebuilding effort post-Katrina as well as its mission to become eco-friendly.
The New Orleans area is home to numerous celebrations, the most popular of which is Carnival, often referred to as Mardi Gras. Carnival officially begins on the Feast of the Epiphany, also known as the "Twelfth Night ''. Mardi Gras (French for "Fat Tuesday ''), the final and grandest day of festivities, is the last Tuesday before the Catholic liturgical season of Lent, which commences on Ash Wednesday.
The largest of the city 's many music festivals is the New Orleans Jazz & Heritage Festival. Commonly referred to simply as "Jazz Fest '', it is one of the largest music festivals in the nation, featuring crowds of people from all over the world, coming to experience music, food, arts, and crafts. Despite the name, it features not only jazz but a large variety of music, including both native Louisiana music and international artists. Along with Jazz Fest, New Orleans ' Voodoo Experience ("Voodoo Fest '') and the Essence Music Festival are both large music festivals featuring local and international artists.
Other major festivals held in the city include Southern Decadence, the French Quarter Festival, and the Tennessee Williams / New Orleans Literary Festival.
In 2002, Louisiana began offering tax incentives for film and television production. This led to a substantial increase in the number of films shot in the New Orleans area and earned the city the nickname "Hollywood South." Films that have been filmed or produced in and around New Orleans include Ray, Runaway Jury, The Pelican Brief, Glory Road, All the King's Men, Déjà Vu, Last Holiday, The Curious Case of Benjamin Button, 12 Years a Slave, and numerous others. In 2006, work began on the Louisiana Film & Television studio complex, based in the Tremé neighborhood. Louisiana began to offer similar tax incentives for music and theater productions in 2007, leading many to refer to New Orleans as "Broadway South."
The first theatre in New Orleans was the French - language Theatre de la Rue Saint Pierre, which opened in 1792. The first opera in New Orleans was given there in 1796. In the nineteenth century the city was the home of two of America 's most important venues for the performance of French opera, the Théâtre d'Orléans and later the French Opera House. Today, opera is performed by the New Orleans Opera.
New Orleans has always been a significant center for music, showcasing its intertwined European, Latin American, and African cultures. The city 's unique musical heritage was born in its colonial and early American days from a unique blending of European musical instruments with African rhythms. As the only North American city to have allowed slaves to gather in public and play their native music (largely in Congo Square, now located within Louis Armstrong Park), New Orleans gave birth to an indigenous music: jazz. Soon, brass bands formed, gaining popular attraction which continues today. The Louis Armstrong Park area, near the French Quarter in Tremé, contains the New Orleans Jazz National Historical Park. The city 's music was later significantly influenced by Acadiana, home of Cajun and Zydeco music, and by Delta blues.
New Orleans ' unique musical culture is further evident in its traditional funerals. A spin on military brass band funerals, New Orleans traditional funerals feature sad music (mostly dirges and hymns) on the way to the cemetery and happier music (hot jazz) on the way back. Such musical funerals are still held when a local musician, a member of a social club, krewe, or benevolent society, or a noted dignitary has passed. Until the 1990s, most locals preferred to call these "funerals with music '', but visitors to the city have long dubbed them "jazz funerals. ''
Much later in its musical development, New Orleans was home to a distinctive brand of rhythm and blues that contributed greatly to the growth of rock and roll. An example of the New Orleans sound in the 1960s is the No. 1 US hit "Chapel of Love" by the Dixie Cups, a song that knocked the Beatles out of the top spot on the Billboard Hot 100. New Orleans became a hotbed for funk music in the 1960s and 1970s, and by the late 1980s it had developed its own localized variant of hip hop, called bounce music. While never commercially successful outside of the Deep South, bounce remained immensely popular in the poorer neighborhoods of the city throughout the 1990s.
A cousin of bounce, New Orleans hip hop has seen commercial success locally and internationally, producing Lil Wayne, Master P, Birdman, Juvenile, Cash Money Records, and No Limit Records. Additionally, the wave of popularity of cowpunk, a fast form of southern rock, originated with the help of several local bands, such as The Radiators, Better Than Ezra, Cowboy Mouth, and Dash Rip Rock. Throughout the 1990s, many sludge metal bands started in the area. New Orleans ' heavy metal bands like Eyehategod, Soilent Green, Crowbar, and Down have incorporated styles such as hardcore punk, doom metal, and southern rock to create an original and heady brew of swampy and aggravated metal that has largely avoided standardization.
New Orleans is the southern terminus of the famed Highway 61.
New Orleans is world - famous for its food. The indigenous cuisine is distinctive and influential. New Orleans food developed from centuries of amalgamation of the local Creole, haute Creole, and New Orleans French cuisines. Local ingredients, French, Spanish, Italian, African, Native American, Cajun, Chinese, and a hint of Cuban traditions combine to produce a truly unique and easily recognizable Louisiana flavor.
New Orleans is known for specialties including beignets (locally pronounced like "ben-yays"), square-shaped fried pastries that could be called "French doughnuts" (served with café au lait made with a blend of coffee and chicory rather than only coffee); Po-boy and Italian muffuletta sandwiches; Gulf oysters on the half-shell, fried oysters, boiled crawfish, and other seafood; étouffée, jambalaya, gumbo, and other Creole dishes; and the Monday favorite of red beans and rice. (Louis Armstrong often signed his letters, "Red beans and ricely yours.") Another New Orleans specialty is the praline (locally /ˈprɑːliːn/), a candy made with brown sugar, granulated sugar, cream, butter, and pecans. The city also has notable street food, including the Asian-inspired beef yaka mein.
New Orleans has developed a distinctive local dialect of American English that is neither Cajun nor the stereotypical Southern accent so often misportrayed by film and television actors. Like earlier Southern Englishes, it features frequent deletion of the pre-consonantal "r". To people unfamiliar with either, the dialect sounds quite similar to New York City area accents such as "Brooklynese." There are many theories about how this came to be, but it likely resulted from New Orleans' geographic isolation by water and the fact that the city was a major immigration port throughout the 19th century. As a result, many of the ethnic groups who settled in Brooklyn also settled in New Orleans, such as the Irish, Italians (especially Sicilians), and Germans, among others, as well as a very sizable Jewish community.
One of the strongest varieties of the New Orleans accent is sometimes identified as the Yat dialect, from the greeting "Where y'at? '' This distinctive accent is dying out generation by generation in the city itself, but remains very strong in the surrounding parishes.
Less visibly, various ethnic groups throughout the area have retained distinctive language traditions to this day. Although now rare, languages still spoken include Louisiana Creole (Kreyol Lwiziyen) among the Creoles; an archaic Louisiana-Canarian Spanish dialect spoken by the Isleño people and older members of the population; and Cajun French.
New Orleans ' professional sports teams include the 2009 Super Bowl XLIV champion New Orleans Saints (NFL), the New Orleans Pelicans (NBA), and the New Orleans Baby Cakes (PCL). It is also home to the Big Easy Rollergirls, an all - female flat track roller derby team, and the New Orleans Blaze, a women 's football team. A local group of investors began conducting a study in 2007 to see if the city could support a Major League Soccer team. New Orleans is also home to two NCAA Division I athletic programs, the Tulane Green Wave of the American Athletic Conference and the UNO Privateers of the Southland Conference.
The Mercedes - Benz Superdome is the home of the Saints, the Sugar Bowl, and other prominent events. It has hosted the Super Bowl a record seven times (1978, 1981, 1986, 1990, 1997, 2002, and 2013). The Smoothie King Center is the home of the Pelicans, VooDoo, and many events that are not large enough to need the Superdome. New Orleans is also home to the Fair Grounds Race Course, the nation 's third - oldest thoroughbred track. The city 's Lakefront Arena has also been home to sporting events.
Each year New Orleans plays host to the Sugar Bowl, the New Orleans Bowl and the Zurich Classic, a golf tournament on the PGA Tour. In addition, it has often hosted major sporting events that have no permanent home, such as the Super Bowl, ArenaBowl, NBA All - Star Game, BCS National Championship Game, and the NCAA Final Four. The Rock ' n ' Roll Mardi Gras Marathon and the Crescent City Classic are two road running events held annually in the city.
The City of New Orleans is a political subdivision of the State of Louisiana. It has a mayor - council government according to a Home Rule Charter adopted in 1954, as later amended. The city council consists of seven members: five elected by district and two elected at large. The current mayor, Mitch Landrieu, was elected on February 6, 2010 and assumed office on May 3, 2010. The Orleans Parish Civil Sheriff 's Office serves papers involving lawsuits and provides security for the Civil District Court and Juvenile Courts. The Criminal Sheriff, Marlin Gusman, maintains the parish prison system, provides security for the Criminal District Court, and provides backup for the New Orleans Police Department on an as - needed basis. An ordinance in 2006 established an Office of Inspector General for city government.
The city of New Orleans and the parish of Orleans operate as a merged city - parish government. Before the city of New Orleans became co-extensive with Orleans Parish, Orleans Parish was home to numerous smaller communities. The original city of New Orleans was composed of what are now the 1st through 9th wards. The city of Lafayette (including the Garden District) was added in 1852 as the 10th and 11th wards. In 1870, Jefferson City, including Faubourg Bouligny and much of the Audubon and University areas, was annexed as the 12th, 13th, and 14th wards. Algiers, on the west bank of the Mississippi, was also annexed in 1870, becoming the 15th ward.
New Orleans ' government is now largely centralized in the city council and mayor 's office, but it maintains a number of relics from earlier systems when various sections of the city ran much of their affairs separately. For example, New Orleans had seven elected tax assessors, each with their own staff, representing various districts of the city, rather than one centralized office. A constitutional amendment passed on November 7, 2006, consolidated the seven assessors into one in 2010. On February 18, 2010, Errol Williams was elected as the first citywide assessor. The New Orleans government operates both a fire department and the New Orleans Emergency Medical Services.
Crime has been recognized as an ongoing problem for New Orleans, although the issue is outside the view of most visitors to the city. As in other U.S. cities of comparable size, the incidence of homicide and other violent crimes is highly concentrated in certain impoverished neighborhoods, such as housing projects. The murder rate for the city has been historically high for its population, and New Orleans has consistently ranked among the cities with the highest murder rates. In 1979, the city set a then - record of 242 homicides; murders later rose to 305 by the end of 1990 and to 345 in 1991.
In 1994 New Orleans was named the Murder Capital of America as the city hit a historic peak of 424 killings. The murder count surpassed Washington D.C., Chicago, Baltimore and Miami.
In 2012, Travel + Leisure named New Orleans the # 2 "America 's Dirtiest City '', down from a # 1 "Dirtiest '' status of the previous year. The magazine surveyed both national readership and local residents, from a list of prominent cities having the most visible illegal littering, dumping, and other environmental crime conditions.
Across New Orleans, homicides peaked in 1994 at 86 murders per 100,000 residents. By 2009, despite a 17 % decrease in violent crime in the city, the homicide rate remained among the highest in the United States, at between 55 and 64 per 100,000 residents. In 2010, New Orleans ' rate was 49.1 per 100,000, and in 2012 it climbed to 53.2, the highest rate among cities with populations of 250,000 or larger. Offenders in New Orleans are almost exclusively black men, with 97 % of offenders being black and 95 % being male.
The violent crime rate was also a key issue in the city 's 2010 mayoral race. In January 2007, several thousand New Orleans residents marched through city streets and gathered at City Hall for a rally demanding police and city leaders tackle the crime problem. Then - Mayor Ray Nagin said he was "totally and solely focused '' on addressing the problem. Later, the city implemented checkpoints during late night hours in problem areas. The murder rate climbed 14 % in 2011 to 57.88 per 100,000, with the city retaining its status as the ' Murder Capital of the United States ' and rising to 21st in the world. In 2016, according to annual crime statistics released by the New Orleans Police Department, there were 176 murders in the city.
There are several higher education institutions in the city, including Tulane University, Loyola University, and the University of New Orleans.
New Orleans Public Schools (NOPS) is the name given to the city 's public school system. Pre-Katrina, NOPS was one of the area 's largest systems (along with the Jefferson Parish public school system). In the years leading up to Hurricane Katrina, the New Orleans public school system was widely recognized as the lowest performing school district in Louisiana. According to researchers Carl L. Bankston and Stephen J. Caldas, only 12 of the 103 public schools within the city limits of New Orleans showed reasonably good performance at the beginning of the 21st century.
Following Hurricane Katrina, the state of Louisiana took over most of the schools within the system (all schools that fell into a nominal "worst - performing '' metric); many of these schools, in addition to others that were not subject to state takeover, were subsequently granted operating charters giving them administrative independence from the Orleans Parish School Board, the Recovery School District and / or the Louisiana Board of Elementary and Secondary Education (BESE). At the start of the 2014 school year, all public school students in the NOPS system were set to attend these independent public charter schools, making New Orleans "the first completely privatized public school district in the nation ''.
The last few years have witnessed significant and sustained gains in student achievement, as outside operators like KIPP, the Algiers Charter School Network, and the Capital One -- University of New Orleans Charter School Network have assumed control of dozens of schools. The most recent release of annual school performance scores (October 2009) demonstrated continued growth in the academic performance of New Orleans ' public schools. If the scores of all public schools in New Orleans (Orleans Parish School Board - chartered, Recovery School District - chartered, Recovery School District - operated, etc.) are considered, an overall school district performance score of 70.6 results. This score represents a 6 % increase over an equivalent 2008 metric, and a 24 % improvement when measured against an equivalent pre-Katrina (2004) metric, when a district score of 56.9 was posted. Notably, this score of 70.6 approaches the score (78.4) posted in 2009 by the adjacent, suburban Jefferson Parish public school system, though that system 's performance score is itself below the state average of 91.
This longstanding pattern is changing, however, as the NOPS system is engaged in some of the most promising and far - reaching public school reforms in the nation: reforms aimed at decentralizing power away from the pre-Katrina school board 's central bureaucracy to individual school principals and independent public charter school boards; monitoring charter school performance by granting renewable, five - year operating contracts that permit the closure of schools that are not succeeding; and vesting choice in parents of public school students, allowing them to enroll their children in almost any school in the district.
There are numerous academic and public libraries and archives in New Orleans, including Monroe Library at Loyola University, Howard - Tilton Memorial Library at Tulane University, the Law Library of Louisiana, and the Earl K. Long Library at the University of New Orleans.
The New Orleans Public Library includes 13 locations, most of which were damaged by Hurricane Katrina. However, only four libraries remained closed in 2007. The main library includes a Louisiana Division housing city archives and special collections.
Other research archives are located at the Historic New Orleans Collection and the Old U.S. Mint.
An independently operated lending library called Iron Rail Book Collective specializes in radical and hard - to - find books. The library contains over 8,000 titles and is open to the public. It was the first library in the city to re-open after Hurricane Katrina.
The Louisiana Historical Association was founded in New Orleans in 1889. It operated first at Howard Memorial Library; its own Memorial Hall was later added to the Howard Library building. The design for the new building was undertaken by the New Orleans architect Thomas Sully.
Historically, the major newspaper in the area was The Times - Picayune. The paper made headlines of its own in 2012 when owner Advance Publications cut its print schedule to three days each week, instead focusing its efforts on its website, NOLA.com. That action briefly made New Orleans the largest city in the country without a daily newspaper, until the Baton Rouge newspaper The Advocate began a New Orleans edition in September 2012. In June 2013, the Times - Picayune resumed daily printing with a condensed newsstand tabloid edition, nicknamed TP Street, which is published on the three days each week that its namesake broadsheet edition is not printed. (The Picayune has not returned to daily delivery.) With the resumption of daily print editions from the Times - Picayune and the launch of the New Orleans edition of The Advocate, now The New Orleans Advocate, the city now has two daily newspapers for the first time since the afternoon States - Item ceased publication on May 31, 1980.
In addition to the daily newspapers, weekly publications include The Louisiana Weekly and Gambit Weekly. Also in wide circulation is the Clarion Herald, the newspaper of the Archdiocese of New Orleans.
Greater New Orleans is the 54th largest Designated Market Area (DMA) in the U.S., serving 566,960 homes. The area is served by affiliates of all of the major broadcast television networks.
Two radio stations that were influential in promoting New Orleans - based bands and singers were 50,000 - watt WNOE - AM (1060) and 10,000 - watt WTIX (690 AM). These two stations competed head - to - head from the late 1950s to the late 1970s.
WWOZ, the New Orleans Jazz and Heritage Station, broadcasts, 24 hours per day, modern and traditional jazz, blues, rhythm and blues, brass band, gospel, cajun, zydeco, Caribbean, Latin, Brazilian, African, bluegrass, and Irish at 90.7 FM and at www.wwoz.org.
WTUL, a local college radio station (Tulane University), broadcasts a wide array of programming, including 20th century classical, reggae, jazz, showtunes, indie rock, electronic music, soul / funk, goth, punk, hip hop, New Orleans music, opera, folk, hardcore, Americana, country, blues, Latin, cheese, techno, local, world, ska, swing and big band, kids shows, and even news programming from DemocracyNow. WTUL is listener supported and non-commercial. The disc jockeys are volunteers, many of them college students.
Louisiana 's film and television tax credits have spurred some growth in the television industry, although to a lesser degree than in the film industry. Many films and advertisements have in part or whole been filmed in the city, as have television programs such as The Real World: New Orleans in 2000, The Real World: Back to New Orleans in 2009 and 2010 and Bad Girls Club: New Orleans in 2011.
New Orleans has four active streetcar lines, including the St. Charles Avenue, Canal Street, and Riverfront lines.
More lines are at the planning stage.
The city 's streetcars were also featured in the Tennessee Williams play, A Streetcar Named Desire. The streetcar line to Desire Street became a bus line in 1948. There are proposals to revive a Desire streetcar line, running along the neutral grounds of North Rampart and St. Claude, as far downriver as Poland Avenue, near the Industrial Canal.
Hurricane Katrina destroyed the power lines supplying the St. Charles Avenue line. The associated levee failures flooded the Mid-City facility storing the red streetcars which normally run on the Riverfront and Canal Street lines. Restoration of service has been gradual, with vintage St. Charles line cars running on the Riverfront and Canal lines until the more modern Czech - built red cars are back in service; they are being individually restored at the RTA 's facility between Willow and Jeannette streets in the Carrollton neighborhood. On December 23, 2007, streetcar service was restored on the St. Charles line up to Carrollton Avenue. The much - anticipated re-opening of the second portion of the historic route, which continues to the intersection of Carrollton Avenue and Claiborne Avenue, was commemorated on June 28, 2008.
The city 's flat landscape, simple street grid, and mild winters facilitate bicycle ridership, helping to make New Orleans eighth among U.S. cities in its rate of bicycle and pedestrian transportation, and sixth in terms of the percentage of bicycling commuters. The city 's bicyclists benefit from being located at the start of the Mississippi River Trail, a 3,000 - mile (4,800 km) bicycle path that stretches from the city 's Audubon Park to Minnesota. The first 25 miles (40 km) of the path, through Destrehan, is paved with a smooth macadam surface. Bicyclists looking to cross the river have access to the city 's ferries. Since the 2005 levee breaches, the city has actively sought to promote bicycling by constructing a $1.5 million bike trail from Mid-City to Lake Pontchartrain, and by adding over 37 miles (60 km) of bicycle lanes to various streets, including St. Charles Avenue. In 2009, Tulane University contributed to these efforts by converting the main street through its Uptown campus, McAlister Place, into a pedestrian mall open to bicycle traffic. In 2010, work began to add a 3.1 - mile (5.0 km) bicycle corridor from the French Quarter to Lakeview, and 14 miles (23 km) of additional bike lanes on existing streets. New Orleans has also been recognized as a place with an abundance of uniquely decorated and uniquely designed bicycles.
Public transportation in the city is operated by the New Orleans Regional Transit Authority ("RTA ''). There are many bus routes connecting the city and suburban areas. The RTA lost more than 200 buses to Hurricane Katrina, which meant waits of 30 -- 60 minutes for the next bus at many stops, and the streetcars took until 2008 to return. The RTA therefore placed an order for 38 Orion VII Next Generation clean diesel buses, which arrived in July 2008 and now run on biodiesel. The Jefferson Parish Department of Transit Administration operates Jefferson Transit, which provides service between the city and its suburbs.
New Orleans is served by Interstate 10, Interstate 610 and Interstate 510. I - 10 travels east -- west through the city as the Pontchartrain Expressway. In the far eastern part of the city, New Orleans East, it is known as the Eastern Expressway. I - 610 provides a direct shortcut for traffic passing through New Orleans via I - 10, allowing that traffic to bypass I - 10 's southward curve. In the future, New Orleans will have another interstate highway, Interstate 49, which will be extended from its current terminus in Lafayette to the city.
In addition to the interstate highways, U.S. 90 travels through the city, while U.S. 61 terminates in the city 's downtown center. In addition, U.S. 11 terminates in the eastern portion of the city.
New Orleans is home to many bridges; the Crescent City Connection is perhaps the most notable. It serves as New Orleans ' major bridge across the Mississippi River, providing a connection between the city 's downtown on the east bank and its west bank suburbs. Other bridges that cross the Mississippi River in the New Orleans area are the Huey P. Long Bridge, over which U.S. 90 travels, and the Hale Boggs Memorial Bridge, which carries Interstate 310.
The Twin Span Bridge, a five - mile (8 km) causeway in eastern New Orleans, carries I - 10 across Lake Pontchartrain. Also in eastern New Orleans, Interstate 510 / LA 47 travels across the Intracoastal Waterway / Mississippi River - Gulf Outlet Canal via the Paris Road Bridge, connecting New Orleans East and suburban Chalmette.
The tolled Lake Pontchartrain Causeway consists of two parallel bridges that are, at nearly 24 miles (39 km) long, among the longest bridges over water in the world. Built in the 1950s (southbound span) and 1960s (northbound span), the bridges connect New Orleans with its suburbs on the north shore of Lake Pontchartrain via Metairie.
The metropolitan area is served by the Louis Armstrong New Orleans International Airport, located in the suburb of Kenner. New Orleans also has several regional airports located throughout the metropolitan area. These include the Lakefront Airport, Naval Air Station Joint Reserve Base New Orleans (locally known as Callender Field) in the suburb of Belle Chasse, and Southern Seaplane Airport, also located in Belle Chasse. Southern Seaplane has a 3,200 - foot (980 m) runway for wheeled planes and a 5,000 - foot (1,500 m) water runway for seaplanes. New Orleans International suffered some damage as a result of Hurricane Katrina, but as of April 2007 it once again handled the most traffic of any airport in Louisiana, and it remains the busiest airport in the state and the sixth busiest in the Southeast. As of 2017, the airport handled more than 11 million passengers, with service to more than 57 destinations. The airport 's international service includes nonstop flights to the United Kingdom, Germany, Canada, Mexico, Honduras, Bahamas, and Dominican Republic.
The city is served by rail via Amtrak. The New Orleans Union Passenger Terminal is the central rail depot, and is served by three trains: the Crescent, operating between New Orleans and New York City; the City of New Orleans, operating between New Orleans and Chicago; and the Sunset Limited, operating through New Orleans between Orlando and Los Angeles. From late August 2005 to the present, the Sunset Limited has remained officially an Orlando - to - Los Angeles train, being considered temporarily truncated due to the lingering effects of Hurricane Katrina. At first (until late October 2005) it was truncated to a San Antonio - to - Los Angeles service; since then (from late October 2005 on) it has been truncated to a New Orleans - to - Los Angeles service. As time has passed, particularly since the January 2006 completion of the rebuilding of damaged tracks east of New Orleans by their owner, CSX Transportation, the obstacles to restoration of the Sunset Limited 's full route have been more managerial and political than physical.
With the strategic benefits of both a major international port and one of the few double - track Mississippi River crossings, the city is served by six of the seven Class I railroads in North America: Union Pacific Railroad, BNSF Railway, Norfolk Southern Railway, Kansas City Southern Railway, CSX Transportation and Canadian National Railway. The New Orleans Public Belt Railroad provides interchange services between the railroads.
Recently, many have proposed extending New Orleans ' public transit system by adding light rail routes from downtown, along Airline Highway through the airport to Baton Rouge and from downtown to Slidell and the Mississippi Gulf Coast. Proponents of this idea claim that these new routes would boost the region 's economy, which has been badly damaged by Hurricane Katrina, and serve as an evacuation option for hospital patients out of the city.
New Orleans has had continuous ferry service since 1827, with three routes in current operation. The Canal Street Ferry (or Algiers Ferry) connects downtown New Orleans at the foot of Canal Street with the National Historic Landmark District of Algiers Point on the other side of the Mississippi River ("West Bank '' in local parlance) and is popular with tourists and locals alike. This downtown ferry terminal also serves the Canal Street / Gretna Ferry, connecting Gretna, Louisiana. The Gretna Ferry serves pedestrians and bicyclists only. The Canal Street Ferry services passenger vehicles, bicycles and pedestrians, as does a third ferry several miles downriver that connects Chalmette, Louisiana and Lower Algiers.
New Orleans has ten sister cities.
where do they go in ferris bueller's day off | Ferris Bueller 's Day Off - wikipedia
Ferris Bueller 's Day Off is a 1986 American teen comedy film written, co-produced, and directed by John Hughes, and co-produced by Tom Jacobson. The film stars Matthew Broderick as Ferris Bueller, a high - school slacker who spends a day off from school, with Mia Sara and Alan Ruck. Ferris regularly "breaks the fourth wall '' to explain techniques and inner thoughts.
Hughes wrote the screenplay in less than a week. Filming began in September 1985 and finished in November 1985. Featuring many landmarks, including the Sears Tower and the Art Institute of Chicago, the film was Hughes ' love letter to Chicago: "I really wanted to capture as much of Chicago as I could. Not just in the architecture and landscape, but the spirit. ''
Released by Paramount Pictures on June 11, 1986, the film became one of the top - grossing films of the year, receiving $70.1 million over a $5.8 million budget, and was enthusiastically acclaimed by critics and audiences alike. In 2014, the film was selected for preservation in the National Film Registry by the Library of Congress, being deemed "culturally, historically, or aesthetically significant. '' In 2016, Paramount, Turner Classic Movies, and Fathom Events re-released the film and Pretty in Pink to celebrate its 30th anniversary.
In suburban Chicago, near the end of the high school year, senior Ferris Bueller fakes sickness to stay home. Throughout the film, Ferris frequently breaks the fourth wall to talk about his friends and give the audience advice on how to skip school. His parents believe him, though his sister Jeanie is not convinced. Dean of Students Edward R. Rooney suspects Ferris is being truant again and commits to catching him. Ferris convinces his friend Cameron Frye, who really is absent due to illness, to help get Ferris ' girlfriend Sloane Peterson out of school by reporting that her grandmother has died. To trick Rooney, Ferris sways Cameron to let them use his father 's prized 1961 Ferrari 250 GT California Spyder to collect Sloane. Cameron is dismayed when Ferris continues to use the car to drive them into downtown Chicago to spend the day, but Ferris promises they will return it as it was.
The trio leave the car with parking garage attendants, who immediately take the car for a joy ride after they leave. Ferris, Cameron, and Sloane sightsee around the city, including the Art Institute of Chicago, Sears Tower, Chicago Mercantile Exchange, and Wrigley Field, while narrowly avoiding being spotted by Mr. Bueller. Cameron remains uninterested, and Ferris attempts to cheer him up by spontaneously joining a parade float during the Von Steuben Day parade and lip - syncing Wayne Newton 's cover of "Danke Schoen '', as well as a rendition of The Beatles ' "Twist and Shout '' that excites the gathered crowds.
Meanwhile, Rooney investigates the Bueller home to try to prove Ferris ' truancy, suffering several pratfalls in the process. At the same time, Jeanie, frustrated that the entire school believes Ferris has come down with a deadly illness, skips class and returns home to confront him, only to hear someone outside trying to break in. Rooney flees while she calls the police; when they arrive, they arrest her for filing a false report and contact her mother to collect her. While waiting, she meets a juvenile delinquent who advises her not to worry so much about Ferris. Mrs. Bueller arrives at the station, upset about having to forgo a house sale, only to find Jeanie kissing the delinquent, which infuriates her further.
Ferris and his friends collect the Ferrari and depart for home, but shortly discover many miles have been added to the odometer and Cameron becomes catatonic. Back at Cameron 's garage, Ferris sets the car on blocks and runs it in reverse to try to take miles off the odometer without success. Cameron finally snaps, and lets out his anger against his controlling father by repeatedly kicking the car. This causes it to fall off the blocks and race in reverse through the back of the garage and into the ravine below. Ferris offers to take the blame, but Cameron asserts he will stand up against his father.
Ferris returns Sloane home and realizes his parents are due home soon. As he races on foot through the neighborhood he is nearly hit by Jeanie, who is driving their mother home. She speeds off trying to beat Ferris home. Ferris makes it home first to find Rooney waiting for him outside. Jeanie races into the house as their mother talks to their father about her behavior that day. Jeanie discovers Rooney threatening Ferris and tells Rooney that she was just helping to return Ferris from the hospital and shows Rooney his wallet that she had found from his earlier break - in. Rooney flees from the family dog while Ferris rushes back to his bedroom to greet his parents while feigning his waning illness. As they leave, Ferris reminds the audience, "Life moves pretty fast. If you do n't stop and look around once in a while, you could miss it. ''
A defeated Rooney heads home and is picked up by a school bus, further humiliated by the students.
As he was writing the film in 1985, John Hughes kept track of his progress in a spiral - bound logbook. He noted that the basic storyline was developed on February 25. It was successfully pitched the following day to Paramount Studios chief Ned Tanen. Tanen was intrigued by the concept, but wary that the Writers Guild of America was hours away from picketing the studio. Hughes wrote the screenplay in less than a week. Editor Paul Hirsch explained that Hughes brought a trance - like concentration to his script - writing process, working for hours on end, and would later shoot the film on essentially what was his first draft of the script. "The first cut of Ferris Bueller 's Day Off ended up at two hours, 45 minutes. The shortening of the script had to come in the cutting room '', said Hirsch. "Having the story episodic and taking place in one day... meant the characters were wearing the same clothes. I suspect that Hughes writes his scripts with few, if any costume changes just so he can have that kind of freedom in the editing. ''
Hughes intended the movie to be more focused on the characters rather than the plot. "I know how the movie begins, I know how it ends '', said Hughes. "I do n't ever know the rest, but that does n't seem to matter. It 's not the events that are important, it 's the characters going through the event. Therefore, I make them as full and real as I can. This time around, I wanted to create a character who could handle everyone and everything. ''
Hughes said that he had Broderick in mind when he wrote the screenplay, saying Broderick was the only actor he could think of who could pull off the role, calling him clever and charming. "Certain guys would have played Ferris and you would have thought, ' Where 's my wallet? ' '' Hughes said. "I had to have that look; that charm had to come through. Jimmy Stewart could have played Ferris at 15... I needed Matthew. '' Other actors who were considered for the role included Jim Carrey, John Cusack, Tom Cruise and Michael J. Fox.
Sara surprised Hughes when she auditioned for the role of Sloane Peterson. "It was funny. He did n't know how old I was and said he wanted an older girl to play the 17 - year - old. He said it would take someone older to give her the kind of dignity she needed. He almost fell out of his chair when I told him I was only 18. '' Molly Ringwald had also wanted to play Sloane, but according to Ringwald, "John would n't let me do it: he said that the part was n't big enough for me. ''
Ruck had previously auditioned for the Bender role in The Breakfast Club which went to Judd Nelson, but Hughes remembered Ruck and cast him as the 17 - year - old Cameron Frye. According to Hughes, the character of Cameron was largely based on a friend of his in high school. "He was sort of a lost person. His family neglected him, so he took that as license to really pamper himself. When he was legitimately sick, he actually felt good, because it was difficult and tiring to have to invent diseases but when he actually had something, he was relaxed. '' Ruck said the role of Cameron had originally been offered to Emilio Estevez who turned it down. "Every time I see Emilio, I want to kiss him '', said Ruck. "Thank you! '' Ruck, then 29, worried about the age difference. "I was worried that I 'd be 10 years out of step, and I would n't know anything about what was cool, what was hip, all that junk. But when I was going to high school, I did n't know any of that stuff then, either. So I just thought, well, hell -- I 'll just be me. The character, he 's such a loner that he really would n't give a damn about that stuff anyway. He 'd feel guilty that he did n't know it, but that 's it. '' Ruck was not surprised to find himself cast young. "No, because, really, when I was 18, I sort of looked 12 '', he said. "Maybe it 's a genetic imbalance. ''
Ruck and Broderick had previously acted together in the Broadway production of Biloxi Blues. Cameron 's Mr. Peterson voice was an in - joke imitation of their former director Gene Saks. Ruck felt at ease working with Broderick, often crashing in his trailer. "We did n't have to invent an instant friendship like you often have to do in a movie '', said Ruck. "We were friends. ''
Jones was cast as Rooney based on his role in Amadeus, where he played the emperor; Hughes thought that character 's modern equivalent was Rooney. "My part was actually quite small in the script, but what seemed to be the important part to me was that I was the only one who was n't swept along by Ferris '', recalls Jones. "So I was the only one in opposition, which presented a lot of opportunities, some of which were n't even in the script or were expanded on. John was receptive to anything I had to offer, and indeed got ideas along the way himself. So that was fun, working with him. '' "Hughes told me at the time -- and I thought he was just blowing his own horn -- he said, ' You are going to be known for this for the rest of your life. ' And I thought, ' Sure '... but he was right. '' To help Jones study for the part, Hughes took him to meet his old vice principal. "This is the guy I want you to pay close attention to, '' Jones explained to Hughes ' biographer Kirk Honeycutt. While meeting him, the VP 's coat momentarily flew open revealing a holster and gun attached to the man 's belt. This made Jones realize what Hughes had envisioned. "The guy was ' Sign up for the Army quick before I kill you! ' '' Jones exclaimed.
Stein says he got the role of Bueller 's Economics teacher through six degrees of separation. "Richard Nixon introduced me to a man named Bill Safire, who 's a New York Times columnist. He introduced me to a guy who 's an executive at Warner Brothers. He introduced me to a guy who 's a casting director. He introduced me to John Hughes. John Hughes and I are among the only Republicans in the picture business, and John Hughes put me in the movie '', Stein said. Hughes said that Stein was an easy and early choice for the role of the teacher: "He was n't a professional actor. He had a flat voice, he looked like a teacher. ''
"Chicago is what I am, '' said Hughes. "A lot of Ferris is sort of my love letter to the city. And the more people who get upset with the fact that I film there, the more I 'll make sure that 's exactly where I film. It 's funny -- nobody ever says anything to Woody Allen about always filming in New York. America has this great reverence for New York. I look at it as this decaying horror pit. So let the people in Chicago enjoy Ferris Bueller. ''
For the film, Hughes got the chance to take a more expansive look at the city he grew up in. "We took a helicopter up the Chicago River. This is the first chance I 'd really had to get outside while making a movie. Up to this point, the pictures had been pretty small. I really wanted to capture as much of Chicago as I could, not just the architecture and the landscape, but the spirit. '' Shooting began in Chicago on September 9, 1985. In late October 1985, the production moved to Los Angeles, and shooting ended on November 22. The Von Steuben Day Parade scene was filmed on September 28. Scenes were filmed at several locations in downtown Chicago and Winnetka (Ferris 's home, his mother 's real estate office, etc.). Many of the other scenes were filmed in Northbrook, Illinois, including at Glenbrook North High School, on School Drive, the long, curvy street on which Glenbrook North and neighboring Maple Middle School are situated. The exterior of Ferris 's house is located at 4160 Country Club Drive, Long Beach, California.
The modernist house of Cameron Frye is located in Highland Park, Illinois. Known as the Ben Rose House, it was designed by architects A. James Speyer, who designed the main building in 1954, and David Haid, who designed the pavilion in 1974. It was once owned by photographer Ben Rose, who had a car collection in the pavilion. In the film Cameron 's father is portrayed as owning a Ferrari 250 GT California in the same pavilion. According to Lake Forest College art professor Franz Shulze, during the filming of the scene where the Ferrari crashes out of the window, Haid explained to Hughes that he could prevent the car from damaging the rest of the pavilion. Haid fixed connections in the wall and the building remained intact. Haid said to Hughes afterward, "You owe me $25,000 '', which Hughes paid. Other scenes were shot in Chicago, River Forest, Oak Park, Northbrook, Highland Park, Glencoe and Winnetka, Lake Forest and Long Beach, California. After Ben Rose 's death in 2009 the house was offered for sale and was sold in 2014.
According to Hughes, the scene at the Art Institute of Chicago was "a self - indulgent scene of mine -- which was a place of refuge for me, I went there quite a bit, I loved it. I knew all the paintings, the building. This was a chance for me to go back into this building and show the paintings that were my favorite. '' The museum had not been shot in, until the producers of the film approached them. "I remember Hughes saying, ' There are going to be more works of art in this movie than there have ever been before, ' '' recalled Jennifer Grey.
According to editor Paul Hirsch, in the original cut, the museum scene fared poorly at test screenings until he switched sequences around and Hughes changed the soundtrack.
The piece of music I originally chose was a classical guitar solo played on acoustic guitar. It was nonmetrical with a lot of rubato. I cut the sequence to that music and it also became nonmetrical and irregular. I thought it was great and so did Hughes. He loved it so much that he showed it to the studio but they just went "Ehhh. '' Then after many screenings where the audience said "The museum scene is the scene we like least '', he decided to replace the music. We had all loved it, but the audience hated it. I said, ' I think I know why they hate the museum scene. It 's in the wrong place. ' Originally, the parade sequence came before the museum sequence, but I realized that the parade was the highlight of the day, there was no way we could top it, so it had to be the last thing before the three kids go home. So that was agreed upon, we reshuffled the events of the day, and moved the museum sequence before the parade. Then we screened it and everybody loved the museum scene! My feeling was that they loved it because it came in at the right point in the sequence of events. John felt they loved it because of the music. Basically, the bottom line is, it worked.
The music used for the final version of the museum sequence is an instrumental cover version of The Smiths ' "Please, Please, Please, Let Me Get What I Want '', performed by The Dream Academy. A passionate Beatles fan, Hughes makes multiple references to them and John Lennon in the script. During filming, Hughes "listened to The White Album every single day for fifty - six days ''. Hughes also pays tribute to his childhood hero Gordie Howe with Cameron 's Detroit Red Wings jersey. "I sent them the jersey '', said Howe. "It was nice seeing the No. 9 on the big screen. ''
In the film, Ferris convinces Cameron to borrow his father 's rare 1961 Ferrari GT California. "The insert shots of the Ferrari were of the real 250 GT California '', Hughes explains in the DVD commentary. "The cars we used in the wide shots were obviously reproductions. There were only 100 of these cars, so it was way too expensive to destroy. We had a number of replicas made. They were pretty good, but for the tight shots I needed a real one, so we brought one in to the stage and shot the inserts with it. ''
Prior to filming, Hughes learned about Modena Design and Development who produced the Modena Spyder California, a replica of the Ferrari 250 GT. Hughes saw a mention of the company in a car magazine and decided to research them. Neil Glassmoyer recalls the day Hughes contacted him to ask about seeing the Modena Spyder:
The first time he called I hung up on him because I thought it was a friend of mine who was given to practical jokes. Then he called back and convinced me it really was him, so Mark and I took the car to his office. While we were waiting outside to meet Hughes this scruffy - looking fellow came out of the building and began looking the car over; we thought from his appearance he must have been a janitor or something. Then he looked up at a window and shouted, ' This is it! ' and several heads poked out to have a look. That scruffy - looking fellow was John Hughes, and the people in the window were his staff. Turned out it was between the Modena Spyder and a Porsche Turbo, and Hughes chose the Modena.
Automobile restorationist Mark Goyette designed the kits for three reproductions used in the film and chronicled the whereabouts of the cars today:
One of the "replicars '' was sold by Bonhams on April 19, 2010, at the Royal Air Force Museum at Hendon, United Kingdom for £ 79,600.
The "replicar '' was "universally hated by the crew '', said Ruck. "It did n't work right. '' The scene in which Ferris turns off the car to leave it with the garage attendant had to be shot a dozen times because it would not start. The car was built with a real wheel base, but used a Ford V8 engine instead of a V12. At the time of filming, the original 250 GT California model was worth $350,000. Since the release of the film, it has become one of the most expensive cars ever sold, going at auction in 2008 for $10,976,000 and more recently in 2015 for $16,830,000. The vanity plate of Cameron 's dad 's Ferrari spells NRVOUS and the other plates seen in the film are homages to Hughes 's earlier works, VCTN (National Lampoon 's Vacation), TBC (The Breakfast Club), MMOM (Mr. Mom), as well as 4FBDO (Ferris Bueller 's Day Off).
Ben Stein 's famous monotonous lecture about the Smoot - Hawley Tariff Act was not originally in Hughes 's script. Stein, by happenstance, was lecturing off - camera to the amusement of the student cast. "I was just going to do it off camera, but the student extras laughed so hard when they heard my voice that (Hughes) said do it on camera, improvise, something you know a lot about. When I gave the lecture about supply - side economics, I thought they were applauding. Everybody on the set applauded. I thought they were applauding because they had learned something about supply - side economics. But they were applauding because they thought I was boring... It was the best day of my life '', Stein said.
The parade scene took multiple days of filming; Broderick spent some time practicing the dance moves. "I was very scared '', Broderick said. "Fortunately, the sequence was carefully choreographed beforehand. We worked out all the moves by rehearsing in a little studio. It was shot on two Saturdays in the heart of downtown Chicago. The first day was during a real parade, and John got some very long shots. Then radio stations carried announcements inviting people to take part in ' a John Hughes movie '. The word got around fast and 10,000 people showed up! For the final shot, I turned around and saw a river of people. I put my hands up at the end of the number and heard this huge roar. I can understand how rock stars feel. That kind of reaction feeds you. ''
Broderick 's moves were choreographed by Kenny Ortega (who later choreographed Dirty Dancing). Much of it had to be scrapped though as Broderick had injured his knee badly during the scenes of running through neighbors ' backyards. "I was pretty sore '', Broderick said. "I got well enough to do what you see in the parade there, but I could n't do most of Kenny Ortega 's knee spins and things like that that we had worked on. When we did shoot it, we had all this choreography and I remember John would yell with a megaphone, ' Okay, do it again, but do n't do any of the choreography, ' because he wanted it to be a total mess. '' "Danke Schoen '' was somewhat choreographed but for "Twist and Shout '', Broderick said, "we were just making everything up ''. Hughes explained that much of the scene was spontaneously filmed. "It just happened that this was an actual parade, which we put our float into -- unbeknownst to anybody, all the people on the reviewing stand. Nobody knew what it was, including the governor. ''
Wrigley Field is featured in two interwoven and consecutive scenes. In the first scene, Rooney is looking for Ferris at a pizza joint while the voice of Harry Caray announces the action of a ballgame that is being shown on TV. From the play - by - play descriptions, the uniforms, and the player numbers, this game has been identified as the June 5, 1985, game between the Atlanta Braves and the Chicago Cubs. The batter rips a foul ball into the left field stands, and as Rooney looks away from the TV briefly, the TV cameras show a close up of Ferris a moment after catching it. The scene in the pizza joint continues as Rooney tries to banter about the game with the guy behind the counter.
In the next scene, Sloane, Cameron, and Ferris are in the left field stands inside Wrigley. Ferris flexes his hand in pain after supposedly catching the foul ball. During this scene, the characters enjoy the game and joke about what they would be doing if they had played by the rules. All these "in the park '' shots, including the one from the previous scene where Ferris catches the foul ball on TV, were filmed on September 24, 1985, at a game between the Montreal Expos and the Cubs. During the 1985 season, the Braves and the Expos both wore powder blue uniforms during their road games. And so, with seamless editing by Hughes, it is difficult to distinguish that the game being seen and described in the pizza joint is not only a different game but also a different Cubs ' opponent than the one filmed inside the stadium.
John Hughes had originally wanted to film the scene at the baseball game at Comiskey Park, as Hughes was a Chicago White Sox fan. However, due to time constraints, the location was moved to Wrigley Field at the last minute.
On October 1, 2011, Wrigley Field celebrated the 25th anniversary of the film by showing it on three giant screens on the infield.
Several scenes were cut from the final film; one lost scene entitled "The Isles of Langerhans '' has the three teenagers trying to order in the French restaurant, shocked to discover pancreas on the menu (although in the finished film, Ferris still says, "We ate pancreas '', while recapping the day). This is featured on the Bueller, Bueller Edition DVD. Other scenes were never made available on any DVD version. These scenes included additional screen time with Jeanie in a locker room, Ferris ' younger brother and sister (both of whom were completely removed from the film), and additional / alternate lines of dialogue throughout the film, all of which can be seen in the original theatrical trailer. Hughes had also wanted to film a scene where Ferris, Sloane, and Cameron go to a strip club. Paramount executives told him there were only so many shooting days left, so the scene was scrapped.
An official soundtrack was not originally released for the film, as director John Hughes felt the songs would not work well together as a continuous album. However, according to an interview with Lollipop Magazine, Hughes noted that he had sent 100,000 7 '' vinyl singles containing two songs featured in the film to members of his fan mailing list.
Hughes gave further details about his refusal to release a soundtrack in the Lollipop interview:
The only official soundtrack that Ferris Bueller 's Day Off ever had was for the mailing list. A&M was very angry with me over that; they begged me to put one out, but I thought "who 'd want all of these songs? '' I mean, would kids want "Danke Schoen '' and "Oh Yeah '' on the same record? They probably already had "Twist and Shout '', or their parents did, and to put all of those together with the more contemporary stuff, like the (English) Beat -- I just did n't think anybody would like it. But I did put together a seven - inch of the two songs I owned the rights to -- "Beat City '' on one side, and... I forget, one of the other English bands on the soundtrack... and sent that to the mailing list. By ' 86, ' 87, it was costing us $30 a piece to mail out 100,000 packages. But it was a labor of love.
Songs featured in the film include:
"Danke Schoen '' is one of the recurring motifs in the film and is sung by Ferris, Ed Rooney, and Jeanie. Hughes called it the "most awful song of my youth. Every time it came on, I just wanted to scream, claw my face. I was taking German in high school -- which meant that we listened to it in school. I could n't get away from it. '' According to Broderick, Ferris 's singing "Danke Schoen '' in the shower was his idea. "Although it 's only because of the brilliance of John 's deciding that I should sing "Danke Schoen '' on the float in the parade. I had never heard the song before. I was learning it for the parade scene. So we 're doing the shower scene and I thought, ' Well, I can do a little rehearsal. ' And I did something with my hair to make that Mohawk. And you know what good directors do: they say, ' Stop! Wait till we roll. ' And John put that stuff in. ''
The soundtrack for the film, limited to 5,000 copies, was released on September 13, 2016 by La - La Land Records. The album includes new wave and pop songs featured in the film, as well as Ira Newborn 's complete score, including unused cues. Due to licensing restrictions, "Twist and Shout, '' "Taking The Day Off, '' and "March of the Swivelheads '' were not included, but are available elsewhere. The Flowerpot Men 's "Beat City '' makes its first official release on CD with a new mix done by The Flowerpot Men 's Ben Watkins and Adam Peters that differs from the original 7 '' fan club release.
The film largely received positive reviews from critics. It is "Certified Fresh '' on Rotten Tomatoes, having an aggregated score of 79 % (based on 63 critics ' reviews), and an average rating of 7.7 / 10. The site 's consensus reads "Matthew Broderick charms in Ferris Bueller 's Day Off, a light and irrepressibly fun movie about being young and having fun. '' Roger Ebert gave it three out of four stars, calling it "one of the most innocent movies in a long time, '' and "a sweet, warm - hearted comedy. '' Richard Roeper called the film "one of my favorite movies of all time. It has one of the highest ' repeatability ' factors of any film I 've ever seen... I can watch it again and again. There 's also this, and I say it in all sincerity: Ferris Bueller 's Day Off is something of a suicide prevention film, or at the very least a story about a young man trying to help his friend gain some measure of self - worth... Ferris has made it his mission to show Cameron that the whole world in front of him is passing him by, and that life can be pretty sweet if you wake up and embrace it. That 's the lasting message of Ferris Bueller 's Day Off. '' Roeper pays homage to the film with a license plate that reads "SVFRRIS ''. Conservative columnist George Will hailed Ferris as "the moviest movie, '' a film "most true to the general spirit of the movies, the spirit of effortless escapism. ''
Essayist Steve Almond called Ferris "the most sophisticated teen movie (he) had ever seen, '' adding that while Hughes had made a lot of good movies, Ferris was the "one film (he) would consider true art, (the) only one that reaches toward the ecstatic power of teendom (sic) and, at the same time, exposes the true, piercing woe of that age. '' Almond also applauded Ruck 's performance, going so far as saying he deserved the Academy Award for Best Supporting Actor of 1986: "His performance is what elevates the film, allows it to assume the power of a modern parable. '' The New York Times reviewer Nina Darnton criticized Mia Sara 's portrayal of Sloane for lacking "the specific detail that characterized the adolescent characters in Hughes 's other films '', asserting she "created a basically stable but forgettable character. '' Conversely, Darnton praised Ruck and Grey 's performances: "The two people who grow in the movie -- Cameron, played with humor and sensitivity by Alan Ruck, and Ferris 's sister Jeanie, played with appropriate self - pity by Jennifer Grey -- are the most authentic. Grey manages to play an insufferably sulky teen - ager who is still attractive and likable. ''
National Review writer Mark Hemingway lauded the film 's celebration of liberty. "If there 's a better celluloid expression of ordinary American freedom than Ferris Bueller 's Day Off, I have yet to see it. If you could take one day and do absolutely anything, piling into a convertible with your best girl and your best friend and taking in a baseball game, an art museum, and a fine meal seems about as good as it gets, '' wrote Hemingway.
Others were less enamored with Ferris, many taking issue with the film 's "rebel without a cause '' hedonism. David Denby of New York Magazine, called the film "a nauseating distillation of the slack, greedy side of Reaganism. '' Author Christina Lee agreed, adding it was a "splendidly ridiculous exercise in unadulterated indulgence, '' and the film "encapsulated the Reagan era 's near solipsist worldview and insatiable appetite for immediate gratification -- of living in and for the moment... '' Gene Siskel panned the film from a Chicago - centric perspective saying "Ferris Bueller does n't do anything much fun... (t) hey do n't even sit in the bleachers where all the kids like to sit when they go to Cubs games. '' Siskel did enjoy the chemistry between Jennifer Grey and Charlie Sheen. Ebert thought Siskel was too eager to find flaws in the film 's view of Chicago.
Broderick was nominated for a Golden Globe Award in 1986 for Best Actor -- Motion Picture Musical or Comedy.
The film opened in 1,330 theaters in the United States and had a total weekend gross of $6,275,647, opening at # 2. Ferris Bueller 's Day Off 's total gross in the United States was approximately $70,136,369, making it a box office success. It subsequently became the 10th - highest - grossing film of 1986.
As an influential and popular film, Ferris Bueller 's Day Off has been included in many film rating lists. The film is number 54 on Bravo 's "100 Funniest Movies '', came 26th in the British 50 Greatest Comedy Films and ranked number 10 on Entertainment Weekly 's list of the "50 Best High School Movies ''.
First Lady Barbara Bush paraphrased the film in her 1990 commencement address at Wellesley College: "Find the joy in life, because as Ferris Bueller said on his day off, ' Life moves pretty fast; if you do n't stop and look around once in a while, you could miss it! ' '' Responding to the audience 's enthusiastic applause, she added "I 'm not going to tell George ya clapped more for Ferris than ya clapped for George. ''
Other phrases from Ferris Bueller 's Day Off such as Stein 's nasally - voiced "Bueller?... Bueller?... Bueller? '' (while taking roll call in class), and "Anyone? Anyone? '' (trying to probe the students for answers) as well as Kristy Swanson 's cheerful "No problem whatsoever! '' also permeated popular culture. In fact, Stein 's monotone performance launched his acting career. In 2016, Stein reprised the attendance scene in a campaign ad for Iowa Senator Charles Grassley; Stein intoned the last name of Grassley 's opponent (Patty Judge), to silence, while facts about her missed votes and absences from state board meetings were listed. Stein then calls out "Grassley, '' which gets a response; Stein mutters, "He 's always here. ''
Broderick said of the Ferris Bueller role, "It eclipsed everything, I should admit, and to some degree it still does. '' Later at the 2010 Oscar tribute to Hughes, he said, "For the past 25 years, nearly every day someone comes up to me, taps me on the shoulder and says, ' Hey, Ferris, is this your day off? ' ''
Ruck says that with Cameron Frye, Hughes gave him "the best part I ever had in a movie, and any success that I 've had since 1985 is because he took a big chance on me. I 'll be forever grateful. '' "While we were making the movie, I just knew I had a really good part '', Ruck says. "My realization of John 's impact on the teen - comedy genre crept in sometime later. Teen comedies tend to dwell on the ridiculous, as a rule. It 's always the preoccupation with sex and the self - involvement, and we kind of hold the kids up for ridicule in a way. Hughes added this element of dignity. He was an advocate for teenagers as complete human beings, and he honored their hopes and their dreams. That 's what you see in his movies. ''
Broderick starred in a television advertisement prepared by Honda promoting its CR - V for the 2012 Super Bowl XLVI. The ad pays homage to Ferris Bueller, featuring Broderick (as himself) faking illness to skip out of work to enjoy sightseeing around Los Angeles. Several elements, such as the use of the song "Oh Yeah '', and a valet monotonously calling for "Broderick... Broderick... '', appear in the ad. A teaser for the ad had appeared two weeks prior to the Super Bowl, which had created rumors of a possible film sequel. It was produced by Santa Monica - based RPA and directed by Todd Phillips. AdWeek 's Tim Nudd called the ad "a great homage to the original 1986 film, with Broderick this time calling in sick to a film shoot and enjoying another day of slacking. '' On the other hand, Jalopnik 's Matt Hardigree called the spot "sacrilegious ''.
In March 2017, Domino 's Pizza began an advertising campaign parodying the film, featuring actor Joe Keery in the lead role.
The film 's influence in popular culture extends beyond the film itself to how musical elements of the film have been received as well, for example, Yello 's song "Oh Yeah ''. As Jonathan Bernstein explains, "Never a hit, this slice of Swiss - made tomfoolery with its varispeed vocal effects and driving percussion was first used by John Hughes to illustrate the mouthwatering must - haveness of Cameron 's dad 's Ferrari. Since then, it has become synonymous with avarice. Every time a movie, TV show or commercial wants to underline the jaw - dropping impact of a hot babe or sleek auto, that synth - drum starts popping and that deep voice rumbles, ' Oh yeah... ' '' Concerning the influence of another song used in the film, Roz Kaveney writes that some "of the finest moments in later teen film draw on Ferris 's blithe Dionysian fervour -- the elaborate courtship by song in 10 Things I Hate About You (1999) draws usefully on the "Twist and Shout '' sequence in Ferris Bueller 's Day Off ".
The bands Save Ferris and Rooney were named in allusion to Ferris Bueller 's Day Off.
"Twist and Shout '' charted again, 16 years after the Beatles broke up, as a result of its prominent appearance in both this film and Back To School (where Rodney Dangerfield performs a cover version) which was released the same weekend as Ferris Bueller 's Day Off. The re-released single reached # 23 in the U.S; a US - only compilation album containing the track The Early Beatles, re-entered the album charts at # 197. The version heard in the film includes brass overdubbed onto the Beatles ' original recording, which did not go down well with Paul McCartney. "I liked (the) film but they overdubbed some lousy brass on the stuff! If it had needed brass, we 'd had stuck it on ourselves! '' Upon hearing McCartney 's reaction, Hughes felt bad for "offend (ing) a Beatle. But it was n't really part of the song. We saw a band (onscreen) and we needed to hear the instruments. ''
Broderick and Hughes stayed in touch for a while after production. "We thought about a sequel to Ferris Bueller, where he 'd be in college or at his first job, and the same kind of things would happen again. But neither of us found a very exciting hook to that. The movie is about a singular time in your life. '' "Ferris Bueller is about the week before you leave school, it 's about the end of school -- in some way, it does n't have a sequel. It 's a little moment and it 's a lightning flash in your life. I mean, you could try to repeat it in college or something but it 's a time that you do n't keep. So that 's partly why I think we could n't think of another '', Broderick added. "But just for fun '', said Ruck, "I used to think why do n't they wait until Matthew and I are in our seventies and do Ferris Bueller Returns and have Cameron be in a nursing home. He does n't really need to be there, but he just decided his life is over, so he committed himself to a nursing home. And Ferris comes and breaks him out. And they go to, like, a titty bar and all this ridiculous stuff happens. And then, at the end of the movie, Cameron dies. ''
Many scholars have discussed at length the film 's depiction of academia and youth culture. For Martin Morse Wooster, the film "portrayed teachers as humorless buffoons whose only function was to prevent teenagers from having a good time ''. Regarding not specifically teachers, but rather a type of adult characterization in general, Art Silverblatt asserts that the "adults in Ferris Bueller 's Day Off are irrelevant and impotent. Ferris 's nemesis, the school disciplinarian, Mr. Rooney, is obsessed with ' getting Bueller. ' His obsession emerges from envy. Strangely, Ferris serves as Rooney 's role model, as he clearly possesses the imagination and power that Rooney lacks... By capturing and disempowering Ferris, Rooney hopes to... reduce Ferris 's influence over other students, which would reestablish adults, that is, Rooney, as traditional authority figures. '' Nevertheless, Silverblatt concludes that "Rooney is essentially a comedic figure, whose bumbling attempts to discipline Ferris are a primary source of humor in the film ''. Thomas Patrick Doherty writes that "the adult villains in teenpics such as... Ferris Bueller 's Day Off (1986) are overdrawn caricatures, no real threat; they 're played for laughs ''. Yet Silverblatt also remarks that casting "the principal as a comic figure questions the competence of adults to provide young people with effective direction -- indeed, the value of adulthood itself ''.
Adults are not the stars or main characters of the film, and Roz Kaveney notes that what "Ferris Bueller brings to the teen genre, ultimately, is a sense of how it is possible to be cool and popular without being rich or a sports hero. Unlike the heroes of Weird Science, Ferris is computer savvy without being a nerd or a geek -- it is a skill he has taken the trouble to learn. ''
In 2010, English comedian Dan Willis performed his show "Ferris Bueller 's Way Of... '' at the Edinburgh Festival, delving into the philosophy of the movie and looking for life answers within.
The film was first released on VHS and Laserdisc in 1987, and then re-released on VHS in 1996. The film has been released on DVD three times: the original DVD release on October 19, 1999, the Bueller... Bueller edition in January 2006, and the I Love the '80s edition on August 19, 2008. The original DVD, like most Paramount Pictures films released on DVD for the first time, has very few bonus features, but it does feature a commentary by Hughes. Though this edition is no longer available for sale, the director's commentary has since been made available online. The Bueller... Bueller re-release has several more bonus features, such as interviews with the cast and crew and a clip of Stein's commentaries on the film's philosophy and impact, but does not contain the commentary track of the original DVD release. The I Love the '80s edition is identical to the first DVD release (no features aside from commentary), but includes a bonus CD with songs from the 1980s; the songs are not featured in the film. The Blu-ray Disc release (which is a part of the Bueller... Bueller edition, with the same bonus material) was first issued on May 5, 2009. A 25th anniversary edition was released on DVD and Blu-ray on August 2, 2011.
In 1990, a television series called Ferris Bueller premiered on NBC, starring Charlie Schlatter as Ferris Bueller, Jennifer Aniston as Jeanie Bueller, and Ami Dolenz as Sloane Peterson. The series served as a prequel to the film. In the pilot episode, the audience sees Schlatter cutting up a cardboard cutout of Matthew Broderick, saying that he hated Broderick's performance as him. It was produced by Maysh, Ltd. Productions in association with Paramount Television. In part because of competition from a similar series on the Fox network, Parker Lewis Can't Lose, the series was canceled after the first thirteen episodes aired. Both Schlatter and Aniston later had success on other TV shows, Schlatter on Diagnosis: Murder and Aniston on Friends.
|
how many episodes are in season 4 of skam | Skam (TV series) - wikipedia
Skam (Norwegian pronunciation: [skɑm]; English: Shame) is a Norwegian teen drama web series about the daily life of teenagers at the Hartvig Nissen School, a gymnasium in the wealthy borough of Frogner in West End Oslo. It was produced by NRK P3, which is part of the Norwegian government-owned NRK.
Skam follows a new main character each season. While the show aired, a new clip, conversation or social media post was published in real time on the NRK website on a daily basis. Each season focuses on particular topics, including relationship difficulties, identity, eating disorders, sexual assault, homosexuality, mental health issues, religion, and forbidden love.
Despite no promotion ahead of its 2015 launch, Skam broke viewership records. Its premiere episode is among the most - watched episodes in NRK 's history, and by the middle of season two, it was responsible for half of NRK 's traffic. With season three, it broke all streaming records in Norway, along with viewership records in neighboring countries Denmark and Sweden, and attracted an active international fanbase on social media, where fans promoted translations to aid in understanding. The show repeatedly made international headlines for its popularity surge across the world, and the show 's actors became famous worldwide. However, the international popularity was not expected, with the music industry requiring geoblocking of NRK 's website due to music license contracts only supporting the Norwegian public. The series ended after its fourth season in 2017, reportedly due to high production stress.
Skam received critical acclaim and significant recognition for its portrayal of sexual abuse in the second season and homosexuality in the third. The series was also praised for its contributions to promote Norwegian language and culture internationally, as well as for its unique distribution format, adopting a new strategy of real - time, high - engagement, snippet - based distribution rather than rigidity and television schedules. It received multiple Norwegian awards throughout its run, including "Best Drama '' and "People 's Choice Award ''. The series has an American adaptation in production led by Skam showrunner Julie Andem, as well as local remakes in multiple European countries. In 2018, members of the British royal family, specifically Prince William and his wife Catherine, visited Norway to meet the actors and discuss the series, as well as to talk about youth and mental health, a further sign of its international acclaim.
The series focuses on the daily life of teenagers at the Hartvig Nissen School (Hartvig Nissens skole), a prestigious gymnasium (preparatory high school) located in the Frogner borough in Oslo 's West End, with the address Niels Juels gate (Niels Juel Street) 56. The school is informally and widely known simply as "Nissen. '' Originally named Nissen 's Girls ' School, it was founded by Hartvig Nissen in 1849 as a private girls ' school which was owned by its headmasters and which served the higher bourgeoisie. It is the second oldest gymnasium in Oslo and is widely considered one of the country 's most prestigious; its alumni include many famous individuals and two members of the Norwegian royal family. The school was the first higher school in Norway which admitted women.
At the start of a week, a clip, conversation or social media post is posted on the Skam website. New material is posted on a daily basis, with the content unified and combined into one full episode on Fridays. The main character differs from season to season, and the fictional characters have social media profiles where viewers can follow their activities.
The following are characters in Skam.
In season one, David Alexander Sjøholt was credited as playing a character called David, though the name was never spoken in the show; he may have been playing Magnus, or a different student altogether. Sjøholt also makes an uncredited appearance in the last episode of season two.
In season three, Mikael appears uncredited in a video of Even that Isak finds on the Internet.
In the final episode of season four, the main character changes between each clip. Vilde, Penetrator - Chris, Jonas, Chris, Even and William each have a clip where they are the main character. Eskild and Linn have one clip together.
The first clip from season 1 was made available on Tuesday, 22 September 2015, with the combined clips during the week premiering as a full episode on Friday, 25 September 2015. The season consists of 11 episodes; the main character is Eva Mohn. The storyline deals with Eva 's difficult relationship with her boyfriend Jonas and the themes of loneliness, identity, belonging and friendship.
The first clip from season 2 was made available on Monday, 29 February 2016, with the combined clips during the week premiering as a full episode on Friday, 4 March 2016. The season consists of 12 episodes; the main character is Noora Amalie Sætre. The season is about her relationship with William and deals with issues of friendship, feminism, eating disorder, self - image, violence, sexual violence and the contemporaneous refugee crisis in relation to Norwegian democracy.
The first clip from season 3 was made available on Sunday, 2 October 2016, with the combined clips during the week premiering as a full episode on Friday, 7 October 2016. The season consists of 10 episodes; the main character is Isak Valtersen. The season deals with Isak 's burgeoning relationship with Even Bech Næsheim and is principally a coming out story that deals with issues of love, sexual identity, authenticity, mental illness, religion and friendship.
The first clip from season 4 was made available on Monday, 10 April 2017, with the combined clips during the week premiering as a full episode on Friday, 14 April 2017. The season consists of 10 episodes; the main character is Sana Bakkoush. The season deals with the Islamic religion, forbidden love, cyberbullying, friendship, and the Norwegian russ celebratory period.
The series finale episode switches character clip - to - clip, focusing on short stories by characters not given their own, full season. The episode deals with parental depression, love rejection, jealousy, friendship, mutual relationship support, and fear of abandonment.
Julie Andem created Skam. In an interview with Rushprint in April 2016, Andem discussed production of the series. Originally developed for 16 - year - old girls, Andem made use of the "NABC '' production model ("Needs / Approach / Benefit / Competition ''), and instead of collecting information from a vast amount of sources, she had extensive, hours - long interviews with a single representative to uncover what needs that specific target audience had in order to cover that story. In contrast to American shows, which were the primary competition for shows attracting attention from teenagers, Andem stated that she had one advantage; knowing who the audience were and what culture they grew up in. One major area of exploration Andem found through research was pressure; she stated that "the pressure to perform is very high for this target audience. They strive to perform in so many ways. That 's fine, and it does n't necessarily have to be dangerous or unhealthy. But what is unhealthy is that many feel like they ca n't live up to the demands, and therefore feel that they failed. They are comparing themselves to each other, not themselves. And then a thought occurred: How to get them to let go of the pressure through a series like Skam ''. Andem wanted the show to be a combination of social realism, soap opera, and sitcom, transitioning between the genres as the scenes switch, for example from the comical scenes of a doctor 's office to the make - out scenes on the stairs. She admitted it did n't always work, saying that one particular scene change in episode 5 of season 1 from the stairs to the doctor 's office was "a dramatic jump '', and elaborated that "in a later scene, I told the photographer that we maybe should try to go a little closer. But we did n't get the humor, so: fuckit, we 'll shoot sitcom - ish and blend the genres. ''
Andem created nine characters, without any backstory. Everyone was supposed to be able to lead a season, and the show was going to switch character season - to - season. 1,200 people auditioned for the roles in the first round of casting. As production started, Andem wrote scripts for the shows, and there was no improvisation. "A lot of people think much of the show is improvised. It 's not. A lot is written for the actors. And before and after a scene, I 'll wait for a while before I say thank you and let them play a little in the scene. If a scene does n't work, we 'll fix it and see what in the script does n't work. '' Production had a short deadline, with scripts written in three days, one - and - a-half days to shoot, and four - to - five days to edit. "The plan must be there, and we just have to finish through ''. The series ' use of real - time was planned from early on, and Andem wrote the series in episodic format, although the content also had to work for daily releases, including a cliffhanger ending in each scene. Andem read the comments for each day, and looked for feedback from the audience on how to end each season while still keeping her original plans in some way.
As the series premiered, there was little or no promotion for the show, because the production team and NRK wanted teenagers to find it on their own and spread the news through social media, without the older generation even noticing the series. There were no launch interviews, no reviews, and the actors were shielded from the media, with NRK P3 editorial chief Håkon Moslet saying: "We want most of the focus to be on the show. These actors are very young, I think it's good they're being shielded a little. They do also notice the popularity of the series".
The Norwegian newspaper Dagbladet reported in December 2016 that the production of seasons 2 and 3 of Skam had cost a combined sum of NOK 10 million. NRK P3 editorial chief Håkon Moslet stated: "For being a drama of high quality, Skam is a very cost - efficient production. ''
In December 2016, the series was renewed for a fourth season. In early April 2017, it was announced that the first clip from the fourth season would premiere on 10 April, and that it would be the last season of the series. NRK P3 editorial chief Håkon Moslet stated that the making of Skam had been "an extreme sport '', and in an Instagram post, creator, writer and director Julie Andem wrote that "Skam has been a 24 / 7 job. It has also been amazingly fun to work on, and I really believe that has given the series a unique energy, and ensured that Skam continues to surprise and entertain. We recently decided that we wo n't be making a new season this fall. I know many of you out there will be upset and disappointed to hear this, but I 'm confident this is the right decision. ''
On 23 June 2017, one day before the series finale, the entire cast officially met the press before the series ' wrap party, answering questions from fans around the world and describing their experiences and memories from production. It was notably the first time all the actors were allowed to break their silence and speak to the public.
In December 2016, Simon Fuller's XIX Entertainment production company signed a deal with NRK to produce an American version of the series called Shame for the U.S. and Canada. The series will introduce new characters and actors, but retain the storytelling format of Skam. Location scouting and pre-production are in progress, with an expected debut in 2018. In October 2017, during the MIPCOM trade show, it was announced that the show will air on Facebook's "Facebook Watch" original video platform.
In April 2017, the Danish theatre Aveny-T was reported to have acquired exclusive rights to produce a stage version of Skam. Four different performances will be made, one for each season, with the first show having taken place in Copenhagen on 15 September 2017 and the remaining three performances produced once a year through 2020.
In September 2017, French entertainment website AlloCiné reported on the imminent production of a French remake of the series.
In October 2017, Variety and The Hollywood Reporter reported that local adaptations of Skam would be produced in five European countries: Germany, France, Italy, Spain and the Netherlands. NRK CEO Thor Gjermund Eriksen said in a statement that "We are very excited about the tremendous interest that Skam/Shame has generated outside of Norway. The creators of Skam aimed to help 16-year-old girls strengthen their self-esteem through dismantling taboos, making them aware of interpersonal mechanisms and showing them the benefits of confronting their fears. This is a vision we are proud to bring to other countries". Variety notes that each local production will be required to do its own local research into the dilemmas and dreams of its teenagers, rather than copying the original Norwegian production.
In Norway, the series is available on the radio channel NRK P3's website and on the web television service NRK TV. The weekly episodes are also aired on Fridays on the TV channel NRK3. The series has been licensed to air as a Nordvision co-production by public service broadcasters in other Nordic countries, including SVT in Sweden, DR in Denmark, and Yle in Finland.
In Norway, on average, about 192,000 viewers watched the first season, with the first episode being one of the most viewed of all time on NRK TV online. In the first week of June 2016, streaming of Skam was responsible for over half of the traffic on NRK TV. Following the release of the third-season finale, NRK stated that the second season had an average audience of 531,000, while the third season broke all streaming records on its NRK TV service with an average audience of 789,000 people. The trailer for the fourth season, released on 7 April 2017, was watched by 900,000 people within four days. During the start of the fourth season, 1.2 million unique users had visited Skam's website, and the first episode had been watched by 317,000 people. NRK P3 editorial chief Håkon Moslet told Verdens Gang that "We see that there is high traffic and high interest for season 4. Since the end of the last season we have seen a pattern in viewer interest: numbers are high in the first week and towards the end of the season, when the drama kicks in." In May 2017, NRK published a report on 2016 viewing statistics, writing that the third season broke both the streaming record for a series on NRK TV and the record for streaming of any series in Norway.
An October 2016 Aftenposten report detailed that Skam had become popular in Sweden, with "well over 5000 '' viewers with Swedish IP addresses watching the episodes, not counting the individual clips. A later report from Verdens Gang in January 2017 stated that Skam had "broken all records '' in Sweden, with over 25 million plays on SVT Play. Following the series ' licensing deal for broadcasting in Denmark, the series broke records in January 2017, with the show 's first episode scoring 560,000 viewers on DR TV. In Finland, the first episode had more than 130,000 views by the end of February 2017, two and half months after its release, described by Yle audience researcher Anne Hyvärilä as "quite exceptional ''.
Skam has received critical acclaim. The newspaper NATT&DAG selected it as the best TV series of 2015. In its second season, Kripos, Norway 's National Criminal Investigation Service, praised the series ' handling of sexual abuse, including the girls ' encouraging the victim, Noora, to go to the emergency room to explain the situation and gather evidence of the abuse, and Noora confronting her abuser with relevant laws he has broken to prevent the sharing of photographs showing her naked. The National Center for Prevention of Sexual Assault also praised the portrayal, adding that they wish for the series to become a syllabus in schools. In the third season, Martine Lunder Brenne of Verdens Gang praised the theme of homosexuality and wrote that "I praise it first and foremost because young homosexual people, both in and outside the closet, finally get some long - awaited and modern role models. It does n't matter if it 's a character in a fictional drama - right now, Skam is Norway 's coolest show ''. In the fourth season, Christopher Pahle of Dagbladet praised a conversation about religion, writing: "two young people, with Muslim backgrounds, have a reflected, respectful and enlightening conversation about religion without arguing or taking it to the trenches. Think about that. They pick flowers and dribble a ball, and even if they do n't necessarily convince each other, that 's not the purpose either. The point is that they understand each other ''.
Skam has been recognised for its contributions to promote Norwegian language and culture, and to foster affinity between Nordic countries. In December 2016, the Nordic Association awarded Skam the annual Nordic Language Prize for its ability to engage a young Nordic audience, connecting with young people across the Nordic region and fostering positive attitudes about the region 's neighbouring languages. In April 2017, Skam and its creator Julie Andem were awarded the Peer Gynt Prize, an award given to a person or institution that has had a positive impact on society and made Norway famous abroad.
In June 2017, just prior to the show 's ending, Aftenposten published a report featuring interviews with many well - known Norwegian television creators, writers and directors, all praising Skam showrunner Julie Andem for her creative work on the show. Praise was directed at the series ' "unpolished '' nature, her ability to maintain "such a high level of quality over a long period of time '', the series ' blend of different sexualities and ethnicities and use of dialogue to resolve issues, and the show 's compassion, thereby its ability to truly capture its generational audience.
The show 's series finale received positive reviews. Vilde Sagstad Imeland of Verdens Gang praised the final clip for being a "worthy and emotional ending ''. Cecilie Asker of Aftenposten wrote that "The very last episode of Skam leaves us with a big sorrow, a sore loss, and a craving for more. It could n't have been better. ''
On 1 July 2017, during the celebration of Oslo Pride, Skam, its creator Julie Andem, and actors Tarjei Sandvik Moe, Henrik Holm and Carl Martin Eggesbø were awarded the "Fryd '' award, an award given to persons or organizations that break the norms in gender and sexuality in a positive manner.
In February 2018, Prince William and his wife Kate, members of the British royal family, visited the Hartvig Nissen school to meet with the cast and learn more about Skam, its impact on the actors ' lives and to discuss youth and mental health.
Starting with season 3, the show reached a foreign audience, and NRK received many requests to add English subtitles to the Skam episodes online. The requests were declined; NRK explained that the licenses for the music featured throughout the series were restricted to a Norwegian audience, and that easy availability outside Norway would violate the terms of its license agreements. NRK also took action against attempts to unofficially spread substantial amounts of video content with English subtitles on the internet.
Denied official subtitles, fans started making their own translations of the episodes into several world languages, greatly expanding the online fanbase. Norwegian viewers were quick to share translated clips soon after they became available, through Google Drive, blogs, and language courses.
By the end of 2016, Skam had trended globally several times on Twitter and Tumblr, and its Facebook, Instagram and Vine presence grew rapidly. On social media, fans created paintings, screensavers, phone covers, and fan videos. Filming locations, including Sagene Church and the Hartvig Nissen school, were visited by fans, and the actors were receiving worldwide attention. After being featured in an episode in the third season, Gabrielle's song "5 fine frøkner" saw a 3,018 percent increase in listening on Spotify, with over 13 million streams and, at one point, rising to 8th place on the Swedish top music rankings. Continuing into its fourth season in April 2017, Skam remained an active topic worldwide on social media, with over 20,000 tweets containing #skamseason4 in 24 hours, over a quarter of them originating from the United States.
In January 2017, Skam was geoblocked for foreign viewers. NRK attorney Kari Anne Lang - Ree stated that "NRK has a right to publish content to the Norwegian audience and foreign countries. The music industry is reacting to the fact that many international viewers are listening to music despite NRK not having international licensing deals. NRK takes the concerns from the music industry seriously. We are in dialogue with (the music industry) to find a solution ''. NRK stated that "We want to thank our international fans and followers who have embraced SKAM. We are blown away by your dedication -- it is something we never expected. That is why it hurts to tell you guys that due to a necessary clarification with the music right holders, SKAM will until further notice not be available outside Norway. We are working hard to figure out how to solve this issue so that the fans can continue to enjoy SKAM from where they are ''. After the release of the trailer for the fourth season in April 2017, the geoblock was removed for Nordic countries, with NRK P3 editorial chief Håkon Moslet telling Verdens Gang that "We 've been working hard so that viewers can watch this from foreign countries, and we worked deliberately this winter to make it accessible outside Norway. The trailer was just released, so we hope that has worked well. ''
The series has received significant attention from international media publications for its unique distribution model of real - time snippet - based information.
Anna Leszkiewicz of New Statesman posted in March 2017 that she considered Skam "the best show on TV", highlighting the second season's handling of sexual assault. She praised the series for avoiding "shocking, gratuitous rape scenes", instead focusing on a single hand gesture by the abuser, Nico, as a sign of predatory behavior. However, Leszkiewicz criticized the show for taking the "escape route", in which Noora finds the courage to speak to another girl who was at the party, who insists that, while Noora and Nico were in bed together, no sexual intercourse took place. Leszkiewicz commented that "So many women go through what Noora went through in Skam. Most of them don't get offered the same escape route. Instead, they have to live with the shame and confusion of an "ambiguous" assault." The same month, Elite Daily's Dylan Kickham wrote that the international fanbase for Skam on social media was "much larger than I ever would have predicted", with major fan groups on Twitter, Facebook, and Tumblr. He credited the third season's storyline of homosexuality, calling it "incredibly intimate and profound", particularly praising a scene featuring a conversation about flamboyant attitudes between main character Isak and supporting character Eskild. While acknowledging highlights of the past two seasons, Kickham explained that "season three stands above the rest by shining a light on aspects of sexuality that are very rarely depicted in mainstream media", praising Eskild's "magnificent and timely take on the toxic "masc-for-masc" discrimination within the gay community" in response to Isak's homophobic comments. "It's these small, incisive moments that show just how much Skam understands and cares about the issues it portrays", explained Kickham.
In March 2017, voters of E! Online 's poll regarding "Top Couple 2017 '' declared characters Isak and Even, main stars of the show 's third season, the winners.
Anna Leszkiewicz of New Statesman wrote in April 2017 about the show and its impact, explaining that "In its three short seasons, Skam has explored date rape, coming out, mental health issues from anorexia to bipolar disorder, stereotypical perceptions of Islam, and teen pregnancy without ever feeling issue - based: instead, it approaches all these things through universal emotions like loneliness, or feeling misunderstood ''.
Verdens Gang wrote in April that Skam had become popular in China, where publicly discussing homosexuality is illegal. It reported that almost four million Chinese people had watched the third season through piracy and a total of six million had watched all episodes so far translated to Chinese. The report also stated that NRK has no plans to stop piracy in China, and NRK P3 editorial chief Håkon Moslet told Verdens Gang that "It was Isak and Even that captured a young Chinese audience. There 's a lot of censorship in China, and they are role models and have a relationship that Chinese people have a need to see. ''
In June 2017, Anna Leszkiewicz wrote another report on the series, particularly the decision to end it. Noting that the series originally had nine characters designed to each lead a season, she quoted fans who said that "It seems like such an abrupt decision. It doesn't serve the storyline at all." Acknowledging the pressure of the show's global popularity as potentially a key element in ending it, she wrote that some fans "feel that Sana's season has been overshadowed by other characters and plotlines, something that is particularly frustrating for those who were keen to see greater Muslim representation in the show"; two prominent examples being the emphasis on the characters Noora and William, and a conversation about religion in which Sana and her white, non-Muslim friend Isak discuss Islamophobia, which some viewers described as "whitesplainy". Giving further attention to negative feedback, the report cites commentary such as "Sana has been disrespected and disregarded and erased and sidelined and that is fucking gross. She deserved better".
Even after its conclusion, Skam continued to be hugely popular on social media. In December 2017, Tumblr released its list of the most talked-about shows of the year on its platform, with Skam topping the chart as number one, outranking the hugely successful American series Game of Thrones, Stranger Things, and The Walking Dead.
In December 2017, Anna Leszkiewicz published another report on the series, this time focusing on Skam's legacy as an American adaptation was in production. NRK P3 editorial chief Håkon Moslet told her that "There was a lot of piracy", acknowledging that the show's global popularity was the result of fans illegally distributing content through Google Drive, though adding "But we didn't mind". Producer and project manager Marianne Furevold explained that "We were given a lot of time to do so much research, and I think that's a huge part of the success that we see today with Skam", referencing extensive in-depth interviews, attending schools and youth clubs, and immersing themselves in teenagers' online lives, something that she did not think would have been possible with a commercial network. Regarding the decision to end the series after its fourth season, while its popularity peaked, Moslet told Leszkiewicz that writer Julie Andem had spent an enormous amount of time developing the series: "It was kind of an extreme sport to make, this series, especially for her. It was her life, 24/7, for two and a half years. It was enough, I think. And she wanted to end on a high. So that's the reason. I think it was the right thing". Facebook securing the deal to air the American adaptation of the series on its newly launched Watch platform was described as potentially due to decreased popularity of the network among its younger users, with Moslet joking that "Perhaps Skam will save Facebook". Andem had posted on Instagram that she "wouldn't have been able to make a season five as good as it deserved to be", though she did not want to give away the job of producing the American version. Moslet praised the series' diverse set of characters, concluding with the statement that "At a time of confusion and intolerance, it seems more important than ever" for content creators to embrace diversity and reject intolerant attitudes.
|
the council of nicaea was convened to answer questions on the | First Council of Nicaea - wikipedia
The First Council of Nicaea (/naɪˈsiːə/; Greek: Νίκαια [ˈniːkaɪja]) was a council of Christian bishops convened in the Bithynian city of Nicaea (now Iznik, Bursa province, Turkey) by the Roman Emperor Constantine I in AD 325. Constantine I organized the Council along the lines of the Roman Senate and presided over it, but did not cast any official vote.
This ecumenical council was the first effort to attain consensus in the Church through an assembly representing all of Christendom. Hosius of Corduba, who was probably one of the Papal legates, may have presided over its deliberations.
Its main accomplishments were settlement of the Christological issue of the divine nature of God the Son and his relationship to God the Father, the construction of the first part of the Nicene Creed, establishing uniform observance of the date of Easter, and promulgation of early canon law.
The First Council of Nicaea was the first ecumenical council of the Church. Most significantly, it resulted in the first uniform Christian doctrine, called the Nicene Creed. With the creation of the creed, a precedent was established for subsequent local and regional councils of Bishops (Synods) to create statements of belief and canons of doctrinal orthodoxy -- the intent being to define unity of beliefs for the whole of Christendom.
Derived from Greek (Ancient Greek: οἰκουμένη oikouménē, "the inhabited one"), "ecumenical" means "worldwide", but generally is assumed to be limited to the known inhabited Earth (Danker 2000, pp. 699-670), and at this time in history was synonymous with the Roman Empire. The earliest extant uses of the term for a council are Eusebius' Life of Constantine 3.6 around 338, which states "he convoked an Ecumenical Council" (Ancient Greek: σύνοδον οἰκουμενικὴν συνεκρότει sýnodon oikoumenikḕn synekrótei), and the Letter in 382 to Pope Damasus I and the Latin bishops from the First Council of Constantinople.
One purpose of the council was to resolve disagreements arising from within the Church of Alexandria over the nature of the Son in his relationship to the Father: in particular, whether the Son had been ' begotten ' by the Father from his own being, and therefore having no beginning, or else created out of nothing, and therefore having a beginning. St. Alexander of Alexandria and Athanasius took the first position; the popular presbyter Arius, from whom the term Arianism comes, took the second. The council decided against the Arians overwhelmingly (of the estimated 250 -- 318 attendees, all but two agreed to sign the creed and these two, along with Arius, were banished to Illyria).
Another result of the council was an agreement on when to celebrate Easter, the most important feast of the ecclesiastical calendar, decreed in an epistle to the Church of Alexandria in which is simply stated:
We also send you the good news of the settlement concerning the holy pasch, namely that in answer to your prayers this question also has been resolved. All the brethren in the East who have hitherto followed the Jewish practice will henceforth observe the custom of the Romans and of yourselves and of all of us who from ancient times have kept Easter together with you.
Historically significant as the first effort to attain consensus in the church through an assembly representing all of Christendom, the Council was the first occasion where the technical aspects of Christology were discussed. Through it a precedent was set for subsequent general councils to adopt creeds and canons. This council is generally considered the beginning of the period of the First seven Ecumenical Councils in the History of Christianity.
The First Council of Nicaea was convened by Emperor Constantine the Great upon the recommendations of a synod led by Hosius of Córdoba in the Eastertide of 325. This synod had been charged with investigation of the trouble brought about by the Arian controversy in the Greek - speaking east. To most bishops, the teachings of Arius were heretical and dangerous to the salvation of souls. In the summer of 325, the bishops of all provinces were summoned to Nicaea, a place reasonably accessible to many delegates, particularly those of Asia Minor, Georgia, Armenia, Syria, Palestine, Egypt, Greece, and Thrace.
This was the first general council in the history of the Church summoned by emperor Constantine I. In the Council of Nicaea, "The Church had taken her first great step to define revealed doctrine more precisely in response to a challenge from a heretical theology. ''
Constantine had invited all 1,800 bishops of the Christian church within the Roman Empire (about 1,000 in the east and 800 in the west), but a smaller and unknown number attended. Eusebius of Caesarea counted more than 250, Athanasius of Alexandria counted 318, and Eustathius of Antioch estimated "about 270 '' (all three were present at the council). Later, Socrates Scholasticus recorded more than 300, and Evagrius, Hilary of Poitiers, Jerome, Dionysius Exiguus, and Rufinus recorded 318. This number 318 is preserved in the liturgies of the Eastern Orthodox Church and the Coptic Orthodox Church of Alexandria.
Delegates came from every region of the Roman Empire, including Britain. The participating bishops were given free travel to and from their episcopal sees to the council, as well as lodging. These bishops did not travel alone; each one had permission to bring with him two priests and three deacons, so the total number of attendees could have been above 1,800. Eusebius speaks of an almost innumerable host of accompanying priests, deacons, and acolytes.
The Eastern bishops formed the great majority. Of these, the first rank was held by the three patriarchs: Alexander of Alexandria, Eustathius of Antioch, and Macarius of Jerusalem. Many of the assembled fathers -- for instance, Paphnutius of Thebes, Potamon of Heraclea, and Paul of Neocaesarea -- had stood forth as confessors of the faith and came to the council with the marks of persecution on their faces. This position is supported by patristic scholar Timothy Barnes in his book Constantine and Eusebius. Historically, the influence of these marred confessors has been seen as substantial, but recent scholarship has called this into question.
Other remarkable attendees were Eusebius of Nicomedia; Eusebius of Caesarea, the purported first church historian; circumstances suggest that Nicholas of Myra attended (his life was the seed of the Santa Claus legends); Aristaces of Armenia (son of Saint Gregory the Illuminator); Leontius of Caesarea; Jacob of Nisibis, a former hermit; Hypatius of Gangra; Protogenes of Sardica; Melitius of Sebastopolis; Achilleus of Larissa (considered the Athanasius of Thessaly) and Spyridion of Trimythous, who even while a bishop made his living as a shepherd. From foreign places came John, bishop of Persia and India, Theophilus, a Gothic bishop, and Stratophilus, bishop of Pitiunt in Georgia.
The Latin - speaking provinces sent at least five representatives: Marcus of Calabria from Italia, Cecilian of Carthage from Africa, Hosius of Córdoba from Hispania, Nicasius of Die from Gaul, and Domnus of Stridon from the province of the Danube.
Athanasius of Alexandria, a young deacon and companion of Bishop Alexander of Alexandria, was among the assistants. Athanasius eventually spent most of his life battling against Arianism. Alexander of Constantinople, then a presbyter, was also present as representative of his aged bishop.
The supporters of Arius included Secundus of Ptolemais, Theonus of Marmarica, Zephyrius (or Zopyrus), and Dathes, all of whom hailed from the Libyan Pentapolis. Other supporters included Eusebius of Nicomedia, Paulinus of Tyrus, Actius of Lydda, Menophantus of Ephesus, and Theognus of Nicaea.
"Resplendent in purple and gold, Constantine made a ceremonial entrance at the opening of the council, probably in early June, but respectfully seated the bishops ahead of himself. '' As Eusebius described, Constantine "himself proceeded through the midst of the assembly, like some heavenly messenger of God, clothed in raiment which glittered as it were with rays of light, reflecting the glowing radiance of a purple robe, and adorned with the brilliant splendor of gold and precious stones. '' The emperor was present as an overseer and presider, but did not cast any official vote. Constantine organized the Council along the lines of the Roman Senate. Hosius of Cordoba may have presided over its deliberations; he was probably one of the Papal legates. Eusebius of Nicomedia probably gave the welcoming address.
The agenda of the synod included the Arian question and the relationship of the Son to the Father, the date of the celebration of Easter, the Meletian schism, and the promulgation of church canons.
The council was formally opened on May 20, in the central structure of the imperial palace at Nicaea, with preliminary discussions of the Arian question. In these discussions, Arius and several of his adherents were dominant figures. "Some 22 of the bishops at the council, led by Eusebius of Nicomedia, came as supporters of Arius. But when some of the more shocking passages from his writings were read, they were almost universally seen as blasphemous." Bishops Theognis of Nicaea and Maris of Chalcedon were among the initial supporters of Arius.
Eusebius of Caesarea called to mind the baptismal creed of his own diocese at Caesarea in Palestine, as a form of reconciliation. The majority of the bishops agreed. For some time, scholars thought that the original Nicene Creed was based on this statement of Eusebius. Today, most scholars think that the Creed is derived from the baptismal creed of Jerusalem, as Hans Lietzmann proposed.
The orthodox bishops won approval of every one of their proposals regarding the Creed. After being in session for an entire month, the council promulgated on June 19 the original Nicene Creed. This profession of faith was adopted by all the bishops "but two from Libya who had been closely associated with Arius from the beginning ''. No explicit historical record of their dissent actually exists; the signatures of these bishops are simply absent from the Creed.
The Arian controversy arose in Alexandria when the newly reinstated presbyter Arius began to spread doctrinal views that were contrary to those of his bishop, St. Alexander of Alexandria. The disputed issues centered on the natures and relationship of God (the Father) and the Son of God (Jesus). The disagreements sprang from different ideas about the Godhead and what it meant for Jesus to be God 's Son. Alexander maintained that the Son was divine in just the same sense that the Father is, coeternal with the Father, else he could not be a true Son.
Arius emphasized the supremacy and uniqueness of God the Father, meaning that the Father alone is almighty and infinite, and that therefore the Father 's divinity must be greater than the Son 's. Arius taught that the Son had a beginning, and that he possessed neither the eternity nor the true divinity of the Father, but was rather made "God '' only by the Father 's permission and power, and that the Son was rather the very first and the most perfect of God 's creatures.
The Arian discussions and debates at the council extended from about May 20, 325, through about June 19. According to legendary accounts, debate became so heated that at one point Arius was struck in the face by Nicholas of Myra, who would later be canonized. This account is almost certainly apocryphal, as Arius himself would not have been present in the council chamber because he was not a bishop.
Much of the debate hinged on the difference between being "born" or "created" and being "begotten". Arians saw these as essentially the same; followers of Alexander did not. The exact meaning of many of the words used in the debates at Nicaea was still unclear to speakers of other languages. Greek words like "essence" (ousia), "substance" (hypostasis), "nature" (physis), and "person" (prosopon) bore a variety of meanings drawn from pre-Christian philosophers, which could not but entail misunderstandings until they were cleared up. The word homoousia, in particular, was initially disliked by many bishops because of its associations with Gnostic heretics (who used it in their theology), and because their heresies had been condemned at the 264-268 Synods of Antioch.
According to surviving accounts, the presbyter Arius argued for the supremacy of God the Father, and maintained that the Son of God was created as an act of the Father's will, and therefore that the Son was a creature made by God, begotten directly of the infinite, eternal God. Arius's argument was that the Son was God's very first production, before all ages; the Son had a beginning, and only the Father has no beginning. Arius also argued that everything else was created through the Son. Thus, said the Arians, only the Son was directly created and begotten of God, and therefore there was a time when He had no existence. Arius believed that the Son of God was capable, of His own free will, of right and wrong, and that "were He in the truest sense a son, He must have come after the Father, therefore the time obviously was when He was not, and hence He was a finite being", and that He was under God the Father. Therefore, Arius insisted that the Father's divinity was greater than the Son's. The Arians appealed to Scripture, quoting biblical statements such as "the Father is greater than I", and also that the Son is "firstborn of all creation".
The opposing view stemmed from the idea that begetting the Son is itself in the nature of the Father, which is eternal. Thus, the Father was always a Father, and both Father and Son existed always together, eternally, coequally and consubstantially. The contra - Arian argument thus stated that the Logos was "eternally begotten '', therefore with no beginning. Those in opposition to Arius believed that to follow the Arian view destroyed the unity of the Godhead, and made the Son unequal to the Father. They insisted that such a view was in contravention of such Scriptures as "I and the Father are one '' and "the Word was God '', as such verses were interpreted. They declared, as did Athanasius, that the Son had no beginning, but had an "eternal derivation '' from the Father, and therefore was coeternal with him, and equal to God in all aspects.
The Council declared that the Son was true God, coeternal with the Father and begotten from His same substance, arguing that such a doctrine best codified the Scriptural presentation of the Son as well as traditional Christian belief about him handed down from the Apostles. This belief was expressed by the bishops in the Creed of Nicaea, which would form the basis of what has since been known as the Niceno - Constantinopolitan Creed.
One of the projects undertaken by the Council was the creation of a Creed, a declaration and summary of the Christian faith. Several creeds were already in existence; many creeds were acceptable to the members of the council, including Arius. From earliest times, various creeds served as a means of identification for Christians, as a means of inclusion and recognition, especially at baptism.
In Rome, for example, the Apostles ' Creed was popular, especially for use in Lent and the Easter season. In the Council of Nicaea, one specific creed was used to define the Church 's faith clearly, to include those who professed it, and to exclude those who did not.
Some distinctive elements in the Nicene Creed, perhaps from the hand of Hosius of Cordova, were added, some specifically to counter the Arian point of view.
At the end of the creed came a list of anathemas, designed to repudiate explicitly the Arians ' stated claims.
Thus, instead of a baptismal creed acceptable to both the Arians and their opponents, the council promulgated one which was clearly opposed to Arianism and incompatible with the distinctive core of their beliefs. The text of this profession of faith is preserved in a letter of Eusebius to his congregation, in Athanasius, and elsewhere. Although the most vocal of the anti-Arians, the Homoousians (named from the Koine Greek word translated as "of same substance", which had been condemned at the Council of Antioch in 264-268), were in the minority, the Creed was accepted by the council as an expression of the bishops' common faith and the ancient faith of the whole Church.
Bishop Hosius of Cordova, one of the firm Homoousians, may well have helped bring the council to consensus. At the time of the council, he was the confidant of the emperor in all Church matters. Hosius stands at the head of the lists of bishops, and Athanasius ascribes to him the actual formulation of the creed. Great leaders such as Eustathius of Antioch, Alexander of Alexandria, Athanasius, and Marcellus of Ancyra all adhered to the Homoousian position.
In spite of his sympathy for Arius, Eusebius of Caesarea adhered to the decisions of the council, accepting the entire creed. The initial number of bishops supporting Arius was small. After a month of discussion, on June 19, there were only two left: Theonas of Marmarica in Libya and Secundus of Ptolemais. Maris of Chalcedon, who initially supported Arianism, agreed to the whole creed. Eusebius of Nicomedia and Theognis of Nicaea also agreed, except for certain statements.
The Emperor carried out his earlier statement: everybody who refused to endorse the Creed would be exiled. Arius, Theonas, and Secundus refused to adhere to the creed, and were thus exiled to Illyria, in addition to being excommunicated. The works of Arius were ordered to be confiscated and consigned to the flames, while his supporters were considered "enemies of Christianity". Nevertheless, the controversy continued in various parts of the empire.
The Creed was amended to a new version by the First Council of Constantinople in 381.
The feast of Easter is linked to the Jewish Passover and Feast of Unleavened Bread, as Christians believe that the crucifixion and resurrection of Jesus occurred at the time of those observances.
As early as Pope Sixtus I, some Christians had set Easter to a Sunday in the lunar month of Nisan. To determine which lunar month was to be designated as Nisan, Christians relied on the Jewish community. By the later 3rd century some Christians began to express dissatisfaction with what they took to be the disorderly state of the Jewish calendar. They argued that contemporary Jews were identifying the wrong lunar month as the month of Nisan, choosing a month whose 14th day fell before the spring equinox.
Christians, these thinkers argued, should abandon the custom of relying on Jewish informants and instead do their own computations to determine which month should be styled Nisan, setting Easter within this independently computed, Christian Nisan, which would always locate the festival after the equinox. They justified this break with tradition by arguing that it was in fact the contemporary Jewish calendar that had broken with tradition by ignoring the equinox, and that in former times the 14th of Nisan had never preceded the equinox. Others felt that the customary practice of reliance on the Jewish calendar should continue, even if the Jewish computations were in error from a Christian point of view.
The controversy between those who argued for independent computations and those who argued for continued reliance on the Jewish calendar was formally resolved by the Council, which endorsed the independent procedure that had been in use for some time at Rome and Alexandria. Easter was henceforward to be a Sunday in a lunar month chosen according to Christian criteria -- in effect, a Christian Nisan -- not in the month of Nisan as defined by Jews. Those who argued for continued reliance on the Jewish calendar (called "protopaschites '' by later historians) were urged to come around to the majority position. That they did not all immediately do so is revealed by the existence of sermons, canons, and tracts written against the protopaschite practice in the later 4th century.
These two rules, independence of the Jewish calendar and worldwide uniformity, were the only rules for Easter explicitly laid down by the Council. No details for the computation were specified; these were worked out in practice, a process that took centuries and generated a number of controversies (see also Computus and Reform of the date of Easter.) In particular, the Council did not seem to decree that Easter must fall on Sunday.
Nor did the Council decree that Easter must never coincide with Nisan 14 (the first Day of Unleavened Bread, now commonly called "Passover '') in the Hebrew calendar. By endorsing the move to independent computations, the Council had separated the Easter computation from all dependence, positive or negative, on the Jewish calendar. The "Zonaras proviso '', the claim that Easter must always follow Nisan 14 in the Hebrew calendar, was not formulated until after some centuries. By that time, the accumulation of errors in the Julian solar and lunar calendars had made it the de facto state of affairs that Julian Easter always followed Hebrew Nisan 14.
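The council itself laid down only the two principles above and prescribed no algorithm, so any concrete computus is a later development. Purely as an illustration of what an "independent computation" of a Christian Nisan eventually became, the sketch below implements the widely published anonymous Gregorian computus (the Meeus/Jones/Butcher form); it reflects the much later Gregorian reform, not anything decreed at Nicaea, and the function name and the use of Python are illustrative choices only.

```python
from datetime import date

def gregorian_easter(year: int) -> date:
    """Anonymous Gregorian computus (Meeus/Jones/Butcher form).

    Returns Western (Gregorian) Easter Sunday for the given year.
    Shown only as an example of a post-Nicene "independent
    computation"; the council itself prescribed no algorithm.
    """
    a = year % 19                           # position in the 19-year lunar cycle
    b, c = divmod(year, 100)                # century and year within the century
    d, e = divmod(b, 4)                     # Gregorian leap-century corrections
    f = (b + 8) // 25
    g = (b - f + 1) // 3                    # correction for lunar-cycle drift
    h = (19 * a + b - d - g + 15) % 30      # "epact"-like age of the moon
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7    # days until the following Sunday
    m = (a + 11 * h + 22 * l) // 451        # correction for late full moons
    month, day = divmod(h + l - 7 * m + 114, 31)
    return date(year, month, day + 1)

if __name__ == "__main__":
    for y in (2024, 2025):
        print(y, gregorian_easter(y).isoformat())
```

Applied to 2024, for example, this routine returns 31 March, the Western Easter date for that year; Julian-calendar Easter, still used by most Orthodox churches, requires a different and older computus.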
The suppression of the Meletian schism, an early breakaway sect, was another important matter that came before the Council of Nicaea. Meletius, it was decided, should remain in his own city of Lycopolis in Egypt, but without exercising authority or the power to ordain new clergy; he was forbidden to go into the environs of the town or to enter another diocese for the purpose of ordaining its subjects. Meletius retained his episcopal title, but the ecclesiastics ordained by him were to receive again the laying on of hands, the ordinations performed by Meletius being therefore regarded as invalid. Clergy ordained by Meletius were ordered to yield precedence to those ordained by Alexander, and they were not to do anything without the consent of Bishop Alexander.
In the event of the death of a non-Meletian bishop or ecclesiastic, the vacant see might be given to a Meletian, provided he was worthy and the popular election were ratified by Alexander. As to Meletius himself, episcopal rights and prerogatives were taken from him. These mild measures, however, were in vain; the Meletians joined the Arians and caused more dissension than ever, being among the worst enemies of Athanasius. The Meletians ultimately died out around the middle of the fifth century.
The council promulgated twenty new church laws, called canons (though the exact number is subject to debate); that is, unchanging rules of discipline. The twenty canons are listed in the Nicene and Post-Nicene Fathers.
At the conclusion of the council, on July 25, 325, the fathers celebrated the Emperor's twentieth anniversary. In his farewell address, Constantine informed the audience how averse he was to dogmatic controversy; he wanted the Church to live in harmony and peace. In a circular letter, he announced the accomplished unity of practice by the whole Church in the date of the celebration of Christian Passover (Easter).
The long - term effects of the Council of Nicaea were significant. For the first time, representatives of many of the bishops of the Church convened to agree on a doctrinal statement. Also for the first time, the Emperor played a role, by calling together the bishops under his authority, and using the power of the state to give the council 's orders effect.
In the short term, however, the council did not completely solve the problems it was convened to discuss and a period of conflict and upheaval continued for some time. Constantine himself was succeeded by two Arian Emperors in the Eastern Empire: his son Constantius II, and Valens. Valens could not resolve the outstanding ecclesiastical issues, and unsuccessfully confronted St. Basil over the Nicene Creed.
Pagan powers within the Empire sought to maintain and at times re-establish paganism into the seat of the Emperor (see Arbogast and Julian the Apostate). Arians and Meletians soon regained nearly all of the rights they had lost, and consequently, Arianism continued to spread and be a subject of debate within the Church during the remainder of the fourth century. Almost immediately, Eusebius of Nicomedia, an Arian bishop and cousin to Constantine I, used his influence at court to sway Constantine 's favor from the proto - orthodox Nicene bishops to the Arians.
Eustathius of Antioch was deposed and exiled in 330. Athanasius, who had succeeded Alexander as Bishop of Alexandria, was deposed by the First Synod of Tyre in 335 and Marcellus of Ancyra followed him in 336. Arius himself returned to Constantinople to be readmitted into the Church, but died shortly before he could be received. Constantine died the next year, after finally receiving baptism from Arian Bishop Eusebius of Nicomedia, and "with his passing the first round in the battle after the Council of Nicaea was ended ''.
Christianity was illegal in the empire until the emperors Constantine and Licinius agreed in 313 to what became known as the Edict of Milan. However, Nicene Christianity did not become the state religion of the Roman Empire until the Edict of Thessalonica in 380. In the meantime, paganism remained legal and present in public affairs. Constantine 's coinage and other official motifs, until the Council of Nicaea, had affiliated him with the pagan cult of Sol Invictus. At first, Constantine encouraged the construction of new temples and tolerated traditional sacrifices. Later in his reign, he gave orders for the pillaging and tearing down of Roman temples.
Constantine 's role regarding Nicaea was that of supreme civil leader and authority in the empire. As Emperor, the responsibility for maintaining civil order was his, and he sought that the Church be of one mind and at peace. When first informed of the unrest in Alexandria due to the Arian disputes, he was "greatly troubled '' and, "rebuked '' both Arius and Bishop Alexander for originating the disturbance and allowing it to become public. Aware also of "the diversity of opinion '' regarding the celebration of Easter and hoping to settle both issues, he sent the "honored '' Bishop Hosius of Cordova (Hispania) to form a local church council and "reconcile those who were divided ''. When that embassy failed, he turned to summoning a synod at Nicaea, inviting "the most eminent men of the churches in every country ''.
Constantine assisted in assembling the council by arranging that travel expenses to and from the bishops ' episcopal sees, as well as lodging at Nicaea, be covered out of public funds. He also provided and furnished a "great hall... in the palace '' as a place for discussion so that the attendees "should be treated with becoming dignity ''. In addressing the opening of the council, he "exhorted the Bishops to unanimity and concord '' and called on them to follow the Holy Scriptures with: "Let, then, all contentious disputation be discarded; and let us seek in the divinely - inspired word the solution of the questions at issue. ''
Thereupon, the debate about Arius and church doctrine began. "The emperor gave patient attention to the speeches of both parties '' and "deferred '' to the decision of the bishops. The bishops first pronounced Arius ' teachings to be anathema, formulating the creed as a statement of correct doctrine. When Arius and two followers refused to agree, the bishops pronounced clerical judgement by excommunicating them from the Church. Respecting the clerical decision, and seeing the threat of continued unrest, Constantine also pronounced civil judgement, banishing them into exile. This was the beginning of the practice of using secular power to establish doctrinal orthodoxy within Christianity, an example followed by all later Christian emperors, which led to a circle of Christian violence, and of Christian resistance couched in terms of martyrdom.
There is no record of any discussion of the biblical canon at the council. The development of the biblical canon took centuries, and was nearly complete (with exceptions known as the Antilegomena, written texts whose authenticity or value is disputed) by the time the Muratorian fragment was written.
In 331, Constantine commissioned fifty Bibles for the Church of Constantinople, but little else is known (in fact, it is not even certain whether his request was for fifty copies of the entire Old and New Testaments, only the New Testament, or merely the Gospels). Some scholars believe that this request provided motivation for canon lists. In Jerome 's Prologue to Judith, he claims that the Book of Judith was "found by the Nicene Council to have been counted among the number of the Sacred Scriptures '', which some have suggested means the Nicene Council did discuss what documents would number among the sacred scriptures, but more likely simply means the Council used Judith in its deliberations on other matters and so it should be considered canonical.
The main source of the idea that the Bible was created at the Council of Nicaea seems to be Voltaire, who popularised a story that the canon was determined by placing all the competing books on an altar during the Council and then keeping the ones that did not fall off. The original source of this "fictitious anecdote '' is the Synodicon Vetus, a pseudo-historical account of early Church councils from AD 887:
The canonical and apocryphal books it distinguished in the following manner: in the house of God the books were placed down by the holy altar; then the council asked the Lord in prayer that the inspired works be found on top and -- as in fact happened -- the spurious on the bottom.
The council of Nicaea dealt primarily with the issue of the deity of Christ. Over a century earlier the term "Trinity '' (Τριάς in Greek; trinitas in Latin) was used in the writings of Origen (185 -- 254) and Tertullian (160 -- 220), and a general notion of a "divine three '', in some sense, was expressed in the second - century writings of Polycarp, Ignatius, and Justin Martyr. In Nicaea, questions regarding the Holy Spirit were left largely unaddressed until after the relationship between the Father and the Son was settled around the year 362. The doctrine in a more full - fledged form was thus not formulated until the Council of Constantinople in 360 AD, with a final form formulated in 381 AD, primarily crafted by Gregory of Nyssa.
While Constantine had sought a unified church after the council, he did not force the Homoousian view of Christ 's nature on the council (see The role of Constantine).
Constantine did not commission any Bibles at the council itself. He did commission fifty Bibles in 331 for use in the churches of Constantinople, itself still a new city. No historical evidence points to involvement on his part in selecting or omitting books for inclusion in commissioned Bibles.
Despite Constantine 's sympathetic interest in the Church, he was not baptized until some 11 or 12 years after the council, putting off baptism as long as he did so as to be absolved from as much sin as possible in accordance with the belief that in baptism all sin is forgiven fully and completely.
Roman Catholics assert that the idea of Christ 's deity was ultimately confirmed by the Bishop of Rome, and that it was this confirmation that gave the council its influence and authority. In support of this, they cite the position of early fathers and their expression of the need for all churches to agree with Rome (see Irenaeus, Adversus Haereses III: 3: 2).
However, Protestants, Eastern Orthodox, and Oriental Orthodox do not believe the Council viewed the Bishop of Rome as the jurisdictional head of Christendom, or someone having authority over other bishops attending the Council. In support of this, they cite Canon 6, where the Roman Bishop could be seen as simply one of several influential leaders, but not one who had jurisdiction over other bishops in other regions.
According to Protestant theologian Philip Schaff, "The Nicene fathers passed this canon not as introducing anything new, but merely as confirming an existing relation on the basis of church tradition; and that, with special reference to Alexandria, on account of the troubles existing there. Rome was named only for illustration; and Antioch and all the other eparchies or provinces were secured their admitted rights. The bishoprics of Alexandria, Rome, and Antioch were placed substantially on equal footing. '' Thus, according to Schaff, the Bishop of Alexandria was to have jurisdiction over the provinces of Egypt, Libya and the Pentapolis, just as the Bishop of Rome had authority "with reference to his own diocese. ''
But according to Fr. James F. Loughlin, there is an alternate Roman Catholic interpretation. It involves five different arguments "drawn respectively from the grammatical structure of the sentence, from the logical sequence of ideas, from Catholic analogy, from comparison with the process of formation of the Byzantine Patriarchate, and from the authority of the ancients '' in favor of an alternative understanding of the canon. According to this interpretation, the canon shows the role the Bishop of Rome had when he, by his authority, confirmed the jurisdiction of the other patriarchs -- an interpretation which is in line with the Roman Catholic understanding of the Pope. Thus, the Bishop of Alexandria presided over Egypt, Libya and the Pentapolis, while the Bishop of Antioch "enjoyed a similar authority throughout the great diocese of Oriens, '' and all by the authority of the Bishop of Rome. To Loughlin, that was the only possible reason to invoke the custom of a Roman Bishop in a matter related to the two metropolitan bishops in Alexandria and Antioch.
However, Protestant and Roman Catholic interpretations have historically assumed that some or all of the bishops identified in the canon were presiding over their own dioceses at the time of the Council -- the Bishop of Rome over the Diocese of Italy, as Schaff suggested, the Bishop of Antioch over the Diocese of Oriens, as Loughlin suggested, and the Bishop of Alexandria over the Diocese of Egypt, as suggested by Karl Josef von Hefele. According to Hefele, the Council had assigned to Alexandria, "the whole (civil) Diocese of Egypt. '' Yet those assumptions have since been proven false. At the time of the Council, the Diocese of Egypt did not yet exist, so the Council could not have assigned it to Alexandria. Antioch and Alexandria were both located within the civil Diocese of Oriens, Antioch being the chief metropolis, but neither administered the whole. Likewise, Rome and Milan were both located within the civil Diocese of Italy, Milan being the chief metropolis, yet neither administered the whole.
This geographic issue related to Canon 6 was highlighted by the Protestant writer Timothy F. Kauffman as a correction to the anachronism created by the assumption that each bishop was already presiding over a whole diocese at the time of the council. According to Kauffman, since Milan and Rome were both located within the Diocese of Italy, and Antioch and Alexandria were both located within the Diocese of Oriens, a relevant and "structural congruency '' between Rome and Alexandria was readily apparent to the gathered bishops: both had been made to share a diocese of which neither was the chief metropolis. Rome 's jurisdiction within Italy had been defined in terms of several of the city 's adjacent provinces since Diocletian 's reordering of the empire in 293, as the earliest Latin version of the canon indicates, and the rest of the Italian provinces were under the jurisdiction of Milan.
That provincial arrangement of Roman and Milanese jurisdiction within Italy therefore was a relevant precedent, and provided an administrative solution to the problem facing the council -- namely, how to define Alexandrian and Antiochian jurisdiction within the Diocese of Oriens. In canon 6, the Council left most of the diocese under Antioch 's jurisdiction, and assigned a few provinces of the diocese to Alexandria, "since the like is customary for the Bishop of Rome also. ''
In that scenario, a relevant Roman precedent is invoked, answering Loughlin 's argument as to why the custom of a bishop in Rome would have any bearing on a dispute regarding Alexandria in Oriens, and at the same time correcting Schaff 's argument that the bishop of Rome was invoked by way of illustration "with reference to his own diocese. '' The custom of the bishop of Rome was invoked by way of illustration, not because he presided over the whole Church, or over the western Church or even over "his own diocese, '' but rather because he presided over a few provinces in a diocese that was otherwise administered from Milan. On the basis of that precedent, the council recognized Alexandria 's ancient jurisdiction over a few provinces in the Diocese of Oriens, a diocese that was otherwise administered from Antioch.
The Churches of Byzantium celebrate the Fathers of the First Ecumenical Council on the seventh Sunday of Pascha (the Sunday before Pentecost). The Lutheran Church - Missouri Synod celebrates the First Ecumenical Council on June 12. The Coptic Church celebrates The Assembly of the First Ecumenical Council on 9 Hathor (usually Nov. 18). The Armenian Church celebrates the 318 Fathers of the Holy Council of Nicaea on Sept. 1.
Note: NPNF2 = Schaff, Philip; Wace, Henry (eds.), Nicene and Post-Nicene Fathers, Second Series, Christian Classics Ethereal Library, retrieved 2014 - 07 - 29
|
so you think you can dance filipino contestants | List of So You Think You Can Dance finalists (US Season 5) - wikipedia
This is a list of finalists from the fifth season of So You Think You Can Dance (US), a television program. Finalists are listed in alphabetical order.
Born on December 6, 1988 to Nigerian parents, Ade Obayomi is a contemporary dancer from Orange County, California. Ade started dancing at the age of 6. A graduate of Corona Primary School, he briefly attended Chapman University in Orange, California, where Season 4 's Katee Shean (3rd place) and Stephen "Twitch '' Boss (2nd place) also studied. A dance highlight for Ade was performing at Radio City Music Hall. Ade had also previously auditioned for Season 4 of SYTYCD, but was cut after the ballroom round.
During Season 5, Ade was paired with ballerina Melissa Sandvig, with whom he danced each week until only ten dancers remained in the competition. One week prior to his elimination, once again paired with Sandvig, he performed a breast cancer - themed contemporary dance. This piece, entitled "This Woman 's Work '' and choreographed by Tyce Diorio, brought the judges to tears and was subsequently showcased on a future episode showing favorite dances from the season.
Ade was eliminated during the semi-final week, along with former partner Sandvig. He eventually returned to the show as an All - Star on Season 7.
In Ade 's spare time he likes to read, draw or play soccer. He is also a part - time musician and has signed with the record label Okada Music Group, more commonly known as O.M.G.
Born on November 1, 1986, Ashley is a contemporary dancer from Los Angeles who was originally raised in Arizona. She had previously auditioned for seasons two, three, and four. She was partnered with contemporary dancer Kūpono Aweau until she was eliminated in week two.
Born on January 26, 1984 to Japanese parents, Asuka is a Latin ballroom dancer from San Francisco, CA. At the time of her first audition, she lived in Irvine, CA. Asuka originally auditioned for the show during Season 4, but was cut during Top 20 selection at the very end of Vegas Week. Her hobbies include swimming, golf, and tennis. Additionally, she started playing the piano at the age of twelve. Asuka graduated from the University of California, Irvine, and had worked as a Latin Ballroom instructor for the last two years at the time of Season 5 's airing. She auditioned for Season 5 with her friend Ricky Sun, also a ballroom dancer, who was eliminated at the end of Vegas Week.
At the beginning of the Top 20 portion of the show, Asuka was partnered with contemporary dancer Vitolio Jeune. Asuka was eliminated in the third week of the competition.
Since her time on the show, Asuka has partnered with Oleg Astakhov, competing in the professional Latin division representing the USA. Together they have won many Professional Rising Star titles, and they run a dance company in Los Angeles, "Ballroom - Dancing - LA ''. For more information about Asuka visit: http://www.BallroomDancingLA.com
Born on July 26, 1989, Brandon is a contemporary dancer from Miami, Florida. He began dancing ballet at age 10. Brandon is a graduate of Coral Reef Senior High School and now attends Miami Dade College. Brandon Bryant 's most memorable dance experience prior to being on SYTYCD was performing for Madonna at her daughter 's birthday party.
Brandon 's audition brought Mary Murphy to tears. However, Lil ' C was not overly impressed with him, and Mia Michaels went so far as to say that she did n't like his attitude. The tension between Michaels and Bryant, which continued well into the Top 20 portion of Season 5, was a topic which received much attention on blogs and fan sites dedicated to the show. Later in the season, however, both Michaels and Lil ' C appeared to change their opinions of him for the better (much to the relief of both Brandon and his fans).
At the beginning of the Top 20 portion of Season 5, Brandon was partnered with salsa dancer Janette Manrara. He danced in the finale, eventually finishing the competition as first runner - up to winner Jeanine Mason.
Brandon also subsequently appeared in the Glee episode of "Bad Reputation '' as one of the dancers in the "Physical '' number with Olivia Newton - John and Sue Sylvester (Jane Lynch), as well as in the "Total Eclipse of the Heart '' number from the same episode.
On November 16, 2010, Brandon appeared on Dancing with the Stars, performing a contemporary duet to "Universal Child '' by Annie Lennox.
Brandon is now a principal dancer for Britney Spears ' Las Vegas residency and has been dancing in the show since it began in December 2013, earning him the informal title of a "Britney Boy ''.
In addition, Brandon is asked to travel all over America to teach at different dance conventions and to choreograph for many dance studios.
Born on August 30, 1987, Caitlin is an Acro - Contemporary dancer from Annapolis, Maryland. Caitlin graduated from the Baltimore School for the Arts in 2005. She also danced at the North Carolina Dance Theatre as an apprentice for two years, and is an experienced gymnast as well. She auditioned for So You Think You Can Dance with her sister Megan, and they both advanced to Vegas. They both remained in the competition until the very end of Vegas week, when Megan was eliminated and Caitlin was selected to be in the Top 20.
At the beginning of the Top 20 portion of the competition, Caitlin was paired with contemporary dancer Jason Glover. She was eliminated in week five, which ordinarily would mean that she would not be part of the SYTYCD live tour at the end of the season (since traditionally only the top ten contestants were selected to be on the tour) -- but in a surprise move by the producers of the show, it was announced on - air immediately following her elimination that she (as well as fellow contestant Phillip Chbeeb, who was eliminated on the same night) would still be included on the Season 5 tour.
Born on November 16, 1983, Janette is a salsa dancer from Miami, Florida. She has never studied salsa formally; rather, she picked it up from her family, who are Cuban.
Manrara began performing in musical theatre at age 12, and started her formal dance training at 19, studying ballroom, ballet, pointe, jazz and hip - hop. Janette currently attends Florida International University.
For the Top 20 portion of the show, Janette was originally paired with contemporary dancer Brandon Bryant. Like Janette, Brandon had made it through Las Vegas week during Season 4, only to be cut during the Top 20 selection. During her time in the Top 20, Janette received much praise from the judges; both Mia Michaels and Nigel Lythgoe said that Janette was their favorite female dancer that season, and Lythgoe went on to say he hoped she would win the entire competition. A tango she performed with Brandon was the first dance on Season 5 to receive a standing ovation from the judges.
Manrara was eliminated during week 7, along with contemporary dancer Jason Glover. Glover and Manrara reportedly went to Disney World together shortly after their elimination and subsequently became involved in a romantic relationship.
Post-SYTYCD, Janette was part of the "Bohemian Rhapsody '' number performed by rival glee club Vocal Adrenaline during the "Journey '' episode of Glee. She also made her "Burn the Floor - West End '' debut at the Shaftesbury Theatre in London, England on July 21, 2010. Additionally, she is part of the current Burn the Floor cast, which is touring the United States from late 2010 through May 2011.
In 2013, Janette joined the cast of BBC One 's Strictly Come Dancing as one of the professional dancers. Her celebrity partners have been fashion designer Julien Macdonald, actor Jake Wood, singer Peter Andre and DJ Melvin Odoom.
In 2015 Janette became engaged to her long - time partner Aljaž Skorjanec.
Born on March 16, 1988, Jason is a contemporary dancer from Fresno, California. He began dancing at age 12, studying Tap and Hip Hop, and is a graduate of Bullard High School. His favorite dancer is Michael Jackson. Jason was featured prominently in the Season 4 's Las Vegas Week episode, successfully dancing for his life after the group round, before being cut following Mia Michaels ' contemporary routine. After making it into the Top 20 in Season 5, he was partnered with contemporary ballet dancer Caitlin until she was eliminated in week five. In week six he danced a Travis Wall contemporary piece with Jeanine Mason, which received standing ovations. He was eliminated in the top 8 week of season 5, with fellow contestant Janette Manrara.
He appeared in the TV show Agents of S.H.I.E.L.D. in the 2016 episode "Failed Experiments '' as a Mayan who becomes the season 3 main antagonist, Hive.
Jeanine was the overall winner of season 5.
Born May 24, 1988, Jonathan Platero is a salsa dancer from New York City. Platero was a gymnast before he began dancing at age 16. He is a graduate of Seminole High School, was a World Salsa finalist in 2006 - 2007 and performed as a dancer and acrobat in "High School Musical. '' He was paired up with contemporary jazz dancer Karla Garcia but was eliminated in the third week. Jonathan can be seen as one of the dancers in the "Physical '' number in Glee (Bad Reputation). He can also be seen as one of the principal dancers of the "Cinderella Christmas '' panto production at The El Portal Theatre for their 2011 - 2012 shows. During seasons 10 and 11 Jonathan returned as a choreographer.
Born on June 30, 1985, Karla is a Filipino - American contemporary jazz dancer from Oxon Hill, MD. Karla is a graduate of New York University. Karla 's early dance experience included learning traditional Filipino folk dancing. She performed on Broadway in "Hot Feet, '' danced in the ensemble of the "Radio City Christmas Spectacular '' and went on tour with "Wicked. '' She was originally partnered with Jonathan, but following his elimination in week three her new partner was Vitolio. She and Vitolio were both eliminated in week 4.
Karla is also an experienced hip - hop dancer and is a member of the Boogie Bots, who came in fourth place of the second season of America 's Best Dance Crew. However, she did not compete with them during the show.
Born on January 19, 1991, Kayla is a Contemporary / Jazz dancer from Aurora, Colorado, who started dancing when she was two years old. She was first partnered with Latin / ballroom dancer Max Kapitannikov, until he was eliminated in week 2, after which she was paired with contemporary dancer Kūpono. She received constant praise from the judges - Mia Michaels claimed she is "the perfect girl, '' Debbie Allen gave her the moniker ' White Lightning ' (which would stick with her for the duration of the competition), and Adam Shankman compared her to dancers from previous seasons such as Will Wingfield, Travis Wall and Danny Tidwell, due to her technical skill. Despite this, she ended up in the bottom three four times. Kayla is possibly best remembered for her dancing in a Mia Michaels contemporary piece titled ' Addiction, ' which she performed in week five with partner Kūpono. It was performed again at the live finale after a request from Nigel Lythgoe. She made it to the finale, finishing in fourth place.
Born on March 23, 1986, Kuponohi'ipoi "Kūpono '' Aweau is a contemporary dancer from Kailua, Hawaii. He graduated from the Kamehameha Schools and has been dancing since he was 16 years old. Kūpono loves his home state of Hawaii and is an avid collector of home furnishings. He was originally partnered with Ashley, but following her elimination in week two his new partner became contemporary dancer Kayla. He is often compared to Season 4 finalist Mark Kanemura. Kūpono made it to the top ten but was eliminated in week 6 of the competition along with Randi Evans.
Born on April 4, 1983, Max Kapitannikov is a ballroom dancer from Brooklyn, New York, who was raised in Moscow, Russia. He has been dancing for as long as he can remember and comes from a dancing family; his mother is a ballet teacher. He also enjoys sculpting and playing the guitar, and attended the Manhattan Comprehensive High School. He was partnered with jazz / contemporary dancer Kayla before his elimination in week 2. His elimination was widely condemned, and the judges praised Max for being the hardest working dancer of the season.
Born on April 30, 1980, Melissa is a ballerina from Los Alamitos, California. Melissa Sandvig has been dancing for most of her life. Melissa Sandvig has danced with the Milwaukee Ballet Company, LA Opera, Long Beach Ballet and Helios Dance Theater. A highlight for Melissa Sandvig was performing "Le Coeur Illumine '' at the Dorothy Chandler Pavilion. She is the oldest finalist at the age of twenty - nine. Her partner is contemporary dancer Ade. She has gained the nicknames the "Naughty Ballerina '' and the "Buff Ballerina ''. She is the first ballerina to make the Top 10 and to go as far as the Top 6. She currently teaches pilates, as stated in one of her videos on the show. She was eliminated on July 30 along with her former partner Ade Obayomi. One week prior to her elimination she danced a contemporary piece with partner Ade. The piece was choreographed by Tyce Diorio and its theme was breast cancer. The piece moved the judges to tears. Possibly her most memorable piece on the show, it was repeated by judges ' request during the finale. Later, in a special episode presented by Nigel Lythgoe, this routine was named Lythgoe 's favorite routine ever danced on So You Think You Can Dance. Melissa can be seen dancing in the "Total Eclipse of the Heart '' number in Glee.
Born on January 30, 1990, Paris is a contemporary dancer from Issaquah, Washington. She was involved in a car crash that left her left leg numb, but still dances despite the injury. Paris is a graduate of Skyline High School. She began dancing when she was 6 years old and now teaches at a studio near her home. Paris was also a Miss Washington Teen pageant winner, and was a Seattle Storm and Seattle Sonic Jr. dancer from 2000 - 2003. She was eliminated in week one along with her partner Tony.
Born on November 30, 1988, Phillip Chbeeb (nicknamed PacMan) is a popper from Indianapolis, Indiana, but was originally from Houston, Texas. Phillip, who has had no formal dance training and learned "from the streets '', began dancing when he was 15 years old. He is currently an engineering physics major at Loyola Marymount University. He was previously partnered with contemporary dancer Jeanine (the ultimate winner), until week 5, when Chbeeb was eliminated. An announcement by executive producer Nigel Lythgoe said that Chbeeb and the female dancer also eliminated in week 5, Caitlin Kinney, would be going on tour with the rest of the crew.
Phillip had previously auditioned for the show during season three, but was cut in Vegas Week. He returned in season four and once again made it to Vegas, but contracted pneumonia and was unable to compete. He later danced on the season finale against Robert Muraine.
Phillip 's quote on dancing: "I started dancing around 15 years old, actually, pretty late. Instantly fell in love with hip hop dance and popping, mostly just ' cause it seemed like almost unreal what some of the people could do. It seemed almost unnatural, and I really kind of enjoyed the creativity that was in the art form, so, I jumped into that, and I 've just been trying to progress and get better ever since then. For me there was n't necessarily a performance that stood out as my first performance, ' cause I was just one of those kids that danced at the school dance and just tried to break it down in circles. I think my first battle as a popper, I think was one of the biggest experiences for me, because it was just a funny experience for me. I wore clothes that were way too big for me the entire time, and it was just the first confrontation with the one - on - one battle kind of mentality, trying to out - do the other person, and, I do n't know, it was just a crazy experience for me. ''
He competed on season 6 of America 's Best Dance Crew with I.aM.mE. The crew was crowned the champion on June 5, 2011.
He made an appearance in Step Up: Revolution and Step Up: All In.
Born on August 12, 1985, Randi Evans is a Contemporary dancer from Orem, Utah, partnered with Evan Kasprzak. Randi began dancing at an early age and performed in the closing ceremonies at the 2002 Salt Lake City Winter Olympic Games. She currently attends Utah Valley University where she is majoring in elementary education. Randi and her partner, Evan Kasprzak, did a number of their routines that were sensual in nature, such as one where Evan was centered around her behind. In Vegas she was noted for always wearing unitards, but has not been seen wearing them since she got into the top 20 -- despite wearing a shirt to a rehearsal that said "Unitard Girl ''. Randi was eliminated in week 6 of the competition along with Kūpono Aweau.
Born on April 17, 1989, Tony is a hip - hop dancer from Buffalo, New York. Tony has been dancing for most of his life. He is a graduate of Frontier High School, playing on the school 's soccer, football and lacrosse teams. Tony likes to write short stories and poems, and his favorite professional dancer is Barry Lather. His comedic solo in Vegas earned him a following, but the judges put him through to the top 20. During the Top 20 competition, he and his partner Paris were eliminated following a hip - hop routine. The judges agreed that in the performance Tony 's personality was n't aggressive, a quality they said was needed for the dance.
Born on November 1, 1982, Vitolio is a contemporary dancer from Miami, Florida. Vitolio began dancing at age 18 and is a graduate of Florida 's New World School of the Arts. Vitolio loves motorcycles, and his favorite dancer is Desmond Richardson. Vitolio grew up in poverty with his grandmother in Haiti after his mother died following the birth of his sister. His grandmother eventually had to put him in an orphanage. This story was so moving that choreographer Louis van Amstel decided to choreograph a waltz based on his story. His partner was originally Latin ballroom dancer Asuka, but after her elimination in week three, his new partner was contemporary dancer Karla. He was eliminated in week 4. Vitolio reached the top 14, despite the judges ' declaration that he did n't always deliver in his solos.
|
what is most of the land in argentina used for | Geography of Argentina - wikipedia
The geography of Argentina describes the geographic features of Argentina, a country located in southern South America (or Southern Cone). Bordered by the Andes in the west and the South Atlantic Ocean to the east, neighboring countries are Chile to the west, Bolivia and Paraguay to the north, and Brazil and Uruguay to the northeast.
In terms of area, Argentina is the second largest country of South America after Brazil, and the 8th largest country in the world. Its total area is approximately 2.7 million km2. Argentina claims a section of Antarctica (Argentine Antarctica) but has agreed to suspend sovereignty disputes in the region as a signatory to the Antarctic Treaty. Argentina also asserts claims to several South Atlantic islands administered by the United Kingdom.
With a population of more than 42.1 million, Argentina ranks as the world 's 32nd most populous country as of 2010.
Argentina 's provinces are divided into seven zones with regard to climate and terrain, running from north to south and from west to east.
In Argentina, the river network is made up of many systems of differing economic importance, which can be gauged by their volume of flow and their navigability. The significance of a river 's flow lies in its potential use for irrigation and as a source of energy. Depending on where their waters drain, rivers and creeks can be classified into three kinds of watersheds.
Lakes and lagoons, on the other hand, are permanent accumulations of water over impervious depressions; they differ mainly in their extent and depth. They are very important for stream regulation, as sources of energy, as tourist attractions and for their wealth of fish. In Argentina, all major lakes are in Patagonia (Carlevari and Carlevari, 2007).
Except in the northeast there are few large rivers, and many have only seasonal flows. Nearly all watercourses drain eastward toward the Atlantic, but a large number terminate in lakes and swamps or become lost in the thirsty soils of the Pampas and Patagonia. The four major river systems are those that feed into the Río de la Plata estuary, those made up of the Andean streams, those of the central river system, and those of the southern system.
The Paraná, the second - longest river in South America after the Amazon, flows approximately 4,900 km and forms part of the borders between Brazil and Paraguay, and Paraguay and Argentina. Its upper reaches feature many waterfalls. It is joined by the Iguazú River (Río Iguaçu) where it enters Argentina in the northeast. This area is well known throughout the world for the spectacular Iguazú Falls (Cataratas Iguaçu, meaning "great water ''). One of the world 's great natural wonders, they are located on the border between Argentina and Brazil with two - thirds of the falls in Argentina. They include approximately 275 falls, ranging between 60 and 80 m high. These falls are higher and wider than Niagara Falls on the border of the United States and Canada. Other tributaries of the Paraná, which feed in from the west, are the Bermejo, Bermejito, Salado, and Carcarañá.
The Uruguay River (1,600 km) forms a part of the borders between Argentina and Brazil and Argentina and Uruguay. It is navigable for about 300 km from its mouth to Concordia. The 2,550 - km Paraguay River forms part of the border between Paraguay and Argentina, and flows into the Paraná north of Corrientes and Alto Paraná. These all join to flow into the Río de la Plata, and eventually into the Atlantic Ocean in northern Argentina. Where these rivers meet, a wide estuary is formed, which can reach a maximum width of 222 km.
In north central Argentina, Lake Mar Chiquita is supplied with its water by several rivers. The Dulce River originates near San Miguel de Tucumán and flows southwest into the lake. From the southwest it is also fed by the Primero and Segundo Rivers.
In the northern Patagonia region, the major rivers are the Colorado and Negro Rivers, both of which rise in the Andes and flow to the Atlantic Ocean. The Colorado is fed by the Salado River, which flows from Pico Ojos del Salado in a southeasterly direction to the Colorado. Tributaries of the Salado include the Atuel, Diamante, Tunuyán, Desaguadero, and the San Juan, all of which originate in the northwest Andes. The Negro also has two main tributaries of its own, the Neuquén and the Limay. In the central Patagonia region the Chubut rises in the Andes and flows east to form a sizable lake before making its way to the ocean. The Lake District is also coursed by its share of rivers, all originating in the mountains and flowing to the Atlantic. These include the Deseado, Chico, Santa Cruz, and Gallegos Rivers.
The Los Lagos Region (Lake District), on the border of Chile and Argentina in the Andes mountain region, contains many glacial lakes that are carved out of the mountains then filled by melt - water and rain. The most significant of these is Lago Buenos Aires, also known as General Carrera, located in southern Argentina and shared with Chile. It is the largest lake in the country and the fifth largest in all of South America with an average surface area of 2,240 km².
Moving south along the border one would encounter Lago San Martín, Lago Viedma, and finally Lago Argentino, the second largest lake in this region with an area of 1,466 km². Not far from Lake Buenos Aires on the Castillo Plain near Comodoro Rivadavia is Lake Colhue Huapi.
One of the world 's largest salt lakes, and the second largest lake in Argentina, is Lake Mar Chiquita (Little Sea), located in central Argentina. Its surface area varies from year to year and season to season, but in its wettest periods it has spanned 5,770 km². The reservoir created by the Chocón dam, located on the Río Negro, is one of the country 's largest manmade lakes.
Iberá, in the northeast of Argentina, is a biologically rich region, with more than sixty ponds joined to marshes and swampland. The area is extremely humid, and is home to hundreds of bird species and thousands of insects, including a wide variety of butterflies. The area hosts a diverse array of flora and fauna, notably the royal water lily, silk - cotton trees, alligators, and capybara, the largest rodent species in the world.
Argentina is subject to a variety of climates. The north of the country, including latitudes in and below the Tropic of Capricorn, is characterized by very hot, humid summers (which result in a lot of swamp lands) with mild drier winters, and is subject to periodic droughts during the winter season.
Central Argentina has hot summers with tornadoes and thunderstorms (in western Argentina producing some of the world 's largest hail), and cool winters. The southern regions have warm summers and cold winters with heavy snowfall, especially in mountainous zones. Higher elevations at all latitudes experience cooler conditions.
The National Parks of Argentina make up a network of 30 national parks in Argentina. The parks cover a very varied set of terrains and biotopes, from Baritú National Park on the northern border with Bolivia to Tierra del Fuego National Park in the far south of the continent (see List of national parks of Argentina).
The creation of the National Parks dates back to the 1903 donation of 73 square kilometres of land in the Lake District in the Andes foothills by Francisco Moreno. This formed the nucleus of a larger protected area in Patagonia around San Carlos de Bariloche. In 1934, a law was passed creating the National Parks system, formalising the protected area as the Nahuel Huapi National Park and creating the Iguazú National Park. The National Park Police Force was born, enforcing the new laws preventing tree - felling and hunting. Their early task was largely to establish national sovereignty over these disputed areas and to protect borders.
Five further national parks were declared in 1937 in Patagonia and the service planned new towns and facilities to promote tourism and education. Six more were declared by 1970.
In 1970 a new law established new categories of protection, so that there now were National Parks, National Monuments, Educational Reserves and Natural Reserves. Three national parks were declared in the 1970s. In 1980, another new law affirmed the status of national parks - this law is still in place. The 1980s saw the service reaching out to local communities and local government to help in the running and development of the national parks. Ten more national parks were created with local co-operation, sometimes at local instigation. In 2000, Mburucuyá and Copo National Parks were declared, and El Leoncito natural reserve was upgraded to a national park.
The headquarters of the National Park Service are in downtown Buenos Aires, on Santa Fe Avenue. A library and information centre are open to the public. The administration also covers the national monuments, such as the Petrified Forest, and natural and educational reserves.
Los Cardones National Park
Los Glaciares National Park
Tierra del Fuego National Park
Nahuel Huapi National Park
Iguazú National Park
El Palmar National Park
Laguna Blanca National Park
Argentina is largely antipodal to central and coastal China.
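A rough way to see this (a minimal sketch, with approximate coordinates chosen only as an example) is that the antipode of a point is found by negating its latitude and shifting its longitude by 180 degrees; applying this to Buenos Aires lands in the sea off the coast of eastern China:

# Minimal illustrative sketch: compute the antipodal point of given coordinates.
def antipode(lat, lon):
    """Return the antipodal (latitude, longitude) in decimal degrees."""
    anti_lon = lon + 180.0 if lon <= 0 else lon - 180.0
    return -lat, anti_lon

# Buenos Aires lies at roughly 34.6 S, 58.4 W (approximate values, for illustration).
print(antipode(-34.6, -58.4))  # (34.6, 121.6): in the sea off eastern China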
|
both ejaculatory ducts and the urethra pass through the | Ejaculatory duct - wikipedia
The ejaculatory ducts (ductus ejaculatorii) are paired structures in male anatomy. Each ejaculatory duct is formed by the union of the vas deferens with the duct of the seminal vesicle. They pass through the prostate, and open into the urethra at the seminal colliculus. During ejaculation, semen passes through the prostate gland, enters the urethra and exits the body via the urinary meatus.
Ejaculation occurs in two stages, the emission stage and the expulsion stage. The emission stage involves the workings of several structures of the ejaculatory duct; contractions of the prostate gland, the seminal vesicles, the bulbourethral gland and the vas deferens push fluids into the prostatic urethra. The semen is stored here until ejaculation occurs. Muscles at the base of the penis contract in order to propel the seminal fluid trapped in the prostatic urethra through the penile urethra and expel it through the urinary meatus. The ejaculate is expelled in spurts, due to the movement of the muscles propelling it. These muscle contractions are related to the sensations of orgasm for the male.
Sperm is produced in the testes and enters the ejaculatory ducts via the vas deferens. As it passes by the seminal vesicles, a fluid rich in fructose combines with sperm. This addition nourishes the sperm in order to keep it active and motile. Seminal fluid continues down the ejaculatory duct into the prostate gland, where an alkaline prostatic fluid is added. This addition provides the texture and odor associated with semen. The alkalinity of the prostatic fluid serves to neutralize the acidity of the female vaginal tract in order to prolong the survival of sperm in this harsh environment. Semen is now a fructose - rich, alkaline fluid containing sperm as it enters the bulbourethral glands below the prostate. The bulbourethral glands secrete a small amount of clear fluid into the urethra before the ejaculate is expelled. The functions of this fluid are not entirely known but are suggested to aid in lubricating the male urethra in preparation for the semen during ejaculation. The amount of semen produced and expelled during ejaculation corresponds to the length of time that the male is sexually aroused before ejaculation occurs. Generally, the longer the period of arousal, the larger the amount of seminal fluid.
Ejaculation and orgasm may occur simultaneously, however they are not coupled, in that one may occur without the other. For example, a man may have a dry orgasm (termed Retrograde ejaculation); there is no expulsion of ejaculate however the man still experiences orgasm. Also, paraplegics may ejaculate seminal fluid but not experience the sensation of orgasm.
Ejaculatory duct obstruction is an acquired or congenital pathological condition in which one or both ejaculatory ducts are obstructed. In the case that both ejaculatory ducts are obstructed, this illness presents with the symptoms of aspermia and male infertility.
Surgery to correct benign prostatic hyperplasia may destroy these ducts resulting in retrograde ejaculation. Retrograde ejaculation empties the seminal fluid formed in the emission phase into the bladder of the male instead of expelling it through the urethra and out the tip of the penis. This results in a dry orgasm, where orgasm may still be experienced but without expulsion of semen from the ejaculatory ducts.
Illustrations accompanying the original article depict the lobes of the prostate; a vertical section of the bladder, penis and urethra; the prostate with seminal vesicles and seminal ducts, viewed from in front and above; and a median sagittal section of the male pelvis.
|
where was some mothers do have them filmed | Some Mothers Do ' Ave ' Em - wikipedia
Some Mothers Do ' Ave ' Em is a British sitcom created and written by Raymond Allen and starring Michael Crawford and Michele Dotrice. It was first broadcast in 1973 and ran for three series, ending in 1978, and returning briefly in 2016 for a one - off special. The series follows the accident - prone Frank Spencer and his tolerant wife, Betty, through Frank 's various attempts to hold down a job, which frequently end in disaster. The sitcom was filmed in and around the town of Bedford in Bedfordshire. It was noted for its stuntwork, performed by Crawford himself, as well as featuring various well - remembered and much lampooned catchphrases, that have become part of popular culture. In a 2004 poll to find Britain 's Best Sitcom, Some Mothers Do ' Ave Em came 22nd.
The expression "Some mothers do have them '' is meant to refer to someone clumsy or foolish.
The wimpish, smiling Frank, sporting his trademark beret and trench coat, is married to the apparently normal Betty (Michele Dotrice) and in later series they have a baby daughter, Jessica. The character was popular with television impressionists such as Mike Yarwood in the 1970s, particularly his main catchphrase, "Ooh Betty '', which is only ever said in one episode: series 2, episode 2.
"Ooh Betty... '' is not Frank 's only catchphrase of the series. Others include a quavering "Oooh... '', usually uttered with his forefinger to his mouth as he stands amidst the chaos of some disaster he has just caused (and which he himself has invariably escaped unscathed). He also sometimes complains about being "ha - RASSed! '', or occasionally, "I 've had a lot of ha - RASSments lately '' (originally an American pronunciation). Other recurring catchphrases include references to "a bit of trouble '', which usually implies some sort of undisclosed digestive disorder, and to the cat having "done a whoopsie '' (presumably a euphemism for having defecated in an inappropriate place, on one occasion in Spencer 's beret). If Frank is pleased (or confused) about something, he will often use the catchphrase "Mmmm -- nice! '' or "Ohhh -- nice! ''
Despite his unfailing ability to infuriate people, Frank is essentially a very sympathetic character, who inspires much affection from his ever - loving and patient wife, Betty. He also venerates the memory of his late mother (Jessica Spencer) and worships his daughter (also named Jessica). (References to Frank 's mother by people who knew her suggest that she was very like her son.)
The final series was written by Allen based on stories by Michael Crawford (not written by Crawford himself as sometimes reported) and made five years after the previous one (although there had been two Christmas specials in between). Frank 's character changes noticeably in this series, becoming more self - aware and keen to make himself appear more educated and well - spoken. He develops an air of pomposity which is always best demonstrated when someone would approach and enquire "Mr Spencer? '' to which he would always reply, "I am he. '' He also becomes more self - assured, and much more willing to argue back when criticised, and often wins arguments by leaving his opponents dumbfounded by the bizarreness of what he would say.
Acknowledging the show 's success in Australia, the final series saw him begin talk of having relations there, and contemplating emigrating.
Crawford himself has talked of how he based many of Frank 's reactions on those of a young child. Crawford also found it difficult to break out of the public association with the role, despite his later career as a hugely successful musical performer on the West End and Broadway stage, in popular shows such as Barnum and The Phantom of the Opera.
Ronnie Barker and Norman Wisdom were the BBC 's first and second choices for the role of Frank. However the casting of Crawford proved effective, as many of Frank 's mannerisms and turns of phrase were invented by the actor (some having been used previously by Crawford in the film Hello, Dolly!), and his stunt - performing and singing skills were used in the series.
In addition to Frank and Betty, most episodes would introduce at least one other character (a doctor, a neighbour, an employer, etc.) who would be seen to gradually suffer the inevitably chaotic consequences of Frank 's fleeting presence in their lives. These characters were often played by some of the greatest and recognisable character actors of the era, including George Baker, James Cossins, Peter Jeffrey, Richard Wilson, Fulton Mackay, Bernard Hepton, Christopher Timothy, George Sewell, Bryan Pringle, Christopher Biggins, Milton Johns, John Ringham, David Ryall and Elisabeth Sladen (who, in her autobiography, mentions that she was considered for the role of Betty). David Jacobs appeared as himself in the 1975 Christmas special. A pre-Minder Glynn Edwards appeared in more than two episodes as Frank and Betty 's irascible new neighbour, Mr Lewis, while a pre-Bread Jean Boht appeared in two episodes as Mrs Lewis. In addition, a pre-Hi - de-Hi Diane Holland appeared in one episode as a hospital receptionist. One regular character in the early series was Frank 's long suffering mother - in - law Mrs. Fisher, played by Jane Hylton and Frank 's local Catholic priest, Father O'Hara, who was played by Cyril Luckham. Australian actor and comedian Dick Bentley appeared in three of the last four episodes broadcast, as Frank 's Australian grandfather.
The theme tune by Ronnie Hazlehurst features two piccolos spelling out the title in Morse code, excluding the apostrophes.
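As a small illustrative sketch (using the standard international Morse alphabet; the rhythm the piccolos actually play is not reproduced here), the title with its apostrophes dropped spells out as follows:

# Illustrative sketch: spell the series title in Morse code, apostrophes excluded.
MORSE = {
    "A": ".-", "D": "-..", "E": ".", "H": "....", "M": "--",
    "O": "---", "R": ".-.", "S": "...", "T": "-", "V": "...-",
}

def title_in_morse(title):
    """Encode the title's letters in Morse, dropping apostrophes; ' / ' separates words."""
    words = title.replace("'", "").upper().split()
    return " / ".join(" ".join(MORSE[letter] for letter in word) for word in words)

print(title_in_morse("Some Mothers Do 'Ave 'Em"))
# ... --- -- . / -- --- - .... . .-. ... / -.. --- / .- ...- . / . --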
Surviving notes from the original episode list: Herne Bay in Kent features in one episode, in which Frank (Michael Crawford) arrives at a shop for a job interview; Fulton Mackay and Edward Hardwicke appear as guest stars in other episodes; and Christopher Biggins guest stars as a learner pilot.
Due to time pressures and aircraft noise problems, sound recordists were unable to accurately capture the audio within the aircraft cockpit. To alleviate this, the BBC Radiophonic Workshop used stock library sounds, including wind noise, the Wilhelm Scream No. 3 and radio chatter from a previous episode of Dad 's Army. The engine sound was recreated using an edited version of the aircraft sounds used in the classic film The Dam Busters. Audio - visual syncing issues make it clearly evident in these scenes that the actors ' voices were re-recorded afterwards.
The BBC has repeated Some Mothers Do ' Ave ' Em several times since the series was produced in the 1970s. The programme has been shown on Catalan public television, in Nigeria on the NTA in the 1980s and 1990s, and in Australia on the Seven Network 's Great Comedy Classics franchise in 2006 - 2007 and on GO! from 2009 to 2010; the sitcom now screens on 7TWO. British channels Gold, BBC Two and BBC Prime took over repeats of the programme in 2007.
On 4 October 2015, it was reported that the BBC were planning a return of the sitcom, with Crawford and Dotrice purported to be involved in shooting scheduled for early 2016. The news was met extremely positively on social media. On 18 March 2016, Michael Crawford and Michele Dotrice reprised their roles for a one - off sketch for Sport Relief. Cameo appearances included Sir Bradley Wiggins, Paul McCartney and Jenson Button.
Sketch for Sport Relief. Gemma Arterton guest stars as a grown up version of Baby Jessica, alongside Sir Paul McCartney, Jenson Button, Boris Johnson, Roy Hodgson, Arsène Wenger, David Walliams, Jessica Ennis, Bradley Wiggins, Clare Balding, Sir Chris Hoy, Sir Andy Murray, and Jamie Murray playing themselves, and featured Chris Wilson as the Newsagent.
In the United Kingdom six episodes from Series 1 and various selected episodes of Some Mothers Do ' Ave ' Em were originally released by BBC Video on VHS in the 1990s. Series 1 and Series 2 were released on VHS and DVD on 21 October 2002. Series 3 and the Christmas Specials were released on VHS and DVD on 19 May 2003. The Complete Series was released on VHS and DVD on 6 October 2003, by Second Sight available in Region 2. On 1 November 2010, 2 entertain reissued Some Mothers Do ' Ave ' Em -- The Complete Christmas Specials. On 14 February 2011 Some Mothers Do ' Ave ' Em -- The Complete Series and Christmas Specials was reissued by 2 entertain with new packaging.
The complete collection is now available from both BBC Store and iTunes as a digital download.
In Australia Series 1 - 3 and the Christmas Specials were released in 2003 and 2004. The complete boxset was released in 2004 on DVD in region 4.
In the United States 13 selected episodes were released on VHS in 1998, and reissued on DVD region 1 in 2001.
A stage adaptation, written and directed by Guy Unsworth based on the TV series will begin a UK tour at the Wyvern Theatre in Swindon in February 2018. Comedian Joe Pasquale plays Frank Spencer, with Sarah Earnshaw as Betty and Susie Blake as Mrs Fisher.
|
who has been awarded the nobel peace prize 2017 | 2017 Nobel Peace Prize - wikipedia
The 2017 Nobel Peace Prize was awarded to the International Campaign to Abolish Nuclear Weapons (ICAN) "for its work to draw attention to the catastrophic humanitarian consequences of any use of nuclear weapons and for its ground - breaking efforts to achieve a treaty - based prohibition on such weapons, '' according to the Norwegian Nobel Committee announcement on October 6, 2017. The award announcement acknowledged the fact that "the world 's nine nuclear - armed powers and their allies '' neither signed nor supported the treaty - based prohibition known as the Treaty on the Prohibition of Nuclear Weapons or nuclear ban treaty, yet in an interview Committee Chair Berit Reiss - Andersen told reporters that the award was intended to give "encouragement to all players in the field '' to disarm. The award was hailed by civil society as well as governmental and intergovernmental representatives who support the nuclear ban treaty, but drew criticism from those opposed. At the Nobel Peace Prize award ceremony held in Oslo City Hall on December 10, 2017, Setsuko Thurlow, an 85 - year - old survivor of the 1945 atomic bombing of Hiroshima, and ICAN Executive Director Beatrice Fihn jointly received a medal and diploma of the award on behalf of ICAN and delivered the Nobel lecture.
A global civil society coalition of 468 peace, human rights, environment, development and faith groups as of 2017, ICAN was recognized for its decade - long consensus - building support for the Humanitarian Pledge and the Treaty on the Prohibition of Nuclear Weapons. Nobel Committee Chair Berit Reiss - Andersen described ICAN 's work as having "brought the debate forward by focusing so heavily on the humanitarian consequences of using nuclear arms. ''
The Peace Prize announcement came in the midst of the 2017 North Korea crisis, uncertainty over certification of Iran 's compliance with the 2015 accord that limits Iran 's nuclear program, the Doomsday Clock assessment in January 2017 of the highest threat of nuclear war since 1953, heightened rhetoric between Indian and Pakistani military officials to target each other and retaliate with the early use of nuclear weapons, Russia 's strategic doctrine calling for early use of nuclear weapons against any "major NATO assault '' on its territory, and opposition by nuclear powers to the nuclear ban treaty and its ratification.
In a telephone interview immediately after the announcement, ICAN Executive Director Beatrice Fihn said that, the Cold War being long over, possession and use of weapons of mass destruction "is no longer acceptable '' in the 21st century. In a formal statement, ICAN called the 2017 prize a tribute to "the tireless efforts of many millions of campaigners and concerned citizens worldwide who, ever since the dawn of the atomic age, have loudly protested nuclear weapons '' and to "the survivors of the atomic bombings of Hiroshima and Nagasaki -- the hibakusha -- and victims of nuclear test explosions ''. Holding a press conference at UN Headquarters, in New York, the ICAN executive director said that disarmament campaign efforts of a "new generation, '' of "people who grew up after the Cold War and do n't understand why we still have the (nuclear) weapons, '' were in effect also being recognized by the award.
Nominations for the prize numbered 318, including 215 individuals and 103 organizations, the second highest total after the record 376 nominations considered in 2016. Though the Nobel Committee does not release names being considered for 50 years, reportedly they included: Tong Jen and Onodera Toshitaki, seeking justice for Chinese victims of wartime atrocities during World War II; Iran Foreign Minister Mohammad Javad Zarif and EU foreign policy chief Federica Mogherini, organizers of the 2015 Iran Deal negotiations; UNHCR and High Commissioner Filippo Grandi for their work on the rights and dignity of refugees; the Turkish newspaper Cumhuriyet and journalist Can Dündar; The Economic Community of West African States (ECOWAS) for their work securing Gambia 's political transition; and the humanitarian White Helmets, also known as the Syrian Civil Defense, and Raed al Saleh.
Congratulatory messages in the days following the award announcement came from individual disarmament supporters as well as ICAN coalition organizations, other civil society groups, public figures, governments and the United Nations, including: survivors of the Hiroshima and Nagasaki bombings (hibakusha), The ATOM Project, Peace Boat, Nuclear Threat Initiative, Campaign for Nuclear Disarmament, Women 's International League for Peace and Freedom, Human Rights Watch, Oxfam, Ploughshares Fund, Stockholm International Peace Research Institute, Germany, EU foreign policy chief Federica Mogherini, UK Green Party co-leader Caroline Lucas, Austria, Canada 's New Democratic Party Critic for Foreign Affairs Hélène Laverdière, Mexico, and Nigeria. Twenty - three countries included congratulatory remarks in their statements at the UN General Assembly First Committee on Disarmament and International Security, including Sweden and New Zealand.
Pugwash President Sergio Duarte wrote that the award designation reflects "growing public recognition '' of banning nuclear weapons as part of the international humanitarian norm to abolish weapons of mass destruction, citing examples of the abolition of bacteriological weapons in the 1970s and chemical weapons in the 1990s. He also called on State parties to make further progress at the UN High Level Conference on Nuclear Disarmament slated for 2018, noting the role of civil society organizations such as ICAN in supporting such multilateral disarmament processes.
UN Secretary - General António Guterres relayed in a press statement that the award "recognizes the determined efforts of civil society to highlight the unconscionable humanitarian and environmental consequences that would result if they (nuclear weapons) were ever used again, '' noting that the first UN General Assembly resolution, in 1946, had "established the goal of ridding the world of nuclear weapons and all weapons of mass destruction. '' The UN 's top disarmament official Izumi Nakamitsu said in a statement that the 2017 Peace Prize "recognizes once again the vital and indispensable role of civil society in advancing our common aspirations for peace, security and a world free of nuclear weapons. ''
Former Soviet Union President Mikhail Gorbachev in a statement said the award designation was "a very good decision '' and signified that "a world without nuclear weapons -- there can not be any other goal! '', also recalling a joint statement with then US President Ronald Reagan at the 1986 Reykjavik Summit that "a nuclear war can not be won and must never be fought. ''
Welcoming the implications of the Peace Prize announcement in an interview with RT, "whistleblower '' William McNeilly defended his report, leaked via WikiLeaks in 2015, that claimed safety and security failures in the Trident nuclear programme and that sparked nuclear - deterrent debates in the UK the same year.
New Zealand 's Green Party Spokesperson for Social Development, MP Jan Logie, said that "Our Pacific Ocean and its peoples have suffered the terrible effects of nuclear explosions and today we acknowledge the survivors of nuclear weapons use and testing. This Nobel Prize honours them. ''
Aging survivors of the 1945 Hiroshima and Nagasaki atomic bombings, known as hibakusha, have long campaigned to abolish nuclear weapons, often recounting the horrific suffering they endured and from which many more died. They shared their reactions to the award at gatherings to watch the broadcast Peace Prize announcement and in other press interviews.
Speaking at Bowling Green University with fellow Hiroshima bombing survivor Keiko Ogura, who founded Hiroshima Interpreters for Peace, Setsuko Thurlow likened ICAN 's work to other social movements eventually embraced by nations, saying "it is our moral imperative to abolish nuclear weapons '' and that the Peace Prize for ICAN "represents a break from the typical state perspective. ''
Supporters from faith communities issued congratulatory statements, including: the Dalai Lama, Daisaku Ikeda, Father Shay Cullen, the Holy See, Pax Christi International, and the World Council of Churches.
While the majority of reactions from the international community hailed the Nobel Committee 's decision, other reactions were critical about the announcement 's implications. NATO Secretary - General Jens Stoltenberg said that NATO has in common with ICAN the goal of "preserving peace and creating the conditions for a world without nuclear weapons '' and welcomes the attention drawn by the award announcement to nuclear non-proliferation issues, but that the nuclear ban treaty supported by ICAN "risks undermining the progress we have made over the years, '' citing the existence of nuclear arms as the reason to maintain nuclear arsenals and for NATO remaining a nuclear alliance since the Cold War. In a press release Norwegian Prime Minister Erna Solberg praised ICAN for promoting their common goal of a world free of nuclear weapons but reiterated that Norway will not sign the ban treaty, echoing NATO 's stance.
Similarly, spokesperson Dmitry Peskov told reporters that the Kremlin believes the award decision should be respected and that Russia, as a member of the nuclear club, both supports nuclear non-proliferation and maintains the position expressed by President Vladimir Putin that "there is no alternative to nuclear parity '' in global security measures. The government of Australia had not commented on the award designation as of October 9, 2017, but, through its spokesperson, acknowledged "the commitment of ICAN and its supporters to promoting awareness of the humanitarian consequences of nuclear weapons '' and restated the government 's position that "so long as the threat of nuclear attack exists, US extended deterrence will serve Australia 's fundamental national security interests. '' The United States said in its statement that the award announcement "does not change the U.S. position on the treaty '', which in its view "risks undermining existing efforts to address global proliferation and security challenges, '' and that "no state possessing nuclear weapons or which depends upon such weapons for its security supports '' the ban treaty. When asked whether Canadian Prime Minister Justin Trudeau wanted to congratulate ICAN, the prime minister 's office did not respond, though in a June 2017 statement Canadian Foreign Minister Chrystia Freeland 's press secretary said that "Canada remains firmly committed to concrete steps towards global nuclear disarmament and nonproliferation. ''
The Economist questioned the appropriateness of ICAN 's winning of the prize, arguing it was doubtful their nuclear - ban treaty effort would do anything to advance global peace due to its rejection by the world 's nuclear powers.
On October 20, 2017, Euronews reported that, through research with the German broadcaster ZDF into the Nobel Prize Foundation 's index fund investments, the German campaign group Facing Finance had determined that the Peace Prize award was funded in part by Foundation investments in companies contributing to nuclear weapons programs, including Textron, Lockheed Martin and Raytheon, and urged ICAN not to accept the 9 million SEK award money. According to Agence France - Presse, the head of the Nobel Institute, Olav Njolstad, was confronted on October 26, 2017 with the findings, which were confirmed by the environmental group The Future in Our Hands, and Foundation director Lars Heikensten said the following day that "(a) t the latest, by March next year (2018) we will have no investment in anything that is connected with any kind of production which is classified as connected with nuclear weapons. ''
A day ahead of the December 10 award ceremony at Oslo City Hall, ICAN installed 1,000 red and blue paper cranes made by children in Hiroshima outside the Norwegian Parliament building.
After the first wartime use of nuclear weapons, in 1946, the Peace Prize began to recognize nuclear disarmament efforts.
In the award presentation speech on December 10, 2017, Nobel Committee Chair Berit Reiss - Andersen recalled that "twelve Peace Prizes have been awarded, in whole or in part, '' to honor "efforts against the proliferation of nuclear weapons and for nuclear disarmament, '' and included 2009 Nobelist Barack Obama.
|
who did shaggy's voice in scooby doo | Casey Kasem - wikipedia
Kemal Amin "Casey '' Kasem (April 27, 1932 -- June 15, 2014) was an American disc jockey, music historian, radio personality, voice actor, and actor, known for being the host of several music radio countdown programs, most notably American Top 40, from 1970 until his retirement in 2009, and for providing the voice of Norville "Shaggy '' Rogers in the Scooby - Doo franchise from 1969 to 1997, and again from 2002 until 2009.
Kasem co-founded the American Top 40 franchise in 1970, hosting it from its inception to 1988, and again from 1998 to 2004. Between January 1989 and early 1998, he was the host of Casey 's Top 40, Casey 's Hot 20, and Casey 's Countdown. From 1998 to 2009, Kasem also hosted two adult contemporary spin - offs of American Top 40: American Top 20 and American Top 10.
In addition to his radio shows, Kasem provided the voice of many commercials, performed many voices for Sesame Street, provided the character voice of Peter Cottontail in the Rankin / Bass production of Here Comes Peter Cottontail, was "the voice of NBC '', and helped out with the annual Jerry Lewis telethon. He provided the cartoon voices of Robin in "The Adventures of Batman and Robin '' and Super Friends, Mark on Battle of the Planets, and a number of characters for the Transformers cartoon series of the 1980s. In 2008, he was the voice of Out of Sight Retro Night which aired on WGN America but was replaced by rival Rick Dees. After 40 years, Kasem retired from his role of voicing Shaggy in 2009, although he did voice Shaggy 's father in the 2010 TV series Scooby - Doo! Mystery Incorporated.
Kasem was born in Detroit, Michigan, on April 27, 1932, to Lebanese Druze immigrant parents, who had settled in Michigan, where they worked as grocers. Kasem was named after Turkish leader Mustafa Kemal Atatürk, a man Kasem said his father respected.
In the 1940s, "Make Believe Ballroom '' reportedly inspired Kasem to follow a career in radio and later host a national radio hits countdown show. Kasem received his first experience in radio covering sports at Northwestern High School in Detroit. He then went to Wayne State University for college. While at Wayne State, he voiced children on radio programs such as The Lone Ranger and Challenge of the Yukon. In 1952, Kasem was drafted into the U.S. Army and sent to Korea. There, he worked as a DJ / announcer on the Armed Forces Radio Korea Network.
After the war, Kasem began his professional broadcasting career in Flint, Michigan. From there, he spent time in Detroit as a disc jockey for radio station WJBK - AM (and doing such shows as The Lone Ranger and Sergeant Preston of the Yukon); WYSL in Buffalo, New York; and a station in Cleveland before moving to California. At KYA in San Francisco, the general manager first suggested he tone down his ' platter patter ' and talk about the records instead. Kasem demurred at first, because it was not what was normally expected in the industry. At KEWB in Oakland, California, Kasem was both the music director and on - air personality. He created a show which mixed in biographical tidbits about the artists ' records he played, and attracted the attention of Bill Gavin who tried to recruit him as a partner. After Kasem joined KRLA in Los Angeles in 1963, his career really started to blossom and he championed the R&B music of East L.A.
Kasem earned roles in a number of low - budget movies and acted on radio dramas. While hosting "dance hops '' on local television, he attracted the attention of Dick Clark who as a producer hired him to co-host a daily teenage music show called Shebang starting in 1964. Kasem appeared in network TV series including Hawaii Five - O and Ironside. In 1967, Kasem appeared on The Dating Game, and played the role of "Mouth '' in the motorcycle gang film The Glory Stompers. In 1969, he played the role of "Knife '' in the "surfers vs. bikers '' film Wild Wheels, and had a small role in another biker movie, The Cycle Savages, starring Bruce Dern and Melody Patterson.
Kasem 's voice was, however, always the key to his career. At the end of the 1960s, he began working as a voice actor. In 1969, he started one of his most famous roles, the voice of Shaggy on Scooby - Doo, Where Are You!. He also voiced the drummer Groove from The Cattanooga Cats that year. In 1964, Kasem had a minor hit single called "Letter From Elaina ''. A spoken - word recording, it told the story of a girl who met George Harrison after a San Francisco concert.
On July 4, 1970, Kasem, along with Don Bustany, Tom Rounds, and Ron Jacobs, launched the weekly radio program American Top 40 (AT40). At the time, top 40 radio was on the decline as DJs preferred to play album - oriented progressive rock. Loosely based on the TV program Your Hit Parade, the show counted down from # 40 on the pop charts to # 1 -- the first # 1 was Three Dog Night 's "Mama Told Me (Not to Come) '' -- based on the Billboard Hot 100 each week. The show, however, was not just about the countdown. Kasem mixed in biographical information about the artists, flashback, and "long - distance dedication '' segments where he read letters written by listeners to dedicate songs of their choice to far away loved ones. He often included trivia facts about songs he played and artists whose work he showcased. Frequently, he mentioned a trivia fact about an unnamed singer before a commercial break, then provided the name of the singer after returning from the break. Kasem ended the program with his signature sign - off, "Keep your feet on the ground and keep reaching for the stars. ''
The show debuted on seven stations, but on the back of Kasem 's "always friendly and upbeat '' baritone voice it soon went nationwide. In October 1978, the show expanded from three hours a week to four. American Top 40 's success spawned several imitators including a weekly half - hour music video television show, America 's Top 10, hosted by Kasem himself. "When we first went on the air, I thought we would be around for at least 20 years, '' he later remarked. "I knew the formula worked. I knew people tuned in to find out what the No. 1 record was. '' Due to his great knowledge of music, Kasem became known as not just a disc jockey, but also a music historian.
In 1971, Kasem provided the character voice of Peter Cottontail in the Rankin / Bass production of Here Comes Peter Cottontail. In the same year, he appeared in the low - budget film The Incredible 2 - Headed Transplant, in what was probably his best remembered acting role. From 1973 until 1985, he voiced Robin on several SuperFriends franchise shows. In 1980, he voiced Merry in The Return of the King. He also voiced Alexander Cabot III on Josie and the Pussycats and Josie and the Pussycats in Outer Space, and supplied a number of voices for Sesame Street.
In the late 1970s, Kasem portrayed an actor who imitated Columbo in the Hardy Boys / Nancy Drew Mysteries two - part episode "The Mystery of the Hollywood Phantom. '' He portrayed a golf commentator in an episode of Charlie 's Angels titled "Winning is for Losers '', and appeared on Police Story, Quincy, M.E., and Switch. In 1977 he was initially hired as the narrator for the ABC sitcom Soap, but quit after the pilot episode due to the content. Rod Roddy took his place on the program. In 1984, Kasem made a cameo in Ghostbusters, reprising his role as the host of American Top 40. For a period in the late 1970s, Kasem was also the staff announcer for the NBC television network.
In 1988, Kasem left American Top 40 due to a contract dispute with ABC Radio Network. He signed a five - year, $15 million contract with Westwood One and started Casey 's Top 40, which used a different chart -- the Radio & Records Contemporary Hit Radio (CHR) / Pop radio airplay chart -- to determine the top 40, meaning it featured exactly the same song positions as the Rick Dees Weekly Top 40, which used the same chart at the time. He also hosted two shorter versions of the show: Casey 's Hot 20 and Casey 's Countdown. During the late 1990s, Kasem hosted the Radio Hall of Fame induction ceremony.
Kasem voiced Mark in Battle of the Planets and several Transformers characters: Bluestreak, Cliffjumper, Teletraan I and Dr. Arkeville. He left Transformers during the third season due to what he perceived as offensive caricatures of Arabs and Arab countries. In a 1990 article, he explained:
A few years ago, I was doing one of the voices in the TV cartoon series, Transformers. One week, the script featured an evil character named Abdul, King of Carbombya. He was like all the other cartoon Arabs. I asked the director, ' Are there any good Arabs in this script for balance? ' We looked. There was one other -- but he was no different than Abdul. So, I told the show 's director that, in good conscience, I could n't be a part of that show.
From 1989 to 1998, Kasem hosted Nick at Nite 's New Year 's Eve countdown of the top reruns of the year. He also made cameo appearances on Saved by the Bell and ALF in the early 1990s. In 1997, Kasem quit his role as Shaggy in a dispute over a Burger King commercial, with Billy West and Scott Innes taking over the character in the late 1990s and early 2000s.
The original American Top 40, hosted by Shadoe Stevens after Kasem 's departure, was cancelled in 1995. Kasem regained the rights to the name in 1997, and the show was back on the air in 1998, on the AMFM Network (later acquired by Premiere Radio Networks).
At the end of 2003, Kasem announced he would be leaving AT40 once his contract expired the following month and would be replaced by Ryan Seacrest. He agreed to a new contract to continue hosting his weekly adult contemporary countdown shows in the interim, which at the time were both titled American Top 20. In 2005 Kasem renewed his deal with Premiere Radio Networks to continue hosting his shows, one of which had been reduced to ten songs and had, appropriately, been retitled American Top 10 to reflect the change.
In April 2005, a television special called American Top 40 Live aired on the Fox network, hosted by Seacrest, with Kasem appearing on the show. In 2008, Kasem did the voice - over for WGN America 's Out of Sight Retro Night. He was also the host of the short - lived American version of 100 % during the 1998 -- 99 season.
In June 2009, Premiere announced it would no longer produce Kasem 's two remaining countdowns, ending their eleven - year relationship. Kasem decided not to try to bring his show to another syndicator or find a replacement host to continue it. Citing a desire to explore other avenues in his life, such as writing a memoir, the 77 - year - old Kasem sent out a press release during the last week of June announcing that he would retire from radio on July 4 weekend, the thirty - ninth anniversary of the first ever countdown he presided over.
Kasem also performed TV commercial voice - overs throughout his career, appearing in more than 100 commercials in all.
In 2002, Kasem reprised the role of Shaggy when it was determined the character would be a vegetarian. In 2009, he retired from voice acting, with his final performance being the voice of Shaggy in Scooby - Doo! and the Samurai Sword. He did voice Shaggy again for "The Official BBC Children in Need Medley '', but went uncredited by his request. Although officially retired from acting, he provided the voice of Colton Rogers, Shaggy 's father, on a recurring basis for the 2010 -- 2013 series Scooby - Doo! Mystery Incorporated, again uncredited at his request.
As for his recognizable voice quality, "It 's a natural quality of huskiness in the midrange of my voice that I call ' garbage, ' '' he stated to The New York Times. "It 's not a clear - toned announcer 's voice. It 's more like the voice of the guy next door. ''
Kasem was a devout vegan, supported animal rights and environmental causes, and was a critic of factory farming. He initially quit voicing Shaggy in the late 1990s, when asked to voice Shaggy in a Burger King commercial, returning in 2002 after negotiating to have Shaggy become a vegetarian.
Kasem was active in politics for years, supporting Lebanese - American and Arab - American causes, an interest which was triggered by the 1982 Israeli invasion of Lebanon. He wrote a brochure published by the Arab American Institute entitled "Arab - Americans: Making a Difference ''. He turned down a position in season three of Transformers because of the show 's plot portraying "evil Arabs ''. He also called for a fairer depiction of heroes and villains, on behalf of all cultures, in Disney 's 1994 sequel to Aladdin called The Return of Jafar. In 1996, he was honored as "Man of the Year '' by the American Druze Society. Kasem campaigned against the Gulf War, advocating non-military means of pressuring Saddam Hussein into withdrawing from Kuwait, was an advocate of Palestinian independence and arranged conflict resolution workshops for Arab Americans and Jewish Americans.
A political liberal, he narrated a campaign ad for George McGovern 's 1972 presidential campaign, hosted fundraisers for Jesse Jackson 's presidential campaigns in 1984 and 1988, supported Ralph Nader for U.S. President in 2000, and supported progressive Democrat Dennis Kucinich in his 2004 and 2008 presidential campaigns. Kasem supported a number of other progressive causes, including affordable housing and the rights of the homeless.
Kasem was married to Linda Myers from 1972 to 1979; they had three children: Mike, Julie, and Kerri Kasem.
Kasem was married to actress Jean Thompson from 1980 until his death. They had one child, Liberty Jean Kasem.
In 1989, Kasem purchased a home built in 1954 and located at 138 North Mapleton Drive, previously owned by developer Abraham M. Lurie, as a birthday present for his wife Jean. In 2013, he listed it for US $43 million.
In October 2013, Kerri Kasem said her father was suffering from Parkinson 's disease, which a doctor had diagnosed in 2007; a few months later, she said he was diagnosed with Lewy body dementia, which is often difficult to differentiate from Parkinson 's. Due to his condition, he was no longer able to speak during his final months.
As his health worsened in 2013, Jean Kasem prevented any contact with her husband, particularly from his children by his first marriage. On October 1, Kerri, Mike and Julie protested in front of the Kasem home, having not been allowed contact with their father for three months; some of Casey Kasem 's long - time friends and colleagues, along with his brother Mouner, also joined the demonstration. The eldest Kasem children sought conservatorship over their father 's care, with Julie and her husband Jamil Aboulhosn filing the papers; the court denied their petition in November.
Kasem was removed from a Santa Monica, California nursing home by his wife on May 7, 2014. On May 12, Kerri Kasem was granted temporary conservatorship over her father, despite her stepmother 's objection. The court also ordered an investigation into Casey Kasem 's whereabouts, after his wife 's attorney told the court Casey was "no longer in the United States ''. He was found soon afterward in Washington state.
On June 6, 2014, Kasem was reported to be in critical but stable condition at a hospital in Washington state, receiving antibiotics for bedsores and treatment for high blood pressure. It was revealed that he had been bedridden for some time. A judge ordered separate visitation times due to antagonism between Jean Kasem and his children from his first wife. Judge Daniel S. Murphy ruled that Kasem had to be hydrated, fed, and medicated as a court - appointed lawyer reported on his health status. Jean Kasem claimed that he had been given no food, water, or medication the previous weekend. Kerri Kasem 's lawyer stated that she had him removed from artificial food and water on the orders of a doctor and in accordance with a directive her father signed in 2007 saying he would not want to be kept alive if it "would result in a mere biological existence, devoid of cognitive function, with no reasonable hope for normal functioning. '' Murphy reversed his order the following Monday, after it became known that Kasem 's body was no longer responding to the artificial nutrition, allowing the family to place Kasem on "end - of - life '' measures over the objections of Jean Kasem.
On June 15, 2014, Kasem died at St. Anthony 's Hospital in Gig Harbor, Washington at the age of 82. The immediate cause of death was reported as sepsis caused by an ulcerated bedsore. He was survived by his wife, four children, and four grandchildren. His body was handed over to his widow, who would make funeral arrangements. Reportedly, Kasem wanted to be buried at Forest Lawn Memorial Park in Glendale.
By July 19, a judge had granted Kasem 's daughter Kerri a temporary restraining order to prevent his wife from cremating Kasem 's body to allow an autopsy to be performed, but when she went to give a copy of the order to the funeral home, she was informed the body had been moved at the directive of Jean Kasem. Kasem 's wife had the body moved to a funeral home in Montreal on July 14, 2014. On August 14, it was reported in the Norwegian newspaper Verdens Gang that Kasem was going to be buried in Oslo.
Kasem 's family had his interment at Oslo Western Civil Cemetery on December 16, 2014, more than six months after his death.
In November 2015, three of Kasem 's children and his brother sued his widow for wrongful death. The lawsuit charges Jean Kasem with elder abuse and inflicting emotional distress on the children by restricting access prior to his death.
In 1981, Kasem was granted a star on the Hollywood Walk of Fame. He was inducted into the National Association of Broadcasters Hall of Fame radio division in 1985, and the National Radio Hall of Fame in 1992. Five years later, he received the Radio Hall of Fame 's first Lifetime Achievement Award. In 2003, Kasem was given the Radio Icon award at the Radio Music Awards.
|
where does the movie the outsiders take place | The Outsiders (film) - wikipedia
The Outsiders is a 1983 American coming - of - age drama film directed by Francis Ford Coppola, an adaptation of the novel of the same name by S.E. Hinton. The film was released on March 25, 1983. Jo Ellen Misakian, a librarian at Lone Star Elementary School in Fresno, California, and her students were responsible for inspiring Coppola to make the film.
The film is noted for its cast of up - and - coming stars, including C. Thomas Howell (who garnered a Young Artist Award), Rob Lowe, Emilio Estevez, Matt Dillon, Tom Cruise, Patrick Swayze, Ralph Macchio, and Diane Lane. The film helped spark the Brat Pack genre of the 1980s. Both Lane and Dillon went on to appear in Coppola 's related film Rumble Fish. Emilio Estevez went on to be in That Was Then... This Is Now, the only S.E. Hinton film adaptation not to star Matt Dillon.
The movie received mostly positive reviews from critics, and performed well at the box office, grossing $33 million on a $10 million budget.
In Tulsa, Oklahoma, greasers are a gang of tough, low - income working - class teens. They include Ponyboy Curtis and his two older brothers, Sodapop and Darrel, as well as Johnny Cade, Dallas Winston, Two - Bit Matthews, and Steve Randle. Their rivalry is with the Socs, a gang of wealthier kids from the other side of town. Two Socs, Bob Sheldon and Randy Anderson, confront Johnny, Ponyboy, and Two - Bit, who are talking to the Socs ' girlfriends, Cherry and Marcia, at a drive - in theater. The girls defuse the situation by going home with the Socs. Later that night, Ponyboy and Johnny are attacked in a park by Bob, Randy, and three other Socs. They begin dunking Ponyboy in a fountain attempting to drown him, but Johnny pulls out his switchblade and stabs Bob to death.
On the advice of Dallas, and because murderers in Oklahoma face execution in the electric chair, Ponyboy and Johnny flee on a cargo train and hide out in an abandoned church in Windrixville. Both boys cut their hair and Ponyboy bleaches his with peroxide in order to mask their descriptions. To pass time, the boys play poker and Ponyboy reads Gone with the Wind and quotes the Robert Frost poem "Nothing Gold Can Stay ''. After a few days, Dallas arrives with news that Cherry has offered to support the boys in court and that he told the police that Johnny and Pony were in Texas, and he gives Pony a note from Sodapop. They go out to get something to eat, then return to find the church on fire with children trapped inside. The Greasers turn into heroes as they rescue the kids from the burning church. Ponyboy and Dally soon recover from their injuries; Johnny, on the other hand, ends up with a broken back and severe burns. The boys are praised for their heroism, but Johnny is charged with manslaughter for killing Bob, while Ponyboy may be sent to a boys ' home.
Bob 's death has sparked calls from the Socs for "a rumble, '' which the Greasers win. Dallas drives Ponyboy to the hospital to visit Johnny. Johnny is unimpressed by the victory, and dies after telling Ponyboy to "stay gold, '' referring to the Frost poem. Unable to bear Johnny 's death, Dallas wanders through the hospital, pretending to shoot a doctor with his unloaded gun, which clicks harmlessly. He then robs a grocery store with the same gun, but he is shot and wounded by the owner as he flees. Pursued by the police, Dallas is surrounded in a park and the police kill him after he repeatedly refuses to drop his unloaded gun. Ponyboy is eventually cleared of wrongdoing in Bob 's death and allowed to stay with his brothers. Turning the pages of Johnny 's copy of Gone with the Wind, Ponyboy finds a letter from Johnny saying that saving the children was worth sacrificing his own life. The story ends with Ponyboy writing a school report about his experiences.
The book 's author, S.E. Hinton, appears briefly as a nurse in Dally 's (Dillon 's) hospital room.
Francis Ford Coppola had not intended to make a film about teen angst until Jo Ellen Misakian, a school librarian from Lone Star Elementary School in Fresno, California, wrote to him on behalf of her seventh and eighth grade students about adapting The Outsiders. When Coppola read the book, he was moved not only to adapt and direct it, but to follow it the next year by adapting Hinton 's novel Rumble Fish. The latter film 's cast also included Matt Dillon, Diane Lane, and Glenn Withrow.
The film was shot on location in Tulsa, Oklahoma. Coppola filmed The Outsiders and Rumble Fish back - to - back in 1982 -- a newspaper, used to show a story about the three greasers saving the kids in The Outsiders, includes a real story from 1982 regarding the death of a man hit by a train in Boston. He wrote the screenplay for the latter while on days off from shooting the former. Many of the same locations were used in both films, as were many of the same cast and crew members. The credits are shown at the beginning of the film in the style normally found in a published play.
Coppola 's craving for realism almost led to disaster during the church - burning scene. He pressed for "more fire '', and the small, controlled blaze accidentally triggered a much larger, uncontrolled fire, which a downpour fortunately doused.
The film was met with mostly positive reviews from critics. Rotten Tomatoes gives The Outsiders a 65 % "Fresh '' rating on its site. Roger Ebert awarded the film two - and - a-half out of four stars, citing problems with Coppola 's vision, "the characters wind up like pictures, framed and hanging on the screen. ''
Authors Janet Hirshenson and Jane Jenkins, in a 2007 book, wrote that the film 's realistic portrayal of poor teenagers "created a new kind of filmmaking, especially about teenagers -- a more naturalistic look at how young people talk, act, and experience the world. This movie was one of the few Hollywood offerings to deal realistically with kids from the wrong side of the tracks, and to portray honestly children whose parents had abused, neglected, or otherwise failed them. ''
The Outsiders was nominated for four Young Artist Awards, given annually since 1978 by the Young Artist Foundation. C. Thomas Howell won for "Best Young Motion Picture Actor in a Feature Film ''. Diane Lane was nominated for "Best Young Supporting Actress in a Motion Picture ''. The film was nominated for "Best Family Feature Motion Picture ''. Francis Ford Coppola was nominated for the Golden Prize at the 13th Moscow International Film Festival.
In September 2005, Coppola re-released the film on DVD, including 22 minutes of additional footage and new music, as The Outsiders: The Complete Novel. Coppola re-inserted some deleted scenes to make the film more faithful to the book. At the beginning of the film, he added scenes where Ponyboy gets stalked and jumped, the gang talks about going to the movies, Sodapop and Ponyboy talking in their room and Dally, Pony and Johnny bum around before going to the movies. In the end, Coppola added the scenes taking place in court, Mr. Syme talking to Ponyboy, and Sodapop, Ponyboy and Darry in the park. Also, much of the original score was replaced with music popular in the 1960s as well as new music composed by Michael Seifert and Dave Padrutt. The film was re-rated by the MPAA as PG - 13 for "violence, teen drinking and smoking, and some sexual references ''.
The director also removed several scenes in order to improve pacing, but they could be found on the second disc as additional scenes. In addition, Swayze, Macchio, Lane, and Howell gathered at Coppola 's estate to watch the re-release, and their commentary is included on the DVD. Dillon and Lowe provided separate commentary.
A Blu - ray edition of The Outsiders: The Complete Novel was released in Region 1 on June 3, 2014.
The original film score was composed by the director 's father, Carmine Coppola; the main theme, "Stay Gold '', was sung by Stevie Wonder. The original soundtrack included one rock song, Them 's "Gloria ''.
A television series based on the characters of the novel and film aired in 1990. It consists of a different cast playing the same characters. It picks up right after the events of the film 's ending but lasted only one season.
|
true or false pollen grains are the female gamete in the plant | Pollen - wikipedia
Pollen is a fine to coarse powdery substance comprising pollen grains which are male microgametophytes of seed plants, which produce male gametes (sperm cells). Pollen grains have a hard coat made of sporopollenin that protects the gametophytes during the process of their movement from the stamens to the pistil of flowering plants, or from the male cone to the female cone of coniferous plants. If pollen lands on a compatible pistil or female cone, it germinates, producing a pollen tube that transfers the sperm to the ovule containing the female gametophyte. Individual pollen grains are small enough to require magnification to see detail. The study of pollen is called palynology and is highly useful in paleoecology, paleontology, archaeology, and forensics.
Pollen in plants is used for transferring haploid male genetic material from the anther of a single flower to the stigma of another in cross-pollination. In a case of self - pollination, this process takes place from the anther of a flower to the stigma of the same flower.
Pollen itself is not the male gamete. Each pollen grain contains vegetative (non-reproductive) cells (only a single cell in most flowering plants but several in other seed plants) and a generative (reproductive) cell. In flowering plants the vegetative tube cell produces the pollen tube, and the generative cell divides to form the two sperm cells.
Pollen is produced in the microsporangia in the male cone of a conifer or other gymnosperm or in the anthers of an angiosperm flower. Pollen grains come in a wide variety of shapes, sizes, and surface markings characteristic of the species (see electron micrograph, right). Pollen grains of pines, firs, and spruces are winged. The smallest pollen grain, that of the forget - me - not (Myosotis spp.), is around 6 μm (0.006 mm) in diameter. Wind - borne pollen grains can be as large as about 90 -- 100 μm.
In angiosperms, during flower development the anther is composed of a mass of cells that appear undifferentiated, except for a partially differentiated dermis. As the flower develops, four groups of sporogenous cells form within the anther. The fertile sporogenous cells are surrounded by layers of sterile cells that grow into the wall of the pollen sac. Some of the cells grow into nutritive cells that supply nutrition for the microspores that form by meiotic division from the sporogenous cells.
In a process called microsporogenesis, four haploid microspores are produced from each diploid sporogenous cell (microsporocyte, pollen mother cell or meiocyte), after meiotic division. After the formation of the four microspores, which are contained by callose walls, the development of the pollen grain walls begins. The callose wall is broken down by an enzyme called callase and the freed pollen grains grow in size and develop their characteristic shape and form a resistant outer wall called the exine and an inner wall called the intine. The exine is what is preserved in the fossil record. Two basic types of microsporogenesis are recognised, simultaneous and successive. In simultaneous microsporogenesis meiotic steps I and II are completed prior to cytokinesis, whereas in successive microsporogenesis cytokinesis follows. While there may be a continuum with intermediate forms, the type of microsporogenesis has systematic significance. The predominant form amongst the monocots is successive, but there are important exceptions.
During microgametogenesis, the unicellular microspores undergo mitosis and develop into mature microgametophytes containing the gametes. In some flowering plants, germination of the pollen grain may begin even before it leaves the microsporangium, with the generative cell forming the two sperm cells.
Except in the case of some submerged aquatic plants, the mature pollen grain has a double wall. The vegetative and generative cells are surrounded by a thin delicate wall of unaltered cellulose called the endospore or intine, and a tough resistant outer cuticularized wall composed largely of sporopollenin called the exospore or exine. The exine often bears spines or warts, or is variously sculptured, and the character of the markings is often of value for identifying genus, species, or even cultivar or individual. The spines may be less than a micron in length (spinulus, plural spinuli) referred to as spinulose (scabrate), or longer than a micron (echina, echinae) referred to as echinate. Various terms also describe the sculpturing such as reticulate, a net like appearance consisting of elements (murus, muri) separated from each other by a lumen (plural lumina).
The pollen wall protects the sperm while the pollen grain is moving from the anther to the stigma; it protects the vital genetic material from drying out and solar radiation. The pollen grain surface is covered with waxes and proteins, which are held in place by structures called sculpture elements on the surface of the grain. The outer pollen wall, which prevents the pollen grain from shrinking and crushing the genetic material during desiccation, is composed of two layers. These two layers are the tectum and the foot layer, which is just above the intine. The tectum and foot layer are separated by a region called the columella, which is composed of strengthening rods. The outer wall is constructed with a resistant biopolymer called sporopollenin.
Pollen apertures are regions of the pollen wall that may involve exine thinning or a significant reduction in exine thickness. They allow shrinking and swelling of the grain caused by changes in moisture content. Elongated apertures or furrows in the pollen grain are called colpi (singular: colpus) or sulci (singular: sulcus). Apertures that are more circular are called pores. Colpi, sulci and pores are major features in the identification of classes of pollen. Pollen may be referred to as inaperturate (apertures absent) or aperturate (apertures present). The aperture may have a lid (operculum), hence is described as operculate. However, the term inaperturate covers a wide range of morphological types, such as functionally inaperturate (cryptoaperturate) and omniaperturate. Inaperturate pollen grains often have thin walls, which facilitates pollen tube germination at any position. Terms such as uniaperturate and triaperturate refer to the number of apertures present (one and three respectively).
The orientation of furrows (relative to the original tetrad of microspores) classifies the pollen as sulcate or colpate. Sulcate pollen has a furrow across the middle of what was the outer face when the pollen grain was in its tetrad. If the pollen has only a single sulcus, it is described as monosulcate, has two sulci, as bisulcate, or more, as polysulcate. Colpate pollen has furrows other than across the middle of the outer faces. Eudicots have pollen with three colpi (tricolpate) or with shapes that are evolutionarily derived from tricolpate pollen. The evolutionary trend in plants has been from monosulcate to polycolpate or polyporate pollen.
The transfer of pollen grains to the female reproductive structure (pistil in angiosperms) is called pollination. This transfer can be mediated by the wind, in which case the plant is described as anemophilous (literally wind - loving). Anemophilous plants typically produce great quantities of very lightweight pollen grains, sometimes with air - sacs. Non-flowering seed plants (e.g. pine trees) are characteristically anemophilous. Anemophilous flowering plants generally have inconspicuous flowers. Entomophilous (literally insect - loving) plants produce pollen that is relatively heavy, sticky and protein - rich, for dispersal by insect pollinators attracted to their flowers. Many insects and some mites are specialized to feed on pollen, and are called palynivores.
In non-flowering seed plants, pollen germinates in the pollen chamber, located beneath the micropyle, underneath the integuments of the ovule. A pollen tube is produced, which grows into the nucellus to provide nutrients for the developing sperm cells. Sperm cells of Pinophyta and Gnetophyta are without flagella, and are carried by the pollen tube, while those of Cycadophyta and Ginkgophyta have many flagella.
When placed on the stigma of a flowering plant, under favorable circumstances, a pollen grain puts forth a pollen tube, which grows down the tissue of the style to the ovary, and makes its way along the placenta, guided by projections or hairs, to the micropyle of an ovule. The nucleus of the tube cell has meanwhile passed into the tube, as does also the generative nucleus, which divides (if it has n't already) to form two sperm cells. The sperm cells are carried to their destination in the tip of the pollen tube. Double - strand breaks in DNA that arise during pollen tube growth appear to be efficiently repaired in the generative cell that carries the male genomic information to be passed on to the next plant generation. However, the vegetative cell that is responsible for tube elongation appears to lack this DNA repair capability.
Pollen 's sporopollenin outer sheath affords it some resistance to the rigours of the fossilisation process that destroy weaker objects; it is also produced in huge quantities. There is an extensive fossil record of pollen grains, often disassociated from their parent plant. The discipline of palynology is devoted to the study of pollen, which can be used both for biostratigraphy and to gain information about the abundance and variety of plants alive -- which can itself yield important information about paleoclimates. Also, pollen analysis has been widely used for reconstructing past changes in vegetation and their associated drivers. Pollen is first found in the fossil record in the late Devonian period and increases in abundance until the present day.
Nasal allergy to pollen is called pollinosis, and allergy specifically to grass pollen is called hay fever. Generally, pollens that cause allergies are those of anemophilous plants (whose pollen is dispersed by air currents). Such plants produce large quantities of lightweight pollen (because wind dispersal is random and the likelihood of one pollen grain landing on another flower is small), which can be carried for great distances and is easily inhaled, bringing it into contact with the sensitive nasal passages.
Pollen allergies are common in polar and temperate climate zones, where production of pollen is seasonal. In the tropics, pollen production varies less by season, and allergic reactions are less common. In northern Europe, common pollens for allergies are those of birch and alder, and in late summer wormwood and different forms of hay. Grass pollen is also associated with asthma exacerbations in some people, a phenomenon termed thunderstorm asthma.
In the US, people often mistakenly blame the conspicuous goldenrod flower for allergies. Since this plant is entomophilous (its pollen is dispersed by animals), its heavy, sticky pollen does not become independently airborne. Most late summer and fall pollen allergies are probably caused by ragweed, a widespread anemophilous plant.
Arizona was once regarded as a haven for people with pollen allergies, although several ragweed species grow in the desert. However, as suburbs grew and people began establishing irrigated lawns and gardens, more irritating species of ragweed gained a foothold and Arizona lost its claim of freedom from hay fever.
Anemophilous spring blooming plants such as oak, birch, hickory, pecan, and early summer grasses may also induce pollen allergies. Most cultivated plants with showy flowers are entomophilous and do not cause pollen allergies.
The number of people in the United States affected by hay fever is between 20 and 40 million, and such allergy has proven to be the most frequent allergic response in the nation. There is some evidence suggesting that hay fever and similar allergies are of hereditary origin. Individuals who suffer from eczema or are asthmatic tend to be more susceptible to developing long - term hay fever.
In Denmark, decades of rising temperatures have caused pollen to appear earlier and in greater numbers, as well as the introduction of new species such as ragweed.
The most efficient way to handle a pollen allergy is by preventing contact with the material. Individuals carrying the ailment may at first believe that they have a simple summer cold, but hay fever becomes more evident when the apparent cold does not disappear. The confirmation of hay fever can be obtained after examination by a general physician.
Antihistamines are effective at treating mild cases of pollinosis; non-prescription drugs of this type include loratadine, cetirizine and chlorpheniramine. They do not prevent the discharge of histamine, but it has been proven that they do prevent a part of the chain reaction activated by this biogenic amine, which considerably lowers hay fever symptoms.
Decongestants can be administered in different ways such as tablets and nasal sprays.
Allergy immunotherapy (AIT) treatment involves administering doses of allergens to accustom the body to pollen, thereby inducing specific long - term tolerance. Allergy immunotherapy can be administered orally (as sublingual tablets or sublingual drops), or by injections under the skin (subcutaneous). Discovered by Leonard Noon and John Freeman in 1911, allergy immunotherapy represents the only causative treatment for respiratory allergies.
Most major classes of predatory and parasitic arthropods contain species that eat pollen, despite the common perception that bees are the primary pollen - consuming arthropod group. Many Hymenoptera other than bees consume pollen as adults, though only a small number feed on pollen as larvae (including some ant larvae). Spiders are normally considered carnivores but pollen is an important source of food for several species, particularly for spiderlings, which catch pollen on their webs. It is not clear how spiderlings manage to eat pollen however, since their mouths are not large enough to consume pollen grains. Some predatory mites also feed on pollen, with some species being able to subsist solely on pollen, such as Euseius tularensis, which feeds on the pollen of dozens of plant species. Members of some beetle families such as Mordellidae and Melyridae feed almost exclusively on pollen as adults, while various lineages within larger families such as Curculionidae, Chrysomelidae, Cerambycidae, and Scarabaeidae are pollen specialists even though most members of their families are not (e.g., only 36 of 40000 species of ground beetles, which are typically predatory, have been shown to eat pollen -- but this is thought to be a severe underestimate as the feeding habits are only known for 1000 species). Similarly, ladybird beetles mainly eat insects, but many species also eat pollen, as either part or all of their diet. Hemiptera are mostly herbivores or omnivores but pollen feeding is known (and has only been well studied in the Anthocoridae). Many adult flies, especially Syrphidae, feed on pollen, and three UK syrphid species feed strictly on pollen (syrphids, like all flies, can not eat pollen directly due to the structure of their mouthparts, but can consume pollen contents that are dissolved in a fluid). Some species of fungus, including Fomes fomentarius, are able to break down grains of pollen as a secondary nutrition source that is particularly high in nitrogen. Pollen may be a valuable dietary supplement for detritivores, providing them with nutrients needed for growth, development and maturation. It has been suggested that obtaining nutrients from pollen, deposited on the forest floor during periods of pollen rains, allows fungi to decompose nutritionally scarce litter.
Some species of Heliconius butterflies consume pollen as adults, which appears to be a valuable nutrient source, and these species are more distasteful to predators than the non-pollen consuming species.
Although bats, butterflies and hummingbirds are not pollen eaters per se, their consumption of nectar in flowers is an important aspect of the pollination process.
Bee pollen for human consumption is marketed as a food ingredient and as a dietary supplement. The largest constituent is carbohydrates, with protein content ranging from 7 to 35 percent depending on the plant species collected by bees.
Honey produced by bees from natural sources contains pollen derived p - coumaric acid, an antioxidant and natural bactericide that is also present in a wide variety of plants and plant - derived food products.
The U.S. Food and Drug Administration (FDA) has not found any harmful effects of bee pollen consumption, except from the usual allergies. However, the FDA does not allow bee pollen marketers in the United States to make health claims about their produce, as no scientific basis for these has ever been proven. Furthermore, there are possible dangers not only from allergic reactions but also from contaminants such as pesticides and from fungal and bacterial growth related to poor storage procedures. Manufacturers ' claims that pollen collecting helps the bee colonies are also controversial.
Pine pollen (송화 가루; Songhwa Garu) is traditionally consumed in Korea as an ingredient in sweets and beverages.
The growing industries in pollen harvesting for human and bee consumption rely on harvesting pollen baskets from honey bees as they return to their hives, using a pollen trap. When this pollen has been tested for parasites, a multitude of pollinator viruses and eukaryotic parasites have been found to be present in it. It is currently unclear if the parasites are introduced by the bee that collected the pollen or if they come from contamination of the flower. Though this is not likely to pose a risk to humans, it is a major issue for the bumblebee rearing industry, which relies on thousands of tonnes of honey bee collected pollen per year. Several sterilization methods have been employed, though no method has been 100 % effective at sterilizing the pollen without reducing its nutritional value.
In forensic biology, pollen can tell a lot about where a person or object has been, because regions of the world, or even more particular locations such a certain set of bushes, will have a distinctive collection of pollen species. Pollen evidence can also reveal the season in which a particular object picked up the pollen. Pollen has been used to trace activity at mass graves in Bosnia, catch a burglar who brushed against a Hypericum bush during a crime, and has even been proposed as an additive for bullets to enable tracking them.
|
what was the cultural highlight of canada's centennial celebrations | 1967 in Canada - wikipedia
1967 is remembered as one of the most notable years in Canada. It was the centenary of Canadian Confederation and celebrations were held throughout the nation. The most prominent event was Expo 67 in Montreal, the most successful World 's Fair ever held up to that time, and one of the first events to win international acclaim for the country. The year saw the nation 's Governor General, Georges Vanier, die in office; and two prominent federal leaders, Official Opposition Leader John Diefenbaker, and Prime Minister Lester B. Pearson announced their resignations. The year 's top news - story was French President Charles de Gaulle 's "Vive le Québec libre '' speech in Montreal. The year also saw major changes in youth culture with the "hippies '' in Toronto 's Yorkville area becoming front - page news over their lifestyle choices and battles with Toronto City Council. A new honours system was announced, the Order of Canada. In sports, the Toronto Maple Leafs won their 13th and last Stanley Cup.
In mountaineering, the year saw the first ascent of the highest peak in the remote Arctic Cordillera.
The nation began to feel far more nationalistic than before, with a generation raised in a country fully detached from Britain. The new Canadian flag served as a symbol and a catalyst for this. In Quebec, the Quiet Revolution was overthrowing the oligarchy of francophone clergy and anglophone businessmen, and French Canadian pride and nationalism were becoming a national political force.
The Canadian economy was at its post-war peak, and levels of prosperity and quality of life were at all - time highs. Many of the most important elements of Canada 's welfare state were coming on line, such as Medicare and the Canada Pension Plan (CPP).
These events were coupled with the coming of age of the baby boom and the regeneration of music, literature, and art that the 1960s brought around the world. The baby boomers, who have since dominated Canada 's culture, tend to view the period as Canada 's halcyon days.
While to Montreal it was the year of Expo, to Toronto it was the culmination of the Toronto Maple Leafs dynasty of the 1960s, with the team winning its fourth Stanley Cup in six years by defeating its arch - rival, the Montreal Canadiens, in the last all - Canadian Stanley Cup Final until 1986.
Author and historian Pierre Berton famously referred to 1967 as Canada 's last good year. In his analysis, the years following saw much of 1967 's hopefulness disappear. In the early 1970s, the oil shock and other factors hammered the Canadian economy. Quebec separatism led to divisive debates, Front de libération du Québec (FLQ) terrorism, and the economic decline of Montreal. The Vietnam War and the Watergate scandal in the United States also had profound effects on Canadians. Berton reported that Toronto hockey fans also note that the Maple Leafs have not won a Stanley Cup since.
|
what do the colors of the japan flag mean | Flag of Japan - wikipedia
The national flag of Japan is a rectangular white banner bearing a crimson - red disc at its center. This flag is officially called Nisshōki (日章旗, the "sun - mark flag ''), but is more commonly known in Japan as Hi no maru (日の丸, the "circle of the sun ''). It embodies the country 's sobriquet: Land of the Rising Sun.
The Nisshōki flag is designated as the national flag in the Law Regarding the National Flag and National Anthem, which was promulgated and became effective on August 13, 1999. Although no earlier legislation had specified a national flag, the sun - disc flag had already become the de facto national flag of Japan. Two proclamations issued in 1870 by the Daijō - kan, the governmental body of the early Meiji period, each had a provision for a design of the national flag. A sun - disc flag was adopted as the national flag for merchant ships under Proclamation No. 57 of Meiji 3 (issued on February 27, 1870), and as the national flag used by the Navy under Proclamation No. 651 of Meiji 3 (issued on October 27, 1870). Use of the Hi no maru was severely restricted during the early years of the Allied occupation of Japan after World War II; these restrictions were later relaxed.
The sun plays an important role in Japanese mythology and religion as the Emperor is said to be the direct descendant of the sun goddess Amaterasu and the legitimacy of the ruling house rested on this divine appointment and descent from the chief deity of the predominant Shinto religion. The name of the country as well as the design of the flag reflect this central importance of the sun. The ancient history Shoku Nihongi says that Emperor Monmu used a flag representing the sun in his court in 701, and this is the first recorded use of a sun - motif flag in Japan. The oldest existing flag is preserved in Unpō - ji temple, Kōshū, Yamanashi, which is older than the 16th century, and an ancient legend says that the flag was given to the temple by Emperor Go - Reizei in the 11th century. During the Meiji Restoration, both the sun disc and the Rising Sun Ensign of the Imperial Japanese Navy became major symbols in the emerging Japanese Empire. Propaganda posters, textbooks, and films depicted the flag as a source of pride and patriotism. In Japanese homes, citizens were required to display the flag during national holidays, celebrations and other occasions as decreed by the government. Different tokens of devotion to Japan and its Emperor featuring the Hi no maru motif became popular during the Second Sino - Japanese War and other conflicts. These tokens ranged from slogans written on the flag to clothing items and dishes that resembled the flag.
Public perception of the national flag varies. Historically, both Western and Japanese sources claimed the flag was a powerful and enduring symbol to the Japanese. Since the end of World War II (the Pacific War), the use of the flag and the national anthem Kimigayo has been a contentious issue for Japan 's public schools. Disputes about their use have led to protests and lawsuits. The flag is not frequently displayed in Japan due to its association with ultranationalism. To Okinawans, the flag represents the events of World War II and the subsequent U.S. military presence there. For some nations that have been occupied by Japan, the flag is a symbol of aggression and imperialism. The Hi no maru was used as a tool against occupied nations for purposes of intimidation, asserting Japan 's dominance, or subjugation. Several military banners of Japan are based on the Hi no maru, including the sunrayed Naval Ensign. The Hi no maru also serves as a template for other Japanese flags in public and private use.
The exact origin of the Hinomaru is unknown, but the rising sun seems to have had some symbolic meaning since the early 7th century (the Japanese archipelago is east of the Asian mainland, and is thus where the sun "rises ''). In 607, an official correspondence that began with "from the Emperor of the rising sun '' was sent to Chinese Emperor Yang of Sui. Japan is often referred to as "the land of the rising sun ''. In the 12th - century work, The Tale of the Heike, it was written that different samurai carried drawings of the sun on their fans. One legend related to the national flag is attributed to the Buddhist priest Nichiren. Supposedly, during a 13th - century Mongolian invasion of Japan, Nichiren gave a sun banner to the shōgun to carry into battle. The sun is also closely related to the Imperial family, as legend states the imperial throne was descended from the sun goddess Amaterasu.
One of Japan 's oldest flags is housed at the Unpo - ji temple in Yamanashi Prefecture. Legend states it was given by Emperor Go - Reizei to Minamoto no Yoshimitsu and has been treated as a family treasure by the Takeda clan for the past 1,000 years; it is known, at the least, to date from before the 16th century.
The earliest recorded flags in Japan date from the unification period in the late 16th century. The flags belonged to each daimyō and were used primarily in battle. Most of the flags were long banners usually charged with the mon (family crest) of the daimyō lord. Members of the same family, such as a son, father, and brother, had different flags to carry into battle. The flags served as identification, and were displayed by soldiers on their backs and horses. Generals also had their own flags, most of which differed from soldiers ' flags due to their square shape.
In 1854, during the Tokugawa shogunate, Japanese ships were ordered to hoist the Hinomaru to distinguish themselves from foreign ships. Before then, different types of Hinomaru flags were used on vessels that were trading with the U.S. and Russia. The Hinomaru was decreed the merchant flag of Japan in 1870 and was the legal national flag from 1870 to 1885, making it the first national flag Japan adopted.
While the idea of national symbols was strange to the Japanese, the Meiji Government needed them to communicate with the outside world. This became especially important after the landing of U.S. Commodore Matthew Perry in Yokohama Bay. Further Meiji Government implementations gave more identifications to Japan, including the anthem Kimigayo and the imperial seal. In 1885, all previous laws not published in the Official Gazette of Japan were abolished. Because of this ruling by the new cabinet of Japan, the Hinomaru was the de facto national flag since no law was in place after the Meiji Restoration.
The use of the national flag grew as Japan sought to develop an empire, and the Hinomaru was present at celebrations after victories in the First Sino - Japanese and Russo - Japanese Wars. The flag was also used in war efforts throughout the country. A Japanese propaganda film in 1934 portrayed foreign national flags as incomplete or defective with their designs, while the Japanese flag is perfect in all forms. In 1937, a group of girls from Hiroshima Prefecture showed solidarity with Japanese soldiers fighting in China during the Second Sino - Japanese War, by eating "flag meals '' that consisted of an umeboshi in the middle of a bed of rice. The Hinomaru bento became the main symbol of Japan 's war mobilization and solidarity with her soldiers until the 1940s.
Japan 's early victories in the Sino - Japanese War resulted in the Hinomaru again being used for celebrations. It was seen in the hands of every Japanese during parades.
Textbooks during this period also had the Hinomaru printed with various slogans expressing devotion to the Emperor and the country. Patriotism was taught as a virtue to Japanese children. Expressions of patriotism, such as displaying the flag or worshiping the Emperor daily, were all part of being a "good Japanese. ''
The flag was a tool of Japanese imperialism in the occupied Southeast Asian areas during the Second World War: people had to use the flag, and schoolchildren sang Kimigayo in morning flag - raising ceremonies. Local flags were allowed for some areas such as the Philippines, Indonesia, and Manchukuo. In Korea, which was part of the Empire of Japan, the Hinomaru and other symbols were used to declare that the Koreans were subjects of the empire.
To the Japanese, the Hinomaru was the "Rising Sun flag that would light the darkness of the entire world. '' To Westerners, it was one of the Japanese military 's most powerful symbols.
The Hinomaru was the de facto flag throughout World War II and the occupation period. During the occupation of Japan after World War II, permission from the Supreme Commander of the Allied Powers (SCAPJ) was needed to fly the Hinomaru. Sources differ on the degree to which the use of the Hinomaru flag was restricted; some use the term "banned; '' however, while the original restrictions were severe, they did not amount to an outright ban.
After World War II, an ensign was used by Japanese civil ships of the United States Naval Shipping Control Authority for Japanese Merchant Marines. Modified from the "E '' signal code, the ensign was used from September 1945 until the U.S. occupation of Japan ceased. U.S. ships operating in Japanese waters used a modified "O '' signal flag as their ensign.
On May 2, 1947, General Douglas MacArthur lifted the restrictions on displaying the Hinomaru in the grounds of the National Diet Building, on the Imperial Palace, on the Prime Minister 's residence and on the Supreme Court building with the ratification of the new Constitution of Japan. Those restrictions were further relaxed in 1948, when people were allowed to fly the flag on national holidays. In January 1949, the restrictions were abolished and anyone could fly the Hinomaru at any time without permission. As a result, schools and homes were encouraged to fly the Hinomaru until the early 1950s.
Since World War II, Japan 's flag has been criticized for its association with the country 's militaristic past. Similar objections have also been raised to the current national anthem of Japan, Kimigayo. The feelings about the Hinomaru and Kimigayo represented a general shift from a patriotic feeling about "Dai Nippon '' -- Great Japan -- to the pacifist and anti-militarist "Nihon ''. Because of this ideological shift, the flag was used less often in Japan directly after the war even though restrictions were lifted by the SCAPJ in 1949.
As Japan began to re-establish itself diplomatically, the Hinomaru was used as a political weapon overseas. In a visit by the Emperor Hirohito and the Empress Kōjun to the Netherlands, the Hinomaru was burned by Dutch citizens who demanded that either he be sent home to Japan or tried for the deaths of Dutch prisoners of war during the Second World War. Domestically, the flag was not even used in protests against a new Status of Forces Agreement being negotiated between U.S. and Japan. The most common flag used by the trade unions and other protesters was the red flag of revolt.
An issue with the Hinomaru and national anthem was raised once again when Tokyo hosted the 1964 Summer Olympic Games. Before the Olympic Games, the size of the sun disc of the national flag was changed, partly because the sun disc was not considered striking when it was flown with other national flags. Tadamasa Fukiura, a color specialist, chose to set the sun disc at two thirds of the flag 's length. Fukiura also chose the flag colors for the 1964 Games as well as for the 1998 Winter Olympics in Nagano.
In 1989, the death of Emperor Hirohito once again raised moral issues about the national flag. Conservatives felt that if the flag could be used during the ceremonies without reopening old wounds, they might have a chance to propose that the Hinomaru become the national flag without being challenged about its meaning. During an official six - day mourning period, flags were flown at half staff or draped in black bunting all across Japan. Despite reports of protesters vandalizing the Hinomaru on the day of the Emperor 's funeral, schools ' right to fly the Japanese flag at half - staff without reservations brought success to the conservatives.
The Law Regarding the National Flag and National Anthem was passed in 1999, choosing both the Hinomaru and Kimigayo as Japan 's national symbols. The passage of the law stemmed from the suicide of Ishikawa Toshihiro, the principal of Sera High School in Sera, Hiroshima, who could not resolve a dispute between his school board and his teachers over the use of the Hinomaru and Kimigayo. The Act is one of the most controversial laws passed by the Diet since the 1992 "Law Concerning Cooperation for United Nations Peacekeeping Operations and Other Operations '', also known as the "International Peace Cooperation Law ''.
Prime Minister Keizō Obuchi of the Liberal Democratic Party (LDP) decided to draft legislation to make the Hinomaru and Kimigayo official symbols of Japan in 2000. His Chief Cabinet Secretary, Hiromu Nonaka, wanted the legislation to be completed by the 10th anniversary of Emperor Akihito 's enthronement. This is not the first time legislation was considered for establishing both symbols as official. In 1974, with the backdrop of the 1972 return of Okinawa to Japan and the 1973 oil crisis, Prime Minister Tanaka Kakuei hinted at a law being passed enshrining both symbols in the law of Japan. In addition to instructing the schools to teach and play Kimigayo, Kakuei wanted students to raise the Hinomaru flag in a ceremony every morning, and to adopt a moral curriculum based on certain elements of the Imperial Rescript on Education pronounced by the Meiji Emperor in 1890. Kakuei was unsuccessful in passing the law through the Diet that year.
Main supporters of the bill were the LDP and the Komeito (CGP), while the opposition included the Social Democratic Party (SDPJ) and the Communist Party (JCP), who cited the connotations both symbols had with the war era. The JCP further objected that the issue was not being put to the public to decide. Meanwhile, the Democratic Party of Japan (DPJ) could not develop a party consensus on it. DPJ President and future prime minister Naoto Kan stated that the DPJ must support the bill because the party already recognized both symbols as the symbols of Japan. Deputy Secretary General and future prime minister Yukio Hatoyama thought that the bill would cause further divisions among society and the public schools. Hatoyama voted for the bill while Kan voted against it.
Before the vote, there were calls for the bills to be separated at the Diet. Waseda University professor Norihiro Kato stated that Kimigayo is a separate issue more complex than the Hinomaru flag. Attempts to designate only the Hinomaru as the national flag by the DPJ and other parties during the vote of the bill were rejected by the Diet. The House of Representatives passed the bill on July 22, 1999, by a 403 to 86 vote. The legislation was sent to the House of Councilors on July 28 and was passed on August 9. It was enacted into law on August 13.
On August 8, 2009, a photograph was taken at a DPJ rally for the House of Representatives election showing a banner that was hanging from a ceiling. The banner was made of two Hinomaru flags cut and sewn together to form the shape of the DPJ logo. This infuriated the LDP and Prime Minister Tarō Asō, who said the act was unforgivable. In response, DPJ President Yukio Hatoyama (who had voted for the Law Regarding the National Flag and National Anthem) said that the banner was not the Hinomaru and should not be regarded as such.
Passed in 1870, the Prime Minister 's Proclamation No. 57 had two provisions related to the national flag. The first provision specified who flew the flag and how it was flown; the second specified how the flag was made. The ratio was seven units width and ten units length (7: 10). The red disc, which represents the sun, was calculated to be three - fifths of the hoist width. The law decreed the disc to be in the center, but it was usually placed one - hundredth (1⁄100) towards the hoist. On October 3 of the same year, regulations about the design of the merchant ensign and other naval flags were passed. For the merchant flag, the ratio was two units width and three units length (2: 3). The size of the disc remained the same; however, the sun disc was placed one - twentieth (1⁄20) towards the hoist.
When the Law Regarding the National Flag and National Anthem passed, the dimensions of the flag were slightly altered. The overall ratio of the flag was changed to two units width by three units length (2: 3). The red disc was shifted to dead center, but the overall size of the disc stayed the same. The background of the flag is white and the sun disc is red (紅色, beni iro), but the exact color shades were not defined in the 1999 law. The only hint given about the red color is that it is a "deep '' shade.
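As a rough illustration of the two construction rules described above, the short Python sketch below computes the dimensions implied for a given hoist (flag width). The function name and the interpretation of the 1⁄100 offset as a fraction of the fly length are assumptions made for this example; it is not an official specification.

```python
def hinomaru_dimensions(hoist, spec="1999"):
    """Return (fly length, disc diameter, disc offset toward the hoist).

    hoist -- the flag's width (the shorter side), in any unit.
    spec  -- "1870" for Proclamation No. 57 (7:10 ratio, disc usually shifted
             toward the hoist) or "1999" for the current law (2:3 ratio,
             disc at dead center).
    The disc diameter is three fifths of the hoist under both rules.
    """
    if spec == "1870":
        length = hoist * 10 / 7
        offset = length / 100      # assumed: 1/100 of the fly length
    elif spec == "1999":
        length = hoist * 3 / 2
        offset = 0.0
    else:
        raise ValueError("spec must be '1870' or '1999'")
    disc_diameter = hoist * 3 / 5
    return length, disc_diameter, offset


# Example: a flag with a 100 cm hoist
print(hinomaru_dimensions(100))          # (150.0, 60.0, 0.0)
print(hinomaru_dimensions(100, "1870"))  # (~142.86, 60.0, ~1.43)
```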
Specifications issued by the Japan Defense Agency (now the Ministry of Defense) in 1973 (Showa 48) list the red color of the flag as 5R 4 / 12 and the white as N9 in the Munsell color chart. The document was changed on March 21, 2008 (Heisei 20) to match the flag 's construction with current legislation and updated the Munsell colors. The document lists acrylic fiber and nylon as fibers that could be used in construction of flags used by the military. For acrylic, the red color is 5.7 R 3.7 / 15.5 and white is N9.4; nylon has 6.2 R 4 / 15.2 for red and N9.2 for white. In a document issued by the Official Development Assistance (ODA), the red color for the Hinomaru and the ODA logo is listed as DIC 156 and CMYK 0 - 100 - 90 - 0. During deliberations about the Law Regarding the National Flag and National Anthem, there was a suggestion to either use a bright red (赤色, aka iro) shade or use one from the color pool of the Japanese Industrial Standards. Flags produced in Japan often use vermilion dye.
When the Hinomaru was first introduced, the government required citizens to greet the Emperor with the flag. There was some resentment among the Japanese over the flag, resulting in some protests. It took some time for the flag to gain acceptance among the people.
During World War II in Japanese culture, it was a popular custom for friends, classmates, and relatives of a deploying soldier to sign a Hinomaru and present it to him. The flag was also used as a good luck charm and a prayer to wish the soldier back safely from battle. One term for this kind of charm is Hinomaru Yosegaki (日の丸 寄せ書き). One tradition is that no writing should touch the sun disc. After battles, these flags were often captured or later found on deceased Japanese soldiers. While these flags became souvenirs, there has been a growing trend of sending the signed flags back to the descendants of the soldier.
The tradition of signing the Hinomaru as a good luck charm still continues, though in a limited fashion. The Hinomaru Yosegaki could be shown at sporting events to give support to the Japanese national team. Another example is the hachimaki headband, which was white in color and had the red sun in the middle. During World War II, the phrases "Certain Victory '' (必勝, Hisshō) or "Seven Lives '' were written on the hachimaki and worn by kamikaze pilots. This denoted that the pilot was willing to die for his country.
Before World War II, all homes were required to display the Hinomaru on national holidays. Since the war, the display of the flag of Japan has mostly been limited to buildings attached to national and local governments, such as city halls; it is rarely seen at private homes or commercial buildings, but some people and companies have advocated displaying the flag on holidays. Although the government of Japan encourages citizens and residents to fly the Hinomaru during national holidays, they are not legally required to do so. Since the Emperor 's 80th Birthday on December 23, 2002, the Kyushu Railway Company has displayed the Hinomaru at 330 stations.
Starting in 1995, the ODA has used the Hinomaru motif in its official logo. The design itself was not created by the government (the logo was chosen from 5,000 designs submitted by the public), but the government was trying to increase the visibility of the Hinomaru through its aid packages and development programs. According to the ODA, the use of the flag is the most effective way to symbolize aid provided by the Japanese people.
According to polls conducted by mainstream media, most Japanese people had perceived the flag of Japan as the national flag even before the passage of the Law Regarding the National Flag and National Anthem in 1999. Despite this, controversies surrounding the use of the flag in school events or media still remain. For example, liberal newspapers such as Asahi Shimbun and Mainichi Shimbun often feature articles critical of the flag of Japan, reflecting their readerships ' political spectrum. To other Japanese, the flag represents the time when democracy was suppressed and Japan was an empire.
The display of the Hinomaru at homes and businesses is also debated in Japanese society. Because of the association of the Hinomaru with uyoku dantai (right wing) activists, reactionary politics, or hooliganism, some homes and businesses do not fly the flag. There is no requirement to fly the flag on any national holiday or special event. The city of Kanazawa, Ishikawa, proposed plans in September 2012 to use government funds to buy flags with the purpose of encouraging citizens to fly the flag on national holidays. The Japanese Communist Party is vocally against the flag.
Negative perceptions of the Hinomaru exist in former colonies of Japan as well as within Japan itself, such as in Okinawa. In one notable example, on October 26, 1987, an Okinawan supermarket owner burned the Hinomaru before the start of the National Sports Festival of Japan. The flag burner, Shōichi Chibana, burned the Hinomaru not only to show opposition to atrocities committed by the Japanese army and the continued presence of U.S. forces, but also to prevent it from being displayed in public. Other incidents in Okinawa included the flag being torn down during school ceremonies and students refusing to honor the flag as it was being raised to the sounds of "Kimigayo ''. In the capital city of Naha, Okinawa, the Hinomaru was raised for the first time since the return of Okinawa to Japan to celebrate the city 's 80th anniversary in 2001. In the People 's Republic of China and South Korea, both of which had been occupied by the Empire of Japan, the 1999 formal adoption of the Hinomaru was seen as a move towards the right and a step towards re-militarization. The passage of the 1999 law also coincided with debates about the status of the Yasukuni Shrine, U.S. - Japan military cooperation, and the creation of a missile defense program. In other nations that Japan occupied, the 1999 law was met with mixed reactions or glossed over. In Singapore, the older generation still harbors ill feelings toward the flag, while the younger generation does not hold similar views, having largely embraced Japanese culture through anime, games, and Uniqlo. The Philippine government believed that Japan was not going to revert to militarism and that the goal of the 1999 law was simply to formally establish two symbols (the flag and anthem) in law, something every state has a right to do. Japan has no law criminalizing the burning of the Hinomaru, but foreign flags can not be burned in Japan. Some people have humorously drawn an analogy between the Japanese flag and the appearance of a menstruation stain on a white bedsheet.
According to protocol, the flag may fly from sunrise until sunset; businesses and schools are permitted to fly the flag from opening to closing. When flying the flags of Japan and another country at the same time, the Japanese flag takes the position of honor and the flag of the guest country flies to its right. Both flags must be at the same height and of equal size. When more than one foreign flag is displayed, Japan 's flag is arranged in the alphabetical order prescribed by the United Nations. When the flag becomes unsuitable to use, it is customarily burned in private. The Law Regarding the National Flag and Anthem does not specify how the flag should be used, but different prefectures have come up with their own regulations for the use of the Hinomaru and other prefectural flags.
The Hinomaru flag has at least two mourning styles. One is to display the flag at half - staff (半旗, Han - ki), as is common in many countries. The offices of the Ministry of Foreign Affairs also hoist the flag at half - staff when a funeral is performed for a foreign nation 's head of state.
An alternative mourning style is to wrap the spherical finial with black cloth and place a black ribbon, known as a mourning flag (弔 旗, Chō - ki), above the flag. This style dates back to the death of Emperor Meiji on July 30, 1912, and the Cabinet issued an ordinance stipulating that the national flag should be raised in mourning when the Emperor dies. The Cabinet has the authority to announce the half - staffing of the national flag.
Since the end of World War II, the Ministry of Education has issued statements and regulations to promote the usage of both the Hinomaru and Kimigayo at schools under its jurisdiction. The first of these statements was released in 1950, stating that it was desirable, but not required, to use both symbols. This desire was later expanded to include both symbols on national holidays and during ceremonial events, in order to teach students the significance of national holidays and to promote defense education. In a 1989 reform of the education guidelines, the LDP - controlled government first demanded that the flag must be used in school ceremonies and that proper respect must be given to it and to Kimigayo. Punishments for school officials who did not follow this order were also enacted with the 1989 reforms.
The 1999 curriculum guideline issued by the Ministry of Education after the passage of the Law Regarding the National Flag and Anthem decrees that "on entrance and graduation ceremonies, schools must raise the flag of Japan and instruct students to sing the "Kimigayo '' (national anthem), given the significance of the flag and the song. '' Additionally, the ministry 's commentary on the 1999 curriculum guideline for elementary schools note that "given the advance of internationalization, along with fostering patriotism and awareness of being Japanese, it is important to nurture school children 's respectful attitude toward the flag of Japan and Kimigayo as they grow up to be respected Japanese citizens in an internationalized society. '' The ministry also stated that if Japanese students can not respect their own symbols, then they will not be able to respect the symbols of other nations.
Schools have been the center of controversy over both the anthem and the national flag. The Tokyo Board of Education requires the use of both the anthem and flag at events under their jurisdiction. The order requires school teachers to respect both symbols or risk losing their jobs. Some have protested that such rules violate the Constitution of Japan, but the Board has argued that since schools are government agencies, their employees have an obligation to teach their students how to be good Japanese citizens. As a sign of protest, schools refused to display the Hinomaru at school graduations and some parents ripped down the flag. Teachers have unsuccessfully brought criminal complaints against Tokyo Governor Shintarō Ishihara and senior officials for ordering teachers to honor the Hinomaru and Kimigayo. After earlier opposition, the Japan Teachers Union accepts the use of both the flag and anthem; the smaller All Japan Teachers and Staffs Union still opposes both symbols and their use inside the school system.
The Japan Self - Defense Forces (JSDF) and the Japan Ground Self - Defense Force use a version of the Rising Sun Flag with eight red rays extending outward, called Hachijō - Kyokujitsuki (八条 旭日 旗). A gold border lies partially around the edge. Because of its association with Japanese imperialism during the Second World War, the Rising Sun design is regarded in parts of East Asia in much the same way as the Nazi Hakenkreuz.
A well - known variant of the sun disc design is the sun disc with 16 red rays in a Siemens star formation, which was also historically used by Japan 's military, particularly the Imperial Japanese Army and the Imperial Japanese Navy. The ensign, known in Japanese as the Jyūrokujō - Kyokujitsu - ki (十 六 条 旭日 旗), was first adopted as the War flag on May 15, 1870, and was used until the end of World War II in 1945. It was re-adopted on June 30, 1954, and is now used as the war flag and naval ensign of the Japan Ground Self - Defense Force (JGSDF) and the Japan Maritime Self - Defense Force (JMSDF). In Korea and China, this flag still carries a negative connotation. The JMSDF also employs the use of a masthead pennant. First adopted in 1914 and readopted in 1965, the masthead pennant contains a simplified version of the naval ensign at the hoist end, with the rest of the pennant colored white. The ratio of the pennant is between 1: 40 and 1: 90.
The Japan Air Self - Defense Force (JASDF), established independently in 1952, has only the plain sun disc as its emblem. This is the only branch of service with an emblem that does not invoke the rayed Imperial Standard. However, the branch does have an ensign to fly on bases and during parades. The ensign was created in 1972, which was the third used by the JASDF since its creation. The ensign contains the emblem of the branch centered on a blue background.
Although not an official national flag, the Z signal flag played a major role in Japanese naval history. On May 27, 1905, Admiral Heihachirō Tōgō of the Mikasa was preparing to engage the Russian Baltic Fleet. Before the Battle of Tsushima began, Togo raised the Z flag on the Mikasa and engaged the Russian fleet, winning the battle for Japan. The raising of the flag conveyed the following message to the crew: "The fate of Imperial Japan hangs on this one battle; all hands will exert themselves and do their best. '' The Z flag was also raised on the aircraft carrier Akagi on the eve of Japan 's attack on Pearl Harbor, Hawaii, in December 1941.
Starting in 1870, flags were created for the Japanese Emperor (then Emperor Meiji), the Empress, and for other members of the imperial family. At first, the Emperor 's flag was ornate, with a sun resting in the center of an artistic pattern. He had flags that were used on land, at sea, and when he was in a carriage. The imperial family was also granted flags to be used at sea and while on land (one for use on foot and one carriage flag). The carriage flags were a monocolored chrysanthemum, with 16 petals, placed in the center of a monocolored background. These flags were discarded in 1889 when the Emperor decided to use the chrysanthemum on a red background as his flag. With minor changes in the color shades and proportions, the flags adopted in 1889 are still in use by the imperial family.
The current Emperor 's flag is a 16 - petal chrysanthemum, colored in gold, centered on a red background with a 2: 3 ratio. The Empress uses the same flag, except the shape is that of a swallow tail. The crown prince and the crown princess use the same flags, except with a smaller chrysanthemum and a white border in the middle of the flags. The chrysanthemum has been associated with the Imperial throne since the rule of Emperor Go - Toba in the 12th century, but it did not become the exclusive symbol of the Imperial throne until 1868.
Each of the 47 prefectures of Japan has its own flag which, like the national flag, consists of a symbol -- called a mon -- charged upon a monocolored field (except for Ehime Prefecture, where the background is bicolored). There are several prefecture flags, such as Hiroshima 's, that match their specifications to the national flag (2: 3 ratio, mon placed in the center and sized at 3⁄5 the length of the flag). Some of the mon display the name of the prefecture in Japanese characters; others are stylized depictions of the location or another special feature of the prefecture. An example of a prefectural flag is that of Nagano, where the orange katakana character ナ (na) appears in the center of a white disc. One interpretation of the mon is that the na symbol represents a mountain and the white disc, a lake. The orange color represents the sun while the white color represents the snow of the region.
Municipalities can also adopt flags of their own. The designs of the city flags are similar to the prefectural flags: a mon on a monocolored background. An example is the flag of Amakusa in Kumamoto Prefecture: the city symbol is composed of the Katakana character ア (a) and surrounded by waves. This symbol is centered on a white flag, with a ratio of 2: 3. Both the city emblem and the flag were adopted in 2006.
In addition to the flags used by the military, several other flag designs were inspired by the national flag. The former Japan Post flag consisted of the Hinomaru with a red horizontal bar placed in the center of the flag. There was also a thin white ring around the red sun. It was later replaced by a flag that consisted of the 〒 postal mark in red on a white background.
Two recently designed national flags resemble the Japanese flag. In 1971, Bangladesh gained independence from Pakistan, and it adopted a national flag that had a green background, charged with an off - centered red disc that contained a golden map of Bangladesh. The current flag, adopted in 1972, dropped the golden map and kept everything else. The Government of Bangladesh officially calls the red disc a circle; the red color symbolizes the blood that was shed to create their country. The island nation of Palau uses a flag of similar design, but the color scheme is completely different. While the Government of Palau does not cite the Japanese flag as an influence on their national flag, Japan did administer Palau from 1914 until 1944. The flag of Palau is an off - centered golden - yellow full moon on a sky blue background. The moon stands for peace and a young nation while the blue background represents Palau 's transition to self - government from 1981 to 1994, when it achieved full independence.
The Japanese naval ensign also influenced other flag designs. One such flag design is used by the Asahi Shimbun. At the bottom hoist of the flag, one quarter of the sun is displayed. The kanji character 朝 is displayed on the flag, colored white, covering most of the sun. The rays extend from the sun, occurring in a red and white alternating order, culminating in 13 total stripes. The flag is commonly seen at the National High School Baseball Championship, as the Asahi Shimbun is a main sponsor of the tournament. The rank flags and ensigns of the Imperial Japanese Navy also based their designs on the naval ensign.
Image gallery captions: Japanese flag flying at the Meiji Memorial; Japan Self - Defense Forces honor guards holding the national flag during Mike Pence 's visit; flags of Japan and other G7 states flying in Toronto; a series of Japanese flags in a school entrance; the Yokohama City flag (left) and the Hinomaru (center) flying at Yokohama Harbor; a larger flag of Japan during the 2006 FIVB Volleyball Men 's World Championship; firefighters in Tokyo holding the Japanese national flag during a ceremony.
|
around the world in 80 days summary in 250 words | Around the World in Eighty Days - wikipedia
Around the World in Eighty Days (French: Le tour du monde en quatre - vingts jours) is an adventure novel by the French writer Jules Verne, published in 1873. In the story, Phileas Fogg of London and his newly employed French valet Passepartout attempt to circumnavigate the world in 80 days on a £ 20,000 wager (£ 2,075,400 in 2017) set by his friends at the Reform Club. It is one of Verne 's most acclaimed works.
The story starts in London on Wednesday, 2 October 1872.
Phileas Fogg is a rich British gentleman living in solitude. Despite his wealth, Fogg lives a modest life with habits carried out with mathematical precision. Very little can be said about his social life other than that he is a member of the Reform Club. Having dismissed his former valet, James Forster, for bringing him shaving water at 84 ° F (29 ° C) instead of 86 ° F (30 ° C), Fogg hires Frenchman Jean Passepartout as a replacement.
At the Reform Club, Fogg gets involved in an argument over an article in The Daily Telegraph stating that with the opening of a new railway section in India, it is now possible to travel around the world in 80 days. He accepts a wager for £ 20,000 (£ 2,075,400 in 2017), half of his total fortune, from his fellow club members to complete such a journey within this time period. With Passepartout accompanying him, Fogg departs from London by train at 8: 45 p.m. on 2 October; in order to win the wager, he must return to the club by this same time on 21 December, 80 days later. They take the remaining £ 20,000 of Fogg 's fortune with them to cover expenses during the journey.
Fogg and Passepartout reach Suez in time. While disembarking in Egypt, they are watched by a Scotland Yard detective, Detective Fix, who has been dispatched from London in search of a bank robber. Since Fogg fits the vague description Scotland Yard was given of the robber, Detective Fix mistakes Fogg for the criminal. Since he can not secure a warrant in time, Fix boards the steamer (the Mongolia) conveying the travelers to Bombay. Fix becomes acquainted with Passepartout without revealing his purpose. Fogg promises the steamer engineer a large reward if he gets them to Bombay early. They dock two days ahead of schedule.
After reaching India they take a train from Bombay to Calcutta. Fogg learns that the Daily Telegraph article was wrong -- the railroad actually ends at Kholby and starts again at Allahabad, 50 miles away. Fogg purchases an elephant, hires a guide, and starts toward Allahabad.
They come across a procession in which a young Indian woman, Aouda, is to undergo sati. Since she is drugged with opium and hemp and is obviously not going voluntarily, the travelers decide to rescue her. They follow the procession to the site, where Passepartout takes the place of Aouda 's deceased husband on the funeral pyre. During the ceremony he rises from the pyre, scaring off the priests, and carries Aouda away. The twelve hours gained earlier are lost, but Fogg shows no regret.
The travelers hasten to catch the train at the next railway station, taking Aouda with them. At Calcutta, they board a steamer (the Rangoon) going to Hong Kong. Fix has Fogg and Passepartout arrested. They jump bail and Fix follows them to Hong Kong. He shows himself to Passepartout, who is delighted to again meet his travelling companion from the earlier voyage.
In Hong Kong, it turns out that Aouda 's distant relative, in whose care they had been planning to leave her, has moved to Holland, so they decide to take her with them to Europe. Still without a warrant, Fix sees Hong Kong as his last chance to arrest Fogg on British soil. Passepartout becomes convinced that Fix is a spy from the Reform Club. Fix confides in Passepartout, who does not believe a word and remains convinced that his master is not a bank robber. To prevent Passepartout from informing his master about the premature departure of their next vessel, the Carnatic, Fix gets Passepartout drunk and drugs him in an opium den. Passepartout still manages to catch the steamer to Yokohama, but neglects to inform Fogg that the steamer is leaving the evening before its scheduled departure date.
Fogg discovers that he missed his connection. He searches for a vessel that will take him to Yokohama, finding a pilot boat, the Tankadere, that takes him and Aouda to Shanghai, where they catch a steamer to Yokohama. In Yokohama, they search for Passepartout, believing that he may have arrived there on the Carnatic as originally planned. They find him in a circus, trying to earn the fare for his homeward journey. Reunited, the four board a paddle - steamer, the General Grant, taking them across the Pacific to San Francisco. Fix promises Passepartout that now, having left British soil, he will no longer try to delay Fogg 's journey, but instead support him in getting back to Britain so he can arrest Fogg in Britain itself.
In San Francisco they board a transcontinental train to New York, encountering a number of obstacles along the way: a massive herd of bison crossing the tracks, a failing suspension bridge, and the train being attacked by Sioux warriors. After uncoupling the locomotive from the carriages, Passepartout is kidnapped by the Indians, but Fogg rescues him after American soldiers volunteer to help. They continue by a wind powered sledge to Omaha, where they get a train to New York.
In New York, having missed the ship China, Fogg looks for alternative transport. He finds a steamboat, the Henrietta, destined for Bordeaux, France. The captain of the boat refuses to take the company to Liverpool, whereupon Fogg consents to be taken to Bordeaux for $2,000 ($207,540 in 2017) per passenger. He then bribes the crew to mutiny and make course for Liverpool. Against hurricane winds and going on full steam, the boat runs out of fuel after a few days. Fogg buys the boat from the captain and has the crew burn all the wooden parts to keep up the steam.
The companions arrive at Queenstown (Cobh), Ireland, take the train to Dublin and then a ferry to Liverpool, still in time to reach London before the deadline. Once on English soil, Fix produces a warrant and arrests Fogg. A short time later, the misunderstanding is cleared up -- the actual robber, an individual named James Strand, had been caught three days earlier in Edinburgh. However, Fogg has missed the train and arrives in London five minutes late, certain he has lost the wager.
The following day Fogg apologises to Aouda for bringing her with him, since he now has to live in poverty and can not support her. Aouda confesses that she loves him and asks him to marry her. As Passepartout notifies a minister, he learns that he is mistaken in the date -- it is not 22 December, but instead 21 December. Because the party had travelled eastward, their days were shortened by four minutes for each degree of longitude they crossed; thus, although they experienced the same amount of elapsed time as people in London, they saw one additional sunrise and sunset, putting them a day ahead of the actual date. Passepartout informs Fogg of his mistake, and Fogg hurries to the Reform Club just in time to meet his deadline and win the wager. Having spent almost £ 19,000 of his travel money during the journey, he divides the remainder between Passepartout and Fix and marries Aouda.
Around the World in Eighty Days was written during difficult times, both for France and for Verne. It was during the Franco - Prussian War (1870 -- 1871) in which Verne was conscripted as a coastguard; he was having financial difficulties (his previous works were not paid royalties); his father had died recently; and he had witnessed a public execution, which had disturbed him. Despite all this, Verne was excited about his work on the new book, the idea of which came to him one afternoon in a Paris café while reading a newspaper.
The technological innovations of the 19th century had opened the possibility of rapid circumnavigation, and the prospect fascinated Verne and his readership. In particular, three technological breakthroughs occurred in 1869 -- 70 that made a tourist - like around - the - world journey possible for the first time: the completion of the First Transcontinental Railroad in America (1869), the linking of the Indian railways across the sub-continent (1870), and the opening of the Suez Canal (1869). These developments marked the end of an age of exploration and the start of an age of fully global tourism that could be enjoyed in relative comfort and safety. They sparked the imagination that anyone could sit down, draw up a schedule, buy tickets and travel around the world, a feat previously reserved for only the most heroic and hardy of adventurers.
Verne is often characterized as a futurist or science - fiction author, but there is not a glimmer of science fiction in this, his most popular work (at least in English). Rather than any futurism, it remains a memorable portrait of the British Empire "on which the sun never sets '' shortly before its peak, drawn by an outsider. Until 2006, no critical editions were written, due both to the poor translations available and to the stereotypical connection between science fiction and "worthless '' boys ' literature. However, Verne 's works began receiving more serious reviews in the late 20th and early 21st centuries, with new translations appearing. The book is also a source of memorable depictions of English and broader British attitudes, in quotes such as "Phileas Fogg and Sir Francis Cromarty... endured the discomfort with true British phlegm, talking little, and scarcely able to catch a glimpse of each other, '' as well as in Chapter 12, when the group is being jostled around on the elephant ride across the jungle. In Chapter 25, when Fogg is insulted in San Francisco, Fix acknowledges that clearly "Mr. Fogg was one of those Englishmen who, while they do not tolerate dueling at home, fight abroad when their honor is attacked. ''
Post-Colonial readings of the novel elucidate Verne 's role as propagandist for European global dominance, as a Victors ' historian. "Perhaps the leading excuse for the European colonization of India was found in the Hindu practice of the suttee ''. Verne 's novel, one of the most widely read of the 19th century, played a major role in shaping European attitudes of the colonized lands.
The closing date of the novel, 21 December 1872, was the same date as the serial publication. As it was being published serially for the first time, some readers believed that the journey was actually taking place -- bets were placed, and some railway companies and ship liner companies lobbied Verne to appear in the book. It is unknown if Verne submitted to their requests, but the descriptions of some rail and shipping lines leave some suspicion he was influenced.
Although a journey by balloon has become one of the images most strongly associated with the story, this iconic symbol was never deployed by Verne -- the idea is, briefly, brought up in Chapter 32, but dismissed, as it "would have been highly risky and, in any case, impossible. '' However, the popular 1956 movie adaptation Around the World in Eighty Days used the balloon idea, and it has now become a part of the mythology of the story, even appearing on book covers. This plot element is reminiscent of Verne 's earlier Five Weeks in a Balloon, which first made him a well - known author.
Concerning the final coup de théâtre, Fogg had thought it was one day later than it actually was, because he had forgotten this simple fact: during his journey, he had added a full day to his clock, at the rate of an hour per fifteen degrees, or four minutes per degree, as Verne writes. In fact, at the time and until 1884, the concept of a de jure International Date Line did not exist. If it had, he would have been made aware of the change in date once he reached this line. Thus, the day he added to his clock throughout his journey would be removed upon crossing this imaginary line. However, in the real world, Fogg 's mistake would not have occurred because a de facto date line did exist. The UK, India and the US had the same calendar with different local times. He would have noticed, when he arrived in San Francisco, that the local date was actually one day earlier than shown in his travel diary. As a consequence, he could not fail to notice that the departure dates of the transcontinental train in San Francisco and of the China steamer in New York were actually one day earlier than shown in his personal travel diary.
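The arithmetic behind the gained day is straightforward: at the rate the text cites (an hour per fifteen degrees of longitude, or four minutes per degree), a complete eastward circuit of the globe adds up to exactly one day:

$$360^{\circ} \times 4\ \text{min}/{}^{\circ} = 1440\ \text{min} = 24\ \text{h} = 1\ \text{day}.$$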
Following publication in 1873, various people attempted to follow Fogg 's fictional circumnavigation, often within self - imposed constraints:
The idea of a trip around the world within a set period had clear external origins and was popular before Verne published his book in 1873. Even the title Around the World in Eighty Days is not original. Several sources have been hypothesized as the origins of the story.
The most obvious took place between 1869 and 1871, when American William Perry Fogg traveled the world, describing his tour in a series of letters to The Cleveland Leader newspaper, entitled, Round the World: Letters from Japan, China, India, and Egypt (1872). But long before Fogg, Greek traveller Pausanias (c. 100 AD) wrote a work that was translated into French in 1797 as Voyage autour du monde ("Around the World ''). Verne 's friend Jacques Arago had written a very popular Voyage autour du monde in 1853. In 1869 -- 70 the idea of travelling around the world reached critical popular attention when three geographical breakthroughs occurred: the completion of the First Transcontinental Railroad in America (1869), the linking of the Indian railways across the sub-continent (1870), and the opening of the Suez Canal (1869). In 1871 appeared Around the World by Steam, via Pacific Railway, published by the Union Pacific Railroad Company, and an Around the World in A Hundred and Twenty Days by Edmond Planchut. In early 1870, the Erie Railway Company published a statement of routes, times, and distances detailing a trip around the globe of 23,739 miles in seventy - seven days and twenty - one hours.
Another early reference comes from the Italian traveler Giovanni Francesco Gemelli Careri. He wrote a book in 1699 that was translated into French: Voyage around the World or Voyage du Tour du Monde (1719, Paris). The novel documents his trip as one of the first Europeans to circle the world for pleasure rather than profit, using publicly available transportation. Gemelli Careri provides rich accounts of seventeenth - century civilization outside of Europe. These include Persia during the Ottoman Empire, Hindustan during the reign of Aurungzebe, the Chinese Lantern Festival and the Great Wall, and the native people of Meso - America. References to his books can be found in other historical publications like the Calcutta Review.
In 1872, Thomas Cook organised the first around - the - world tourist trip, leaving on 20 September 1872 and returning seven months later. The journey was described in a series of letters that were published in 1873 as Letter from the Sea and from Foreign Lands, Descriptive of a tour Round the World. Scholars have pointed out similarities between Verne 's account and Cook 's letters, although some argue that Cook 's trip happened too late to influence Verne. Verne, according to a second - hand 1898 account, refers to a Cook advertisement as a source for the idea of his book. In interviews in 1894 and 1904, Verne says the source was "through reading one day in a Paris cafe '' and "due merely to a tourist advertisement seen by chance in the columns of a newspaper. '' Around the World itself says the origins were a newspaper article. All of these point to Cook 's advert as being a probable spark for the idea of the book.
The periodical Le Tour du monde (3 October 1869) contained a short piece titled "Around the World in Eighty Days '', which refers to "140 miles '' of railway not yet completed between Allahabad and Bombay, a central point in Verne 's work. But even the Le Tour du monde article was not entirely original; it cites in its bibliography the Nouvelles Annales des Voyages, de la Géographie, de l'Histoire et de l'Archéologie (August, 1869), which also contains the title Around the World in Eighty Days in its contents page. The Nouvelles Annales were written by Conrad Malte - Brun (1775 -- 1826) and his son Victor Adolphe Malte - Brun (1816 -- 1889). Scholars believe that Verne was aware of the Le Tour du monde article, the Nouvelles Annales, or both, and that he consulted one or both, noting that the Le Tour du monde even included a trip schedule very similar to Verne 's final version.
A possible inspiration was the traveller George Francis Train, who made four trips around the world, including one in 80 days in 1870. Similarities include the hiring of a private train and being imprisoned. Train later claimed, "Verne stole my thunder. I 'm Phileas Fogg. ''
Regarding the idea of gaining a day, Verne said of its origin: "I have a great number of scientific odds and ends in my head. It was thus that, when, one day in a Paris café, I read in the Siècle that a man could travel around the world in 80 days, it immediately struck me that I could profit by a difference of meridian and make my traveller gain or lose a day in his journey. There was a dénouement ready found. The story was not written until long after. I carry ideas about in my head for years -- ten, or 15 years, sometimes -- before giving them form. '' In his April 1873 lecture, "The Meridians and the Calendar '', Verne responded to a question about where the change of day actually occurred, since the international date line had only become current in 1880 and the Greenwich prime meridian was not adopted internationally until 1884. Verne cited an 1872 article in Nature, and Edgar Allan Poe 's short story "Three Sundays in a Week '' (1841), which was also based on going around the world and the difference in a day linked to a marriage at the end. Verne even analysed Poe 's story in his Edgar Poe and His Works (1864). Poe 's story "Three Sundays in a Week '' was clearly the inspiration for the lost day plot device.
The book has been adapted or reimagined many times in different forms.
|
who made the first periodic table of elements | History of the periodic table - wikipedia
The periodic table is an arrangement of the chemical elements, organized on the basis of their atomic numbers, electron configurations and recurring chemical properties. Elements are presented in order of increasing atomic number. The standard form of the table consists of a grid of elements with rows called periods and columns called groups.
The history of the periodic table reflects over a century of growth in the understanding of chemical properties. The most important event in its history occurred in 1869, when the table was published by Dmitri Mendeleev, who built upon earlier discoveries by scientists such as Antoine - Laurent de Lavoisier and John Newlands, but who is nevertheless generally given sole credit for its development.
A number of physical elements (such as platinum, mercury, tin and zinc) have been known from antiquity, as they are found in their native form and are relatively simple to mine with primitive tools. Around 330 BCE, the Greek philosopher Aristotle proposed that everything is made up of a mixture of one or more roots, an idea that had originally been suggested by the Sicilian philosopher Empedocles. The four roots, which were later renamed as elements by Plato, were earth, water, air and fire. Similar ideas about these four elements also existed in other ancient traditions, such as Indian philosophy. While Aristotle and Plato understood the concept of an element, their ideas did nothing to advance the understanding of the nature of matter.
The history of the periodic table is also a history of the discovery of the chemical elements. The first person in history to discover a new element was Hennig Brand, a bankrupt German merchant. Brand tried to discover the Philosopher 's Stone -- a mythical object that was supposed to turn inexpensive base metals into gold. In 1669 (or later), his experiments with distilled human urine resulted in the production of a glowing white substance, which he called "cold fire '' (kaltes Feuer). He kept his discovery secret until 1680, when Robert Boyle rediscovered phosphorus and published his findings. The discovery of phosphorus helped to raise the question of what it meant for a substance to be an element.
In 1661, Boyle defined an element as "those primitive and simple Bodies of which the mixt ones are said to be composed, and into which they are ultimately resolved. ''
Lavoisier 's Traité Élémentaire de Chimie (Elementary Treatise of Chemistry), which was written in 1789 and first translated into English by the writer Robert Kerr, is considered to be the first modern textbook about chemistry. Lavoisier defined an element as a substance that can not be broken down into a simpler substance by a chemical reaction. This simple definition served for a century, until the discovery of subatomic particles. Lavoisier 's book contained a list of "simple substances '' that he believed could not be broken down further, including oxygen, nitrogen, hydrogen, phosphorus, mercury, zinc and sulfur; this list formed the basis for the modern list of elements. It also included ' light ' and ' caloric ', which at the time were believed to be material substances. While many leading chemists refused to believe Lavoisier 's new revelations, the Elementary Treatise was written well enough to convince the younger generation. However, Lavoisier 's descriptions of his elements lacked completeness, as he classified them only as metals and non-metals.
In 1815, the English physician and chemist William Prout noticed that atomic weights seemed to be multiples of that of hydrogen.
In 1817, Johann Wolfgang Döbereiner, a chemist, began to formulate one of the earliest attempts to classify the elements. In 1829, he found that he could form some of the elements into groups of three, with the members of each group having related properties. He termed these groups triads.
Definition of the triad law: "Chemically analogous elements arranged in increasing order of their atomic weights formed well - marked groups of three, called triads, in which the atomic weight of the middle element was found to be generally the arithmetic mean of the atomic weights of the other two elements in the triad. ''
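To make the arithmetic-mean relationship concrete, the following minimal Python sketch (an illustration added here, using approximate modern atomic weights rather than Döbereiner's own 1829 figures) checks the rule against three classic triads; in each case the mean of the outer two weights falls close to the weight of the middle element:

```python
# A minimal sketch of Dobereiner's triad rule, using approximate modern
# atomic weights purely for illustration (not Dobereiner's own 1829 figures).
triads = {
    "Li-Na-K": (6.94, 22.99, 39.10),
    "Cl-Br-I": (35.45, 79.90, 126.90),
    "Ca-Sr-Ba": (40.08, 87.62, 137.33),
}

for name, (light, middle, heavy) in triads.items():
    predicted = (light + heavy) / 2     # arithmetic mean of the outer two members
    print(f"{name}: mean of outer elements = {predicted:.2f}, "
          f"actual middle element = {middle:.2f}")
```

Running it shows, for example, that the mean of lithium and potassium (about 23.0) sits almost exactly on sodium's 22.99.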
Alexandre - Emile Béguyer de Chancourtois, a French geologist, was the first person to notice the periodicity of the elements -- similar elements occurring at regular intervals when they are ordered by their atomic weights. In 1862 he devised an early form of periodic table, which he named Vis tellurique (the ' telluric helix '), after the element tellurium, which fell near the center of his diagram. With the elements arranged in a spiral on a cylinder by order of increasing atomic weight, de Chancourtois saw that elements with similar properties lined up vertically. His 1863 publication included a chart (which contained ions and compounds, in addition to elements), but his original paper in the Comptes Rendus de l'Académie des Sciences used geological rather than chemical terms and did not include a diagram. As a result, de Chancourtois ' ideas received little attention until after the work of Dmitri Mendeleev had been publicised.
In 1864, the English chemist John Newlands classified the sixty - two known elements into eight groups, based on their physical properties.
Newlands noted that many pairs of similar elements existed, which differed by some multiple of eight in mass number, and was the first to assign them an atomic number. When his ' law of octaves ' was printed in Chemical News, likening this periodicity of eights to the musical scale, it was ridiculed by some of his contemporaries. His lecture to the Chemical Society on 1 March 1866 was not published, the Society defending their decision by saying that such ' theoretical ' topics might be controversial.
The importance of Newlands ' analysis was eventually recognised by the Chemical Society with a Gold Medal, five years after they recognised Mendeleev 's work. It was not until the following century, with Gilbert N. Lewis 's valence bond theory (1916) and Irving Langmuir 's octet theory of chemical bonding (1919), that the importance of the periodicity of eight would be accepted. The Royal Society of Chemistry acknowledged Newlands ' contribution to science in 2008, when they put a Blue Plaque on the house where he was born, which described him as the "discoverer of the Periodic Law for the chemical elements ''.
He also contributed the word ' periodic ' to chemistry.
The Russian chemist Dmitri Mendeleev was the first scientist to make a periodic table similar to the one used today. Mendeleev arranged the elements by atomic mass, corresponding to relative molar mass. It is sometimes said that he played ' chemical solitaire ' on long train journeys, using cards with various facts about the known elements. On March 6, 1869, Mendeleev gave a formal presentation, The Dependence Between the Properties of the Atomic Weights of the Elements, to the Russian Chemical Society. In 1869, the table was published in an obscure Russian journal and then republished in a German journal, Zeitschrift für Chemie. In these publications, Mendeleev set out the core principles of his periodic system.
Unknown to Mendeleev, a German chemist, Lothar Meyer, was also working on a periodic table. Although his work was published in 1864, and was done independently of Mendeleev, few historians regard him as an equal co-creator of the periodic table. Meyer 's table only included twenty - eight elements, classified not by atomic weight but by valence, and he never reached the idea of predicting new elements and correcting atomic weights. A few months after Mendeleev published his periodic table of the known elements (predicting new elements to help complete the table and correcting some atomic weights), Meyer published a virtually identical periodic table.
Meyer and Mendeleev are considered by some historians of science to be the co-creators of the periodic table, but Mendeleev 's accurate prediction of the qualities of the undiscovered elements earns him the larger share of the credit.
In 1864, the English chemist William Odling also drew up a table that was remarkably similar to the table produced by Mendeleev. Odling overcame the tellurium - iodine problem and even managed to get thallium, lead, mercury and platinum into the right groups, which is something that Mendeleev failed to do at his first attempt. Odling failed to achieve recognition, however, since it is suspected that he, as Secretary of the Chemical Society of London, was instrumental in discrediting Newlands ' earlier work on the periodic table.
By 1912 almost 50 different radioactive elements had been found, too many for the periodic table. Frederick Soddy in 1913 found that although they emitted different radiation, many of these elements were alike in their chemical characteristics, and so shared the same place on the table. They became known as isotopes, from the Greek isos topos ("same place '').
In 1914, a year before he was killed in action at Gallipoli, the English physicist Henry Moseley found a relationship between the X-ray wavelength of an element and its atomic number. He was then able to re-sequence the periodic table by nuclear charge, rather than by atomic weight. Before this discovery, atomic numbers were simply sequential positions assigned from an element 's place in the atomic - weight ordering. Moseley 's discovery showed that atomic numbers were not arbitrary, but rested on an experimentally measurable quantity: the nuclear charge.
Using information about their X-ray wavelengths, Moseley placed argon (with an atomic number Z = 18) before potassium (Z = 19), despite the fact that argon 's atomic weight of 39.9 is greater than the atomic weight of potassium (39.1). The new order was in agreement with the chemical properties of these elements, since argon is a noble gas and potassium is an alkali metal. Similarly, Moseley placed cobalt before nickel and was able to explain that tellurium occurs before iodine, without revising the experimental atomic weight of tellurium, as had been proposed by Mendeleev.
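The effect of re-sequencing by nuclear charge can be illustrated with a small Python sketch (added here for illustration, using approximate modern atomic weights): sorting these disputed pairs by weight scrambles the chemically sensible order, while sorting by atomic number restores it.

```python
# Ordering a few elements by atomic weight versus by atomic number (nuclear
# charge). Atomic weights are approximate modern values, shown for illustration.
elements = [
    ("Ar", 18, 39.95), ("K", 19, 39.10),    # argon is heavier than potassium
    ("Co", 27, 58.93), ("Ni", 28, 58.69),   # cobalt is heavier than nickel
    ("Te", 52, 127.60), ("I", 53, 126.90),  # tellurium is heavier than iodine
]

by_weight = [sym for sym, _, wt in sorted(elements, key=lambda e: e[2])]
by_number = [sym for sym, z, _ in sorted(elements, key=lambda e: e[1])]

print("by atomic weight:", by_weight)   # ['K', 'Ar', 'Ni', 'Co', 'I', 'Te']
print("by atomic number:", by_number)   # ['Ar', 'K', 'Co', 'Ni', 'Te', 'I']
```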
Moseley 's research showed that there were gaps in the periodic table at atomic numbers 43 and 61, which are now known to be occupied by technetium and promethium respectively.
During his Manhattan Project research in 1943, Glenn T. Seaborg experienced unexpected difficulties in isolating the elements americium and curium. Seaborg wondered if these elements belonged to a different series, which would explain why their chemical properties were different from what was expected. In 1945, against the advice of colleagues, he proposed a significant change to Mendeleev 's table: the actinide series.
Seaborg 's actinide concept of heavy element electronic structure, predicting that the actinides form a transition series analogous to the rare earth series of lanthanide elements, is now well accepted and included in the periodic table. The actinide series is the second row of the f - block (5f series). In both the actinide and lanthanide series, an inner electron shell is being filled. The actinide series comprises the elements from actinium to lawrencium. Seaborg 's subsequent elaborations of the actinide concept theorized a series of superheavy elements in a transactinide series comprising elements from 104 to 121 and a superactinide series of elements from 122 to 153.
|
were any of the nuns in sister act real nuns | Sister Act - wikipedia
Sister Act is a 1992 American comedy film directed by Emile Ardolino and written by Joseph Howard, with musical arrangements by Marc Shaiman. It stars Whoopi Goldberg as a lounge singer forced to join a convent after being placed in a witness protection program. It also features Maggie Smith, Kathy Najimy, Wendy Makkena, Mary Wickes, and Harvey Keitel.
Sister Act was one of the most financially successful comedies of the early 1990s, grossing $231 million worldwide. The film spawned a franchise, which consists of a 1993 sequel, Sister Act 2: Back in the Habit, and a musical adaptation, which premiered in 2006. A remake of Sister Act is in the works.
In the film 's prologue, Deloris Wilson is a young student at a Catholic school; she seems more focused on music groups and being the class clown than her studies. As an adult, she is a lounge singer at a nightclub in Reno, Nevada, performing under the name Deloris Van Cartier. One night, she witnesses her mobster boyfriend Vince LaRocca have his chauffeur executed for informing to the police. Lieutenant Eddie Souther persuades her to go into a witness protection program, with Saint Katherine 's Parish, a convent in a run - down San Francisco neighborhood, as her safe house. The stoic Reverend Mother finds Deloris uncouth, but Monsignor O'Hara, the neighborhood priest, persuades the Reverend Mother to accept her.
Deloris, given the name Sister Mary Clarence as part of her cover, struggles to adapt to austere convent life. However, she befriends several of the nuns, including jolly Sister Mary Patrick, meek Sister Mary Robert, and the elderly deadpan Sister Mary Lazarus, who works as choir director. After Mary Clarence is chastised for sneaking out to a bar, the Reverend Mother assigns her to join the convent choir, who are known to be dreadful, to keep her out of trouble. On her first day, Mary Clarence is elected by Mary Lazarus to become the choir director, after its members learn she has a background in music.
With Sister Mary Lazarus ' help, she rearranges the choir and trains them to become better singers. When the choir perform at Mass one Sunday, they sing "Hail Holy Queen '' beautifully in a traditional manner, before shifting into a gospel and rock - and - roll - infused interpretation of the hymn. Although the Reverend Mother is infuriated, O'Hara congratulates the choir 's unorthodox performance for bringing people, including teenagers, in off the streets and into the church. This leads Mary Clarence to convince him to have the nuns head outside and clean up the neighborhood. The choir continue to amaze parishioners and visitors with their music, including a performance of "My Guy '' (appropriately rewritten as "My God ''), and soon help to transform the neighborhood. The resulting publicity, however, annoys Souther, since national television coverage of the nuns ' work nearly exposes Deloris ' location.
Eventually, the convent learns from O'Hara that Pope John Paul II is to stop by the church to see the choir himself, as part of his visit to the United States. Believing herself to be no longer needed, and feeling that Mary Clarence 's work has unintentionally undermined her authority, the Reverend Mother decides to hand in her resignation. This shocks Deloris, who has just learned that she will have to leave soon, as Vince 's trial draws closer. Souther soon arrests a police detective within his own department, upon discovering he was on Vince 's payroll and had uncovered information on Deloris ' location. Heading to San Francisco to warn Deloris that her cover is blown, Souther arrives just as Vince 's men abduct her.
When the nuns learn of the kidnapping, the Reverend Mother reveals the truth of Mary Clarence 's real identity. Upon hearing that the nuns feel at a loss without her, she decides to take them with her, at the risk of their lives, to save Deloris. Arriving at Vince 's casino, the group search for Deloris and find her after she manages to escape from Vince and his men once again. The group quickly attempt to confuse the mobsters while sneaking Deloris out, but wind up trapped in the casino lounge. Not wishing to risk the group 's lives, Deloris prepares to sacrifice herself, and even Vince and his men have difficulty bringing themselves to shoot Deloris while she is in a nun 's habit. The delay is long enough for the police, led by Souther, to arrive and arrest Vince and his men. Despite her annoyance at the risks and upheaval Deloris brought to the convent, the Reverend Mother thanks her for what she has done and decides to remain at the convent to continue her work. Returning to San Francisco, the choir, led by Deloris, sing "I Will Follow Him '' to a packed audience in the refurbished Saint Katherine 's, receiving a standing ovation from all, including the Pope.
Screenwriter Paul Rudnick pitched Sister Act to producer Scott Rudin in 1987, with Bette Midler in mind for the lead role. The script was then brought to Disney. However, Midler later turned down the role, fearing that her fans would not want to see her play a nun. Eventually, Whoopi Goldberg signed on to play the lead. As production commenced, the script was rewritten by a half dozen screenwriters, including Carrie Fisher, Robert Harling, and Nancy Meyers. With the movie no longer resembling his original script, Rudnick asked to be credited with a pseudonym in the film, deciding on "Joseph Howard. ''
The church in which Deloris takes sanctuary is St. Paul 's Catholic Church, located at Valley and Church Streets in Noe Valley, an upper - middle - class neighborhood of San Francisco. The storefronts on the opposite side of the street were converted to give the area a ghetto look.
Though the order of the nuns in the film is hinted at being a Carmelite one by Sister Mary Patrick, their religious habit is similar in appearance to that of the Sisters of St. Joseph of the Third Order of St. Francis (minus the cross). Members of the real - life Order, however, no longer wear their traditional habit.
The film 's soundtrack was released by Hollywood Records on June 9, 1992 in conjunction with the film, and contained the musical numbers performed by actors in the film itself, pre-recorded songs that were used as part of the background music, and instrumental music composed by Marc Shaiman for the film. The soundtrack album debuted at # 74 and eventually reached # 40 on the Billboard Top 200 Albums Chart where it charted for 54 weeks. The album received a Gold certification from the RIAA for shipment of 500,000 copies on January 13, 1993.
The film received a generally positive reception from critics, holding a 71 % rating on Rotten Tomatoes based on 24 reviews.
The film received two Golden Globe nominations, and it was also recognized by the American Film Institute.
The film was a box office success, grossing $139,605,150 domestically and $92,000,000 in foreign countries, effectively grossing $231,605,150 worldwide, becoming the eighth - highest - grossing film worldwide in 1992. It sat at the # 2 spot for four weeks, behind Lethal Weapon 3, Patriot Games and Batman Returns in succession.
On June 10, 1993, actress Donna Douglas and her partner Curt Wilson in Associated Artists Entertainment, Inc., filed a $200 million lawsuit against Disney, Whoopi Goldberg, Bette Midler, their production companies, and Creative Artists Agency claiming the film was plagiarized from a book A Nun in the Closet owned by the partners. Douglas and Wilson claimed that in 1985 they had developed a screenplay for the book. The lawsuit claimed that there were over 100 similarities and plagiarisms between the movie and the book / screenplay owned by Douglas and Wilson. The lawsuit further claimed that the developed screenplay had been submitted to Disney, Goldberg, and Midler three times during 1987 and 1988.
In 1994, Douglas and Wilson declined a $1 million offer in an attempt to win the case. The judge found in favor of Disney and the other defendants. Wilson stated at the time, "They would have had to copy our stuff verbatim for us to prevail. ''
In November 2011, a nun named Queen Mother Dr. Delois Blakely filed a lawsuit against the Walt Disney Company and Sony Pictures claiming that "The Harlem Street Nun, '' an autobiography she wrote in 1987, was the basis for the 1992 film. She alleged that a movie executive expressed an interest in the rights to the movie after she wrote a three - page synopsis. She sued for "breach of contract, misappropriation of likeness and unjust enrichment. '' Blakely dropped the original lawsuit in January 2012 in order to serve a more robust lawsuit in late August 2012 with the New York Supreme Court, asking for $1 billion in damages from Disney. In early February 2013, the New York Supreme Court dismissed the lawsuit with prejudice, awarding no damages to Blakely.
The Region 1 DVD was released on November 6, 2001; however, the disc has no anamorphic enhancement, similar to early DVDs from Buena Vista. Special Features include the film 's theatrical trailer; music videos for "I Will Follow Him '' by Deloris and the Sisters, and "If My Sister 's in Trouble '' by Lady Soul, both of which contain clips from the film; and a featurette titled "Inside Sister Act: The Making Of ''.
The all - region Blu - ray including both films was released on June 19, 2012, with both films presented in 1080p. The 3 - disc set also includes both films on DVD with the same bonus features as previous releases.
The musical Sister Act, directed by Peter Schneider and choreographed by Marguerite Derricks, premiered at the Pasadena Playhouse in Pasadena, California on October 24, 2006 and closed on December 23, 2006. It broke records, grossing $1,085,929 to become the highest grossing show ever at the venue. The production then moved to the Alliance Theatre in Atlanta, Georgia, where it ran from January 17 to February 25, 2007.
The musical then opened in the West End at the London Palladium on June 2, 2009, following previews from May 7. The production was directed by Peter Schneider, produced by Whoopi Goldberg together with the Dutch company Stage Entertainment, and choreographed by Anthony Van Laast, with set design by Klara Zieglerova, costume design by Lez Brotherston and lighting design by Natasha Katz. Following a year - long search, 24 - year - old actress Patina Miller was cast as Deloris, alongside Sheila Hancock as the Mother Superior, Ian Lavender as Monsignor Howard, Chris Jarman as Shank, Ako Mitchell as Eddie, Katie Rowley Jones as Sister Mary Robert, Claire Greenway as Sister Mary Patrick and Julia Sutton as Sister Mary Lazarus. The musical received four Laurence Olivier Awards nominations including Best Musical. On October 30, 2010 the show played its final performance at the London Palladium and transferred to Broadway.
The musical opened at the Broadway Theatre on April 20, 2011, with previews beginning March 24, 2011. Jerry Zaks directed the Broadway production with Douglas Carter Beane rewriting the book. Patina Miller, who originated the role of Deloris in the West End production, reprised her role, making her Broadway debut. She was later replaced by Raven - Symoné, also making her Broadway debut. The Original Broadway cast featured Victoria Clark (Mother Superior), Fred Applegate (Monsignor), Sarah Bolt (Sister Mary Patrick), Chester Gregory (Eddie), Kingsley Leggs (Curtis), Marla Mindelle (Sister Mary Robert) and Audrie Neenan (Sister Mary Lazarus). The musical received five Tony Award nominations including Best Musical.
The musical closed in August 2012 after playing 561 performances.
|
fall of the house of usher vincent price | House of Usher (film) - wikipedia
House of Usher (also known as The Fall of the House of Usher and The Mysterious House of Usher) is a 1960 American horror film directed by Roger Corman and written by Richard Matheson from the short story "The Fall of the House of Usher '' by Edgar Allan Poe. The film was the first of eight Corman / Poe feature films and stars Vincent Price, Myrna Fahey, Mark Damon and Harry Ellerbe.
In 2005, the film was listed with the United States National Film Registry as being deemed "culturally, historically, or aesthetically significant. '' Versions exist on DVD with running times between 76 and 80 minutes.
Philip Winthrop (Mark Damon) travels to the House of Usher, a desolate mansion surrounded by a murky swamp, to meet his fiancée Madeline Usher (Myrna Fahey). Madeline 's brother Roderick (Vincent Price) opposes Philip 's intentions, telling the young man that the Usher family is afflicted by a cursed bloodline which has driven all their ancestors to madness. Roderick foresees the family evils being propagated into future generations with a marriage to Madeline and vehemently discourages the union. Philip becomes increasingly desperate to take Madeline away; she agrees to leave with him, desperate to get away from her brother.
During a heated argument with her brother, Madeline suddenly dies and is laid to rest in the family crypt beneath the house. As Philip is preparing to leave following the entombment, the butler, Bristol (Harry Ellerbe), lets slip that Madeline suffered from catalepsy, a condition which can make its sufferers appear dead.
Philip rips open Madeline 's coffin and finds it empty. He desperately searches for her in the winding passages of the crypt but she eludes him and confronts her brother. Now completely insane, Madeline avenges herself upon the brother who knowingly buried her alive. Both die as a fire breaks out, ending the Usher bloodline, and Philip escapes and watches the burning house sink into the swampy land surrounding it. The film ends with the final words of Poe 's story: "... and the deep and dank tarn closed sullenly and silently over the fragments of the ' House of Usher ' ''.
The film was important in the history of American International Pictures, which up until then had specialized in making low - budget black - and - white films to go out on double bills. The market for this kind of movie was in decline, so AIP decided to gamble on making a larger - budgeted film in colour.
The film was announced in February 1959 and was dubbed the company 's "most ambitious film to date ''.
A number of other companies announced Poe projects around this time: Alex Gordon had a version of Masque of the Red Death, Fox had Murders in the Rue Morgue, Benedict Bogeaus The Gold Bug, and Universal The Raven.
It was shot in fifteen days.
In February 2011, Intrada released the world - premiere edition of Les Baxter 's score, taken from the mono music - only elements.
Eugene Archer, in the September 15, 1960 edition of The New York Times wrote, "American - International, with good intentions of presenting a faithful adaption of Edgar Allan Poe 's classic tale of the macabre... blithely ignored the author 's style. Poe 's prose style, as notable for ellipsis as imagery, compressed or eliminated the expository passages habitual to nineteenth - century fiction and invited the readers ' imaginations to participate. By studiously avoiding explanations not provided by the text, and stultifying the audiences ' imaginations by turning Poe 's murky mansion into a cardboard castle encircled by literal green mist, the film producers have made a horror film that provides a fair degree of literacy at the cost of a patron 's patience. '' He further opined, "Under the low - budget circumstances, Vincent Price and Myrna Fahey should not be blamed for portraying the decadent Ushers with arch affectation, nor Mark Damon held to account for the traces of Brooklynese that creep into his stiffly costumed impersonation of the mystified interloper. ''
Other reviewers have been kinder, however; House of Usher is now regarded as a high point in Corman 's filmography, with an 89 % "fresh '' rating on Rotten Tomatoes.
The film differs from Poe 's short story in significant ways.
|
how did they build the bridge at hoover dam | Mike O'Callaghan -- Pat Tillman Memorial bridge - wikipedia
The Mike O'Callaghan -- Pat Tillman Memorial Bridge is an arch bridge in the United States that spans the Colorado River between the states of Arizona and Nevada. The bridge is located within the Lake Mead National Recreation Area approximately 30 miles (48 km) southeast of Las Vegas, and carries U.S. Route 93 over the Colorado River. Opened in 2010, it was the key component of the Hoover Dam Bypass project, which rerouted US 93 from its previous routing along the top of Hoover Dam and removed several hairpin turns and blind curves from the route. It is jointly named for Mike O'Callaghan, Governor of Nevada from 1971 -- 1979, and Pat Tillman, an American football player who left his career with the Arizona Cardinals to enlist in the United States Army and was later killed in Afghanistan by friendly fire.
As early as the 1960s, officials identified the US 93 route over Hoover Dam to be dangerous and inadequate for projected traffic volumes. From 1998 -- 2001, officials from Arizona, Nevada, and several federal government agencies collaborated to determine the best routing for an alternative river crossing. In March 2001, the Federal Highway Administration selected the route, which crosses the Colorado River approximately 1,500 feet (460 m) downstream of Hoover Dam. Construction of the bridge approaches began in 2003, and construction of the bridge itself began in February 2005. The bridge was completed in 2010 and the entire bypass route opened to vehicle traffic on October 19, 2010. The Hoover Dam Bypass project was completed within budget at a cost of $240 million; the bridge portion cost $114 million.
The bridge was the first concrete - steel composite arch bridge built in the United States, and incorporates the widest concrete arch in the Western Hemisphere. At 890 feet (270 m) above the Colorado River, it is the second highest bridge in the United States after the Royal Gorge Bridge, and is the world 's highest concrete arch bridge.
Upon the completion of the Boulder City Bypass, the bridge became part of Interstate 11, concurrent with US 93.
In 1935, the American Association of State Highway Officials (AASHO, later AASHTO) authorized a southward extension of U.S. Route 93 from its previous southern terminus in Glendale, Nevada to Kingman, Arizona by way of Las Vegas and Boulder City, crossing the Colorado River on the newly constructed Hoover Dam (also known then as Boulder Dam). Clark County was sparsely populated at the time, with a population of less than 9,000 at the 1930 U.S. Census (compared to an estimated 2 million in 2013). Development in and around Las Vegas in the latter half of the 20th century made Las Vegas and its surrounding area a tourist attraction, and US 93 became an important transportation corridor for passenger and commercial traffic between Las Vegas and Phoenix. In 1995, the portion of US 93 over Hoover Dam was included as part of the CANAMEX Corridor, a high - priority transportation corridor established under the North American Free Trade Agreement (NAFTA). This bridge is a key component of the proposed Interstate 11 project.
Through traffic on US 93 combined with pedestrian and tourist traffic at Hoover Dam itself led to major traffic congestion on the dam and on the approaches to the dam. The approaches featured hairpin turns on both the Nevada and Arizona sides of the dam, and the terrain caused limited sight distances around curves. In addition to traffic safety considerations, officials were also concerned about the safety and security of Hoover Dam, specifically the impact a vehicle accident could have on the dam 's operation and the waters of Lake Mead. Officials first discussed the need for a new Colorado River crossing that would bypass the dam in the 1960s. The U.S. Bureau of Reclamation, which operates the dam, began work on the "Colorado River Bridge Project '' in 1989, but the project was put on hold in 1995. In 1997 the Federal Highway Administration took over the project and released a draft environmental impact statement in 1998. From 1998 -- 2001 state officials from Arizona and Nevada as well as several federal government agencies studied the feasibility of several alternative routes and river crossings, as well as the feasibility of modifying the roadway over the dam, restricting traffic over the dam, or doing nothing.
In March 2001 the Federal Highway Administration issued a Record of Decision indicating its selection of the "Sugarloaf Mountain Alternative '' routing. The project called for approximately 2.2 miles (3.5 km) of highway in Nevada, 1.1 miles (1.8 km) of highway in Arizona, and a 1,900 - foot (580 m) bridge that would cross the river 1,500 feet (460 m) downstream (south) of Hoover Dam. Design work began in July 2001.
Security measures implemented following the September 11 attacks prohibited commercial truck traffic from driving across Hoover Dam. Prior to the completion of the bridge, through commercial vehicles were required to follow a 104 - mile (167 km) detour route between Boulder City and Kingman via US 95, Nevada State Route 163, the Colorado River crossing between Laughlin, Nevada and Bullhead City, Arizona, and Arizona State Route 68; the detour added 23 miles (37 km) to the normal journey on US 93.
Project design was by the Hoover Support team, led by HDR, Inc. and including T.Y. Lin International, Sverdrup Civil, Inc., and other specialist contributors.
The bridge has a length of 1,900 feet (579 m) and a 1,060 ft (320 m) span. The roadway is 900 ft (270 m) above the Colorado River and four lanes wide. This is the first concrete - and - steel composite arch bridge built in the United States. It includes the widest concrete arch in the Western Hemisphere and is also the second highest bridge in the nation, with the arch 840 ft (260 m) above the river. The twin arch ribs are connected by steel struts.
The composite design, using concrete for the arch and columns with steel construction for the roadway deck, was selected for schedule and cost control while being aesthetically compatible with the Hoover Dam. Sean Holstege in The Arizona Republic has called the bridge "an American triumph ''. USA Today called it "America 's Newest Wonder '' on October 18, 2010.
Pedestrian access is provided over the bridge to tourists who wish to take in a different view of the nearby dam and river below, but the dam is not visible for those driving across it. A parking area is provided near the bridge on the Nevada side at what was a staging area during construction. A set of stairs and disabled access ramps lead to the sidewalk across the bridge.
Work began in 2003 on the approaches in both states and the construction contract for the arch bridge was awarded in October 2004. The largest obstacle to the project was the river crossing. The bridge and the bypass were constructed by a consortium of different government agencies and contractors, among them the Federal Highway Administration, the Arizona Department of Transportation, and Nevada Department of Transportation, with RE Monks Construction and Vastco, Inc, constructing the Arizona Approach, Edward Kraemer & Sons, Inc, the Nevada Approach and Las Vegas Paving Corporation undertaking the roadway surfacing on both approaches. The bridge itself was built by Obayashi Corporation and PSM Construction USA, Inc., while Frehner Construction Company, Inc. was responsible for completing the final roadway installations. A permit problem between Clark County and the subcontractor Casino Ready Mix arose in May 2006 over the operation of a concrete - batch plant for the project, and this caused a four - month delay.
Construction required hoisting workers and up to 50 short tons (45 t) of materials 890 feet (270 m) above the Colorado River using 2,300 ft (700 m) - long steel cables held aloft by a "high - line '' crane system. High winds caused a cableway failure in September 2006, resulting in a further two - year delay. The approach spans, consisting of seven pairs of concrete columns -- five on the Nevada side and two on the Arizona side -- were completed in March 2008. In November 2008, construction worker Sherman Jones died in an accident.
The arches are made of 106 pieces -- 53 per arch -- mostly 24 ft (7.3 m) cast in place sections. The arch was constructed from both sides of the bridge concurrently, supported by diagonal cable stays strung from temporary towers. The twin arch spans were completed with the casting of the center segments in August 2009; when the two halves met, they were only 3⁄8 inch (9.5 mm) apart, and the gap was filled with a block of reinforced concrete. The temporary cable stays were removed, leaving the arch self - supporting. By December, all eight of the vertical piers on the arch had been set and capped, and at the end of the month the first two of thirty - six 50 - short - ton (45 t) steel girders had been set into place.
By mid-April 2010, all of the girders were set in place, and for the first time construction crews could walk across the structure from Arizona to Nevada. Shortly thereafter, the pouring of the bridge deck began. The bridge deck was fully paved in July, and the high - line cranes were removed from the site as the overall project neared completion. The bridge was completed with a dedication ceremony on October 14, 2010, and a grand opening party on October 16. It was opened to bicycle and pedestrian traffic on October 18 and to vehicular traffic on October 19, a few weeks earlier than estimated. The building of the bridge was featured in episode 5x02 of the TV series Extreme Engineering. The filming of this episode took place before the start of work on the arch.
When the bridge opened to traffic, the roadway over Hoover Dam was closed to through traffic, and all visitor access to the dam was routed to the Nevada side; vehicles are still allowed to drive across the dam to the Arizona side following a security inspection, but must return to the Nevada side to return to US 93. The former US 93 route between the dam and its junction with the present US 93 route has been re-designated as Nevada State Route 172.
In late 2004, the proposed bridge name honoring Mike O'Callaghan and Pat Tillman was announced at a ceremony by the Governor of Nevada, Kenny Guinn, and the Governor of Arizona, Janet Napolitano. O'Callaghan, a decorated Korean War veteran, was the Governor of Nevada from 1971 through 1979, and he was the executive editor at the Las Vegas Sun newspaper for many years until his death on March 5, 2004. Tillman had been a football player for the Arizona State University and for the Arizona Cardinals. He gave up his multimillion - dollar career in the National Football League to enlist as an infantryman in the U.S. Army, but he was killed by friendly fire in Afghanistan on April 22, 2004.
Strong winds gusting across the Black Canyon on September 15, 2006, appear to have been the cause for the collapse of the "high - line '' crane system that was used to carry workmen and materials at the bridge site. No injuries or fatalities occurred because of this accident. Limited construction work resumed in October 2006, but this accident caused a two - year delay in construction.
The bridge - construction companies Obayashi Corp. and PSM Construction, USA, Inc. absorbed the cost of the debris removal and the rebuilding of the cranes. The reconstruction contract for the cranes was awarded to Cincinnati 's F&M Mafco Inc.
Work was also halted when a Las Vegas construction worker, 48 - year - old Sherman Jones, was killed while adjusting a cable used to align temporary concrete towers; a jack punctured his chest.
The first known suicide at the bridge took place on April 7, 2012. Federal officials were unable to persuade the victim not to jump from the pedestrian walkway overlooking the dam. Others have happened since. Recent known suicides at the bridge include a woman who jumped on January 10, 2014, becoming the 7th suicide according to the Las Vegas Review Journal (LVRJ). The eighth suicide victim was a man who jumped on April 15, 2014 (LVRJ). Representatives of the Nevada Department of Transportation "are constantly monitoring the situation, '' and were "planning to discuss potential preventative measures on the bypass bridge '' at their August or September 2012 meeting.
|
when is war of the apes coming out | War for the Planet of the Apes - wikipedia
War for the Planet of the Apes is a 2017 American dystopian science fiction film directed by Matt Reeves and written by Mark Bomback and Reeves. A sequel to Rise of the Planet of the Apes (2011) and Dawn of the Planet of the Apes (2014), it is the third installment in the Planet of the Apes reboot series. The film stars Andy Serkis, Woody Harrelson and Steve Zahn, and follows a confrontation between the apes, led by Caesar, and the humans for control of Earth. Like its predecessor, its premise shares several similarities to the fifth film in the original series, Battle for the Planet of the Apes, but it is not a direct remake.
Principal photography began on October 14, 2015, in Vancouver, British Columbia, Canada. War for the Planet of the Apes premiered in New York City on July 10, 2017, and was theatrically released in the United States on July 14, 2017, by 20th Century Fox. The film has grossed over $490 million and received critical praise, with many reviewers highlighting the acting (particularly Serkis), visual effects, story, musical score and direction.
Two years after the U.S. military was called to fight off an increasingly intelligent and dangerous tribe of apes, starting a devastating war between the two species, the apes ' clan, led by the chimpanzee Caesar, are attacked in the woods by a rogue paramilitary faction known as Alpha - Omega, led by a mysterious Colonel. Alpha - Omega has in its service apes (called "donkeys '') that had previously followed Koba, a vengeful, human - hating bonobo who led a failed coup against Caesar and started the war after leading a vengeful attack against human survivors in San Francisco.
During Alpha - Omega 's attack, the human soldiers are met by heavy ape resistance. Four soldiers and a "donkey '' gorilla named Red are captured by the apes. Caesar arrives and releases the human soldiers, telling them to deliver a message to the Colonel: that he killed Koba for starting the war and that he desires peace between humans and apes. Caesar then orders Red to be imprisoned, but Red escapes, injuring the albino gorilla Winter.
Soon after, Caesar 's son, Blue Eyes, and his lieutenant Rocket return from a journey to find a safe haven for the apes. They report that they have found a place across the desert that is perfect for the clan. Winter, still frightened from the soldiers ' attack, wants to leave immediately, but Caesar does not think they are prepared to leave so soon. That night, a group of Alpha - Omega soldiers, led by the Colonel, infiltrates the apes ' home behind a waterfall. The apes discover their presence and kill them all except the Colonel, whom Caesar encounters preparing to escape. Discovering that the Colonel has killed his wife Cornelia along with Blue Eyes, an enraged Caesar lunges at the Colonel but fails to prevent him from escaping out of the waterfall. Upon discovering that Winter has disappeared, Luca, a gorilla and Caesar 's secondary lieutenant, believes that he has betrayed them out of cowardice.
Leaving his younger son Cornelius in the care of Blue Eyes ' wife Lake, Caesar departs to exact revenge on the Colonel for the deaths of his wife and oldest son. He is accompanied by Maurice, an orangutan and Caesar 's adviser, Luca, and Rocket. The other apes head for the desert. During their journey, the apes enter an abandoned village, encounter a soldier, and kill him. Caesar, Maurice, Luca, and Rocket search the dead soldier 's home. Maurice discovers a girl who the soldier was looking after. She is apparently unable to speak. Maurice befriends and adopts the girl, giving her a small rag doll.
Caesar 's party eventually sneak inside an Alpha - Omega camp on a beach and encounter Winter, who has volunteered to become a "donkey '' for the soldiers in return for sparing his life. He tells Caesar 's group that the Colonel has departed for a location called the "border '', where soldiers further north will meet them. Winter tries to call out to the Alpha - Omega soldiers to save him, and Caesar and the others restrain him from making noise. In the struggle, Caesar accidentally smothers and kills Winter. Caesar begins to worry that he is becoming like Koba by killing his fellow apes and seeking revenge.
While following the soldiers to the border, the group discover some soldiers who have been shot and left for dead. Their examination of a soldier who survived reveals that he, like the girl, can not speak. At the soldier 's urging, Caesar kills him. Later, they encounter Bad Ape, a reclusive chimpanzee. Bad Ape reveals that the human soldiers are encamped at the border and hesitantly agrees to lead them there.
When the group arrives at the border, they see hundreds of apes held captive inside a former quarantine facility. While getting a closer look, Luca is killed protecting Caesar from an Alpha - Omega patrol, angering Caesar and causing him to proceed alone. Caesar discovers that the captured apes belong to his clan, and are being forced to build a wall with no food or water. After Caesar is captured by Red, the Colonel reveals to Caesar that the Simian Flu virus has mutated and now causes humans who survived the original strain to become mute, which he believes is a sign of devolution to a primitive state. Caesar deduces that the Colonel is barricading the facility to fend off remnants of the U.S. Army from the north who are coming to execute him because he favors massacring any infected humans, including his own son, to stop the spread of the virus. Caesar is commended by the Colonel for his intelligence, and the Colonel explains that he is fighting a "holy war '' for humanity 's survival.
While Caesar is tortured with starvation, the mute girl, whom Maurice names Nova, sneaks into the facility to give Caesar food, water, and her rag doll (originally given to her by Maurice). To prevent Nova from being discovered, Rocket allows himself to be captured as a diversion. The next day, the Colonel comes to see if Caesar is still alive, and confiscates Nova 's doll upon discovering it. Together, Caesar and Rocket are able to work out a means of escape via an underground tunnel that leads out of the facility. Maurice and Bad Ape use the tunnel to rescue the apes, and Caesar orders the others to escape while he goes to confront the Colonel as the facility comes under attack by the northern army. Caesar reaches the Colonel and prepares to kill him, but discovers that Nova 's doll has infected him with the mutated virus, rendering him unable to speak. Caesar spares the Colonel and watches as he uses his pistol to commit suicide.
During the battle between Alpha - Omega and the northern army, the escaping apes come under fire from Alpha - Omega. Red watches the apes struggle to escape and being picked off by the humans, and seems to be disturbed by watching his own kind suffer such a fate. Caesar attempts to attack the Alpha - Omega forces from behind, but is shot with a crossbow by Preacher, one of the Alpha - Omega soldiers he had previously set free. In an act of redemption, Red saves Caesar 's life by killing Preacher with a grenade launcher, and gets shot by an Alpha - Omega superior as a result. In the chaos, Caesar blows up a large fuel tank, causing a cascade of explosions which destroys the Alpha - Omega facility, allowing the northern army to win the battle. However, the army is subsequently devastated by an avalanche, which Caesar and the other apes, carrying Nova, survive by climbing nearby trees.
The remaining apes depart the facility and cross the desert to find an oasis. While the other apes joyously celebrate their new home, Maurice discovers Caesar 's wound. Maurice then speaks, telling the dying Caesar that Cornelius will know who his father was, what he stood for and what he did to protect the apes. Caesar then slowly and silently succumbs to his wounds, dying peacefully while Maurice mourns his passing as the other apes look on.
After seeing his cut of Dawn, 20th Century Fox and Chernin Entertainment signed Matt Reeves to return as director for a third installment of the reboot series. In January 2014, the studio announced the third film, with Reeves returning to direct and co-write along with Bomback, and Peter Chernin, Rick Jaffa and Amanda Silver serving as producers. During an interview in mid-November 2014 with MTV, Andy Serkis said they did not know the next film 's setting. "... It could be five years after the event. It could be the night after the events of where we left ' Dawn. ' '' In May 2015, the title was first given as War of the Planet of the Apes. By October 2015, it had been retitled as War for the Planet of the Apes.
When director Reeves and screenwriter Bomback came on board to helm Dawn, the film already had a release date, which led to an accelerated production schedule. However, with the third installment, Fox wanted to give the duo plenty of time to write and make the film. Taking advantage of this, the two bonded with each other more than before.
In interviews for Dawn, Reeves talked about the inevitable war Caesar would have with the humans: "As this story continues, we know that war is not avoided by the end of Dawn. That is going to take us into the world of what he is grappling with. Where he is going to be thrust into circumstances that he never, ever wanted to deal with, and was hoping he could avoid. And now he is right in the middle of it. The things that happen in that story test him in huge ways, in the ways in which his relationship with Koba haunts him deeply. It 's going to be an epic story. I think you 've probably read that I sort of described it where in the first film was very much about his rise from humble beginnings to being a revolutionary. The second movie was about having to rise to the challenge of being a great leader in the most difficult of times. This is going to be the story that is going to cement his status as a seminal figure in ape history, and sort of leads to an almost biblical status. He is going to become like a mythic ape figure, like Moses. ''
Toby Kebbell, who portrayed Koba in Dawn, had expressed interest in reprising his role or performing as other characters. Plans to include Koba in a larger role in the film were abandoned early, with Bomback saying, "If you stayed until the very end of Dawn of the Planet of the Apes, you hear Koba 's breathing. We did that to give us a tiny crack of a possibility that we could revive Koba if we wanted to. Very early on in spitballing, we realized there was nothing more to do with Koba -- certainly nothing that would exceed what he had done in the last story. But we knew we wanted to keep him alive as an idea. In playing out the reality of what happened at the end of the last film, Caesar would be traumatized by having to kill his brother. That would have resonance, and we wanted to make sure that did n't get lost. So the answer was that we could go inside Caesar 's mind at this point and revisit Koba that way. ''
In August 2015, Deadline reported that Gabriel Chavarria had been cast as one of the humans in the film. In September 2015, The Hollywood Reporter announced that Woody Harrelson had been cast as the film 's antagonist, and that Chavarria 's role was supporting. In October 2015, TheWrap reported that Steve Zahn was cast as a new ape in the film. It was also announced that actress Amiah Miller was cast as one of the film 's humans, with Judy Greer and Karin Konoval reprising their roles as Cornelia and Maurice, while Aleks Paunovic and Sara Canning were cast as new apes.
Principal photography on the film began on October 14, 2015 in the Lower Mainland in Vancouver, under the working title Hidden Fortress. Filming was expected to take place there until early March 2016. Parts of the film were expected to shoot for up to five days in the Kananaskis in late January and early February. In March, Serkis confirmed that he had finished shooting his portions.
As with Rise and Dawn, the visual effects for War were created by Weta Digital; the apes were created with a mixture of motion capture and CGI key - frame animation, with the actors ' performances recorded using motion - capture technology and then rendered in CGI.
At New York Comic - Con 2016, Reeves explained that he and Bomback were influenced by many films before writing. He said, "One of the first things that Mark and I did because we had just finished Dawn was that we decided to watch a million movies. We decided to do what people fantasize what Hollywood screenwriters get to do but no one actually does. We got Fox to give us a theater and we watched movie after movie. We watched every Planet of the Apes movie, war movies, westerns, Empire Strikes Back... We just thought, ' We have to pretend we have all the time in the world, ' even though we had limited time. We got really inspired. ''
Additionally, during production, Reeves and Bomback sought broader inspirations from films like Bridge on the River Kwai and The Great Escape. Feeling a need to imbue the story with Biblical themes and elements, they also watched Biblical epics like Ben - Hur and The Ten Commandments. These influences are evident in the relationship between Caesar and Woody Harrelson 's Colonel, a military leader with pretensions toward godhood. Reeves has compared their relationship to the dynamic between Alec Guinness 's British Commander and Sessue Hayakawa 's prison camp Colonel in Bridge on the River Kwai. Another comparison is in Caesar 's journey to find the Colonel, flanked by a posse of close friends -- a situation Reeves explicitly tied to Clint Eastwood 's war - weary soldier in The Outlaw Josey Wales. Influences from the film Apocalypse Now, notably Harrelson 's character and his Alpha - Omega faction being similar to Colonel Kurtz 's renegade army, were also noted by several journalists. Harrelson has also acknowledged the similarities and inspiration.
On October 17, 2015, it was confirmed that Michael Giacchino, the composer and writer of the soundtrack for Dawn, would return to compose War 's score. The soundtrack was digitally released to iTunes and Amazon on July 7, 2017, and in its physical form by Sony Masterworks on July 21, 2017.
All music composed by Michael Giacchino.
The film was initially set for a July 29, 2016, release. However, in January 2015, Fox postponed the film 's release date to about a year later on July 14, 2017.
Special behind - the - scenes footage for the film was aired on TV on November 22, 2015, as part of a contest announcement presented by director Matt Reeves and Andy Serkis. The footage aired during The Walking Dead on AMC. The announcement allowed winners to wear a performance - capture suit and appear in a scene as an ape. The announcement was released on 20th Century Fox 's official YouTube page later the same day.
At a New York Comic Con special event on October 6, 2016, Reeves, Serkis and producer Dylan Clark debuted an exclusive look at the film.
Serkis has also mentioned that the film would be accompanied by a video game, for which he performed motion capture. Titled Planet of the Apes: Last Frontier, the game is set for release for the PlayStation 4, Xbox One and PC in fall 2017.
War for the Planet of the Apes was released on Digital HD on October 10, 2017, and on Blu - ray and DVD on October 24, 2017 by 20th Century Fox Home Entertainment.
War for the Planet of the Apes grossed $146.9 million in the United States and Canada and $343.8 million in other territories for a worldwide total of $490.7 million, against a production budget of $150 million.
In North America, the film was projected to gross $50 -- 60 million in its opening weekend; however, given its acclaimed status and strong word - of - mouth, rival studios believed the film had the potential to debut as high as $70 -- 80 million. War was closely monitored by analysts at a time when the summer was witnessing a decline in ticket sales, a situation they blamed on franchise fatigue caused by an overabundance of sequels and reboots (such as Pirates of the Caribbean: Dead Men Tell No Tales, Transformers: The Last Knight and The Mummy). However, box office analysts noted that well - reviewed films had tended to perform in line with estimates (Guardians of the Galaxy Vol. 2, Wonder Woman and Spider - Man: Homecoming). The film grossed $5 million from Thursday night previews at 3,021 theaters, up 22 % from the $4.1 million earned by its predecessor, and $22.1 million on its first day. It went on to debut to $56.3 million, topping the box office, albeit with a 22 % drop from Dawn 's $72.6 million debut. In its second weekend, the film grossed $20.9 million (a drop of 62.9 %, more than the 50.4 % fall Dawn saw), finishing 4th at the box office. In its third weekend, the film made $10.5 million (dropping another 49.9 %), finishing 6th at the box office. This was lower than the third weekends of both Rise ($16.1 million) and Dawn ($16.8 million).
Outside North America, War for the Planet of the Apes received a staggered release over a span of three months (July -- September). The film began its release in about a third of the marketplace on July 14, albeit in only two major markets, and was projected to have an opening of $50 -- 60 million, with the potential to go higher if smaller Asian markets over-performed, as they had for recent tentpoles. The film ended up having an international debut of $44.2 million, including $9.27 million in the United Kingdom.
War for the Planet of the Apes received critical acclaim for the performances of its cast (particularly Serkis), direction, musical score, visual effects, cinematography and morally complex storyline. On review aggregator site Rotten Tomatoes, the film has an approval rating of 93 %, based on 280 reviews, with a rating average of 8.1 / 10. The site 's critical consensus reads, "War for the Planet of the Apes combines breathtaking special effects and a powerful, poignant narrative to conclude this rebooted trilogy on a powerful -- and truly blockbuster -- note. '' On review aggregator Metacritic, which assigns a weighted average rating to reviews, the film has a score of 82 out of 100, based on 50 critics, indicating "universal acclaim ''. Audiences polled by CinemaScore gave the film an average grade of "A -- '' on an A+ to F scale, the same score earned by its two immediate predecessors. Scott Collura of IGN awarded the film a score of 9.5 out of 10, saying: "War for the Planet of the Apes is an excellent closing act to this rebooted trilogy, but also one that does enough world - building that the series can potentially continue from here -- and it 's a rare case where, after three movies, we 're left wanting more. ''
A.O. Scott of The New York Times said of the film, ""War for the Planet of the Apes, '' directed by Matt Reeves, is the grimmest episode so far, and also the strongest, a superb example -- rare in this era of sloppily constructed, commercially hedged cinematic universes -- of clear thinking wedded to inventive technique in popular filmmaking, '' and lauded Andy Serkis 's performance in the film, stating that "Andy Serkis 's performance as Caesar is one of the marvels of modern screen acting. '' Peter Travers of Rolling Stone gave the film 3.5 out of 4 stars, and said that Serkis performed "with a resonant power and depth of feeling that 's nearly Shakespearean. Oscar, get busy: Serkis deserves the gold, '' and went on to say that "War for the Planet of the Apes -- No. 9 in the simian cinema canon -- is the best of the Apes films since the 1968 original. '' Eric Kohn of Indiewire gave the film a B+ rating, and praised Matt Reeves 's directing, saying "It 's a given that an expensive 21st century sci - fi movie with talking animals, exploding tanks, and jarring machine guns would look and sound great, but Reeves applies these effects with such a measured strategy that they 're always working in service of a greater narrative agenda. '' Kohn went on to applaud the visuals, cinematography, and musical score, stating that "The breathlessly paced montage of flying bullets and angry monkeys raining down on terrified men, aided by Michael Giacchino 's vibrant score, is a strong indicator of the next - level craftsmanship that distinguishes these movies from so many cacophonous Hollywood spectacles; not only is the action easy to follow, but you care for the motion - captured characters at the center of it, while the humans cower in fear. ''
In October 2016, it was reported that a fourth Planet of the Apes film was being discussed.
|
when were the first computers invented and used | Computer - wikipedia
A computer is a device that can be instructed to carry out arbitrary sequences of arithmetic or logical operations automatically. The ability of computers to follow generalized sets of operations, called programs, enables them to perform an extremely wide range of tasks.
Such computers are used as control systems for a very wide variety of industrial and consumer devices. This includes simple special purpose devices like microwave ovens and remote controls, factory devices such as industrial robots and computer assisted design, but also in general purpose devices like personal computers and mobile devices such as smartphones. The Internet is run on computers and it connects millions of other computers.
Since ancient times, simple manual devices like the abacus aided people in doing calculations. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II. The speed, power, and versatility of computers have increased continuously and dramatically since then.
Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU), and some form of memory. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joystick, etc.), output devices (monitor screens, printers, etc.), and input / output devices that perform both functions (e.g., the 2000s - era touchscreen). Peripheral devices allow information to be retrieved from an external source and they enable the result of operations to be saved and retrieved.
According to the Oxford English Dictionary, the first known use of the word "computer '' was in 1613 in a book called The Yong Mans Gleanings by English writer Richard Braithwait: "I haue (sic) read the truest computer of Times, and the best Arithmetician that euer (sic) breathed, and he reduceth thy dayes into a short number. '' This usage of the term referred to a person who carried out calculations or computations. The word continued with the same meaning until the middle of the 20th century. From the end of the 19th century the word began to take on its more familiar meaning, a machine that carries out computations.
The Online Etymology Dictionary gives the first attested use of "computer '' in the "1640s, (meaning) ' one who calculates ' ''; this is an "... agent noun from compute (v.) ''. The Online Etymology Dictionary states that the use of the term to mean "calculating machine '' (of any type) is from 1897. The Online Etymology Dictionary indicates that the "modern use '' of the term, to mean "programmable digital electronic computer '', dates from "... 1945 under this name; (in a) theoretical (sense) from 1937, as Turing machine ''.
Devices have been used to aid computation for thousands of years, mostly using one - to - one correspondence with fingers. The earliest counting device was probably a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, probably livestock or grains, sealed in hollow unbaked clay containers. The use of counting rods is one example.
The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BC. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money.
The Antikythera mechanism is believed to be the earliest mechanical analog "computer '', according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to circa 100 BC. Devices of a level of complexity comparable to that of the Antikythera mechanism would not reappear until a thousand years later.
Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al - Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BC and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear - wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al - Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed - wired knowledge processing machine with a gear train and gear - wheels, circa 1000 AD.
The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation.
The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage.
The slide rule was invented around 1620 -- 1630, shortly after the publication of the concept of the logarithm. It is a hand - operated analog computer for doing multiplication and division. As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Aviation is one of the few fields where slide rules are still in widespread use, particularly for solving time -- distance problems in light aircraft. To save space and for ease of reading, these are typically circular devices rather than the classic linear slide rule shape. A popular example is the E6B.
In the 1770s Pierre Jaquet - Droz, a Swiss watchmaker, built a mechanical doll (an automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed '' to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates.
The tide - predicting machine invented by Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location.
The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel - and - disc mechanisms to perform the integration. In 1876 Lord Kelvin had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball - and - disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers.
Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer '', he conceptualized and invented the first mechanical computer in the early 19th century. After working on his revolutionary difference engine, designed to aid in navigational calculations, in 1833 he realized that a much more general design, an Analytical Engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The Engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general - purpose computer that could be described in modern terms as Turing - complete.
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand -- this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage 's failure to complete the analytical engine can be chiefly attributed to difficulties not only of politics and financing, but also to his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine 's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906.
During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide - predicting machine, invented by Sir William Thomson in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel - and - disc mechanisms, was conceptualized in 1876 by James Thomson, the brother of the more famous Lord Kelvin.
The art of mechanical analog computing reached its zenith with the differential analyzer, built by H.L. Hazen and Vannevar Bush at MIT starting in 1927. This built on the mechanical integrators of James Thomson and the torque amplifiers invented by H.W. Nieman. A dozen of these devices were built before their obsolescence became obvious. By the 1950s the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (control systems) and aircraft (slide rule).
By 1938 the United States Navy had developed an electromechanical analog computer small enough to use aboard a submarine. This was the Torpedo Data Computer, which used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II similar devices were developed in other countries as well.
Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all - electric computers, originally using vacuum tubes. The Z2, created by German engineer Konrad Zuse in 1939, was one of the earliest examples of an electromechanical relay computer.
In 1941, Zuse followed his earlier machine up with the Z3, the world 's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22 bit word length that operated at a clock frequency of about 5 -- 10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating point numbers. Rather than the harder - to - implement decimal system (used in Charles Babbage 's earlier design), using a binary system meant that Zuse 's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was Turing complete.
Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff -- Berry Computer (ABC) in 1942, the first "automatic electronic digital computer ''. This design was also all - electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory.
During World War II, the British at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro - mechanical bombes. To crack the more sophisticated German Lorenz SZ 40 / 42 machine, used for high - level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February.
Colossus was the world 's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper - tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing - complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total). Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process.
The U.S. - built ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the US. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and it was Turing - complete. Like the Colossus, a "program '' on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches.
It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC 's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors.
The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called "Universal Computing machine '' and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing 's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing - complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine.
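The idea of a machine driven entirely by a stored table of instructions can be made concrete in a few lines of code. The sketch below is only an illustration of the principle, not Turing 's formal construction: the tape contents, the single working state and the rule table are all invented for the example. It simulates a machine that flips every binary digit it reads until it reaches a blank cell, then halts.

```c
#include <stdio.h>

/* A transition rule: given the current state and symbol,
   write a symbol, move the head, and switch state. */
typedef struct {
    char write;
    int  move;        /* +1 = right, -1 = left, 0 = stay */
    int  next_state;
} Rule;

#define HALT 1

int main(void) {
    char tape[] = "1011____";   /* '_' marks blank cells */
    int head = 0;
    int state = 0;

    while (state != HALT) {
        char sym = tape[head];
        Rule r;

        /* The machine's "program": flip bits until a blank is seen. */
        if (state == 0 && sym == '0')      r = (Rule){ '1', +1, 0 };
        else if (state == 0 && sym == '1') r = (Rule){ '0', +1, 0 };
        else                               r = (Rule){ sym,  0, HALT };

        tape[head] = r.write;   /* write */
        head += r.move;         /* move the head */
        state = r.next_state;   /* change state */
    }

    printf("final tape: %s\n", tape);   /* prints "0100____" */
    return 0;
}
```

Changing only the rule table changes what the machine computes, which is the essential point of the stored - program idea.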
Early computing machines had fixed programs. Changing its function required the re-wiring and re-structuring of the machine. With the proposal of the stored - program computer this changed. A stored - program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored - program computer was laid by Alan Turing in his 1936 paper. In 1945 Turing joined the National Physical Laboratory and began work on developing an electronic stored - program digital computer. His 1945 report "Proposed Electronic Calculator '' was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945.
The Manchester Small - Scale Experimental Machine, nicknamed Baby, was the world 's first stored - program computer. It was built at the Victoria University of Manchester by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948. It was designed as a testbed for the Williams tube, the first random - access digital storage device. Although the computer was considered "small and primitive '' by the standards of its time, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the SSEM had demonstrated the feasibility of its design, a project was initiated at the university to develop it into a more usable computer, the Manchester Mark 1.
The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world 's first commercially available general - purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947, the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. The LEO I computer became operational in April 1951 and ran the world 's first regular routine office computer job.
The bipolar transistor was invented in 1947. From 1955 onwards transistors replaced vacuum tubes in computer designs, giving rise to the "second generation '' of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power than vacuum tubes, so they give off less heat. Silicon junction transistors were much more reliable than vacuum tubes and had a longer, effectively indefinite service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space.
At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorised computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell.
The next great advance in computing power came with the advent of the integrated circuit. The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C. on 7 May 1952.
The first practical ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material... wherein all the components of the electronic circuit are completely integrated ''. Noyce also came up with his own idea of an integrated circuit half a year later than Kilby. His chip solved many practical problems that Kilby 's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby 's chip was made of germanium.
This new development heralded an explosion in the commercial and personal use of computers and led to the invention of the microprocessor. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor '', it is largely undisputed that the first single - chip microprocessor was the Intel 4004, designed and realized by Ted Hoff, Federico Faggin, and Stanley Mazor at Intel.
With the continued miniaturization of computing resources, and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments that spurred the growth of laptop computers and other portable computers allowed manufacturers to integrate computing resources into cellular phones. These so - called smartphones and tablets run on a variety of operating systems and have become the dominant computing device on the market, with manufacturers reporting having shipped an estimated 237 million devices in 2Q 2013.
Computers are typically classified based on their uses:
The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboard, displays, power supplies, cables, keyboards, printers and "mice '' input devices are all hardware.
A general purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I / O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1 '', and when off it represents a "0 '' (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits.
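The way bits and logic gates combine can be imitated with ordinary bitwise operators. The following sketch is illustrative only: it models a one-bit half adder, in which an XOR gate produces the sum bit and an AND gate produces the carry bit, and prints its truth table.

```c
#include <stdio.h>

/* One-bit "gates" modelled as functions on values that are either 0 or 1. */
static int AND(int a, int b) { return a & b; }
static int XOR(int a, int b) { return a ^ b; }

int main(void) {
    /* A half adder: two gates turn two input bits into a sum bit and a carry bit. */
    for (int a = 0; a <= 1; a++) {
        for (int b = 0; b <= 1; b++) {
            int sum   = XOR(a, b);   /* 1 when exactly one input is 1 */
            int carry = AND(a, b);   /* 1 only when both inputs are 1 */
            printf("a=%d b=%d -> carry=%d sum=%d\n", a, b, carry, sum);
        }
    }
    return 0;
}
```

Chaining such adders bit by bit is, in essence, how the addition circuitry inside an arithmetic logic unit is organized.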
When unprocessed data is sent to the computer with the help of input devices, the data is processed and sent to output devices. The input devices may be hand - operated or automated. The act of processing is mainly regulated by the CPU. Some examples of hand - operated input devices are:
The means through which computer gives output are known as output devices. Some examples of output devices are:
The control unit (often called a control system or central controller) manages the computer 's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer. Control systems in advanced computers may change the order of execution of some instructions to improve performance.
A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.
The control system 's function is as follows -- note that this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU:
Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps '' and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow).
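The fetch, decode and execute cycle and the role of the program counter can be sketched as a tiny interpreter. Everything below is invented for illustration (a toy machine with three opcodes) and does not correspond to any real CPU's instruction set. Note how a "jump" is nothing more than an assignment to the program counter, and how the program sits in the same memory array that could equally hold data, as in a stored-program machine.

```c
#include <stdio.h>

/* Toy instruction set: each instruction is two numbers, an opcode and an operand. */
enum { ADD = 1, JUMP = 2, HALT = 3 };

int main(void) {
    int memory[] = {
        ADD,  5,     /* address 0: add 5 to the accumulator             */
        JUMP, 6,     /* address 2: set the program counter to address 6 */
        ADD,  100,   /* address 4: skipped because of the jump above    */
        HALT, 0      /* address 6: stop                                 */
    };
    int pc = 0;          /* program counter: address of the next instruction */
    int accumulator = 0;
    int running = 1;

    while (running) {
        int opcode  = memory[pc];      /* fetch */
        int operand = memory[pc + 1];
        pc += 2;                       /* by default, move on to the next instruction */

        switch (opcode) {              /* decode and execute */
        case ADD:  accumulator += operand; break;
        case JUMP: pc = operand;           break;  /* a "jump" just rewrites the PC */
        case HALT: running = 0;            break;
        }
    }
    printf("accumulator = %d\n", accumulator);  /* prints 5: the jump skipped the second ADD */
    return 0;
}
```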
The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen.
The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components but since the mid-1970s CPUs have typically been constructed on a single integrated circuit called a microprocessor.
The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can only operate on whole numbers (integers) whilst others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation -- although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65? ''). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing boolean logic.
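A software model of a very small ALU makes the split between arithmetic and logic operations concrete. The operation names and the particular set of operations below are chosen for the example rather than taken from any real processor.

```c
#include <stdio.h>

typedef enum { OP_ADD, OP_SUB, OP_AND, OP_OR, OP_XOR, OP_GREATER } AluOp;

/* A toy ALU: combine two integers according to the selected operation. */
static int alu(AluOp op, int a, int b) {
    switch (op) {
    case OP_ADD:     return a + b;   /* arithmetic */
    case OP_SUB:     return a - b;
    case OP_AND:     return a & b;   /* logic */
    case OP_OR:      return a | b;
    case OP_XOR:     return a ^ b;
    case OP_GREATER: return a > b;   /* comparison yields a truth value, 0 or 1 */
    }
    return 0;
}

int main(void) {
    printf("64 + 65 = %d\n", alu(OP_ADD, 64, 65));        /* 129 */
    printf("64 > 65 = %d\n", alu(OP_GREATER, 64, 65));    /* 0, i.e. false */
    printf("12 AND 10 = %d\n", alu(OP_AND, 12, 10));      /* 8 (1100 AND 1010 = 1000) */
    return 0;
}
```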
Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices.
A computer 's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address '' and can store a single number. The computer can be instructed to "put the number 123 into the cell numbered 1357 '' or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595. '' The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software 's responsibility to give significance to what the memory sees as nothing but a series of numbers.
In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2⁸ = 256), either from 0 to 255 or − 128 to + 127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two 's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory.
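These byte and two's complement conventions can be demonstrated directly with standard C integer types; the particular values below are arbitrary and chosen only to show the ranges involved.

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* One byte holds 2^8 = 256 distinct values: 0..255 unsigned, or -128..+127 signed. */
    uint8_t max_unsigned = 255;
    int8_t  min_signed   = -128;

    /* In two's complement the bit pattern for -1 is all ones (0xFF in one byte),
       so the same eight bits mean 255 when read as an unsigned number. */
    int8_t minus_one = -1;
    printf("bits of -1 read as unsigned: %u\n", (unsigned)(uint8_t)minus_one);  /* 255 */

    /* Larger numbers occupy several consecutive bytes; a 32-bit integer uses four. */
    uint32_t big = 1000000;
    printf("a 32-bit value occupies %zu bytes\n", sizeof big);                  /* 4 */

    printf("unsigned byte maximum %u, signed byte minimum %d\n",
           (unsigned)max_unsigned, (int)min_signed);
    return 0;
}
```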
The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer 's speed.
Computer main memory comes in two principal varieties: random - access memory (RAM) and read - only memory (ROM).
RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, therefore the CPU can only read from it. ROM is typically used to store the computer 's initial start - up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer 's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM however, so its use is restricted to applications where high speed is unnecessary.
In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer 's part.
I / O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I / O. I / O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I / O. A 2016 - era flat screen display contains its own computer circuitry.
While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time '', then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time - sharing '' since each program is allocated a "slice '' of time in turn.
Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input / output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice '' until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss.
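The time slicing described above can be mimicked with a simple round robin loop that gives each "program" a small slice of work in turn. This is only a cooperative sketch of the idea; a real operating system preempts programs with hardware timer interrupts rather than relying on a loop like this, and the task names and work amounts below are invented.

```c
#include <stdio.h>

/* Each "program" is represented by a task that does one small slice
   of work per visit, then yields so the next task can run. */
typedef struct {
    const char *name;
    int progress;   /* how much work this task has completed */
    int total;      /* how much work it needs in total */
} Task;

static int run_slice(Task *t) {
    if (t->progress >= t->total)
        return 0;                       /* nothing left to do */
    t->progress++;                      /* one "time slice" of work */
    printf("%s: %d/%d\n", t->name, t->progress, t->total);
    return 1;
}

int main(void) {
    Task tasks[] = { {"editor", 0, 3}, {"player", 0, 2}, {"printer", 0, 4} };
    int n = sizeof tasks / sizeof tasks[0];
    int still_running;

    do {                                /* round robin: visit every task in turn */
        still_running = 0;
        for (int i = 0; i < n; i++)
            still_running |= run_slice(&tasks[i]);
    } while (still_running);

    return 0;
}
```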
Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed only in large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower - end markets as a result.
Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored - program architecture and from general purpose computers. They often feature thousands of CPUs, customized high - speed interconnects, and specialized computing hardware. Such designs tend to be useful only for specialized tasks due to the large scale of program organization required to successfully utilize most of the available resources at once. Supercomputers usually see usage in large - scale simulation, graphics rendering, and cryptography applications, as well as with other so - called "embarrassingly parallel '' tasks.
Software refers to parts of the computer which do not have a material form, such as programs, data, protocols, etc. Software is that part of a computer system that consists of encoded information or computer instructions, in contrast to the physical hardware from which the system is built. Computer software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. Computer hardware and software require each other and neither can be realistically used on its own. When software is stored in hardware that can not easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware ''.
There are thousands of different programming languages -- some intended to be general purpose, others useful only for highly specialized applications.
The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors.
This section applies to most common RAM machine - based computers.
In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer 's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump '' instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers '' the location it jumped from and another instruction to return to the instruction following that jump instruction.
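One way to see how jump, conditional branch and subroutine call instructions surface in practice is to write the same small computation twice: once with ordinary structured control flow, and once with explicit labels and goto, which is roughly the shape the compiled machine code takes. The example is purely illustrative.

```c
#include <stdio.h>

/* A subroutine: the call instruction records where to return to,
   and the return instruction jumps back there. */
static int square(int x) {
    return x * x;
}

int main(void) {
    int i, total = 0;

    /* Ordinary structured control flow... */
    for (i = 1; i <= 3; i++)
        total += square(i);

    /* ...and the same loop written with an explicit conditional branch and jump,
       which is close to how the machine-level version is laid out. */
    int j = 1, total2 = 0;
loop:
    if (j > 3) goto done;    /* conditional branch */
    total2 += square(j);     /* subroutine call and return */
    j++;
    goto loop;               /* unconditional jump backwards */
done:
    printf("%d %d\n", total, total2);   /* prints 14 14 */
    return 0;
}
```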
Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention.
Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. The following example is written in the MIPS assembly language:
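The assembly listing referred to in the sentence above is not reproduced in this excerpt. As a stand-in, the same computation, adding every number from 1 to 1,000, can be sketched in C; a compiler translates a loop like this into a short sequence of machine instructions much like the MIPS routine the text describes.

```c
#include <stdio.h>

int main(void) {
    int sum = 0;

    /* Add every number from 1 to 1,000; the loop is the "repetitive addition task". */
    for (int n = 1; n <= 1000; n++)
        sum += n;

    printf("%d\n", sum);   /* prints 500500 */
    return 0;
}
```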
Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second.
In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer 's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer 's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.
While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers, it is extremely tedious and potentially error - prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember -- a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer 's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler.
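An assembler's core job, translating mnemonics into numeric opcodes, can be reduced to a table lookup. The mnemonics and opcode numbers below are invented for the example (they match the toy three-instruction machine sketched earlier, not any real assembler); a real assembler would also encode operands, resolve labels and lay out the result in memory.

```c
#include <stdio.h>
#include <string.h>

/* A minimal "assembler": map each mnemonic to the numeric opcode of the
   toy machine used earlier (ADD = 1, JUMP = 2, HALT = 3). */
static int assemble(const char *mnemonic) {
    static const struct { const char *name; int opcode; } table[] = {
        { "ADD", 1 }, { "JUMP", 2 }, { "HALT", 3 },
    };
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(mnemonic, table[i].name) == 0)
            return table[i].opcode;
    return -1;   /* unknown mnemonic */
}

int main(void) {
    const char *source[] = { "ADD", "ADD", "JUMP", "HALT" };

    for (size_t i = 0; i < 4; i++)
        printf("%-4s -> %d\n", source[i], assemble(source[i]));
    return 0;
}
```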
Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques.
Machine languages and the assembly languages that represent them (collectively termed low - level programming languages) tend to be unique to a particular type of computer. For instance, an ARM architecture computer (such as may be found in a smartphone or a hand - held videogame) can not understand the machine language of an x86 CPU that might be in a PC.
Though considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high - level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled '' into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler. High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem (s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles.
Fourth - generation languages (4GL) are less procedural than third - generation (3G) languages. The benefit of 4GLs is that they provide ways to obtain information without requiring the direct help of a programmer. An example of a 4GL is SQL.
Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object - oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge.
Errors in computer programs are called "bugs ''. They may be benign and not affect the usefulness of the program, or have only subtle effects. But in some cases, they may cause the program or the entire system to "hang '', becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer 's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program 's design. Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited for having first used the term "bugs '' in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947.
Firmware combines aspects of both hardware and software, as with the BIOS chip inside a computer: the chip itself (hardware) is located on the motherboard, and the BIOS setup program (software) is stored in it.
Computers have been used to coordinate information between multiple locations since the 1950s. The U.S. military 's SAGE system was the first large - scale example of such a system, which led to a number of special - purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. The technologies that made the Arpanet possible spread and evolved.
In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high - tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL saw computer networking become almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information. "Wireless '' networking, often utilizing mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments.
A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer '' is synonymous with a personal electronic computer, the modern definition of a computer is literally: "A device that computes, especially a programmable (usually) electronic machine that performs high - speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information. '' Any device which processes information qualifies as a computer, especially if the processing is purposeful.
Historically, computers evolved from mechanical computers and eventually from vacuum tubes to transistors. However, conceptually computational systems as flexible as a personal computer can be built out of almost anything. For example, a computer can be made out of billiard balls (a billiard ball computer), an often - quoted example. More realistically, modern computers are made out of transistors made of photolithographed semiconductors.
There is active research to make computers out of many promising new types of technology, such as optical computers, DNA computers, neural computers, and quantum computers. Most computers are universal, and are able to calculate any computable function, and are limited only by their memory capacity and operating speed. However different designs of computers can give very different performance for particular problems; for example quantum computers can potentially break some modern encryption algorithms (by quantum factoring) very quickly.
There are many types of computer architectures:
Of all these abstract machines, a quantum computer holds the most promise for revolutionizing computing. Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church -- Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing - complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity.
A computer will solve problems in exactly the way it is programmed to, without regard to efficiency, alternative solutions, possible shortcuts, or possible errors in the code. Computer programs that learn and adapt are part of the emerging field of artificial intelligence and machine learning.
As the use of computers has spread throughout society, there are an increasing number of careers involving computers.
The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.
|
what are the two jewish high holy days | High Holy Days - wikipedia
The High Holidays or High Holy Days, in Judaism, more properly known as the Yamim Noraim (Hebrew: ימים נוראים "Days of Awe ''), may mean:
The term High Holy Days most probably derives from the popular English phrase, "high days and holydays ''. The Hebrew equivalent, "Yamim Noraim '' (Hebrew: ימים נוראים ), is neither Biblical nor Talmudic. Professor Ismar Elbogen, author of "Jewish Liturgy in its Historical Development '', avers that it was a medieval usage, reflecting a change in the mood of Rosh Hashanah from a predominantly joyous celebration to a more subdued day that was a response to a period of persecution.
Many prefer the term High Holy Days over High Holidays because the former emphasizes the personal, reflective, introspective aspects of this period. By contrast, Holidays suggests a time of communal celebrations of events in the history of the Jewish people.
The Hebrew month preceding Rosh Hashanah, Elul, is designated as a month of introspection and repentance. In preparation for the Jewish New Year, special prayers are recited. Psalms 27 is added at the end of morning and evening prayers, and the shofar (ram 's horn) is blown at the end of morning services on weekdays (except for the eve of Rosh Hashanah itself). Among Sephardi Jews, Selichot are recited at dawn on weekdays throughout the month. Also, many complete the entire Book of Psalms twice during the month. It is customary to increase the giving of charity (Tzedakah) and to ask forgiveness from people one may have wronged.
At midnight on the Saturday night before Rosh Hashanah, Ashkenazi Jews begin reciting selichot. On the following days, however, they generally recite the selichot before the regular morning prayers. On the eve of Rosh Hashanah, extra prayers are recited and many fast until noon.
Rosh Hashanah (Hebrew: ראש השנה "Beginning of the Year '') is the Jewish New Year, and falls on the first and second days of the Jewish month of Tishrei (September / October). The Mishnah, the core work of the Jewish Oral Torah, sets this day aside as the new year for calculating calendar years and sabbatical and jubilee years.
Rabbinic literature describes this day as a day of judgment. God is sometimes referred to as the "Ancient of Days. '' Some descriptions depict God as sitting upon a throne, while books containing the deeds of all humanity are opened before Him.
Prayer services are longer than on a regular shabbat or other Jewish holidays, and include (on weekdays) the blowing of the shofar. On the afternoon of the first (or the second, if the first was Saturday) day, the ritual tashlikh is performed, in which sins are "cast '' into open water, such as a river, sea, or lake.
The "ten days of repentance '' or "the days of awe '' include Rosh Hashanah, Yom Kippur and the days in between, during which time Jews should meditate on the subject of the holidays and ask for forgiveness from anyone they have wronged. They include the Fast of Gedaliah, on the third day of Tishri, and Shabbat Shuvah, which is the Shabbat between Rosh Hashanah and Yom Kippur.
Shabbat Shuvah has a special Haftarah that begins Shuvah Yisrael (come back, oh Israel), hence the name of that Shabbat. Traditionally the rabbi gives a long sermon on that day.
It is held that, while judgment on each person is pronounced on Rosh Hashanah, it is not made absolute until Yom Kippur. The Ten Days are therefore an opportunity to mend one 's ways in order to alter the judgment in one 's favor.
Yom Kippur (יום כפור yom kippūr, "Day of Atonement '') is the Jewish festival of the Day of Atonement. The Hebrew Bible calls the day Yom Hakippurim (Hebrew, "Day of the Atonement / s '').
In the Hebrew calendar, the ninth day of Tishri is known as Erev Yom Kippur (Yom Kippur eve). Yom Kippur itself begins around sunset on that day and continues into the next day until nightfall, and therefore lasts about 25 hours.
Observant Jews will fast throughout Yom Kippur and many attend synagogue for most of the day. There are five prayer services, one in the evening (sometimes known as "Kol Nidre '' from one of the main prayers) and four consecutively on the day.
There is a Kabbalistic belief that, though judgment is made absolute on Yom Kippur, it is not registered until the seventh day of Sukkot, known as Hoshana Rabbah. The service for this day therefore contains some reminiscences of those for the High Holy Days, and it is treated as a last opportunity to repent of sins that may have been missed on Yom Kippur. Jews take bouquets of willow branches that represent their sins and they bash them on the floor while saying a special prayer to God to forgive them for the sins that may have been missed on Yom Kippur.
Generally, throughout most of the year, Jewish worship services are open to all, regardless of affiliation, and membership or payment of any fee is not a requirement in order to attend. However, the High Holy Days are usually peak attendance days for synagogues and temples, often filling or over-filling synagogues. For this reason many synagogues issue tickets for attendance and may charge for them: practice varies on whether paid - up synagogue members must also buy these or whether this is included in the subscription. Synagogues never pass a collection plate during services as some churches do, as Jews are forbidden to touch money on the Sabbath or other holidays such as Rosh Hashanah and Yom Kippur. Among synagogues in the United States, donations are often sought during the Kol Nidre service, called the "Kol Nidre Appeal, '' often via a pledge card on which the amount of the donation is indicated by bending down a paper tab corresponding to the desired amount. Rabbis and other temple representatives say that holiday ticket sales represent a significant source of revenue.
|
who won the australian open tennis this year | 2017 Australian Open - wikipedia
The 2017 Australian Open was a tennis tournament that took place at Melbourne Park between 16 -- 29 January 2017. It was the 105th edition of the Australian Open, and the first Grand Slam tournament of the year. The tournament consisted of events for professional players in singles, doubles and mixed doubles play. Junior and wheelchair players competed in singles and doubles tournaments. As in previous years, the tournament 's title sponsor was Kia.
Novak Djokovic and Angelique Kerber were the defending champions and both were unsuccessful in their title defence; they lost to Denis Istomin and Coco Vandeweghe in the second and fourth rounds, respectively. For the first time since the 2004 French Open, both No. 1 seeds lost before the quarterfinals, with both Andy Murray and Kerber defeated in the fourth round.
Roger Federer won his eighteenth men 's singles Grand Slam title by defeating Rafael Nadal in a five - set final. It was his first major title since 2012 Wimbledon and a rematch of the 2009 Australian Open final, which Nadal won in five sets. Serena Williams overcame her sister Venus in the women 's singles final, surpassing Steffi Graf to become the player with the most major wins in the women 's game in the Open Era.
The 2017 Australian Open was the 105th edition of the tournament and was held at Melbourne Park in Melbourne, Australia.
The tournament was run by the International Tennis Federation (ITF) and was part of the 2017 ATP World Tour and the 2017 WTA Tour calendars under the Grand Slam category. The tournament consisted of both men 's and women 's singles and doubles draws as well as a mixed doubles event. There were singles and doubles events for both boys and girls (players under 18), which were part of the Grade A category of tournaments, and also singles, doubles and quad events for men 's and women 's wheelchair tennis players as part of the NEC tour under the Grand Slam category.
The tournament was played on hard courts and took place over a series of 25 courts, including the three main show courts: Rod Laver Arena, Hisense Arena and Margaret Court Arena.
In Australia, selected key matches were broadcast live by the Seven Network. The majority of matches were shown on the network's primary channel, Channel Seven; however, during news programming nationwide and most night matches in Perth, coverage shifted to either 7Two or 7mate. Additionally, every match was available to be streamed live through a free 7Tennis mobile app.
Internationally, Eurosport held the rights for Europe, broadcasting matches on Eurosport 1, Eurosport 2 and the Eurosport Player.
Ranking points were offered for each event in each of the competitions.
The Australian Open total prize money for 2017 was increased by 14 % to a tournament record A $ 50,000,000.
Prize money for qualifiers was the same as the round-of-128 prize money; doubles prize money was awarded per team.
This was a rematch of the 2009 Australian Open final, which Rafael Nadal won to become the first (and to date, only) Spaniard to win the Australian Open title; as of 2017 it remained his only title at the tournament. The two held serve for the first six games of the final, before a pivotal break in the seventh game gave Federer the opening set. Nadal quickly broke Federer's serve in the second set and raced out to a lead that Federer could not overcome, taking the set and levelling the match at one set apiece. The third set was a rather lopsided affair in which Nadal held serve only once, in the fourth game. The fourth set began competitively with both men holding serve, until Nadal broke in the fourth game, a lead he never surrendered, evening the match at two sets apiece. The decisive fifth set began with Nadal breaking Federer's serve to take an early lead; however, Federer broke back in the sixth game to level the set at three games apiece. Federer then won the next three games, breaking Nadal's serve in the eighth game and serving out the match in the ninth. This was Roger Federer's 18th Grand Slam singles title, the most ever by a man in the history of tennis, and his fifth Australian Open title, one shy of the record co-held by Novak Djokovic and Roy Emerson. Federer went on to equal that record by successfully defending his title the next year.
This was a rematch of the 2003 Australian Open final, in which Serena Williams completed the first "Serena Slam" and her career Grand Slam; Serena won five more Australian Open titles in the interim, while her sister Venus had no other final appearances at the event. Each broke the other's serve twice to start the match, with Venus finally holding serve in the fifth game and Serena holding her own in the sixth. The pivotal moment came in the seventh game, when Serena broke Venus's serve, and she closed out the set three games later. In the second set the two traded holds for the first six games, with Venus serving first. Venus was broken again in the seventh game of the set, which eventually gave the match to Serena. This was Serena Williams's 23rd Grand Slam singles title and her seventh Australian Open title, both Open era records, leaving her one shy of Margaret Court's record of 24 in the history of tennis.
The following are the seeded players and notable players who withdrew from the event. Seedings are arranged according to ATP and WTA rankings on 9 January 2017, while rankings and points before are as of 16 January 2017. The rankings afterwards are as of 30 January 2017.
The qualifying competition took place in Melbourne Park on 11 -- 14 January 2017.
Some players were accepted directly into the main draw using a protected ranking. Certain players who were accepted directly into the main tournament withdrew due to injuries or other reasons.
|
all of the following are communication traditions to which cmm is related except | Coordinated Management of meaning - wikipedia
In the social sciences, coordinated management of meaning (CMM) provides understanding of how individuals create, coordinate and manage meanings in their process of communication. Generally, it refers to "how individuals establish rules for creating and interpreting meaning and how those rules are enmeshed in a conversation where meaning is constantly being coordinated ''.
People live in a world where there is constant communication. In communicating with others, people assign meanings to their messages based on past conversational experiences from previous social realities. Through communication, an underlying process takes place in which individuals negotiate common or conflicting meanings of the world around them, thereby creating a new social reality. CMM advocates that meanings can be managed in a productive way so as to improve the state of interactions by coordinating and managing the meaning-making process.
CMM relies on three interdependent elements: coordination, management and meaning. These elements help to explain how social realities are created through conversation.
The theory of CMM was developed in the mid-1970s by W. Barnett Pearce (1943-2011) and Vernon E. Cronen. Their book Communication, Action, and Meaning was devoted to CMM and offered a thorough explication of the theory, introducing it into the common scholarly vernacular of the discipline. Their scholarly collaboration at the University of Massachusetts at Amherst offered a major contribution to the philosophy of communication as story-centered, applicable, and ever attentive to the importance of human meaning.
The cluster of ideas in which CMM emerged has moved from the periphery toward greater acceptance and CMM has continued to evolve along a trajectory from an interpretive social science to one with a critical edge and then to what its founders call a "practical theory ''.
Aware that the intellectual footing for communication theory had shifted, the first phase of the CMM project involved developing concepts that met the twin criteria of (1) adequately expressing the richness of human communication and (2) guiding empirical investigation. Pearce describes the creation of CMM through the following story:
... I think that I am the first person ever to use the awkward phrase "coordinated management of meaning ''. Of course, tones of voice are often more informative than the verbal content of what is said, and struggle and frustration were expressed in the tones of voice in which "CMM '' was first said. For years, I had been trying to bring together what I was learning from social science research, rhetorical studies, philosophy, theology, and, in my father 's term, the "School of Hard Knocks ''. I felt that most of the models of communication that I knew were useful but that all were limited and limiting in some important ways, and that I had to invent something that was better. Communication is about meaning,... but not just in a passive sense of perceiving messages. Rather, we live in lives filled with meanings and one of our life challenges is to manage those meanings so that we can make our social worlds coherent and live within them with honor and respect. But this process of managing our meanings is never done in isolation. We are always and necessarily coordinating the way we manage our meanings with other people. So, I concluded, communication is about the coordinated management of meaning.
CMM is one of an increasing number of theories that see communication as "performative '' (doing things, not just talking "about '' them) and "constitutive '' (the material substance of the social world, not just a means of transmitting information within it). In CMM - speak, "taking the communication perspective '' means looking at communication rather than through it, and seeing communication as the means by which we make the objects and events of our social worlds.
The "communication perspective '' entails a shift in focus from theory to praxis. CMM concepts and models are best understood as providing tools for naming aspects of performance. To date, CMM has found greater acceptance among practitioners than among scholars. Taking the communication perspective confers something like "communication literacy '' -- the ability to inscribe and read the complex process of communication in real time. Among other things, CMM 's concepts and models guide practitioners in helping clients become aware of the patterns of communication which make up aspects of the social world. They want to change and help both clients and practitioners identify openings or "bifurcation points '' in everyday lives. Many CMM practitioners have an explicit commitment not only to describe and understand, but to improve the conditions in which they and those around them live. They believe that the best way of making better social worlds is to improve the patterns of communication which generates them.
It has been said that "CMM theory is a kind of multi-tool (like a ' Swiss army knife ') that is useful in any situation ''. It is not a single theory, but rather a collection of ideas to understand how humans interact during communication. According to CMM, individuals construct their own social realities while engaged in conversation. To put it simply, communicators apply rules in order to understand what is going on during their social interaction. Based on the situation, different rules are applied in order to produce "better '' patterns of communication.
CMM is a fairly complex theory, focusing on both the complexity of micro-social processes and the aspects of daily interaction. Overall, it is concerned with how we coordinate and establish meaning during interactions. The theory can be complicated to teach or present to others, but it is best understood when broken down into its basics. It consists of three key concepts, which are further broken down into several different building blocks.
The fundamental building blocks of CMM theory focus specifically on the flow of communication between people. The three concepts, experienced either consciously or unconsciously, are coordination, management and meaning.
Coordination refers to "the degree to which persons perceive that their actions have fitted together into some mutually intelligible sequence or pattern of actions". It exists "when two people attempt to make sense out of the sequencing of messages in their conversation". That is, if the people in an interaction can recognize what their partners are talking about, the conversation has achieved coordination. Scholars believe that people's desire for coordination in interaction arises from the subjectivity of meaning: the same message may have different meanings to different people. To avoid this pitfall, people work together to share meanings. Research shows that sense making is the foundation of coordination: when tokens of information are connected through a channel, logical relationships emerge, and these contribute to sense making. Sense making helps people establish common understanding, which in turn develops coordination between them.
The concept of coordination has to do with the fact that our actions do not stand alone with regard to communication. The words or actions that we use during a conversation come together to produce patterns. These patterns, also known as stories lived, influence the behavior used during each interaction as a way to collaborate. Pearce and Cronen are quick to point out that coordination does not imply a commitment to coordinate "smoothly '', but rather the concept is meant to provide the basis for being mindful of the other side of the story.
There are three possible outcome of coordination:
1. People in the interaction achieve coordination.
2. People in the interaction fail to achieve coordination.
3. People in the interaction achieve some degree of coordination.
If the interaction fails to achieve coordination, or achieves only partial coordination, a possible solution is to move the meaning to another level.
Our interactions are guided and defined by rules. "Interactants must understand the social reality and then incorporate rules as they decide how to act in a given situation." Through the use of rules, individuals manage and coordinate meanings in the conversation. "Once rules are established in a dialogue, interactants will have a sufficiently common symbolic framework for communication." For instance, it would be ambiguous if a friend said "I hate you". Does the friend really hate the person he or she is speaking to, or is he or she just expressing a feeling of the moment? Rules help clarify and explain this kind of meaning.
CMM theory sees each conversation as a complex interconnected series of events in which each individual affects and is affected by the other. Although the primary emphasis of CMM theory has to do with the concept of first person communication, known as a participatory view, once the concepts are understood they are more readily visible during other interactions. Furthermore, this knowledge can be applied to similar situations which will in turn lead to more effective communication.
Coordinated management of meaning states that people "organize meaning in a hierarchical manner". CMM theorists agree on two points regarding hierarchical meaning: "First, the hierarchy of meaning defines the context in which regulative and constitutive rules are to be understood. Second, these contexts are arranged in a hierarchy of abstractness, such that higher levels of the hierarchy help to define -- and may subsume -- lower levels." In other words, each context in the hierarchy can be understood by looking at the other contexts, and each context is always contextualizing other contexts.
There are six levels of meaning, described below from the lowest level to the highest.
The content or message, according to CMM theory, relates to the raw data and information spoken aloud during communication. To put it simply, content is the words used to communicate. Content is essentially the basic building block of any language; however, it is important to note that content by itself is not sufficient to establish the meaning of the communication.
Another integral part of the CMM theory includes the speech act. "Speech acts communicate the intention of the speaker and indicate how a particular communication should be taken. '' The simplest explanation of a speech act is "actions that you perform by speaking. They include compliments, insults, promises, threats, assertions, and questions ''. CMM theory draws upon the speech act theory, which further breaks down speech acts into separate categories of sounds or utterances. Though the speech act theory is much more detailed, it is important to have an understanding of both illocutionary and perlocutionary utterances.
There are many different utterances or speech acts including questions, answers, commands, promises and statements. Having knowledge of each of these plays a large part in an individual being able to participate in a communications exchange.
An episode is a situation created by persons in a conversation. The same content can take on different meaning when the situation is different. For example, a phrase used among close family or friends may take on an entirely different meaning in a job interview. In interactions, people may punctuate the same episode differently, and they must then deal with those differences in subsequent episodes. This is especially true for people in bi-cultural or multi-cultural situations, who may identify specific acts which, had they occurred in an equivalent situation in the other culture, would have totally disrupted the episode.
Relationship is a higher level of meaning, at which relational boundaries are established as parameters for attitudes and behaviors. This building block is fairly easy to understand, as it is the dynamic of what connects two (or more) individuals during an exchange of information. Examples of relationships include parent / child, teacher / student, and strangers. Communication between strangers would likely be different from conversations amongst family members.
Life scripts can be understood as the patterns of episodes. On this level, "every individual's history of relationships and interactions will influence rules and interaction patterns." Life scripts are similar to an individual's autobiography, comprising the person's expectations for a variety of communicative events. Several CMM texts describe this building block as a "script for who we are" -- the role an individual plays in the movie of life. For example, an individual may believe they are funny, and therefore may act according to that perspective while engaged in different conversations.
The concept of culture in CMM theory relates to a set of rules for acting and speaking which govern what we understand to be normal in a given episode. There are different rules for social interaction depending on the culture. To some extent, during communication individuals act in accordance with their cultural values. While we often do n't even realize that culture impacts communication during day to day interactions, people must learn to be compatible with individuals from different cultures in order to have effective communication.
Pearce is adamant that CMM is not just an interpretive theory but is meant to be a practical theory as well. There is extensive literature involving the use of CMM to address family violence, intra-community relations, workplace conflict and many other social issues. One study employs CMM to understand the "perceived acts of discrimination manifested within the context of everyday interactions". By applying CMM, the researchers were able to explicate the rules of meaning-making that majority and minority groups followed in understanding the discriminatory acts. Another application came in 1994, when CMM was first gaining wider recognition; that study holds that the framework of CMM provides an understanding of "the structure and process of consumer decision making by placing those decisions within the context of a family's social reality".
Along this line, CMM theorists have used or developed several analysis models to help understand and improve communication. The models addressed here are the hierarchy model, the serpentine model, and charmed and strange loops. Examples for the first model have been adapted from ones Pearce uses in one of his writings, where he analyzes the courtroom exchange between Ramzi Yousef, the individual convicted of the 1993 World Trade Center bombing, and Kevin T. Duffy, the federal judge who presided over his trial. In Yousef's statement before sentencing, he criticizes the US for its hypocrisy, accuses it of being the premier terrorist, and reasserts his pride in his fight against the US. At the sentencing, Duffy accuses Yousef of being a virus, evil, perverting the principles of Islam, and interested only in death. Neither individual really talks to the other; each talks at the other.
The hierarchy model is the hierarchy of organized meanings as illustrated in the "Meaning '' section. The hierarchy model is a tool for an individual to explore the perspectives of their conversational partners while also enabling them to take a more thorough look at their own personal perspective. The elements at the top of each list form the overall context in which each story takes place and have an influence on the elements below them. The levels of meaning from lowest to highest are: content, speech act, episodes, relationship, life scripts, and cultural patterns.
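As a purely illustrative sketch, not drawn from the CMM literature, the six levels can be represented as a simple ordered structure in Python; the sample utterance and labels below are invented to show how each level is read in the context of the levels above it:

# Illustrative sketch only: CMM's six levels of meaning arranged from the
# lowest (content) to the highest (cultural patterns). The utterance and the
# example labels are invented, not taken from the CMM literature.
hierarchy = [
    ("content", "I hate you"),                         # the raw words spoken
    ("speech act", "teasing between friends"),         # what the words are doing
    ("episode", "joking around after class"),          # the situation containing the act
    ("relationship", "close friends"),                 # who the speakers are to each other
    ("life script", "I see myself as playful"),        # the speaker's self-story
    ("cultural pattern", "informal student culture"),  # norms that make this seem normal
]

# Each level is normally read in the context of every level above it.
for i, (level, example) in enumerate(hierarchy):
    higher = [name for name, _ in hierarchy[i + 1:]]
    print(f"{level}: '{example}' is contextualized by {higher}")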
In their book Theories of Human Communication (tenth edition), Stephen W. Littlejohn and Karen A. Foss describe a type of logical force called contextual force. Contextual force causes a person to follow a form of logic that leads one to believe that an action or interpretation is a direct result of, and is appropriate to, the context. For example, "How else could I have reacted?" or "Naturally I acted that way, it was appropriate to the situation" leads to the mentality of "I did what I had to do." Secondly, in CMM contexts are extremely important, and they are not static and unchanging. For example, a longstanding relationship can contextualize the episode of an ugly argument as something unpleasant but unavoidable; the couple will most likely work through the argument because their relationship contextualizes the episode. However, an episode of an ugly argument can contextualize a relationship if a couple is on their first date; in that case the argument is more likely to contextualize the relationship as over or not worth pursuing. What contextualizes what in the hierarchy of organized meanings overlaps and is interlinked in a complicated hierarchy of meanings which can shift at any moment.
Works cited: Littlejohn, Stephen W., and Karen A. Foss. "Chapter 6: The Conversation." Theories of Human Communication. 10th ed. Long Grove, IL: Waveland, 2017. Print.
The CMM theorists take the hierarchy model a step further by reinforcing the importance of interaction and adding the aspect of time. Pearce stresses that communication can not be done alone and that furthermore this usually occurs before or after another 's actions. Therefore, understanding past events and their impact on individuals is essential to improving communication. This new model is called the serpentine model and visually demonstrates how communication is a back and forth interaction between participants rather than just a simple transmission of information.
The embedded contexts illustrated in the hierarchy model represent a stable hierarchy, in which higher levels subsume lower levels. Sometimes, however, "lower levels can reflect back and affect the meaning of higher levels." This process is termed a "loop". CMM holds that there is a stronger "contextual effect", which works from higher levels down to lower levels, and a weaker "implicative effect", which works the other way. When loops are consistent with the hierarchy, the loop is identified as a "charmed loop". In this kind of interaction, each person's perceptions and actions help to reinforce the other's perceptions and actions.
When the lower levels are inconsistent with the higher levels, it is called a "strange loop". Essentially, "a 'strange' loop is a repetitive interactional pattern that alternates between contradictory meanings". For example, an alcoholic identifies that he is an alcoholic and then quits drinking. Since he has quit drinking, he convinces himself that he is not really an alcoholic, and so he starts drinking again, which makes him an alcoholic. He alternates between the contradictory perceptions of being an alcoholic and not being an alcoholic. The charmed and strange loop model also has its applications: in one study on the social construction of male college students' logical forces, the model was used to examine students' narratives describing their memorable sexual experiences.
Less commonly, there is a third variation called "subversive '' loop. Texts and contexts within a subversive loop are mutually invalidating and can prevent coherence and coordination. It may result in intentionally outrageous behavior, efforts to act in uninterruptible ways, or refusal to recognize the possibility that the outsider can understand the situation of the insider.
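As an informal illustration, not taken from the CMM literature, the alternation of a strange loop can be sketched in a few lines of Python; the states and actions simply restate the alcoholic example above:

# A minimal sketch of the strange loop in the alcoholic example: each
# self-definition triggers the action that undermines it, so the pattern
# alternates between contradictory meanings.
state = "I am an alcoholic"
for _ in range(4):
    if state == "I am an alcoholic":
        action = "quits drinking"
        state = "I am not an alcoholic"   # sobriety invalidates the old self-definition
    else:
        action = "starts drinking again"
        state = "I am an alcoholic"       # drinking re-establishes it
    print(f"{action} -> {state}")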
CMM theory is regarded as a kind of multi-tool because it provides a framework for structuring different themes, and many qualitative studies have used CMM to frame their findings. The theory describes how people interpret messages, know which rules or guides to follow, and act in ways that constitute appropriate responses. Its focus on cultural influence also gives insight into how individuals negotiate complex messages occurring at different levels of meaning. Since CMM attempts to explain how group members make sense of the regular flow of messages in a group conversation, individual perspectives and the group's approach to conversation need to be combined to create better meaning-making.
In order to provide criticism of the CMM theory, it is important to establish a baseline for what accounts for a "good '' study. Many scholars use different criteria for determining what makes a theory relevant, but they most often surround the following six concepts.
CMM has been criticized for being too broad in scope and highly abstract in nature. Poole wrote, "It is difficult... to paint with broad strokes and at the same time give difficult areas the attention they deserve." In 1987, Brenders also stated that "in its broad-stroked approach to human interaction, CMM has missed many of the linguistic, international, and theoretical nuances necessary for an understanding of communicative meaning." It has also been criticized for a conceptual apparatus that is "incomplete with regard to a full examination of the material layering of practices".
From a humanistic perspective, CMM theory is seen as valuable because it seeks to provide a way to clarify communication for better interaction and understanding. Its attention to "how people achieve meaning, their potential recurring conflicts, and the influence of the self on the communication process" is admirable. It promotes reform by encouraging individuals to explain particular viewpoints in order to reach understanding.
The final point can be seen as both a criticism and positive critique. Pearce and Cronen are constantly building upon the CMM theory which was originally outlined in the 1970s. By constant corrections and revisions, the theorists are working toward improving the examination of communication interactions; however, with each new update, minor course corrections alter the terms and meanings which increase the complexity of the overall theory.
CMM has guided research in an array of context and disciplines. Further discussion of CMM concepts and applications for research can be found at the following location: CMM Research
|
they killed paradise and put up a parking lot | Big Yellow Taxi - wikipedia
"Big Yellow Taxi '' is a song written, composed, and originally recorded by Canadian singer - songwriter Joni Mitchell in 1970, and originally released on her album Ladies of the Canyon. It was a hit in her native Canada (No. 14) as well as Australia (No. 6) and the UK (No. 11). It only reached No. 67 in the US in 1970, but was later a bigger hit there for her in a live version released in 1974, which peaked at No. 24. Charting versions have also been recorded by The Neighborhood (who had the original top US 40 hit with the track in 1970, peaking at No. 29), Maire Brennan, Amy Grant and Counting Crows.
Mitchell said this about writing the song to journalist Alan McDougall in the early 1970s:
I wrote ' Big Yellow Taxi ' on my first trip to Hawaii. I took a taxi to the hotel and when I woke up the next morning, I threw back the curtains and saw these beautiful green mountains in the distance. Then, I looked down and there was a parking lot as far as the eye could see, and it broke my heart... this blight on paradise. That 's when I sat down and wrote the song.
The song is known for its environmental concern -- "They paved paradise to put up a parking lot '' and "Hey farmer, farmer, put away that DDT now '' -- and sentimental sound. The line "They took all the trees, and put ' em in a tree museum / And charged the people a dollar and a half just to see ' em '' refers to Foster Botanical Garden in downtown Honolulu, which is a living museum of tropical plants, some rare and endangered.
In the song 's final verse, the political gives way to the personal. Mitchell recounts the departure of her "old man '' in the titular "big yellow taxi, '' which may refer to the old Metro Toronto Police patrol cars, which until 1986 were painted yellow. In many covers the departed one may be interpreted as variously a boyfriend, a husband or a father. The literal interpretation is that he is walking out on the singer by taking a taxi; otherwise it is assumed he is being taken away by the authorities.
Mitchell 's original recording was first released as a single and then, as stated above, included on her 1970 album Ladies of the Canyon. A later live version was released in 1974 (1975 in France and Spain) and reached No. 24 on the U.S. charts. Mitchell 's playful closing vocals have made the song one of the most identifiable in her repertoire, still receiving significant airplay in Canada. In 2005, it was voted No. 9 on CBC 's list of the top 50 essential Canadian tracks.
In 2007, Joni Mitchell released the album Shine, which includes a newly recorded, rearranged version of the song.
There are various slight alterations of the lyrics from different versions. Joni Mitchell 's original version runs:
They took all the trees And put them in a tree museum Then they charged the people A dollar and a half just to see ' em
whereas in Amy Grant 's version, the people are charged "twenty - five bucks, '' and in Mitchell 's own 2007 re-recording, the people are charged "an arm and a leg. ''
Bob Dylan, instead of singing about the "big yellow taxi '' that "took away my old man, '' sings, "A big yellow bulldozer took away the house and land. '' Similarly, in Mitchell 's live version of the song released on Miles of Aisles in 1974, she sings about "a big yellow tractor '' that "pushed around my house, pushed around my land. '' She then repeats the same verse, but with the original lyrics. While Amy Grant retains the taxi, her final reprise of the line about "paved paradise '' reads "steam rolled paradise. ''
On the Counting Crows 's 2002 cover version, lead singer Adam Duritz sings "Late, last night I heard the screen door sway / and a big yellow taxi took my girl away '' instead of "Late last night I heard the screen door slam / and a big yellow taxi took away my old man. ''
An animated music video of Joni Mitchell 's "Big Yellow Taxi '' was produced by John Wilson of Fine Arts Films as an animated short for the Sonny and Cher television show in the mid-1970s. The only commercial release of this full - length music video was in the Video Gems home video release on VHS titled John Wilson 's Mini Musicals, also released as The All Electric Music Movie. The home video also contains an animated music video of Mitchell 's song "Both Sides, Now ''.
In 1993, Máire "Moya '' Brennan covered the song.
In 1995, Amy Grant released a cover of "Big Yellow Taxi '' to pop and Adult Contemporary radio in the United States and United Kingdom. The song was the fourth pop radio single from her House of Love album (the third in the U.S.). Grant 's version featured slightly altered lyrics, which she changed at Joni Mitchell 's request. The single peaked at No. 67 on The Billboard Hot 100, No. 18 on the Adult Contemporary chart, and at No. 20 in the U.K. Grant also released a music video for the single, which was aired in the U.S. and U.K. and released to home video on Grant 's Greatest Videos 1986 - 2004 DVD. Grant also performed the song for her 2006 concert album, Time Again... Amy Grant Live.
In 2002, the Counting Crows covered the song with backing vocals by Vanessa Carlton. It was featured on the soundtrack to the film Two Weeks Notice and is the most successful version of the song to date (U.S. Billboard Adult Top 40). The single was certified Gold on 25 October 2004 by the Recording Industry Association of America. Originally, the song was a hidden track on the band 's 2002 album Hard Candy, and it did not include Carlton until it was to be featured in the film. New releases of the album included it as a track with her added, as with her in the video, although Counting Crows and Carlton neither appeared in the video together nor recorded together. This song became the band 's only Top 20 single in the UK, peaking at No. 13. This version slightly changed Mitchell 's original lyrics to describe when the eponymous taxi took "my girl '' away, instead of Mitchell 's "my old man. '' The original version of the song without Vanessa was included on the album "Nolee Mix '' which was released to promote the My Scene dolls.
Many other artists have covered the song.
|
when was the cat of nine tails last used | Cat o ' nine tails - wikipedia
The cat o ' nine tails, commonly shortened to the cat, is a type of multi-tailed whip that originated as an implement for severe physical punishment, notably in the Royal Navy and Army of the United Kingdom, and also as a judicial punishment in Britain and some other countries.
The term first appears in 1695, although the design is much older. It was probably so called in reference to its "claws '', which inflict parallel wounds. There are equivalent terms in many languages, usually strictly translating, and also some analogous terms referring to a similar instrument 's number of tails (cord or leather), such as the Dutch zevenstaart (seven tail (s)), negenstaart (nine tail (s)), the Spanish gato de nueve colas or the Italian gatto a nove code.
The cat is made up of nine knotted thongs of cotton cord, about 2 1/2 feet (76 cm) long, designed to lacerate the skin and cause intense pain.
It traditionally has nine thongs as a result of the manner in which rope is plaited. Thinner rope is made from three strands of yarn plaited together, and thicker rope from three strands of thinner rope plaited together. To make a cat o ' nine tails, a rope is unravelled into three small ropes, each of which is unravelled again.
Variations exist, either named cat (of x tails) or not, such as the whip used on adult Egyptian prisoners which had a cord on a cudgel branching into seven tails, each with six knots, used only on adult men, with boys being subject to caning, until Egypt banned the use of the device in 2001.
Sometimes the term "cat '' is used incorrectly to describe various other punitive flogging devices with multiple tails in any number, even one made from 80 twigs (so rather a limp birch) to flog a drunk or other offender instead of 80 lashes normally applicable under shariah law. The closed cat, one without tails, was called a starter.
The naval cat, also known as the "captain 's daughter '' (which in principle was used under his authority) weighed about 13 ounces (370 grams) and was composed of a handle connected to nine thinner pieces of line, with each line knotted several times along its length. Formal floggings -- those ordered by captain or court martial -- were administered ceremonially on deck, the crew being summoned to "witness punishment '' and the prisoner being brought forward by marines with fixed bayonets.
During the period of the Napoleonic Wars, the naval cat 's handle was made of rope about 2 feet (0.61 m) long and about 1 inch (25 mm) in diameter, and was traditionally covered with red baize cloth. The tails were made of cord about a quarter inch (6 mm) in diameter and typically 2 feet long. Drunkenness or striking an officer might incur a dozen lashes, which could be administered on the authority of the ship 's captain. Greater punishments were generally administered following a formal court martial, with Royal Navy records reflecting some standard penalties of two hundred lashes for desertion, three hundred for mutiny, and up to five hundred for theft. The offence of sodomy generally drew the death penalty, though one eighteenth century court martial awarded a punishment of one thousand lashes - a roughly equivalent sentence as there was no likelihood of survival.
A new cat was made for each flogging by a bosun 's mate and kept in a red baize bag until use. If several dozen lashes were awarded, each could be administered by a fresh bosun 's mate -- a left - handed one could be included to assure extra painful crisscrossing of the wounds. One dozen was usually awarded as a highly sensitizing prelude to running the gauntlet.
For summary punishment of Royal Navy boys, a lighter model was made, the reduced cat, also known as boy 's cat, boy 's pussy or just pussy, that had only five tails of smooth whip cord. If formally convicted by a court martial, however, even boys would suffer the punishment of the adult cat. While adult sailors received their lashes on the back, they were administered to boys on the bare posterior, usually while "kissing the gunner 's daughter '' (bending over a gun barrel), just as boys ' lighter "daily '' chastisement was usually over their (often naked) rear - end (mainly with a cane -- this could be applied to the hand, but captains generally refused such impractical disablement -- or a rope 's end). Bare - bottom discipline was a tradition of the English upper and middle classes, who frequented public schools, so midshipmen (trainee officers, usually from ' good families ', getting a cheaper equivalent education by enlisting) were not spared, at best sometimes allowed to receive their lashes inside a cabin. Still, it is reported that the ' infantile ' embarrassment of bare - bottom punishment was believed essential for optimal deterrence; cocky miscreants might brave the pain of the adult cat in the macho spirit of "taking it like a man '' or even as a "badge of honour ''.
On board training ships, where most of the crew were boys, the cat was never introduced, but their bare bottoms risked, as in other naval establishments on land, "the sting of the birch '', another favourite in public schools.
"The severest form of flogging was a flogging round the fleet. The number of lashes was divided by the number of ships in port and the offender was rowed between ships for each ship 's company to witness the punishment. '' Penalties of hundreds of lashes were imposed for the gravest offences, including sedition and mutiny. The prisoner was rowed around the fleet in an open boat and received a number of his lashes at each ship in turn, for as long as the surgeon allowed. Sentences often took months or years to complete, depending on how much a man was expected to bear at a time. Normally 250 -- 500 lashes would kill a man, as infections would spread. '' After the flogging was completed, the sailor 's lacerated back was frequently rinsed with brine or seawater, which was thought to serve as a crude antiseptic (now known not to be the case). Although the purpose was to control infection, it caused the sailor to endure additional pain, and gave rise to the expression, "rubbing salt into his wounds, '' which came to mean vindictively or gratuitously increasing a punishment or injury already imposed.
The British Army had a similar multiple whip, though much lighter in construction, made of a drumstick with attached strings. The flogger was usually a drummer rather than a strong bosun 's mate. Flogging with the cat o ' nine tails fell into disuse around 1870.
Whereas the British naval cat rarely cut (contrary to graphic films) but rather abraded the skin, the falls (tresses) of the British Army cat were lighter (around 1 / 8th of an inch) and the string was in fact codline - a very dense material akin to tarred string. Although the total whip would weigh only a fraction of a naval rope cat, the thin, dense codline tresses were far more likely to cut the skin.
It was also used elsewhere in the empire, notably at the penal colonies in Australia, and also in Canada (a dominion in 1867) where it was used until 1881. An 1812 drawing shows a drummer apparently lashing the buttocks of a naked soldier who is tied with spread legs on an A-frame made from sergeants ' half pikes. In many places, soldiers were generally flogged stripped to the waist.
The cat - o ' - nine - tails was also used on adult convicts in prisons; a 1951 memorandum (possibly confirming earlier practice) ordered all UK male prisons to use only cat o ' nine tails (and birches) from a national stock at Wandsworth prison, where they were to be ' thoroughly ' tested before being supplied in triplicate to a prison whenever a flogging was pending for use as prison discipline. In the 20th century, this use was confined to very serious cases involving violence against a prison officer, and each flogging had to be confirmed by central government.
Especially harsh floggings were given with it in secondary penal colonies of early colonial Australia, particularly at such places as Norfolk Island (apparently this had 9 leather thongs, each with a lead weight, meant as the ultimate deterrent for hardened life - convicts), Port Arthur and Moreton Bay (now Brisbane).
It was used on slave trade ships to punish the slaves.
Judicial corporal punishment was removed from the statute book in Great Britain in 1948. The cat was still being used in Australia in 1957 and is still in use in a few Commonwealth countries, although the cane is used in more countries.
Judicial corporal punishment has been abolished or declared unconstitutional since 1997 in Jamaica, St. Vincent and the Grenadines, South Africa, Zambia, Uganda (in 2001) and Fiji (in 2002).
However, some former colonies in the Caribbean have reinstated flogging with the cat. Antigua and Barbuda reinstated it in 1990, followed by the Bahamas in 1991 (where, however, it was subsequently banned by law) and Barbados in 1993 (only to be formally declared inhumane and thus unconstitutional by the Barbados Supreme Court).
Trinidad & Tobago never banned the "Cat ''. Under the Corporal Punishment (Offenders over Sixteen) Act 1953, use of the "Cat '' was limited to male offenders over the age of 16. The age limit was raised in 2000 to 18.
The Government of Trinidad & Tobago has been accused of torture and "cruel, inhuman and degrading '' treatment of prisoners, and in 2005 was ordered by the Inter-American Court of Human Rights to pay US $50,000 for "moral damages '' to a prisoner who had received 15 strokes of the "Cat '' plus expenses for his medical and psychological care; it is unclear whether the Court 's decisions were implemented. Trinidad & Tobago did not acknowledge the Court 's jurisdiction, since it had denounced the American Convention on Human Rights several years before the Court started hearing this case.
|
what was simon and garfunkel's first album | Simon & Garfunkel Discography - wikipedia
Simon & Garfunkel, an American singer - songwriter duo, has released five studio albums, fifteen compilation albums, four live albums, one extended play, 26 singles, one soundtrack, and four box sets since 1964. Paul Simon and Art Garfunkel first formed a duo in 1957 as Tom & Jerry, before separating and later reforming as Simon & Garfunkel.
Simon & Garfunkel 's debut album, Wednesday Morning, 3 A.M., was released on October 19, 1964. Initially a flop, it was re-released two years later with the new version of the single "The Sound of Silence '', which was overdubbed with electric instruments and drums by producer Tom Wilson. The re-released version peaked at number thirty in the US Billboard 200 chart and at twenty - four in the UK Albums Chart, and later received a platinum certification by the Recording Industry Association of America (RIAA). The overdubbed version of the eponymous single was released on their second studio album, Sounds of Silence, released on January 17, 1966. It peaked at twenty - one on the Billboard charts and at thirty in the UK Album Charts, and later received a three - times multi-platinum certification by the RIAA. Besides the same - named single, the album also featured Simon 's "I Am a Rock '', a song that first appeared on his 1965 debut solo album, The Paul Simon Songbook.
Simon & Garfunkel 's third album, Parsley, Sage, Rosemary and Thyme, was released on October 10, 1966, and produced five singles. It peaked at number four in the US and number thirteen in the UK, and received a three - time multi-platinum certification by RIAA. The single "Mrs. Robinson '' was included in the band 's first and only soundtrack, The Graduate, and was later included on their fourth studio album Bookends, which was released on April 3, 1968. It peaked at number one in both the US and UK, therefore becoming their first number one album, and received two - times multi-platinum in the US. On January 26, 1970, they released their fifth and final studio album, Bridge over Troubled Water. It was their most successful to date, peaking at number one in several countries, including the UK and US. The album sold over twenty - five million copies worldwide, and received eight - time multi-platinum in the US.
Despite the success of their fifth album, the duo Simon & Garfunkel decided to part company, announcing their break - up later that year. They have nonetheless made a number of reunion performances, including a free concert in New York City 's Central Park in 1981, which drew a crowd of half - a-million people and resulted in the live album The Concert in Central Park.
"Mrs. Robinson '' (live version) "The Boxer '' (live version) "Bridge over Troubled Water '' (live version)
|
where are bgt semi finals held this year | Britain 's Got Talent - wikipedia
Britain's Got Talent (often abbreviated to BGT) is a televised British talent show competition, broadcast on ITV. It is part of the global Got Talent franchise created by Simon Cowell, and is produced by Thames (formerly Talkback Thames) and Syco Entertainment, with its distribution handled by FremantleMedia. Since its premiere in June 2007, each series has been aired in late spring / early summer and hosted by Ant & Dec. To accompany each series since it first began, a sister show entitled Britain's Got More Talent, presented by Stephen Mulhern, is run on ITV2. The show was initially planned for 2005, before the first series of America's Got Talent, but a dispute between Paul O'Grady, the originally conceived host of the programme, and the broadcaster led to production being suspended until 2007.
Contestants of any age who possess some sort of talent can audition for the show. Those that enter for a series perform before a panel of judges to secure a place in the live episodes - the current line-up consists of Cowell, Amanda Holden, Alesha Dixon, and David Walliams. Those that make it through the auditions compete against other acts in a series of live semi-finals, with the winning two acts of each semi-final proceeding into the show's live final. The prize for winning the contest is a cash prize (the amount varying over the show's history) and an opportunity to perform at the Royal Variety Performance in front of members of the British Royal Family, including either Queen Elizabeth II or the Prince of Wales. To date, the show has had twelve winners, ranging from musicians and singers to variety acts, magicians and dancers.
A significant show in British popular culture, Britain 's Got Talent is the UK 's biggest television talent competition, ahead of both The X Factor (also created by Cowell) and The Voice UK. On average, it draws viewing figures of 9.9 million viewers per series - the show 's live final in the third series attracted a record 17.3 million viewers, obtaining a 64.6 % audience share at the time of its broadcast. At present, the programme is contracted to run until 2019.
The show 's format was devised by X Factor creator and Sony Music executive, Simon Cowell, who was involved in the creation of other Got Talent programmes across several different countries. To showcase his idea, a pilot episode was filmed in September 2005, with the judging panel consisting of Cowell, Fern Britton (at the time, presenter of This Morning), as well as tabloid journalist Piers Morgan. The pilot was not broadcast on television until it was shown as part of a documentary series entitled The Talent Show Story in January 2012.
The original plan for the show was for it to be aired within 2005 -- 2006, before the broadcast of America 's Got Talent, with Paul O'Grady presenting the programme under the title Paul O'Grady's Got Talent, after having hosted the pilot. His selection as the host in the original production plans was due to the popularity he was attaining from his teatime chat show, The Paul O'Grady Show. However, complications arose when O'Grady was involved in a row with ITV and refused to appear on another of the broadcaster 's programme, eventually defecting to Channel 4 to continue hosting his teatime show and effectively putting plans for the show on hold. In a 2010 interview, O'Grady commented about the row by stating:
"I did the pilot for Britain 's Got Talent -- which was originally going to be called Paul O'Grady's Got Talent. But I told the producers they were having a joke if they thought I would front a show with that title. The original panel of judges was going to be Simon Cowell, Fern Britton and Piers Morgan. I was the host. Then when I had the row with ITV I was banned from the studios. I remember I rang Simon and told him he had a huge hit on his hands, but there was no way I could do it. I said, if I am banned I have to be banned from everything. I ca n't be a hypocrite and come in and do this. I had to bow out. ''
On 12 February 2007, following the success of America's Got Talent the previous year, ITV announced their intentions for a British series of Got Talent. Their announcement revealed changes to the original plan for the programme, with Ant & Dec revealed as the hosts of the new programme. While Cowell remained as part of the judging panel, the new plan intended for David Hasselhoff and Cheryl Cole to join him. However, both resigned before the programme was due to air, leading to Morgan being part of the panel as originally planned and actress Amanda Holden joining him and Cowell as a judge; Hasselhoff would later join the panel for the programme's fifth series after being a part of the panel for America's Got Talent, while Cowell later employed Cole as a replacement for Sharon Osbourne on The X Factor. At the same time, the broadcaster also announced that the show would be accompanied by a sister show on ITV2, entitled Britain's Got More Talent, with Stephen Mulhern as its presenter.
The show holds two rounds of auditions for contestants. The first round, referred to as the "open auditions '', are held across several different cities around the UK during the Autumn months. The second round, referred to as the "Judges ' Auditions '', are held the following year during January and February, prior to the broadcast of the show later that year during spring or early summer - these auditions consists of the contestants who made it through the first round, and are held within a select set of cities, which has commonly included Cardiff, Glasgow, Manchester, Birmingham and London. For the Judges ' Auditions, each site used for these is located within a theatre or convention hall. These sites are primarily chosen for the purpose of having facilities that can handle large volumes of contestants, with each set up into three arrangements when auditions are taking place: a waiting area for contestants to prepare and await their turn to perform, with monitors to allow them to see the performances of other contestants; the wings, where a contestant enters and leaves from, and where friends and family of the contestant can view them from; and the main stage, where the contestant performs their act before the judging panel, who are located in front of the stage. The main stage area is usually modified to include the judges ' panel desk, along with a special lighting rig above the stage, consisting of Xs that each have the name of a judge under them.
Each contestant that auditions is given a number by the production team and remains in the waiting area until called out into the wings, giving them a certain amount of time to prepare their act. Upon being allowed onto the main stage, they will usually be asked by one of the judges for their name and what they plan to perform, along with other details such as age, occupation, personal background, and what they wish to achieve if they won. After this, they are then given three minutes to conduct their performance before both the judges and a live audience, with some acts being supported by a backing track provided by the production team. If at any time, the judges find the performance to be unconvincing, boring or completely wrong, they may use the buzzer before them, which changes the Xs from white to red; if all the judges press their buzzers, then the performance is immediately over. However, a judge can retract their buzzer 's use if they felt they had done so prematurely before witnessing a contest 's performance to the end; this is true if the performance appears to look bad, but later turns out to have been good in their eyes.
Once a performance is over, each judge will give an overview of what they thought about the act, before casting a vote. If the contestant (s) receives a majority vote of "Yes '', they then proceed onto the next stage in the contest, otherwise they are eliminated from that series ' contest. In Series 8, a new feature was added to the auditions, which had been previously used on Germany 's Got Talent, called the "Golden Buzzer ''. Situated in the centre of the judges ' desk, the Golden Buzzer allows a judge to effectively send the contestant (s) into the live semi-finals, regardless of the opinions of the other judges, if they felt that a contestant 's performance was outstanding; when pressed, the judge 's X turns gold and the stage is showered in gold glitter strips. However, as a general rule, they may only press it once per series and can not press it again in later auditions; the hosts of Britain 's Got Talent may also press the Golden Buzzer for a contestant, but must also adhere to the same rule.
Filming for the show begins during the Judges ' Auditions, with footage that is recorded in these being edited into a series of episodes that are aired on a weekly schedule before the live semi-finals, and featuring the best highlights of the auditions, including those from the best performances the judges saw, to the worst that appeared, whether funny, poorly conceived or generally bad, and in some cases totally inappropriate. These episodes also include interviews with the contestants in a separate area, as well as additional interviews conducted within the wings before and after a performance, that are conducted by the hosts, who also give personal comments about a performance while watching from the wings.
This stage takes place after the auditions have been completed, and is also referred to as Deliberation Day, in which the judges look through the acts that have successfully made it to this stage, and begin whittling them down to those who would stand a fair chance in the live semi-finals. The amount that goes through has varied over the show 's history, though usually consists of a number that can be divided equally over the semi-finals being held in a series. Once the judges have decided on who will go through, all contestants that have reached this stage are called back to discover if they will progress into the live semi-finals or not. After this has been done, the acts are divided up between the semi-finals that the series will have; usually eight in each series, except for the sixth to tenth series which had nine acts per semi-final.
For the fifth series, some acts were asked to perform again, as the judges had had difficulty coming to a final decision on the semi-finalist, and thus needed to see their performance again in order to make up their minds; it is only time in the show 's history that this has happened, and has not been repeated since.
Contestants that make it into the semi-finals by making it through the auditions and being chosen by the judges (or, from series 8, received the Golden Buzzer during their audition), perform once more before an audience and the judges, with their performance broadcast on live television. Until the tenth series, live episodes were broadcast from The Fountain Studios in Wembley, the same site used for The X Factor, but following its closure in 2016, the show relocated its live episodes to Elstree Studios in 2017, before moving to Hammersmith Apollo the following year. Like the Audition stage of the contest, each semi-finalist must attempt to impress by primarily conducting a new routine of their act within the same span of time; the judges can still use a buzzer if they are displeased with a performance and can end it early if all the buzzers are used, along with giving a personal opinion about an act when the performance is over. Of the semi-finalists that take part, only two can progress into the final, which is determined by two different types of votes - a public phone vote, and a judges ' vote.
The phone vote, which occurs after all the semi-finalists have performed, determines the first winner of the semi-final and takes place via a special phone line over a short break from the programme. During this time, the public votes for the act they liked best, through a phone number in which the final two digits are different for each semi-finalist - these digits are primarily arranged by the order of their appearance. Once the lines have closed and the votes have been counted, the programme airs a live results episode, in which the semi-finalist with the highest number of votes automatically moves into the final. The second winner is determined by the judges ' vote, which is held after the result of the phone vote has been given out and is conducted between the second and third most popular acts in the public vote, deciding which of the two moves on to the final. After the number of judges was increased to four, the rule on the judges ' vote was modified - if the vote is tied, the semi-finalist with the second highest tally of public votes automatically moves on to the final. For the eleventh series only, the judges ' vote was not used, meaning that the finalists were determined by the public vote alone - the semi-finalists in each semi-final with the highest and second - highest tallies of votes moved on to the final. From the start of the sixth series, the show introduced a new format known as the "Judges ' Wildcard '', in which the judges can reinstate an act that had been eliminated from the semi-finals, through a private vote conducted before the airing of the final. This format was later expanded to include a "Public Wildcard '', in which the public could vote on an act that had been eliminated in the semi-finals during the judges ' vote and reinstate it for the final; although used in the ninth and tenth series, this format was dropped for the eleventh series.
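The combined effect of these voting rules can be summarised in a short, purely illustrative sketch (below); the function name, act names and vote tallies are hypothetical and do not represent any software actually used by the programme.

    # Illustrative sketch only; all names and figures are hypothetical.
    def semi_final_winners(public_votes, judges_choice=None):
        """Return the two acts that progress from a semi-final.

        public_votes  -- dict mapping each act to its public vote tally
        judges_choice -- the act the judges pick from the second- and
                         third-placed acts, or None if their vote is tied
                         or not held (as in the eleventh series)
        """
        # Rank the acts by public vote, highest first.
        ranking = sorted(public_votes, key=public_votes.get, reverse=True)
        first, second, third = ranking[0], ranking[1], ranking[2]
        # The top act in the phone vote goes straight through; the judges
        # choose between second and third, defaulting to the public's
        # second choice when their vote is tied or not used.
        if judges_choice in (second, third):
            return first, judges_choice
        return first, second

    # Hypothetical example with four acts:
    votes = {"Act A": 41000, "Act B": 30500, "Act C": 28200, "Act D": 12000}
    print(semi_final_winners(votes))                         # ('Act A', 'Act B')
    print(semi_final_winners(votes, judges_choice="Act C"))  # ('Act A', 'Act C')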
For the final of each series, the format of the contest operates in exactly the same manner as the semi-finals, including being broadcast live on television, though this time the winner is determined purely by a phone vote, with the finalists attempting to secure more votes than the others by performing a new routine at their best. Upon the result being given, the top two acts are brought onto the stage, with the hosts announcing which of them received the most votes. The winner of each year 's contest receives a cash prize, and earns the opportunity to perform in the Royal Variety Performance later that year.
For the show 's scheduling, the live episodes are usually arranged to take place over the course of a week, with the semi-finals aired mainly on weekdays, and the final aired on the weekend; in some series, the schedule featured a break between the live episodes of the semi-finals, due to coverage of live events. All live episodes are divided into two parts when they are aired - the first half focuses on performances, each beginning with a short clip of the semi-finalist / finalist 's background, and ending with comments by the judges; the second half occurs after voting is closed, usually after another programme has aired between the two parts, and focuses on the results of the phone vote, with a guest performance taking place prior to the announcement of the results. This format was initially used for the live finals only, up until the start of the semi-finals in the fourth series.
For the first four series after the show began in June 2007, the judging panel consisted of music executive and television producer Simon Cowell, television and West End star Amanda Holden, and newspaper editor and journalist Piers Morgan. In 2009, the producers made plans to alter the show 's format to allow for a fourth judge when the third series was set to begin, with plans for Kelly Brook to be the new judge on the panel. Less than a week after the series began, the producers dropped this change in the belief that the alteration would complicate the show 's format, resulting in Brook being credited as a guest judge for that series. In 2010, Cowell fell ill during filming of the fourth series and was unable to attend the Birmingham auditions, leading to Louis Walsh stepping in as a guest judge in his place until he had recovered.
In 2011, the panel saw its first major change, when Morgan revealed he was leaving the show to travel to America and begin filming his new show. As Cowell had announced he would n't be present for the fifth series ' auditions, due to his busy schedule with launching The X Factor USA, the show 's producers decided to amend the panel, and recruited comedian Michael McIntyre, and actor, singer and former America 's Got Talent judge David Hasselhoff, to join Holden during the auditions, with Walsh returning as a guest judge for the London auditions due to Hasselhoff 's commitments with a pantomime at that time. When the series entered its live episodes, Cowell returned to oversee the acts as a fourth judge. Later that year, in October 2011, both Hasselhoff and McIntyre declined to return for the sixth series, while Cowell announced he was returning full - time to the show.
On 2 January 2012, the producers revealed their decision to adopt the use of a fourth judge for the programme 's format, announcing that both Cowell and Holden would now be joined by David Walliams and Alesha Dixon for the sixth series, with the latter moving to the talent show after deciding to leave BBC 's Strictly Come Dancing. During filming in February, Holden was unable to attend the London auditions, due to having given birth to her daughter at that time, which had left her suffering some after - effects from her pregnancy. As a result, the producers brought in actress and model Carmen Electra as a guest judge until she recovered. In subsequent series, the line - up remained as Cowell, Holden, Walliams and Dixon, with no further incidents except for the eighth series in 2014 - Cowell missed the first day of the Manchester auditions, leading to Ant & Dec filling in for him, and was also absent for the final day of the London auditions - and the tenth series in 2016 - Cowell was late for an audition, and was temporarily replaced by Walliams ' mother Kathleen, who was attending it.
The first series was aired during 2007, between 9 -- 17 June. Auditions for this series took place within the cities of Manchester, Birmingham, London and Cardiff, between January and February earlier that year. The series had 3 live semi-finals, featuring a total of 24 semi-finalists, all of whom were vying for a chance to perform at the Royal Variety Performance, as well as claiming a £ 100,000 cash prize. The series was won by opera singer Paul Potts; the results of the other finalists were not announced.
The second series was aired during 2008, between 12 April and 31 May, and featured notable differences. Not only did the series run for much longer, but auditions also took place in Blackpool and Glasgow - the latter following complaints that Scotland had n't been visited during the previous series - along with Manchester, Birmingham, London and Cardiff. In addition, the show had five live semi-finals, featuring a total of 40 semi-finalists. The series was won by street - dancer George Sampson, with dance duo Signature coming in second, and singer Andrew Johnston placing third.
The third series was aired during 2009, between 11 April and 30 May, with auditions held in the same five cities as before. Initially, the producers intended to change the format by including a fourth judge on the panel, but this was dropped a few days after auditions began. The series was won by dance troupe Diversity, with singer Susan Boyle coming in second, and saxophonist Julian Smith placing third.
The fourth series was aired during 2010, between 17 April and 5 June; a single episode of this series, intended for airing on 22 May, was pushed back to 23 May, in order to avoid it clashing with live coverage of the UEFA Champions League Final that year. The auditions were once more held across the same five cities as before, though the series also held auditions in Newcastle upon Tyne; the city had originally been planned to hold auditions for the previous series, but these were cancelled before this could happen. Owing to illness, Cowell was unable to attend the Birmingham auditions, which led to Louis Walsh being brought in as a guest judge for these. The series was won by gymnastic troupe Spelbound, with dancing duo Twist and Pulse coming in second, and drummer Kieran Gaffney placing third.
The fifth series was aired during 2011, between 16 April and 4 June, and was the first to be broadcast completely in high - definition; as before, a single episode intended for airing on 28 May was pushed back to 29 May, to avoid it clashing with live coverage of the UEFA Champions League Final that year. Auditions took place across the same five cities, though also included Liverpool. This series saw a change in the judging panel, following Piers Morgan 's departure from the show, with Holden joined by David Hasselhoff and Michael McIntyre during the auditions; Cowell appeared during the live episodes of the series with the rest of the panel, while Louis Walsh returned as a guest judge for the London auditions when Hasselhoff could n't attend due to other commitments at the time. The series was won by singer Jai McDowall, with singer Ronan Parke coming in second, and boyband New Bounce placing third.
The sixth series was aired during 2012, between 24 March and 12 May. For this series, the cash prize was increased from £ 100,000 to £ 500,000, and a new feature was introduced called the "Wildcard '', in which the judges could select one of the acts eliminated in the semi-finals to return and compete in the final. The show also increased the number of semi-finalists to 45, with nine acts per semi-final, and the number of judges for the entire contest to four; the previous series had also featured four judges, albeit for the live episodes only. In addition, the show attempted to bring in a new way of voting for the semi-finals via a mobile app, but this was suspended for the series after it suffered technical problems during the first live semi-final.
This series featured an open audition in London, along with inviting other acts to audition via YouTube, before holding Judges ' Auditions within Birmingham, London, Manchester, Cardiff, Blackpool and Edinburgh. As both McIntyre and Hasselhoff had announced in late 2011 that they would n't be returning, the show announced on 2 January 2012 that they would be replaced by David Walliams and Alesha Dixon, who would join both Holden and Cowell for the new series, the latter having announced he would be returning as a full - time judge on the show. Holden was unable to attend some of the auditions due to her pregnancy that year, leading to Carmen Electra stepping in as a guest judge for these. The series was won by trainer and dog duo Ashleigh and Pudsey, with opera duo Jonathan and Charlotte coming in second, and Welsh boys choir Only Boys Aloud placing third.
The seventh series was aired during 2013, between 13 April and 8 June; the show took a break on 29 May, due to live football coverage of England 's friendly with the Republic of Ireland. While the show retained the new features introduced in the previous series, the cash prize was reduced to £ 250,000, with the series featuring auditions within five cities - Birmingham, London, Cardiff, Glasgow and Manchester. The series was won by shadow theatre troupe Attraction, with comedian Jack Carroll coming in second, and opera duo Richard & Adam placing third.
The eighth series aired during 2014, between 12 April and 7 June. This series was the first to introduce the "Golden Buzzer '', and for the first time since the first series, auditions were not held in Scotland, instead being held in Belfast, Northern Ireland, along with Cardiff, London, Birmingham and Manchester; Edinburgh joined these cities to hold open auditions in late 2013, along with Blackpool and Brighton, with additional open auditions held in various local branches of Morrisons within "Talent Spot '' tents, owing to the show 's sponsorship deal with the supermarket chain at the time. The series was won by boy band Collabro, with opera singer Lucy Kay coming in second, and rapper duo Bars & Melody placing third.
The ninth series was aired during 2015, between 11 April and 31 May. This series saw the "Wildcard '' feature updated; along with the judges being able to put forth an eliminated act from the semi-finals into the final (referred to as the Judges ' Wildcard), the show now also allowed the public to vote between the three most popular eliminated acts, with the one with the highest number of votes going forward into the final - this act is referred to as the Public Wildcard. Auditions took place in Edinburgh, Manchester, Birmingham, and London, with the latter three cities holding open auditions in late 2014 along with Newcastle, Cardiff, Portsmouth, Leeds, Norwich, and Bristol. The winner of the series was trainer and dog duo Jules O'Dwyer & Matisse, with magician Jamie Raven coming second, and Welsh choir Côr Glanaethwy placing third.
The tenth series was aired during 2016, between 9 April and 28 May. Auditions were held in Liverpool, Birmingham and London, with all three holding open auditions in late 2015 along with Cardiff, Glasgow, and Manchester. It was the last series to hold live episodes at The Fountain Studios, before its closure at the end of the year. The series was won by magician Richard Jones, with singer Wayne Woodward coming in second, and dance group Boogie Storm placing third.
The eleventh series was aired during 2017, between 15 April and 3 June; the final was originally planned for 4 June, but this was moved forward to avoid it clashing with the One Love Manchester benefit concert that day. The series saw two major changes: the first saw the total number of semi-finalists reduced to 40, with eight per semi-final, as it had been prior to the sixth series; the second saw the judges ' vote being dropped, with the two semi-finalists with the highest number of public votes in each semi-final moving on into the final. In addition, the live episodes were now broadcast from Elstree Studios, owing to the closure of the previous site. Auditions were held within Salford, Birmingham, London, and Blackpool, with the latter two cities holding open auditions in late 2016, along with Peterborough, Cardiff, Edinburgh, Kingston upon Hull, Lincoln, Reading, Manchester and Luton. The series was won by pianist Tokio Myers, with magician Issy Simpson coming second, and stand - up comedian Daliso Chaponda placing third.
The twelfth series was aired during 2018, between 14 April and 3 June. Following the previous series, the judges ' vote was brought back into the show 's format, while the live episodes were aired from Hammersmith Apollo and presented solely by Declan Donnelly; although Anthony McPartlin had stepped down from his TV commitments in March 2018, he still appeared in the series ' audition episodes, which had been filmed during January and February that year. Auditions were held within Manchester, Blackpool and London, with two of these cities holding open auditions in 2017, alongside a number of locations within the United Kingdom and Ireland, including Edinburgh, Perth, Dundee, Aberdeen, Glasgow, Dublin, and Inverness. The series was won by stand - up comedian Lost Voice Guy, with comedy singer / pianist Robert White coming second, and singer Donchez Dacres placing third.
Britain 's Got More Talent is a companion sister show that is broadcast on ITV2, and is aired after each episode of Britain 's Got Talent since the main show began in June 2007. Hosted by Stephen Mulhern (with occasional appearances by Ant & Dec), the programme operates in a similar manner to the former spin off show The Xtra Factor (the companion show of The X Factor), in that it features interviews with contestants and behind - the - scenes footage, though what is shown depends on what the main show is focused on during its broadcast. When the auditions stage is being broadcast, its sister show focuses on highlights of acts that could n't be shown on the main show, with Mulhern operating in a similar manner to Ant & Dec by viewing the performances in the wings and giving comments, as well as interviewing contestants before and after their performance. When the main show begins broadcasting live episodes, its sister show conducts live "after - show '' episodes, featuring interviews with the semi-finalists / finalists, as well as chatting with the judges.
In addition to this format, each year also sees Britain 's Got More Talent broadcasting a special set of compilation episodes featuring the best and worst auditions from that year 's contest, entitled Britain 's Got Talent: Best and Worst, which are presented by Mulhern who introduces each clip shown.
Britain 's Got Talent has been nominated for a number of National Television Awards in the category of ' Most Popular Talent Show ' since 2007, but has lost out to its sister shows The X Factor and Strictly Come Dancing. Ant and Dec have won the award for ' Most Popular Entertainment Presenters ' at the same awards for twelve consecutive years (as of January 2014). Britain 's Got Talent was also nominated for two British Academy Television Awards in 2008, but failed to win either. In 2007 and 2008, the show was nominated at the TV Quick and TV Choice Awards in the ' Best Talent Show ' category, losing out to The X Factor and Strictly Come Dancing respectively.
In 2008, it was a recipient of a Royal Television Society Programme Award for its technical achievements. It has also won four Nickelodeon UK Kids ' Choice Awards from five nominations. In 2009, it won its first ever Digital Spy Reality Award, with George Sampson named ' Favourite Reality Contestant '. The show was further nominated in the ' Reality Show ' category, but lost out to The X Factor, and also received a nomination in the ' Reality TV Presenter ' category for Ant & Dec and two nominations in the ' Reality TV Judge ' category for Cowell and Morgan.
The show was criticised by psychologist Glenn Wilson, who referred to it as a "freak show ''. He stated that "(contestants ') deficiencies and shortcomings are as important as their talent. We enjoy the stress we are putting these people under -- will they or will they not survive? ''
The treatment of contestants at the audition stage was heavily criticised by the Daily Mail, which described applicants being kept waiting for over 10 hours with no food or drink provided, and with no certainty of being allowed to perform more than a few seconds of their act. It also detailed how staff intentionally built up the hopes of low - quality performers in order to maximise the dramatic effect of the judges ' put - downs, and the fine points of the contracts performers must sign, which give the show infinite freedom to "modify '' the footage for its own purposes, and to use the footage indefinitely for whatever purpose it chooses.
In two separate interviews in 2012, MC Kinky said "Shows like X Factor and Britain 's Got Talent reduce the art of making music and practising your craft to the level of a low rent game show with huge financial backing and support. It 's a means to make money, not a means to produce ground breaking or interesting artists that demonstrate what they are feeling or are compelled to do. It 's corporate '' and "it 's a churn ' em out fast food form of putrid shit that I have no affiliation with ''.
In 2013, Bruce Forsyth questioned the show 's allowing children to audition. He said, "I do n't think that 's entertainment. I do n't think they should put children on that are too young. If you 're going to do that, have a separate show. Have a children 's show, British Children Have Talent. '' Cowell responded to Forsyth, stating that: "someone, Mr Grumpy, said we should n't have children your age on the show '', after the performance of dance troupe Youth Creation. Jessie J joined the debate, declaring: "I can not agree with kids having to go through three or four auditions when it 's purely for ridicule. I do n't understand why it 's legal, I think it 's wrong ''.
In 2013 it was revealed that up to 50 % of acts on the televised shows had been headhunted by producers. In 2012, electropop band Superpowerless were approached to appear in the semi-finals. They attended the audition after assurances that the act would be portrayed in a positive light. On the day they felt that all interviews, especially those with Stephen Mulhern, were conducted in a manner intending to portray them in a negative light, reducing their act to a novelty / comedy routine intended for ridicule and humiliation. While many newspapers wrote articles on this topic, very few were published as the news outlets were told that running the story would cut that publication out of any advance coverage of the show in the future.
Between 2008 and 2011, several of the show 's semi-finalists, finalists, and winners from various series took part in a live tour entitled "Britain 's Got Talent Tour ''. The event consisted of several shows held across various UK cities during the summer months, with site locations including Cardiff, Liverpool, Birmingham, Belfast, Sheffield, Glasgow, Edinburgh, Nottingham, London, and Manchester. When the first live tour, hosted by Stephen Mulhern, was announced on 17 April 2008, demand for tickets for the thirteen dates set for it was high. This led to an extension in the number of performances, increasing the overall number from twelve - the ten finalists from that year 's series and two of the semi-finalists, Tracey Lee Collins and Anya Sparks - to twenty - two, including matinées and a duet with Faryl Smith and Andrew Johnston. In 2009, Mulhern hosted a new live tour, which initially featured four dates before it was later increased to run for eighteen shows, and included performances by Diversity, Flawless, Aidan Davis, Shaun Smith, Stavros Flatley, Hollie Steel, 2 Grand, Julian Smith, Shaheen Jafargholi, Susan Boyle, Darth Jackson, DJ Talent and the 2008 winner, George Sampson. In 2010, a third live tour was created, featuring a much larger schedule and taking place across 16 cities and 23 shows. It was hosted by comedian Paddy McGuinness, and featured performances by that year 's finalists of Britain 's Got Talent - Spelbound, Twist & Pulse, Kieran Gaffney, Tobias Mead, Tina & Chandi, Paul Burling, Christopher Stone, Janey Cutler, Liam McNally and Connected.
In 2011, a fourth live tour was created. Hosted by Mulhern, it featured performances by the finalists of that year 's Britain 's Got Talent - Jai McDowall, Ronan Parke, New Bounce, Razy Gogonea, Michael Collings, Paul Gbegbaje, Steven Hall, James Hobley, Les Gibson and Jean Martyn. However, ticket sales were drastically reduced due to low interest in the tour, as a result of the financial climate that year. Because of concerns that a new tour would flop if sales failed to improve, the tour was axed in 2012.
There are 6 pieces of related merchandise:
Since 2010, a Britain 's Got Talent app has been available on Apple 's App Store and Google Play. The features of the app vary from year to year but always include an interactive feature (e.g. a buzzer, polls or quizzes), relevant social media feeds and clips from the show. In 2015, free in - app voting was introduced. This means viewers are able to vote free of charge for five acts of their choice per voting window during the semi-finals and final rounds.
why should we always treat all rational beings as ends and never merely as means | Kantian ethics - wikipedia
Kantian ethics refers to a deontological ethical theory ascribed to the German philosopher Immanuel Kant. The theory, developed as a result of Enlightenment rationalism, is based on the view that the only intrinsically good thing is a good will; an action can only be good if its maxim -- the principle behind it -- is duty to the moral law. Central to Kant 's construction of the moral law is the categorical imperative, which acts on all people, regardless of their interests or desires. Kant formulated the categorical imperative in various ways. His principle of universalizability requires that, for an action to be permissible, it must be possible to apply it to all people without a contradiction occurring. If a contradiction occurs, the act violates Aristotle 's principle of non-contradiction, which states that just actions can not lead to contradictions. Kant 's formulation of humanity, the second formulation of the Categorical Imperative, states that humans are required never to treat others merely as a means to an end, but always, additionally, as ends in themselves. The formulation of autonomy concludes that rational agents are bound to the moral law by their own will, while Kant 's concept of the Kingdom of Ends requires that people act as if the principles of their actions establish a law for a hypothetical kingdom. Kant also distinguished between perfect and imperfect duties. A perfect duty, such as the duty not to lie, always holds true; an imperfect duty, such as the duty to give to charity, can be made flexible and applied in particular times and places.
American philosopher Louis Pojman has cited Pietism, political philosopher Jean - Jacques Rousseau, the modern debate between rationalism and empiricism, and the influence of natural law as influences on the development of Kant 's ethics. Other philosophers have argued that Kant 's parents and his teacher, Martin Knutzen, influenced his ethics. Those influenced by Kantian ethics include philosopher Jürgen Habermas, political philosopher John Rawls, and psychoanalyst Jacques Lacan. German philosopher G.W.F. Hegel criticised Kant for not providing specific enough detail in his moral theory to affect decision - making and for denying human nature. German philosopher Arthur Schopenhauer argued that ethics should attempt to describe how people behave and criticised Kant for being prescriptive. Michael Stocker has argued that acting out of duty can diminish other moral motivations such as friendship, while Marcia Baron has defended the theory by arguing that duty does not diminish other motivations. The Catholic Church has criticised Kant 's ethics as contradictory and regards Christian ethics as more compatible with virtue ethics.
The claim that all humans are due dignity and respect as autonomous agents means that medical professionals should be happy for their treatments to be performed on anyone and that patients must never be treated merely as useful for society. Kant 's approach to sexual ethics emerged from his view that humans should never be used merely as a means to an end, leading him to regard sexual activity as degrading and to condemn certain specific sexual practices - for example, extramarital sex. Feminist philosophers have used Kantian ethics to condemn practices such as prostitution and pornography because they treat women as means. Kant also believed that, because animals do not possess rationality, we can not have duties to them except indirect duties not to develop immoral dispositions through cruelty towards them. Kant used the example of lying as an application of his ethics: because there is a perfect duty to tell the truth, we must never lie, even if it seems that lying would bring about better consequences than telling the truth.
Although all of Kant 's work develops his ethical theory, it is most clearly defined in Groundwork of the Metaphysics of Morals, Critique of Practical Reason and Metaphysics of Morals. As part of the Enlightenment tradition, Kant based his ethical theory on the belief that reason should be used to determine how people ought to act. He did not attempt to prescribe specific action, but instructed that reason should be used to determine how to behave.
In his combined works, Kant constructed the basis for an ethical law by the concept of duty. Kant began his ethical theory by arguing that the only virtue that can be unqualifiedly good is a good will. No other virtue has this status because every other virtue can be used to achieve immoral ends (the virtue of loyalty is not good if one is loyal to an evil person, for example). The good will is unique in that it is always good and maintains its moral value even when it fails to achieve its moral intentions. Kant regarded the good will as a single moral principle which freely chooses to use the other virtues for moral ends.
For Kant a good will is a broader conception than a will which acts from duty. A will which acts from duty is distinguishable as a will which overcomes hindrances in order to keep the moral law. A dutiful will is thus a special case of a good will which becomes visible in adverse conditions. Kant argues that only acts performed with regard to duty have moral worth. This is not to say that acts performed merely in accordance with duty are worthless (these still deserve approval and encouragement), but that special esteem is given to acts which are performed out of duty.
Kant 's conception of duty does not entail that people perform their duties grudgingly. Although duty often constrains people and prompts them to act against their inclinations, it still comes from an agent 's volition: they desire to keep the moral law. Thus, when an agent performs an action from duty it is because the rational incentives matter to them more than their opposing inclinations. Kant wished to move beyond the conception of morality as a set of externally imposed duties and to present an ethics of autonomy, in which rational agents freely recognise the claims reason makes upon them.
Applying the categorical imperative, duties arise because failure to fulfil them would either result in a contradiction in conception or in a contradiction in the will. The former are classified as perfect duties, the latter as imperfect. A perfect duty always holds true -- there is a perfect duty to tell the truth, so we must never lie. An imperfect duty allows flexibility -- beneficence is an imperfect duty because we are not obliged to be completely beneficent at all times, but may choose the times and places in which we are. Kant believed that perfect duties are more important than imperfect duties: if a conflict between duties arises, the perfect duty must be followed.
Main Article: Categorical Imperative
The primary formulation of Kant 's ethics is the categorical imperative, from which he derived four further formulations. Kant made a distinction between categorical and hypothetical imperatives. A hypothetical imperative is one we must obey if we want to satisfy our desires: ' go to the doctor ' is a hypothetical imperative because we are only obliged to obey it if we want to get well. A categorical imperative binds us regardless of our desires: everyone has a duty to not lie, regardless of circumstances and even if it is in our interest to do so. These imperatives are morally binding because they are based on reason, rather than contingent facts about an agent. Unlike hypothetical imperatives, which bind us insofar as we are part of a group or society which we owe duties to, we can not opt out of the categorical imperative because we can not opt out of being rational agents. We owe a duty to rationality by virtue of being rational agents; therefore, rational moral principles apply to all rational agents at all times.
Kant 's first formulation of the Categorical Imperative is that of universalizability:
Act only according to that maxim by which you can at the same time will that it should become a universal law.
When someone acts, it is according to a rule, or maxim. For Kant, an act is only permissible if one is willing for the maxim that allows the action to be a universal law by which everyone acts. Maxims fail this test if they produce either a contradiction in conception or a contradiction in the will when universalized. A contradiction in conception happens when, if a maxim were to be universalized, it ceases to make sense because the "... maxim would necessarily destroy itself as soon as it was made a universal law. '' For example, if the maxim ' It is permissible to break promises ' was universalized, no one would trust any promises made, so the idea of a promise would become meaningless; the maxim would be self - contradictory because, when universalized, promises cease to be meaningful. The maxim is not moral because it is logically impossible to universalize -- we could not conceive of a world where this maxim was universalized. A maxim can also be immoral if it creates a contradiction in the will when universalized. This does not mean a logical contradiction, but that universalizing the maxim leads to a state of affairs that no rational being would desire. For example, Driver argues that the maxim ' I will not give to charity ' produces a contradiction in the will when universalized because a world where no one gives to charity would be undesirable for the person who acts by that maxim.
Kant believed that morality is the objective law of reason: just as objective physical laws necessitate physical actions (apples fall down because of gravity, for example), objective rational laws necessitate rational actions. He thus believed that a perfectly rational being must also be perfectly moral, because a perfectly rational being subjectively finds it necessary to do what is rationally necessary. Because humans are not perfectly rational (they partly act by instinct), Kant believed that humans must conform their subjective will to objective rational laws, which he called conformity obligation. Kant argued that the objective law of reason is a priori, existing externally from rational beings. Just as physical laws exist prior to physical beings, rational laws (morality) exist prior to rational beings. Therefore, according to Kant, rational morality is universal and can not change depending on circumstance.
Kant 's second formulation of the Categorical Imperative is to treat humanity as an end in itself:
Act in such a way that you treat humanity, whether in your own person or in the person of another, always at the same time as an end and never simply as a means.
Kant argued that rational beings can never be treated merely as means to ends; they must always also be treated as ends themselves, requiring that their own reasoned motives must be equally respected. This derives from Kant 's claim that reason motivates morality: it demands that we respect reason as a motive in all beings, including other people. A rational being can not rationally consent to being used merely as a means to an end, so they must always be treated as an end. Kant justified this by arguing that moral obligation is a rational necessity: that which is rationally willed is morally right. Because all rational agents rationally will themselves to be an end and never merely a means, it is morally obligatory that they are treated as such. This does not mean that we can never treat a human as a means to an end, but that when we do, we also treat him as an end in himself.
Kant 's Formula of Autonomy expresses the idea that an agent is obliged to follow the Categorical Imperative because of their rational will, rather than any outside influence. Kant believed that any moral law motivated by the desire to fulfill some other interest would deny the Categorical Imperative, leading him to argue that the moral law must only arise from a rational will. This principle requires people to recognize the right of others to act autonomously and means that, as moral laws must be universalisable, what is required of one person is required of all.
Another formulation of Kant 's Categorical Imperative is the Kingdom of Ends:
A rational being must always regard himself as giving laws either as member or as sovereign in a kingdom of ends which is rendered possible by the freedom of will.
This formulation requires that actions be considered as if their maxim is to provide a law for a hypothetical Kingdom of Ends. Accordingly, people have an obligation to act upon principles that a community of rational agents would accept as laws. In such a community, each individual would only accept maxims that can govern every member of the community without treating any member merely as a means to an end. Although the Kingdom of Ends is an ideal -- the actions of other people and events of nature ensure that actions with good intentions sometimes result in harm -- we are still required to act categorically, as legislators of this ideal kingdom.
Louis Pojman has suggested four strong influences on Kant 's ethics. The first is the Lutheran sect Pietism, to which Kant 's parents subscribed. Pietism emphasised honesty and moral living over doctrinal belief, more concerned with feeling than rationality. Kant believed that rationality is required, but that it should be concerned with morality and good will. Second is the political philosopher Jean - Jacques Rousseau, whose work, The Social Contract, influenced Kant 's view on the fundamental worth of human beings. Pojman also cites contemporary ethical debates as influential to the development of Kant 's ethics. Kant favoured rationalism over empiricism, which meant he viewed morality as a form of knowledge, rather than something based on human desire. Natural law (the belief that the moral law is determined by nature) and intuitionism (the belief that humans have intuitive awareness of objective moral truths) were, according to Pojman, also influential for Kant.
Kant 's biographer Manfred Kuehn suggested that the values Kant 's parents held, of "hard work, honesty, cleanliness, and independence '', set him an example and influenced him more than their Pietism did. In the Stanford Encyclopedia of Philosophy, Michael Rohlf suggests that Kant was influenced by his teacher, Martin Knutzen, himself influenced by the work of Christian Wolff and John Locke, and who introduced Kant to the work of English physicist Isaac Newton.
German philosopher Jürgen Habermas has proposed a theory of discourse ethics that he claims is a descendant of Kantian ethics. He proposes that action should be based on communication between those involved, in which their interests and intentions are discussed so they can be understood by all. Rejecting any form of coercion or manipulation, Habermas believes that agreement between the parties is crucial for a moral decision to be reached. Like Kantian ethics, discourse ethics is a cognitive ethical theory, in that it supposes that truth and falsity can be attributed to ethical propositions. It also formulates a rule by which ethical actions can be determined and proposes that ethical actions should be universalisable, in a similar way to Kant 's ethics.
Habermas argues that his ethical theory is an improvement on Kant 's ethics, and rejects its dualistic framework. Kant distinguished between the phenomenal world, which can be sensed and experienced by humans, and the noumenal, or spiritual, world, which is inaccessible to humans. This dichotomy was necessary for Kant because it could explain the autonomy of a human agent: although a human is bound in the phenomenal world, their actions are free in the intelligible world. For Habermas, morality arises from discourse, which is made necessary by human beings ' rationality and needs, rather than by their freedom.
The social contract theory of political philosopher John Rawls, developed in his work A Theory of Justice, was influenced by Kant 's ethics. Rawls argued that a just society would be fair. To achieve this fairness, he proposed a hypothetical moment prior to the existence of a society, at which the society is ordered: this is the original position. This should take place from behind a veil of ignorance, where no one knows what their own position in society will be, preventing people from being biased by their own interests and ensuring a fair result. Rawls ' theory of justice rests on the belief that individuals are free, equal, and moral; he regarded all human beings as possessing some degree of reasonableness and rationality, which he saw as the constituents of morality and entitling their possessors to equal justice. Rawls dismissed much of Kant 's dualism, arguing that the structure of Kantian ethics, once reformulated, is clearer without it -- he described this as one of the goals of A Theory of Justice.
French psychoanalyst Jacques Lacan linked psychoanalysis with Kantian ethics in his works The Ethics of Psychoanalysis and Kant avec Sade and compared Kant with the Marquis de Sade. Lacan argued that Sade 's maxim of jouissance -- the pursuit of sexual pleasure or enjoyment -- is morally acceptable by Kant 's criteria because it can be universalised. He proposed that, while Kant presented human freedom as critical to the moral law, Sade further argued that human freedom is only fully realised through the maxim of jouissance.
Philosopher Onora O'Neill, who studied under John Rawls at Harvard University, is a contemporary Kantian ethicist who supports a Kantian approach to issues of social justice. O'Neill argues that a successful Kantian account of social justice must not rely on any unwarranted idealizations or assumptions. She notes that philosophers have previously charged Kant with idealizing humans as autonomous beings, without any social context or life goals, though she maintains that Kant 's ethics can be read without such an idealization. O'Neill prefers Kant 's conception of reason as practical and available to be used by humans, rather than as principles attached to every human being. Conceiving of reason as a tool for making decisions means that the only thing able to restrain the principles we adopt is that they could be adopted by all. If we can not will that everyone adopts a certain principle, then we can not give them reasons to adopt it. To use reason, and to reason with other people, we must reject those principles that can not be universally adopted. In this way, O'Neill reached Kant 's formulation of universalisability without adopting an idealistic view of human autonomy. This model of universalisability does not require that we adopt all universalisable principles, but merely prohibits us from adopting those that are not.
From this model of Kantian ethics, O'Neill begins to develop a theory of justice. She argues that the rejection of certain principles, such as deception and coercion, provides a starting point for basic conceptions of justice, which she argues are more determinate for human beings than the more abstract principles of equality or liberty. Nevertheless, she concedes that these principles may seem to be excessively demanding: there are many actions and institutions that do rely on non-universalisable principles, such as injury.
In his paper The Schizophrenia of Modern Ethical Theories, philosopher Michael Stocker challenges Kantian ethics (and all modern ethical theories) by arguing that actions from duty lack certain moral value. He gives the example of Smith, who visits his friend in hospital out of duty, rather than because of the friendship; he argues that this visit seems morally lacking because it is motivated by the wrong thing. Marcia Baron has attempted to defend Kantian ethics on this point. After presenting a number of reasons that we might find acting out of duty objectionable, she argues that these problems only arise when people misconstrue what their duty is. Acting out of duty is not intrinsically wrong, but immoral consequences can occur when people misunderstand what they are duty - bound to do. Duty need not be seen as cold and impersonal: one may have a duty to cultivate their character or improve their personal relationships. Baron further argues that duty should be construed as a secondary motive -- that is, a motive that regulates and sets conditions on what may be done, rather than prompt specific actions. She argues that, seen this way, duty neither reveals a deficiency in one 's natural inclinations to act, nor undermines the motives and feelings that are essential to friendship. For Baron, being governed by duty does not mean that duty is always the primary motivation to act; rather, it entails that considerations of duty are always action - guiding. A responsible moral agent should take an interest in moral questions, such as questions of character. These should guide moral agents to act from duty.
German philosopher G.W.F. Hegel presented two main criticisms of Kantian ethics. He first argued that Kantian ethics provides no specific information about what people should do because Kant 's moral law is solely a principle of non-contradiction. He argued that Kant 's ethics lack any content and so can not constitute a supreme principle of morality. To illustrate this point, Hegel and his followers have presented a number of cases in which the Formula of Universal Law either provides no meaningful answer or gives an obviously wrong answer. Hegel used Kant 's example of being trusted with another man 's money to argue that Kant 's Formula of Universal Law can not determine whether a social system of property is a morally good thing, because either answer can entail contradictions. He also used the example of helping the poor: if everyone helped the poor, there would be no poor left to help, so beneficence would be impossible if universalised, making it immoral according to Kant 's model. Hegel 's second criticism was that Kant 's ethics forces humans into an internal conflict between reason and desire. For Hegel, it is unnatural for humans to suppress their desire and subordinate it to reason. This means that, by not addressing the tension between self - interest and morality, Kant 's ethics can not give humans any reason to be moral.
German philosopher Arthur Schopenhauer criticised Kant 's belief that ethics should concern what ought to be done, insisting that the scope of ethics should be to attempt to explain and interpret what actually happens. Whereas Kant presented an idealized version of what ought to be done in a perfect world, Schopenhauer argued that ethics should instead be practical and arrive at conclusions that could work in the real world, capable of being presented as a solution to the world 's problems. Schopenhauer drew a parallel with aesthetics, arguing that in both cases prescriptive rules are not the most important part of the discipline. Because he believed that virtue can not be taught -- a person is either virtuous or is not -- he cast the proper place of morality as restraining and guiding people 's behavior, rather than presenting unattainable universal laws.
Philosopher Friedrich Nietzsche criticised all contemporary moral systems, with a special focus on Christian and Kantian ethics. He argued that all modern ethical systems share two problematic characteristics: first, they make a metaphysical claim about the nature of humanity, which must be accepted for the system to have any normative force; and second, the system benefits the interests of certain people, often over those of others. Although Nietzsche 's primary objection is not that metaphysical claims about humanity are untenable (he also objected to ethical theories that do not make such claims), his two main targets -- Kantianism and Christianity -- do make metaphysical claims, which therefore feature prominently in Nietzsche 's criticism.
Nietzsche rejected fundamental components of Kant 's ethics, particularly his argument that morality, God, and immortality can be shown through reason. Nietzsche cast suspicion on the use of moral intuition, which Kant used as the foundation of his morality, arguing that it has no normative force in ethics. He further attempted to undermine key concepts in Kant 's moral psychology, such as the will and pure reason. Like Kant, Nietzsche developed a concept of autonomy; however, he rejected Kant 's idea that valuing our own autonomy requires us to respect the autonomy of others. A naturalist reading of Nietzsche 's moral psychology stands contrary to Kant 's conception of reason and desire. Under the Kantian model, reason is a fundamentally different motive to desire because it has the capacity to stand back from a situation and make an independent decision. Nietzsche conceives of the self as a social structure of all our different drives and motivations; thus, when it seems that our intellect has made a decision against our drives, it is actually just an alternative drive taking dominance over another. This is in direct contrast with Kant 's view of the intellect as opposed to instinct; for Nietzsche, the intellect is just another instinct. There is thus no self capable of standing back and making a decision; the decision the self makes is simply determined by the strongest drive. Kantian commentators have argued that Nietzsche 's practical philosophy requires the existence of a self capable of standing back, in the Kantian sense. For an individual to create values of their own, which is a key idea in Nietzsche 's philosophy, they must be able to conceive of themselves as a unified agent. Even if the agent is influenced by their drives, they must regard them as their own, which undermines Nietzsche 's conception of autonomy.
Utilitarian philosopher John Stuart Mill criticised Kant for not realizing that moral laws are justified by a moral intuition based on utilitarian principles (that the greatest good for the greatest number ought to be sought). Mill argued that Kant 's ethics could not explain why certain actions are wrong without appealing to utilitarianism. As basis for morality, Mill believed that his principle of utility has a stronger intuitive grounding than Kant 's reliance on reason, and can better explain why certain actions are right or wrong.
Virtue ethics is a form of ethical theory which emphasizes the character of an agent, rather than specific acts; many of its proponents have criticised Kant 's deontological approach to ethics. Elizabeth Anscombe criticised modern ethical theories, including Kantian ethics, for their obsession with law and obligation. As well as arguing that theories which rely on a universal moral law are too rigid, Anscombe suggested that, because a moral law implies a moral lawgiver, they are irrelevant in modern secular society. In his work After Virtue, Alasdair MacIntyre criticises Kant 's formulation of universalisability, arguing that various trivial and immoral maxims can pass the test, such as "Keep all your promises throughout your entire life except one ''. He further challenges Kant 's formulation of humanity as an end in itself by arguing that Kant provided no reason to treat others as means: the maxim "Let everyone except me be treated as a means '', though seemingly immoral, can be universalized. Bernard Williams argues that, by abstracting persons from character, Kant misrepresents persons and morality and Philippa Foot identified Kant as one of a select group of philosophers responsible for the neglect of virtue by analytic philosophy.
Roman Catholic priest Servais Pinckaers regarded Christian ethics as closer to the virtue ethics of Aristotle than to Kant 's ethics. He presented virtue ethics as freedom for excellence, which regards freedom as acting in accordance with nature to develop one 's virtues. Initially, this requires following rules -- but the intention is that the agent develop virtuously, and come to regard acting morally as a joy. This is in contrast with freedom of indifference, which Pinckaers attributes to William Ockham and likens to Kant. On this view, freedom is set against nature: free actions are those not determined by passions or emotions. There is no development or progress in an agent 's virtue, merely the forming of habit. This is closer to Kant 's view of ethics, because Kant 's conception of autonomy requires that an agent is not merely guided by their emotions, and is set in contrast with Pinckaers ' conception of Christian ethics.
A number of philosophers, including Elizabeth Anscombe, Jean Bethke Elshtain, Servais Pinckaers, Iris Murdoch, and the Catholic Encyclopedia, have all suggested that the Kantian conception of ethics rooted in autonomy is contradictory in its dual contention that humans are co-legislators of morality and that morality is a priori. They argue that if something is universally a priori (i.e., existing unchangingly prior to experience), then it can not also be in part dependent upon humans, who have not always existed. On the other hand, if humans truly do legislate morality, then they are not bound by it objectively, because they are always free to change it.
This objection seems to rest on a misunderstanding of Kant 's views, since Kant argued that morality is dependent upon the concept of a rational will (and the related concept of a categorical imperative: an imperative which any rational being must necessarily will for itself). It is not based on contingent features of any being 's will, nor upon human wills in particular, so there is no sense in which Kant makes ethics "dependent '' upon anything which has not always existed. Furthermore, the sense in which our wills are subject to the law is precisely that if our wills are rational, we must will in a lawlike fashion; that is, we must will according to moral judgments we apply to all rational beings, including ourselves. This is more easily understood by parsing the term "autonomy '' into its Greek roots: auto (self) + nomos (rule or law). That is, an autonomous will, according to Kant, is not merely one which follows its own will, but one whose will is lawful - that is, conforming to the principle of universalizability, which Kant also identifies with reason. Ironically, in another passage, willing according to immutable reason is precisely the kind of capacity Elshtain ascribes to God as the basis of his moral authority, and she commends this over an inferior voluntarist version of divine command theory, which would make both morality and God 's will contingent. Kant 's theory is a version of the first rather than the second view of autonomy, so neither God nor any human authority, including contingent human institutions, plays any unique authoritative role in his moral theory. Kant and Elshtain, that is, both agree that God has no choice but to conform his will to the immutable facts of reason, including moral truths; humans do have such a choice, but otherwise their relationship to morality is the same as God 's: they can recognize moral facts, but do not determine their content through contingent acts of will.
Kant believed that the shared ability of humans to reason should be the basis of morality, and that it is the ability to reason that makes humans morally significant. He, therefore, believed that all humans should have the right to common dignity and respect. Margaret Eaton argues that, according to Kant 's ethics, a medical professional must be happy for their own practices to be used by and on anyone, even if they were the patient themselves. For example, a researcher who wished to perform tests on patients without their knowledge must be happy for all researchers to do so. She also argues that Kant 's requirement of autonomy would mean that a patient must be able to make a fully informed decision about treatment, making it immoral to perform tests on unknowing patients. Medical research should be motivated out of respect for the patient, so they must be informed of all facts, even if this would be likely to dissuade the patient. Jeremy Sugarman has argued that Kant 's formulation of autonomy requires that patients are never used merely for the benefit of society, but are always treated as rational people with their own goals. Aaron Hinkley notes that a Kantian account of autonomy requires respect for choices that are arrived at rationally, not for choices which are arrived at by idiosyncratic or non-rational means. He argues that there may be some difference between what a purely rational agent would choose and what a patient actually chooses, the difference being the result of non-rational idiosyncrasies. Although a Kantian physician ought not to lie to or coerce a patient, Hinkley suggests that some form of paternalism - such as through withholding information which may prompt a non-rational response - could be acceptable.
In her work How Kantian Ethics Should Treat Pregnancy and Abortion, Susan Feldman argues that abortion should be defended according to Kantian ethics. She proposed that a woman should be treated as a dignified, autonomous person, with control over her body, as Kant suggested. She believes that the free choice of women would be paramount in Kantian ethics, requiring abortion to be the mother 's decision. Dean Harris has noted that, if Kantian ethics is to be used in the discussion of abortion, it must be decided whether a fetus is an autonomous person. Kantian ethicist Carl Cohen argues that the potential to be rational or participation in a generally rational species is the relevant distinction between humans and inanimate objects or irrational animals. Cohen believes that even when humans are not rational because of age (such as babies or fetuses) or mental disability, agents are still morally obligated to treat them as ends in themselves, equivalent to a rational adult such as a mother seeking an abortion.
Kant viewed humans as being subject to the animalistic desires of self - preservation, species - preservation, and the preservation of enjoyment. He argued that humans have a duty to avoid maxims that harm or degrade themselves, including suicide, sexual degradation, and drunkenness. This led Kant to regard sexual intercourse as degrading because it reduces humans to an object of pleasure. He allowed sex only within marriage, which he regarded as "a merely animal union ''. He believed that masturbation is worse than suicide, reducing a person 's status to below that of an animal; he argued that rape should be punished with castration and that bestiality requires expulsion from society. Feminist philosopher Catharine MacKinnon has argued that many contemporary practices would be deemed immoral by Kant 's standards because they dehumanize women. Sexual harassment, prostitution and pornography, she argues, objectify women and do not meet Kant 's standard of human autonomy. Commercial sex has been criticised for turning both parties into objects (and thus using them as a means to an end); mutual consent is problematic because in consenting, people choose to objectify themselves. Alan Soble has noted that more liberal Kantian ethicists believe that, depending on other contextual factors, the consent of women can vindicate their participation in pornography and prostitution.
Because Kant viewed rationality as the basis for being a moral patient -- one due moral consideration -- he believed that animals have no moral rights. Animals, according to Kant, are not rational, thus one can not behave immorally towards them. Although he did not believe we have any duties towards animals, Kant did believe being cruel to them was wrong because our behaviour might influence our attitudes toward human beings: if we become accustomed to harming animals, then we are more likely to see harming humans as acceptable.
Ethicist Tom Regan rejected Kant 's assessment of the moral worth of animals on three main points: First, he rejected Kant 's claim that animals are not self - conscious. He then challenged Kant 's claim that animals have no intrinsic moral worth because they can not make a moral judgment. Regan argued that, if a being 's moral worth is determined by its ability to make a moral judgment, then we must regard humans who are incapable of moral thought as being equally undeserving of moral consideration. Regan finally argued that Kant 's assertion that animals exist merely as a means to an end is unsupported; the fact that animals have a life that can go well or badly suggests that, like humans, they have their own ends.
Kant believed that the Categorical Imperative provides us with the maxim that we ought not to lie in any circumstances, even if we are trying to bring about good consequences, such as lying to a murderer to prevent them from finding their intended victim. Kant argued that, because we can not fully know what the consequences of any action will be, the result might be unexpectedly harmful. Therefore, we ought to act to avoid the known wrong -- lying -- rather than to avoid a potential wrong. If there are harmful consequences, we are blameless because we acted according to our duty. Driver argues that this might not be a problem if we choose to formulate our maxims differently: the maxim ' I will lie to save an innocent life ' can be universalized. However, this new maxim may still treat the murderer as a means to an end, which we have a duty to avoid doing. Thus we may still be required to tell the truth to the murderer in Kant 's example.
|
venus has a higher average surface temperature than mercury. why | Venus - wikipedia
Venus is the second planet from the Sun, orbiting it every 224.7 Earth days. It has the longest rotation period (243 days) of any planet in the Solar System and rotates in the opposite direction to most other planets. It has no natural satellites. It is named after the Roman goddess of love and beauty. It is the second - brightest natural object in the night sky after the Moon, reaching an apparent magnitude of − 4.6 -- bright enough to cast shadows at night and, rarely, visible to the naked eye in broad daylight. Orbiting within Earth 's orbit, Venus is an inferior planet and never appears to venture far from the Sun; its maximum angular distance from the Sun (elongation) is 47.8 °.
Venus is a terrestrial planet and is sometimes called Earth 's "sister planet '' because of their similar size, mass, proximity to the Sun, and bulk composition. It is radically different from Earth in other respects. It has the densest atmosphere of the four terrestrial planets, consisting of more than 96 % carbon dioxide. The atmospheric pressure at the planet 's surface is 92 times that of Earth, or roughly the pressure found 900 m (3,000 ft) underwater on Earth. Venus is by far the hottest planet in the Solar System, with a mean surface temperature of 735 K (462 ° C; 863 ° F), even though Mercury is closer to the Sun. Venus is shrouded by an opaque layer of highly reflective clouds of sulfuric acid, preventing its surface from being seen from space in visible light. It may have had water oceans in the past, but these would have vaporized as the temperature rose due to a runaway greenhouse effect. The water has probably photodissociated, and the free hydrogen has been swept into interplanetary space by the solar wind because of the lack of a planetary magnetic field. Venus 's surface is a dry desertscape interspersed with slab - like rocks and is periodically resurfaced by volcanism.
As one of the brightest objects in the sky, Venus has been a major fixture in human culture for as long as records have existed. It has been made sacred to gods of many cultures, and has been a prime inspiration for writers and poets as the "morning star '' and "evening star ''. Venus was the first planet to have its motions plotted across the sky, as early as the second millennium BC.
As the closest planet to Earth, Venus has been a prime target for early interplanetary exploration. It was the first planet beyond Earth visited by a spacecraft (Mariner 2 in 1962), and the first to be successfully landed on (by Venera 7 in 1970). Venus 's thick clouds render observation of its surface impossible in visible light, and the first detailed maps did not emerge until the arrival of the Magellan orbiter in 1991. Plans have been proposed for rovers or more complex missions, but they are hindered by Venus 's hostile surface conditions.
Venus is one of the four terrestrial planets in the Solar System, meaning that it is a rocky body like Earth. It is similar to Earth in size and mass, and is often described as Earth 's "sister '' or "twin ''. The diameter of Venus is 12,103.6 km (7,520.8 mi) -- only 638.4 km (396.7 mi) less than Earth 's -- and its mass is 81.5 % of Earth 's. Conditions on the Venusian surface differ radically from those on Earth because its dense atmosphere is 96.5 % carbon dioxide, with most of the remaining 3.5 % being nitrogen.
The Venusian surface was a subject of speculation until some of its secrets were revealed by planetary science in the 20th century. Venera landers in 1975 and 1982 returned images of a surface covered in sediment and relatively angular rocks. The surface was mapped in detail by Magellan in 1990 -- 91. The ground shows evidence of extensive volcanism, and the sulfur in the atmosphere may indicate that there have been some recent eruptions.
About 80 % of the Venusian surface is covered by smooth, volcanic plains, consisting of 70 % plains with wrinkle ridges and 10 % smooth or lobate plains. Two highland "continents '' make up the rest of its surface area, one lying in the planet 's northern hemisphere and the other just south of the equator. The northern continent is called Ishtar Terra after Ishtar, the Babylonian goddess of love, and is about the size of Australia. Maxwell Montes, the highest mountain on Venus, lies on Ishtar Terra. Its peak is 11 km (7 mi) above the Venusian average surface elevation. The southern continent is called Aphrodite Terra, after the Greek goddess of love, and is the larger of the two highland regions at roughly the size of South America. A network of fractures and faults covers much of this area.
The absence of evidence of lava flow accompanying any of the visible calderas remains an enigma. The planet has few impact craters, demonstrating that the surface is relatively young, approximately 300 -- 600 million years old. Venus has some unique surface features in addition to the impact craters, mountains, and valleys commonly found on rocky planets. Among these are flat - topped volcanic features called "farra '', which look somewhat like pancakes and range in size from 20 to 50 km (12 to 31 mi) across, and from 100 to 1,000 m (330 to 3,280 ft) high; radial, star - like fracture systems called "novae ''; features with both radial and concentric fractures resembling spider webs, known as "arachnoids ''; and "coronae '', circular rings of fractures sometimes surrounded by a depression. These features are volcanic in origin.
Most Venusian surface features are named after historical and mythological women. Exceptions are Maxwell Montes, named after James Clerk Maxwell, and highland regions Alpha Regio, Beta Regio, and Ovda Regio. The latter three features were named before the current system was adopted by the International Astronomical Union, the body which oversees planetary nomenclature.
The longitudes of physical features on Venus are expressed relative to its prime meridian. The original prime meridian passed through the radar - bright spot at the centre of the oval feature Eve, located south of Alpha Regio. After the Venera missions were completed, the prime meridian was redefined to pass through the central peak in the crater Ariadne.
Much of the Venusian surface appears to have been shaped by volcanic activity. Venus has several times as many volcanoes as Earth, and it has 167 large volcanoes that are over 100 km (62 mi) across. The only volcanic complex of this size on Earth is the Big Island of Hawaii. This is not because Venus is more volcanically active than Earth, but because its crust is older. Earth 's oceanic crust is continually recycled by subduction at the boundaries of tectonic plates, and has an average age of about 100 million years, whereas the Venusian surface is estimated to be 300 -- 600 million years old.
Several lines of evidence point to ongoing volcanic activity on Venus. During the Soviet Venera program, the Venera 9 orbiter obtained spectroscopic evidence of lightning on Venus, and the Venera 12 descent probe obtained additional evidence of lightning and thunder. The European Space Agency 's Venus Express in 2007 detected whistler waves further confirming the occurrence of lightning on Venus. One possibility is that ash from a volcanic eruption was generating the lightning. Another piece of evidence comes from measurements of sulfur dioxide concentrations in the atmosphere, which dropped by a factor of 10 between 1978 and 1986, jumped in 2006, and again declined 10-fold. This may mean that levels had been boosted several times by large volcanic eruptions.
In 2008 and 2009, the first direct evidence for ongoing volcanism was observed by Venus Express, in the form of four transient localized infrared hot spots within the rift zone Ganis Chasma, near the shield volcano Maat Mons. Three of the spots were observed in more than one successive orbit. These spots are thought to represent lava freshly released by volcanic eruptions. The actual temperatures are not known, because the size of the hot spots could not be measured, but are likely to have been in the 800 -- 1,100 K (527 -- 827 ° C; 980 -- 1,520 ° F) range, relative to a normal temperature of 740 K (467 ° C; 872 ° F).
Almost a thousand impact craters on Venus are evenly distributed across its surface. On other cratered bodies, such as Earth and the Moon, craters show a range of states of degradation. On the Moon, degradation is caused by subsequent impacts, whereas on Earth it is caused by wind and rain erosion. On Venus, about 85 % of the craters are in pristine condition. The number of craters, together with their well - preserved condition, indicates the planet underwent a global resurfacing event about 300 -- 600 million years ago, followed by a decay in volcanism. Whereas Earth 's crust is in continuous motion, Venus is thought to be unable to sustain such a process. Without plate tectonics to dissipate heat from its mantle, Venus instead undergoes a cyclical process in which mantle temperatures rise until they reach a critical level that weakens the crust. Then, over a period of about 100 million years, subduction occurs on an enormous scale, completely recycling the crust.
Venusian craters range from 3 to 280 km (2 to 174 mi) in diameter. No craters are smaller than 3 km, because of the effects of the dense atmosphere on incoming objects. Objects with less than a certain kinetic energy are slowed down so much by the atmosphere that they do not create an impact crater. Incoming projectiles less than 50 m (160 ft) in diameter will fragment and burn up in the atmosphere before reaching the ground.
Without seismic data or knowledge of its moment of inertia, little direct information is available about the internal structure and geochemistry of Venus. The similarity in size and density between Venus and Earth suggests they share a similar internal structure: a core, mantle, and crust. Like that of Earth, the Venusian core is at least partially liquid because the two planets have been cooling at about the same rate. The slightly smaller size of Venus means pressures are 24 % lower in its deep interior than Earth 's. The principal difference between the two planets is the lack of evidence for plate tectonics on Venus, possibly because its crust is too strong to subduct without water to make it less viscous. This results in reduced heat loss from the planet, preventing it from cooling and providing a likely explanation for its lack of an internally generated magnetic field. Instead, Venus may lose its internal heat in periodic major resurfacing events.
Venus has an extremely dense atmosphere composed of 96.5 % carbon dioxide, 3.5 % nitrogen, and traces of other gases, most notably sulfur dioxide. The mass of its atmosphere is 93 times that of Earth 's, whereas the pressure at its surface is about 92 times that at Earth 's -- a pressure equivalent to that at a depth of nearly 1 kilometre (0.62 mi) under Earth 's oceans. The density at the surface is 65 kg / m³, 6.5 % that of water or 50 times as dense as Earth 's atmosphere at 293 K (20 ° C; 68 ° F) at sea level. The CO 2 - rich atmosphere generates the strongest greenhouse effect in the Solar System, creating surface temperatures of at least 735 K (462 ° C; 864 ° F). This makes Venus 's surface hotter than Mercury 's, which has a minimum surface temperature of 53 K (− 220 ° C; − 364 ° F) and maximum surface temperature of 693 K (420 ° C; 788 ° F), even though Venus is nearly twice Mercury 's distance from the Sun and thus receives only 25 % of Mercury 's solar irradiance. This temperature is higher than that used for sterilization. The surface of Venus is often said to resemble traditional accounts of Hell.
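The irradiance comparison follows from the inverse - square law; as a rough check, using the rounded factor - of - two distance ratio quoted above:

\frac{S_{\text{Venus}}}{S_{\text{Mercury}}} = \left(\frac{d_{\text{Mercury}}}{d_{\text{Venus}}}\right)^{2} \approx \left(\frac{1}{2}\right)^{2} = \frac{1}{4},

which is the 25 % figure given in the text; it is the greenhouse effect, not the received sunlight, that accounts for the higher surface temperature.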
Studies have suggested that billions of years ago Venus 's atmosphere was much more like Earth 's than it is now, and that there may have been substantial quantities of liquid water on the surface, but after a period of 600 million to several billion years, a runaway greenhouse effect was caused by the evaporation of that original water, which generated a critical level of greenhouse gases in its atmosphere. Although the surface conditions on Venus are no longer hospitable to any Earthlike life that may have formed before this event, there is speculation on the possibility that life exists in the upper cloud layers of Venus, 50 km (31 mi) up from the surface, where the temperature ranges between 303 and 353 K (30 and 80 ° C; 86 and 176 ° F) but the environment is acidic.
Thermal inertia and the transfer of heat by winds in the lower atmosphere mean that the temperature of Venus 's surface does not vary significantly between the night and day sides, despite Venus 's extremely slow rotation. Winds at the surface are slow, moving at a few kilometres per hour, but because of the high density of the atmosphere at the surface, they exert a significant amount of force against obstructions, and transport dust and small stones across the surface. This alone would make it difficult for a human to walk through, even if the heat, pressure, and lack of oxygen were not a problem.
Above the dense CO 2 layer are thick clouds consisting mainly of sulfuric acid droplets. The clouds also contain sulfur aerosol, about 1 % ferric chloride and some water. Other possible constituents of the cloud particles are ferric sulfate, aluminium chloride and phosphoric anhydride. Clouds at different levels have different compositions and particle size distributions. These clouds reflect and scatter about 90 % of the sunlight that falls on them back into space, and prevent visual observation of Venus 's surface. The permanent cloud cover means that although Venus is closer than Earth to the Sun, it receives less sunlight on the ground. Strong 300 km / h (185 mph) winds at the cloud tops go around Venus about every four to five Earth days. Winds on Venus move at up to 60 times the speed of its rotation, whereas Earth 's fastest winds are only 10 -- 20 % of its rotation speed.
The surface of Venus is effectively isothermal; it retains a constant temperature not only between day and night sides but between the equator and the poles. Venus 's minute axial tilt -- less than 3 °, compared to 23 ° on Earth -- also minimises seasonal temperature variation. The only appreciable variation in temperature occurs with altitude. The highest point on Venus, Maxwell Montes, is therefore the coolest point on Venus, with a temperature of about 655 K (380 ° C; 715 ° F) and an atmospheric pressure of about 4.5 MPa (45 bar). In 1995, the Magellan spacecraft imaged a highly reflective substance at the tops of the highest mountain peaks that bore a strong resemblance to terrestrial snow. This substance likely formed from a similar process to snow, albeit at a far higher temperature. Too volatile to condense on the surface, it rose in gaseous form to higher elevations, where it is cooler and could precipitate. The identity of this substance is not known with certainty, but speculation has ranged from elemental tellurium to lead sulfide (galena).
The clouds of Venus may be capable of producing lightning. The existence of lightning in the atmosphere of Venus has been controversial since the first suspected bursts were detected by the Soviet Venera probes. In 2006 -- 07, Venus Express clearly detected whistler mode waves, the signatures of lightning. Their intermittent appearance indicates a pattern associated with weather activity. According to these measurements, the lightning rate is at least half of that on Earth. In 2007, Venus Express discovered that a huge double atmospheric vortex exists at the south pole.
Venus Express also discovered, in 2011, that an ozone layer exists high in the atmosphere of Venus. On 29 January 2013, ESA scientists reported that the ionosphere of Venus streams outwards in a manner similar to "the ion tail seen streaming from a comet under similar conditions. ''
In December 2015 and to a lesser extent in April and May 2016, researchers working on Japan 's Akatsuki mission observed bow shapes in the atmosphere of Venus. This was considered direct evidence of the existence of perhaps the largest stationary gravity waves in the solar system.
In 1967, Venera 4 found Venus 's magnetic field to be much weaker than that of Earth. This magnetic field is induced by an interaction between the ionosphere and the solar wind, rather than by an internal dynamo as in the Earth 's core. Venus 's small induced magnetosphere provides negligible protection to the atmosphere against cosmic radiation.
The lack of an intrinsic magnetic field at Venus was surprising, given that it is similar to Earth in size, and was expected also to contain a dynamo at its core. A dynamo requires three things: a conducting liquid, rotation, and convection. The core is thought to be electrically conductive and, although its rotation is often thought to be too slow, simulations show it is adequate to produce a dynamo. This implies that the dynamo is missing because of a lack of convection in Venus 's core. On Earth, convection occurs in the liquid outer layer of the core because the bottom of the liquid layer is much hotter than the top. On Venus, a global resurfacing event may have shut down plate tectonics and led to a reduced heat flux through the crust. This caused the mantle temperature to increase, thereby reducing the heat flux out of the core. As a result, no internal geodynamo is available to drive a magnetic field. Instead, the heat from the core is being used to reheat the crust.
One possibility is that Venus has no solid inner core, or that its core is not cooling, so that the entire liquid part of the core is at approximately the same temperature. Another possibility is that its core has already completely solidified. The state of the core is highly dependent on the concentration of sulfur, which is unknown at present.
The weak magnetosphere around Venus means that the solar wind is interacting directly with its outer atmosphere. Here, ions of hydrogen and oxygen are being created by the dissociation of neutral molecules from ultraviolet radiation. The solar wind then supplies energy that gives some of these ions sufficient velocity to escape Venus 's gravity field. This erosion process results in a steady loss of low - mass hydrogen, helium, and oxygen ions, whereas higher - mass molecules, such as carbon dioxide, are more likely to be retained. Atmospheric erosion by the solar wind probably led to the loss of most of Venus 's water during the first billion years after it formed. The erosion has increased the ratio of higher - mass deuterium to lower - mass hydrogen in the atmosphere 100 times compared to the rest of the solar system.
Venus orbits the Sun at an average distance of about 0.72 AU (108 million km; 67 million mi), and completes an orbit every 224.7 days. Although all planetary orbits are elliptical, Venus 's orbit is the closest to circular, with an eccentricity of less than 0.01. When Venus lies between Earth and the Sun in inferior conjunction, it makes the closest approach to Earth of any planet at an average distance of 41 million km (25 million mi). The planet reaches inferior conjunction every 584 days, on average. Because of the decreasing eccentricity of Earth 's orbit, the minimum distances will become greater over tens of thousands of years. From the year 1 to 5383, there are 526 approaches less than 40 million km; then there are none for about 60,158 years.
All the planets in the Solar System orbit the Sun in an anti-clockwise direction as viewed from above Earth 's north pole. Most planets also rotate on their axes in an anti-clockwise direction, but Venus rotates clockwise in retrograde rotation once every 243 Earth days -- the slowest rotation of any planet. Because its rotation is so slow, Venus is very close to spherical. A Venusian sidereal day thus lasts longer than a Venusian year (243 versus 224.7 Earth days). Venus 's equator rotates at 6.52 km / h (4.05 mph), whereas Earth 's rotates at 1,669.8 km / h (1,037.6 mph). Venus 's rotation has slowed down in the 16 years between the Magellan spacecraft and Venus Express visits; each Venusian sidereal day has increased by 6.5 minutes in that time span. Because of the retrograde rotation, the length of a solar day on Venus is significantly shorter than the sidereal day, at 116.75 Earth days (making the Venusian solar day shorter than Mercury 's 176 Earth days). One Venusian year is about 1.92 Venusian solar days. To an observer on the surface of Venus, the Sun would rise in the west and set in the east, although Venus 's opaque clouds prevent observing the Sun from the planet 's surface.
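The 116.75 - day solar day quoted above can be checked from the rotation and orbital periods given in this section; because the spin is retrograde while the orbital motion is prograde, the two angular rates add (a standard relation, shown here only as a sketch):

\frac{1}{T_{\text{solar}}} = \frac{1}{T_{\text{sidereal}}} + \frac{1}{T_{\text{orbit}}} = \frac{1}{243} + \frac{1}{224.7} \approx \frac{1}{116.75}\quad\text{(periods in Earth days)}.

Five such solar days (5 × 116.75 ≈ 584 days) also match the average interval between successive inferior conjunctions mentioned below.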
Venus may have formed from the solar nebula with a different rotation period and obliquity, reaching its current state because of chaotic spin changes caused by planetary perturbations and tidal effects on its dense atmosphere, a change that would have occurred over the course of billions of years. The rotation period of Venus may represent an equilibrium state between tidal locking to the Sun 's gravitation, which tends to slow rotation, and an atmospheric tide created by solar heating of the thick Venusian atmosphere. The 584 - day average interval between successive close approaches to Earth is almost exactly equal to 5 Venusian solar days, but the hypothesis of a spin -- orbit resonance with Earth has been discounted.
Venus has no natural satellites. It has several trojan asteroids: the quasi-satellite 2002 VE 68 and two other temporary trojans, 2001 CK and 2012 XE 133. In the 17th century, Giovanni Cassini reported a moon orbiting Venus, which was named Neith and numerous sightings were reported over the following 200 years, but most were determined to be stars in the vicinity. Alex Alemi 's and David Stevenson 's 2006 study of models of the early Solar System at the California Institute of Technology shows Venus likely had at least one moon created by a huge impact event billions of years ago. About 10 million years later, according to the study, another impact reversed the planet 's spin direction and caused the Venusian moon gradually to spiral inward until it collided with Venus. If later impacts created moons, these were removed in the same way. An alternative explanation for the lack of satellites is the effect of strong solar tides, which can destabilize large satellites orbiting the inner terrestrial planets.
To the naked eye, Venus appears as a white point of light brighter than any other planet or star (apart from the Sun). Its brightest apparent magnitude, − 4.9, occurs during crescent phase, only 36 days before or after inferior conjunction. Venus will be brightest on 30 April 2017, then grow dimmer for nearly a year. Venus fades to about magnitude − 3 when it is backlit by the Sun. The planet is bright enough to be seen in a clear midday sky and is more easily visible when the Sun is low on the horizon or setting. As an inferior planet, it always lies within about 47 ° of the Sun.
Venus "overtakes '' Earth every 584 days as it orbits the Sun. As it does so, it changes from the "Evening Star '', visible after sunset, to the "Morning Star '', visible before sunrise. Although Mercury, the other inferior planet, reaches a maximum elongation of only 28 ° and is often difficult to discern in twilight, Venus is hard to miss when it is at its brightest. Its greater maximum elongation means it is visible in dark skies long after sunset. As the brightest point - like object in the sky, Venus is a commonly misreported "unidentified flying object ''.
As it orbits the Sun, Venus displays phases like those of the Moon in a telescopic view. The planet appears as a small and "full '' disc when it is on the opposite side of the Sun (at superior conjunction). Venus shows a larger disc and "quarter phase '' at its maximum elongations from the Sun, and appears its brightest in the night sky. The planet presents a much larger thin "crescent '' in telescopic views as it passes along the near side between Earth and the Sun. Venus displays its largest size and "new phase '' when it is between Earth and the Sun (at inferior conjunction). Its atmosphere is visible through telescopes by the halo of sunlight refracted around it.
The Venusian orbit is slightly inclined relative to Earth 's orbit; thus, when the planet passes between Earth and the Sun, it usually does not cross the face of the Sun. Transits of Venus occur when the planet 's inferior conjunction coincides with its presence in the plane of Earth 's orbit. Transits of Venus occur in cycles of 243 years with the current pattern of transits being pairs of transits separated by eight years, at intervals of about 105.5 years or 121.5 years -- a pattern first discovered in 1639 by the English astronomer Jeremiah Horrocks.
The latest pair was June 8, 2004 and June 5 -- 6, 2012. The transit could be watched live from many online outlets or observed locally with the right equipment and conditions.
The preceding pair of transits occurred in December 1874 and December 1882; the following pair will occur in December 2117 and December 2125. The oldest film known is the 1874 Passage de Venus, showing the 1874 Venus transit of the sun. Historically, transits of Venus were important, because they allowed astronomers to determine the size of the astronomical unit, and hence the size of the Solar System as shown by Horrocks in 1639. Captain Cook 's exploration of the east coast of Australia came after he had sailed to Tahiti in 1768 to observe a transit of Venus.
The pentagram of Venus is the path that Venus makes as observed from Earth. Successive inferior conjunctions of Venus repeat very near a 13: 8 orbital resonance (Earth orbits 8 times for every 13 orbits of Venus), shifting 144 ° upon sequential inferior conjunctions. The resonance 13: 8 ratio is approximate. 8 / 13 is approximately 0.615385 while Venus orbits the Sun in 0.615187 years.
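A short calculation makes these figures concrete; it is a sketch only, and the sidereal periods used (224.701 and 365.256 days) are standard values assumed here.

```python
# Sketch: check the near 13:8 Earth-Venus resonance and the ~144 degree shift
# between successive inferior conjunctions. Assumed sidereal periods in days.
venus_year = 224.701
earth_year = 365.256

# Synodic period: time between successive inferior conjunctions
synodic = 1 / (1 / venus_year - 1 / earth_year)
print(f"synodic period ~ {synodic:.1f} days")  # ~583.9 days

# Each conjunction advances by this angle around Earth's orbit
advance = (synodic / earth_year % 1) * 360
print(f"shift per conjunction ~ {360 - advance:.1f} degrees")  # ~144.5 degrees

# Compare the orbital-period ratio with 8/13
print(f"{venus_year / earth_year:.6f} vs {8 / 13:.6f}")  # 0.615187 vs 0.615385
```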
Naked eye observations of Venus during daylight hours exist in several anecdotes and records. Astronomer Edmund Halley calculated its maximum naked eye brightness in 1716, when many Londoners were alarmed by its appearance in the daytime. French emperor Napoleon Bonaparte once witnessed a daytime apparition of the planet while at a reception in Luxembourg. Another historical daytime observation of the planet took place during the inauguration of the American president Abraham Lincoln in Washington, D.C., on 4 March 1865. Although naked eye visibility of Venus 's phases is disputed, records exist of observations of its crescent.
A long - standing mystery of Venus observations is the so - called ashen light -- an apparent weak illumination of its dark side, seen when the planet is in the crescent phase. The first claimed observation of ashen light was made in 1643, but the existence of the illumination has never been reliably confirmed. Observers have speculated it may result from electrical activity in the Venusian atmosphere, but it could be illusory, resulting from the physiological effect of observing a bright, crescent - shaped object.
Venus was known to ancient civilizations both as the "morning star '' and as the "evening star '', names that reflect the early assumption that these were two separate objects. The ancient Sumerians, who recognized Venus as a single object, believed that it was their goddess Inanna. Inanna 's movements in several of her myths, including Inanna and Shukaletuda and Inanna 's Descent into the Underworld appear to parallel the motion of the planet Venus. The Venus tablet of Ammisaduqa, believed to have been compiled around the mid-seventeenth century BCE, shows the Babylonians understood the two were a single object, referred to in the tablet as the "bright queen of the sky '', and could support this view with detailed observations.
The ancient Greeks thought that Venus was two separate stars: Phosphorus and Hesperus. Pliny the Elder credited the realization that they were a single object to Pythagoras in the sixth century BCE, while Diogenes Laertius argued that Parmenides was probably responsible. The ancient Chinese referred to the morning Venus as "the Great White '' (Tai - bai 太白) or "the Opener (Starter) of Brightness '' (Qi - ming 啟明), and the evening Venus as "the Excellent West One '' (Chang - geng 長庚). The Romans designated the morning aspect of Venus as Lucifer, literally "Light - Bringer '', and the evening aspect as Vesper, both literal translations of the respective Greek names.
In the second century, in his astronomical treatise Almagest, Ptolemy theorized that both Mercury and Venus are located between the Sun and the Earth. The 11th century Persian astronomer Avicenna claimed to have observed the transit of Venus, which later astronomers took as confirmation of Ptolemy 's theory. In the 12th century, the Andalusian astronomer Ibn Bajjah observed "two planets as black spots on the face of the Sun '', which were later identified as the transits of Venus and Mercury by the Maragha astronomer Qotb al - Din Shirazi in the 13th century.
When the Italian physicist Galileo Galilei first observed the planet in the early 17th century, he found it showed phases like the Moon, varying from crescent to gibbous to full and vice versa. When Venus is furthest from the Sun in the sky, it shows a half - lit phase, and when it is closest to the Sun in the sky, it shows as a crescent or full phase. This could be possible only if Venus orbited the Sun, and this was among the first observations to clearly contradict the Ptolemaic geocentric model that the Solar System was concentric and centred on Earth.
The 1639 transit of Venus was accurately predicted by Jeremiah Horrocks and observed by him and his friend, William Crabtree, at each of their respective homes, on 4 December 1639 (24 November under the Julian calendar in use at that time).
The atmosphere of Venus was discovered in 1761 by Russian polymath Mikhail Lomonosov. Venus 's atmosphere was observed in 1790 by German astronomer Johann Schröter. Schröter found when the planet was a thin crescent, the cusps extended through more than 180 °. He correctly surmised this was due to scattering of sunlight in a dense atmosphere. Later, American astronomer Chester Smith Lyman observed a complete ring around the dark side of the planet when it was at inferior conjunction, providing further evidence for an atmosphere. The atmosphere complicated efforts to determine a rotation period for the planet, and observers such as Italian - born astronomer Giovanni Cassini and Schröter incorrectly estimated periods of about 24 h from the motions of markings on the planet 's apparent surface.
Little more was discovered about Venus until the 20th century. Its almost featureless disc gave no hint what its surface might be like, and it was only with the development of spectroscopic, radar and ultraviolet observations that more of its secrets were revealed. The first ultraviolet observations were carried out in the 1920s, when Frank E. Ross found that ultraviolet photographs revealed considerable detail that was absent in visible and infrared radiation. He suggested this was due to a dense, yellow lower atmosphere with high cirrus clouds above it.
Spectroscopic observations in the 1900s gave the first clues about the Venusian rotation. Vesto Slipher tried to measure the Doppler shift of light from Venus, but found he could not detect any rotation. He surmised the planet must have a much longer rotation period than had previously been thought. Later work in the 1950s showed the rotation was retrograde. Radar observations of Venus were first carried out in the 1960s, and provided the first measurements of the rotation period, which were close to the modern value.
Radar observations in the 1970s revealed details of the Venusian surface for the first time. Pulses of radio waves were beamed at the planet using the 300 m (980 ft) radio telescope at Arecibo Observatory, and the echoes revealed two highly reflective regions, designated the Alpha and Beta regions. The observations also revealed a bright region attributed to mountains, which was called Maxwell Montes. These three features are now the only ones on Venus that do not have female names.
The first robotic space probe mission to Venus, and the first to any planet, began with the Soviet Venera program in 1961. The United States ' exploration of Venus had its first success with the Mariner 2 mission on 14 December 1962, becoming the world 's first successful interplanetary mission, passing 34,833 km (21,644 mi) above the surface of Venus, and gathering data on the planet 's atmosphere.
On 18 October 1967, the Soviet Venera 4 successfully entered the atmosphere and deployed science experiments. Venera 4 showed the surface temperature was hotter than Mariner 2 had calculated, at almost 500 ° C, determined that the atmosphere is 95 % carbon dioxide (CO 2), and discovered that Venus 's atmosphere was considerably denser than Venera 4 's designers had anticipated. The joint Venera 4 -- Mariner 5 data were analysed by a combined Soviet -- American science team in a series of colloquia over the following year, in an early example of space cooperation.
In 1974, Mariner 10 swung by Venus on its way to Mercury and took ultraviolet photographs of the clouds, revealing the extraordinarily high wind speeds in the Venusian atmosphere.
In 1975, the Soviet Venera 9 and 10 landers transmitted the first images from the surface of Venus, which were in black and white. In 1982 the first colour images of the surface were obtained with the Soviet Venera 13 and 14 landers.
NASA obtained additional data in 1978 with the Pioneer Venus project that consisted of two separate missions: Pioneer Venus Orbiter and Pioneer Venus Multiprobe. The successful Soviet Venera program came to a close in October 1983, when Venera 15 and 16 were placed in orbit to conduct detailed mapping of 25 % of Venus 's terrain (from the north pole to 30 ° N latitude).
Several other Venus flybys took place in the 1980s and 1990s that increased the understanding of Venus, including Vega 1 (1985), Vega 2 (1985), Galileo (1990), Magellan (1994), Cassini -- Huygens (1998), and MESSENGER (2006). Then, Venus Express by the European Space Agency (ESA) entered orbit around Venus in April 2006. Equipped with seven scientific instruments, Venus Express provided unprecedented long - term observation of Venus 's atmosphere. ESA concluded that mission in December 2014.
As of 2016, Japan 's Akatsuki is in a highly elliptical orbit around Venus since 7 December 2015, and there are several probing proposals under study by Roscosmos, NASA, and India 's ISRO.
In 2016, NASA announced that it was planning a rover, the Automaton Rover for Extreme Environments, designed to survive for an extended time in Venus 's environmental conditions. It would be controlled by a mechanical computer and driven by wind power.
Venus is a primary feature of the night sky, and so has been of remarkable importance in mythology, astrology and fiction throughout history and in different cultures. Classical poets such as Homer, Sappho, Ovid and Virgil spoke of the star and its light. Romantic poets such as William Blake, Robert Frost, Alfred Lord Tennyson and William Wordsworth wrote odes to it. With the invention of the telescope, the idea that Venus was a physical world and possible destination began to take form.
The impenetrable Venusian cloud cover gave science fiction writers free rein to speculate on conditions at its surface; all the more so when early observations showed that not only was it similar in size to Earth, it possessed a substantial atmosphere. Closer to the Sun than Earth, the planet was frequently depicted as warmer, but still habitable by humans. The genre reached its peak between the 1930s and 1950s, at a time when science had revealed some aspects of Venus, but not yet the harsh reality of its surface conditions. Findings from the first missions to Venus showed the reality to be quite different, and brought this particular genre to an end. As scientific knowledge of Venus advanced, so science fiction authors tried to keep pace, particularly by conjecturing human attempts to terraform Venus.
The astronomical symbol for Venus is the same as that used in biology for the female sex: a circle with a small cross beneath. The Venus symbol also represents femininity, and in Western alchemy stood for the metal copper. Polished copper has been used for mirrors from antiquity, and the symbol for Venus has sometimes been understood to stand for the mirror of the goddess.
The speculation of the existence of life on Venus decreased significantly since the early 1960s, when spacecraft began studying Venus and it became clear that the conditions on Venus are extreme compared to those on Earth.
The fact that Venus is located closer to the Sun than Earth, its surface temperature of nearly 735 K (462 ° C; 863 ° F), its atmospheric pressure of ninety times that of Earth, and the extreme impact of the greenhouse effect make water - based life as we know it unlikely. A few scientists have speculated that thermoacidophilic extremophile microorganisms might exist in the lower - temperature, acidic upper layers of the Venusian atmosphere. The atmospheric pressure and temperature fifty kilometres above the surface are similar to those at Earth 's surface. This has led to proposals to use aerostats (lighter - than - air balloons) for initial exploration and ultimately for permanent "floating cities '' in the Venusian atmosphere. Among the many engineering challenges are the dangerous amounts of sulfuric acid at these heights.
|
the deviation of any variable (x) from the mean measured in terms of standard deviations is | Standard deviation - wikipedia
In statistics, the standard deviation (SD, also represented by the Greek letter sigma σ or the Latin letter s) is a measure that is used to quantify the amount of variation or dispersion of a set of data values. A low standard deviation indicates that the data points tend to be close to the mean (also called the expected value) of the set, while a high standard deviation indicates that the data points are spread out over a wider range of values.
The standard deviation of a random variable, statistical population, data set, or probability distribution is the square root of its variance. It is algebraically simpler, though in practice less robust, than the average absolute deviation. A useful property of the standard deviation is that, unlike the variance, it is expressed in the same units as the data.
In addition to expressing the variability of a population, the standard deviation is commonly used to measure confidence in statistical conclusions. For example, the margin of error in polling data is determined by calculating the expected standard deviation in the results if the same poll were to be conducted multiple times. This derivation of a standard deviation is often called the "standard error '' of the estimate or "standard error of the mean '' when referring to a mean. It is computed as the standard deviation of all the means that would be computed from that population if an infinite number of samples were drawn and a mean for each sample were computed.
The standard deviation of a population and the standard error of a statistic derived from that population (such as the mean) are quite different but related: the standard error of the mean equals the population standard deviation divided by the square root of the number of observations. The reported margin of error of a poll is computed from the standard error of the mean (or alternatively from the product of the standard deviation of the population and the inverse of the square root of the sample size, which is the same thing) and is typically about twice the standard error -- the half - width of a 95 percent confidence interval.
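In symbols, with σ the population standard deviation and n the number of observations, the relation described above is

\mathrm{SE}(\bar{x}) = \frac{\sigma}{\sqrt{n}}.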
In science, many researchers report the standard deviation of experimental data, and only effects that fall much farther than two standard deviations away from what would have been expected are considered statistically significant -- normal random error or variation in the measurements is in this way distinguished from likely genuine effects or associations. The standard deviation is also important in finance, where the standard deviation on the rate of return on an investment is a measure of the volatility of the investment.
When only a sample of data from a population is available, the term standard deviation of the sample or sample standard deviation can refer to either the above - mentioned quantity as applied to those data or to a modified quantity that is an unbiased estimate of the population standard deviation (the standard deviation of the entire population).
Logan gives the following example. Furness and Bryant measured the resting metabolic rate for 8 male and 6 female breeding Northern fulmars. The table shows the Furness data set.

Male metabolic rates: 525.8, 605.7, 843.3, 1195.5, 1945.6, 2135.6, 2308.7, 2950.0
Female metabolic rates: 727.7, 1086.5, 1091.0, 1361.3, 1490.5, 1956.1
The graph shows the metabolic rate for males and females. By visual inspection, it appears that the variability of the metabolic rate is greater for males than for females.
The sample standard deviation of the metabolic rate for the female fulmars is calculated as follows. The formula for the sample standard deviation is

s = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(x_{i}-\bar{x}\right)^{2}},

where x_1, x_2, ..., x_N are the observed values of the sample items, x̄ is the mean value of these observations, and N is the number of observations in the sample.
In the sample standard deviation formula, for this example, the numerator is the sum of the squared deviation of each individual animal 's metabolic rate from the mean metabolic rate. The table below shows the calculation of this sum of squared deviations for the female fulmars. For females, the sum of squared deviations is 886047.09, as shown in the table.
The denominator in the sample standard deviation formula is N -- 1, where N is the number of animals. In this example, there are N = 6 females, so the denominator is 6 -- 1 = 5. The sample standard deviation for the female fulmars is therefore

s = \sqrt{\frac{886047.09}{5}} \approx 420.96.
For the male fulmars, a similar calculation gives a sample standard deviation of 894.37, approximately twice as large as the standard deviation for the females. The graph shows the metabolic rate data, the means (red dots), and the standard deviations (red lines) for females and males.
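A short script reproduces the quantities quoted in this example from the values tabulated above; it is a sketch using only the Python standard library.

```python
from statistics import mean, stdev

# Resting metabolic rates for the breeding Northern fulmars tabulated above
males = [525.8, 605.7, 843.3, 1195.5, 1945.6, 2135.6, 2308.7, 2950.0]
females = [727.7, 1086.5, 1091.0, 1361.3, 1490.5, 1956.1]

# Sum of squared deviations for the females (compare with 886047.09 in the text)
f_mean = mean(females)
print(round(sum((x - f_mean) ** 2 for x in females), 2))

# stdev() divides by N - 1, i.e. it computes the sample standard deviation
print(round(stdev(females), 2))  # ~420.96
print(round(stdev(males), 2))    # ~894.37
```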
Use of the sample standard deviation implies that these 14 fulmars are a sample from a larger population of fulmars. If these 14 fulmars comprised the entire population (perhaps the last 14 surviving fulmars), then instead of the sample standard deviation, the calculation would use the population standard deviation. In the population standard deviation formula, the denominator is N instead of N - 1. It is rare that measurements can be taken for an entire population, so, by default, statistical software packages calculate the sample standard deviation. Similarly, journal articles report the sample standard deviation unless otherwise specified.
Suppose that the entire population of interest was eight students in a particular class. For a finite set of numbers, the population standard deviation is found by taking the square root of the average of the squared deviations of the values subtracted from their average value. The marks of a class of eight students (that is, a statistical population) are the following eight values:

2, 4, 4, 4, 5, 5, 7, 9.

These eight data points have the mean (average) of 5:

μ = (2 + 4 + 4 + 4 + 5 + 5 + 7 + 9) / 8 = 40 / 8 = 5.

First, calculate the deviations of each data point from the mean, and square the result of each:

(2 − 5)² = 9, (4 − 5)² = 1, (4 − 5)² = 1, (4 − 5)² = 1, (5 − 5)² = 0, (5 − 5)² = 0, (7 − 5)² = 4, (9 − 5)² = 16.

The variance is the mean of these values:

σ² = (9 + 1 + 1 + 1 + 0 + 0 + 4 + 16) / 8 = 32 / 8 = 4,

and the population standard deviation is equal to the square root of the variance:

σ = √4 = 2.
This formula is valid only if the eight values with which we began form the complete population. If the values instead were a random sample drawn from some large parent population (for example, they were 8 marks randomly and independently chosen from a class of 2 million), then one often divides by 7 (which is n − 1) instead of 8 (which is n) in the denominator of the last formula. In that case the result of the original formula would be called the sample standard deviation. Dividing by n − 1 rather than by n gives an unbiased estimate of the variance of the larger parent population. This is known as Bessel 's correction.
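A minimal sketch of the distinction, using the eight marks above and the standard library: pstdev() divides by n (the population standard deviation), while stdev() divides by n − 1 (the Bessel - corrected sample estimate).

```python
from statistics import pstdev, stdev

marks = [2, 4, 4, 4, 5, 5, 7, 9]  # the eight marks treated above

print(pstdev(marks))  # 2.0, dividing by n = 8 (population standard deviation)
print(stdev(marks))   # ~2.138, dividing by n - 1 = 7 (sample standard deviation)
```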
If the population of interest is approximately normally distributed, the standard deviation provides information on the proportion of observations above or below certain values. For example, the average height for adult men in the United States is about 70 inches (177.8 cm), with a standard deviation of around 3 inches (7.62 cm). This means that most men (about 68 %, assuming a normal distribution) have a height within 3 inches (7.62 cm) of the mean (67 -- 73 inches (170.18 -- 185.42 cm)) -- one standard deviation -- and almost all men (about 95 %) have a height within 6 inches (15.24 cm) of the mean (64 -- 76 inches (162.56 -- 193.04 cm)) -- two standard deviations. If the standard deviation were zero, then all men would be exactly 70 inches (177.8 cm) tall. If the standard deviation were 20 inches (50.8 cm), then men would have much more variable heights, with a typical range of about 50 -- 90 inches (127 -- 228.6 cm). Three standard deviations account for 99.7 % of the sample population being studied, assuming the distribution is normal (bell - shaped). (See the 68 - 95 - 99.7 rule, or the empirical rule, for more information.)
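The 68 %, 95 % and 99.7 % figures follow from the normal cumulative distribution: the proportion of a normal population lying within k standard deviations of the mean is erf(k / √2). A quick check with the standard library:

```python
import math

# Proportion of a normal population within k standard deviations of the mean
for k in (1, 2, 3):
    print(k, round(math.erf(k / math.sqrt(2)), 4))  # 0.6827, 0.9545, 0.9973
```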
Let X be a random variable with mean value μ:

E[X] = μ.
Here the operator E denotes the average or expected value of X. Then the standard deviation of X is the quantity

\sigma = \sqrt{E\left[(X-\mu)^{2}\right]} = \sqrt{E\left[X^{2}\right]-\left(E[X]\right)^{2}}
(derived using the properties of expected value).
In other words, the standard deviation σ (sigma) is the square root of the variance of X; i.e., it is the square root of the average value of (X − μ)².
The standard deviation of a (univariate) probability distribution is the same as that of a random variable having that distribution. Not all random variables have a standard deviation, since these expected values need not exist. For example, the standard deviation of a random variable that follows a Cauchy distribution is undefined because its expected value μ is undefined.
In the case where X takes random values from a finite data set x_1, x_2, ..., x_N, with each value having the same probability, the standard deviation is

\sigma = \sqrt{\frac{1}{N}\left[(x_{1}-\mu)^{2}+(x_{2}-\mu)^{2}+\cdots+(x_{N}-\mu)^{2}\right]}, \quad \text{where} \quad \mu = \frac{1}{N}(x_{1}+\cdots+x_{N}),
or, using summation notation,

\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(x_{i}-\mu)^{2}}.
If, instead of having equal probabilities, the values have different probabilities, let x_1 have probability p_1, x_2 have probability p_2, ..., x_N have probability p_N. In this case, the standard deviation will be

\sigma = \sqrt{\sum_{i=1}^{N}p_{i}(x_{i}-\mu)^{2}}, \quad \text{where} \quad \mu = \sum_{i=1}^{N}p_{i}x_{i}.
The standard deviation of a continuous real - valued random variable X with probability density function p (x) is

\sigma = \sqrt{\int (x-\mu)^{2}\,p(x)\,dx}, \quad \text{where} \quad \mu = \int x\,p(x)\,dx,
and where the integrals are definite integrals taken for x ranging over the set of possible values of the random variable X.
In the case of a parametric family of distributions, the standard deviation can be expressed in terms of the parameters. For example, in the case of the log - normal distribution with parameters μ and σ, the standard deviation is

\sqrt{\left(e^{\sigma^{2}}-1\right)e^{2\mu+\sigma^{2}}}.
One can find the standard deviation of an entire population in cases (such as standardized testing) where every member of a population is sampled. In cases where that can not be done, the standard deviation σ is estimated by examining a random sample taken from the population and computing a statistic of the sample, which is used as an estimate of the population standard deviation. Such a statistic is called an estimator, and the estimator (or the value of the estimator, namely the estimate) is called a sample standard deviation, and is denoted by s (possibly with modifiers). However, unlike in the case of estimating the population mean, for which the sample mean is a simple estimator with many desirable properties (unbiased, efficient, maximum likelihood), there is no single estimator for the standard deviation with all these properties, and unbiased estimation of standard deviation is a very technically involved problem. Most often, the standard deviation is estimated using the corrected sample standard deviation (using N − 1), defined below, and this is often referred to as the "sample standard deviation '', without qualifiers. However, other estimators are better in other respects: the uncorrected estimator (using N) yields lower mean squared error, while using N − 1.5 (for the normal distribution) almost completely eliminates bias.
The formula for the population standard deviation (of a finite population) can be applied to the sample, using the size of the sample as the size of the population (though the actual population size from which the sample is drawn may be much larger). This estimator, denoted by s_N, is known as the uncorrected sample standard deviation, or sometimes the standard deviation of the sample (considered as the entire population), and is defined as follows:

s_{N} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_{i}-\bar{x}\right)^{2}},
where x_1, x_2, ..., x_N are the observed values of the sample items and x̄ is the mean value of these observations, while the denominator N stands for the size of the sample: this is the square root of the sample variance, which is the average of the squared deviations about the sample mean.
This is a consistent estimator (it converges in probability to the population value as the number of samples goes to infinity), and is the maximum - likelihood estimate when the population is normally distributed. However, this is a biased estimator, as the estimates are generally too low. The bias decreases as sample size grows, dropping off as 1 / N, and thus is most significant for small or moderate sample sizes; for N > 75 the bias is below 1 %. Thus for very large sample sizes, the uncorrected sample standard deviation is generally acceptable. This estimator also has a uniformly smaller mean squared error than the corrected sample standard deviation.
If the biased sample variance (the second central moment of the sample, which is a downward - biased estimate of the population variance) is used to compute an estimate of the population 's standard deviation, the result is

s_{N} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_{i}-\bar{x}\right)^{2}}.
Here taking the square root introduces further downward bias, by Jensen 's inequality, due to the square root being a concave function. The bias in the variance is easily corrected, but the bias from the square root is more difficult to correct, and depends on the distribution in question.
An unbiased estimator for the variance is given by applying Bessel 's correction, using N − 1 instead of N to yield the unbiased sample variance, denoted s²:

s^{2} = \frac{1}{N-1}\sum_{i=1}^{N}\left(x_{i}-\bar{x}\right)^{2}.
This estimator is unbiased if the variance exists and the sample values are drawn independently with replacement. N − 1 corresponds to the number of degrees of freedom in the vector of deviations from the mean, (x_1 − x̄, ..., x_n − x̄).
Taking square roots reintroduces bias (because the square root is a nonlinear function, which does not commute with the expectation), yielding the corrected sample standard deviation, denoted by s:

s = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(x_{i}-\bar{x}\right)^{2}}.
As explained above, while s² is an unbiased estimator for the population variance, s is still a biased estimator for the population standard deviation, though markedly less biased than the uncorrected sample standard deviation. This estimator is commonly used and generally known simply as the "sample standard deviation ''. The bias may still be large for small samples (N less than 10). As sample size increases, the amount of bias decreases. We obtain more information and the difference between 1 / N and 1 / (N − 1) becomes smaller.
For unbiased estimation of standard deviation, there is no formula that works across all distributions, unlike for mean and variance. Instead, s is used as a basis, and is scaled by a correction factor to produce an unbiased estimate. For the normal distribution, an unbiased estimator is given by s / c_4, where the correction factor (which depends on N) is given in terms of the Gamma function, and equals:

c_{4}(N) = \sqrt{\frac{2}{N-1}}\,\frac{\Gamma\left(\frac{N}{2}\right)}{\Gamma\left(\frac{N-1}{2}\right)}.
This arises because the sampling distribution of the sample standard deviation follows a (scaled) chi distribution, and the correction factor is the mean of the chi distribution.
An approximation can be given by replacing N − 1 with N − 1.5, yielding:

\hat{\sigma} = \sqrt{\frac{1}{N-1.5}\sum_{i=1}^{N}\left(x_{i}-\bar{x}\right)^{2}}.
The error in this approximation decays quadratically (as 1 / N²), and it is suited for all but the smallest samples or highest precision: for N = 3 the bias is equal to 1.3 %, and for N = 9 the bias is already less than 0.1 %.
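A brief numerical comparison of the Gamma - function factor given above with the N − 1.5 approximation; the comparison column uses c_4(N) ≈ √((N − 1.5) / (N − 1)), which follows from rewriting the approximate estimator in terms of s.

```python
import math

def c4(n):
    # Correction factor for normally distributed data (Gamma-function form above)
    return math.sqrt(2 / (n - 1)) * math.gamma(n / 2) / math.gamma((n - 1) / 2)

for n in (2, 5, 10, 100):
    approx = math.sqrt((n - 1.5) / (n - 1))  # effect of using N - 1.5 instead of N - 1
    print(n, round(c4(n), 5), round(approx, 5))
# The two columns agree closely except for the smallest samples.
```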
For other distributions, the correct formula depends on the distribution, but a rule of thumb is to use the further refinement of the approximation:

\hat{\sigma} = \sqrt{\frac{1}{N-1.5-\frac{1}{4}\gamma_{2}}\sum_{i=1}^{N}\left(x_{i}-\bar{x}\right)^{2}},
where γ denotes the population excess kurtosis. The excess kurtosis may be either known beforehand for certain distributions, or estimated from the data.
The standard deviation we obtain by sampling a distribution is itself not absolutely accurate, both for mathematical reasons (explained here by the confidence interval) and for practical reasons of measurement (measurement error). The mathematical effect can be described by the confidence interval or CI. To show how a larger sample will make the confidence interval narrower, consider the following examples: A small population of N = 2 has only 1 degree of freedom for estimating the standard deviation. The result is that a 95 % CI of the SD runs from 0.45 × SD to 31.9 × SD; the factors here are as follows:
where q_p is the p-th quantile of the chi-square distribution with k degrees of freedom, and 1 − α is the confidence level. This is equivalent to the following:
With k = 1, q_0.025 = 0.000982 and q_0.975 = 5.024. The reciprocals of the square roots of these two numbers give us the factors 0.45 and 31.9 given above.
A larger population of N = 10 has 9 degrees of freedom for estimating the standard deviation. The same computations as above give us in this case a 95 % CI running from 0.69 × SD to 1.83 × SD. So even with a sample population of 10, the actual SD can still be almost a factor 2 higher than the sampled SD. For a sample population N = 100, this is down to 0.88 × SD to 1.16 × SD. To be more certain that the sampled SD is close to the actual SD we need to sample a large number of points.
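A short Python check of these multipliers, assuming SciPy is available (the helper name sd_ci_factors is illustrative); it reproduces the factors quoted above for N = 2, 10 and 100 directly from the chi-square quantiles.

```python
import numpy as np
from scipy import stats

def sd_ci_factors(n, conf=0.95):
    """Multipliers (lo, hi) such that the CI for the true SD is (lo * s, hi * s),
    where s is the sample SD with k = n - 1 degrees of freedom."""
    k = n - 1
    alpha = 1.0 - conf
    q_lo = stats.chi2.ppf(alpha / 2, k)      # lower chi-square quantile
    q_hi = stats.chi2.ppf(1 - alpha / 2, k)  # upper chi-square quantile
    return np.sqrt(k / q_hi), np.sqrt(k / q_lo)

for n in (2, 10, 100):
    lo, hi = sd_ci_factors(n)
    print(n, round(lo, 2), round(hi, 2))  # approx. (0.45, 31.9), (0.69, 1.83), (0.88, 1.16)
```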
These same formulae can be used to obtain confidence intervals on the variance of residuals from a least squares fit under standard normal theory, where k is now the number of degrees of freedom for error.
The standard deviation is invariant under changes in location, and scales directly with the scale of the random variable. Thus, for a constant c and random variables X and Y:
The standard deviation of the sum of two random variables can be related to their individual standard deviations and the covariance between them:
where var = σ² and cov stand for variance and covariance, respectively.
The calculation of the sum of squared deviations can be related to moments calculated directly from the data. In the following formula, the letter E is interpreted to mean expected value, i.e., mean.
The sample standard deviation can be computed as:
For a finite population with equal probabilities at all points, we have σ = √( E(x²) − (E(x))² ).
This means that the standard deviation is equal to the square root of the difference between the average of the squares of the values and the square of the average value. See computational formula for the variance for proof, and for an analogous result for the sample standard deviation.
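A minimal numerical illustration of this identity (the data set here is arbitrary, chosen only for the example):

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

direct = np.sqrt(np.mean((x - x.mean()) ** 2))            # average squared deviation, then square root
via_moments = np.sqrt(np.mean(x ** 2) - np.mean(x) ** 2)  # sqrt(E[X^2] - (E[X])^2)
print(direct, via_moments)  # both equal 2.0 for this data
```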
A large standard deviation indicates that the data points can spread far from the mean and a small standard deviation indicates that they are clustered closely around the mean.
For example, each of the three populations (0, 0, 14, 14), (0, 6, 8, 14) and (6, 6, 8, 8) has a mean of 7. Their standard deviations are 7, 5, and 1, respectively. The third population has a much smaller standard deviation than the other two because its values are all close to 7. It will have the same units as the data points themselves. If, for instance, the data set (0, 6, 8, 14) represents the ages of a population of four siblings in years, the standard deviation is 5 years. As another example, the population (1000, 1006, 1008, 1014) may represent the distances traveled by four athletes, measured in meters. It has a mean of 1007 meters, and a standard deviation of 5 meters.
Standard deviation may serve as a measure of uncertainty. In physical science, for example, the reported standard deviation of a group of repeated measurements gives the precision of those measurements. When deciding whether measurements agree with a theoretical prediction, the standard deviation of those measurements is of crucial importance: if the mean of the measurements is too far away from the prediction (with the distance measured in standard deviations), then the theory being tested probably needs to be revised. This makes sense since they fall outside the range of values that could reasonably be expected to occur, if the prediction were correct and the standard deviation appropriately quantified. See prediction interval.
While the standard deviation does measure how far typical values tend to be from the mean, other measures are available. An example is the mean absolute deviation, which might be considered a more direct measure of average distance, compared to the root mean square distance inherent in the standard deviation.
The practical value of understanding the standard deviation of a set of values is in appreciating how much variation there is from the average (mean).
Standard deviation is often used to compare real-world data against a model to test the model. For example, in industrial applications the weight of products coming off a production line may need to comply with a legally required value. By weighing some fraction of the products an average weight can be found, which will always be slightly different from the long-term average. By using standard deviations, a minimum and maximum value can be calculated within which the averaged weight will lie some very high percentage of the time (99.9 % or more). If it falls outside that range then the production process may need to be corrected. Statistical tests such as these are particularly important when the testing is relatively expensive, for example if the product needs to be opened and drained and weighed, or if the product is otherwise used up by the test.
In experimental science, a theoretical model of reality is used. Particle physics conventionally uses a standard of "5 sigma '' for the declaration of a discovery. A five - sigma level translates to one chance in 3.5 million that a random fluctuation would yield the result. This level of certainty was required in order to assert that a particle consistent with the Higgs boson had been discovered in two independent experiments at CERN, and this was also the significance level leading to the declaration of the first detection of gravitational waves.
As a simple example, consider the average daily maximum temperatures for two cities, one inland and one on the coast. It is helpful to understand that the range of daily maximum temperatures for cities near the coast is smaller than for cities inland. Thus, while these two cities may each have the same average maximum temperature, the standard deviation of the daily maximum temperature for the coastal city will be less than that of the inland city as, on any particular day, the actual maximum temperature is more likely to be farther from the average maximum temperature for the inland city than for the coastal one.
In finance, standard deviation is often used as a measure of the risk associated with price - fluctuations of a given asset (stocks, bonds, property, etc.), or the risk of a portfolio of assets (actively managed mutual funds, index mutual funds, or ETFs). Risk is an important factor in determining how to efficiently manage a portfolio of investments because it determines the variation in returns on the asset and / or portfolio and gives investors a mathematical basis for investment decisions (known as mean - variance optimization). The fundamental concept of risk is that as it increases, the expected return on an investment should increase as well, an increase known as the risk premium. In other words, investors should expect a higher return on an investment when that investment carries a higher level of risk or uncertainty. When evaluating investments, investors should estimate both the expected return and the uncertainty of future returns. Standard deviation provides a quantified estimate of the uncertainty of future returns.
For example, assume an investor had to choose between two stocks. Stock A over the past 20 years had an average return of 10 percent, with a standard deviation of 20 percentage points (pp) and Stock B, over the same period, had average returns of 12 percent but a higher standard deviation of 30 pp. On the basis of risk and return, an investor may decide that Stock A is the safer choice, because Stock B 's additional two percentage points of return is not worth the additional 10 pp standard deviation (greater risk or uncertainty of the expected return). Stock B is likely to fall short of the initial investment (but also to exceed the initial investment) more often than Stock A under the same circumstances, and is estimated to return only two percent more on average. In this example, Stock A is expected to earn about 10 percent, plus or minus 20 pp (a range of 30 percent to − 10 percent), about two - thirds of the future year returns. When considering more extreme possible returns or outcomes in future, an investor should expect results of as much as 10 percent plus or minus 60 pp, or a range from 70 percent to − 50 percent, which includes outcomes for three standard deviations from the average return (about 99.7 percent of probable returns).
Calculating the average (or arithmetic mean) of the return of a security over a given period will generate the expected return of the asset. For each period, subtracting the expected return from the actual return results in the difference from the mean. Squaring the difference in each period and taking the average gives the overall variance of the return of the asset. The larger the variance, the greater risk the security carries. Finding the square root of this variance will give the standard deviation of the investment tool in question.
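As a minimal sketch of this calculation (the return figures below are made up purely for illustration):

```python
returns = [0.10, 0.12, -0.04, 0.07, 0.05]  # hypothetical yearly returns of one asset

expected = sum(returns) / len(returns)                            # average (expected) return
variance = sum((r - expected) ** 2 for r in returns) / len(returns)  # mean squared difference
risk = variance ** 0.5                                            # standard deviation of returns
print(expected, variance, risk)
```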
Population standard deviation is used to set the width of Bollinger Bands, a widely adopted technical analysis tool. For example, the upper Bollinger Band is given as x + nσ. The most commonly used value for n is 2; there is about a five percent chance of going outside, assuming a normal distribution of returns.
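As a sketch of how such bands are typically computed, assuming a pandas Series of prices; the 20-period window, the function name and the column names are illustrative choices, not prescribed by the text above.

```python
import pandas as pd

def bollinger_bands(prices: pd.Series, window: int = 20, n_std: float = 2.0) -> pd.DataFrame:
    """Rolling mean plus/minus n_std rolling (population) standard deviations."""
    mid = prices.rolling(window).mean()
    sd = prices.rolling(window).std(ddof=0)  # population standard deviation, as described above
    return pd.DataFrame({
        "middle": mid,
        "upper": mid + n_std * sd,
        "lower": mid - n_std * sd,
    })
```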
Financial time series are known to be non-stationary series, whereas the statistical calculations above, such as standard deviation, apply only to stationary series. To apply the above statistical tools to non-stationary series, the series first must be transformed to a stationary series, enabling use of statistical tools that now have a valid basis from which to work.
To gain some geometric insights and clarification, we will start with a population of three values, x_1, x_2, x_3. This defines a point P = (x_1, x_2, x_3) in R³. Consider the line L = {(r, r, r) : r ∈ R}. This is the "main diagonal" going through the origin. If our three given values were all equal, then the standard deviation would be zero and P would lie on L. So it is not unreasonable to assume that the standard deviation is related to the distance of P to L. That is indeed the case. To move orthogonally from L to the point P, one begins at the point:
whose coordinates are the mean of the values we started out with.
M is on L, therefore M = (l, l, l) for some l ∈ R.
The line L is to be orthogonal to the vector from M to P. Therefore:
L · (P − M) = 0
(r, r, r) · (x_1 − l, x_2 − l, x_3 − l) = 0
r(x_1 − l + x_2 − l + x_3 − l) = 0
r(Σ_i x_i − 3l) = 0
Σ_i x_i − 3l = 0
(1/3) Σ_i x_i = l
x̄ = l
A little algebra shows that the distance between P and M (which is the same as the orthogonal distance between P and the line L), √(Σ_i (x_i − x̄)²), is equal to the standard deviation of the vector (x_1, x_2, x_3), multiplied by the square root of the number of dimensions of the vector (3 in this case).
An observation is rarely more than a few standard deviations away from the mean. Chebyshev 's inequality ensures that, for all distributions for which the standard deviation is defined, the amount of data within a number of standard deviations of the mean is at least as much as given in the following table.
The central limit theorem states that the distribution of an average of many independent, identically distributed random variables tends toward the famous bell-shaped normal distribution with a probability density function of
f(x) = (1 / (σ√(2π))) · exp(−(x − μ)² / (2σ²)),
where μ is the expected value of the random variables, σ equals their distribution's standard deviation divided by √n (since it is the average of n variables that is being described), and n is the number of random variables. The standard deviation therefore is simply a scaling variable that adjusts how broad the curve will be, though it also appears in the normalizing constant.
If a data distribution is approximately normal, then the proportion of data values within z standard deviations of the mean is defined by: Proportion = erf(z / √2),
where erf is the error function. The proportion that is less than or equal to a number, x, is given by the cumulative distribution function:
If a data distribution is approximately normal then about 68 percent of the data values are within one standard deviation of the mean (mathematically, μ ± σ, where μ is the arithmetic mean), about 95 percent are within two standard deviations (μ ± 2σ), and about 99.7 percent lie within three standard deviations (μ ± 3σ). This is known as the 68 - 95 - 99.7 rule, or the empirical rule.
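A one-line check of these proportions, using the error-function relation given above (the fraction within z standard deviations of the mean is erf(z / √2)):

```python
import math

for z in (1, 2, 3):
    print(z, round(math.erf(z / math.sqrt(2)), 4))  # approx. 0.6827, 0.9545, 0.9973
```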
For various values of z, the percentages of values expected to lie inside and outside the symmetric interval CI = (−zσ, zσ) are as follows:
The mean and the standard deviation of a set of data are descriptive statistics usually reported together. In a certain sense, the standard deviation is a "natural" measure of statistical dispersion if the center of the data is measured about the mean. This is because the standard deviation from the mean is smaller than from any other point. The precise statement is the following: suppose x_1, ..., x_n are real numbers and define the function:
Using calculus or by completing the square, it is possible to show that σ(r) has a unique minimum at the mean: r = x̄.
Variability can also be measured by the coefficient of variation, which is the ratio of the standard deviation to the mean. It is a dimensionless number.
Often, we want some information about the precision of the mean we obtained. We can obtain this by determining the standard deviation of the sampled mean. Assuming statistical independence of the values in the sample, the standard deviation of the mean is related to the standard deviation of the distribution by: σ_mean = σ / √N,
where N is the number of observations in the sample used to estimate the mean. This can easily be proven with (see basic properties of the variance):
var(x̄) = var((1/N) Σ x_i) = (1/N²) Σ var(x_i) (statistical independence is assumed),
hence
var(x̄) = σ² / N,
resulting in:
σ_mean = σ / √N.
It should be emphasized that in order to estimate the standard deviation of the mean σ_mean it is necessary to know the standard deviation of the entire population σ beforehand. However, in most applications this parameter is unknown. For example, if a series of 10 measurements of a previously unknown quantity is performed in a laboratory, it is possible to calculate the resulting sample mean and sample standard deviation, but it is impossible to calculate the standard deviation of the mean.
The following two formulas can represent a running (repeatedly updated) standard deviation. A set of two power sums s_1 and s_2 is computed over a set of N values of x, denoted as x_1, ..., x_N: s_1 = Σ x_k and s_2 = Σ x_k², with both sums running over k = 1, ..., N.
Given the results of these running summations, the values N, s_1, s_2 can be used at any time to compute the current value of the running standard deviation: σ = √(N·s_2 − s_1²) / N.
Where N, as mentioned above, is the size of the set of values (or can also be regarded as s_0).
Similarly, the sample standard deviation is obtained by dividing by N(N − 1) under the square root instead: s = √( (N·s_2 − s_1²) / (N(N − 1)) ).
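A direct transcription of this power-sum method in Python (a sketch; the class name is illustrative, and the subtraction N·s_2 − s_1² can suffer badly from cancellation, which motivates the reduced-rounding-error method described next):

```python
import math

class RunningStd:
    """Running standard deviation from the power sums s0 (count), s1 (sum of x), s2 (sum of x^2)."""
    def __init__(self):
        self.s0 = 0
        self.s1 = 0.0
        self.s2 = 0.0

    def add(self, x):
        self.s0 += 1
        self.s1 += x
        self.s2 += x * x

    def population_std(self):
        n = self.s0
        return math.sqrt(n * self.s2 - self.s1 ** 2) / n

    def sample_std(self):
        # requires at least two values
        n = self.s0
        return math.sqrt((n * self.s2 - self.s1 ** 2) / (n * (n - 1)))
```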
In a computer implementation, as the three s sums become large, we need to consider round - off error, arithmetic overflow, and arithmetic underflow. The method below calculates the running sums method with reduced rounding errors. This is a "one pass '' algorithm for calculating variance of n samples without the need to store prior data during the calculation. Applying this method to a time series will result in successive values of standard deviation corresponding to n data points as n grows larger with each new sample, rather than a constant - width sliding window calculation.
For k = 1, ..., n, the running mean A_k and the running sum of squared deviations Q_k are updated as
A_k = A_{k−1} + (x_k − A_{k−1}) / k
Q_k = Q_{k−1} + ((k − 1) / k) · (x_k − A_{k−1})²
where A is the mean value.
Note: Q_1 = 0 since k − 1 = 0 or x_1 = A_1.
Sample variance: s_n² = Q_n / (n − 1)
Population variance: σ_n² = Q_n / n
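A minimal Python sketch of this one-pass method, using the A_k / Q_k recurrence described above (variable and function names are illustrative):

```python
def one_pass_variance(data):
    """Welford-style one-pass running mean and variance."""
    A = 0.0  # running mean A_k
    Q = 0.0  # running sum of squared deviations Q_k
    k = 0
    for x in data:
        k += 1
        d = x - A
        A += d / k        # A_k = A_{k-1} + (x_k - A_{k-1}) / k
        Q += d * (x - A)  # equivalent to Q_{k-1} + ((k - 1) / k) * (x_k - A_{k-1})^2
    sample_var = Q / (k - 1) if k > 1 else float("nan")
    population_var = Q / k if k > 0 else float("nan")
    return A, sample_var, population_var
```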
When the values x_k are weighted with unequal weights w_k, the power sums s_0, s_1, s_2 are each computed as s_j = Σ w_k · x_k^j (for j = 0, 1, 2, summed over k).
And the standard deviation equations remain unchanged. Note that s_0 is now the sum of the weights and not the number of samples N.
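For example, a weighted (population-style) standard deviation via these weighted power sums might look as follows; this is a sketch of the frequency-weighted case only and does not implement the incremental variant discussed next.

```python
def weighted_std(values, weights):
    """Population-style weighted standard deviation from the power sums s0, s1, s2."""
    s0 = s1 = s2 = 0.0
    for x, w in zip(values, weights):
        s0 += w          # s0: sum of the weights (plays the role of N)
        s1 += w * x      # s1: weighted sum of x
        s2 += w * x * x  # s2: weighted sum of x^2
    mean = s1 / s0
    return (s2 / s0 - mean * mean) ** 0.5
```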
The incremental method with reduced rounding errors can also be applied, with some additional complexity.
A running sum of weights must be computed for each k from 1 to n:
and places where 1 / n is used above must be replaced by w / W:
In the final division,
and
or
where n is the total number of elements, and n ' is the number of elements with non-zero weights. The above formulas become equal to the simpler formulas given above if weights are taken as equal to one.
The term standard deviation was first used in writing by Karl Pearson in 1894, following his use of it in lectures. This was as a replacement for earlier alternative names for the same idea: for example, Gauss used mean error.
|
who wrote music for chitty chitty bang bang | Chitty Chitty Bang Bang - wikipedia
Chitty Chitty Bang Bang is a 1968 British musical adventure fantasy film, directed by Ken Hughes and written by Roald Dahl and Hughes, loosely based on Ian Fleming 's 1964 novel Chitty - Chitty - Bang - Bang: The Magical Car. The film stars Dick Van Dyke, Sally Ann Howes, Adrian Hall, Heather Ripley, Lionel Jeffries, James Robertson Justice, Robert Helpmann and Gert Fröbe.
The film was produced by Albert R. Broccoli, the regular co-producer of the James Bond series of films (also based on Ian Fleming novels). John Stears supervised the special effects. Irwin Kostal supervised and conducted the music, while the musical numbers, written by Richard M. and Robert B. Sherman of Mary Poppins, were staged by Marc Breaux and Dee Dee Wood. The song "Chitty Chitty Bang Bang '' was nominated for an Academy Award.
The story opens with a montage of European Grand Prix races in which one particular car appears to win every race it runs in from 1907 through 1908. However, in its final 1909 race, the car crashes and catches fire, ending its racing career. The car eventually ends up in an old garage in rural England, where two children, Jeremy (Adrian Hall) and Jemima Potts (Heather Ripley), have grown fond of it. However, a man in the junkyard intends to buy the car from the garage owner, Mr. Coggins (Desmond Llewelyn), for scrap. The children, who live with their widowed father Caractacus Potts (Dick Van Dyke), an eccentric inventor, and the family 's equally peculiar grandfather, implore their father to buy the car, but Caractacus ca n't afford it. While playing truant from school, they meet Truly Scrumptious (Sally Ann Howes), a beautiful upper - class woman with her own motor car, who brings them home to report their truancy to their father. After she leaves, Caractacus promises the children that he will save the car, but is taken aback at the cost he has committed himself to. He looks for ways to raise money to avoid letting them down.
The next morning, Potts discovers that the sweets produced by a machine he has invented can be played like a flute. He tries to sell the "Toot Sweets '' to Truly 's father, Lord Scrumptious (James Robertson Justice), a major confectionery manufacturer. He is almost successful until the whistle attracts a pack of dogs who overrun the factory, resulting in Caractacus 's proposition being rejected.
Caractacus next takes his automatic hair - cutting machine to a carnival to raise money, but his invention accidentally ruins the hair of a customer. Potts eludes the man by joining a song - and - dance act. He becomes the centre of the show and earns enough in tips to buy the car and rebuild it. They name the car "Chitty Chitty Bang Bang '' for the unusual noise of its engine. In the first trip in the car, Caractacus, the children, and Truly picnic on the beach. Caractacus tells them a tale about nasty Baron Bomburst (Gert Fröbe), the tyrant of fictional Vulgaria, who wants to steal Chitty Chitty Bang Bang.
As Potts tells his story, the quartet and the car are stranded by high tide and are attacked by pirates working for the Baron. All of a sudden, Chitty deploys huge flotation devices and transforms into a power boat, and they escape Bomburst 's yacht and return to shore. The Baron sends two spies to capture the car, but they capture Lord Scrumptious, then Grandpa Potts (Lionel Jeffries), mistaking each for the car 's creator. Caractacus, Truly, and the children see Grandpa being taken away by airship, and they give chase. When they accidentally drive off a cliff, Chitty sprouts wings and propellers and begins to fly. They follow the airship to Vulgaria and find a land without children; the Baroness Bomburst (Anna Quayle) abhors them and imprisons any she finds. Grandpa has been ordered by the Baron to make another floating car, and he bluffs his abilities to avoid being executed. The Potts ' party is hidden by the local Toymaker (Benny Hill), who now works only for the childish Baron. Chitty is discovered and taken to the castle. While Caractacus and the toymaker search for Grandpa and Truly searches for food, the children are caught by the Baroness ' Child Catcher (Robert Helpmann).
The Toymaker takes Truly and Caractacus to a grotto beneath the castle where the townspeople have been hiding their children. They concoct a scheme to free the children and the village from the Baron. The Toymaker sneaks them into the castle disguised as life - size dolls for the Baron 's birthday. Caractacus snares the Baron, and the children swarm into the banquet hall, overcoming the Baron 's palace guards and guests. In the ensuing chaos, the Baron, Baroness, and the evil Child Catcher are captured. The Potts family and Truly fly back to England. When they arrive home, Lord Scrumptious surprises Caractacus with an offer to buy the Toot Sweet as a canine confection. Caractacus, realising that he will be rich, rushes to tell Truly the news. They kiss, and Truly agrees to marry him. As they drive home, he acknowledges the importance of pragmatism, as the car takes off into the air again.
The cast includes:
The part of Truly Scrumptious had originally been offered to Julie Andrews, to reunite her with Van Dyke after their success in Mary Poppins. Andrews rejected the role specifically because she considered that the part was too close to the Poppins mould. Instead, Sally Ann Howes was given the role. Dick Van Dyke was cast after he turned down the role of Fagin from another 1968 musical Oliver! (which ended up going to Ron Moody).
The Caractacus Potts inventions in the film were created by Rowland Emett; by 1976, Time magazine, describing Emett 's work, said no term other than "Fantasticator... could remotely convey the diverse genius of the perky, pink - cheeked Englishman whose pixilations, in cartoon, watercolor and clanking 3 - D reality, range from the celebrated Far Tottering and Oyster Creek Railway to the demented thingamabobs that made the 1968 movie Chitty Chitty Bang Bang a minuscule classic. ''
Six Chitty - Chitty Bang - Bang cars were created for the film, only one of which was fully functional. At a 1973 auction in Florida, one of them sold for $37,000, equal to $203,972 today. The original "hero '' car, in a condition described as fully functional and road - going, was offered at auction on 15 May 2011 by a California - based auction house. The car sold for $805,000, less than the $1 -- 2 million it was expected to reach. It was purchased by New Zealand film director Sir Peter Jackson.
The film was the tenth most popular at the US box office in 1969.
Time began its review saying the film is a "picture for the ages -- the ages between five and twelve '' and ends noting that "At a time when violence and sex are the dual sellers at the box office, Chitty Chitty Bang Bang looks better than it is simply because it 's not not all all bad bad ''; the film 's "eleven songs have all the rich melodic variety of an automobile horn. Persistent syncopation and some breathless choreography partly redeem it, but most of the film 's sporadic success is due to Director Ken Hughes 's fantasy scenes, which make up in imagination what they lack in technical facility. ''
The New York Times critic Renata Adler wrote, "in spite of the dreadful title, Chitty Chitty Bang Bang... is a fast, dense, friendly children 's musical, with something of the joys of singing together on a team bus on the way to a game ''; Adler called the screenplay "remarkably good '' and the film 's "preoccupation with sweets and machinery seems ideal for children ''; she ends her review on the same note as Time: "There is nothing coy, or stodgy or too frightening about the film; and this year, when it has seemed highly doubtful that children ought to go to the movies at all, Chitty Chitty Bang Bang sees to it that none of the audience 's terrific eagerness to have a good time is betrayed or lost. ''
Film critic Roger Ebert reviewed the film (Chicago Sun Times, 24 December 1968). He wrote: "Chitty Chitty Bang Bang contains about the best two - hour children 's movie you could hope for, with a marvelous magical auto and lots of adventure and a nutty old grandpa and a mean Baron and some funny dances and a couple of (scary) moments. ''
In 2008 film critic and historian Leonard Maltin considered the picture "one big Edsel, with totally forgettable score and some of the shoddiest special effects ever. '' In 2008, Entertainment Weekly called Helpmann 's depiction of the Child Catcher one of the "50 Most Vile Movie Villains. ''
As of March 2014, the film has a 65 % "Fresh '' rating (17 of 26 reviews) on Rotten Tomatoes.
The film was nominated for the American Film Institute 's 2006 AFI 's Greatest Movie Musicals list.
The original soundtrack album, as was typical of soundtrack albums, presented mostly songs with very few instrumental tracks. The songs were also edited, with specially recorded intros and outros and most instrumental portions removed, both because of time limitations of the vinyl LP and the belief that listeners would not be interested in listening to long instrumental dance portions during the songs.
The soundtrack has been released on CD four times, the first two releases using the original LP masters rather than going back to the original music masters to compile a more complete soundtrack album with underscoring and complete versions of songs. The 1997 Rykodisc release included several quick bits of dialogue from the film between some of the tracks and has gone out of circulation. On 24 February 2004, a few short months after MGM released the movie on a 2 - Disc Special Edition DVD, Varèse Sarabande reissued a newly remastered soundtrack album without the dialogue tracks, restoring it to its original 1968 LP format.
In 2011, Kritzerland released the definitive soundtrack album, a 2 - CD set featuring the Original Soundtrack Album plus bonus tracks, music from the Song and Picture Book Album on disc 1, and the Richard Sherman Demos, as well as six Playback Tracks (including a long version of international covers of the theme song). Inexplicably, this release was limited to only 1,000 units.
In April 2013, Perseverance Records re-released the Kritzerland double CD set with expansive new liner notes by John Trujillo and a completely new designed booklet by Perseverance regular James Wingrove.
Chitty Chitty Bang Bang was released numerous times in the VHS format. In 1998 the film saw its first DVD release. The year 2003 brought a two - disc "Special Edition '' release. On 2 November 2010, 20th Century Fox released a two - disc Blu - ray and DVD combination set featuring the extras from the 2003 release as well as new features. The 1993 LaserDisc release by MGM / UA Home Video was the first home video release with the proper 2.20: 1 Super Panavision 70 aspect ratio.
The film did not follow Fleming 's novel closely. A separate novelisation of the film was published at the time of the film 's release. It basically followed the film 's story but with some differences of tone and emphasis, e.g. it mentioned that Caractacus Potts had had difficulty coping after the death of his wife, and it made it clearer that the sequences including Baron Bomburst were extended fantasy sequences. It was written by John Burke.
|
what is a post graduate diploma in australia | Graduate diploma - wikipedia
A graduate diploma (GradD, GDip, GrDip, GradDip) is generally a qualification taken after completion of a first degree, although the level of study varies in different countries from being at the same level as the final year of a bachelor 's degree to being at a level between a master 's degree and a doctorate. In some countries the graduate diploma and postgraduate diploma are synonymous, while in others (particularly where the graduate diploma is at undergraduate degree level) the postgraduate diploma is a higher qualification.
Graduate diplomas offered in Canada (French: Diplôme d'études supérieures spécialisées), are typically taken following a bachelor 's degree and a successful award allows progression to a master 's degree. Depending on the institution, a graduate diploma in Canada may be at graduate level or bachelor 's level. Similar courses at other Canadian institutions may be termed postgraduate diplomas at graduate level and post-baccalaureate diploma at bachelor 's level.
Graduate diplomas offered in Australia are typical of those offered in England, Wales, and Ireland. The diploma is normally taken following a bachelor 's degree, and a successful award allows progression to a master 's degree without having received honours with the bachelor 's degree. The qualification is at level 8 of the Australian Qualifications Framework, the same as an honours degree.
In New Zealand, a graduate diploma is an "advanced undergraduate '' qualification normally completed after a bachelor 's degree or done at the same time of the bachelors study, and may be used to prove the student 's ability to undertake postgraduate studies. A graduate diploma (e.g., Graduate Diploma in Education etc.) is different from a postgraduate diploma, which is a course of study at postgraduate level (e.g., Postgraduate Diploma in Clinical Psychology etc.).
In the UK, a graduate diploma is a short course at the level of a bachelor 's degree that is normally studied by students who have already graduated in another field. Graduate diplomas are distinguished from graduate certificates by having a longer period of study, equivalent to two thirds of an academic year or more.
Graduate diplomas should not be confused with a postgraduate diploma, which is a master 's degree - level qualification. Historically, this has not always been the case, with postgraduate diploma and graduate diploma used interchangeably, but the Quality Assurance Agency now makes a clear distinction between these titles. Some institutions have renamed courses as a result, e.g. The College of Law renamed the official title for its law conversion course from Postgraduate Diploma in Law to Graduate Diploma in Law as, although the law conversion course is studied postgraduately, the contents of the course are only undergraduate in nature.
The graduate diploma or higher diploma in the Republic of Ireland is an award at the same level as an honours bachelor 's degree. It comprises one year of full - time study and is taken after, and in a different subject from, an earlier bachelor 's degree. A wide variety of courses are offered; it is also possible to progress to a master 's degree.
The diploma is generally in two forms:
The graduate diploma (GradDip) is offered by the Dublin Institute of Technology, Dublin City University, HETAC and the University of Limerick. The higher diploma (HDip) is offered by HETAC, NUI institutions, and Trinity College, Dublin.
In India, a postgraduate diploma is a master's degree-level qualification, usually a one-year advanced program. Certain institutes provide two-year master's-level programs made up of a larger number of lower-credit courses; such courses are also called postgraduate diplomas. For transnational equivalency these are sometimes informally referred to as graduate diplomas. Advanced diplomas are equivalent to a post-baccalaureate diploma, an eight-month to one-year course.
The graduate diploma is an academic or vocational qualification; as an academic qualification it is often taken after a bachelor 's degree although sometimes only a foundation degree is required. It is usually awarded by a university or a graduate school and usually takes 2 terms of study to complete. It is also possible for academic graduate diploma holders to progress to a master 's degree on an accelerated pathway compared to first embarking on a 3 - or 4 - year degree program. To ensure that the graduate diploma qualification is recognised, the awarding body must be registered with the Singapore Ministry of Education.
The graduate diploma is generally a professional conversion qualification to reskill a graduate with new specialised skills, for instance the GDipPsy - Graduate Diploma in Psychology is aimed at offering specialised skills in psychology. See also postgraduate diploma.
The graduate diploma is offered at different levels by different institutions. The National University of Singapore requires study at master 's level, but the graduate diplomas at the Singapore University of Social Sciences are UK (i.e. bachelor 's - level) graduate diplomas awarded by the University of London and similarly the Ngee Ann - Adelaide Education Centre offers Australian graduate diplomas (i.e. Australian honours degree level) awarded by the University of Adelaide. The Graduate Diploma of Singapore Raffles Music College is a post-foundation degree qualification rather than post-bachelor's. Graduate diplomas from Aventis School of Management are six month part - time courses at an undefined level, although the School also offers postgraduate diplomas.
The WSQ Graduate Diploma is the highest qualification in Singapore 's vocational Workforce Skills Qualifications framework, administered by the Singapore Workforce Development Agency. This is not tied to the levels of academic degrees.
Graduate Diplomas (in Danish: HD) are two - year full - time - equivalent programmes at bachelor 's degree level, normally studied as four - year part - time courses. They are studied in business - related fields such as Business Administration and Innovation Management. Programs are normally split into Part 1 (graduate certificate) and Part 2 (graduate diploma), each being 60 ECTS Credits (one year of full - time - equivalent study).
In the US, graduate diplomas are "Intermediate Graduate Qualifications '' involving study beyond master 's level but not reaching PhD level. They are generally found in professional, rather than academic, fields. Other qualifications at this level include advanced graduate certificates, specialist certificates and specialist degrees.
The graduate diploma forms part of the lifelong education pathway on the Malaysian Qualifications Framework. They are qualifications at the level of a bachelor 's degree but with half of the credit value.
|
how did the egyptian sphinx lose its nose | Great Sphinx of Giza - wikipedia
The Great Sphinx of Giza (Arabic: أبو الهول Abū al - Haul, English: The Terrifying One; literally: Father of Dread), commonly referred to as the Sphinx of Giza or just the Sphinx, is a limestone statue of a reclining sphinx, a mythical creature with the body of a lion and the head of a human. Facing directly from West to East, it stands on the Giza Plateau on the west bank of the Nile in Giza, Egypt. The face of the Sphinx is generally believed to represent the Pharaoh Khafre.
Cut from the bedrock, the original shape of the Sphinx has been restored with layers of blocks. It measures 73 metres (240 ft) long from paw to tail, 20.21 m (66.31 ft) high from the base to the top of the head and 19 metres (62 ft) wide at its rear haunches. It is the oldest known monumental sculpture in Egypt and is commonly believed to have been built by ancient Egyptians of the Old Kingdom during the reign of the Pharaoh Khafre (c. 2558 -- 2532 BC).
The Sphinx is a monolith carved into the bedrock of the plateau, which also served as the quarry for the pyramids and other monuments in the area. The nummulitic limestone of the area consists of layers which offer differing resistance to erosion (mostly caused by wind and windblown sand), leading to the uneven degradation apparent in the Sphinx 's body. The lowest part of the body, including the legs, is solid rock. The body of the lion up to its neck is fashioned from softer layers that have suffered considerable disintegration. The layer in which the head was sculpted is much harder.
The Great Sphinx is one of the world 's largest and oldest statues, but basic facts about it are still subject to debate, such as when it was built, by whom and for what purpose.
It is impossible to identify what name the creators called their statue, as the Great Sphinx does not appear in any known inscription of the Old Kingdom and there are no inscriptions anywhere describing its construction or its original purpose. In the New Kingdom, the Sphinx was called Hor - em - akhet (English: Horus of the Horizon; Hellenized: Harmachis), and the pharaoh Thutmose IV (1401 -- 1391 or 1397 -- 1388 BC) specifically referred to it as such in his "Dream Stele. ''
The commonly used name "Sphinx '' was given to it in classical antiquity, about 2000 years after the commonly accepted date of its construction by reference to a Greek mythological beast with a lion 's body, a woman 's head and the wings of an eagle (although, like most Egyptian sphinxes, the Great Sphinx has a man 's head and no wings). The English word sphinx comes from the ancient Greek Σφίγξ (transliterated: sphinx) apparently from the verb σφίγγω (transliterated: sphingo / English: to squeeze), after the Greek sphinx who strangled anyone who failed to answer her riddle.
The name may alternatively be a linguistic corruption of the phonetically different ancient Egyptian word Ssp - anx (in Manuel de Codage). This name is given to royal statues of the Fourth dynasty of ancient Egypt (2575 -- 2467 BC) and later in the New Kingdom (c. 1570 -- 1070 BC) to the Great Sphinx more specifically.
Medieval Arab writers, including al - Maqrīzī, call the Sphinx balhib and bilhaw, which suggest a Coptic influence. The modern Egyptian Arabic name is أبو الهول (Abū al Hūl, English: The Terrifying One).
Though there have been conflicting evidence and viewpoints over the years, the view held by modern Egyptology at large remains that the Great Sphinx was built in approximately 2500 BC for the pharaoh Khafra, the builder of the Second Pyramid at Giza.
Selim Hassan, writing in 1949 on recent excavations of the Sphinx enclosure, summed up the problem:
Taking all things into consideration, it seems that we must give the credit of erecting this, the world 's most wonderful statue, to Khafre, but always with this reservation: that there is not one single contemporary inscription which connects the Sphinx with Khafre; so, sound as it may appear, we must treat the evidence as circumstantial, until such time as a lucky turn of the spade of the excavator will reveal to the world a definite reference to the erection of the Sphinx.
The "circumstantial '' evidence mentioned by Hassan includes the Sphinx 's location in the context of the funerary complex surrounding the Second Pyramid, which is traditionally connected with Khafra. Apart from the Causeway, the Pyramid and the Sphinx, the complex also includes the Sphinx Temple and Valley Temple, both of which display similar design of their inner courts. The Sphinx Temple was built using blocks cut from the Sphinx enclosure, while those of the Valley Temple were quarried from the plateau, some of the largest weighing upwards of 100 tons.
A diorite statue of Khafre, which was discovered buried upside down along with other debris in the Valley Temple, is claimed as support for the Khafra theory.
The Dream Stele, erected much later by the pharaoh Thutmose IV (1401 -- 1391 or 1397 -- 1388 BC), associates the Sphinx with Khafra. When the stele was discovered, its lines of text were already damaged and incomplete, and only referred to Khaf, not Khafra. An extract was translated:
which we bring for him: oxen... and all the young vegetables; and we shall give praise to Wenofer... Khaf... the statue made for Atum - Hor - em - Akhet.
The Egyptologist Thomas Young, finding the Khaf hieroglyphs in a damaged cartouche used to surround a royal name, inserted the glyph ra to complete Khafra 's name. When the Stele was re-excavated in 1925, the lines of text referring to Khaf flaked off and were destroyed.
Theories held by academic Egyptologists regarding the builder of the Sphinx and the date of its construction are not universally accepted, and various persons have proposed alternative hypotheses about both the builder and dating.
Some early Egyptologists and excavators of the Giza pyramid complex believed the Great Sphinx and associated temples to predate the fourth dynasty rule of Khufu, Khafre, and Menkaure. Petrie wrote in 1883 regarding the state of opinion regarding the age of the nearby temples, and by extension the Sphinx: "The date of the Granite Temple (Valley Temple) has been so positively asserted to be earlier than the fourth dynasty, that it may seem rash to dispute the point ''.
In 1857, Auguste Mariette, founder of the Egyptian Museum in Cairo, unearthed the much later Inventory Stela (estimated Dynasty XXVI, c. 678 -- 525 BC), which tells how Khufu came upon the Sphinx, already buried in sand. Although certain tracts on the Stela are considered good evidence, this passage is widely dismissed as Late Period historical revisionism, a purposeful fake, created by the local priests with the attempt to certify the contemporary Isis temple an ancient history it never had. Such an act became common when religious institutions such as temples, shrines and priest 's domains were fighting for political attention and for financial and economic donations.
Gaston Maspero, the French Egyptologist and second director of the Egyptian Museum in Cairo, conducted a survey of the Sphinx in 1886. He concluded that because the Dream stela showed the cartouche of Khafre in line thirteen, it was he who was responsible for the excavation and therefore the Sphinx must predate Khafre and his predecessors -- possibly Dynasty IV, c. 2575 -- 2467 BC. English Egyptologist E.A. Wallis Budge agreed that the Sphinx predated Khafre 's reign, writing in The Gods of the Egyptians (1914): "This marvelous object (the Great Sphinx) was in existence in the days of Khafre, or Khephren, and it is probable that it is a very great deal older than his reign and that it dates from the end of the archaic period (c. 2686 BC). '' Maspero believed the Sphinx to be "the most ancient monument in Egypt ''.
Rainer Stadelmann, former director of the German Archaeological Institute in Cairo, examined the distinct iconography of the nemes (headdress) and the now - detached beard of the Sphinx and concluded the style is more indicative of the Pharaoh Khufu (2589 -- 2566 BC), builder of the Great Pyramid of Giza and Khafra 's father. He supports this by suggesting Khafra 's Causeway was built to conform to a pre-existing structure, which, he concludes, given its location, could only have been the Sphinx.
Colin Reader, an English geologist who independently conducted a more recent survey of the enclosure, agrees the various quarries on the site have been excavated around the Causeway. Because these quarries are known to have been used by Khufu, Reader concludes that the Causeway (and the temples on either end thereof) must predate Khufu, thereby casting doubt on the conventional Egyptian chronology.
Frank Domingo, a forensic scientist in the New York City Police Department and an expert forensic anthropologist, used detailed measurements of the Sphinx, forensic drawings and computer imaging to conclude the face depicted on the Sphinx is not the same face as is depicted on a statue attributed to Khafra.
In 2004, Vassil Dobrev of the Institut Français d'Archéologie Orientale in Cairo announced he had uncovered new evidence the Great Sphinx may have been the work of the little - known Pharaoh Djedefre (2528 -- 2520 BC), Khafra 's half brother and a son of Khufu. Dobrev suggests Djedefre built the Sphinx in the image of his father Khufu, identifying him with the sun god Ra in order to restore respect for their dynasty. Dobrev also notes, like Stadelmann and others, the causeway connecting Khafre 's pyramid to the temples was built around the Sphinx suggesting it was already in existence at the time.
The Orion correlation theory, as expounded by popular authors Graham Hancock and Robert Bauval, is based on the proposed exact correlation of the three pyramids at Giza with the three stars ζ Ori, ε Ori and δ Ori, the stars forming Orion 's Belt, in the relative positions occupied by these stars in 10500 BC. The authors argue that the geographic relationship of the Sphinx, the Giza pyramids and the Nile directly corresponds with Leo, Orion and the Milky Way respectively. Sometimes cited as an example of pseudoarchaeology, the theory is at variance with mainstream scholarship.
The Sphinx water erosion hypothesis contends that the main type of weathering evident on the enclosure walls of the Great Sphinx could only have been caused by prolonged and extensive rainfall, and must therefore predate the time of the pharaoh Khafra. The hypothesis is championed primarily by Robert M. Schoch, a geologist and associate professor of natural science at the College of General Studies at Boston University, and John Anthony West, an author and alternative Egyptologist.
Colin Reader, a British geologist, studied the erosion patterns and noticed that they are found predominantly on the western enclosure wall and not on the Sphinx itself. He proposed the rainfall water runoff hypothesis, which also recognizes climate change transitions in the area.
Author Robert K.G. Temple proposes that the Sphinx was originally a statue of the Jackal - Dog Anubis, the God of the Necropolis, and that its face was recarved in the likeness of a Middle Kingdom pharaoh, Amenemhet II. Temple bases his identification on the style of the eye make - up and style of the pleats on the headdress.
Over the years several authors have commented on what they perceive as "Negroid '' characteristics in the face of the Sphinx. This issue has become part of the Ancient Egyptian race controversy, with respect to the ancient population as a whole. The face of the Sphinx has been damaged over the millennia.
At some unknown time the Giza Necropolis was abandoned, and the Sphinx was eventually buried up to its shoulders in sand. The first documented attempt at an excavation dates to c. 1400 BC, when the young Thutmose IV (1401 -- 1391 or 1397 -- 1388 BC) gathered a team and, after much effort, managed to dig out the front paws, between which he placed a granite slab, known as the Dream Stele, inscribed with the following excerpt:
... the royal son, Thothmos, being arrived, while walking at midday and seating himself under the shadow of this mighty god, was overcome by slumber and slept at the very moment when Ra is at the summit (of heaven). He found that the Majesty of this august god spoke to him with his own mouth, as a father speaks to his son, saying: Look upon me, contemplate me, O my son Thothmos; I am thy father, Harmakhis - Khopri - Ra - Tum; I bestow upon thee the sovereignty over my domain, the supremacy over the living... Behold my actual condition that thou mayest protect all my perfect limbs. The sand of the desert whereon I am laid has covered me. Save me, causing all that is in my heart to be executed.
Later, Ramesses II the Great (1279 -- 1213 BC) may have undertaken a second excavation.
Mark Lehner, an Egyptologist who has excavated and mapped the Giza plateau, originally asserted that there had been a far earlier renovation during the Old Kingdom (c. 2686 -- 2184 BC), although he has subsequently recanted this "heretical '' viewpoint.
In AD 1817 the first modern archaeological dig, supervised by the Italian Giovanni Battista Caviglia, uncovered the Sphinx 's chest completely.
One of the people working on clearing the sands from around the Great Sphinx was Eugène Grébaut, a French Director of the Antiquities Service.
In the beginning of the year 1887, the chest, the paws, the altar, and plateau were all made visible. Flights of steps were unearthed, and finally accurate measurements were taken of the great figures. The height from the lowest of the steps was found to be one hundred feet, and the space between the paws was found to be thirty - five feet long and ten feet wide. Here there was formerly an altar; and a stele of Thûtmosis IV was discovered, recording a dream in which he was ordered to clear away the sand that even then was gathering round the site of the Sphinx.
The entire Sphinx was finally excavated in 1925 to 1936, in digs led by Émile Baraize.
In 1931 engineers of the Egyptian government repaired the head of the Sphinx. Part of its headdress had fallen off in 1926 due to erosion, which had also cut deeply into its neck. This questionable repair was by the addition of a concrete collar between the headdress and the neck, creating an altered profile. Many renovations to the stone base and raw rock body were done in the 1980s, and then redone in the 1990s.
The one - metre - wide nose on the face is missing. Examination of the Sphinx 's face shows that long rods or chisels were hammered into the nose, one down from the bridge and one beneath the nostril, then used to pry the nose off towards the south.
The Arab historian al - Maqrīzī, writing in the 15th century, attributes the loss of the nose to iconoclasm by Muhammad Sa'im al - Dahr -- a Sufi Muslim from the khanqah of Sa'id al - Su'ada -- in AD 1378, upon finding the local peasants making offerings to the Sphinx in the hope of increasing their harvest. Enraged, he destroyed the nose, and was later hanged for vandalism. Al - Maqrīzī describes the Sphinx as the "talisman of the Nile '' on which the locals believed the flood cycle depended.
There is a story that the nose was broken off by a cannonball fired by Napoleon 's soldiers. Other variants indict British troops, the Mamluks, and others. Sketches of the Sphinx by the Dane Frederic Louis Norden, made in 1738 and published in 1757, show the Sphinx missing its nose. This predates Napoleon 's birth in 1769.
In addition to the lost nose, a ceremonial pharaonic beard is thought to have been attached, although this may have been added in later periods after the original construction. Egyptologist Vassil Dobrev has suggested that had the beard been an original part of the Sphinx, it would have damaged the chin of the statue upon falling. The lack of visible damage supports his theory that the beard was a later addition.
Residues of red pigment are visible on areas of the Sphinx 's face. Traces of yellow and blue pigment have been found elsewhere on the Sphinx, leading Mark Lehner to suggest that the monument "was once decked out in gaudy comic book colors ''.
Colin Reader has proposed that the Sphinx was probably the focus of solar worship in the Early Dynastic Period, before the Giza Plateau became a necropolis in the Old Kingdom (c. 2686 -- 2134 BC). He ties this in with his conclusions that the Sphinx, the Sphinx temple, the Causeway and the Khafra mortuary temple are all part of a complex which predates Dynasty IV (c. 2613 -- 2494 BC). The lion has long been a symbol associated with the sun in ancient Near Eastern civilizations. Images depicting the Egyptian king in the form of a lion smiting his enemies date as far back as the Early Dynastic Period.
In the New Kingdom, the Sphinx became more specifically associated with the god Hor - em - akhet (Hellenized: Harmachis) or "Horus - at - the - Horizon '', which represented the pharaoh in his role as the Shesep - ankh (English: Living Image) of the god Atum. Pharaoh Amenhotep II (1427 -- 1401 or 1397 BC) built a temple to the north east of the Sphinx nearly 1000 years after its construction, and dedicated it to the cult of Hor - em - akhet.
In the last 700 years, there has been a proliferation of travellers and reports from Lower Egypt, unlike Upper Egypt, which was seldom reported from prior to the mid-18th century. Alexandria, Rosetta, Damietta, Cairo and the Giza Pyramids are described repeatedly, but not necessarily comprehensively. Many accounts were published and widely read. These include those of George Sandys, André Thévet, Athanasius Kircher, Balthasar de Monconys, Jean de Thévenot, John Greaves, Johann Michael Vansleb, Benoît de Maillet, Cornelis de Bruijn, Paul Lucas, Richard Pococke, Frederic Louis Norden and others. But there is an even larger set of more anonymous people who wrote obscure and little - read works, sometimes only unpublished manuscripts in libraries or private collections, including Henry Castela, Hans Ludwig von Lichtenstein, Michael Heberer von Bretten, Wilhelm von Boldensele, Pierre Belon du Mans, Vincent Stochove, Christophe Harant, Gilles Fermanel, Robert Fauvel, Jean Palerne Foresien, Willian Lithgow, Joos van Ghistele, etc.
Over the centuries, writers and scholars have recorded their impressions and reactions upon seeing the Sphinx. The vast majority were concerned with a general description, often including a mixture of science, romance and mystique. A typical description of the Sphinx by tourists and leisure travelers throughout the 19th and 20th century was made by John Lawson Stoddard:
It is the antiquity of the Sphinx which thrills us as we look upon it, for in itself it has no charms. The desert 's waves have risen to its breast, as if to wrap the monster in a winding - sheet of gold. The face and head have been mutilated by Moslem fanatics. The mouth, the beauty of whose lips was once admired, is now expressionless. Yet grand in its loneliness, -- veiled in the mystery of unnamed ages, -- the relic of Egyptian antiquity stands solemn and silent in the presence of the awful desert -- symbol of eternity. Here it disputes with Time the empire of the past; forever gazing on and on into a future which will still be distant when we, like all who have preceded us and looked upon its face, have lived our little lives and disappeared.
From the 16th century far into the 19th century, observers repeatedly noted that the Sphinx has the face, neck and breast of a woman. Examples included Johannes Helferich (1579), George Sandys (1615), Johann Michael Vansleb (1677), Benoît de Maillet (1735) and Elliot Warburton (1844).
Most early Western images were book illustrations in print form, elaborated by a professional engraver from either previous images available or some original drawing or sketch supplied by an author, and usually now lost. Seven years after visiting Giza, André Thévet (Cosmographie de Levant, 1556) described the Sphinx as "the head of a colossus, caused to be made by Isis, daughter of Inachus, then so beloved of Jupiter ''. He, or his artist and engraver, pictured it as a curly - haired monster with a grassy dog collar. Athanasius Kircher (who never visited Egypt) depicted the Sphinx as a Roman statue, reflecting his ability to conceptualize (Turris Babel, 1679). Johannes Helferich 's (1579) Sphinx is a pinched - face, round - breasted woman with a straight haired wig; the only edge over Thevet is that the hair suggests the flaring lappets of the headdress. George Sandys stated that the Sphinx was a harlot; Balthasar de Monconys interpreted the headdress as a kind of hairnet, while François de La Boullaye - Le Gouz 's Sphinx had a rounded hairdo with bulky collar.
Richard Pococke 's Sphinx was an adoption of Cornelis de Bruijn 's drawing of 1698, featuring only minor changes, but is closer to the actual appearance of the Sphinx than anything previous. The print versions of Norden 's careful drawings for his Voyage d'Egypte et de Nubie, 1755 are the first to clearly show that the nose was missing. However, from the time of the Napoleonic invasion of Egypt onwards, a number of accurate images were widely available in Europe, and copied by others.
Mystery of the Sphinx, narrated by Charlton Heston, a documentary presenting the theories of John Anthony West, was shown as an NBC Special on 10 November 1993 (winning an Emmy award for Best Research). A 95 - minute DVD, Mystery of the Sphinx: Expanded Edition, was released in 2007. Age of the Sphinx, a BBC Two Timewatch documentary presenting the theories of John Anthony West and critical of both sides of the argument, was shown on 27 November 1994. In 2008, the film 10,000 BC showed a supposed original Sphinx with a lion 's head. Before this film, this lion head theory had been published in documentary films about the origin of the Sphinx.
André Thévet, Cosmographie de Levant (1556)
Hogenberg and Braun (map), Cairus, quae olim Babylon (1572), exists in various editions, from various authors, with the Sphinx looking different.
Jan Sommer, (unpublished) Voyages en Egypte des annees 1589, 1590 & 1591, Institut de France, 1971 (Voyageurs occidentaux en Égypte 3)
George Sandys, A relation of a journey begun an dom. 1610 (1615)
François de La Boullaye - Le Gouz, Les Voyages et Observations (1653)
Balthasar de Monconys, Journal des voyages (1665)
Olfert Dapper, Description de l'Afrique (1665), note the two different displays of the Sphinx.
Cornelis de Bruijn, Reizen van Cornelis de Bruyn door de vermaardste Deelen van Klein Asia (1698)
The Sphinx as seen by Frederic Louis Norden (1755)
Frederic Louis Norden, Voyage d'Égypte et de Nubie (1755)
Description de l'Egypte (Panckoucke edition), Planches, Antiquités, volume V (1823), also published in the Imperial edition of 1822.
Members of the Second Japanese Embassy to Europe (1863) in front of the Sphinx, 1864.
Jean - Léon Gérôme 's Bonaparte Before the Sphinx, 1867 -- 1868.
Johann Baptista Homann (map), Aegyptus hodierna (1724)
Lantern Slide Collection: Views, Objects: Egypt. Gizeh (selected images). View 04: Pyramids and Sphinx., n.d., Kay C. Lenskold. Floral Park, N.Y. Brooklyn Museum Archives
|
how much is a 1/2 cup of water | Measuring cup - wikipedia
A measuring cup or measuring jug is a kitchen utensil used primarily to measure the volume of liquid or bulk solid cooking ingredients such as flour and sugar, especially for volumes from about 50 mL (2 fl oz) upwards. Measuring cups are also used to measure washing powder, liquid detergents and bleach for clothes washing. The cup will usually have a scale marked in cups and fractions of a cup, and often with fluid measure and weight of a selection of dry foodstuffs.
Measuring cups may be made of plastic, glass, or metal. Transparent (or translucent) cups can be read from an external scale; metal ones only from a dipstick or scale marked on the inside.
Measuring cups usually have capacities from 250 ml (approx. 1 cup) to 1000 ml (approx. 4 cups = 2 pints = 1 quart), though larger sizes are also available for commercial use. They usually have scale markings at different heights: the substance being measured is added to the cup until it reaches the wanted level. Dry measure cups without a scale are sometimes used, in sets typically of 1 / 4, 1 / 3, 1 / 2, and 1 cup. The units may be milliliters or fractions of a liter, or the cup (unit) with its fractions (typically 1 / 4, 1 / 3, 1 / 2, 2 / 3, and 3 / 4), pints, and often fluid ounces. Sometimes multiples of teaspoons and tablespoons are included. There may also be scales for the approximate weight for particular substances, such as flour and sugar.
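As an illustrative aside (not part of the cited article), the short Python sketch below converts cup measures to millilitres under two assumed cup definitions, the US customary cup of about 236.6 ml and the metric cup of 250 ml; it shows, for example, that 1/2 cup of water is roughly 118 to 125 ml.

```python
# Illustrative sketch; the constants are assumptions, not taken from the article:
# US customary cup is about 236.588 ml, the metric cup is 250 ml.
US_CUP_ML = 236.588
METRIC_CUP_ML = 250.0

def cups_to_ml(cups, cup_size_ml=US_CUP_ML):
    """Convert a volume given in cups to millilitres for a chosen cup definition."""
    return cups * cup_size_ml

if __name__ == "__main__":
    # "How much is 1/2 cup of water?": about 118 ml (US cup) or 125 ml (metric cup).
    print(f"1/2 US cup     = {cups_to_ml(0.5):.0f} ml")
    print(f"1/2 metric cup = {cups_to_ml(0.5, METRIC_CUP_ML):.0f} ml")
    # The scale markings described above: 4 cups = 2 pints = 1 quart.
    print(f"1 US quart (4 cups) = {cups_to_ml(4):.0f} ml")
```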
Many dry ingredients, such as granulated sugar, are not very compressible, so volume measures are consistent. Others, notably flour, are more variable. For example, 1 cup of all - purpose flour sifted into a cup and leveled weighs about 100 grams, whereas 1 cup of all - purpose flour scooped from its container and leveled weighs about 140 grams.
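To make the flour example concrete, the hedged snippet below computes the implied bulk densities, reusing the approximate weights quoted above (100 g sifted, 140 g scooped per cup) and assuming a US customary cup of about 236.6 ml.

```python
# Illustrative only: bulk density of 1 cup of all-purpose flour,
# assuming a US customary cup of about 236.588 ml and the weights quoted above.
CUP_ML = 236.588

def density_g_per_ml(weight_g, volume_ml=CUP_ML):
    """Return bulk density in grams per millilitre for a weight measured per cup."""
    return weight_g / volume_ml

sifted = density_g_per_ml(100.0)   # about 0.42 g/ml
scooped = density_g_per_ml(140.0)  # about 0.59 g/ml
print(f"Sifted flour:  {sifted:.2f} g/ml")
print(f"Scooped flour: {scooped:.2f} g/ml")
# Scooping packs in roughly 40 % more flour than sifting for the same volume.
print(f"Difference: {(140.0 - 100.0) / 100.0:.0%}")
```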
Using a measuring cup to measure bulk foods which can be compressed to a variable degree, such as chopped vegetables or shredded cheese, leads to large measurement uncertainties. Chopping such foods more finely, or weighing them instead of measuring by volume, gives a more consistent measure.
|
when does the new power rangers come out | Power Rangers (film) - wikipedia
Saban 's Power Rangers, simply known as Power Rangers, is a 2017 American superhero film based on the franchise of the same name, directed by Dean Israelite and written by John Gatins. It is the third Power Rangers film, and is a reboot. The film features the main characters of the Mighty Morphin Power Rangers television series with a new cast, starring Dacre Montgomery, Naomi Scott, RJ Cyler, Becky G, Ludi Lin, Bill Hader, Bryan Cranston, and Elizabeth Banks.
It is the first blockbuster film to feature LGBTQ and autistic superheroes. Franchise creator Haim Saban returned to produce the film under his investment firm. The film premiered at the Regency Village Theater in Los Angeles on March 22, 2017, and was released in the United States on March 24, 2017. It was met with mixed reviews upon release, with criticism primarily focusing on its uneven tone, product placement and divergences from its source material, but praise aimed at the performances (particularly Montgomery 's and Cyler 's). It was also a box office disappointment, grossing $142 million worldwide against a budget of $105 million.
In the Cenozoic era, six interplanetary warriors, the Power Rangers, are tasked with protecting life on Earth and the Zeo Crystal. The Green Ranger, Rita Repulsa, betrays them and plans to dominate the universe. The Red Ranger, Zordon, survives Rita 's attack and hides the power source of five of the Rangers, the Power Coins. He orders Alpha 5, his robotic assistant, to perform a meteor strike that kills him and the dinosaurs and sends Rita to the bottom of the sea, foiling her scheme.
In 21st - century Angel Grove, high school football star Jason Scott is dismissed from the team and placed under house arrest after a failed prank. In detention, he encounters Billy Cranston and Kimberly Hart. After Jason defends Billy from a bully, Billy offers to deactivate Jason 's ankle monitor in exchange for help at an old gold mine that evening. Once there, Jason leaves to explore and runs into Kimberly. Billy detonates explosives to break some rock, attracting the attention of Jason, Kimberly and nearby students Trini and Zack. The five discover the Power Coins and each take one. While escaping mine security, their car is hit by a train. The five find themselves at home the next morning and discover that the coins have granted them superhuman abilities. Elsewhere, Rita 's body is found. Waking, she goes on a rampage, hunting pieces of gold to raise her minion Goldar to find the Zeo Crystal.
The five teenagers return to the mine and discover an ancient spaceship where they meet Alpha 5 and Zordon 's consciousness. They inform the teenagers about the Rangers history and Rita, warning that they have eleven days until Rita has her full power, finds the Zeo Crystal, and uses it to destroy life on Earth. The five leave the ship with no intention of returning until Zordon pleads with Jason to convince the team.
The five return to train the next day, but they can not morph. They spend the next week training against simulated Putties and trying unsuccessfully to morph. To inspire the Rangers, Alpha reveals the Zords. Zack takes his Zord out for a joyride and almost kills the other Rangers when he crashes it. This angers Jason, and they fight. While trying to separate the two, Billy spontaneously morphs. However, when he becomes conscious of it, the armor disappears. Angered at their lack of progress, Zordon dismisses the group. Jason returns to the ship to confront Zordon and discovers that once the Rangers morph, it will open the Morphing Grid and allow Zordon to restore himself in a physical body. Feeling betrayed, Jason accuses Zordon of using the team for his own benefit. That night, the team camp at the mine and bond with each other.
Later that night, Rita attacks Trini and orders her to bring the Rangers to the docks. Trini informs them about Rita and they arrive to fight, but are quickly defeated. Rita forces Billy to reveal the location of the Zeo Crystal, which he 'd figured out, kills him, and releases the others. The Rangers take Billy 's body to the ship and ask Zordon to resurrect him. The Rangers agree they would give their lives for each other and resolve to defeat Rita. In doing so, they unlock the Morphing Grid. Zordon revives Billy, sacrificing being able to restore his physical self. With the team restored and confident, the Rangers morph into their armor.
Rita creates Goldar, raises an army of Putties, and attacks Angel Grove to find the Zeo Crystal. The Rangers battle the Putties and head to Angel Grove in their Zords. After the Rangers destroy the Putties, Goldar pushes the Rangers and their Zords into a fiery pit. In the pit, the Zords combine and form the Megazord. Rita merges with Goldar. The Rangers battle and destroy Goldar. After refusing Jason 's offer to surrender, a defiant Rita tells the Rangers that more will come for the crystal and leaps at the Megazord only to be slapped into space. The Rangers are praised as local heroes, and with Rita 's threats foiled, they return to their normal lives while keeping their powers.
In a mid-credits scene, in detention, the teacher announces that Tommy Oliver will be joining them, but the desk is empty save for a green jacket.
Jason David Frank and Amy Jo Johnson, two of the stars of the original TV series, make cameo appearances as Angel Grove citizens.
Saban Capital Group and Lionsgate announced the film in May 2014, with Roberto Orci originally attached to produce. Ashley Miller and Zack Stentz were hired to write the film 's script. Orci eventually left the project to work on Star Trek Beyond. On April 10, 2015, TheWrap reported that Dean Israelite was in negotiations to direct the film. Israelite told IGN in an interview that the film would be "completely playful, and it needs to be really fun and funny. But like Project Almanac, it 's going to feel very grounded at the same time, and very contemporary and have a real edge to it, and a real gut to it, it 's going to be a fun, joyful (movie) but one that feels completely grounded in a real world, with real characters going through real things ''. Brian Tyler was brought on to compose the film 's music. Israelite has said that the film updates itself from the original series, being more character - driven and incorporating naturalism and a grounded nature.
Actors began testing for the roles of the five Power Rangers on October 2, 2015. On October 7, 2015, Naomi Scott was cast as Kimberly. Newcomers Dacre Montgomery, Ludi Lin and RJ Cyler were then cast as Jason, Zack, and Billy, respectively. At the month 's end, Becky G was chosen to play Trini. When it came to casting the Rangers, director Dean Israelite said, "From the very beginning, diversity was a very important part of the whole process, '' and that while the characters ' races were switched around, he added that, "We made sure that the essence of each of those characters are who they were in the original show, and this really will be an origin story of those characters. '' On February 2, 2016, it was announced that Elizabeth Banks would portray Rita Repulsa. Four months later, Bryan Cranston, who voiced Twin Man and Snizard in the original series, announced he was cast as Zordon. Cranston revealed that he would perform motion - capture and CGI. In September 2016, Walter Emanuel Jones, the actor who played Zack in the original series, stated none of the original cast would cameo in the film. Towards the end of the month, comedian Bill Hader was cast as Alpha 5. In March 2017, it was reported that Amy Jo Johnson and Jason David Frank, who played Kimberly and Tommy in the original series, would cameo in the film, despite Jones ' earlier comments.
Filming was originally set to begin in January 2016 but was rescheduled and began on February 29 in Vancouver. On May 28, 2016, filming was complete. Additional filming occurred in October 2016. A cast member claims that the film has broken the record for the longest wire jump, but this has not been independently confirmed.
The film was released in Dolby Vision and Dolby Atmos sound.
The official soundtrack, with music by Brian Tyler, was released digitally on March 24, 2017, and on CD on April 4, distributed by Varèse Sarabande.
All music composed by Brian Tyler.
Originally scheduled for release on July 22, 2016, Lionsgate delayed it to March 24, 2017. The film received its world premiere in Berlin, Germany on March 17, 2017. All five of the surviving actors who originally portrayed the Rangers in the series attended the film 's Los Angeles premiere on March 22, 2017. It was the first time they had been together publicly since 1995.
On March 3, 2016, Lionsgate released the first official photo of the five Rangers, and the following month, the company released the first official photo of Banks as Rita Repulsa. On May 5, the company unveiled the first official images of the Rangers ' suits. Speaking of the suits, director Dean Israelite said that "The show was about kids coming of age, about metamorphosis, these suits needed to feel like they were catalyzed by these kids and their energy, their spirit '', while production designer Andrew Menzies commented that the new suits are "an alien costume that grows on them, that 's not man - made. You ca n't win everyone over, but we are trying to appeal to a more mature audience and gain new fans. '' A teaser poster was released in June, with additional character posters released in July, September, and October. On October 8, 2016, a Discover The Power teaser trailer for the film was released.
A fictional Angel Grove High School Newspaper website was created, alongside the official Power Rangers website, which features a GIF creator that allows users to make a GIF out of scenes from the teaser trailer. There is also an official toyline, produced by Bandai, and an extensive merchandising range was produced to promote the film.
Max Landis, who was fired from writing the film, criticized the trailer, saying that it looked like Chronicle, a film that he had written. The trailer garnered mixed reactions, with some praising it for its darker, contemporary reimagining of the classic characters, while still looking action - packed and fun at the same time, and others criticizing it for its lack of connection to the original series, saying it appeared "brooding ''. The trailer received over 150 million views in the first two days after it was uploaded. Lionsgate revealed the T - Rex zord toy, among others, on October 28, 2016, and the Power Rangers Twitter account revealed the Megazord toy on November 4, 2016. On November 15, 2016, Lionsgate revealed the toys based on the film 's individual Zords.
On December 8, 2016, a new poster debuted, as well as a photo of Rita Repulsa. On December 19, 2016, Lionsgate and Boom! Studios announced that they would release a graphic novel titled Power Rangers: Aftershock, set immediately after the events of the movie. An international trailer was released on December 22, 2016. Qualcomm and Lionsgate produced a virtual reality mobile app of the film Power Rangers Movie Command Center that was exhibited at the CES 2017, from January 5 -- 8, 2017, and was released in the App Store on March 8, 2017. On January 19, a second trailer, titled It 's Morphin Time!, was released. Lionsgate debuted yet another trailer, which it called the All - Star Trailer, on February 17. New TV spots were released on February 27, two about the Power Rangers, and one about Rita Repulsa. A clip was released on March 6, followed by two more on March 9. Thirty - seven stills were then released. Another TV spot was released on March 10. In the final week before the movie premiered, two more clips, as well as photos, were released.
A Build - A-Bear Workshop Power Ranger product range was announced on March 9, 2017. Krispy Kreme released doughnuts to promote the film, and served as an advertising partner. Placement of Krispy Kreme products and locations were featured in the film numerous times.
The group collaborated with YouTube sports entertainment group Dude Perfect ahead of the film 's release, in a video titled Dude Perfect vs. Power Rangers.
Lionsgate and Saban, in collaboration with nWay Games, released a PvP fighting mobile game called Power Rangers: Legacy Wars on March 24, 2017, to coincide with the film 's release. The game was featured on both Android and the Apple Store and got to the top spot on the Apple App Store for both iPhones and iPads, for two consecutive days and number two on the Google Play Store.
Power Rangers was released on Digital HD on June 13, 2017, and was followed by a release on Ultra HD, Blu - ray and DVD on June 27, 2017 with retail exclusive variants being made available at Best Buy, Target and Wal - Mart. The film debuted at the No. 1 spot on the NPD VideoScan overall disc sales chart, which tracks combined DVD and Blu - ray Disc sales; NPD 's dedicated Blu - ray Disc sales chart; and Home Media Magazine 's video rental chart for the week ending July 2, 2017. The film retained the top spot on the NPD VideoScan 's overall disc sales chart for the week ending on July 9, 2017.
Power Rangers grossed $85.4 million in the United States and Canada and $57 million in other territories for a worldwide gross of $142.3 million, against a production budget of $100 million.
In the United States and Canada, Power Rangers opened alongside Life, CHiPs and Wilson, and was projected to gross $30 -- 35 million from 3,693 theaters on its opening weekend. The film made $3.6 million from Thursday night previews and $15 million on its first day. It went on to debut to $40.3 million, finishing second at the box office behind Beauty and the Beast ($90.4 million). The audience was notably diverse and mostly 18 -- 34 years old. In its second weekend the film grossed $14.5 million (a drop of 64 %), finishing 4th at the box office. In June 2017, Dean Israelite said that the film 's PG - 13 rating probably contributed to the film 's underperformance at the box office.
Power Rangers received mixed reviews from critics. On review aggregator website Rotten Tomatoes, the film has an approval rating of 45 % based on 143 reviews, and an average rating of 5.1 / 10. The site 's critical consensus reads, "Power Rangers has neither the campy fun of its TV predecessor nor the blockbuster action of its cinematic superhero competitors, and sadly never quite manages to shift into turbo for some good old - fashioned morphin time. '' On Metacritic, which assigns a normalized rating, the film has a score of 44 out of 100, based on 30 critics, indicating "mixed or average reviews ''. Audiences polled by CinemaScore gave the film an average grade of "A '' on an A+ to F scale, while PostTrak reported 66 % of audience members gave the film a "definite recommend ''.
IGN gave the film a 7 / 10, saying, "Power Rangers does n't quite pull off everything it wants to, but it 's a fun time at the theater nonetheless. '' Mike McCahill of The Guardian wrote that "the film achieves a functioning mediocrity we perhaps might have thought beyond this franchise, '' and gave it 2 out of 5 stars.
Mike Ryan of Uproxx gave the film a negative review, writing: "Power Rangers has one of the most zig - zagged tones of any big budget studio film I 've seen in a long time. It 's jarring at times how often it goes back and forth between ' gritty ' and ' silly '. '' Writing for Variety, Owen Gleiberman also criticized the film 's conflicting tones, saying: "... 25 years ago, Mighty Morphin Power Rangers was launched as superhero fodder for kids, and there was indeed a place for it, but we 're now so awash in superhero culture that kids no longer need the safe, lame, pandering junior - league version of it. They can just watch Ant - Man or the PG - 13 Suicide Squad. Safe, lame, and pandering have all grown up. '' In a review for The Telegraph, Robbie Collin gave it 1 / 5 stars, criticizing the "abjectly embarrassing frenzy of product placement ''.
In a May 2016 conference call to analysts, Lionsgate CEO Jon Feltheimer stated, "We could see doing five or six or seven. '' On March 22, 2017, Haim Saban said that he and Lionsgate already have a six - film story arc. However, in May 2017, Forbes noted that due to the underwhelming performance of the film in most markets, it was unlikely any sequels would be made. Later that same month, it was reported that the sequels could still be made thanks to record - breaking merchandise sales. Prior to the home release of the movie, Israelite confirmed that talks were taking place regarding a sequel and that he would like to include Lord Zedd in it. The possibility of a sequel increased once more in early July 2017 when it was reported that the film held the number one spot in home media sales and rentals in its first week. In August 2017, Saban abandoned its trademark for the film 's logo. A Saban Brands representative stated in October 2017 that "Power Rangers continues to own and renew hundreds of trademark registrations worldwide, including for the 2017 movie logo. The trademark registration process is very nuanced, and the status of the single application has no bearing on our ownership of or the future plans for Power Rangers. The franchise remains as strong and enthusiastic about its future as ever. '' On May 1, 2018, Saban Brands agreed to sell Power Rangers and other entertainment assets to Hasbro (who came close but failed to buy Lionsgate) for US $522 million in cash and stock, with the sale expected to close in the second quarter.
On August 8, 2018, Hasbro announced they would be working with a film studio to develop a follow - up to Power Rangers.
|
who are 3 famous musicians from the harlem renaissance what are the most famous for performing | Harlem Renaissance - wikipedia
The Harlem Renaissance was a cultural, social, and artistic explosion that took place in Harlem, New York, spanning the 1920s. At the time, it was known as the "New Negro Movement '', named after the 1925 anthology by Alain Locke. The Movement also included the new African - American cultural expressions across the urban areas in the Northeast and Midwest United States affected by the African - American Great Migration, of which Harlem was the largest. The Harlem Renaissance was considered to be a rebirth of African - American arts. Though it was centered in the Harlem neighborhood of the borough of Manhattan in New York City, many francophone black writers from African and Caribbean colonies who lived in Paris were also influenced by the Harlem Renaissance.
The Harlem Renaissance is generally considered to have spanned from about 1918 until the mid-1930s. Many of its ideas lived on much longer. The zenith of this "flowering of Negro literature '', as James Weldon Johnson preferred to call the Harlem Renaissance, took place between 1924 (when Opportunity: A Journal of Negro Life hosted a party for black writers where many white publishers were in attendance) and 1929 (the year of the stock market crash and the beginning of the Great Depression).
Until the end of the Civil War, the majority of African Americans had been enslaved and lived in the South. During the Reconstruction Era, the emancipated African Americans, freedmen, began to strive for civic participation, political equality and economic and cultural self - determination. Soon after the end of the Civil War the Ku Klux Klan Act of 1871 gave rise to speeches by African - American Congressmen addressing this Bill. By 1875 sixteen African Americans had been elected and served in Congress and gave numerous speeches with their newfound civil empowerment. The Ku Klux Klan Act of 1871 was denounced by black Congressmen and resulted in the passage of the Civil Rights Act of 1875, part of Reconstruction legislation by Republicans. By the late 1870s, Democratic whites managed to regain power in the South. From 1890 to 1908 they proceeded to pass legislation that disenfranchised most Negroes and many poor whites, trapping them without representation. They established white supremacist regimes of Jim Crow segregation in the South and one - party block voting behind southern Democrats. The Democratic whites denied African Americans their exercise of civil and political rights by terrorizing black communities with lynch mobs and other forms of vigilante violence as well as by instituting a convict labor system that forced many thousands of African Americans back into unpaid labor in mines, on plantations, and on public works projects such as roads and levees. Convict laborers were typically subject to brutal forms of corporal punishment, overwork, and disease from unsanitary conditions. Death rates were extraordinarily high. While a small number of blacks were able to acquire land shortly after the Civil War, most were exploited as sharecroppers. As life in the South became increasingly difficult, African Americans began to migrate north in great numbers.
Most of the African - American literary movement arose from a generation that had memories of the gains and losses of Reconstruction after the Civil War. Sometimes their parents or grandparents had been slaves. Their ancestors had sometimes benefited by paternal investment in cultural capital, including better - than - average education. Many in the Harlem Renaissance were part of the early 20th century Great Migration out of the South into the Negro neighborhoods of the North and Midwest. African Americans sought a better standard of living and relief from the institutionalized racism in the South. Others were people of African descent from racially stratified communities in the Caribbean who came to the United States hoping for a better life. Uniting most of them was their convergence in Harlem.
During the early portion of the 20th century, Harlem was the destination for migrants from around the country, attracting both people seeking work from the South, and an educated class who made the area a center of culture, as well as a growing "Negro '' middle class. The district had originally been developed in the 19th century as an exclusive suburb for the white middle and upper middle classes; its affluent beginnings led to the development of stately houses, grand avenues, and world - class amenities such as the Polo Grounds and the Harlem Opera House. During the enormous influx of European immigrants in the late 19th century, the once exclusive district was abandoned by the white middle class, who moved farther north.
Harlem became an African - American neighborhood in the early 1900s. In 1910, a large block along 135th Street and Fifth Avenue was bought by various African - American realtors and a church group. Many more African Americans arrived during the First World War. Due to the war, the migration of laborers from Europe virtually ceased, while the war effort resulted in a massive demand for unskilled industrial labor. The Great Migration brought hundreds of thousands of African Americans to cities such as Chicago, Philadelphia, Detroit, and New York.
Despite the increasing popularity of Negro culture, virulent white racism, often by more recent ethnic immigrants, continued to affect African - American communities, even in the North. After the end of World War I, many African - American soldiers -- who fought in segregated units such as the Harlem Hellfighters -- came home to a nation whose citizens often did not respect their accomplishments. Race riots and other civil uprisings occurred throughout the US during the Red Summer of 1919, reflecting economic competition over jobs and housing in many cities, as well as tensions over social territories.
The first stage of the Harlem Renaissance started in the late 1910s. In 1917, the premiere of Three Plays for a Negro Theatre took place. These plays, written by white playwright Ridgely Torrence, featured African - American actors conveying complex human emotions and yearnings. They rejected the stereotypes of the blackface and minstrel show traditions. James Weldon Johnson in 1917 called the premieres of these plays "the most important single event in the entire history of the Negro in the American Theater ''. Another landmark came in 1919, when the poet Claude McKay published his militant sonnet, "If We Must Die '', which introduced a dramatically political dimension to the themes of African cultural inheritance and modern urban experience featured in his 1917 poems "Invocation '' and "Harlem Dancer '' (published under the pseudonym Eli Edwards, these were his first appearance in print in the United States after immigrating from Jamaica). Although "If We Must Die '' never alluded to race, African - American readers heard its note of defiance in the face of racism and the nationwide race riots and lynchings then taking place. By the end of the First World War, the fiction of James Weldon Johnson and the poetry of Claude McKay were describing the reality of contemporary African - American life in America.
In 1917 Hubert Harrison, "The Father of Harlem Radicalism, '' founded the Liberty League and The Voice, the first organization and the first newspaper, respectively, of the "New Negro Movement. '' Harrison 's organization and newspaper were political, but also emphasized the arts (his newspaper had "Poetry for the People '' and book review sections). In 1927, in the Pittsburgh Courier, Harrison challenged the notion of the renaissance. He argued that the "Negro Literary Renaissance '' notion overlooked "the stream of literary and artistic products which had flowed uninterruptedly from Negro writers from 1850 to the present, '' and said the so - called "renaissance '' was largely a white invention.
The Harlem Renaissance grew out of the changes that had taken place in the African - American community since the abolition of slavery, as well as the expansion of communities in the North. These accelerated as a consequence of World War I and the great social and cultural changes in early 20th - century United States. Industrialization was attracting people to cities from rural areas and gave rise to a new mass culture. Contributing factors leading to the Harlem Renaissance were the Great Migration of African Americans to northern cities, which concentrated ambitious people in places where they could encourage each other, and the First World War, which had created new industrial work opportunities for tens of thousands of people. Factors leading to the decline of this era include the Great Depression.
Christianity played a major role in the Harlem Renaissance. Many of the writers and social critics discussed the role of Christianity in African - American lives. For example, a famous poem by Langston Hughes, "Madam and the Minister '', reflects the temperature and mood towards religion in the Harlem Renaissance. The cover story for The Crisis magazine ′ s publication in May 1936 explains how important Christianity was regarding the proposed union of the three largest Methodist churches of 1936. This article shows the controversial question about the formation of a Union for these churches. The article "The Catholic Church and the Negro Priest '', also published in The Crisis, January 1920, demonstrates the obstacles African - American priests faced in the Catholic Church. The article confronts what it saw as policies based on race that excluded African Americans from higher positions in the church.
Various forms of religious worship existed during this time of African - American intellectual reawakening. Although there were racist attitudes within the current Abrahamic religious arenas many African Americans continued to push towards the practice of a more inclusive doctrine. For example, George Joseph MacWilliam presents various experiences, during his pursuit towards priesthood, of rejection on the basis of his color and race yet he shares his frustration in attempts to incite action on the part of The Crisis magazine community.
There were other forms of spiritualism practiced among African Americans during the Harlem Renaissance. Some of these religions and philosophies were inherited from African ancestry.
For example, the religion of Islam was present in Africa as early as the 8th century through the Trans - Saharan trade. Islam came to Harlem likely through the migration of members of the Moorish Science Temple of America, which was established in 1913 in New Jersey.
Various forms of Judaism were practiced, including Orthodox, Conservative, and Reform Judaism, but it was Black Hebrew Israelites that founded their religious belief system during the early 20th century in the Harlem Renaissance.
Traditional forms of religion acquired from various parts of Africa were inherited and practiced during this era. Some common examples were Voodoo and Santeria.
Religious critique during this era was found in literature, art, and poetry. The Harlem Renaissance encouraged analytic dialogue that included the open critique and the adjustment of current religious ideas.
One of the major contributors to the discussion of African - American renaissance culture was Aaron Douglas who, with his artwork, also reflected the revisions African Americans were making to the Christian dogma. Douglas uses biblical imagery as inspiration to various pieces of art work but with the rebellious twist of an African influence.
Countee Cullen 's poem "Heritage '' expresses the inner struggle of an African American between his past African heritage and the new Christian culture. A more severe criticism of the Christian religion can be found in Langston Hughes ' poem "Merry Christmas '', where he exposes the irony of religion as a symbol for good and yet a force for oppression and injustice.
A new way of playing the piano called the Harlem Stride style was created during the Harlem Renaissance, and helped blur the lines between the poor Negroes and socially elite Negroes. The traditional jazz band was composed primarily of brass instruments and was considered a symbol of the south, but the piano was considered an instrument of the wealthy. With this instrumental modification to the existing genre, the wealthy blacks now had more access to jazz music. Its popularity soon spread throughout the country and was consequently at an all - time high. Innovation and liveliness were important characteristics of performers in the beginnings of jazz. Jazz musicians at the time such as Eubie Blake, Jelly Roll Morton, Luckey Roberts, James P. Johnson, Willie "The Lion '' Smith, Fats Waller and Duke Ellington were very talented, skillful, competitive and inspirational, and are still being considered to have laid great parts of the foundations for future musicians of their genre. Duke Ellington gained popularity during the Harlem Renaissance. According to Charles Garrett, "The resulting portrait of Ellington reveals him to be not only the gifted composer, bandleader, and musician we have come to know, but also an earthly person with basic desires, weaknesses, and eccentricities. '' Ellington did not let his popularity get to him. He remained calm and focused on his music.
During this period, the musical style of blacks was becoming more and more attractive to whites. White novelists, dramatists and composers started to exploit the musical tendencies and themes of African Americans in their works. Composers used poems written by African - American poets in their songs, and would implement the rhythms, harmonies and melodies of African - American music -- such as blues, spirituals, and jazz -- into their concert pieces. Negroes began to merge with Whites into the classical world of musical composition. The first Negro male to gain wide recognition as a concert artist in both his region and internationally was Roland Hayes. He trained with Arthur Calhoun in Chattanooga, and at Fisk University in Nashville. Later, he studied with Arthur Hubbard in Boston and with George Henschel and Amanda Ira Aldridge in London, England. He began singing in public as a student, and toured with the Fisk Jubilee Singers in 1911.
During the Harlem Renaissance, Black America 's clothing scene took a dramatic turn from the prim and proper. Many young women preferred extreme versions of current white fashions - from short skirts and silk stockings to drop - waisted dresses and cloche hats. The extraordinarily successful black dancer Josephine Baker, though performing in Paris during the height of the Renaissance, was a major fashion trendsetter for black and white women alike. Her gowns from the couturier Jean Patou were much copied, especially her stage costumes, which Vogue magazine called "startling. '' Popular by the 1930s was a trendy, egret - trimmed beret. Men wore loose suits that led to the later style known as the "Zoot, '' which consisted of wide - legged, high - waisted, peg - top trousers, and a long coat with padded shoulders and wide lapels. Men also wore wide - brimmed hats, colored socks, white gloves, and velvet - collared Chesterfield coats. During this period, African Americans expressed respect for their heritage through a fad for leopard - skin coats, indicating the power of the African animal.
Characterizing the Harlem Renaissance was an overt racial pride that came to be represented in the idea of the New Negro, who through intellect and production of literature, art, and music could challenge the pervading racism and stereotypes to promote progressive or socialist politics, and racial and social integration. The creation of art and literature would serve to "uplift '' the race.
There would be no uniting form singularly characterizing the art that emerged from the Harlem Renaissance. Rather, it encompassed a wide variety of cultural elements and styles, including a Pan-African perspective, "high - culture '' and "low - culture '' or "low - life, '' from the traditional form of music to the blues and jazz, traditional and new experimental forms in literature such as modernism and the new form of jazz poetry. This duality meant that numerous African - American artists came into conflict with conservatives in the black intelligentsia, who took issue with certain depictions of black life.
Some common themes represented during the Harlem Renaissance were the influence of the experience of slavery and emerging African - American folk traditions on black identity, the effects of institutional racism, the dilemmas inherent in performing and writing for elite white audiences, and the question of how to convey the experience of modern black life in the urban North.
The Harlem Renaissance was one of primarily African - American involvement. It rested on a support system of black patrons, black - owned businesses and publications. However, it also depended on the patronage of white Americans, such as Carl Van Vechten and Charlotte Osgood Mason, who provided various forms of assistance, opening doors which otherwise might have remained closed to the publication of work outside the black American community. This support often took the form of patronage or publication. Carl Van Vechten was one of the most noteworthy white Americans involved with the Harlem Renaissance. He allowed for assistance to the black American community because he wanted racial sameness.
There were other whites interested in so - called "primitive '' cultures, as many whites viewed black American culture at that time, and wanted to see such "primitivism '' in the work coming out of the Harlem Renaissance. As with most fads, some people may have been exploited in the rush for publicity.
Interest in African - American lives also generated experimental but lasting collaborative work, such as the all - black productions of George Gershwin 's opera Porgy and Bess, and Virgil Thomson and Gertrude Stein 's Four Saints in Three Acts. In both productions the choral conductor Eva Jessye was part of the creative team. Her choir was featured in Four Saints. The music world also found white band leaders defying racist attitudes to include the best and the brightest African - American stars of music and song in their productions.
The African Americans used art to prove their humanity and demand for equality. The Harlem Renaissance led to more opportunities for blacks to be published by mainstream houses. Many authors began to publish novels, magazines and newspapers during this time. The new fiction attracted a great amount of attention from the nation at large. Among authors who became nationally known were Jean Toomer, Jessie Fauset, Claude McKay, Zora Neale Hurston, James Weldon Johnson, Alain Locke, Omar Al Amiri, Eric D. Walrond and Langston Hughes.
The Harlem Renaissance helped lay the foundation for the post-World War II protest movement of the Civil Rights Movement. Moreover, many black artists who rose to creative maturity afterward were inspired by this literary movement.
The Renaissance was more than a literary or artistic movement, as it possessed a certain sociological development -- particularly through a new racial consciousness -- through ethnic pride, as seen in the Back to Africa movement led by Marcus Garvey. At the same time, a different expression of ethnic pride, promoted by W.E.B. Du Bois, introduced the notion of the "talented tenth '': the African Americans who were fortunate enough to inherit money or property or obtain a college degree during the transition from Reconstruction to the Jim Crow period of the early twentieth century. These "talented tenth '' were considered the finest examples of the worth of black Americans as a response to the rampant racism of the period. (No particular leadership was assigned to the talented tenth, but they were to be emulated.) In both literature and popular discussion, complex ideas such as Du Bois 's concept of "twoness '' (dualism) were introduced (see The Souls of Black Folk; 1903). Du Bois explored a divided awareness of one 's identity that was a unique critique of the social ramifications of racial consciousness. This exploration was later revived during the Black Pride movement of the early 1970s.
"Sometimes I feel discriminated against, but it does not make me angry. It merely astonishes me. How can anyone deny themselves the pleasure of my company? It 's beyond me. '' - Zora Neale Hurston
The Harlem Renaissance was successful in that it brought the Black experience clearly within the corpus of American cultural history. Not only through an explosion of culture, but on a sociological level, the legacy of the Harlem Renaissance redefined how America, and the world, viewed African Americans. The migration of southern Blacks to the north changed the image of the African American from rural, undereducated peasants to one of urban, cosmopolitan sophistication. This new identity led to a greater social consciousness, and African Americans became players on the world stage, expanding intellectual and social contacts internationally.
The progress -- both symbolic and real -- during this period became a point of reference from which the African - American community gained a spirit of self - determination that provided a growing sense of both Black urbanity and Black militancy, as well as a foundation for the community to build upon for the Civil Rights struggles in the 1950s and 1960s.
The urban setting of rapidly developing Harlem provided a venue for African Americans of all backgrounds to appreciate the variety of Black life and culture. Through this expression, the Harlem Renaissance encouraged the new appreciation of folk roots and culture. For instance, folk materials and spirituals provided a rich source for the artistic and intellectual imagination, which freed Blacks from the establishment of past condition. Through sharing in these cultural experiences, a consciousness sprung forth in the form of a united racial identity.
However, there was some pressure within certain groups of the Harlem Renaissance to adopt sentiments of conservative white America in order to be taken seriously by the mainstream. The result was that queer culture, while far more accepted in Harlem than most places in the country at the time, was most fully lived out in the smoky dark lights of bars, nightclubs, and cabarets in the city. It was within these venues that the blues music scene boomed, and since it had not yet gained recognition within popular culture, queer artists used it as a way to express themselves honestly. Even though there were factions within the Renaissance that were accepting of queer culture / lifestyles, one could still be arrested for engaging in homosexual acts. Many people, including author Alice Dunbar - Nelson and "The Mother of Blues '' Gertrude "Ma '' Rainey, had husbands but were romantically linked to other women as well. Ma Rainey was known to dress in traditionally male clothing and her blues lyrics often reflected her sexual proclivities for women, which was extremely radical at the time. Ma Rainey was also the first person to introduce blues music into vaudeville. Rainey 's protégé, Bessie Smith, was another artist who used the blues as a way to express herself with such lines as "When you see two women walking hand in hand, just look em ' over and try to understand: They 'll go to those parties -- have the lights down low -- only those parties where women can go. ''
Another prominent blues singer was Gladys Bentley, who was known to cross-dress. Bentley was the club owner of Clam House on 133rd Street in Harlem, which was a hub for queer patrons. The Hamilton Lodge in Harlem hosted an annual drag ball that attracted thousands to watch as a couple hundred young men came to dance the night away in drag. Though there were safe havens within Harlem, there were prominent voices such as that of Abyssinian Baptist Church 's minister Adam Clayton who actively campaigned against homosexuality.
The Harlem Renaissance gave birth to the idea of The New Negro. The New Negro movement was an effort to define what it meant to be African - American by African Americans rather than let the degrading stereotypes and caricatures found in blackface minstrelsy practices do so. There was also The Neo-New Negro movement, which not only challenged racial definitions and stereotypes, but also sought to challenge gender roles, normative sexuality, and sexism in America in general. In this respect, the Harlem Renaissance was far ahead of the rest of America in terms of embracing feminism and queer culture. These ideals received some push back as freedom of sexuality, particularly pertaining to women (which during the time in Harlem was known as women - loving women), was seen as confirming the stereotype that black women were loose and lacked sexual discernment. The black bourgeoisie saw this as hampering the cause of black people in America and giving fuel to the fire of racist sentiments around the country. Yet for all of the efforts by both sectors of white and conservative black America, queer culture and artists defined major portions of not only the Harlem Renaissance, but also defined so much of our culture today. Author of "The Black Man 's Burden '', Henry Louis Gates Jr. wrote on this very subject matter that the Harlem Renaissance "was surely as gay as it was black ''.
Many critics point out that the Harlem Renaissance could not escape its history and culture in its attempt to create a new one, or sufficiently separate from the foundational elements of White, European culture. Often Harlem intellectuals, while proclaiming a new racial consciousness, resorted to mimicry of their white counterparts by adopting their clothing, sophisticated manners and etiquette. This "mimicry '' may also be called assimilation, as that is typically what minority members of any social construct must do in order to fit social norms created by that construct 's majority. This could be seen as a reason that the artistic and cultural products of the Harlem Renaissance did not overcome the presence of White - American values, and did not reject these values. In this regard, the creation of the "New Negro '' as the Harlem intellectuals sought, was considered a success.
The Harlem Renaissance appealed to a mixed audience. The literature appealed to the African - American middle class and to whites. Magazines such as The Crisis, a monthly journal of the NAACP, and Opportunity, an official publication of the National Urban League, employed Harlem Renaissance writers on their editorial staffs; published poetry and short stories by black writers; and promoted African - American literature through articles, reviews, and annual literary prizes. As important as these literary outlets were, however, the Renaissance relied heavily on white publishing houses and white - owned magazines. A major accomplishment of the Renaissance was to open the door to mainstream white periodicals and publishing houses, although the relationship between the Renaissance writers and white publishers and audiences created some controversy. W.E.B. Du Bois did not oppose the relationship between black writers and white publishers, but he was critical of works such as Claude McKay 's bestselling novel Home to Harlem (1928) for appealing to the "prurient demand (s) '' of white readers and publishers for portrayals of black "licentiousness ''. Langston Hughes spoke for most of the writers and artists when he wrote in his essay "The Negro Artist and the Racial Mountain '' (1926) that black artists intended to express themselves freely, no matter what the black public or white public thought. Hughes in his writings also returned to the theme of racial passing, but during the Harlem Renaissance, he began to explore the topic of homosexuality and homophobia. He began to use disruptive language in his writings. He explored this topic because it was a theme that during this time period was not discussed.
African - American musicians and other performers also played to mixed audiences. Harlem 's cabarets and clubs attracted both Harlem residents and white New Yorkers seeking out Harlem nightlife. Harlem 's famous Cotton Club, where Duke Ellington performed, carried this to an extreme, by providing black entertainment for exclusively white audiences. Ultimately, the more successful black musicians and entertainers who appealed to a mainstream audience moved their performances downtown.
Certain aspects of the Harlem Renaissance were accepted without debate, and without scrutiny. One of these was the future of the "New Negro ''. Artists and intellectuals of the Harlem Renaissance echoed American progressivism in its faith in democratic reform, in its belief in art and literature as agents of change, and in its almost uncritical belief in itself and its future. This progressivist worldview rendered Black intellectuals -- just like their White counterparts -- unprepared for the rude shock of the Great Depression, and the Harlem Renaissance ended abruptly because of naive assumptions about the centrality of culture, unrelated to economic and social realities.
|
tierra mia coffee pacific boulevard huntington park ca | Tierra Mia Coffee - wikipedia
Tierra Mia Coffee Company is a specialty coffee retailer and roaster that operates five retail locations in Southern California, United States. The company opened its first coffeehouse in March 2008 in the city of South Gate at the intersection of Firestone Boulevard and Atlantic Boulevard. In March 2010, Tierra Mia Coffee opened its second location in the city of Huntington Park, within the historic Pacific Boulevard commercial district. The Pacific Boulevard commercial district is the third highest grossing commercial district in the County of Los Angeles. In July 2010 the company opened its third location adjacent to city hall in the city of Santa Fe Springs. In March 2012 Tierra Mia Coffee opened its fourth store and first drive thru location in the city of Pico Rivera on Slauson Avenue, and in August 2012 opened its fifth store in Downtown Los Angeles at the intersection of Spring Street and 7th Street.
Tierra Mia Coffee 's mission is to "provide the highest quality Latin - inspired coffee, beverages, and pastries in a setting that is comfortable, contemporary, and highly reflective of Latin American culture. '' The company roasts all of its coffee offering and bakes all of its pastry offering sold in its stores.
Pulitzer Prize - winning food critic Jonathan Gold of the LA Weekly wrote about Tierra Mia Coffee in August 2008, and described the company as a third - wave coffee concept that offered "the world 's best beans ''. Tierra Mia Coffee has also been profiled in the Los Angeles Times, La Opinión and Al Borde, and has appeared on television newscasts of Univision and Telemundo. In September 2008, Tierra Mia Coffee held an opening event that was attended and keynoted by California State Controller John Chiang.
In December 2011, in LA Weekly 's Best of LA, Tierra Mia Coffee was listed as one of the "10 Best Coffee Shops in Los Angeles ''.
Tierra Mia Coffee makes its espresso utilizing La Marzocco and Mazzer equipment and creates beverages with latte art. All of its brewed coffee is made to order using a pour over method. The company 's menu features original Tierra Mia Coffee creations such as the Mocha Mexicano, Horchata Latte, Horchata Frappe, Chocolate Mexicano Frappe and Tres Leches Muffin.
Tierra Mia Coffee was founded by Ulysses Romero and funded by a group of private investors. Ulysses Romero holds an MBA from Stanford University 's Graduate School of Business and a bachelor 's degree from Berkeley 's Haas School of Business. He previously worked at Kean Coffee a concept started by coffee luminary Martin Diedrich, founder of Diedrich Coffee, one of the first coffeehouse chains in Orange County, California.
|
when does the next season of twin peaks start | Twin Peaks (2017 TV series) - wikipedia
Twin Peaks, also known as Twin Peaks: The Return, is an American mystery horror drama television series created by Mark Frost and David Lynch, and directed by Lynch. It is a continuation of the 1990 -- 91 ABC series of the same name. Developed and written by Lynch and Frost over several years, the limited series consists of 18 episodes and premiered on Showtime on May 21, 2017. An ensemble of returning and new cast members appear, led by original star Kyle MacLachlan.
Set 25 years after the events of the original Twin Peaks, the series follows multiple storylines, many of which are connected through association with FBI Special Agent Dale Cooper (MacLachlan). It takes place in a variety of settings beyond the fictional Washington town of Twin Peaks, including Las Vegas, South Dakota, Philadelphia, and New Mexico. Showtime president David Nevins said that "the core of (the series) is Agent Cooper 's odyssey back to Twin Peaks ''.
The series garnered critical acclaim, with praise centering on its unconventional narrative and structure, visual invention, and performances. Many publications, including Rolling Stone, The Washington Post, and Esquire, named it the best television show of 2017. The film journals Sight & Sound and Cahiers du cinéma named The Return the second - best and best "film '' of the year respectively, sparking discussion about the artistic difference, if any, between theatrical film and TV series in the era of streaming.
The first series of Twin Peaks, an American serial drama television series created by Mark Frost and David Lynch, premiered on April 8, 1990, on ABC. It was one of the top - rated series of 1990, but declining ratings led to its cancellation in 1991 after its second season. In subsequent years, Twin Peaks has often been listed among the greatest television dramas of all time. A prequel film directed by Lynch, Twin Peaks: Fire Walk with Me, was released in 1992. Lynch planned two more films that would have concluded the series ' narrative, but in 2001 stated that Twin Peaks was as "dead as a doornail. ''
In 2007, artist Matt Haley began work on a graphic novel continuation, which he hoped would be included in the "Complete Mystery '' DVD box set. Twin Peaks producer Robert Engels agreed to help write it on the condition that Lynch and Frost approved the project; Haley said: "(Engels) and I had a number of discussions about what the story would be. I was keen to use whatever notes they had for the proposed third season. I really wanted this to be a literal ' third season ' of the show. '' Paramount Home Entertainment agreed to package it with the box set, also on the condition that Lynch and Frost approved. Though Frost approved the project, Lynch vetoed it, stating that he respected the effort but did not want to continue the story of Twin Peaks.
In 2013, rumors that Twin Peaks would return were dismissed by Lynch 's daughter Jennifer Lynch (author of The Secret Diary of Laura Palmer) as well as by Frost. Cast member Ray Wise recounted what Lynch had said to him about a possible continuation: "Well, Ray, you know, the town is still there. And I suppose it 's possible that we could revisit it. Of course, (your character is) already dead... but we could maybe work around that. ''
In January 2014, a casting call for a "Twin Peaks promo '', directed by Lynch, was revealed to be the filming of a featurette for the Twin Peaks: The Complete Mystery Blu - ray set. In September 2014, Lynch answered a question about Twin Peaks at the Lucca Film Festival by saying it was a "tricky question '', and that "there 's always a possibility... and you just have to wait and see. ''
On October 6, 2014, Showtime announced that it would air a nine - episode miniseries written by Lynch and Frost and directed by Lynch. Frost emphasized that the new episodes were not a remake or reboot but a continuation of the series. The episodes are set in the present day, and the passage of 25 years is an important element in the plot. As to whether the miniseries would become an ongoing series, Frost said: "If we have a great time doing it and everybody loves it and they decide there 's room for more, I could see it going that way. ''
In March 2015, Lynch expressed doubts about the production due to "complications ''. Showtime confirmed the series was moving forward, stating: "Nothing is going on that 's any more than any preproduction process with David Lynch. Everything is moving forward and everybody is crazy thrilled and excited. '' In April 2015, Lynch said he would not direct the nine episodes due to budget constraints. He and Showtime came to an agreement, with Lynch confirming on May 15, 2015, that he would direct, and that there would be more episodes than the originally announced nine. At a Twin Peaks panel in Seattle, cast members Sherilyn Fenn and Sheryl Lee said that the new series would consist of 18 episodes and Angelo Badalamenti would return as composer.
On January 12, 2015, Kyle MacLachlan was confirmed to return to the series. In October 2015, it was confirmed that Michael Ontkean, who portrayed Sheriff Harry S. Truman and has since retired from acting, would not return for the revival. In October 2015, it was reported that the role of town sheriff would be filled by Robert Forster, later confirmed as playing Frank Truman, brother of Harry. Forster had been cast as Harry in the 1990 pilot, but was replaced by Ontkean due to scheduling issues. Also in October, David Duchovny teased his return as Agent Denise Bryson. In November 2015, it was reported that Miguel Ferrer would reprise his role as Albert Rosenfield and that Richard Beymer and David Patrick Kelly would return as Benjamin Horne and Jerry Horne respectively. In December 2015, Alicia Witt confirmed she would reprise her role as Gersten Hayward. Michael J. Anderson was asked to reprise his role as The Man from Another Place, but declined.
Russ Tamblyn underwent open - heart surgery in late 2014 and was still recovering in 2015. Lynch and Frost were still hoping Tamblyn would join the cast for the new season, which was later confirmed. On September 28, 2015, Catherine E. Coulson, who reprised her role of the Log Lady in the new series, died of cancer. She filmed her final scene four days before her death.
The series ' first teaser trailer, released in December 2015, confirmed the involvement of Michael Horse (Tommy "Hawk '' Hill). In January 2016, it was reported that Sherilyn Fenn would reprise her role as Audrey Horne in a "major presence. '' In February 2016, it was reported that Lynch would reprise his role as Gordon Cole. Frequent Lynch collaborator Laura Dern was cast in a "top - secret pivotal role '', which eventually proved to be Diane, the previously unseen character to whom Cooper frequently dictated taped messages during the show 's original run. In April 2016, a complete cast list was released, featuring 217 actors, with actors returning from the earlier series marked with asterisks. Mary Reber, who plays Alice Tremond in the finale, is the actual owner of the house used for the Palmer residence.
David Bowie was asked to make a cameo appearance as FBI Agent Phillip Jeffries, his character from Twin Peaks: Fire Walk with Me. As Bowie 's health was declining, his lawyer told Lynch that he was unavailable. Before his death in January 2016, Bowie gave the production permission to reuse old footage featuring him; however, he was unhappy with the accent he had used in the film, and requested that he be dubbed over by an authentic Louisiana actor, leading to the casting of Nathan Frizzell as the voice of Jeffries. In January and February 2017, respectively, cast members Miguel Ferrer and Warren Frost died, but both appear in the new series. Harry Dean Stanton, who reprised his role as Carl Rodd, died in September 2017, less than two weeks after the last episode of the series aired.
In July 2015, Frost suggested that the series would premiere in 2017 rather than 2016, as originally planned. The series began filming in September 2015, and Showtime president David Nevins said, "I 'm hoping we make 2016. It 's not clear. It 's ultimately going to be in (series co-creators David Lynch and Mark Frost 's) control. '' Nevins also stated, "I do n't know (how many episodes there will be). They 're going to decide, I expect it to be more than nine, but it 's open - ended. I know what the shooting schedule is and then we 'll have him cut into it however many episodes it feels best at. '' In January 2016, Nevins confirmed that the series would premiere in the first half of 2017. The series was shot continuously from a single, long shooting script before being edited into episodes. Filming was completed by April 2016.
The series ' large ensemble cast is grouped by setting and category: Twin Peaks, the government, Las Vegas, South Dakota, supernatural figures, New York, New Mexico in 1956, Montana, Odessa, and other roles.
Michael J. Anderson did not reprise his role as The Man from Another Place, who instead appears as a treelike computer - generated effect and is voiced by an uncredited actor. When asked who provided the voice for the CGI character, executive producer Sabrina Sutherland replied, "Unfortunately, I think this question should remain a mystery and not be answered. ''
The show 's score contains new and reused compositions by Angelo Badalamenti, dark ambient music and sound design by Dean Hurley and David Lynch (including some from The Air Is on Fire), and unreleased music from Lynch and Badalamenti 's 1990s project Thought Gang, two of which previously appeared in Fire Walk with Me. Hurley 's contributions were released on the album Anthology Resource Vol. 1 △ △ on August 6, 2017, by Sacred Bones Records. Several tracks from Johnny Jewel 's album Windswept also appear throughout. Threnody for the Victims of Hiroshima by Krzysztof Penderecki appears in key scenes.
Angelo Badalamenti 's score was released on September 8, 2017, by Rhino Records as Twin Peaks: Limited Event Series Original Soundtrack.
Additionally, multiple episodes contain musical performances at the Roadhouse. Lynch hand - picked several of the bands, including Nine Inch Nails, Sharon Van Etten, Chromatics, and Eddie Vedder. Twin Peaks: Music from the Limited Event Series, an album containing many of these performances along with other songs heard on the series, was released by Rhino Records on September 8, 2017.
Other music, mostly played diegetically, also appears in the series; Beethoven 's Moonlight Sonata and "Last Call '' by David Lynch are played slowed down significantly.
Twin Peaks premiered on Showtime on May 21, 2017, with a two - hour episode. After the airing, the premiere and an additional two episodes became available online, and the series aired in weekly increments from that point onwards (at Lynch 's insistence). Overall, the series consists of 18 episodes. It concluded on September 3, 2017, with a two - part finale.
In the United Kingdom, Sky Atlantic simulcast the first two episodes beginning at 2: 00 am British Summer Time on May 22, 2017, and the next two episodes were released on Sky UK 's on - demand service after the premiere. In the Nordic countries, the series is broadcast on HBO Nordic, with the two - hour premiere airing on May 22, and subsequent episodes being made available the day after its U.S. airing. In Canada, the series is available on CraveTV and The Movie Network, and debuted simultaneously with the U.S. broadcast. In Australia, episodes of the series are available to stream on Stan the same day as the original U.S. broadcast. Two episodes were screened at the 2017 Cannes Film Festival. In Japan, the series airs on the satellite television network Wowow, which also aired the original series.
The first two episodes garnered positive reviews from critics. On Rotten Tomatoes, it has a 94 % rating with an average score of 7.75 out of 10 based on 71 reviews. The site 's critical consensus is, "Surreal, suspenseful, and visually stunning, this new Twin Peaks is an auteurist triumph for David Lynch. '' On Metacritic, Twin Peaks has a score of 74 out of 100 based on 26 reviews.
Sonia Saraiya of Variety wrote "Twin Peaks: The Return is weird and creepy and slow. But it is interesting. The show is very stubbornly itself -- not quite film and not quite TV, rejecting both standard storytelling and standard forms. It 's not especially fun to watch and it can be quite disturbing. But there is never a sense that you are watching something devoid of vision or intention. Lynch 's vision is so total and absolute that he can get away with what would n't be otherwise acceptable. ''
The Hollywood Reporter 's Daniel Fienberg commented that "The thing that struck me most immediately about the premiere is how relatively cogent it was, with a clear emphasis on ' relatively '. What premiered on Sunday was as accessibly scary, disturbing and audaciously funny as many of the best parts of the original Twin Peaks, and nowhere near as hallucinatory and subtextually distilled as the prequel film Fire Walk With Me. '' Fienberg also wrote about the series ' format: "It 's obvious this Twin Peaks is going to be an 18 - hour unit. There was no discernible separation between hours and if credits had n't rolled, the second hour could probably just as easily have flowed into the third. This is n't episodic TV. It 's another thing. ''
In her "A '' grade review, Emily L. Stephens of The A.V. Club wrote regarding its possible reception from critics and viewers: "This two - part premiere is going to be wildly difficult for any two people to agree upon, in part because a viewer 's assessment of the revival will depend upon what they hoped for. If you were looking forward to a return of the sometimes campy, sometimes cozy humor of the original two seasons of Twin Peaks, this premiere could come as a shock. If you were anticipating that once jolting, now familiar blend of genres, this is... not that. '' She called the two - part premiere "pure Lynchian horror ''.
At the 2017 Cannes Film Festival, Lynch screened the two - hour premiere of the series and received a five - minute standing ovation from the crowd.
Sean T. Collins of Rolling Stone called the series "one of the most groundbreaking TV series ever '', praising its original, complex story lines and the performances of its cast, particularly Kyle MacLachlan. Matt Zoller Seitz of Vulture wrote that the show was "the most original and disturbing to hit TV drama since The Sopranos ''. In his season review for IGN, Matt Fowler noted that Twin Peaks "came back as a true artistic force that challenged just about every storytelling convention we know '' and scored it an 8.8 out of 10. Additionally, Sight & Sound and Cahiers du cinéma magazines named Twin Peaks: The Return respectively as the second - best and the best "film '' of the year, with Sight & Sound placing it behind only the psychological horror film Get Out. Metacritic ranked Twin Peaks as the second - best TV series of 2017; 20 major publications ranked it as the best show of the year.
The two - hour premiere on May 21, 2017, received 506,000 viewers on Showtime, which Deadline Hollywood called "soft for such a strongly promoted prestige project ''. Ratings increased to 626,000 after the encore broadcasts that night and the premiere also had over 450,000 viewers via streaming and on - demand.
Viewership for the premiere increased to 804,000 in Live + 3 ratings, and it had a viewership of 1.7 million across streaming and on - demand platforms. Showtime announced that the weekend of the Twin Peaks premiere had the most signups to their streaming service ever. Prior to the finale, the series was averaging 2 million weekly viewers, when including time - shifting, encores and streaming. Showtime president David Nevins said that Twin Peaks "has exceeded expectations '' from a financial perspective.
The series was released on Blu - ray and DVD on December 5, 2017, under the title Twin Peaks: A Limited Event Series. The set includes more than six hours of behind - the - scenes content.
In the 2017 Home Media Awards, which honor the year 's best home video releases, Twin Peaks: A Limited Event Series won four awards: Title of the Year, TV on Disc of the Year, Best TV Movie or Miniseries, and Best Extras / Bonus Material.
Both David Lynch and Mark Frost have expressed interest in making another season of Twin Peaks, but Lynch has noted that such a project will not immediately follow The Return, given that it took them four and a half years to write and film the third season.
|
what are the characteristic of each vocal and dance form of latin american music | Music of Latin America - wikipedia
The music of Latin America refers to music originating from Latin America, namely the Romance - speaking countries and territories of the Americas and the Caribbean south of the United States. Latin American music also incorporates African music from slaves who were transported to the Americas by European settlers as well as music from the indigenous peoples of the Americas. Due to its highly syncretic nature, Latin American music encompasses a wide variety of styles, including influential genres such as bachata, bossa nova, merengue, rumba, salsa, samba, son, and tango. During the 20th century, many styles were influenced by the music of the United States giving rise to genres such as Latin pop, rock, jazz, hip hop, and reggaeton.
Geographically, it usually refers to the Spanish and Portuguese - speaking regions of Latin America, but sometimes includes Francophone countries and territories of the Caribbean and South America as well. It also encompasses Latin American styles that have originated in the United States such as salsa and Tejano. The origins of Latin American music can be traced back to the Spanish and Portuguese conquest of the Americas in the 16th century, when the European settlers brought their music from overseas. Latin American music is performed in Spanish, Portuguese, and to a lesser extent, French.
The tango is perhaps Argentina 's best - known musical genre, famous worldwide. Other styles include the Chacarera, Milonga, Zamba and Chamamé. Modern rhythms include Cuarteto (music from the Cordoba Province) and Electrotango.
Argentine rock (known locally as rock nacional) was most popular during the 1980s, and remains Argentina 's most popular music. Rock en Español was first popular in Argentina, then swept through other Hispanic American countries and Spain. The movement was known as the "Argentine Wave. '' Europe strongly influenced this sound as the immigrants brought their style of music with them.
Bolivian music is perhaps the most strongly linked to its native population among the national styles of South America. After the nationalistic period of the 1950s Aymara and Quechuan culture became more widely accepted, and their folk music evolved into a more pop - like sound. Los Kjarkas played a pivotal role in this fusion. Other forms of native music (such as huayños and caporales) are also widely played. Cumbia is another popular genre. There are also lesser - known regional forms, such as the music from Santa Cruz and Tarija (where styles such as Cueca and Chacarera are popular).
Brazil is a large, diverse country with a long history of popular - musical development, ranging from the early - 20th - century innovation of samba to the modern Música popular brasileira. Bossa nova is internationally well - known, and Forró (pronounced (foˈʁɔ)) is also widely known and popular in Brazil. Lambada is influenced by rhythms like cumbia and merengue. Funk carioca is also a highly popular style.
Many musical genres are native to Chile; one of the most popular was the Chilean Romantic Cumbia, exemplified by artists such as Americo and Leo Rey. The Nueva Canción originated in the 1960s and 1970s and spread in popularity until the 1973 Chilean coup d'état, when most musicians were arrested, killed or exiled.
In Central Chile, several styles can be found: the Cueca (the national dance), the Tonada, the Refalosa, the Sajuriana, the Zapateado, the Cuando and the Vals. In the Norte Grande region traditional music resembles the music of southern Perú and western Bolivia, and is known as Andean music. This music, which reflects the spirit of the indigenous people of the Altiplano, was an inspiration for the Nueva canción. The Chiloé Archipelago has unique folk - music styles, due to its isolation from the culture centres of Santiago and Lima.
Music from Chilean Polynesia, Rapa Nui music, is derived from Polynesian culture rather than colonial society or European influences.
The music of Costa Rica is represented by musical expressions such as the parrandera, the tambito, the waltz, the bolero, the gang, calypso, chiquichiqui, mento, the run and the callera. These emerged from migration processes and historical exchanges between indigenous, European and African peoples. Typical instruments include the quijongo, marimba, ocarinas, low drawer, the Sabak, reed flutes, accordion, mandolin and guitar.
Cuba has produced many musical genres, and a number of musicians in a variety of styles. Blended styles range from the danzón to the rumba.
Colombian music can be divided into four musical zones: the Atlantic coast, the Pacific coast, the Andean region and Los Llanos. The Atlantic music features rhythms such as the cumbia, porros and mapalé. Music from the Pacific coast features rhythms such as the currulao -- which is tinged with Spanish influence -- and the jota chocoana (along with many other African - drum - dominated forms), which is tinged with African and Aboriginal influence. Colombian Andean music has been strongly influenced by Spanish rhythms and instruments, and differs noticeably from the indigenous music of Peru or Bolivia. Typical forms include the bambuco, pasillo, guabina and torbellino, played with pianos and string instruments such as the tiple and guitar. The music of Los Llanos, música llanera, is usually accompanied by a harp, a cuatro (a type of four - string guitar) and maracas. It has much in common with the music of the Venezuelan Llanos.
Apart from these traditional forms, two newer musical styles have conquered large parts of the country: la salsa, which has spread throughout the Pacific coast and the vallenato, which originated in La Guajira and César (on the northern Caribbean coast). The latter is based on European accordion music. Merengue music is heard as well. More recently, musical styles such as reggaeton and bachata have also become popular.
Merengue típico and Orchestra merengue have been popular in the Dominican Republic for many decades, and merengue is widely regarded as the national music. Bachata is a more recent arrival, taking influences from the bolero and derived from the country 's rural guitar music. Bachata has evolved and risen in popularity over the last 40 years in the Dominican Republic and other areas (such as Puerto Rico) with the help of artists such as Antony Santos, Luis Segura, Luis Vargas, Teodoro Reyes, Yoskar Sarante, Alex Bueno, and Aventura. Bachata, merengue and salsa are now equally popular among Spanish - speaking Caribbean people. When the Spanish conquistadors sailed across the Atlantic they brought with them a type of music known as hesparo, which contributed to the development of Dominican music. A romantic style, performed by vocalists such as Angela Carrasco, Anthony Rios, Dhario Primero, Maridalia Hernandez and Olga Lara, is also popular in the Dominican Republic.
Traditional Ecuadorian music can be classified as mestizo, Indian and Afro - Ecuadorian music. Mestizo music evolved from the interrelation between Spanish and Indian music. It has rhythms such as pasacalles, pasillos, albazos and sanjuanitos, and is usually played by stringed instruments. There are also regional variations: coastal styles, such as vals (similar to Vals Peruano (Waltz)) and montubio music (from the coastal hill country).
Indian music in Ecuador is determined in varying degrees by the influence of quichua culture. Within it are sanjuanitos (different from the meztizo sanjuanito), capishkas, danzantes and yaravis. Non-quichua indigenous music ranges from the Tsáchila music of Santo Domingo (influenced by the neighboring Afro - marimba) to the Amazonian music of groups such as the Shuar.
Black Ecuadorian music can be classified into two main forms. The first type is black music from the coastal Esmeraldas province, and is characterized by the marimba. The second variety is black music from the Chota Valley in the northern Sierra (primarily known as Bomba del Chota), characterized by a more - pronounced mestizo and Indian influence than marimba esmeraldeña. Most of these musical styles are also played by wind ensembles of varying sizes at popular festivals around the country. Like other Latin American countries, Ecuadorian music includes local exponents of international styles: from opera, salsa and rock to cumbia, thrash metal and jazz.
Salvadoran music may be compared with the Colombian style of music known as cumbia. Popular styles in modern El Salvador (in addition to cumbia) are salsa, bachata and reggaeton. Political chaos tore the country apart in the early 20th century, and music was often suppressed, especially music with strong native influences. In the 1940s, for example, it was decreed that a dance called the "Xuc '', created and led by Paquito Palaviccini and his Orquestra Internacional Polio, was to be the "national dance ''. In recent years reggaeton and hip hop have gained popularity, led by groups such as Pescozada and Mecate. Salvadoran music also shows the influence of Mayan music, played on the El Salvador - Guatemala border in Chalatenango. Another popular style of music not native to El Salvador is punta, a Belizean, Guatemalan and Honduran style.
Some of the leading classical composers from El Salvador include Alex Panamá, Carlos Colón - Quintana, and German Cáceres.
Guatemala has a very long musical tradition.
Haitian music combines a wide range of influences drawn from the many peoples who have settled on this Caribbean island. It reflects French and Spanish elements, African rhythms, contributions from others who have inhabited the island of Hispaniola, and minor native Taino influences. Styles of music unique to the nation of Haiti include music derived from Vodou ceremonial traditions, Rara parading music, Twoubadou ballads, Mini-jazz rock bands, the Rasin movement, Hip hop Kreyòl, the wildly popular Compas, and Méringue as its basic rhythm.
Evolving in Haiti during the mid-1800s, the Haitian méringue (known as the mereng in creole) is regarded as the oldest surviving form of its kind performed today and is its national symbol. According to Jean Fouchard, mereng evolved from the fusion of slave music genres (such as the chica and calenda) with ballroom forms related to the French - Haitian contredanse (kontradans in creole). Mereng 's name, he says, derives from the mouringue music of the Bara, a Bantu people of Madagascar. That few Malagasies came to the Americas casts doubt on this etymology, but it is significant because it emphasizes what Fouchard (and most Haitians) consider the African - derived nature of their music and national identity.
Very popular today is compas, short for compas direct, a modern méringue made popular by Nemours Jean - Baptiste, on a recording released in 1955. The name derives from compás, the Spanish word meaning rhythm or tones. It involves mostly medium - to - fast tempo beats with an emphasis on electric guitars, synthesizers, and either a solo alto saxophone, a horn section or the synthesizer equivalent. In Creole, it is spelled as konpa dirèk or simply konpa. It is commonly spelled as it is pronounced as kompa.
The music of Honduras varies from Punta (the local genre of the Garifunas) to Caribbean music such as salsa, merengue, reggae and reggaeton (all widely heard, especially in the north). Mexican ranchera music has a large following in the rural interior of the country. The country 's ancient capital of Comayagua is an important center for modern Honduran music, and is home to the College for Fine Arts.
Mexico is perhaps one of the most musically diverse countries in the world. Each of its 31 states, its capital city and each of Mexico City 's boroughs claim unique styles of music. The most representative genre is mariachi music. Although commonly misportrayed as buskers, mariachi musicians play extremely technical, structured music or blends such as jarabe. Most mariachi music is sung in verses of prose poetry. Ranchera, Mexico 's country music, differs from mariachi in that it is less technical and its lyrics are not sung in prose. Other regional music includes: son jarocho, son huasteco, cumbia sonidera, Mexican pop, rock en español, Mexican rock and canto nuevo. There is also music based on sounds made by dancing (such as the zapateada).
Northeastern Mexico is home to another popular style called norteña, which assimilates Mexican ranchera with Colombian cumbia and is typically played with Bavarian accordions and Bohemian polka influence. Variations of norteña include duranguense, tambora sinaloense, corridos and nortec (norteño - techno). The eastern part of the country makes heavy use of the harp, typical of the son jarocho style. The music in southern Mexico is particularly represented by its use of the marimba, which has its origins in the Soconusco region between Mexico and Guatemala.
The north - central states have recently spawned a Tecktonik - style music, combining electro and other dance genres with more traditional music.
The most popular style of music in Nicaragua is palo de Mayo, which is both a type of dance music and a festival where the dance (and music) originated. Other popular music includes marimba, punta, Garifuna music, son nica, folk music, merengue, bachata and salsa.
The music of Panama is the result of the mestizaje that has occurred over the last five hundred years between Iberian traditions, especially those of Andalusia, those of American Indians and those of West Africa. This mestizaje has been enriched by the cultural exchange caused by several waves of migration originating in Europe, in various parts of the Caribbean (mostly Barbados, Trinidad, Jamaica and Saint Lucia), in Asia and in several parts of South and North America. These migrations were due to the Spanish colonization of America, which used the Royal Route of Panama as an inter-oceanic trade route and included the slave trade (an institution abolished in Panama in 1851); to the traffic produced by the exploitation of the silver mines of the Viceroyalty of Peru during the 16th and 17th centuries; to the legendary riches of the Fair of Portobelo between the 17th and 18th centuries; and to the construction of the trans - isthmian railroad, begun in 1850, and the interoceanic canal, initiated by France in 1879, completed by the United States in 1914 and expanded by Panama from 2007.
With this rich cultural heritage, Panama has contributed significantly to the development of Cumbia, Decima, Panamanian saloma, Pasillo, panamanian bunde, bullerengue, Punto Music, Tamborito, Mejorana, Panamanian Murga, Tamborera (Examples: Guarare and Tambor de la Alegria), bolero, jazz, Salsa, reggae and calypso, through composers like Nicolas Aceves Núñez (hall, cumbia, tamborito, Pasillo), Luis Russell (jazz), Ricardo Fábrega (bolero and Tamborera), José Luis Rodríguez Vélez (cumbia and bolero), Arturo "Chino '' Hassan (bolero), Nando Boom (reggae), Lord Cobra (calypso), Rubén Blades (salsa), Danilo Pérez (jazz), Vicente Gómez Gudiño (Pasillo), César Alcedo, among many others.
Paraguayan music depends largely upon two instruments: the guitar and the harp, which were brought by the conquistadors and found their own voices in the country. Polka Paraguaya, which adopted its name from the European dance, is the most popular type of music and has different versions (including the galopa, the krye'ÿ and the canción Paraguaya, or "Paraguayan song ''). The first two are faster and more upbeat than a standard polka; the third is a bit slower and slightly melancholy. Other popular styles include the purahéi jahe'o and the compuesto (which tell sad, epic or love stories). The polka is usually based on poetic lyrics, but there are some emblematic pieces of Paraguayan music (such as "Pájaro Campana '', or "Songbird '', by Félix Pérez Cardozo).
Guarania is the second - best - known Paraguayan musical style, and was created by musician José Asunción Flores in 1925.
Peruvian music is made up of indigenous, Spanish and West African influences. Coastal Afro - Peruvian music is characterized by the use of the cajón peruano. Amerindian music varies according to region and ethnicity. The best - known Amerindian style is the huayno (also popular in Bolivia), played on instruments such as the charango and guitar. Mestizo music is varied and includes popular valses and marinera from the northern coast.
The history of music on the island of Puerto Rico begins with its original inhabitants, the Taínos. The Taíno Indians have influenced Puerto Rican culture greatly, leaving behind important contributions such as their musical instruments, language, food, plant medicine and art. The heart of much Puerto Rican music is the idea of improvisation in both the music and the lyrics. A performance takes on an added dimension when the audience can anticipate the response of one performer to a difficult passage of music or clever lyrics created by another. When two singers, either both men or a man and a woman, engage in vocal competition in música jíbara, this is a special type of seis called a controversia.
Of all Puerto Rico 's musical exports, the best - known is reggaeton. Bomba and plena have long been popular, while reggaeton is a relatively recent invention. It is a form of urban contemporary music, often combining other Latin, Caribbean and West Indian musical styles (such as reggae, soca, Spanish reggae, salsa, merengue and bachata). It originates from Panamanian Spanish reggae and Jamaican dancehall, but rose to popularity through Puerto Rico. Tropikeo is the fusion of R&B, rap, hip hop, funk and techno music within a tropical musical frame of salsa, in which the conga and / or timbales drums are the main source of rhythm of the tune, in conjunction with a heavy salsa "montuno '' on the piano. The lyrics can be rapped, sung, or a combination of both, and the music is danced in both styles.
Aguinaldos from Puerto Rico are similar to Christmas carols, except that they are usually sung in a parranda, which is rather like a lively parade that moves from house to house in a neighborhood, looking for holiday food and drink. The melodies were subsequently used for the improvisational décima and seis. Some aguinaldos are usually sung in churches or religious services, while others are more popular and are sung in the parrandas. Danza is a very sophisticated form of music that can be extremely varied in its expression; danzas can be either romantic or festive. Romantic danzas have four sections, beginning with an eight - measure paseo followed by three themes of sixteen measures each. The third theme typically includes a solo by the bombardino and, often, a return to the first theme or a coda at the end. Festive danzas are free - form, with the only rules being an introduction and a swift rhythm. Plena is a narrative song from the coastal regions of Puerto Rico, especially around Ponce. Its origins have been variously claimed as far back as 1875 and as late as 1920. As rural farmers moved to San Juan and other cities, they brought plena with them and eventually added horns and improvised call - and - response vocals. Lyrics generally deal with stories or current events, though some are light - hearted or humorous.
Llanera is Venezuelan popular music originating in the llanos plains, although a more upbeat and festive gaita version is heard in western Venezuela (particularly in Zulia State). There are also African - influenced styles which emphasize drumming and dance, and such diverse styles as music from the Guayana region (influenced by neighboring English - speaking countries) and Andean music from Mérida.
Uruguayan music has similar roots to that of Argentina. Uruguayan tango and milonga are both popular styles, and folk music from along the River Plate is indistinguishable from its Argentine counterpart. Uruguay rock and cancion popular (Uruguayan versions of rock and pop music) are popular local forms. Candombe, a style of drumming descended from African slaves in the area, is quintessentially Uruguayan (although it is played to a lesser extent in Argentina). It is most popular in Montevideo, but may also be heard in a number of other cities.
Based on Cuban music (especially Cuban son and son montuno) in rhythm, tempo, bass line, riffs and instrumentation, Salsa represents an amalgamation of musical styles including rock, jazz, and other Latin American (and Puerto Rican) musical traditions. Modern salsa (as it became known worldwide) was forged in the pan-Latin melting pot of New York City in the late 1960s and early 1970s.
Latin trap became popular around 2015. It is influenced by American trap and reggaeton music.
Reggaeton (also known as reggaetón and reguetón) is a musical genre which originated in Puerto Rico during the late 1990s. It is influenced by hip hop and Latin American and Caribbean music. Vocals include rapping and singing, typically in Spanish.
The Latin (or romantic) ballad is a Latin musical genre which originated in the 1960s. This ballad is very popular in Hispanic America and Spain, and is characterized by a sensitive rhythm. A descendant of the bolero, it has several variants (such as salsa and cumbia). Since the mid-20th century a number of artists have popularized the genre, such as Julio Iglesias, Luis Miguel, Enrique Iglesias, Alejandra Ávalos, Cristian Castro, and José José.
Imported styles of popular music with a distinctively Latin flavor include Latin jazz, Argentine and Chilean rock and Cuban and Mexican hip hop, all influenced by styles from the United States (jazz, rock and roll and hip hop). Music from non-Latin parts of the Caribbean are also popular throughout Latin America, especially Jamaican reggae and dub, Trinidadian chutney, calypso music and soca. Flamenco, rumba, pasodoble and fados from the Iberian peninsula are well - known due to the Iberian heritage in Latin America.
|
where is alpha amylase found in the body | Alpha - amylase - wikipedia
α - Amylase is a protein enzyme (EC 3.2.1.1) that hydrolyses alpha bonds of large, alpha - linked polysaccharides, such as starch and glycogen, yielding glucose and maltose. It is the major form of amylase found in humans and other mammals. It is also present in seeds containing starch as a food reserve, and is secreted by many fungi.
Although found in many tissues, amylase is most prominent in pancreatic juice and saliva, each of which has its own isoform of human α - amylase. They behave differently on isoelectric focusing, and can also be separated in testing by using specific monoclonal antibodies. In humans, all amylase isoforms link to chromosome 1p21 (see AMY1A).
Amylase is found in saliva and breaks starch into maltose and dextrin. This form of amylase is also called "ptyalin '' (/ ˈtaɪəlɪn /). It breaks large, insoluble starch molecules into soluble starches (amylodextrin, erythrodextrin, and achrodextrin), producing successively smaller starches and ultimately maltose. Ptyalin acts on linear α (1, 4) glycosidic linkages, but hydrolysis of branched compounds requires an enzyme that acts on the branch points. Salivary amylase is inactivated in the stomach by gastric acid. In gastric juice adjusted to pH 3.3, ptyalin was totally inactivated in 20 minutes at 37 ° C. In contrast, 50 % of amylase activity remained after 150 minutes of exposure to gastric juice at pH 4.3. Both starch, the substrate for ptyalin, and the product (short chains of glucose) are able to partially protect it against inactivation by gastric acid. Ptyalin added to buffer at pH 3.0 underwent complete inactivation in 120 minutes; however, addition of starch at a 0.1 % level resulted in 10 % of the activity remaining, and similar addition of starch to a 1.0 % level resulted in about 40 % of the activity remaining at 120 minutes.
The salivary amylase gene has undergone duplication during evolution, and DNA hybridization studies indicate many individuals have multiple tandem repeats of the gene. The number of gene copies correlates with the levels of salivary amylase, as measured by protein blot assays using antibodies to human amylase. Gene copy number is associated with apparent evolutionary exposure to high - starch diets. For example, a Japanese individual had 14 copies of the amylase gene (one allele with 10 copies, and a second allele with four copies). The Japanese diet has traditionally contained large amounts of rice starch. In contrast, a Biaka individual carried six copies (three copies on each allele). The Biaka are rainforest hunter - gatherers who have traditionally consumed a low - starch diet. Perry and colleagues speculated the increased copy number of the salivary amylase gene may have enhanced survival coincident to a shift to a starchy diet during human evolution.
Pancreatic α - amylase randomly cleaves the α (1 - 4) glycosidic linkages of amylose to yield dextrin, maltose, or maltotriose. It adopts a double displacement mechanism with retention of anomeric configuration.
The test for amylase is easier to perform than that for lipase, making it the primary test used to detect and monitor pancreatitis. Medical laboratories will usually measure either pancreatic amylase or total amylase. If only pancreatic amylase is measured, an increase will not be noted with mumps or other salivary gland trauma.
However, because of the small amount present, timing is critical when sampling blood for this measurement. Blood should be taken soon after a bout of pancreatitis pain, otherwise it is excreted rapidly by the kidneys.
Salivary α - amylase has been used as a biomarker for stress and as a surrogate marker of sympathetic nervous system (SNS) activity that does not require a blood draw.
Increased plasma amylase levels in humans are found in a number of conditions.
Total amylase readings of over 10 times the upper limit of normal (ULN) are suggestive of pancreatitis. Five to 10 times the ULN may indicate ileus or duodenal disease or renal failure, and lower elevations are commonly found in salivary gland disease.
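The threshold rule above lends itself to a simple computation. The following Python sketch is purely illustrative and not from the original article: the function name and example values are hypothetical, the thresholds are taken directly from the passage above, and real interpretation depends on the specific assay and clinical context.

def interpret_total_amylase(reading, upper_limit_of_normal):
    """Classify a total amylase reading relative to the laboratory upper limit of normal (ULN).

    Thresholds follow the rule of thumb described above; illustrative only,
    not a substitute for clinical judgment.
    """
    if upper_limit_of_normal <= 0:
        raise ValueError("ULN must be positive")
    ratio = reading / upper_limit_of_normal
    if ratio > 10:
        return "suggestive of pancreatitis"
    if ratio >= 5:
        return "may indicate ileus, duodenal disease or renal failure"
    if ratio > 1:
        return "lower elevation, commonly found in salivary gland disease"
    return "within normal limits"

# Hypothetical example: a reading of 1200 U/L against a lab ULN of 100 U/L
print(interpret_total_amylase(1200, 100))  # -> suggestive of pancreatitis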
α - Amylase activity in grain is measured by, for instance, the Hagberg - Perten Falling Number, a test to assess sprout damages, or the Phadebas method.
α - Amylase is used in ethanol production to break starches in grains into fermentable sugars.
The first step in the production of high - fructose corn syrup is the treatment of cornstarch with α - amylase, producing shorter chains of sugars called oligosaccharides.
An α - amylase called "Termamyl '', sourced from Bacillus licheniformis, is also used in some detergents, especially dishwashing and starch - removing detergents.
See amylase for more uses of the amylase family in general.
α - Amylase has exhibited efficacy in degrading polymicrobial bacterial biofilms by hydrolyzing the α (1 - 4) glycosidic linkages within the structural, matrix exopolysaccharides of the extracellular polymeric substance (EPS).
The tris molecule is reported to inhibit a number of bacterial α - amylases, so they should not be used in tris buffer.
Several methods are available for determination of α - amylase activity, and different industries tend to rely on different methods. The starch iodine test, a development of the iodine test, is based on colour change, as α - amylase degrades starch and is commonly used in many applications. A similar but industrially produced test is the Phadebas amylase test, which is used as a qualitative and quantitative test within many industries, such as detergents, various flour, grain, and malt foods, and forensic biology.
α - Amylases contain a number of distinct protein domains. The catalytic domain has a structure consisting of an eight - stranded alpha / beta barrel that contains the active site, interrupted by a ~ 70 - amino acid calcium - binding domain protruding between beta strand 3 and alpha helix 3, and a carboxyl - terminal Greek key beta - barrel domain. Several alpha - amylases contain a beta - sheet domain, usually at the C terminus. This domain is organised as a five - stranded antiparallel beta - sheet. Several alpha - amylases contain an all - beta domain, usually at the C terminus.
This article incorporates text from the public domain Pfam and InterPro entries IPR006047, IPR012850 and IPR006048.
|