question | context
---|---|
who won the first ever women's royal rumble | Royal Rumble - wikipedia
The Royal Rumble is a professional wrestling event, produced every January since 1988 by the professional wrestling promotion WWE. It is named after the Royal Rumble match, a battle royal whose participants enter at timed intervals.
After the initial event was broadcast as a television special on USA Network, the Royal Rumble has been shown on pay-per-view and is one of WWE's "Big Four" events, along with WrestleMania, SummerSlam, and Survivor Series.
The Royal Rumble is a pay-per-view consisting of the Royal Rumble match, title matches, and various other matches. The first Royal Rumble event took place on January 24, 1988, and was broadcast live on the USA Network. The following year, the event began to be broadcast on pay-per-view and thus became one of WWE's "big four" pay-per-views, along with WrestleMania, Survivor Series, and SummerSlam.
The men's Royal Rumble match is usually placed at the top of the card, though there have been exceptions, such as the 1988, 1996, 1997, 1998, 2006, 2013, and 2018 events. Because the Rumble match takes up a large amount of time (most last roughly one hour), the Rumble event tends to have a smaller card than most other pay-per-view events, which routinely have six to eight matches. The 2008 Royal Rumble was the first WWE pay-per-view to be available in high definition. The 2018 Royal Rumble was the first to include a women's Royal Rumble match, which was the main event that year.
The Royal Rumble is based on the classic battle royal match, in which a number of wrestlers (traditionally 30) aim to eliminate their competitors by tossing them over the top rope; a wrestler is eliminated once both feet touch the floor. The winner is the last wrestler remaining after all others have been eliminated. According to Hornswoggle, who worked for WWE from 2006 until 2016 and participated in two Rumbles, participants may learn when they will be eliminated by being told which two wrestlers are eliminated before them and which wrestlers enter the match before and after their own elimination.
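To make the match format described above concrete, here is a minimal, illustrative Python sketch of the Royal Rumble structure: entrants join one at a time at timed intervals, anyone already in the ring may be eliminated over the top rope, and the last wrestler remaining wins. The entrant names, elimination odds, and the `simulate_royal_rumble` helper are invented for illustration and are not taken from WWE.

```python
import random

def simulate_royal_rumble(entrants, elimination_chance=0.5, seed=None):
    """Toy model of the Royal Rumble match format described above.

    Entrants join one at a time at timed intervals; between entrances,
    wrestlers already in the ring may be eliminated (thrown over the top
    rope); the last wrestler remaining wins. All odds are made up.
    """
    rng = random.Random(seed)
    in_ring = []
    for entry_number, name in enumerate(entrants, start=1):
        in_ring.append(name)  # timed-interval entrance
        # Between entrances, anyone in the ring may be eliminated,
        # except when only one wrestler is left standing.
        while len(in_ring) > 1 and rng.random() < elimination_chance:
            eliminated = rng.choice(in_ring)
            in_ring.remove(eliminated)
            print(f"After entrant #{entry_number}: {eliminated} eliminated")
    while len(in_ring) > 1:  # final showdown after the last entrant
        in_ring.remove(rng.choice(in_ring))
    return in_ring[0]

winner = simulate_royal_rumble([f"Wrestler {i}" for i in range(1, 31)], seed=2018)
print("Winner:", winner)
```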
In March 2007, WWE released a complete DVD box set titled Royal Rumble: The Complete Anthology, which showcases every Royal Rumble event in its entirety, up to the 2007 Royal Rumble.
who determined overall strategy for the armed forces of the confederacy | Confederate States Army - Wikipedia
The Confederate States Army (C.S.A.) was the military land force of the Confederate States of America (Confederacy) during the American Civil War (1861 -- 1865). On February 28, 1861, the Provisional Confederate Congress established a provisional volunteer army and gave control over military operations and authority for mustering state forces and volunteers to the newly chosen Confederate president, Jefferson Davis. Davis was a graduate of the U.S. Military Academy, and colonel of a volunteer regiment during the Mexican -- American War. He had also been a United States Senator from Mississippi and U.S. Secretary of War under President Franklin Pierce. On March 1, 1861, on behalf of the Confederate government, Davis assumed control of the military situation at Charleston, South Carolina, where South Carolina state militia besieged Fort Sumter in Charleston harbor, held by a small U.S. Army garrison. By March 1861, the Provisional Confederate Congress expanded the provisional forces and established a more permanent Confederate States Army.
An accurate count of the total number of individuals who served in the Confederate Army is not possible due to incomplete and destroyed Confederate records; estimates of the number of individual Confederate soldiers are between 750,000 and 1,000,000 men. This does not include an unknown number of slaves who were pressed into performing various tasks for the army, such as construction of fortifications and defenses or driving wagons. Since these figures include estimates of the total number of individual soldiers who served at any time during the war, they do not represent the size of the army at any given date. These numbers do not include men who served in Confederate States Navy.
Although most of the soldiers who fought in the American Civil War were volunteers, both sides by 1862 resorted to conscription, primarily as a means to force men to register and to volunteer. In the absence of exact records, estimates of the percentage of Confederate soldiers who were draftees are about double the 6 percent of United States soldiers who were conscripts.
Confederate casualty figures also are incomplete and unreliable. The best estimates of the number of deaths of Confederate soldiers are about 94,000 killed or mortally wounded in battle, 164,000 deaths from disease and between 26,000 and 31,000 deaths in United States prison camps. One estimate of Confederate wounded, which is considered incomplete, is 194,026. These numbers do not include men who died from other causes such as accidents, which would add several thousand to the death toll.
The main Confederate armies, the Army of Northern Virginia under General Robert E. Lee and the remnants of the Army of Tennessee and various other units under General Joseph E. Johnston, surrendered to the U.S. on April 9, 1865 (officially April 12), and April 18, 1865 (officially April 26). Other Confederate forces surrendered between April 16, 1865 and June 28, 1865. By the end of the war, more than 100,000 Confederate soldiers had deserted, and some estimates put the number as high as one third of Confederate soldiers. The Confederacy 's government effectively dissolved when it fled Richmond in April and exerted no control of the remaining armies.
By the time Abraham Lincoln took office as President of the United States on March 4, 1861, the seven seceding slave states had formed the Confederate States. The Confederacy seized federal property, including nearly all U.S. Army forts, within its borders. Lincoln was determined to hold the forts remaining under U.S. control when he took office, especially Fort Sumter in the harbor of Charleston, South Carolina. By the time Lincoln was sworn in as president, the Provisional Confederate Congress had authorized the organization of a large Provisional Army of the Confederate States (PACS).
Under orders from Confederate President Jefferson Davis, C.S. troops under the command of General P.G.T. Beauregard bombarded Fort Sumter on April 12–13, 1861, forcing its capitulation on April 14. The United States was outraged by the Confederacy's attack and demanded war. The country rallied behind Lincoln's call on April 15 for all the states to send troops to recapture the forts from the secessionists, to put down the rebellion, and to preserve the United States intact. Four more slave states then joined the Confederacy. Both the United States and the Confederate States began in earnest to raise large, mostly volunteer, armies with the objectives of putting down the rebellion and preserving the Union, on the one hand, or of establishing independence from the United States, on the other.
The Confederate Congress provided for a Confederate army patterned after the United States Army. It was to consist of a large provisional force to exist only in time of war and a small permanent regular army. The provisional, volunteer army was established by an act of the Provisional Confederate Congress passed on February 28, 1861, one week before the act which established the permanent regular army organization, passed on March 6. Although the two forces were to exist concurrently, very little was done to organize the Confederate regular army.
Members of all the Confederate States military forces (the army, the navy, and the marine corps) are often referred to as "Confederates", and members of the Confederate army were referred to as "Confederate soldiers". Supplementing the Confederate army were the various state militias of the Confederacy:
Control and operation of the Confederate army was administered by the Confederate States War Department, which was established by the Confederate Provisional Congress in an act on February 21, 1861. The Confederate Congress gave control over military operations, and authority for mustering state forces and volunteers to the President of the Confederate States of America on February 28, 1861, and March 6, 1861. On March 8 the Confederate Congress passed a law that authorized Davis to issue proclamations to call up no more than 100,000 men. The War Department asked for 8,000 volunteers on March 9, 20,000 on April 8, and 49,000 on and after April 16. Davis proposed an army of 100,000 men in his message to Congress on April 29.
On August 8, 1861, the Confederacy called for 400,000 volunteers to serve for one or three years. In April 1862, the Confederacy passed the first conscription law in either C.S. or U.S. history, the Conscription Act, which made all able bodied white men between the ages of 18 and 35 liable for a three - year term of service in the PACS. It also extended the terms of enlistment for all one - year soldiers to three years. Men employed in certain occupations considered to be most valuable for the home front (such as railroad and river workers, civil officials, telegraph operators, miners, druggists and teachers) were exempt from the draft. The act was amended twice in 1862. On September 27, the maximum age of conscription was extended to 45. On October 11, the Confederate Congress passed the so - called "Twenty Negro Law '', which exempted anyone who owned 20 or more slaves, a move that caused deep resentment among conscripts who did not own slaves.
The Confederate Congress made several more amendments over the course of the war to address losses suffered in battle as well as the United States ' greater supply of manpower. In December 1863, they abolished the practice of allowing a rich drafted man to hire a substitute to take his place in the ranks. Substitution had also been practiced in the United States, leading to similar resentment from the lower classes. In February 1864, the age limits were extended to between 17 and 50. Challenges to the subsequent acts came before five state supreme courts; all five upheld them.
In his 2010 book Major Problems in the Civil War, historian Michael Perman says that historians are of two minds on why millions of men seemed so eager to fight, suffer and die over four years:
Some historians emphasize that Civil War soldiers were driven by political ideology, holding firm beliefs about the importance of liberty, Union, or state rights, or about the need to protect or to destroy slavery. Others point to less overtly political reasons to fight, such as the defense of one 's home and family, or the honor and brotherhood to be preserved when fighting alongside other men. Most historians agree that, no matter what he thought about when he went into the war, the experience of combat affected him profoundly and sometimes affected his reasons for continuing to fight.
Educated soldiers drew upon their knowledge of American history to justify their costs. McPherson says:
The most popular press coming out of Richmond before and during the Civil War sought to inspire a sense of patriotism, Confederate identity, and moral high ground in the southern population through newspaper media.
The southern churches met the shortage of army chaplains by sending missionaries. The Southern Baptists started in 1862 and sent a total of 78 missionaries; the Presbyterians were even more active, with 112 missionaries by early 1865. Other missionaries were funded and supported by the Episcopalians, Methodists, and Lutherans. One result was wave after wave of revivals in the army. Religion played a major part in the lives of Confederate soldiers. Some men with a weak religious affiliation became committed Christians and saw their military service in terms of God's wishes. Religion strengthened the soldiers' loyalty to their comrades and to the Confederacy. Military historian Samuel J. Watson argues that Christian faith was a major factor in combat motivation. The soldiers' faith consoled them for the loss of comrades; it was a shield against fear; it helped cut down on drinking and fighting; and it enlarged the soldiers' community of close friends and helped make up for long-term separation from home.
In his 1997 book For Cause and Comrades, which examines the motivations of the American Civil War 's soldiers, historian James M. McPherson contrasts the views of Confederate soldiers regarding slavery to that of the colonial American revolutionaries of the 18th century. He stated that while the American colonists of the 1770s saw an incongruity with slave ownership and proclaiming to be fighting for liberty, the Confederacy 's soldiers did not, as the Confederate ideology of white supremacy negated any contradiction between the two:
Unlike many slaveholders in the age of Thomas Jefferson, Confederate soldiers from slaveholding families expressed no feelings of embarrassment or inconsistency in fighting for their own liberty while holding other people in slavery. Indeed, white supremacy and the right of property in slaves were at the core of the ideology for which Confederate soldiers fought.
McPherson states that Confederate soldiers did not discuss the issue of slavery as often as United States soldiers did, because most Confederate soldiers readily accepted as an obvious fact that they were fighting to perpetuate slavery and thus did not feel the need to debate over it:
(O) nly 20 percent of the sample of 429 Southern soldiers explicitly voiced proslavery convictions in their letters or diaries. As one might expect, a much higher percentage of soldiers from slaveholding families than from nonslaveholding families expressed such a purpose: 33 percent, compared with 12 percent. Ironically, the proportion of Union soldiers who wrote about the slavery question was greater, as the next chapter will show. There is a ready explanation for this apparent paradox. Emancipation was a salient issue for Union soldiers because it was controversial. Slavery was less salient for most Confederate soldiers because it was not controversial. They took slavery for granted as one of the Southern ' rights ' and institutions for which they fought, and did not feel compelled to discuss it.
Continuing, McPherson also stated that of the hundreds of Confederate soldiers ' letters he had examined, none of them contained any anti-slavery sentiment whatsoever:
Although only 20 percent of the soldiers avowed explicit proslavery purposes in their letters and diaries, none at all dissented from that view.
But McPherson admits flaws in his sampling of letters. Soldiers from slaveholding families were overrepresented by 100 %:
Nonslaveholding farmers are underrepresented in the Confederate sample. Indeed, while about one - third of all Confederate soldiers belonged to slaveholding families, slightly more than two - thirds of the sample whose slaveholding status is known did so.
In some cases, Confederate men were motivated to join the army in response to the United States ' actions in regards to opposition to slavery. After U.S. President Abraham Lincoln issued the Emancipation Proclamation, some Confederate soldiers welcomed the move, as they believed it would strengthen pro-slavery sentiment in the Confederacy and thus, lead to greater enlistment of white men into the Confederate army.
One Confederate soldier from Texas gave his reasons for fighting for the Confederacy, stating that "we are fighting for our property '', contrasting this with the motivations of Union soldiers, who he claimed were fighting for the "flimsy and abstract idea that a negro is equal to an Anglo ''. One Louisianan artillery soldier stated, "I never want to see the day when a negro is put on an equality with a white person. There is too many free niggers... now to suit me, let alone having four millions. '' A North Carolinian soldier stated, "(A) white man is better than a nigger. ''
In 1894, Virginian and former Confederate soldier John S. Mosby, reflecting back on his role in the war, stated in a letter to a friend that "I 've always understood that we went to war on account of the thing we quarreled with the North about. I 've never heard of any other cause than slavery. ''
At many points during the war, and especially near the end, Confederate armies were very poorly fed. Back home their families were in worsening condition and faced starvation and marauders. Many soldiers went home temporarily ("absent without official leave '') and quietly returned when their family problems had been resolved. By September 1864, however, President Davis publicly admitted that two thirds of the soldiers were absent, "most of them without leave. '' The problem escalated rapidly after that, and fewer and fewer men returned. Soldiers who were fighting in defense of their homes realized that they had to desert to fulfill that duty. Historian Mark Weitz argues that the official count of 103,400 deserters is too low. He concludes that most of the desertions came because the soldier felt he owed a higher duty to his own family than to the Confederacy.
Confederate policies toward desertion generally were severe. For example, on August 19, 1862, General Stonewall Jackson approved the court-martial sentence of execution for three soldiers for desertion, rejecting pleas for clemency from the soldiers' regimental commander. Jackson's goal was to maintain discipline in a volunteer army whose homes were under threat of enemy occupation.
Historians have emphasized how soldiers from poor families deserted because they were urgently needed at home. Local pressures mounted as United States forces occupied more and more of the Confederacy, putting more and more families at risk. One Confederate officer at the time noted, "The deserters belong almost entirely to the poorest class of non slave - holders whose labor is indispensable to the daily support of their families '' and that "When the father, husband or son is forced into the service, the suffering at home with them is inevitable. It is not in the nature of these men to remain quiet in the ranks under such circumstances. ''
Some soldiers also deserted from ideological motivations. A growing threat to the solidarity of the Confederacy was dissatisfaction in the Appalachian mountain districts caused by lingering unionism and a distrust of the slave power. Many of their soldiers deserted, returned home, and formed a military force that fought off regular army units trying to punish them. North Carolina lost 23 % of its soldiers (24,122) to desertion. The state provided more soldiers per capita than any other Confederate state, and had more deserters as well.
Young Mark Twain deserted long before he became a famous writer and lecturer, but he often commented upon the episode in comic fashion. Beneath his desertion from a Missouri State Guard unit was his deep unease about losing his personal honor, his fear of facing death as a soldier, and his rejection of a Southern identity as a professional author.
Because of the destruction of any central repository of records in Richmond in 1865 and the comparatively poor record-keeping of the time, there can be no definitive number for the strength of the Confederate States Army. Estimates range from 500,000 to 2,000,000 men who were involved at any time during the war. War Department reports gave 326,768 men at the end of 1861, 449,439 in 1862, 464,646 in 1863, 400,787 in 1864, and 358,692 in the "last reports". Estimates of enlistments throughout the war range from 1,227,890 to 1,406,180.
The following calls for men were issued:
The CSA was initially a (strategically) defensive army, and many soldiers were resentful when Lee led the Army of Northern Virginia in an invasion of the North in the Antietam Campaign.
The army did not have a formal overall military commander, or general - in - chief, until late in the war. The Confederate President, Jefferson Davis, himself a former U.S. Army officer and U.S. Secretary of War, served as commander - in - chief and provided the strategic direction for Confederate land and naval forces. The following men had varying degrees of control:
The lack of centralized control was a strategic weakness for the Confederacy, and there are few instances of multiple armies acting in concert across multiple theaters to achieve a common objective. (An exception to this was in late 1862 when Lee 's invasion of Maryland was coincident with two other actions: Bragg 's invasion of Kentucky and Earl Van Dorn 's advance against Corinth, Mississippi. All three initiatives were unsuccessful, however.) Likewise, an extreme example of "States Rights '' control of C.S. soldiers was Georgia Governor Joseph E. Brown, who not only reportedly tried to keep Georgia troops from leaving the State of Georgia in 1861 but also tried to keep them from C.S. government control when U.S. forces entered Georgia in 1864.
Many of the Confederacy 's senior military leaders (including Robert E. Lee, Albert Sidney Johnston, James Longstreet) and even President Jefferson Davis were former U.S. Army and, in smaller numbers, U.S. Navy officers who had been opposed to, disapproved of, or were at least unenthusiastic about secession but resigned their U.S. commissions upon hearing that their states had left the Union. They felt that they had no choice but to help defend their homes. President Abraham Lincoln was exasperated to hear of such men who professed to love their country but were willing to fight against it.
As in the U.S. Army, the Confederate army 's soldiers were organized by military specialty. The combat arms included infantry, cavalry and artillery.
Although fewer soldiers might comprise a squad or platoon, the smallest infantry maneuver unit in the Army was a company of 100 soldiers. Ten companies were organized into an infantry regiment, which theoretically had 1,000 men. In reality, as disease, desertions and casualties took their toll, and the common practice of sending replacements to form new regiments took hold, most regiments were greatly reduced in strength. By the mid-war, most regiments averaged 300 -- 400 men, with Confederate units slightly smaller on average than their U.S. counterparts. For example, at the pivotal Battle of Chancellorsville, the average U.S. Army infantry regiment 's strength was 433 men, versus 409 for Confederate infantry regiments.
Rough unit sizes for CSA combat units during the war:
Regiments, which were the basic units of army organization through which soldiers were supplied and deployed, were raised by individual states. They were generally referred to by number and state, for example the 1st Texas or the 12th Virginia. To the extent the word "battalion" was used to describe a military unit, it referred to a multi-company task force of a regiment or a near-regimental-size unit. Throughout the war, the Confederacy raised the equivalent of 1,010 regiments in all branches, including militias, versus 2,050 regiments for the U.S. Army.
Four regiments usually formed a brigade, although as the number of men in many regiments became greatly reduced, especially later in the war, more than four were often assigned to a brigade. Occasionally, regiments would be transferred between brigades. Two to four brigades usually formed a division. Two to four divisions usually formed a corps. Two to four corps usually formed an army. Occasionally, a single corps might operate independently as if it were a small army. The Confederate States Army consisted of several field armies, named after their primary area of operation. The largest Confederate field army was the Army of Northern Virginia, whose surrender at Appomattox Courthouse in 1865 marked the end of major combat operations in the US Civil War.
Companies were commanded by captains and had two or more lieutenants. Regiments were commanded by colonels. Lieutenant colonels were second in command. At least one major was next in command. Brigades were commanded by brigadier generals although casualties or other attrition sometimes meant that brigades would be commanded by senior colonels or even a lower grade officer. Barring the same type of circumstances which might leave a lower grade officer in temporary command, divisions were commanded by major generals and corps were commanded by lieutenant generals. A few corps commanders were never confirmed as lieutenant generals and exercised corps command for varying periods of time as major generals. Armies of more than one corps were commanded by (full) generals.
Period images show a corporal of Confederate artillery, a Confederate mortar crew at Warrington, Florida, in 1861 (across from Fort Pickens), and Confederate artillery at Charleston Harbor in 1863.
There were four grades of general officer (general, lieutenant general, major general, and brigadier general), but all wore the same insignia regardless of grade. This was a decision made early in the conflict. The Confederate Congress initially made the rank of brigadier general the highest rank. As the war progressed, the other general - officer ranks were quickly added, but no insignia for them was created. (Robert E. Lee was a notable exception to this. He chose to wear the rank insignia of a colonel.) Only seven men achieved the rank of (full) general; the highest ranking (earliest date of rank) was Samuel Cooper, Adjutant General and Inspector General of the Confederate States Army.
Officers ' uniforms bore a braid design on the sleeves and kepi, the number of adjacent strips (and therefore the width of the lines of the design) denoting rank. The color of the piping and kepi denoted the military branch. The braid was sometimes left off by officers since it made them conspicuous targets. The kepi was rarely used, the common slouch hat being preferred for its practicality in the Southern climate.
Branch colors were used for color of chevrons -- blue for infantry, yellow for cavalry, and red for artillery. This could differ with some units, however, depending on available resources or the unit commander 's desire. Cavalry regiments from Texas, for example, often used red insignia and at least one Texas infantry regiment used black.
The CSA differed from many contemporaneous armies in that all officers under the rank of brigadier general were elected by the soldiers under their command. The Confederate Congress authorized the awarding of medals for courage and good conduct on October 13, 1862, but war time difficulties prevented the procurement of the needed medals. To avoid postponing recognition for their valor, those nominated for the awards had their names placed on a Roll of Honor, which would be read at the first dress parade after its receipt and be published in at least one newspaper in each state.
The C.S. Army was composed of independent armies and military departments that were constituted, renamed, and disbanded as needs arose, particularly in reaction to offensives launched by the United States. These major units were generally named after states or geographic regions (in comparison to the U.S. Army 's custom of naming armies after rivers). Armies were usually commanded by full generals (there were seven in the C.S. Army) or lieutenant generals. Some of the more important armies and their commanders were:
Some other prominent Confederate generals who led significant units operating sometimes independently in the CSA included Thomas J. "Stonewall '' Jackson, James Longstreet, J.E.B. Stuart, Gideon Pillow, and A.P. Hill.
The supply situation for most Confederate armies was dismal, even when they were victorious on the battlefield. The central government was short of money so each state government had to supply its own regiments. The lack of central authority and the ineffective railroads, combined with the frequent unwillingness or inability of Southern state governments to provide adequate funding, were key factors in the Confederate army 's demise. The Confederacy early on lost control of most of its major river and ocean ports to capture or blockade. The road system was poor, and it relied more and more on a heavily overburdened railroad system. U.S. forces destroyed track, engines, cars, bridges and telegraph lines as often as possible, knowing that new equipment was unavailable to the Confederacy. Occasional raids into the North were designed to bring back money and supplies. In 1864, the Confederates burned down Chambersburg, a Pennsylvania city they had raided twice in the years before, due to its failure to pay an extortion demand.
As a result of severe supply problems, as well as the lack of textile factories in the Confederacy and the successful U.S. naval blockade of Southern ports, the typical Confederate soldier was rarely able to wear the standard regulation uniform, particularly as the war progressed. While on the march or in parade formation, Confederate armies often displayed a wide array of dress, ranging from faded, patched-together regulation uniforms to rough, homespun uniforms colored with homemade dyes such as butternut (a yellow-brown color), and even hodgepodges of civilian clothing. After a successful battle, it was not unusual for victorious Confederate troops to procure U.S. Army uniform parts from captured supplies and dead U.S. soldiers; this would occasionally cause confusion in later battles and skirmishes.
Individual states were expected to supply their soldiers, which led to a lack of uniformity. Some states (such as North Carolina) were able to better supply their soldiers, while other states (such as Texas) were unable for various reasons to adequately supply their troops as the war continued.
Furthermore, each state often had its own uniform regulations and insignia, which meant that the "standard '' Confederate uniform often featured a variety of differences based on the state the soldier came from. For example, uniforms for North Carolina regiments often featured a colored strip of cloth on their shoulders to designate what part of the service the soldier was in. Confederate soldiers also frequently suffered from inadequate supplies of shoes, tents, and other gear, and would be forced to innovate and make do with whatever they could scrounge from the local countryside. While Confederate officers were generally better - supplied and were normally able to wear a regulation officer 's uniform, they often chose to share other hardships -- such as the lack of adequate food -- with their troops.
Confederate soldiers were also faced with inadequate food rations, especially as the war progressed. There was plenty of meat in the Confederacy. The unsolvable problem was shipping it to the armies, especially when Lee 's army in Virginia was at the end of a long, tenuous supply line. United States victory at Vicksburg in 1863 shut off supplies from Texas and the west.
By 1863 Confederate generals such as Robert E. Lee often spent as much time and effort searching for food for their men as they did in planning strategy and tactics. Individual commanders often had to "beg, borrow or steal '' food and ammunition from whatever sources were available, including captured U.S. depots and encampments, and private citizens regardless of their loyalties. Lee 's campaign against Gettysburg and southern Pennsylvania (a rich agricultural region) was driven in part by his desperate need of supplies, especially food.
General Sherman 's total warfare reduced the ability of the South to produce food and ship it to the armies or its cities. Coupled with the U.S. blockade of all ports the devastation of plantations, farms and railroads meant the Confederacy increasingly lost the capacity to feed its soldiers and civilians.
Native Americans served in both the United States and Confederate military during the American Civil War. They fought knowing they might jeopardize their freedom, unique cultures, and ancestral lands if they ended up on the losing side of the Civil War. During the Civil War 28,693 Native Americans served in the U.S. and Confederate armies, participating in battles such as Pea Ridge, Second Manassas, Antietam, Spotsylvania, Cold Harbor, and in Federal assaults on Petersburg. Many Native American tribes, such as the Creek and the Choctaw, were slaveholders themselves, and thus, found a political and economic commonality with the Confederacy.
At the beginning of the war, Albert Pike was appointed as Confederate envoy to Native Americans. In this capacity he negotiated several treaties, including the Treaty with Choctaws and Chickasaws concluded in July 1861. That treaty contained sixty-four terms addressing many subjects, such as Choctaw and Chickasaw national sovereignty, the possibility of Confederate States of America citizenship, and an entitlement to a delegate in the House of Representatives of the Confederate States of America. The Cherokee, Choctaw, Seminole, Catawba, and Creek tribes were the only tribes to fight on the Confederate side. The Confederacy wanted to recruit Indians east of the Mississippi River in 1862, so it opened a recruiting camp in Mobile, Alabama, "at the foot of Stone Street". The Mobile Advertiser and Register advertised for a chance at military service:
A Chance for Active Service. The Secretary of War has authorized me to enlist all the Indians east of the Mississippi River into the service of the Confederate States, as Scouts. In addition to the Indians, I will receive all white male citizens, who are good marksmen. To each member, Fifty Dollars Bounty, clothes, arms, camp equipage &c: furnished. The weapons shall be Enfield Rifles. For further information address me at Mobile, Ala. (Signed) S.G. Spann, Comm'ing Choctaw Forces.
Stand Watie, along with a few Cherokee, sided with the Confederate army, in which he was made colonel and commanded a battalion of Cherokee. Reluctantly, on October 7, 1861, Chief Ross signed a treaty transferring all obligations due to the Cherokee from the United States to the Confederate States. In the treaty, the Cherokee were guaranteed protection, rations of food, livestock, tools and other goods, as well as a delegate to the Confederate Congress at Richmond.
In exchange, the Cherokee would furnish ten companies of mounted men, and allow the construction of military posts and roads within the Cherokee Nation. However, no Indian regiment was to be called on to fight outside Indian Territory. As a result of the Treaty, the 2nd Cherokee Mounted Rifles, led by Col. John Drew, was formed. Following the Battle of Pea Ridge, Arkansas, March 7 -- 8, 1862, Drew 's Mounted Rifles defected to the United States forces in Kansas, where they joined the Indian Home Guard. In the summer of 1862, U.S. troops captured Chief Ross, who was paroled and spent the remainder of the war in Washington and Philadelphia proclaiming Cherokee loyalty to the United States Army.
William Holland Thomas, the only white chief of the Eastern Band of Cherokee Indians, recruited hundreds of Cherokees for the Confederate army, particularly for Thomas ' Legion. The Legion, raised in September 1862, fought until the end of the War.
Choctaw Confederate battalions were formed in Indian Territory and later in Mississippi in support of the southern cause. The Choctaws, who were expecting support from the Confederates, got little. Webb Garrison, a Civil War historian, describes their response: when Confederate Brigadier General Albert Pike authorized the raising of regiments during the fall of 1861, Seminoles, Creeks, Chickasaws, Choctaws, and Cherokees responded with considerable enthusiasm. Their zeal for the Confederate cause, however, began to evaporate when they found that neither arms nor pay had been arranged for them. A disgusted officer later acknowledged that "with the exception of a partial supply for the Choctaw regiment, no tents, clothing, or camp and garrison equipage was furnished to any of them."
With so many white males conscripted into the army and roughly 40 % of its population unfree, the work required to maintain a functioning society in the Confederacy ended up largely on the backs of slaves. Even Georgian governor Joseph E. Brown noted that "the country and the army are mainly dependent upon slave labor for support. '' African American slave labor was used in a wide variety of logistical support roles for the Confederacy, from infrastructure and mining, to teamster and medical roles such as hospital attendants and nurses.
The Confederacy did not allow African Americans, whether free or enslaved, to join the army. The idea of arming the Confederacy's slaves for use as soldiers was speculated on from the onset of the war, but such proposals were not seriously considered by Jefferson Davis or others in the Confederate administration until late in the war, when severe manpower shortages were faced. Gary Gallagher says, "When Lee publicly advocated arming slaves in early 1865, he did so as a desperate expedient that might prolong Southern military resistance." After acrimonious debate, the Confederate Congress agreed in March 1865. The war was nearly over by then, and very few slaves were enlisted before the Confederate armies all surrendered.
As early as November 1864, some Confederates knew that the chance of securing victory against the U.S. was slim. Despite lacking foreign assistance and recognition and facing slim chances of victory against superior U.S. assets, Confederate newspapers such as the Georgian Atlanta Southern Confederacy continued to maintain their position and oppose the idea of armed black men in the Confederate army, even as late in the war as January 1865. They stated that it was incongruous with the Confederacy's goals and views regarding African Americans and slavery. The Georgian newspaper opined that using black men as soldiers would be an embarrassment to Confederates and their children, and that although African Americans should be used for slave labor, they should not be used as armed soldiers.
Prominent Confederates such as R.M.T. Hunter and Georgian Democrat Howell Cobb opposed arming slaves, saying that it was "suicidal '' and would run contrary to the Confederacy 's ideology. Opposing such a move, Cobb stated that African Americans were untrustworthy and innately lacked the qualities to make good soldiers, and that using them would cause many Confederates to quit the army.
The overwhelming support most Confederates had for maintaining black slavery was the primary cause of their strong opposition to using African Americans as armed soldiers. Maintaining the institution of slavery was the primary goal of the Confederacy 's existence, and thus, using their slaves as soldiers was incongruous with that goal. According to historian Paul D. Escott:
(F) or a great many of the most powerful southerners the idea of arming and freeing the slaves was repugnant because the protection of slavery had been and still remained the central core of Confederate purpose... Slavery was the basis of the planter class 's wealth, power, and position in society. The South 's leading men had built their world upon slavery and the idea of voluntarily destroying that world, even in the ultimate crisis, was almost unthinkable to them. Such feelings moved Senator R.M.T. Hunter to deliver a long speech against the bill to arm the slaves.
Though most Confederates were opposed to the idea of using black soldiers, a small number suggested the idea. An acrimonious and controversial debate was raised by a letter from Patrick Cleburne urging the Confederacy to raise black soldiers by offering emancipation; Jefferson Davis refused to consider the proposal and issued instructions forbidding the matter from being discussed. It would not be until Robert E. Lee wrote the Confederate Congress urging them that the idea would take serious traction.
On March 13, 1865, the Confederate Congress passed General Order 14 by a single vote in the Confederate senate, and Jefferson Davis signed the order into law. The order was issued March 23, but as it was late in the war, only a few African American companies were raised in the Richmond area before the town was captured by the U.S. Army and placed back under U.S. control.
According to historian James M. McPherson in 1994, "no black soldiers fought in the Confederate army, unless they were passing as white. '' He noted that some Confederates brought along "their body servants, who in many cases had grown up with them '' and that "on occasion some of those body servants were known to have picked up a rifle... But there was no official recruitment of black soldiers in the Confederate army until the very end of the war '', when it was brought about only by a "desperate shortage of manpower. ''
In some cases, the Confederates forced their African American slaves to fire upon U.S. soldiers at gunpoint, such as at the first Battle of Bull Run. According to John Parker, one such slave who was forced by the Confederates to fight U.S. soldiers, "Our masters tried all they could to make us fight... They promised to give us our freedom and money besides, but none of us believed them; we only fought because we had to. '' Parker stated that had he been given an opportunity, he would have turned against his Confederate captors, and "could do it with pleasure ''. According to abolitionist Henry Highland Garnet in 1862, he had met a slave who "had unwillingly fought on the side of Rebellion '', but said slave had since defected to "the side of Union and universal liberty. ''
During the Siege of Yorktown (1862), The United States Army 's elite sniper unit, the 1st United States Sharpshooters, was devastatingly effective at shooting Confederate artillerymen defending the city. In response, some Confederate artillery crews started forcing slaves to load the cannons. "They forced their negroes to load their cannon, '' reported a U.S. officer. "They shot them if they would not load the cannon, and we shot them if they did. ''
In other cases, under explicit orders from their commanders, Confederate armies would often forcibly kidnap free African American civilians during their incursions into U.S. territory, sending them south into Confederate territory and thus enslaving them, as was the case with the Army of Northern Virginia when it invaded Pennsylvania in 1863.
The United States' use of black men as soldiers, combined with Abraham Lincoln's issuing of the Emancipation Proclamation, profoundly angered the Confederacy, which called the practice uncivilized. In response, in May 1863 the Confederacy passed a law demanding "full and ample retaliation" against the United States, stating that any black person captured in "arms against the Confederate States" or giving aid and comfort to their enemies would be turned over to state authorities, where they could be tried as slave insurrectionists, a capital offense. However, Confederate authorities feared retaliation, and no black prisoner was ever put on trial and executed.
James McPherson states that "Confederate troops sometimes murdered black soldiers and their officers as they tried to surrender. In most cases, though, Confederate officers returned captured black soldiers to slavery or put them to hard labor on southern fortifications. '' African American USCT soldiers were often singled out by the Confederates and suffered extra violence when captured by them. They were often the victims of battlefield massacres and atrocities at the hands of the Confederates, most notably at Fort Pillow in Tennessee and at the Battle of the Crater in Virginia.
The Confederate law declaring black U.S. soldiers as being insurrectionist slaves, combined with the Confederacy 's discriminatory mistreatment of captured black U.S. soldiers, became a stumbling block for prisoner exchanges between the United States and the Confederacy, as the U.S. government in the Lieber Code officially objected to the Confederacy 's discriminatory mistreatment of prisoners of war on basis of color. The Republican Party 's platform of the 1864 presidential election reflected this view, as it too condemned the Confederacy 's discriminatory mistreatment of captured black U.S. soldiers. According to the authors of Liberty, Equality, Power, "Expressing outrage at this treatment, in 1863 the Lincoln administration suspended the exchange of prisoners until the Confederacy agree to treat white and black prisoners alike. The Confederacy refused. ''
Incomplete and destroyed records make an accurate count of the number of men who served in the Confederate army impossible. Historians provide estimates of the actual number of individual Confederate soldiers between 750,000 and 1,000,000 men.
The exact number is unknown. Since these figures include estimates of the total number of individual soldiers who served in each army at any time during the war, they do not represent the size of the armies at any given date. Confederate casualty figures are as incomplete and unreliable as the figures on the number of Confederate soldiers. The best estimates of Confederate deaths are about 94,000 killed or mortally wounded in battle, 164,000 deaths from disease, and between 26,000 and 31,000 deaths in U.S. prison camps. By comparison, about 25,000 U.S. soldiers died of other causes, including accidents, drowning, murder, killing after capture, suicide, execution for various crimes, execution by the Confederates (64), sunstroke, and other or unstated causes; comparable Confederate figures for these causes are unavailable. Since some Confederate soldiers would have died for these reasons as well, the Confederacy's total deaths and casualties must have been higher. One estimate of Confederate wounded, which is considered incomplete, is 194,026; another is 226,000. At the end of the war, 174,223 men of the Confederate forces surrendered to the U.S. Army.
Compared to the U.S. Army at the time, the Confederate army was not very ethnically diverse. Ninety-one percent of Confederate soldiers were native-born white men and only 9 percent were foreign-born white men, with the Irish the largest group, followed by Germans, French, Mexicans, and British. A small number of Asian men were forcibly inducted into the Confederate army when they arrived in Louisiana from overseas.
when is the ap top 25 poll released | AP Poll - wikipedia
The Associated Press Poll (AP Poll) provides weekly rankings of the top 25 NCAA teams in one of three Division I college sports: football, men's basketball, and women's basketball. The rankings are compiled by polling 65 sportswriters and broadcasters from across the nation. Each voter provides their own ranking of the top 25 teams, and the individual rankings are then combined to produce the national ranking by giving a team 25 points for a first-place vote, 24 for a second-place vote, and so on down to 1 point for a twenty-fifth-place vote. Ballots of the voting members in the AP Poll are made public.
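As a rough illustration of the scoring rule just described (25 points for a first-place vote down to 1 point for twenty-fifth), here is a minimal Python sketch that combines individual ballots into a national ranking. The team names, the three-voter example, and the `combine_ap_ballots` function are hypothetical and are not the AP's actual tabulation code.

```python
from collections import defaultdict

def combine_ap_ballots(ballots):
    """Each ballot is an ordered list of 25 team names.

    A team earns 25 points for a first-place vote, 24 for second,
    ..., 1 for a twenty-fifth-place vote; point totals decide the ranking.
    """
    points = defaultdict(int)
    for ballot in ballots:
        for place, team in enumerate(ballot, start=1):
            points[team] += 26 - place
    return sorted(points.items(), key=lambda item: item[1], reverse=True)[:25]

# Hypothetical three-voter example (the real poll uses 65 voters).
filler = [f"Team {i}" for i in range(4, 26)]  # pads each ballot to 25 teams
ballots = [
    ["Georgia", "Michigan", "Texas"] + filler,
    ["Michigan", "Georgia", "Texas"] + filler,
    ["Georgia", "Texas", "Michigan"] + filler,
]
for rank, (team, pts) in enumerate(combine_ap_ballots(ballots)[:3], start=1):
    print(rank, team, pts)
```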
The football poll is released Sundays at 2pm Eastern time during the football season, unless ranked teams have not finished their games.
The AP college football poll has a long history. News media began running their own polls of sports writers to determine who was, by popular opinion, the best football team in the country at the end of the season; one of the earliest such rankings was compiled by the AP, beginning in 1934. In 1935, AP sports editor Alan J. Gould declared a three-way tie for national champion in football between Minnesota, Princeton, and Southern Methodist. Minnesota fans protested, and a number of Gould's colleagues, led by Charles "Cy" Sherman, suggested he create a poll of sports editors instead of relying only on his own list. The poll was born the next year and has run continuously since 1936.
Due to the long - standing historical ties between individual college football conferences and high - paying bowl games like the Rose Bowl and Orange Bowl, the NCAA had not held a tournament or championship game to determine the champion of what is now the highest division, NCAA Division I, Football Bowl Subdivision (the Division I, Football Championship Subdivision and lower divisions do hold championship tournaments). As a result, the public and the media began to acknowledge the leading vote - getter in the final AP Poll as the national champion for that season.
While the AP Poll currently lists the Top 25 teams in the nation, from 1936 to 1961 the wire service only ranked 20 teams. From 1962 to 1967 only 10 teams were recognized. From 1968 to 1988, the AP again resumed its Top 20 before expanding to the current 25 teams in 1989.
The AP began conducting a preseason poll starting in 1950.
At the end of the 1947 season the AP released an unofficial post-bowl poll which differed from the regular season final poll. Until the 1968 college football season, the final AP poll of the season was released following the end of the regular season, with the lone exception of the 1965 season. In 1964, Alabama was named the national champion in the final AP Poll following the completion of the regular season, but lost in the Orange Bowl to Texas, leaving Arkansas as the only undefeated, untied team after the Razorbacks defeated Nebraska in the Cotton Bowl Classic. In 1965, the AP 's decision to wait to crown its champion paid handsomely, as top - ranked Michigan State lost to UCLA in the Rose Bowl, number two Arkansas lost to LSU in the Cotton Bowl Classic, and fourth - ranked Alabama defeated third - ranked Nebraska in the Orange Bowl, vaulting the Crimson Tide to the top of the AP 's final poll (Michigan State was named national champion in the final UPI Coaches Poll, which did not conduct a post-bowl poll).
Beginning in the 1968 season, the post bowl game poll became permanent and the AP championship reflected the bowl game results. The UPI did not follow suit with the coaches ' poll until the 1974 season.
As of the completion of the 2015 season, when No. 1 Clemson met No. 2 Alabama, the number one ranked team has faced the number two ranked team 50 times since the inception of the AP Poll in 1936. The number one team has a record of 28–20–2 against the number two team.
In 1997, the Bowl Championship Series (BCS) was developed to try to unify the poll results by picking two teams for a "real '' national championship game. For the first several years the AP Poll factored in the determination of the BCS rankings, along with other factors including the Coaches Poll and computer - based polls. Because of a series of controversies surrounding the BCS, the AP demanded in December, 2004, that its poll no longer be used in the BCS rankings, and so the 2004 -- 2005 season was the last season that the AP Poll was used for this purpose.
In the 2003 season the BCS system broke down when the final BCS standings ranked the University of Southern California (USC) at No. 3 while the two human polls in the system had ranked USC at No. 1. As a result, USC did not play in the BCS ' designated national championship game. After defeating another highly ranked team, Michigan, in its final game, the AP Poll kept USC at No. 1 while the Coaches Poll was contractually obligated to select the winner of the BCS game, Louisiana State University (LSU), as the No. 1 team. The resulting split national title was the very problem that the BCS was created to solve, and has been widely considered an embarrassment.
In 2004, a new controversy erupted at the end of the season when Auburn and Utah, who both finished the regular season 12–0, were left out of the BCS title game in favor of Oklahoma, which was also 12–0 and had won decisively over Colorado in the Big 12 Championship game. USC went on to win easily over Oklahoma in the Orange Bowl, while Auburn and Utah both won their bowl games, leaving three undefeated teams at the end of the season. Also that year, Texas made up late ground on California (Cal) in the BCS standings and as a result grabbed a high-payout, at-large spot in the Rose Bowl. Before that poll, Cal had been ranked ahead of Texas in both human polls and the BCS poll. Going into their final game, the Golden Bears were made aware that while margin of victory did not affect the computer rankings, it did affect the human polls, and just eight voters changing their vote could affect the final standings. Both teams won their game that week, but the Texas coach, Mack Brown, had made a public effort to lobby for his team to be moved higher in the rankings. When the human polls were released, Texas remained behind Cal, but it had closed the gap enough that the BCS poll (which determines placement) placed Texas above Cal, angering both Cal and its conference, the Pac-10. The final poll positions were unchanged, with Cal at No. 4 in the AP poll, No. 4 in the coaches poll, and No. 6 in the computer polls, and Texas at No. 6, No. 5, and No. 4, respectively. The AP Poll voters were caught in the middle because their vote changes were automatically made public, while the votes of the Coaches Poll were kept confidential. Although there had been a more substantial shift in the votes of the Coaches Poll, the only clear targets for the ire of fanatical fans were the voters in the AP Poll. While officials from both Cal and the Pac-10 called for the coaches' votes to be made public, the overtures were turned down and did little to solve the problem for AP voters. Cal went on to lose to Texas Tech in the Holiday Bowl; Texas defeated Michigan in the Rose Bowl.
Many members of the press who voted in the AP Poll were upset by the controversy and, at the behest of its members, the AP asked that its poll no longer be used in the BCS rankings. The 2004 season was the last season that the AP Poll was used in the BCS rankings; it was replaced in the BCS formula by the newly created Harris Interactive College Football Poll.
The AP Poll is not the only college football poll. The other major poll is the Coaches Poll, which has been sponsored by several organizations: the United Press (1950 -- 1957), the United Press International (1958 -- 1990), USA Today (1991 -- present), CNN (1991 -- 1996), and ESPN (1997 -- 2005). Having two major polls has led to numerous "split '' national titles, where the two polls disagreed on the No. 1 team. This has occurred on eleven different occasions (1954, 1957, 1965, 1970, 1973, 1974, 1978, 1990, 1991, 1997, 2003).
In Division I men 's and women 's college basketball, the AP Poll is largely just a tool to compare schools throughout the season and spark debate, as it has no bearing on postseason play. Generally, all top 25 teams in the poll are invited to the men 's and women 's NCAA basketball tournament, also known as March Madness. The poll is usually released every Monday and voters ' ballots are made public.
The AP began compiling a ranking of the top 20 college men's basketball teams during the 1948–1949 season. It has issued this poll continuously since the 1950–1951 season. North Carolina has been ranked No. 1 the most times (9) and UCLA has been ranked No. 1 the second-most times (8).
The women's basketball poll began during the 1976–1977 season and was initially compiled by Mel Greenberg and published by The Philadelphia Inquirer. At first, it was a poll of coaches conducted via telephone, in which coaches identified top teams and a list of the Top 20 teams was produced. The initial list of coaches did not include Pat Summitt, who asked to join the group, not to improve her rankings, but because of the lack of media coverage; Summitt believed it would be a good way to stay on top of who the top teams were outside of her own schedule. The contributors continued to be coaches until 1994, when the AP took over administration of the poll from Greenberg and switched to a panel of writers. In 1994, Tennessee started out as No. 1 in the poll with Connecticut at No. 4. After losses by the No. 2 and No. 3 teams, Tennessee and Connecticut were ranked No. 1 and No. 2 heading into a showdown scheduled as a special event on Martin Luther King Day, the only women's basketball game scheduled that day. Because of the unusual circumstances, the decision was made to hold off the AP voting for one day, to ensure it would take place after the game. Connecticut won the game and moved into first place in the AP poll, which was published on a Tuesday for the only time. (Connecticut went on to complete an undefeated season.) Over the history of the poll, more than 255 coaches have had a team represented in the rankings.
Beginning in 2012, the AP began issuing a weekly pro football ranking, the AP Pro32 rankings.
|
where does the firth of forth meet the north sea | Firth of Forth - wikipedia
The Firth of Forth (Scottish Gaelic: Linne Foirthe) is the estuary (firth) of several Scottish rivers including the River Forth. It meets the North Sea with Fife on the north coast and Lothian on the south. It was known as Bodotria in Roman times. In the Norse sagas it was known as the Myrkvifiörd.
Geologically, the Firth of Forth is a fjord, formed by the Forth Glacier in the last glacial period. The drainage basin for the Firth of Forth covers a wide geographic area including places as far from the shore as Ben Lomond, Cumbernauld, Harthill, Penicuik and the edges of Gleneagles Golf Course.
Many towns line the shores, as well as the petrochemical complexes at Grangemouth, commercial docks at Leith, former oil rig construction yards at Methil, the ship - breaking facility at Inverkeithing and the naval dockyard at Rosyth, along with numerous other industrial areas, including the Forth Bridgehead area, encompassing Rosyth, Inverkeithing and the southern edge of Dunfermline, Burntisland, Kirkcaldy, Bo'ness and Leven.
The firth is bridged in two places. The Kincardine Bridge and the Clackmannanshire Bridge cross it at Kincardine, while the Forth Bridge, the Forth Road Bridge and the Queensferry Crossing cross from North Queensferry to South Queensferry, further east.
From 1964 to 1982, a tunnel existed under the Firth of Forth, dug by coal miners to link the Kinneil colliery on the south side of the Forth with the Valleyfield colliery on the north side. This is shown in the 1968 educational film "Forth - Powerhouse for Industry ''. The shafts leading into the tunnel were filled and capped with concrete when the tunnel was closed, and it is believed to have filled with water or collapsed in places.
In July 2007, a hovercraft passenger service completed a two - week trial between Portobello, Edinburgh and Kirkcaldy, Fife. The trial of the service (marketed as "Forthfast '') was hailed as a major operational success, with an average passenger load of 85 percent. It was estimated the service would decrease congestion for commuters on the Forth road and rail bridges by carrying about 870,000 passengers each year. Despite the initial success, the project was cancelled in December 2011.
The inner firth, located between the Kincardine and Forth bridges, has lost about half of its former intertidal area as a result of land reclamation, partly for agriculture, but mainly for industry and the large ash lagoons built to deposit spoil from the coal - fired Longannet Power Station near Kincardine. Historic villages line the Fife shoreline: Limekilns, Charlestown, and Culross, which was established in the 6th century and is where Saint Kentigern was born.
The firth is important for nature conservation and is a Site of Special Scientific Interest. The Firth of Forth Islands SPA (Special Protection Area) is home to more than 90,000 breeding seabirds every year. There is a bird observatory on the Isle of May.
The youngest person to swim across the Firth of Forth was 13 - year - old Joseph Feeney, who accomplished the feat in 1933.
In 2008, a controversial bid to allow oil transfer between ships in the firth was refused by Forth Ports. SPT Marine Services had asked permission to transfer 7.8 million tonnes of crude oil per year between tankers, but the proposals were met with determined opposition from conservation groups.
|
how i love you how i love you lyrics | Would I Love You (Love You, Love You) - Wikipedia
"Would I Love You (Love You, Love You) '' is a popular song with music by Harold Spina and lyrics by Bob Russell. It was published in 1950.
It was popularized by Patti Page in a recording made on January 2, 1951. The recording was issued by Mercury Records as catalog number 5571, and first reached the Billboard chart on February 10, 1951, lasting 19 weeks and peaking at # 4.
Another recording was made by Doris Day with Harry James. It was issued by Columbia Records as catalog number 39159 with the flip side "Lullaby of Broadway. '' It reached # 19 on the Billboard chart, lasting 10 weeks beginning on March 2, 1951.
A version by Tony Martin also charted. This recording was released by RCA Victor Records as catalog number 20 - 4056. It first reached the Billboard magazine charts on February 23, 1951 and lasted 4 weeks on the chart, peaking at # 25.
|
when does the michael scott paper company end | Michael Scott Paper Company - wikipedia
"Michael Scott Paper Company '' is the twenty - third episode of the fifth season of the television series The Office, and the 95th overall episode of the series. It originally aired on NBC in the United States on April 9, 2009.
In the episode, Michael, Pam and Ryan try to get their new paper company off the ground, but end up bickering among themselves due to the stress and cramped office space. Meanwhile, Jim tries to do a "rundown '' for new boss Charles Miner without admitting he does not know what a rundown is, while Dwight and Andy compete for the affections of the new receptionist, Erin, played by Ellie Kemper.
The episode was written by Justin Spitzer and directed by Gene Stupnitsky. It included a guest appearance by Idris Elba, who played new Dunder Mifflin vice president Charles Miner. The episode aired the same day as the Office episode "Dream Team ''; the debut episode of the new NBC show Parks and Recreation was shown between the two episodes. "Michael Scott Paper Company '' included a new title sequence with footage of the series characters in the new Michael Scott Paper Company office setting, rather than the Dunder Mifflin setting from previous episodes. The episode received mostly positive reviews. According to Nielsen ratings, it was watched by eight million viewers and captured the most viewers in its time slot for adults between the ages of 18 and 49. "Michael Scott Paper Company '' received a Primetime Emmy Award nomination for Outstanding Sound Mixing for a Comedy or Drama Series (Half - Hour) and Animation.
Michael (Steve Carell), Pam (Jenna Fischer), and Ryan (B.J. Novak) are struggling to adjust to the work environment of the new Michael Scott Paper Company. The office space used to be a closet; water pipes run through the room, so they can hear the toilets flush from the Dunder Mifflin bathrooms above them. Ryan openly goofs off in front of Pam and Michael and insults both of them in their presence during phone conversations. Pam and Ryan bicker over who is responsible for making copies until Michael separates them into corners. Pam, who joined the company as a saleswoman, is given the corner where the photocopier sits. She is concerned that Michael will force her to be the receptionist, which is why she quit Dunder Mifflin in the first place. She goes upstairs and asks Charles Miner (Idris Elba) for her job back, but Charles has already given the job to a new employee, Kelly Hannon (Ellie Kemper), who ends up going by her middle name -- Erin -- to avoid confusion with Kelly Kapoor (Mindy Kaling).
Michael hosts a pancake luncheon to introduce the company to potential clients, but only one person and the Dunder Mifflin employees show up. When Michael, Pam, and Ryan come close to giving up, the potential client from the luncheon calls asking for paper. Pam closes the sale and the three cheer in celebration.
In the Dunder Mifflin office, Charles asks Jim (John Krasinski) for a "rundown '' of his client information. Jim does not know what a rundown is, but is too embarrassed to ask because he has been making such a poor impression with Charles. Jim spends much of the day trying to figure out what a rundown is, making several failed attempts to figure it out by chatting vaguely about it with Charles and other coworkers. When he finally finishes what he believes is a rundown, Charles does not look at it and simply asks Jim to fax it to everyone on the distribution list. Jim does not know what the distribution list is either, but rather than asking Charles, he simply faxes the rundown to his father.
Meanwhile, Dwight (Rainn Wilson) and Andy (Ed Helms) plan a hunting trip, but their new friendship is tested by their mutual romantic interest in Erin. Both make passes at her, but eventually agree their friendship is more valuable than a romantic interest. However, they end up trying simultaneously to impress Erin during a competitive duet of John Denver 's "Take Me Home, Country Roads '', with Andy on a banjo and Dwight playing guitar; both sing, and Dwight sings part of the song in German. Erin is initially impressed with both of them, but winds up awkwardly sneaking out of the room as they play. The loud duet is finally stopped by a frustrated Toby (Paul Lieberstein). Angela (Angela Kinsey) is visibly annoyed by Dwight and Andy 's budding friendship.
"Michael Scott Paper Company '' was written by Justin Spitzer and directed by Gene Stupnitsky. It originally aired April 9, 2009, the same day as the episode "Dream Team ''; the debut episode of the new NBC show Parks and Recreation was shown between the two episodes. "Michael Scott Paper Company '' was the fourth of six episodes guest starring Idris Elba, best known as Stringer Bell from The Wire. Elba said he did not watch the episode after it aired because "I 'm hypercritical about my work, so I try not to torture myself. '' According to the Season 5 DVD episode commentary, B.J. Novak came up with the story idea involving Jim and the "run - down '' and worked it into Spitzer 's final script. The episode included a new title sequence with footage of the series characters in the new Michael Scott Paper Company office setting, rather than the Dunder Mifflin setting from the previous episodes. Rainn Wilson and Ed Helms practiced the guitar competition in Ed Helms ' trailer during lunch the day of the filming; Helms, a proficient bluegrass banjo player, coached Wilson, who is proficient at guitar and drums. Wilson said he proposed doing a full studio cover of the song, with Creed Bratton providing backup guitar and vocals, and selling it on iTunes for charity. Helms said he was happy to have the opportunity to play banjo on screen, but did not feel the instrument made much sense for his character; Helms said, "It 's so fun and weird, but he 's a Connecticut preppy guy. How did he pick up a banjo? It 's one of Andy 's many mysteries, not all of which I even understand. ''
Prior to the episode 's airing, NBC set up a web site for the new Michael Scott Paper Company at www.michaelscottpapercompany.com, which included a mission statement for the company, photos of the new office space and a downloadable copy of the coupon for "unparalleled customer service '' featured in the episode. The official website for The Office included four cut scenes from "Michael Scott Paper Company '' within a week of the episode 's original release. In one 85 - second clip, Dwight and Andy pretend to shoot, stab and throw grenades at each other in pantomime in anticipation of their hunting trip; they pretend to kill Jim and pester him until he plays dead, after which Charles walks in and believes he is napping. A second one - minute clip includes Pam and Ryan fighting around the new office until they are interrupted by a janitor who believes the room is still a closet and leaves water jugs on the floor. In a third, 40 - second clip, Jim asks Charles directly what a "run - down '' is, but when an annoyed Charles asks if this is "one of your pranks '', Jim gives up and leaves. The fourth and final clip, which is 85 seconds long, features Andy and Dwight both making passes at Erin; Andy discusses how easy it would be to learn sign language, while Dwight tells her to be careful not to get her hair or clothes caught in the nearby paper shredder.
Dwight and Andy sing and perform the John Denver song "Take Me Home, Country Roads '' while trying to impress the new receptionist. At one point, Toby is overheard through a vent discussing and praising the FX show Damages while on the phone in a bathroom. He says it is as good as anything on HBO, a premium television channel known for such shows as The Sopranos, Six Feet Under and The Wire. During the workday, Ryan watches a YouTube video of a rap music commercial for Flea Market Montgomery; the low - budget rap music advertisement for the Montgomery, Alabama flea market gained Internet fame. Michael uses Evite, a social - planning website for creating online invitations, to invite people to his pancake luncheon. On a whiteboard, Michael writes a quote of himself quoting Wayne Gretzky, the popular ice hockey player who said "You miss 100 % of the shots you do n't take. '' The new paper company office included Apple computers and a Nerf basketball hoop. During a phone conversation, Ryan says he wants an iPod music player that also serves as a phone, but not an iPhone, which is essentially that very product; both items are Apple products. Michael listens to "Just Dance '', a Lady Gaga song that he mistakes for a Britney Spears song, while driving his convertible to work. Near the end of the episode, when Andy states that every song sounds better a cappella, Dwight asks about the songs "Cherry Pie '' by Warrant, "Enter Sandman '' by Metallica and "Rebel Yell '' by Billy Idol.
In its original American broadcast on April 9, 2009, "Michael Scott Paper Company '' was watched by 8 million overall viewers, according to Nielsen ratings. The episode earned higher ratings than "Dream Team '', the other Office episode of the night, which had 7.2 million viewers. It also performed better than the Parks and Recreation pilot, which ran between the two Office episodes and had 6.8 million viewers. "Michael Scott Paper Company '', as well as "Dream Team '', had the most viewers in its time slot among adults between the ages of 18 and 49.
The episode received mostly positive reviews. Travis Fickett of IGN said this and other recent episodes are "proving that the show has plenty of life in it and (that) The Office has still got it. '' He said the funniest element of the show was the emerging "bro - mance '' between Dwight and Andy. Will Leitch of New York magazine said, "The Office kind of needed this sort of shake - up, even if it 's something as simple as another room to put all our characters in. '' Keith Phipps of The A.V. Club said he liked the plot aspects involving the new company, particularly the pancake breakfast and the first successful sales call: "I know the series probably has to revert to something like the old status quo at some point, but I almost wish it could stay in that dank little corner a little longer. '' Phipps, who gave the episode a B+ grade, said the "rundown '' subplot between Jim and Charles was a bit strained by the end. Steven Mullen of The Tuscaloosa News called the episode "stellar '' and particularly praised the comedic chemistry between Andy and Dwight.
Alan Sepinwall of The Star - Ledger said he was enjoying the new paper company storyline, but that "Michael Scott Paper Company '' was not as funny as "Dream Team '', which aired the same day. Sepinwall praised particular scenes such as Dwight and Andy 's competitive duet and Kelly 's plan to make Charles want her. He criticized the cramped new office space jokes as being less funny than those in previous episodes. Sepinwall also said the storyline between Jim and Charles was getting repetitive and, "It would help if the writers ever gave Idris Elba something funny to do. '' Margaret Lyons of Entertainment Weekly said, "This episode was n't one of my favorites... No bombs, no bits that failed, and by The Office 's standards, nothing even particularly cringe worthy. But ' TMSPC ' is more groundwork than payoff. '' She said Dwight 's rendition of "Take Me Home, Country Roads '' in German was "the absolute funniest moment of the episode ''.
In her list of the top ten moments from the fifth season of The Office, phillyBurbs.com writer Jen Wielgus ranked Michael 's formation of the Michael Scott Paper Company in the downstairs storage closet as the number one, citing the "Dream Team '', "Michael Scott Paper Company '' and "Heavy Competition '' episodes in particular. She also said she specifically enjoyed the paper - shaped pancakes from "Michael Scott Paper Company ''. "Michael Scott Paper Company '' was voted the twelfth highest - rated episode out of 26 from the fifth season, according to an episode poll at the fansite OfficeTally; the episode was rated 8.27 out of 10.
This episode received a Primetime Emmy Award nomination for Outstanding Sound Mixing for a Comedy or Drama Series (Half - Hour) and Animation. "Michael Scott Paper Company '' accounted for one of the ten Primetime Emmy Award nominations The Office received for the show 's fifth season at the 61st Primetime Emmy Awards, which were held on September 20, 2009.
|
how many episodes of hap and leonard are there | Hap and Leonard (TV series) - wikipedia
Hap and Leonard is an American television drama series based on the characters Hap and Leonard, created by novelist Joe R. Lansdale and adapted from his series of novels of the same name. The series was written and developed by Nick Damici and Jim Mickle, who had previously adapted Lansdale 's Cold in July and was directed by Mickle. The series premiered on the American cable network SundanceTV on March 2, 2016. So far, the series has received favorable reviews.
According to various sources, including Variety, the series has been renewed for Season 3 on SundanceTV. The show is currently SundanceTV 's highest - rated original series, and will return for six episodes in 2018. The third season of Hap and Leonard will take inspiration from "The Two - Bear Mambo '', the third installment of Lansdale 's book series.
Filming of the show took place in Baton Rouge, Louisiana, although the series is set in the 1980s in the fictional town of LaBorde in East Texas. Locations used included an old Woman 's Hospital in Baton Rouge as well as the Celtic Media Centre.
The second episode "The Bottoms '' was released online on March 2, 2016, a week before its scheduled airing.
A collection called Hap and Leonard made up of previously published short stories (Hyenas, Veil 's Visit, Dead Aim) by Joe Lansdale as well as new content was published by Tachyon Publications in March 2016 as a tie - in to the TV series.
|
why did they cancel the amazing world of gumball | The Amazing World of Gumball - wikipedia
The Amazing World of Gumball (also known simply as Gumball) is an animated television series created by Ben Bocquelet for Cartoon Network. Produced primarily by Cartoon Network Studios Europe, it first aired on May 3, 2011. The series revolves around the lives of 12 - year - old Gumball Watterson, a blue cat, and his best friend and adoptive brother, Darwin, a goldfish, who attend middle school in the fictional city of Elmore. They frequently find themselves involved in various shenanigans around the city, during which time they interact with Gumball 's family members -- sister Anais and parents Nicole and Richard -- and an extended supporting cast of characters.
Bocquelet based several of the series ' characters on rejected characters from his previous commercial work and made its premise a mixture of "family shows and school shows '', a combination Cartoon Network was heavily interested in. He pitched The Amazing World of Gumball to the network, and Turner Broadcasting executive Daniel Lennard subsequently greenlit production of the series. It is the first series to be produced by Cartoon Network Studios Europe, and is currently co-produced with Studio SOI in Germany and Great Marlborough Productions Limited.
One unique feature of the series is its lack of stylistic unity. Characters are designed, filmed, and animated using multiple styles and techniques (stylised traditional animation, puppetry, CGI, stop motion, Flash animation, live - action, etc.)
The series has made multiple stylistic changes throughout its production, specifically in the transition between its first and second seasons. Such changes include character redesigns, an increase in the use of VFX, higher quality animation, and a shift towards a much darker, more satirical comedic style.
When Cartoon Network Studios Europe was created in 2007, Ben Bocquelet was hired to help people pitch their projects to the network. However, when the studio decided to have its employees all pitch their own ideas, he decided to take some rejected characters he had created for commercials and put them together in one series set in a school. Daniel Lennard, vice president of Original Series and Development at Turner Broadcasting System Europe, was impressed by the premise and approved production of the series. The first series to be produced by Cartoon Network Studios Europe, thirty - six episodes were produced for its first season in collaboration with Studio SOI, Dublin - based Boulder Media Limited, and Dandelion Studios.
The series revolves around the life of a 12 - year - old cat named Gumball Watterson (Logan Grove, seasons 1 -- 2 and season 3 episode: "The Kids ''; Jacob Hopkins, rest of season 3 to season 5 episode "The Copycats ''; Nicolas Cantu, rest of season 5 onward) and his frequent shenanigans in the fictional American city of Elmore, accompanied by his adopted goldfish brother / best friend Darwin (Kwesi Boakye, season 1 -- 2 and season 3 episode: "The Kids ''; Terrell Ransom Jr., rest of season 3 to season 5 episode "The Copycats ''; Donielle T. Hansley Jr., rest of season 5 to season 6 episode "The Cage ''; Christian J. Simon, rest of season 6 - onward). Gumball 's other family members -- his intellectual sister Anais (Kyla Rae Kowalewski) and stay - at - home father Richard (Dan Russell), both rabbits, and workaholic mother Nicole (Teresa Gallagher), a cat -- often find themselves involved in Gumball 's exploits. Gumball attends school with his siblings at Elmore Junior High, where throughout the series he interacts with his various middle school classmates, most prominently his love interest and eventual girlfriend Penny Fitzgerald (also Gallagher).
One prominent feature of the series since its third season is "The Void '', a dimension inside of Elmore where all the universe 's mistakes reside. This includes references to aspects of reality as well as in - series elements. Rob (Hugo Harold - Harrison, David Warner for episodes "The Nemesis '' to "The Disaster '') is a background character from the first two seasons who became trapped in The Void after becoming "irrelevant ''. He later escapes in Season 3, after which he becomes Gumball 's nemesis and main antagonist. He is shown to be aware of his fictional existence in the Season 4 episode "The Disaster '', and his hatred towards Gumball is a result of his role as the protagonist.
The first season of The Amazing World of Gumball premiered on May 3, 2011 with the episode "The DVD '' and ended on March 13, 2012, with "The Fight ''. A 40 - episode second season was announced on March 17, 2011, prior to the premiere of the series ' first season. Speaking of the renewal, executive producer Daniel Lennard stated: "Commissioning a second series before the first show has aired shows our absolute commitment and belief in the series and we 're hoping audiences the world over will embrace this show as much as we have. '' The Amazing World of Gumball was renewed for a third season consisting of 40 episodes in October 2012. In February 2013, the series was put on hiatus, but returned in June 2013. On September 6, 2016, Ben Bocquelet announced he would be departing production of Gumball upon completing the sixth season, but production will continue without him.
On September 17, 2015, series creator Ben Bocquelet announced on his Twitter page that a crossover episode with another Cartoon Network show would air as part of the fifth season.
The episode, "The Boredom '', featured characters from Clarence, Regular Show, and Uncle Grandpa making cameo appearances.
Gumball had a cameo appearance on the Uncle Grandpa episode "Pizza Eve '', along with other Cartoon Network characters from currently running and ended cartoons.
The first and second seasons have been released on Cartoon Network channels in over 126 countries, with the third season rolling out through 2014.
The series debuted on Cartoon Network UK on 2 May 2011.
On December 1, 2014, The Amazing World of Gumball began airing on Boomerang in the United States, alongside its broadcasts on Cartoon Network.
The Amazing World of Gumball received positive reviews from critics. In a favourable review, Brian Lowry of Variety described the series as "mostly a really clever spin on domestic chaos '' and "first - rate silliness. '' Ken Tucker of Entertainment Weekly was also positive, writing: "There are few examples of mainstream children 's programming as wildly imaginative, as visually and narratively daring, as The Amazing World of Gumball. '' Reviews from the Daily Mail praised The Amazing World of Gumball as a "gloriously surreal chunk of fast and funny telly '' and "the kind of clever children 's comedy that parents can also enjoy. ''
The A.V. Club 's Noel Murray graded the DVD release of the series ' first 12 episodes a B+, writing that "what sets (The Amazing World of Gumball) apart from the many other super-silly, semi-anarchic cartoons on cable these days is that it features such a well - developed world, where even with the eclectic character designs, there are recognisable traits and tendencies. '' Wired writer Z noted that the series "manages to have genuine heart even as the plots themselves transition from well - worn TV tropes to all out madness. ''
On May 3, 2011, the series premiere of The Amazing World of Gumball was watched by 2.120 million viewers in the United States. "The Goons '' is currently the highest viewed episode of the series, with 2.72 million viewers. "The Potion '' is the lowest viewed episode with only 0.42 million viewers, approximately 15 % of its series high.
In an interview with The Times newspaper, series creator Ben Bocquelet mentioned plans for a feature film based on the series. However, after Bocquelet announced his departure from the show following the sixth season, he stated that he doubted a film would be made. In March 2018, Bocquelet 's interest in a Gumball movie was seemingly revitalized as he stated that he "might have a good idea '' for a movie. He later added that he had two ideas, one for a potential theatrical film and one for a potential direct - to - video film.
On June 18, 2014, a comic book adaptation of the series, written by Frank Gibson, was released. Art for the collection of works is handled by Boom! Studios.
|
who wrote girls just want to have fun | Girls Just Want to Have Fun - wikipedia
"Girls Just Want to Have Fun '' is a song written by and first recorded in 1979 by American musician Robert Hazard. It is better known as a single by American singer Cyndi Lauper, whose version was released in 1983. It was the first major single released by Lauper as a solo artist and the lead single from her debut studio album She 's So Unusual (1983). Lauper 's version gained recognition as a feminist anthem and was promoted by a Grammy - winning music video. It has been covered, either as a studio recording or in a live performance, by over 30 other artists.
The single was Lauper 's breakthrough hit, reaching number two on the US Billboard Hot 100 chart and becoming a worldwide hit throughout late 1983 and early 1984. It remains one of Lauper 's signature songs and was widely popular during the era of its release, the 1980s. The song ranked No. 22 on the Rolling Stone & MTV "100 Greatest Pop Songs '' list, No. 39 on Rolling Stone 's "100 Top Music Videos '' list, and No. 45 on VH1 's "100 Greatest Videos '' list. The song received Grammy Award nominations for Record of the Year and Best Female Pop Vocal Performance. In 2013, the song was remixed by Yolanda Be Cool for the 30th anniversary reissue of the album She 's So Unusual.
The song was written by Robert Hazard, who recorded only a demo of it in 1979. Hazard 's version was written from a male point of view. Lauper 's version appeared on her 1983 debut solo record, She 's So Unusual. The track is a synthesizer - backed anthem, from a feminist point of view, conveying the point that all women really want is to have the same experiences that men can. Gillian G. Gaar, author of She 's a Rebel: The History of Women in Rock & Roll (2002), described the single and corresponding video as a "strong feminist statement '', an "anthem of female solidarity '' and a "playful romp celebrating female camaraderie. ''
The variety of releases of the single includes an Austrian birthday card with a 3 - inch (76 mm) CD of the song inside. The song has been heavily distributed in karaoke version as well. Lauper later went on to completely re-work the song in 1994 resulting in the new hit "Hey Now (Girls Just Want to Have Fun) ''. The song was remade by Lauper yet again in 2005 on her The Body Acoustic album, also produced by Chertoff and Wittman with Lauper, with guest support vocals from Japanese pop / rock duo Puffy AmiYumi.
The release of the single was accompanied by a quirky music video. It cost less than $35,000, largely due to a volunteer cast and the free loan of the most sophisticated video equipment available at the time. The cast included professional wrestling manager "Captain '' Lou Albano in the role of Lauper 's father while her real mother, Catrine, played herself. Lauper would later appear in World Wrestling Federation storylines opposite Albano and guest - star in an episode of The Super Mario Bros. Super Show, in which Albano portrayed Mario (Albano also played himself in the episode). Lauper 's attorney, Elliot Hoffman, appeared as her uptight dancing partner. Also in the cast were Lauper 's manager, David Wolf, her brother, Butch Lauper, fellow musician Steve Forbert, and a bevy of secretaries borrowed from Portrait / CBS, Lauper 's record label. A clip of The Hunchback of Notre Dame is featured as Lauper watches it on television.
Lorne Michaels (Broadway Video, SNL), another of Hoffman 's clients, agreed to give Lauper free run of his brand new million - dollar digital editing equipment, with which she and her producer created several first - time - ever computer generated images of Lauper dancing with her buttoned - up lawyer, leading the entire cast in a snake - dance through New York streets and ending up in Lauper 's bedroom in her home. The bedroom scene is an homage to the famous stateroom scene in the Marx Brothers ' film A Night at the Opera.
"The year 1983 makes a watershed in the history of female - address video. It is the year that certain issues and representations began to gain saliency and the textual strategies of female address began to coalesce. '' In the video, Lauper wanted to show in a more fun and light - hearted manner that girls want the same equality and recognition boys had in society.
Before the song starts, the beginning of her version of "He 's So Unusual '' plays.
The music video was directed by Edd Griles. The producer was Ken Walz while the cinematographer was Francis Kenny. The treatment for the video was co-written by Griles, Walz, and Cyndi Lauper. The video was shot in the Lower East Side of Manhattan in summer 1983 and premiered on television in December 1983. The choreography was by a New York dance and music troupe called XXY featuring Mary Ellen Strom, Cyndi Lee and Pierce Turner.
As of August 2018, the music video has over 600 million views on YouTube.
The song was released in late 1983 but much of its success on the charts came during the first half of 1984. The single reached the Top 10 in over 25 countries and reached No. 1 in ten of those countries including Australia, Canada, Ireland, Japan, New Zealand, and Norway. It also reached No. 2 in both the United Kingdom and the United States.
In the United States, the song entered the Billboard Hot 100 at No. 80 on December 17, 1983. It ultimately peaked at No. 2 on March 10, 1984, where it stayed for two weeks, behind Van Halen 's "Jump ''. In the United Kingdom, the song entered the chart at No. 50 on January 14, 1984 and peaked at No. 2 on February 4, 1984, where it stayed for one week. In Ireland, the song entered the chart on January 29, 1984. It peaked at number one for two weeks and was on the chart for a total of seven weeks. In Australia, the song debuted on the Kent Music Report Top 100 on February 27, 1984. It entered the Top 10 in only its third week on the chart and reached number one on March 26, 1984. It topped the chart for two weeks and then remained at number two for four weeks behind Nena 's "99 Luftballons ''. It stayed on the chart for 21 weeks and was the 9th biggest - selling single of the year. In Belgium, the song debuted at No. 38 on February 18, 1984 and peaked at No. 4 on April 7, 1984. In the Netherlands, the song entered the chart at No. 38 on February 25, 1984 and peaked at No. 4 on March 31, 1984.
In Sweden, the song entered at No. 13 on March 6, 1984 and peaked at No. 5 on April 3, 1984, charting for six weeks. In Switzerland, the song entered the chart at No. 15 on April 1, 1984 and peaked at No. 6 on April 29, 1984. In New Zealand, the song debuted at No. 21 on April 1, 1984 and peaked at No. 1 on May 6, 1984, where it stayed for three weeks. In Austria, the single entered at No. 3 on May 1, 1984, which was its peak position.
The song is featured in the films Clueless, Girls Just Want to Have Fun, To Wong Foo, Thanks for Everything! Julie Newmar, Riding in Cars with Boys, I Now Pronounce You Chuck and Larry, Hysterical Blindness, Midnight Heat, The Other Woman, Housefull 3, Peter 's Friends, The Sisterhood of the Traveling Pants 2, Anomalisa and Baby Mama.
The song has also been used in the television shows The Simpsons, Friends, Bones, Gilmore Girls, The Adventures of Super Mario Bros. 3, Married with Children, Daria, Hinter Gittern -- Der Frauenknast, The Comeback, Drawn Together, 90210, Secret Diary of a Call Girl, 20 to 1, Celebrity Big Brother, Two and a Half Men, Family Guy, Coronation Street, and Miami Vice, among others. In 2008 in Mexico, it was used in the first episode of the telenovela Mañana es para siempre.
WWE Hall of Famer Wendi Richter has used the song as her entrance theme.
The song has been covered by multiple artists, including STRFKR on the album Jupiter.
A CD single was issued in 2007, known as a ringle, which included bonus interactive computer material as well as a code to download a free ringtone of the title track. It featured the title track and, for the first time on CD, "Right Track Wrong Train ''. The ringle, along with all other issued ringles, was recalled by Sony Music due to issues with the ringtone not working correctly; they have yet to be reissued.
"Hey Now (Girls Just Want to Have Fun) '' was the first single from Cyndi Lauper 's Twelve Deadly Cyns... and Then Some hits collection from 1994, and her first charting single on the Billboard Hot 100 since "My First Night Without You '' in 1989.
This song is a new reggae - tinged arrangement of Lauper 's own "Girls Just Want to Have Fun '' standard, with a musical tip of the hat to Redbone 's "Come and Get Your Love ''. The arrangement evolved as she experimented with the song 's style over the course of the 1993 -- 94 Hat Full of Stars Tour. The song was a big comeback hit for Lauper, landing in the top 10 and top 40 in many countries. It was also a big dance hit in the United States. It peaked at # 4 in the UK and New Zealand, its highest position.
"Hey Now '' plays over a pivotal closing sequence of the film To Wong Foo, Thanks for Everything! Julie Newmar.
The song was covered by Triple Image and Jamie Lynn Spears in 2002.
In 2010, Cancer Research UK arranged for a charity record for their Race for Life campaign. It features many celebrities such as EastEnders actress Nina Wadia, Coronation Street actress Kym Marsh, Life of Riley actress Caroline Quentin, glamour girl Danielle Lloyd, X Factor finalist Lucie Jones, singer Sonique (herself a breast cancer survivor), former EastEnders actress Lucy Benjamin, and Celebrity Big Brother 's Nicola T.
|
what is a right bundle branch block in the heart | Right bundle branch block - wikipedia
A right bundle branch block (RBBB) is a heart block of the right bundle branch of the heart 's electrical conduction system.
During a right bundle branch block, the right ventricle is not directly activated by impulses travelling through the right bundle branch. The left ventricle however, is still normally activated by the left bundle branch. These impulses are then able to travel through the myocardium of the left ventricle to the right ventricle and depolarize the right ventricle this way. As conduction through the myocardium is slower than conduction through the Bundle of His - Purkinje fibres, the QRS complex is seen to be widened. The QRS complex often shows an extra deflection which reflects the rapid depolarisation of the left ventricle followed by the slower depolarisation of the right ventricle.
In most cases right bundle branch block has a pathological cause, though it is also seen in about 1.5 - 3 % of healthy individuals.
The criteria to diagnose a right bundle branch block on the electrocardiogram include a QRS duration greater than 120 ms, a terminal R wave in lead V1 (often appearing as an rsR ' pattern), and a wide, slurred S wave in the lateral leads (I and V6).
The T wave should be deflected opposite the terminal deflection of the QRS complex. This is known as appropriate T wave discordance with bundle branch block. A concordant T wave may suggest ischemia or myocardial infarction.
A mnemonic to distinguish between ECG signatures of left bundle branch block (LBBB) and right, is WiLLiaM MaRRoW; i.e., with LBBB, there is a W in lead V1 and an M in lead V6, whereas, with RBBB, there is an M in V1 and a W in V6.
Example ECGs: RBBB; RBBB with associated first degree AV block; RBBB with associated tachycardia.
An atrial septal defect is one possible cause of a right bundle branch block. In addition, a right bundle branch block may also result from Brugada syndrome, right ventricular hypertrophy, pulmonary embolism, ischaemic heart disease, rheumatic heart disease, myocarditis, cardiomyopathy or hypertension.
Prevalence of RBBB increases with age.
The underlying condition may be treated by medications to control hypertension or diabetes, if they are the primary underlying cause. If the coronary arteries are blocked, an invasive coronary angioplasty may relieve the RBBB.
|
who holds the most grand slams in men's tennis | List of Grand Slam men 's singles champions - wikipedia
This article lists the men 's singles champions of the Grand Slam tennis tournaments. Some major changes have taken place over the history of these tournaments and have affected the number of titles won by various players. These have included the opening of the French national championships to international players in 1925, the elimination of the challenge round in 1922, and the admission of professional players in 1968 (the start of the ' Open ' era).
Note: All of these tournaments have been listed since they began, rather than when they officially became majors. The Australian and US tournaments have only been officially regarded as majors by the ILTF (now the ITF) since 1924 (though many regarded the US Championships as a major before then). The French Championships have only been a major since 1925 (when it became open to all amateurs internationally). From 1912 / 1913 to 1923, there were three official majors: Wimbledon, the World Hard Court Championships (played on clay) and the World Covered Court Championships (played on an indoor wood surface).
The champions are tabulated by all - time and Open Era title totals and winning streaks, by Career Grand Slam (players who won all four majors, listed with the year of their first title at each tournament and the age at which they completed the set), and by calendar - year Grand Slam (winners of all four Grand Slam singles tournaments in the same year).
|
when did americas got talent season 13 start | America 's Got Talent - Wikipedia
America 's Got Talent (often abbreviated as AGT) is an American reality television series on the NBC television network, and part of the global Got Talent franchise. It is a talent show that features singers, dancers, magicians, comedians, and other performers of all ages competing for the advertised top prize of one million dollars. The show debuted in June 2006 for the summer television season. From season three (2008) onwards, the prize includes one million dollars, payable in a financial annuity over 40 years, and a chance to headline a show on the Las Vegas Strip. Among its significant features is that it gives an opportunity to talented amateurs or unknown performers, with the results decided by an audience vote. The format is a popular one and has often been reworked for television in the United States and the United Kingdom.
This incarnation was created by Simon Cowell, and was originally due to be a 2005 British series called Paul O'Grady 's Got Talent but was postponed due to O'Grady's acrimonious split with broadcaster ITV (later launching as Britain 's Got Talent in 2007). Therefore, the U.S. version became the first full series of the franchise.
The original judging panel consisted of David Hasselhoff, Brandy Norwood, and Piers Morgan. Sharon Osbourne replaced Norwood in season two (2007), and Howie Mandel replaced Hasselhoff in season five (2010). Howard Stern replaced Morgan in season seven (2012). Heidi Klum replaced Osbourne in season eight (2013), while Mel B joined as a fourth judge. Simon Cowell replaced Stern in season eleven (2016). Regis Philbin was the original host (season one), followed by Jerry Springer for two seasons (2007 -- 2008), followed by Nick Cannon for eight seasons (2009 -- 2016). Supermodel and host Tyra Banks replaced Cannon for the twelfth season (2017) and season thirteen.
The series has since been renewed for a thirteenth season, which premiered on May 29, 2018. Also in May 2018, NBC announced a winter spin - off edition, titled America 's Got Talent: The Champions, featuring acts from previous seasons, as well as acts from international Got Talent shows.
Starting with the tenth season, each of the main judges invited a guest judge to join the judging panel for one night during the Judge Cuts stage of the competition. The guest judges had the ability to employ the golden buzzer to bypass the other judges and advance an act to the live shows. The first guest judge, Neil Patrick Harris, appeared at the invitation of Howard Stern in episode eight of season ten, which aired on July 14, 2015. Michael Bublé appeared at the invitation of Heidi Klum in episode nine of season ten, which aired on July 21, 2015. Marlon Wayans appeared at the invitation of Howie Mandel in episode ten of season ten, which aired on July 28, 2015. Piers Morgan appeared at the invitation of Mel B in episode eleven of season ten, which aired on August 4, 2015. Beginning with the eleventh season the guest judges were announced without any indication if they were invited by one of the regular judges. This continued into the twelfth season when the guest judges were announced by NBC through various outlets.
The general selection process of the show begins with separate producers ' auditions held in various cities across the United States, some of which host only the producers ' auditions, and some of which also host judges ' auditions held in theaters. This round is held several months before the judges ' audition. Acts that have made it through the producers ' audition then audition in front of the judges and a live audience.
Following the producers ' auditions, acts that are called back audition in front of (as of 2013) four celebrity judges and an audience. These auditions are held in the Pasadena Civic Auditorium and are prerecorded to be televised later. The announced judges for Season 13 (2018) are Simon Cowell, Heidi Klum, Melanie Brown (Mel B) and Howie Mandel, with Tyra Banks returning as host; the season premiered on May 29, 2018. At the Judges ' Auditions, the four judges may individually register their disapproval of an act by pressing a red buzzer in front of each judge on the judging table, which lights up their corresponding X both at the table and above the stage; this is followed by a quick flash of light on stage and a loud buzzing effect. Any performer who receives X 's from all of the judges (3 in seasons 1 to 7, or 4 from season 8 onwards) must stop performing, as the stage turns red, and is eliminated. Since season three (2008), large audiences seated in the auditorium by the separate company On Camera Audiences have become factors in the judging process, as their reactions of clapping or booing may sway or influence a judge 's vote. Judges are allowed to express constructive criticism after the 90 seconds of the act are over. If an act receives three or more "yes '' votes, it advances to the next round of competition. However, in the majority of the seasons, several acts do not perform in the second round if they do not pass with 3 or more "yes '' votes. Many acts that move on may be cut by producers and may forfeit due to the limited slots available for the second performance.
From Season 2 (2007) to Season 8 (2013), Las Vegas Week was an intermediary televised taped round between the auditions and the live shows. This round took place in a notable venue on the Las Vegas Strip. Names for this round in previous seasons have included "Las Vegas Callbacks '' (Seasons 2 - 3), "Vegas Verdicts '' (Season 4) and "Las Vegas Week '' (Seasons 5 - 8). The Las Vegas round generally consisted of acts performing a second time for the judges (except for Season 4 in 2009, in which only 3 acts needed to perform again), who then picked select acts to move on to the live shows. An act eliminated in Las Vegas Week was not completely excluded from the live show competition, as several seasons have featured contestants being brought back from this round as "wildcard '' acts.
Prior to the inclusion of this round, the judges would have a list containing a number of acts which advanced past the auditions during each live show. The judges would then pick ten acts from that group each week, leaving several acts without the chance to perform.
In Season 9 (2014), acts went to New York instead of Las Vegas to determine a place in the live shows, and the round, for that season, was renamed Judgement Week.
In Season 10, the Judge Cuts format was introduced. One guest judge per round joins the judging panel and helps with decisions. Each guest judge is given one golden buzzer opportunity to send an act straight to the live shows, but does not receive a red X. Twenty acts are shown each week, and seven advance, including one from the guest judge 's golden buzzer. Any act that receives all four red buzzers is immediately eliminated from the competition. In season 10 (2015), the Judge Cuts were held at a soundstage in Brooklyn. In season 11 (2016), they were held at CBS Studio Center. From season 12 onwards (2017 - present), the Judge Cuts have been held on a sound stage at the NBC Universal back lot, making the audience smaller than at both the Judges ' Auditions in the Pasadena Civic Auditorium and the live shows held in the Dolby Theatre.
During the live shows, a group of acts ranging from only a Top 20 (Season 2), to as many as 60 (Season 8), compete for viewers ' and judges ' votes. In the first season, the judges could not end an act 's performance, but could either "check '' or "X '' the performance during their critique. Since Season 2 (2007), judges have been able to end an act 's performance early, and the "check '' was removed. Generally, acts each perform first in a live round consisting of a series of quarterfinals. In seasons with YouTube auditions, the round of live judging of YouTube finalists takes place then, as part of these quarterfinals. Then there may be additional shows for wild card acts -- acts that one or more of the judges select to be given one more chance for audience vote despite previous elimination. From these shows, the existing group is narrowed through votes by the public and / or the judges (depending on the season). Acts then move on to a semifinal round, and even further rounds (such as a "Top 8 '' or a "Top 10 '', depending on the season) through a series of weekly shows, which trim the number of acts down each time based on a public vote. In the majority of seasons, judges have had no vote from the semifinals. All these rounds culminate in a live final, which has consisted of anywhere from 4 to 10 acts throughout the seasons. The act with the most votes is declared the winner, given $1 million, and, since Season 3 (2008), a chance to headline a show on the Las Vegas Strip.
During Season 1 (2006), the studio performance shows were filmed at Paramount Studios in Hollywood. In seasons 2 - 3 (2007 - 2008), the studio performance shows were held at CBS Studio Center in Studio City, CA. From Season 4 through 6 (2009 -- 2011), the live shows were filmed at Stage 36 of CBS Television City in Los Angeles. In Season 7 (2012), the live shows were held at the New Jersey Performing Arts Center in Newark. From Seasons 8 through 10 (2013 -- 2015), live performances were held at Radio City Music Hall in New York. From Season 11 (2016) onwards the live shows are being held at the Dolby Theatre.
For Seasons 5 through 8 (2010 -- 13), the show also made the winner the headline act of a 25 - city national tour with the runners - up, following the final show. For season nine (2014), however, there was no tour; two shows were held in Las Vegas for the winner and some of the runner - up acts. (See America 's Got Talent Live, below.)
From Season 5 (2010) to Season 7 (2012), acts who did not attend live auditions could instead submit a taped audition online via YouTube. Acts from the online auditions were then selected to compete in front of the judges and a live audience during the "live shows '' part of the season, prior to the semi-finals. The most successful act of the YouTube auditions was Jackie Evancho, who went on to place second in Season 5 and after the season ended, became the youngest solo artist ever to go platinum in the U.S.
Before the inclusion of this round, the show had a separate audition episode in Seasons 3 and 4 (2008 -- 2009) for contestants who posted videos on MySpace.
Introduced in season nine, the "Golden Buzzer '' is located at the center of the judges ' desk and may be used once per season by each judge. In season 9, a judge could press the golden buzzer to save an act from elimination, regardless of the number of X 's earned from the other judges. From season 10 onward, any act that receives a golden buzzer advances directly to the live shows; in season 11, the host was also given the power to use the golden buzzer. The golden buzzer is also used in the Judge Cuts format.
In May 2006, NBC announced the new show. The audition tour took place in June. Auditions were held in the following locations: Los Angeles, New York, and Chicago. Some early ads for the show implied that the winning act would also headline a show at a casino, possibly in Las Vegas; however, this was replaced with a million dollars due to concerns about minors performing in Las Vegas, should one become the champion. More than 12 million viewers watched the series premiere (more than American Idol drew for its premiere in 2002). The two - hour broadcast was the night 's most - watched program on U.S. television and the highest - rated among viewers aged 18 to 49 (the prime - time audience that matters most to advertisers), Nielsen Media Research reported.
On the season finale, there was an unaired segment that was scheduled to appear after Aly & AJ. The segment featured Tom Green dressing in a parrot costume and squawking with a live parrot to communicate telepathically. Green then proceeded to fly up above the audience, shooting confetti streamers out of his costume onto the crowd below.
In season one, the show was hosted by Regis Philbin and judged by actor David Hasselhoff, singer Brandy Norwood, and journalist Piers Morgan.
The winner of the season was 11 - year - old singer Bianca Ryan, and the runners - up were clogging group All That and musical group The Millers.
After initially announcing in June 2006 that season two would premiere in January 2007 and would air at 8 pm on Sunday nights, with no separate results show, the network changed that, pushing the show back to the summer, where the first season had enjoyed great success. This move kept the show out of direct competition with American Idol, which had a similar premise and was more popular.
In AGT 's place, another reality - based talent show, Grease: You 're The One That I Want, began airing on Sunday nights in the same time slot on NBC beginning in January. In March, NBC announced that Philbin would not return as host of the show, and that Jerry Springer would succeed him as host, with Sharon Osbourne (formerly a judge on Cowell 's UK show The X Factor) succeeding Brandy Norwood as a judge.
The season finale was shown Tuesday, August 21, with the winner being Terry Fator, a singing impressionist ventriloquist. The runner - up was singer Cas Haley.
Season three premiered on June 17, 2008. Auditions took place in Charlotte, Nashville, Orlando, New York, Dallas, Los Angeles, Atlanta, and Chicago from January to April. A televised MySpace audition also took place.
Season three differed from the previous two in many ways. Auditions were held in well - known theaters across the nation, and a new title card was introduced, featuring the American flag as background. The X 's matched the ones on Britain 's Got Talent as did the judges ' table. Like the previous season, the Las Vegas callbacks continued, but there were forty acts selected to compete in the live rounds, instead of twenty. This season also contained several results episodes, but not on a regular basis. The show took a hiatus for two - and - a-half weeks for the 2008 Summer Olympics, but returned with the live rounds on August 26.
Neal E. Boyd, an opera singer, was named the winner on October 1. Eli Mattson, a singer and pianist, was runner - up.
Season four premiered on Tuesday, June 23, 2009. It was the first to be broadcast in high definition. Auditions for this season were held in more than nine major cities including New York, Los Angeles, Chicago, Washington, D.C., Atlanta, Miami, Tacoma, Boston, and Houston. Los Angeles auditions kicked off the January 29 -- 31 tour at the Los Angeles Convention Center, followed by the February 7 -- 8 Atlanta auditions. New York and Miami auditions were held during March. Tacoma auditions were held April 25 and 26. In addition to live auditions and the ability to send in a home audition tape, season four offered the opportunity for acts to upload their video direct to NBC.com/agt with their registration. This year 's host was Nick Cannon. Jerry Springer said that he could not return as host due to other commitments.
The audition process in season four was the same as the previous season, but the ' Las Vegas Callbacks ' was renamed ' Vegas Verdicts '. This was the first season since season one where results episodes lasted one hour on a regular basis. The title card this year featured bands of the American flag and stars waving around the America 's Got Talent logo.
On September 16, country music singer Kevin Skinner was named the season 's winner. The grand prize was $1 million and a 10 - week headline show at the Planet Hollywood Resort and Casino on the Las Vegas Strip. The runner - up was Bárbara Padilla, an opera singer.
For season five, the network had considered moving the show to the fall, after rival series So You Think You Can Dance transferred from the summer to fall season in 2009. NBC ultimately decided to keep Talent a summer show.
Open auditions were held in the winter to early spring of 2010 in Chicago, Dallas, Los Angeles, New York, Orlando, and Portland (Oregon). Non-televised producers ' auditions were also held in Atlanta and Philadelphia. For the first time, online auditions were also held via YouTube.
David Hasselhoff left to host a new television show and was replaced by comedian and game show host Howie Mandel. This made Piers Morgan the only original judge left in the show. The show premiered Tuesday, June 1, 2010, at 8 pm ET. Afterward, Talent resumed the same time slot as the previous season.
On September 15, singer Michael Grimm was named the winner. He won a $1 million prize and a chance to perform at the Caesars Palace Casino and Resort on the Las Vegas Strip, as well as headline the 25 - city America 's Got Talent Live Tour along with runner - up Jackie Evancho, Fighting Gravity, Prince Poppycock, and the other top ten finalists.
Season six premiered on Tuesday, May 31, 2011, with a two - hour special. Piers Morgan and Sharon Osbourne continued as judges after taking jobs on Piers Morgan Tonight and The Talk, respectively. On The Tonight Show with Jay Leno on July 27, 2010, Morgan officially stated that he had signed a three - year contract to stay on Talent.
The show held televised auditions in Los Angeles, New York, Minneapolis, Atlanta, Seattle, and Houston. Non-televised producers ' auditions were also held in Denver and Chicago. Previews of auditions were shown during NBC 's The Voice premiere on April 26. Online auditions via YouTube were also held for the second time in the show 's run, beginning on May 4. Finalists for this audition circuit competed live on August 9.
On Wednesday, September 14, Landau Eugene Murphy, Jr., a Frank Sinatra - style singer, was named the winner. Dance group Silhouettes was runner - up.
Season seven premiered on May 14, 2012. The first round of auditions, which are judged by producers, were held in New York, Washington, D.C., Tampa, Charlotte, Austin, Anaheim, St. Louis, and San Francisco from October 2011 to February 2012. The show began its live theater performances at the New Jersey Performing Arts Center in Newark on February 27.
Piers Morgan did not return as a judge for season seven, due to his work hosting CNN 's Piers Morgan Tonight, and he was replaced by Howard Stern. Since Stern hosts his SiriusXM radio show in New York City, the live rounds of the show were moved to nearby Newark, New Jersey. In December 2011, Simon Cowell, the show 's executive producer, announced that the show would be receiving a "top - to - bottom makeover '', confirming that there would be new graphics, lighting, theme music, show intro, logo, and a larger live audience at the New Jersey Performing Arts Center in Newark. On July 2, at the first live performance show of the season, their new location and stage were unveiled in a two - and - a-half - hour live special. A new set was also unveiled with a revised judges ' desk and a refreshed design of the "X ''.
On August 6, Sharon Osbourne announced that she would leave America 's Got Talent after the current season, in response to allegations that her son Jack Osbourne was discriminated against by the producers of the upcoming NBC program Stars Earn Stripes.
On September 13, Olate Dogs were announced the winner of the season, becoming the show 's first completely non-singing act to win the competition and also the first non-solo act to win. Comedian Tom Cotter finished as the runner - up.
Season eight of AGT premiered on Tuesday, June 4, 2013. The new season was announced in a promotional video shown during a commercial break for season seven 's second live show. Sharon Osbourne initially stated that she would not return for the season, but later said that she was staying with the show "for now. '' On August 6, 2012, after a feud with NBC, Osbourne confirmed that she would be leaving the show.
On February 20, 2013, it was announced that one of the Spice Girls members, Mel B (Melanie Brown), would be joining the show as the new fourth judge. Entertainment Weekly also reported at the same time that NBC was looking at a possible fourth judge to be added. On March 3, it was announced that supermodel Heidi Klum would replace Sharon Osbourne as the third judge.
An Audition Cities poll for the season was announced on July 11, 2012. The first batch of Audition Cities were announced as Los Angeles, Seattle, Portland (Oregon), New Orleans, Birmingham, Memphis, Nashville, Savannah, Raleigh, Norfolk, San Antonio, New York, Columbus (Ohio), and Chicago. This season, the auditions traveled to more cities than ever before. America 's Got Talent moved its live shows to Radio City Music Hall in New York for season eight. Auditions in front of the judges and an audience began taping on March 4. The show traveled to New Orleans, New York, Chicago, Los Angeles, and San Antonio.
On September 18, 2013, martial arts dancer / mime Kenichi Ebina was announced the winner of the season, the first dance act to win the competition. Stand - up comedian Taylor Williamson was the runner - up.
Season nine premiered on Tuesday, May 27, 2014, at 8 pm ET. The producers ' auditions began on October 26, 2013, in Miami. Other audition sites included Atlanta, Baltimore, Denver, Houston, Indianapolis, Los Angeles, and New York. Contestants could also submit a video of their audition online. Auditions in front of the judges were held February 20 -- 22 at the New Jersey Performing Arts Center in Newark, which also hosted the live shows during season seven. Judges ' auditions were held in New York City at Madison Square Garden from April 3 to 6 and in Los Angeles at the Dolby Theatre from April 21 to 26.
The live shows returned to Radio City Music Hall on July 29. There was also a new twist in the show: "Judgment Week '' was held in New York City instead of Las Vegas. Judgment Week was originally intended to be held in front of a live studio audience, but after three acts performed, the producers scrapped the live audience concept. This season also added a "Golden Buzzer, '' which had been unveiled on that same year 's Britain 's Got Talent. Each judge can press the buzzer only once per season to save an act, typically when the judges ' votes are tied.
For this season, contestants were invited to submit a video of their performance to The Today Show website throughout June, and the top three entrants performed their acts on The Today Show on July 23, 2014. The performer with the most votes, Cornell Bhangra, filled the 48th spot in the quarterfinals.
On September 17, magician Mat Franco was announced the winner of the season, the first magic act to win the competition. Singer Emily West was the runner - up.
Season ten premiered on May 26, 2015. Producer auditions began on November 2, 2014, in Tampa. Other audition sites included Nashville, Richmond (Virginia), New York, Chicago, St. Louis, San Antonio, Albuquerque, San Francisco, Seattle, Boise, Las Vegas, and Los Angeles. Online submissions were also accepted.
Howard Stern suggested on his radio show on October 1, 2014, that he might not return, but announced on December 8 that he would return for the upcoming season. Nick Cannon returned for his seventh season as host. On February 9, 2015, Howie Mandel said he would return for season ten, and Mel B announced the next day that she would be returning as well. It was revealed on February 11 that Heidi Klum would also be returning.
It was announced on December 4, 2014, that Cris Judd would be named as a dance scout. He previously worked on the show as a choreographer behind the scenes, and on the New Zealand version of Got Talent as a judge.
Auditions in front of the judges began on March 2, 2015, at the New Jersey Performing Arts Center. They continued at the Manhattan Center in New York City and the Dolby Theatre in Los Angeles. A special "extreme '' audition session was held outside at the Fairplex in Pomona, California, where danger acts performed outside for the judges, who were seated at an outdoor stage.
During NBC 's summer press tour, it was announced that America 's Got Talent would make its "Golden Buzzer '' work more like the one on Britain 's Got Talent, with the contestant who receives the buzzer sent directly to the live shows. An official trailer for the season was released, which showed that Dunkin Donuts was the show 's official sponsor for the season, with its cups prominently placed on the judges ' desk. Dunkin replaced Snapple, which had sponsored the show since season seven.
On June 24, Howard Stern announced on The Howard Stern Show that season ten would be his last season as judge. Stern said, "In all seriousness, I 've told you, I 'm just too f * cking busy... something 's got to give... NBC 's already asked me what my intentions are for next year, whether or not I 'd come back, I kind of have told them I think this is my last season. Not I think, this is my last season ''.
On September 16, Paul Zerdin was announced the winner of the season, making him the second ventriloquist to win. Comedian Drew Lynch was runner - up, and magician mentalist Oz Pearlman finished in third place.
America 's Got Talent was renewed for an eleventh season on September 1, 2015. The season held preliminary open call auditions in Detroit, New York, Phoenix, Salt Lake City, Las Vegas, San Jose, San Diego, Kansas City, Los Angeles, Atlanta, Orlando, and Dallas. As in years past, hopeful contestants could also submit auditions online.
On October 22, 2015, it was announced that creator Simon Cowell would replace Howard Stern as a judge for season 11. Mel B, Heidi Klum and Howie Mandel all returned as judges, with Nick Cannon returning as host. The live shows moved from New York back to Los Angeles, due to Stern 's departure, at the Dolby Theatre.
Auditions in front of the judges began on March 3, 2016 at the Pasadena Civic Auditorium in Pasadena, California. The season premiered on May 31, 2016.
On September 14, 12 - year - old singer - songwriter and ukulele player Grace VanderWaal was announced as the second female and second child to win America 's Got Talent (Bianca Ryan, age 11, was first). Magician mentalists The Clairvoyants were runners - up, and magician Jon Dorenbos placed third.
On August 2, 2016, it was announced that host Nick Cannon and all four judges would be returning for season 12. Later that year, on October 4, Simon Cowell signed a contract to remain as a judge through to 2019 (Season 14).
On February 13, 2017, Cannon announced he would not return as host for the twelfth season, citing creative differences between him and executives at NBC. The resignation came in the wake of news that the network considered firing Cannon after he made disparaging remarks about NBC in his Showtime comedy special Stand Up, Do n't Shoot. NBC selected Tyra Banks as the new host for season 12, which premiered on Tuesday, May 30, 2017.
On September 20, Darci Lynne Farmer won the twelfth season, becoming the third ventriloquist, third child act and the third female act to win the competition (second year in a row after VanderWaal 's win in 2016). Child singer Angelica Hale was announced as the runner - up, and Ukrainian dance act Light Balance finished in third place. Deaf musician Mandy Harvey and dog act Sara & Hero rounded out the top five.
On February 21, 2018, it was announced that judges Simon Cowell, Mel B, Heidi Klum and Howie Mandel along with Tyra Banks would all be returning. The season premiered on May 29, 2018.
America 's Got Talent Live is a show on the Las Vegas Strip that features the winner of each season of America 's Got Talent as the main performance.
In 2009, America 's Got Talent Live ran on the Las Vegas Strip, appearing Wednesday through Sunday at the Planet Hollywood Resort and Casino in a limited ten - week run from October through January, featuring winner Kevin Skinner, runner - up Barbara Padilla and fourth - place finisher The Texas Tenors. It featured the final ten acts which made it to the season four (2009) finale. Jerry Springer emceed, commuting weekly between Las Vegas and the Stamford, Connecticut, tapings of his self - named show.
In 2010, on the first live show of season five, the winner headlined America 's Got Talent Live from Caesars Palace Casino and Resort on the Las Vegas Strip, which was part of a 25 - city tour that featured the season 's finalists. Jerry Springer returned as both host of the tour and the headliner of the show.
In 2012, the tour returned, featuring winners Olate Dogs, Spencer Horsman, Joe Castillo, Lightwire Theater, David Garibaldi and his CMYK 's, Jarrett and Raja, Tom Cotter, and other fan favorites.
In 2013, after the success of the 2012 tour, another tour was scheduled, featuring season eight 's winner, Kenichi Ebina, and finalists Collins Key, Jimmy Rose, Taylor Williamson, Cami Bradley, The KriStef Brothers, and Tone the Chiefrocca. Tone hosted the tour.
In 2014, America 's Got Talent Live announced that performances in Las Vegas on September 26 and 27 would feature Taylor Williamson, the season eight (2013) runner - up, and the top finalists for season nine: Mat Franco, Emily West, Quintavious Johnson, AcroArmy, Emil and Dariel, Miguel Dakota, and Sons of Serendip.
In 2015, no tour was held. Instead, three shows were given at the Planet Hollywood Resort in Las Vegas featuring winner Paul Zerdin, runner - up Drew Lynch, and fan favorite Piff the Magic Dragon.
In 2016, four shows were given at the Planet Hollywood Resort in Las Vegas. They featured the top two finalists for season 11, Grace VanderWaal and The Clairvoyants, as well as finalist Tape Face.
In 2017, four shows were given at the Planet Hollywood Resort in Las Vegas. They featured winner Darci Lynne, runner - up Angelica Hale, third - placed Light Balance, and finalist Preacher Lawson.
NBC broadcast the two - hour America 's Got Talent Holiday Spectacular on December 19, 2016, hosted by Cannon with performances by Grace VanderWaal, Jackie Evancho, Andra Day, Penn & Teller, Pentatonix, Terry Fator, Mat Franco, Piff the Magic Dragon, Olate Dogs, Professor Splash, Jon Dorenbos and others, and featuring the Season 11 judges, including Klum, who sang a duet with Season 11 finalist Sal Valentinetti. The special drew 9.5 million viewers.
Since the show began, its ratings have been very high, ranging from 9 million viewers to as many as 16 million viewers, generally averaging around 11 million viewers. The show has also ranked high in the 18 -- 49 demographic, usually rating anywhere from as low as 1.6 to as high as 4.6 throughout its run. Audition shows and performance shows rate higher on average than results shows.
Although the show 's ratings have been high, the network usually limits the show 's run to before the official start of the next television season in the third week of September, with some reductions or expansions in Olympic years; finale ratings are usually lower due to returning programming on other networks.
The highest rated season in overall viewers to date is season four (2009). The most - watched episode has been the finale of season five (2010), with 16.41 million viewers. The series premiere and an episode featuring the first part of Las Vegas Week in season six (2011) are tied for the highest rating among adults 18 -- 49, both with a 4.6 rating.
Sales numbers and rankings are U.S. sales only.
Many acts which have competed on America 's Got Talent but were ultimately eliminated before the final round have either previously competed on or gone on to compete in a number of other reality shows, most notably American Idol and America 's Best Dance Crew.
The following America 's Got Talent (AGT) contestants also appeared on American Idol (AI):
The following America 's Got Talent (AGT) contestants also appeared on America 's Best Dance Crew (ABDC):
The following America 's Got Talent (AGT) contestants also appeared on these other shows:
In Indonesia, the eleventh season has been broadcast by NET. since October 22, 2016, airing every Saturday and Sunday at 10 pm WIB. Since Monday, October 31, the show has also aired every Monday to Friday at 5 pm WIB, replacing the recently concluded TV drama, the second season of Kesempurnaan Cinta, which ended on Friday, October 28, 2016.
In the United Kingdom, America 's Got Talent is shown by TruTV, with simulcasts on the Made Television network; TruTV has carried the show from the tenth season onward.
The thirteenth season of the show will air on AXN Asia, together with the twelfth series of Britain 's Got Talent.
|
what is the term that is used to describe a person’s overall sense of worth and well-being | Self - image - wikipedia
Self - image is the mental picture, generally of a kind that is quite resistant to change, that depicts not only details that are potentially available to objective investigation by others (height, weight, hair color, etc.), but also items that have been learned by that person about themself, either from personal experiences or by internalizing the judgments of others. A simple definition of a person 's self - image is their answer to the question "What do you believe people think about you? ''.
Self - image may consist of three types, which may or may not be an accurate representation of the person. All, some or none of them may be true.
A more technical term for self - image that is commonly used by social and cognitive psychologists is self - schema. Like any schema, self - schemas store information and influence the way we think and remember. For example, research indicates that information which refers to the self is preferentially encoded and recalled in memory tests, a phenomenon known as "self - referential encoding ''. Self - schemas are also considered the traits people use to define themselves; they draw information about the self into a coherent scheme.
Poor self - image may be the result of accumulated criticisms that the person collected as a child, which have damaged their view of themselves. Children in particular are vulnerable to accepting negative judgments from authority figures because they have yet to develop competency in evaluating such reports. Adolescents are also especially prone to poor body image issues. Individuals who already exhibit a low sense of self - worth may be vulnerable to developing social disorders.
Negative self - images can arise from a variety of factors. A prominent factor, however, is personality type. Perfectionists, high achievers, and those with "type A '' personalities seem to be prone to having negative self - images. This is because such people constantly set the standard for success high above a reasonable, attainable level. Thus, they are constantly disappointed in this "failure. ''
Another factor that contributes to a negative self - image is the beauty values of the society in which a person lives. In the American society, a popular beauty ideal is thinness. Oftentimes, girls feel that they do not measure up to society 's "thin '' standards, which leads to them having a negative self - image.
When people are in the position of evaluating others, self - image maintenance processes can lead to a more negative evaluation depending on the self - image of the evaluator. That is to say stereotyping and prejudice may be the way individuals maintain their self - image. When individuals evaluate a member of a stereotyped group, they are less likely to evaluate that person negatively if their self - images had been bolstered through a self - affirmation procedure, and they are more likely to evaluate that person stereotypically if their self - images have been threatened by negative feedback. Individuals may restore their self - esteem by derogating the member of a stereotyped group.
Fein and Spencer (1997) conducted a study on Self - image Maintenance and Discriminatory Behavior. This study showed evidence that increased prejudice can result from a person 's need to redeem a threatened positive perception of the self. The aim of the study was to test whether a particular threat to the self would instigate increased stereotyping and lead to actual discriminatory behavior or tendencies towards a member of a "negatively '' stereotyped group. The study began when Fein and Spencer gave participants an ostensible test of intelligence. Some of them received negative feedback, and others, positive and supportive feedback. In the second half of the experiment, the participants were asked to evaluate another person who either belonged to a negatively stereotyped group, or one who did not. The results of the experiment proved that the participants who had previously received unfavorable comments on their test, evaluated the target of the negatively stereotyped group in a more antagonistic or opposing way, than the participants who were given excellent reports on their intelligence test. They concluded that the negative feedback on the test threatened the participants ' self - image and they evaluated the target in a more negative manner, all in efforts to restore their own self - esteem.
A subsequent study extended the work of Fein and Spencer, with avoidance behavior as the principal behavior examined. In this study, Macrae et al. (2004) found that participants for whom a negative stereotype of "skinheads '' was salient physically placed themselves further from a skinhead target than those for whom the stereotype was less apparent. Greater salience of a negative stereotype therefore led participants to show more stereotype - consistent behavior towards the target.
Residual self - image is the concept that individuals tend to think of themselves as projecting a certain physical appearance, or certain position of social entitlement, or lack thereof. The term was used at least as early as 1968, but was popularized in fiction by the Matrix series, where persons who existed in a digitally created world would subconsciously maintain the physical appearance that they had become accustomed to projecting.
Victims of abuse and manipulation often get trapped into a self - image of victimisation. The psychological profile of victimisation includes a pervasive sense of helplessness, passivity, loss of control, pessimism, negative thinking, strong feelings of guilt, shame, self - blame and depression. This way of thinking can lead to hopelessness and despair.
Self - image disparity was found to be positively related to chronological age (CA) and intelligence, two factors thought to increase concomitantly with maturity, along with capacity for guilt and ability for cognitive differentiation. However, males had larger self - image disparities than females, Caucasians had larger disparities and higher ideal self - images than Blacks, and socioeconomic status (SES) affected self - images differentially for the 2nd and 5th graders.
A child 's self - awareness of who they are differentiates into three categories around the age of five: their social self, academic persona, and physical attributes. Several ways to strengthen a child 's self - image include communication, reassurance, support of hobbies, and finding good role models.
When does a child become aware that the image in a mirror is their own? Research was done on 88 children between 3 and 24 months, whose behaviors were observed before a mirror. The results indicated that children 's awareness of self - image follows three major age - related sequences.
Regular practice of endurance exercise was related to a more favourable self - image. There was a strong association between participation in sports and the type of personality that tends to be resistant to drug and alcohol addiction. Physical exercise was further significantly related to scores for physical and psychological well - being. Adolescents who engaged regularly in physical activity were characterised by lower anxiety - depression scores, and displayed much less social behavioural inhibition than their less active counterparts.
It is likely that discussion of recreational or exercise involvement may provide a useful point of entry for facilitating dialogue among adolescents about concerns relating to body image and self - esteem. In terms of psychotherapeutic applications, physical activity has many additional rewards for adolescents. It is probable that by promoting physical fitness, increased physical performance, lessening body mass and promoting a more favourable body shape and structure, exercise will provide more positive social feedback and recognition from peer groups, and this will subsequently lead to improvement in an individual 's self - image.
Does self - image threatening feedback make perceivers more likely to activate stereotypes when confronted by members of a minority group? Participants in Study 1 saw an Asian American or European American woman for several minutes, and participants in Studies 2 and 3 were exposed to drawings of an African American or European American male face for fractions of a second. These experiments found no evidence of automatic stereotype activation when perceivers were cognitively busy and when they had not received negative feedback. When perceivers had received negative feedback, however, evidence of stereotype activation emerged even when perceivers were cognitively busy.
A magazine survey that included items about body image, self - image, and sexual behaviors was completed by 3,627 women. The study found that overall self - image and body image are significant predictors of sexual activity. Women more satisfied with body image reported more sexual activity, orgasm, and initiating sex, greater comfort undressing in front of their partner, having sex with the lights on, trying new sexual behaviors (i.e. anal), and pleasing their partner sexually than those dissatisfied. Positive body image was inversely related to self - consciousness and importance of physical attractiveness, and positively related to relationships with others and overall satisfaction. Body image was predictive only of one 's comfort undressing in front of partner and having sex with lights on. Overall satisfaction was predictive of frequency of sex, orgasm, and initiating sex, trying new sexual behaviors, and confidence in giving partner sexual pleasure.
One hundred and ten heterosexual individuals (67 men; 43 women) responded to questions related to penis size and satisfaction. Men showed significant dissatisfaction with penile size, despite perceiving themselves to be of average size. Importantly, there were significant relationships between penile dissatisfaction and comfort with others seeing their penis, and with likelihood of seeking medical advice with regard to penile and / or sexual function. Given the negative consequences of low body satisfaction and the importance of early intervention in sexually related illnesses (e.g., testicular cancer), it is imperative that attention be paid to male body dissatisfaction.
According to a British survey, managers have a gap in self - awareness. Managers rate their own performance far higher than their team members do.
|
there were at least three independent centers of domestication in the new world | History of agriculture - wikipedia
The history of agriculture records the domestication of plants and animals and the development and dissemination of techniques for raising them productively. Agriculture began independently in different parts of the globe, and included a diverse range of taxa. At least eleven separate regions of the Old and New World were involved as independent centers of origin.
Wild grains were collected and eaten from at least 20,000 BC. Rye was cultivated by at least 11,050 BC in Mesopotamia. From around 9,500 BC, the eight Neolithic founder crops -- emmer wheat, einkorn wheat, hulled barley, peas, lentils, bitter vetch, chick peas, and flax -- were cultivated in the Levant. Rice was domesticated in China by 6,200 BC with earliest known cultivation from 5,700 BC, followed by mung, soy and azuki beans. Pigs were domesticated in Mesopotamia around 11,000 BC, followed by sheep between 11,000 and 9,000 BC. Cattle were domesticated from the wild aurochs in the areas of modern Turkey and Pakistan around 8,500 BC. Sugarcane and some root vegetables were domesticated in New Guinea around 7,000 BC. Sorghum was domesticated in the Sahel region of Africa by 5,000 BC. In the Andes of South America, the potato was domesticated between 8,000 and 5,000 BC, along with beans, coca, llamas, alpacas, and guinea pigs. Bananas were cultivated and hybridized in the same period in Papua New Guinea. In Mesoamerica, wild teosinte was domesticated to maize by 4,000 BC. Cotton was domesticated in Peru by 3,600 BC. Camels were domesticated late, perhaps around 3,000 BC.
The Bronze Age, from c. 3300 BC, witnessed the intensification of agriculture in civilizations such as Mesopotamian Sumer, ancient Egypt, the Indus Valley Civilisation of South Asia, ancient China, and ancient Greece. During the Iron Age and era of classical antiquity, the expansion of ancient Rome, both the Republic and then the Empire, throughout the ancient Mediterranean and Western Europe built upon existing systems of agriculture while also establishing the manorial system that would become a bedrock of medieval agriculture. In the Middle Ages, both in the Islamic world and in Europe, agriculture was transformed with improved techniques and the diffusion of crop plants, including the introduction of sugar, rice, cotton and fruit trees such as the orange to Europe by way of Al - Andalus. After the voyages of Christopher Columbus in 1492, the Columbian exchange brought New World crops such as maize, potatoes, sweet potatoes, and manioc to Europe, and Old World crops such as wheat, barley, rice, and turnips, and livestock including horses, cattle, sheep, and goats to the Americas.
Irrigation, crop rotation, and fertilizers were introduced soon after the Neolithic Revolution and developed much further in the past 200 years, starting with the British Agricultural Revolution. Since 1900, agriculture in the developed nations, and to a lesser extent in the developing world, has seen large rises in productivity as human labour has been replaced by mechanization, and assisted by synthetic fertilizers, pesticides, and selective breeding. The Haber - Bosch process allowed the synthesis of ammonium nitrate fertilizer on an industrial scale, greatly increasing crop yields. Modern agriculture has raised social, political, and environmental issues including water pollution, biofuels, genetically modified organisms, tariffs and farm subsidies. In response, organic farming developed in the twentieth century as an alternative to the use of synthetic pesticides.
Scholars have developed a number of hypotheses to explain the historical origins of agriculture. Studies of the transition from hunter - gatherer to agricultural societies indicate an antecedent period of intensification and increasing sedentism; examples are the Natufian culture in the Levant, and the Early Chinese Neolithic in China. Current models indicate that wild stands that had been harvested previously started to be planted, but were not immediately domesticated.
Localised climate change is the favoured explanation for the origins of agriculture in the Levant. When major climate change took place after the last ice age (c. 11,000 BC), much of the earth became subject to long dry seasons. These conditions favoured annual plants which die off in the long dry season, leaving a dormant seed or tuber. An abundance of readily storable wild grains and pulses enabled hunter - gatherers in some areas to form the first settled villages at this time.
Early people began altering communities of flora and fauna for their own benefit through means such as fire - stick farming and forest gardening very early. Exact dates are hard to determine, as people collected and ate seeds before domesticating them, and plant characteristics may have changed during this period without human selection. An example is the semi-tough rachis and larger seeds of cereals from just after the Younger Dryas (about 9,500 BC) in the early Holocene in the Levant region of the Fertile Crescent. Monophyletic characteristics were attained without any human intervention, implying that apparent domestication of the cereal rachis could have occurred quite naturally.
Agriculture began independently in different parts of the globe, and included a diverse range of taxa. At least 11 separate regions of the Old and New World were involved as independent centers of origin. Some of the earliest known domestications were of animals. Domestic pigs had multiple centres of origin in Eurasia, including Europe, East Asia and Southwest Asia, where wild boar were first domesticated about 10,500 years ago. Sheep were domesticated in Mesopotamia between 11,000 and 9,000 BC. Cattle were domesticated from the wild aurochs in the areas of modern Turkey and Pakistan around 8,500 BC. Camels were domesticated late, perhaps around 3,000 BC.
It was not until after 9,500 BC that the eight so - called founder crops of agriculture appear: first emmer and einkorn wheat, then hulled barley, peas, lentils, bitter vetch, chick peas and flax. These eight crops occur more or less simultaneously on Pre-Pottery Neolithic B (PPNB) sites in the Levant, although wheat was the first to be grown and harvested on a significant scale. At around the same time (9400 BC), parthenocarpic fig trees were domesticated.
Rye was cultivated in Mesopotamia by at least 11,050 BC. By 7,000 BC, the Sumerians systematized and scaled up sowing and harvesting in Mesopotamia 's fertile soil. By 8,000 BC, farming was entrenched on the banks of the River Nile. About this time, agriculture was developed independently in the Far East, probably in China, with rice rather than wheat as the primary crop. Maize was domesticated from the wild grass teosinte in West Mexico by 6,700 BC. The potato (8,000 BC), tomato, pepper (4,000 BC), squash (8,000 BC) and several varieties of bean (8,000 BC onwards) were domesticated in the New World.
Agriculture was independently developed on the island of New Guinea. Banana cultivation of Musa acuminata, including hybridization, dates back to 5,000 BC, and possibly to 8,000 BC, in Papua New Guinea.
Bees were kept for honey in the Middle East around 7,000 BC. Archaeological evidence from various sites on the Iberian peninsula suggests the domestication of plants and animals between 6,000 and 4,500 BC. Céide Fields in Ireland, consisting of extensive tracts of land enclosed by stone walls, date to 3,500 BC and are the oldest known field systems in the world. The horse was domesticated in the Pontic steppe around 4,000 BC. Cannabis was in use in China in Neolithic times and may have been domesticated there; it was used both as a fibre for ropemaking and as a medicine, and was in use in Ancient Egypt by about 2,350 BC.
In China, millet and rice were domesticated by 6,200 BC; the earliest known cultivation of rice is from 5,700 BC. They were followed by mung, soy and azuki beans. In the Sahel region of Africa, local rice and sorghum were domesticated by 5,000 BC. Kola nut and coffee were domesticated in Africa. In New Guinea, ancient Papuan peoples began practicing agriculture around 7,000 BC, domesticating sugarcane and taro. In the Indus Valley from the eighth millennium BC onwards at Mehrgarh, 2 - row and 6 - row barley were cultivated, along with einkorn, emmer, and durum wheats, and dates. In the earliest levels of Mehrgarh, wild game such as gazelle, swamp deer, blackbuck, chital, wild ass, wild goat, wild sheep, boar, and nilgai were all hunted for food. These were successively replaced by domesticated sheep, goats, and humped zebu cattle by the fifth millennium BC, indicating the gradual transition from hunting and gathering to agriculture. Maize and squash were domesticated in Mesoamerica, the potato in South America, and the sunflower in the Eastern Woodlands of North America.
Sumerian farmers grew the cereals barley and wheat, starting to live in villages from about 8,000 BC. Given the low rainfall of the region, agriculture relied on the Tigris and Euphrates rivers. Irrigation canals leading from the rivers permitted the growth of cereals in large enough quantities to support cities. The first ploughs appear in pictographs from Uruk around 3,000 BC; seed - ploughs that funneled seed into the ploughed furrow appear on seals around 2300 BC. Vegetable crops included chickpeas, lentils, peas, beans, onions, garlic, lettuce, leeks and mustard. They grew fruits including dates, grapes, apples, melons, and figs. Alongside their farming, Sumerians also caught fish and hunted fowl and gazelle. The meat of sheep, goats, cows and poultry was eaten, mainly by the elite. Fish was preserved by drying, salting and smoking.
The civilization of Ancient Egypt was indebted to the Nile River and its dependable seasonal flooding. The river 's predictability and the fertile soil allowed the Egyptians to build an empire on the basis of great agricultural wealth. Egyptians were among the first peoples to practice agriculture on a large scale, starting in the pre-dynastic period from the end of the Paleolithic into the Neolithic, between around 10,000 and 4,000 BC. This was made possible with the development of basin irrigation. Their staple food crops were grains such as wheat and barley, alongside industrial crops such as flax and papyrus.
Wheat, barley, and jujube were domesticated in the Indian subcontinent by 9,000 BC, soon followed by sheep and goats. Barley and wheat cultivation -- along with the domestication of cattle, primarily sheep and goats -- followed in Mehrgarh culture by 8,000 -- 6,000 BC. This period also saw the first domestication of the elephant. Pastoral farming in India included threshing, planting crops in rows -- either of two or of six -- and storing grain in granaries. Cotton was cultivated by the 5th - 4th millennium BC. By the 5th millennium BC, agricultural communities became widespread in Kashmir. Irrigation was developed in the Indus Valley Civilization by around 4,500 BC. The size and prosperity of the Indus civilization grew as a result of this innovation, leading to more thoroughly planned settlements which used drainage and sewers. Archeological evidence of an animal - drawn plough dates back to 2,500 BC in the Indus Valley Civilization.
Records from the Warring States, Qin Dynasty, and Han Dynasty provide a picture of early Chinese agriculture from the 5th century BC to 2nd century AD which included a nationwide granary system and widespread use of sericulture. An important early Chinese book on agriculture is the Qimin Yaoshu of AD 535, written by Jia Sixie. Jia 's writing style was straightforward and lucid relative to the elaborate and allusive writing typical of the time. Jia 's book was also very long, with over one hundred thousand written Chinese characters, and it quoted many other Chinese books that were written previously, but no longer survive. The contents of Jia 's 6th century book include sections on land preparation, seeding, cultivation, orchard management, forestry, and animal husbandry. The book also includes peripherally related content covering trade and culinary uses for crops. The work and the style in which it was written proved influential on later Chinese agronomists, such as Wang Zhen and his groundbreaking Nong Shu of 1313.
For agricultural purposes, the Chinese had innovated the hydraulic - powered trip hammer by the 1st century BC. Although it found other purposes, its main function was to pound, decorticate, and polish grain, work that otherwise would have been done manually. The Chinese also began using the square - pallet chain pump by the 1st century AD, powered by a waterwheel or by oxen pulling on a system of mechanical wheels. Although the chain pump found use in public works of providing water for urban and palatial pipe systems, it was used largely to lift water from a lower to higher elevation in filling irrigation canals and channels for farmland. By the end of the Han dynasty in the late 2nd century, heavy ploughs had been developed with iron ploughshares and mouldboards. These would slowly spread west, revolutionizing farming in Northern Europe by the 10th century. (Glick, however, argues for a development of the Chinese plough as late as the 9th century, implying its spread east from similar designs known in Italy by the 7th century.)
Asian rice was domesticated 8,200 -- 13,500 years ago in China, with a single genetic origin from the wild rice Oryza rufipogon, in the Pearl River valley region of China. Rice cultivation then spread to South and Southeast Asia.
The major cereal crops of the ancient Mediterranean region were wheat, emmer, and barley, while common vegetables included peas, beans, fava, and olives, dairy products came mostly from sheep and goats, and meat, which was consumed on rare occasion for most people, usually consisted of pork, beef, and lamb. Agriculture in ancient Greece was hindered by the topography of mainland Greece that only allowed for roughly 10 % of the land to be cultivated properly, necessitating the specialized exportation of oil and wine and importation of grains from Thrace (centered in what is now Bulgaria) and the Greek colonies of southern Russia. During the Hellenistic period, the Ptolemaic Empire controlled Egypt, Cyprus, Phoenicia, and Cyrenaica, major grain - producing regions that mainland Greeks depended on for subsistence, while the Ptolemaic grain market also played a critical role in the rise of the Roman Republic. In the Seleucid Empire, Mesopotamia was a crucial area for the production of wheat, while nomadic animal husbandry was also practiced in other parts.
In the Greco - Roman world of Classical antiquity, Roman agriculture was built on techniques originally pioneered by the Sumerians, transmitted to them by subsequent cultures, with a specific emphasis on the cultivation of crops for trade and export. The Romans laid the groundwork for the manorial economic system, involving serfdom, which flourished in the Middle Ages. The farm sizes in Rome can be divided into three categories. Small farms were from 18 - 88 iugera (one iugerum is equal to about 0.65 acre). Medium - sized farms were from 80 - 500 iugera (singular iugerum). Large estates (called latifundia) were over 500 iugera. The Romans had four systems of farm management: direct work by owner and his family; slaves doing work under supervision of slave managers; tenant farming or sharecropping in which the owner and a tenant divide up a farm 's produce; and situations in which a farm was leased to a tenant.
In Mesoamerica, wild teosinte was transformed through human selection into the ancestor of modern maize, more than 6,000 years ago. It gradually spread across North America and was the major crop of Native Americans at the time of European exploration. Other Mesoamerican crops include hundreds of varieties of locally domesticated squash and beans, while cocoa, also domesticated in the region, was a major crop. The turkey, one of the most important meat birds, was probably domesticated in Mexico or the U.S. Southwest.
In Mesoamerica, the Aztecs were active farmers and had an agriculturally focused economy. The land around Lake Texcoco was fertile, but not large enough to produce the amount of food needed for the population of their expanding empire. The Aztecs developed irrigation systems, formed terraced hillsides, fertilized their soil, and developed chinampas or artificial islands, also known as "floating gardens ''. The Mayas, between 400 BC and 900 AD, used extensive canal and raised field systems to farm swampland on the Yucatán Peninsula.
In the Andes region of South America, with civilizations including the Inca, the major crop was the potato, domesticated approximately 7,000 -- 10,000 years ago. Coca, still a major crop to this day, was domesticated in the Andes, as were the peanut, tomato, tobacco, and pineapple. Cotton was domesticated in Peru by 3,600 BC. Animals were also domesticated, including llamas, alpacas, and guinea pigs.
The indigenous people of the Eastern U.S.A. domesticated numerous crops. Sunflowers, tobacco, varieties of squash and Chenopodium, as well as crops no longer grown, including marsh elder and little barley, were domesticated. Wild foods including wild rice and maple sugar were harvested. The domesticated strawberry is a hybrid of a Chilean and a North American species, developed by breeding in Europe and North America. Two major crops, pecans and Concord grapes, were utilized extensively in prehistoric times but do not appear to have been domesticated until the 19th century.
The indigenous people in what is now California and the Pacific Northwest practiced various forms of forest gardening and fire - stick farming in the forests, grasslands, mixed woodlands, and wetlands, ensuring that desired food and medicine plants continued to be available. The natives controlled fire on a regional scale to create a low - intensity fire ecology which prevented larger, catastrophic fires and sustained a low - density agriculture in loose rotation; a sort of "wild '' permaculture.
A system of companion planting called the Three Sisters was developed in North America. Three crops that complemented each other were planted together: winter squash, maize (corn), and climbing beans (typically tepary beans or common beans). The maize provides a structure for the beans to climb, eliminating the need for poles. The beans provide the nitrogen to the soil that the other plants use, and the squash spreads along the ground, blocking the sunlight, helping prevent the establishment of weeds. The squash leaves also act as a "living mulch ''.
From the time of British colonization of Australia in 1788, Indigenous Australians were characterised as nomadic hunter - gatherers who did not engage in agriculture, despite evidence to the contrary. In 1969, the archaeologist Rhys Jones proposed that Indigenous Australians engaged in systematic burning as a way of enhancing natural productivity, what has been termed fire - stick farming. In the 1970s and 1980s archaeological research in south west Victoria established that the Gunditjmara and other groups had developed sophisticated eel farming and fish trapping systems over a period of nearly 5,000 years. The archaeologist Harry Lourandos suggested in the 1980s that there was evidence of ' intensification ' in progress across Australia, a process that appeared to have continued through the preceding 5,000 years. These concepts led the historian Bill Gammage to argue that in effect the whole continent was a managed landscape.
In two regions of Australia, the central west coast and eastern central Australia, forms of early agriculture may have been practiced. People living in permanent settlements of over 200 residents sowed or planted on a large scale and stored the harvested food. The Nhanda and Amangu of the central west coast grew yams (Dioscorea hastifolia), while various groups in eastern central Australia (the Corners Region) planted and harvested bush onions (yaua - Cyperus bulbosus), native millet (cooly, tindil -- Panicum decompositum) and a sporocarp, ngardu (Marsilea drummondii).
From 100 BC to 1600 AD, world population continued to grow along with land use, as evidenced by the rapid increase in methane emissions from cattle and the cultivation of rice.
From the 8th century, the medieval Islamic world underwent a transformation in agricultural practice, described by the historian Andrew Watson as the Arab agricultural revolution. This transformation was driven by a number of factors including the diffusion of many crops and plants along Muslim trade routes, the spread of more advanced farming techniques, and an agricultural - economic system which promoted increased yields and efficiency. The shift in agricultural practice changed the economy, population distribution, vegetation cover, agricultural production, population levels, urban growth, the distribution of the labour force, cooking, diet, and clothing across the Islamic world. Muslim traders covered much of the Old World, and trade enabled the diffusion of many crops, plants and farming techniques across the region, as well as the adaptation of crops, plants and techniques from beyond the Islamic world. This diffusion introduced major crops to Europe by way of Al - Andalus, along with the techniques for their cultivation and cuisine. Sugar cane, rice, and cotton were among the major crops transferred, along with citrus and other fruit trees, nut trees, vegetables such as aubergine, spinach and chard, and the use of spices such as cumin, coriander, nutmeg and cinnamon. Intensive irrigation, crop rotation, and agricultural manuals were widely adopted. Irrigation, partly based on Roman technology, made use of noria water wheels, water mills, dams and reservoirs.
The Middle Ages saw further improvements in agriculture. Monasteries spread throughout Europe and became important centers for the collection of knowledge related to agriculture and forestry. The manorial system allowed large landowners to control their land and its laborers, in the form of peasants or serfs. During the medieval period, the Arab world was critical in the exchange of crops and technology between the European, Asian and African continents. Besides transporting numerous crops, they introduced the concept of summer irrigation to Europe and developed the beginnings of the plantation system of sugarcane growing through the use of slaves for intensive cultivation.
By AD 900, developments in iron smelting allowed for increased production in Europe, leading to developments in the production of agricultural implements such as ploughs, hand tools and horse shoes. The carruca heavy plough improved on the earlier scratch plough, with the adoption of the Chinese mouldboard plough to turn over the heavy, wet soils of northern Europe. This led to the clearing of northern European forests and an increase in agricultural production, which in turn led to an increase in population. At the same time, some farmers in Europe moved from a two field crop rotation to a three field crop rotation in which one field of three was left fallow every year. This resulted in increased productivity and nutrition, as the change in rotations permitted nitrogen - fixing legumes such as peas, lentils and beans. Improved horse harnesses and the whippletree further improved cultivation.
Watermills were introduced by the Romans, but were improved throughout the Middle Ages, along with windmills, and used to grind grains into flour, to cut wood and to process flax and wool.
Crops included wheat, rye, barley and oats. Peas, beans, and vetches became common from the 13th century onward as a fodder crop for animals and also for their nitrogen - fixation fertilizing properties. Crop yields peaked in the 13th century, and stayed more or less steady until the 18th century. Though the limitations of medieval farming were once thought to have provided a ceiling for the population growth in the Middle Ages, recent studies have shown that the technology of medieval agriculture was always sufficient for the needs of the people under normal circumstances, and that it was only during exceptionally harsh times, such as the terrible weather of 1315 -- 17, that the needs of the population could not be met.
After 1492, a global exchange of previously local crops and livestock breeds occurred. Maize, potatoes, sweet potatoes and manioc were the key crops that spread from the New World to the Old, while varieties of wheat, barley, rice and turnips traveled from the Old World to the New. There had been few livestock species in the New World, with horses, cattle, sheep and goats being completely unknown before their arrival with Old World settlers. Crops moving in both directions across the Atlantic Ocean caused population growth around the world and a lasting effect on many cultures. Maize and cassava were introduced from Brazil into Africa by Portuguese traders in the 16th century, becoming staple foods, replacing native African crops.
After its introduction from South America to Spain in the late 1500s, the potato became a staple crop throughout Europe by the late 1700s. The potato allowed farmers to produce more food, and initially added variety to the European diet. The increased supply of food reduced disease, increased births and reduced mortality, causing a population boom throughout the British Empire, the US and Europe. The introduction of the potato also brought about the first intensive use of fertilizer, in the form of guano imported to Europe from Peru, and the first artificial pesticide, in the form of an arsenic compound used to fight Colorado potato beetles. Before the adoption of the potato as a major crop, the dependence on grain had caused repetitive regional and national famines when the crops failed, including 17 major famines in England between 1523 and 1623. The resulting dependence on the potato, however, caused the European Potato Failure, a disastrous crop failure from disease that resulted in widespread famine and the death of over one million people in Ireland alone.
Between the 16th century and the mid-19th century, Britain saw a large increase in agricultural productivity and net output. New agricultural practices like enclosure, mechanization, four - field crop rotation to maintain soil nutrients, and selective breeding enabled an unprecedented population growth to 5.7 million in 1750, freeing up a significant percentage of the workforce, and thereby helped drive the Industrial Revolution. The productivity of wheat went up from about 19 bushels per acre in 1720 to around 30 bushels by 1840, marking a major turning point in history.
Advice on more productive techniques for farming began to appear in England in the mid-17th century, from writers such as Samuel Hartlib, Walter Blith and others. The main problem in sustaining agriculture in one place for a long time was the depletion of nutrients, most importantly nitrogen levels, in the soil. To allow the soil to regenerate, productive land was often let fallow and in some places crop rotation was used. The Dutch four - field rotation system was popularised by the British agriculturist Charles Townshend in the 18th century. The system (wheat, turnips, barley and clover), opened up a fodder crop and grazing crop allowing livestock to be bred year - round. The use of clover was especially important as the legume roots replenished soil nitrates.
The mechanisation and rationalisation of agriculture was another important factor. Robert Bakewell and Thomas Coke introduced selective breeding, and from the mid 18th century initiated a process of inbreeding to maximise desirable traits, as in the New Leicester sheep. Machines were invented to improve the efficiency of various agricultural operations, such as Jethro Tull 's seed drill of 1701, which mechanised seeding at the correct depth and spacing, and Andrew Meikle 's threshing machine of 1784. Ploughs were steadily improved, from Joseph Foljambe 's Rotherham iron plough in 1730 to James Small 's improved metal "Scots Plough '' in 1763. In 1789 Ransomes, Sims & Jefferies was producing 86 plough models for different soils. Powered farm machinery began with Richard Trevithick 's stationary steam engine, used to drive a threshing machine, in 1812. Mechanisation spread to other farm uses through the 19th century. The first petrol - driven tractor was built in America by John Froelich in 1892.
The scientific investigation of fertilization began at the Rothamsted Experimental Station in 1843 under John Bennet Lawes. He investigated the impact of inorganic and organic fertilizers on crop yield and founded one of the first artificial fertilizer manufacturing factories in 1842. Fertilizer, in the form of sodium nitrate deposits from Chile, was imported to Britain by John Thomas North, as was guano (bird droppings). The first commercial process for fertilizer production was the obtaining of phosphate from the dissolution of coprolites in sulphuric acid.
Dan Albone constructed the first commercially successful gasoline - powered general purpose tractor in 1901, and the 1923 International Harvester Farmall tractor marked a major point in the replacement of draft animals (particularly horses) with machines. Since that time, self - propelled mechanical harvesters (combines), planters, transplanters and other equipment have been developed, further revolutionizing agriculture. These inventions allowed farming tasks to be done with a speed and on a scale previously impossible, leading modern farms to output much greater volumes of high - quality produce per land unit.
The Haber - Bosch method for synthesizing ammonium nitrate represented a major breakthrough and allowed crop yields to overcome previous constraints. It was first patented by German chemist Fritz Haber. In 1910 Carl Bosch, while working for German chemical company BASF, successfully commercialized the process and secured further patents. In the years after World War II, the use of synthetic fertilizer increased rapidly, in sync with the increasing world population.
Collective farming was widely practiced in the Soviet Union, the Eastern Bloc countries, China, and Vietnam, starting in the 1930s in the Soviet Union; one result was the Soviet famine of 1932 -- 33.
In the past century agriculture has been characterized by increased productivity, the substitution of synthetic fertilizers and pesticides for labor, water pollution, and farm subsidies. Other applications of scientific research since 1950 in agriculture include gene manipulation, hydroponics, and the development of economically viable biofuels such as ethanol.
In recent years there has been a backlash against the external environmental effects of conventional agriculture, resulting in the organic movement. Famines continued to sweep the globe through the 20th century. Through the effects of climatic events, government policy, war and crop failure, millions of people died in each of at least ten famines between the 1920s and the 1990s.
The historical processes that have allowed agricultural crops to be cultivated and eaten well beyond their centers of origin continue in the present through globalization. On average, 68.7 % of a nation's food supplies and 69.3 % of its agricultural production are of crops with foreign origins.
The Green Revolution was a series of research, development, and technology transfer initiatives, between the 1940s and the late 1970s. It increased agriculture production around the world, especially from the late 1960s. The initiatives, led by Norman Borlaug and credited with saving over a billion people from starvation, involved the development of high - yielding varieties of cereal grains, expansion of irrigation infrastructure, modernization of management techniques, distribution of hybridized seeds, synthetic fertilizers, and pesticides to farmers.
Synthetic nitrogen, along with mined rock phosphate, pesticides and mechanization, have greatly increased crop yields in the early 20th century. Increased supply of grains has led to cheaper livestock as well. Further, global yield increases were experienced later in the 20th century when high - yield varieties of common staple grains such as rice, wheat, and corn were introduced as a part of the Green Revolution. The Green Revolution exported the technologies (including pesticides and synthetic nitrogen) of the developed world to the developing world. Thomas Malthus famously predicted that the Earth would not be able to support its growing population, but technologies such as the Green Revolution have allowed the world to produce a surplus of food.
Although the Green Revolution significantly increased rice yields in Asia, yield increases have not occurred in the past 15 -- 20 years. The genetic "yield potential '' has increased for wheat, but the yield potential for rice has not increased since 1966, and the yield potential for maize has "barely increased in 35 years ''. It takes only a decade or two for herbicide - resistant weeds to emerge, and insects become resistant to insecticides within about a decade, delayed somewhat by crop rotation.
For most of its history, agriculture has been organic, without synthetic fertilisers or pesticides, and without GMOs. With the advent of chemical agriculture, Rudolf Steiner called for farming without synthetic pesticides, and his Agriculture Course of 1924 laid the foundation for biodynamic agriculture. Lord Northbourne developed these ideas and presented his manifesto of organic farming in 1940. This became a worldwide movement, and organic farming is now practiced in most countries.
|
where do the electrons needed by photosystem two originate | Photosystem II - wikipedia
Photosystem II (or water - plastoquinone oxidoreductase) is the first protein complex in the light - dependent reactions of oxygenic photosynthesis. It is located in the thylakoid membrane of plants, algae, and cyanobacteria. Within the photosystem, enzymes capture photons of light to energize electrons that are then transferred through a variety of coenzymes and cofactors to reduce plastoquinone to plastoquinol. The energized electrons are replaced by oxidizing water to form hydrogen ions and molecular oxygen.
By replenishing lost electrons with electrons from the splitting of water, photosystem II provides the electrons for all of photosynthesis to occur. The hydrogen ions (protons) generated by the oxidation of water help to create a proton gradient that is used by ATP synthase to generate ATP. The energized electrons transferred to plastoquinone are ultimately used to reduce NADP⁺ to NADPH or are used in cyclic photophosphorylation.
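The origin of these electrons can be summarized by the water-oxidation half-reaction carried out at the oxygen-evolving complex; this is the standard textbook stoichiometry rather than a quotation from the article:

$$2\,\mathrm{H_2O} \;\longrightarrow\; \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^-$$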
The core of PSII consists of a pseudo-symmetric heterodimer of two homologous proteins D1 and D2. Unlike the reaction centers of all other photosystems which have a special pair of closely spaced chlorophyll molecules, the pigment that undergoes the initial photoinduced charge separation in PSII is a chlorophyll monomer. Because the positive charge is not shared across two molecules, the ionised pigment is highly oxidizing and can take part in the splitting of water.
Photosystem II (of cyanobacteria and green plants) is composed of around 20 subunits (depending on the organism) as well as other accessory, light-harvesting proteins. Each photosystem II contains at least 99 cofactors: 35 chlorophyll a, 12 beta-carotene, two pheophytin, two plastoquinone, two heme, one bicarbonate, 20 lipid molecules, the Mn₄CaO₅ cluster (including two chloride ions), one non-heme Fe²⁺ and two putative Ca²⁺ ions per monomer. There are several crystal structures of photosystem II. The PDB accession codes for this protein are 3WU2, 3BZ1, 3BZ2 (3BZ1 and 3BZ2 are monomeric structures of the Photosystem II dimer), 2AXT, 1S5L, 1W5C, 1ILX, 1FE1, 1IZL.
(Figure: beta-carotene, quinone and manganese center)
The oxygen-evolving complex is the site of water oxidation. It is a metallo-oxo cluster comprising four manganese ions (in oxidation states ranging from + 2 to + 4) and one divalent calcium ion. When it oxidizes water, producing oxygen gas and protons, it sequentially delivers the four electrons from water to a tyrosine (D1-Y161) sidechain and then to P680 itself. The first structural model of the oxygen-evolving complex was solved by X-ray crystallography from frozen protein crystals at a resolution of 3.8 Å in 2001. Over the following years the resolution of the model was gradually increased to 2.9 Å. While obtaining these structures was in itself a great feat, they did not show the oxygen-evolving complex in full detail. In 2011 the OEC of PSII was resolved to a level of 1.9 Å, revealing five oxygen atoms serving as oxo bridges linking the five metal atoms and four water molecules bound to the Mn₄CaO₅ cluster; more than 1,300 water molecules were found in each photosystem II monomer, some forming extensive hydrogen-bonding networks that may serve as channels for protons, water or oxygen molecules. It was then suggested that the structures obtained by X-ray crystallography are biased, since there is evidence that the manganese atoms are reduced by the high-intensity X-rays used, altering the observed OEC structure. This incentivized researchers to take their crystals to different X-ray facilities, called X-ray free-electron lasers, such as SLAC in the USA. In 2014 the structure observed in 2011 was confirmed. Knowing the structure of photosystem II did not suffice to reveal how it works exactly, so a race has started to solve the structure of photosystem II at different stages in the mechanistic cycle (discussed below). Currently, structures of the S₁ state and the S₃ state have been published almost simultaneously by two different groups, showing the addition of an oxygen molecule designated O6 between Mn1 and Mn4, suggesting that this may be the site on the oxygen-evolving complex where oxygen is produced.
Photosynthetic water splitting (or oxygen evolution) is one of the most important reactions on the planet, since it is the source of nearly all the atmosphere 's oxygen. Moreover, artificial photosynthetic water - splitting may contribute to the effective use of sunlight as an alternative energy - source.
The mechanism of water oxidation is still not fully elucidated, but many details of the process are known. The oxidation of water to molecular oxygen requires extraction of four electrons and four protons from two molecules of water. The experimental evidence that oxygen is released through a cyclic reaction of the oxygen evolving complex (OEC) within one PSII was provided by Pierre Joliot et al. They showed that, if dark-adapted photosynthetic material (higher plants, algae, and cyanobacteria) is exposed to a series of single-turnover flashes, oxygen evolution is detected with a typical period-four damped oscillation, with maxima on the third and the seventh flash and minima on the first and the fifth flash (for review, see). Based on this experiment, Bessel Kok and co-workers introduced a cycle of five flash-induced transitions of the so-called S-states, describing the four redox states of the OEC: when four oxidizing equivalents have been stored (at the S₄-state), the OEC returns to its basic S₀-state. In the absence of light, the OEC will "relax" to the S₁ state; the S₁ state is often described as being "dark-stable". The S₁ state is largely considered to consist of manganese ions with oxidation states of Mn³⁺, Mn³⁺, Mn⁴⁺, Mn⁴⁺. Finally, intermediate S-states were proposed by Jablonsky and Lazar as a regulatory mechanism and link between the S-states and tyrosine Z.
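A standard schematic of the Kok cycle described above (the transient S₄ state is shown in brackets; water binding and proton-release steps are omitted for simplicity) is:

$$\mathrm{S_0} \xrightarrow{\;h\nu\;} \mathrm{S_1} \xrightarrow{\;h\nu\;} \mathrm{S_2} \xrightarrow{\;h\nu\;} \mathrm{S_3} \xrightarrow{\;h\nu\;} [\mathrm{S_4}] \;\longrightarrow\; \mathrm{S_0} + \mathrm{O_2}$$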
In 2012, Renger proposed the idea of internal conversions of water molecules into typical oxides in the different S-states during water splitting. In 2017, Dolai presented an almost complete concept of water splitting through structural analysis of the S-states and of the oxides developed from the water molecules.
Dolai's Diagram of S-states: Dolai's diagrams are pictures of the different S-states in water-splitting during photosynthesis. These diagrams can express complete figures of the intermediate S-states as well as the development of typical oxides (monoxide (2H₂O), hydroxide (OH·H₂O), peroxide (H₂O₂), superoxide (HO₂) and di-oxygen (O₂)), which are formed due to the interchange of water molecules between S-states. The arrangement of hydrogen bonding is the main criterion of the diagrams. The effect of a photon (hν), the emission of an electron (e⁻) and the production of a hydrogen ion (H⁺) from water molecules can be shown by the diagrams in a way more amenable to experimental detection. The diagrams almost completely analyze the pathway of water-splitting in photosynthesis.
|
how many magnesium atoms are in one molecule of mgcl2 | Magnesium chloride - wikipedia
Magnesium chloride is the name for the chemical compound with the formula MgCl₂ and its various hydrates MgCl₂(H₂O)ₓ. These salts are typical ionic halides, being highly soluble in water. The hydrated magnesium chloride can be extracted from brine or sea water. In North America, magnesium chloride is produced primarily from Great Salt Lake brine. It is extracted in a similar process from the Dead Sea in the Jordan valley. Magnesium chloride, as the natural mineral bischofite, is also extracted (via solution mining) out of ancient seabeds; for example, the Zechstein seabed in northwest Europe. Some magnesium chloride is made from solar evaporation of seawater. Anhydrous magnesium chloride is the principal precursor to magnesium metal, which is produced on a large scale. Hydrated magnesium chloride is the form most readily available.
MgCl₂ crystallizes in the cadmium chloride motif, which features octahedral Mg²⁺. A variety of hydrates are known with the formula MgCl₂(H₂O)ₓ, and each loses water with increasing temperature: x = 12 (−16.4 °C), 8 (−3.4 °C), 6 (116.7 °C), 4 (181 °C), 2 (ca. 300 °C). In the hexahydrate, the Mg²⁺ remains octahedral, but is coordinated to six water ligands. The thermal dehydration of the hydrates MgCl₂(H₂O)ₓ (x = 6, 12) does not occur straightforwardly.
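A commonly cited reason the dehydration is not straightforward is partial hydrolysis of the hydrate on heating; a sketch of this side reaction, assuming the usual hydroxychloride intermediate, is:

$$\mathrm{MgCl_2\!\cdot\!6\,H_2O} \;\xrightarrow{\;\Delta\;}\; \mathrm{Mg(OH)Cl} + \mathrm{HCl} + 5\,\mathrm{H_2O}$$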
As suggested by the existence of some hydrates, anhydrous MgCl₂ is a Lewis acid, although a very weak one.
In the Dow process, magnesium chloride is regenerated from magnesium hydroxide using hydrochloric acid:
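In standard balanced form, this is:

$$\mathrm{Mg(OH)_2} + 2\,\mathrm{HCl} \;\longrightarrow\; \mathrm{MgCl_2} + 2\,\mathrm{H_2O}$$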
It can also be prepared from magnesium carbonate by a similar reaction.
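The analogous equation for the carbonate route, again with standard stoichiometry, is:

$$\mathrm{MgCO_3} + 2\,\mathrm{HCl} \;\longrightarrow\; \mathrm{MgCl_2} + \mathrm{H_2O} + \mathrm{CO_2}$$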
In most of its derivatives, MgCl₂ forms octahedral complexes. Derivatives with tetrahedral Mg²⁺ are less common. Examples include salts of (tetraethylammonium)₂MgCl₄ and adducts such as MgCl₂(TMEDA).
Magnesium chloride serves as precursor to other magnesium compounds, for example by precipitation:
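One representative precipitation, assuming lime (calcium hydroxide) as the precipitant, is:

$$\mathrm{MgCl_2} + \mathrm{Ca(OH)_2} \;\longrightarrow\; \mathrm{Mg(OH)_2} + \mathrm{CaCl_2}$$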
It can be electrolysed to give magnesium metal:
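The overall cell reaction for electrolysis of the molten salt is:

$$\mathrm{MgCl_2} \;\longrightarrow\; \mathrm{Mg} + \mathrm{Cl_2}$$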
This process is practiced on a substantial scale.
Magnesium chloride is most commonly used for dust control and road stabilization. Its second-most common use is ice control. In addition to the production of magnesium metal, magnesium chloride also is used for a variety of other applications: fertilizer, mineral supplement for animals, wastewater treatment, wallboard, artificial seawater, feed supplement, textiles, paper, fireproofing agents, cements and refrigeration brine. Mixed with hydrated magnesium oxide, magnesium chloride forms a hard material called Sorel cement.
This compound is also used in fire extinguishers: obtained by the reaction of magnesium hydroxide and hydrochloric acid (HCl) in liquid form along with water in gaseous state. Magnesium chloride also is used in several medical and topical (skin related) applications. It has been used in pills as supplemental sources of magnesium, where it serves as a soluble compound that is not as laxative as magnesium sulfate, and more bioavailable than magnesium hydroxide and magnesium oxide, since it does not require stomach acid to produce soluble Mg ion. It can also be used as an effective anesthetic for cephalopods, some species of crustaceans, and several species of bivalve, including oysters.
MgCl₂ is also commonly utilized in the polymerase chain reaction (PCR). The magnesium ion is necessary for DNA synthesis both in vivo and in vitro.
Magnesium chloride is one of many substances used for dust control, soil stabilization and wind erosion mitigation. When magnesium chloride is applied to roads and bare soil areas, both positive and negative performance issues occur which are related to many application factors. Water absorbing magnesium chloride (deliquescent) attributes include
However, limitations include
The use of magnesium chloride on roads remains controversial. Advocates claim (1) Cleaner air, which leads to better health as fugitive dust can cause health problems in the young, elderly and people with respiratory conditions; and (2) Greater safety through improved road conditions, including increased driver visibility and decreased risks caused by loose gravel, soft spots, road roughness and flying rocks. It reduces foreign sediment in nearby surface waters (dust that settles in creeks and streams), helps prevent stunted crop growth caused by clogged pores in plants, and keeps vehicles and property cleaner. Other studies show the use of salts for road deicing or dust suppressing can contribute substantial amounts of chloride ions to runoff from surface of roads treated with the compounds. The salts MgCl2 (and CaCl2) are very soluble in water and will dissociate. The salts, when used on road surfaces, will dissolve during wet weather and be transported into the groundwater through infiltration and / or runoff into surface water bodies. Groundwater infiltration can be a problem and the chloride ion in drinking water is considered a problem when concentrations exceed 250 mg / l. It is therefore regulated by the EPA 's drinking water standards. The chloride concentration in the groundwater or surface water depends on several factors including:
In addition, the chloride concentration in the surface water also depends on the size or flow rate of the water body and the resulting dilution achieved. In chloride concentration studies carried out in Wisconsin during a winter deicing period, runoff from roadside drainages were analyzed. All studies indicated that the chloride concentration increased as a result of deicing activities but the levels were still below the MCL of 250 mg / L set by the EPA. Nevertheless, the long - term effect of this exposure is not known.
Although the EPA has set the maximum chloride concentration in water for domestic use at 250 mg / l animals can tolerate higher levels. At excessively high levels, chloride is said to affect the health of animals. As stated by the National Technical Advisory Committee to the Secretary of Interior (1968), "Salinity may have a two-fold effect on wildlife; a direct one affecting the body processes of the species involved and an indirect one altering the environment making living species perpetuation difficult or impossible. '' One major problem associated with the use of deicing salt as far as wildlife is concerned is that wildlife are known to have "salt craving '' and therefore are attracted to salted highways which can be a traffic hazard to both the animals and motorists.
Regarding the accumulation of chloride salts in roadside soils including the adverse effects on roadside plants and vegetation physiology and morphology, documentation dates back to World War II era times and consistently continues forward to present times. As far as plants and vegetation are concerned, the accumulation of salts in the soil adversely affects their physiology and morphology by: increasing the osmotic pressure of the soil solution, by altering the plant 's mineral nutrition, and by accumulating specific ions to toxic concentrations in the plants. Regarding the intentional application of excessive salts: see Salting the Earth.
Road departments and private industry may apply liquid or powdered magnesium chloride to control dust and erosion on unimproved (dirt or gravel) roads and dusty job sites such as quarries because it is relatively inexpensive to purchase and apply. Its hygroscopy makes it absorb moisture from the air, limiting the number of smaller particles (silts and clays) that become airborne. The most significant benefit of applying dust control products is the reduction in gravel road maintenance costs. However, recent research and updates indicate biological toxicity in the environment in plants as an ongoing problem. Since 2001, truckers have complained about "Killer Chemicals '' on roads and now some states are backing away from using salt products.
Also a small percentage of owners of indoor arenas (e.g. for horse riding) may apply magnesium chloride to sand or other "footing '' materials to control dust. Although magnesium chloride use in an equestrian (horse) arena environment is generally referred to as a dust suppressant it is technically more accurate to consider it as a water augmentation activity since its performance is based on absorbing moisture from the air and from whatever else comes in contact with it.
To control or mitigate dust, chlorides need moisture to work effectively, so they work better in humid than in arid climates. As the humidity increases, the chlorides draw moisture out of the air to keep the surface damp, and as humidity decreases they diffuse and release moisture. These naturally occurring equilibrium changes also allow chlorides to be used as a dehydrating agent, including for the drying, curing and preservation of hides.
As a road stabilizer, magnesium chloride binds gravel and clay particles to keep them from leaving the road. The water - absorbing (hygroscopic) characteristics of magnesium chloride prevent the road from drying out, which keeps gravel on the ground. The road remains continually "wet '' as if a water truck had just sprayed the road.
Magnesium chloride is used for low - temperature de-icing of highways, sidewalks, and parking lots. When highways are treacherous due to icy conditions, magnesium chloride helps to prevent the ice bond, allowing snow plows to clear the roads more efficiently.
Magnesium chloride is used in three ways for pavement ice control: Anti-icing, when maintenance professionals spread it onto roads before a snow storm to prevent snow from sticking and ice from forming; pre-wetting, which means a liquid formulation of magnesium chloride is sprayed directly onto salt as it is being spread onto roadway pavement, wetting the salt so that it sticks to the road; and pre-treating, when magnesium chloride and salt are mixed together before they are loaded onto trucks and spread onto paved roads.
While it is generally accepted that ongoing use of any de-icer (ice melter) will eventually contribute to some degradation of the concrete surface to which it is applied, some de-icers are gentler on concrete than others. Past studies have often utilized high temperatures to accelerate the impact to concrete. By setting parameters that more closely represent real - world de-icing conditions, Purdue University researchers measured the impact of magnesium chloride and calcium chloride on concrete. Their study concluded that calcium chloride damages concrete twice as fast as magnesium chloride.
Magnesium chloride is used in nutraceutical and pharmaceutical preparations.
Magnesium chloride has shown promise as a storage material for hydrogen. Ammonia, which is rich in hydrogen atoms, is used as an intermediate storage material. Ammonia can be effectively absorbed onto solid magnesium chloride, forming Mg(NH₃)₆Cl₂. Ammonia is released by mild heat, and is then passed through a catalyst to give hydrogen gas.
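A sketch of the absorption step, assuming the hexaammine complex named above, is:

$$\mathrm{MgCl_2} + 6\,\mathrm{NH_3} \;\longrightarrow\; \mathrm{Mg(NH_3)_6Cl_2}$$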
Magnesium chloride (E511) is an important coagulant used in the preparation of tofu from soy milk. In Japan it is sold as nigari (にがり, derived from the Japanese word for "bitter ''), a white powder produced from seawater after the sodium chloride has been removed, and the water evaporated. In China, it is called lushui (卤水). Nigari or lushui consists mostly of magnesium chloride, with some magnesium sulfate and other trace elements. It is also an ingredient in baby formula milk.
Because magnesium is a mobile nutrient, magnesium chloride can be effectively used as a substitute for magnesium sulfate (Epsom salt) to help correct magnesium deficiency in plants via foliar feeding. The recommended dose of magnesium chloride is smaller than the recommended dose of magnesium sulfate (20 g / L). This is due primarily to the chlorine present in magnesium chloride, which can easily reach toxic levels if over-applied or applied too often.
It has been found that higher concentrations of magnesium in tomato and some pepper plants can make them more susceptible to disease caused by infection of the bacterium Xanthomonas campestris, since magnesium is essential for bacterial growth.
Magnesium concentrations in natural seawater are between 1250 mg / L and 1350 mg / L, approximately 3.7 % of the total seawater mineral content. Dead Sea minerals contain a significantly higher magnesium chloride ratio, 50.8 %. Carbonates and calcium are essential for all growth of corals, coralline algae, clams, and invertebrates. Magnesium can be depleted by mangrove plants and the use of excessive limewater or by going beyond natural calcium, alkalinity, and pH values.
Magnesium ions are bitter - tasting, and magnesium chloride solutions are bitter in varying degrees, depending on the concentration of magnesium.
Magnesium toxicity from magnesium salts is rare in healthy individuals with a normal diet, because excess magnesium is readily excreted in urine by the kidneys. A few cases of oral magnesium toxicity have been described in persons with normal renal function ingesting large amounts of magnesium salts, but it is rare. If a large amount of magnesium chloride is eaten, it will have effects similar to magnesium sulfate, causing diarrhea, although the sulfate also contributes to the laxative effect in magnesium sulfate, so the effect from the chloride is not as severe.
Chloride (Cl⁻) and magnesium (Mg²⁺) are both essential nutrients important for normal plant growth. Too much of either nutrient may harm a plant, although foliar chloride concentrations are more strongly related with foliar damage than magnesium. High concentrations of MgCl₂ in the soil may be toxic or change water relationships such that the plant cannot easily accumulate water and nutrients. Once inside the plant, chloride moves through the water-conducting system and accumulates at the margins of leaves or needles, where dieback occurs first. Leaves are weakened or killed, which can lead to the death of the tree.
Ecotoxicity levels related to terrestrial and aquatic organisms for magnesium chloride are listed in the Pesticide Action Network Pesticide Database.
The presence of dissolved magnesium chloride in the well water (bore water) used in locomotive boilers on the Trans - Australian Railway caused serious and expensive maintenance problems during the steam era. At no point along its route does the line cross a permanent fresh water watercourse, so bore water had to be relied on. No inexpensive treatment for the highly mineralised water was available and locomotive boilers were lasting less than a quarter of the time normally expected. In the days of steam locomotion, about half the total train load was water for the engine. The line 's operator, Commonwealth Railways was an early adopter of the diesel - electric locomotive.
|
stars of the wild wild west tv show | The Wild Wild West - wikipedia
The Wild Wild West is an American weird western television series that ran on the CBS television network for four seasons (104 episodes) from September 17, 1965, to April 4, 1969. Two television movies were made with the original cast in 1979 and 1980, and the series was adapted for a motion picture in 1999.
Developed at a time when the television western was losing ground to the spy genre, this show was conceived by its creator, Michael Garrison, as "James Bond on horseback. '' Set during the administration of President Ulysses Grant (1869 -- 77), the series followed Secret Service agents James West (Robert Conrad) and Artemus Gordon (Ross Martin) as they solved crimes, protected the President, and foiled the plans of megalomaniacal villains to take over all or part of the United States.
The show featured a number of fantasy elements, such as the technologically advanced devices used by the agents and their adversaries. The combination of the Victorian era time - frame and the use of Verne-esque technology has inspired some to give the show credit as being one of the more "visible '' origins of the steampunk subculture. These elements were accentuated even more in the 1999 movie adaptation.
Despite high ratings, the series was cancelled near the end of its fourth season as a concession to Congress over television violence.
The Wild Wild West told the story of two Secret Service agents: the fearless and handsome James T. West (played by Robert Conrad), and Artemus Gordon (played by Ross Martin), a brilliant gadgeteer and master of disguise. Their unending mission was to protect President Ulysses S. Grant and the United States from all manner of dangerous threats. The agents traveled in luxury aboard their own train, the Wanderer, equipped with everything from a stable car to a laboratory. James West had served as an intelligence and cavalry officer in the US Civil War on the staff of Ulysses Grant; his "cover, '' at least in the pilot episode, is that of "a dandy, a high - roller from the East. '' Thereafter, however, there is no pretense, and his reputation as the foremost Secret Service agent often precedes him. According to the TV movies, West retires from the Service by 1880 and lives on a ranch in Mexico. Gordon, who was a captain in the Civil War, had also been in show business. When he retires in 1880 he returns to performing as the head of a Shakespeare traveling players troupe.
The show incorporated classic Western elements with an espionage thriller, science fiction / alternate history ideas (in a similar vein to what would later be called steampunk), in one case horror ("The Night of the Man Eating House '') and plenty of humor. In the tradition of James Bond, there were always beautiful women, clever gadgets, and delusional arch - enemies with half - insane plots to take over the country or the world.
The title of each episode begins with "The Night '' (except for the first - season episode "Night of the Casual Killer '', which omitted the definite article "The ''). This followed other idiosyncratic naming conventions established by shows like Rawhide, where each episode title began with "Incident at '' or "Incident of, '' and The Man from U.N.C.L.E., where episodes were titled "The (Blank) Affair. ''
Robert Conrad starred as James West. Before The Wild Wild West, Conrad played private eye Tom Lopaka in ABC 's Hawaiian Eye for four seasons, 1959 - 63. Conrad claimed to be the 17th actor to test for the role of James West. (Rory Calhoun was initially announced for the part.) Conrad performed nearly all of his own stunts on The Wild Wild West. "For the first few episodes we tried stuntmen, '' Conrad explained, "but the setup time slowed production down, so I volunteered. Things started moving quicker when I took the jumps and the spills. We started meeting the budget. '' Early on he was doubled by Louie Elias or Chuck O'Brien.
On January 24, 1968, however, during filming of "The Night of the Fugitives '', Conrad fell 12 ft (3.7 m) from a chandelier onto a concrete floor and suffered a concussion. As a result, production of the series (then near the end of its third season) ended two weeks early. Conrad spent weeks in the hospital, and had a long convalescence slowed by constant dizziness. The episode was eventually completed and aired during the fourth season, with footage of the fall left in. Conrad later told Percy Shain of the Boston Globe, "I have the whole scene on film. It 's a constant reminder to be careful. It also bolstered my determination to make this my last year with the series. Four seasons are enough of this sort of thing. ''
Artemus Gordon was played by Ross Martin. Prior to The Wild Wild West, Martin co-starred in the CBS series Mr. Lucky from 1959 to 1960, portraying Mr. Lucky 's sidekick, Andamo. The series was created by Blake Edwards, who also cast Martin in his films Experiment in Terror (1962) and The Great Race (1964).
Martin once called his role as Artemus Gordon "a show - off 's showcase '' because it allowed him to portray over 100 different characters during the course of the series, and perform dozens of different dialects. Martin sketched his ideas for his characterizations and worked with the makeup artists to execute the final look. Martin told Percy Shain of the Boston Globe, "In the three years of the show, I have run a wider gamut than even those acknowledged masters of disguise, Paul Muni and Lon Chaney. Sometimes I feel like a one man repertory company. I think I 've proven to myself and to the industry that I am the No. 1 character lead in films today. '' The industry acknowledged Martin 's work with an Emmy nomination in 1969.
Martin broke his leg in a fourth - season episode, "The Night of the Avaricious Actuary ''. He dropped a rifle, stepped on it, and his foot rolled over it. Martin told Percy Shain of the Boston Globe, "In the scene where I was hurt, my stand - in tried to finish it. When the shell ejected from the rifle, it caught him in the eye and burned it. We still have n't finished that scene. It will have to wait until I can move around again. ''
A few weeks later, after completing "The Night of Fire and Brimstone '', Martin suffered a heart attack on August 17, 1968. (This was exactly two years after Michael Garrison died.) Martin 's character was replaced temporarily by other agents played by Charles Aidman (four episodes), Alan Hale, Jr. and William Schallert. Aidman said the producers had promised to rewrite the scripts for his new character, but this simply amounted to scratching out the name "Artemus Gordon '' and penciling in "Jeremy Pike '' (his character 's name). Pat Paulsen is frequently thought of as a Martin substitute, but he in fact appeared in one of Aidman 's episodes, and his character would have been present even if Martin appeared.
The show 's most memorable recurring arch - villain was Dr. Miguelito Quixote Loveless, a brilliant but petulant and megalomaniacal dwarf portrayed by Michael Dunn. Like Professor Moriarty for Sherlock Holmes, Loveless provided West and Gordon with a worthy adversary, whose plans could be foiled but who resisted all attempts to capture him and bring him to justice. Initially he had two constant companions: the huge Voltaire, played by Richard Kiel; and the beautiful Antoinette, played by Dunn 's real - life singing partner, Phoebe Dorin. Voltaire disappeared without explanation after his third episode (although Richard Kiel returned in a different role in "The Night of the Simian Terror ''), and Antoinette after her sixth. According to the TV movie The Wild Wild West Revisited, Loveless eventually dies in 1880 from ulcers, brought on by the frustration of having his plans consistently foiled by West and Gordon. (His son, played by Paul Williams, subsequently seeks revenge on the agents.)
Though several actors appeared in multiple villainous roles, only one other character had a second encounter with West and Gordon: Count Manzeppi (played flamboyantly by Victor Buono, who played another, different villain in the pilot), a diabolical genius of "black magic '' and crime, who -- like Dr. Loveless -- had an escape plan at the end. (Buono eventually returned in More Wild Wild West as "Dr. Henry Messenger '', a parody of Henry Kissinger, who ends up both handcuffed and turning invisible with the villainous Paradine.)
Agnes Moorehead won an Emmy for her role as Emma Valentine in "The Night of The Vicious Valentine ''. Some of the other villains were portrayed by Leslie Nielsen, Martin Landau, Burgess Meredith, Boris Karloff, Ida Lupino, Carroll O'Connor, Ricardo Montalban, Robert Duvall, Ed Asner, and Harvey Korman.
While the show 's writers created their fair share of villains, they frequently started with the nefarious, stylized inventions of these madmen (or madwomen) and then wrote the episodes to capitalize on these devices. Henry Sharp, the series ' story consultant, would sketch the preliminaries of the designs (eccentrically numbering every sketch "fig. 37 ''), and give the sketch to a writer, who would build a story around it. Episodes were also inspired by Edgar Allan Poe, H.G. Wells, and Jules Verne.
In 1954, Michael Garrison and Gregory Ratoff purchased the film rights to Ian Fleming 's first Bond novel, Casino Royale, for $600. CBS bought the TV rights for $1,000, and on October 21, 1954 broadcast an hour - long adaptation on its Climax! series, with Barry Nelson playing American agent ' Jimmy Bond ' and Peter Lorre playing the villain, Le Chiffre. CBS also approached Fleming about developing a Bond TV series. (Fleming later contributed ideas to NBC 's The Man From U.N.C.L.E.)
In 1955 Ratoff and Garrison bought the rights to the novel in perpetuity for an additional $6,000. They pitched the idea for a film to 20th Century Fox, but the studio turned them down. After Ratoff died in 1960, his widow and Garrison sold the film rights to Charles K. Feldman for $75,000. Feldman eventually produced the spoof Casino Royale in 1967. By then, Garrison and CBS had brought a James Bond to television in a unique way.
The pilot episode, "The Night of the Inferno '', was produced by Garrison and, according to Robert Conrad, cost $685,000. The episode was scripted by Gilbert Ralston, who had written for numerous episodic TV series in the 1950s and 1960s. In a later deposition, Ralston explained that he was approached by Michael Garrison, who "said he had an idea for a series, good commercial idea, and wanted to know if I could glue the idea of a western hero and a James Bond type together in the same show. '' Ralston said he then created the Civil War characters, the format, the story outline and nine drafts of the pilot script that was the basis for the television series. It was his idea, for example, to have a secret agent named Jim West who would perform secret missions for President Ulysses S. Grant. (Ralston later sued Warner Bros. over the 1999 motion picture Wild Wild West based on the series.)
As indicated by Robert Conrad on his DVD commentary, the show went through several changes in producers in its first season. This was apparently due to conflicts between the network and Garrison, who had no experience producing for television and had trouble staying on budget. At first, Ben Brady was named producer, but he was shifted to Rawhide, which had its own crisis when star Eric Fleming quit at the end of the 1964 - 65 season. (That series lasted for another thirteen episodes before it was cancelled by CBS.)
The network then hired Collier Young. In an interview, Young said he saw the series as The Rogues set in 1870. (The Rogues, which he had produced, was about con men who swindled swindlers, much like the 1970s series Switch.) Young also claimed to have added the wry second "Wild '' to the series title, which had been simply "The Wild West '' in its early stages of production. Young lasted three episodes (2 -- 4). His shows featured a butler named Tennyson who traveled with West and Gordon, but since the episodes were not broadcast in production order, the character popped up at different times during the first season. Conrad was not sorry to see Young go: "I do n't mind. All that guy did creatively was put the second ' wild ' in the title. CBS did the right thing. ''
Young 's replacement, Fred Freiberger, returned the series to its original concept. It was on his watch that writer John Kneubuhl, inspired by a magazine article on Michael Dunn, created the arch - villain Dr. Miguelito Loveless. Phoebe Dorin, who played Loveless ' assistant, Antoinette, recalled: "Michael Garrison came to see (our) nightclub act when he was in New York. Garrison said to himself, ' Michael Dunn would make the most extraordinary villain. People have never seen anything like him before, and he 's a fabulous little actor and he 's funny as hell. ' And, Garrison felt, if Michael Dunn sang on every show, with the girl, it would be an extraordinary running villain. He came backstage and he told us who he was and he said he was going to do a television show called The Wild Wild West and we would be called. We thought, ' Yeah, yeah, we 've heard all that before. ' But he did call us and the show was a fantastic success. And that 's how it started, because he saw the nightclub act. '' Loveless was introduced in the show 's sixth produced, but third televised episode, "The Night the Wizard Shook The Earth. '' The character became an immediate hit and Dunn was contracted to appear in four episodes per season. Because of health problems, Dunn could only appear in 10 episodes instead of 16.
After ten episodes (5 -- 14), Freiberger and executive producer Michael Garrison were, according to Variety, "unceremoniously dumped, '' reputedly due to a behind - the - scenes power struggle. Garrison was replaced by Phillip Leacock, the executive producer of Gunsmoke, and Freiberger was supplanted by John Mantley, an associate producer on Gunsmoke. The exchange stunned both cast and crew. Garrison, who owned 40 % of The Wild Wild West, knew nothing about the changes and had n't been consulted. He turned the matter over to his attorneys. Freiberger said, "I was fired for accomplishing what I had been hired to do. I was hired to pull the show together when it was in chaos. '' Conrad said, "I was totally shocked by it. Let 's face it, the show is healthy. I think Fred Freiberger is totally correct in his concept of the show. It 's an administrative change, for what reason I do n't know. ''
Mantley produced seven (15 -- 21) episodes then returned to his former position on Gunsmoke, and Gene L. Coon took over as associate producer. By then, Garrison 's conflict with CBS was resolved and he returned to the executive producer role. Coon, however, left after six episodes (22 -- 27) to write First to Fight (1967), a Warner Bros. film about the Marines. Garrison produced the last episode of season one and the initial episodes of season two.
Garrison 's return was much to the relief of Ross Martin, who once revealed that he was so disenchanted during the first season that he tried to quit three times. He explained that Garrison "saw the show as a Bond spoof laid in 1870, and we all knew where we stood. Each new producer tried to put his stamp on the show and I had a terrible struggle. I fought them line by line in every script. They knew they could n't change the James West role very much, but it was open season on Artemus Gordon because they had never seen anything like him before. ''
On August 17, 1966, however, during production of the new season 's ninth episode, "The Night of the Ready - Made Corpse '', Garrison fell down a flight of stairs in his home, fractured his skull, and died. CBS brought in Bruce Lansbury, brother of actress Angela Lansbury, to produce the show for the remainder of its run. In the early 1960s Lansbury had been in charge of daytime shows at CBS Television City in Hollywood, then vice president of programming in New York. When he was tapped for The Wild Wild West, Lansbury was working with his twin brother, Edgar, producing legitimate theater on Broadway.
The first season 's episodes were filmed in black and white, and they were darker in tone. Cinematographer Ted Voightlander was nominated for an Emmy Award for his work on one of these episodes, "The Night of the Howling Light. '' Subsequent seasons were filmed in color, and the show became noticeably campier.
The Wild Wild West was filmed at CBS Studio Center on Radford Avenue in Studio City in the San Fernando Valley. The 70 - acre lot was formerly the home of Republic Studios, which specialized in low - budget films including Westerns starring Roy Rogers and Gene Autry and Saturday morning serials (which The Wild Wild West appropriately echoed). CBS had a wall - to - wall lease on the lot starting in May 1963, and produced Gunsmoke and Rawhide there, as well as Gilligan 's Island. The network bought the lot from Republic in February 1967, for $9.5 million. Beginning in 1971, MTM Enterprises (headed by actress Mary Tyler Moore and her then - husband, Grant Tinker) became the Studio Center 's primary tenant. In the mid-1980s the western streets and sets were replaced with new sound stages and urban facades, including the New York streets seen in Seinfeld. In 1995 the lagoon set that was originally constructed for Gilligan 's Island was paved over to create a parking lot.
Among iconic locations used for filming were Bronson Canyon ("Night of the Returning Dead '' S02E05) and Vasquez Rocks ("Night of the Cadre '' S02E26).
For the pilot episode, "The Night of the Inferno '', the producers used Sierra Railroad No. 3, a 4 - 6 - 0 locomotive that was, fittingly, an anachronism: Sierra No. 3 was built in 1891, fifteen to twenty years after the series was set. Footage of this train, with a 5 replacing the 3 on its number plate, was shot in Jamestown, California. Best known for its role as the Hooterville Cannonball in the CBS series Petticoat Junction, Sierra No. 3 probably appeared in more films and TV shows than any other locomotive in history. It was built by the Rogers Locomotive and Machine Works in Paterson, New Jersey.
When The Wild Wild West went into series production, however, an entirely different train was employed. The locomotive, a 4 - 4 - 0 named the Inyo, was built in 1875 by the Baldwin Locomotive Works in Philadelphia. Originally a wood - burner, the Inyo was converted to oil in 1910. The Inyo, as well as the express car and the passenger car, originally served the Virginia and Truckee Railroad in Nevada. They were among V&T cars sold to Paramount Pictures in 1937 -- 38. The Inyo appears in numerous films including High, Wide, and Handsome (1938), Union Pacific (1939), The Marx Brothers ' Go West (1940), Meet Me in St. Louis, (1944), Red River (1948), Disney 's The Great Locomotive Chase (1956) and McLintock! (1963). For The Wild Wild West, Inyo 's original number plate was temporarily changed from No. 22 to No. 8 so the train footage could be flopped horizontally without the number appearing reversed. Footage of the Inyo in motion and idling was shot around Menifee, California, and reused in virtually every episode. (Stock footage of Sierra No. 3 occasionally resurfaced as well.)
These trains were used only for exterior shots. The luxurious interior of the passenger car was constructed on Stage 6 at CBS Studio Center. (Neither Stage 6 or the western streets still exist.) Designed by art director Albert Heschong, the set reportedly cost $35,000 in 1965 (approximately $250,000 in 2011 dollars). The interior was redesigned when the show switched to color for the 1966 - 67 season.
The train interior was also used in at least one episode of Gunsmoke ("Death Train, '' aired January 27, 1967), and in at least two episodes of The Big Valley ("Last Train to the Fair, '' aired April 27, 1966, and "Days of Wrath, '' aired January 8, 1968). All three series were filmed at CBS Studio Center and shared other exterior and interior sets. Additionally, the interior was used for an episode of Get Smart ("The King Lives? '', aired January 6, 1968) and the short - lived Barbary Coast ("Funny Money, '' aired September 8, 1975).
After her run on The Wild Wild West, the Inyo participated in the Golden Spike Centennial at Promontory, Utah, in 1969. The following year it appeared as a replica of the Central Pacific 's "Jupiter '' locomotive at the Golden Spike National Historical Site. The State of Nevada purchased the Inyo in 1974; it was restored to 1895 vintage, including a wider smoke stack and a new pilot (cow catcher) without a drop coupler. The Inyo is still operational and displayed at the Nevada State Railroad Museum in Carson City. The express car (No. 21) and passenger car (No. 4) are also at the museum.
Another veteran V&T locomotive, the Reno (built in 1872 by Baldwin), was used in the two The Wild Wild West TV movies. The Reno, which resembles the Inyo, is located at Old Tucson Studios.
The 1999 Wild Wild West motion picture used the Baltimore & Ohio 4 - 4 - 0 No. 25, one of the oldest operating steam locomotives in the U.S. Built in 1856 at the Mason Machine Works in Taunton, Massachusetts, it was later renamed The William Mason in honor of its manufacturer. For its role as "The Wanderer '' in the motion picture, the engine was sent to the steam shops at the Strasburg Railroad for restoration and repainting. The locomotive is brought out for the B&O Train Museum in Baltimore 's "Steam Days ''.
The Inyo and The William Mason both appeared in the Disney film The Great Locomotive Chase (1956).
The Wild Wild West featured numerous, often anachronistic, gadgets. Some were recurring devices, such as James ' sleeve gun or breakaway derringer hidden in his left and right boot heels. Others appeared in only a single episode.
Most of these gadgets are concealed in West 's garments:
Aboard the train:
Other gadgets:
The villains often used equally creative gadgets, including:
The main title theme was written by Richard Markowitz, who previously composed the theme for the TV series The Rebel. He was brought in after the producers rejected two attempts by film composer Dimitri Tiomkin.
In an interview by Susan Kesler (for her book, The Wild Wild West: The Series) included in the first season DVD boxed set, Markowitz recalled that the original Tiomkin theme "was very, kind of, traditional, it just seemed wrong. '' Markowitz explained his own approach: "By combining jazz with Americana, I think that 's what nailed it. That took it away from the serious kind of thing that Tiomkin was trying to do... What I did essentially was write two themes: the rhythmic, contemporary theme, Fender bass and brushes, that vamp, for the cartoon effects and for West 's getting himself out of trouble, and the heraldic western outdoor theme over that, so that the two worked together. ''
Session musicians who played on the theme were Tommy Morgan (harmonica); Bud Shank, Ronnie Lang, Plas Johnson, and Gene Cipriano (woodwinds); Vince DeRosa and Henry Sigismonti (French Horns); Uan Rasey, Ollie Mitchell, and Tony Terran (trumpets); Dick Nash, Lloyd Ulyate, Chauncey Welsch, Kenny Shroyer (trombones). Tommy Tedesco and Bill Pitman (guitars); Carol Kaye (Fender bass); Joe Porcaro (brushes); Gene Estes, Larry Bunker, and Emil Richards (timpani, percussion).
Markowitz, however, was never credited for his theme in any episode; it is believed that this was due to legal difficulties between CBS and Tiomkin over the rejection of the latter 's work. Markowitz did receive "music composed and conducted by '' credits for episodes he 'd scored (such as "The Night of the Bars of Hell '' and "The Night of the Raven '') or where he supplied the majority of tracked - in cues (for example in "The Night of the Grand Emir '' and "The Night of the Gypsy Peril ''). He finally received "theme by '' credit on both of the TV movies, which were scored by Jeff Alexander rather than Markowitz (few personnel from the series were involved with the TV movies).
The animated title sequence was another unique element of the series. Created by Michael Garrison Productions and DePatie - Freleng Enterprises, it was directed by Isadore "Friz '' Freleng and animated by Ken Mundie, who designed the titles for the film The Great Race and the TV series Secret Agent, Rawhide, and Death Valley Days.
The screen was divided into four corner panels surrounding a narrow central panel that contained a cartoon "hero ''. The Hero, who looked more like a traditional cowboy than either West or Gordon, encounters cliché western characters and situations in each of the panels. In the three seasons shot in color, the overall backdrop was an abstracted wash of the flag of the United States, with the upper left panel colored blue and the others containing horizontal red stripes.
The original animation sequence is:
This teaser part of the show was incorporated into The History Channel 's Wild West Tech (2003 -- 5).
Each episode had four acts. At the end of each act, the scene, usually a cliffhanger moment, would freeze, and a sketch or photograph of the scene faded in to replace the cartoon art in one of the four corner panels. The style of freeze - frame art changed over the course of the series. In all first - season episodes other than the pilot, the panels were live - action stills made to evoke 19th - century engravings. In season two (the first in color) the scenes dissolved to tinted stills; from "The Night of the Flying Pie Plate '' on, however, the panels were home to Warhol - like serigraphs of the freeze - frames. The end credits were displayed over each episode 's unique mosaic except in the final season, when a standardized design was used (curiously, in this design the bank robber is unconscious, the cardsharp has no card and the lady is on the ground, but the sixshooter in the upper left - hand panel has returned). The freeze - frame graphics were shot at a facility called Format Animation. The pilot is the only episode in which the center panel of the Hero is replaced by a sketch of the final scene of an act; in the third act he is replaced by the villainous General Cassinello (Nehemiah Persoff).
During the first season, the series title "The Wild Wild West '' was set in the font Barnum, which resembles the newer font P.T. Barnum. In subsequent seasons, the title appeared in a hand - drawn version of the font Dolphin (which resembles newer fonts called Zebrawood, Circus, and Rodeo Clown). Robert Conrad 's name was also set in this font. Ross Martin 's name was set in the font Bracelet (which resembles newer fonts named Tuscan Ornate and Romantiques). All episode titles, writer and director credits, guest cast and crew credits were set in Barnum. During commercial breaks, the title "The Wild Wild West '' also appeared in Barnum.
The series is generally set during the presidency of Ulysses S. Grant, 1869 -- 77; occasional episodes indicate a more precise date.
Some episodes were violent for their time, and that, rather than low ratings, ultimately was the series ' downfall. In addition to gunplay, there were usually two fight sequences per episode. These were choreographed by Whitey Hughes and performed by Conrad and a stock company of stuntmen, including Red West, Dick Cangey, and Bob Herron (who doubled for Ross Martin).
After he suffered a concussion filming "The Night of the Fugitives, '' the network insisted that Conrad defer to a double. (His chair on the set was newly inscribed: "Robert Conrad, ex-stuntman, retired by CBS, Jan. 24, 1968. '') "(W) hen I came back for the fourth season I was limited to what I could do for insurance reasons, '' Conrad explained. "So I agreed and gradually I did all the fights but could n't do anything five feet off the ground and of course that went out the window. '' He was doubled by Jimmy George. Often, George would start a stunt, such as a high fall or a dive through a window, then land behind boxes or off camera, where Conrad was concealed and waiting to seamlessly complete the action. This same ploy was sometimes used by Ross Martin and Bob Herron.
It was hazardous work. Hughes recalled, "We had a lot of crashes. We used to say, ' Roll the cameras and call the ambulances. ' '' Conrad recalled in 1994, "The injuries started at the top. Robert Conrad: 6 - inch fracture of the skull, high temporal concussion, partial paralysis. Ross Martin: broken leg. A broken skull for Red West. Broken leg for Jimmy George. Broken arm for Jack Skelly. And Michael Dunn: head injury and a spinal sprain. He did his own stunts. And on and on. ''
Following the 1968 assassinations of Dr. Martin Luther King and Robert Kennedy, President Lyndon Johnson created the National Commission on the Causes and Prevention of Violence. One of the questions it tackled was whether violence on television was a contributing factor to violence in American society. (This also included graphic news coverage of the Vietnam War.) The television networks, anticipating these allegations, moved to curtail violence on their entertainment programs before the start of the 1968 - 69 season. Television reporter Cynthia Lowrey, in an article published in August 1968, wrote that The Wild Wild West "is one of the action series being watched by network censors for scenes of excessive violence, even if the violence is all in fun. ''
However, despite a CBS mandate to tone down the mayhem, "The Night of the Egyptian Queen '' (aired November 15, 1968) contains perhaps the series ' most ferocious barroom brawl. A later memo attached to the shooting script of "The Night of Miguelito 's Revenge '' (aired December 13, 1968) reads: "Note to Directors: The producer respectfully asks that no violent acts be shot which are not depicted in the script or discussed beforehand.... Most particularly stay away from gratuitous ad - libs, such as slaps, pointing of firearms or other weapons at characters (especially in close quarters), kicks and the use of furniture and other objects in fight scenes. '' Strict limits were placed on the number of so - called "acts of violence '' in the last episodes of the season (and thus the series). James West rarely wears a gun, and rather than the usual fisticuffs, fight sequences involved tossing, tackling or body blocking the villains.
In December 1968, executives from ABC, CBS and NBC appeared before the President 's Commission. The most caustic of the commissioners, Rep. Hale Boggs (D - La.), decried what he called "the Saturday morning theme of children 's cartoon shows '' that permit "the good guy to do anything in the name of justice. '' He also indicted CBS for featuring sadism in its primetime programing (The Wild Wild West was subsequently identified as one example). The Congressman did, however, commend CBS for a 25 % decline in violence programming in prime time compared to the other two networks.
Three months later, in March 1969, Sen. John O. Pastore (D-R. I.) called the same network presidents before his Senate communications subcommittee for a public scolding on the same subject. At Pastore 's insistence, the networks promised tighter industry self - censorship, and the Surgeon General began a $1 million study on the effects of television. Congress 's concern was shared by the public: in a nationwide poll, 67.5 % of 1,554 Americans agreed with the hypothesis that TV and movie violence prompted violence in real life.
Additionally, the National Association for Better Broadcasting (NABB), in a report eventually issued in November 1969, rated The Wild Wild West "as one of the most violent series on television. ''
After being excoriated by two committees, the networks scrambled to expunge violence from their programming. The Wild Wild West received its cancellation notice in mid-February, even before Pastore 's committee convened. Producer Bruce Lansbury always claimed that "It was a sacrificial lamb... It went off with a 32 or 33 share which in those days was virtually break - even, but it always won its time period. '' This is confirmed by an article by Associated Press reporter Joseph Mohbat: "Shows like ABC 's ' Outcasts ' and NBC 's ' Outsider ', which depended heavily on violence, were scrapped. CBS killed ' The Wild, Wild West ' despite high ratings, because of criticism. It was seen by the network as a gesture of good intentions. '' The networks played it safe thereafter: of the 22 new television shows that debuted in the fall of 1969, not one was a western or detective drama; 14 were comedy or variety series.
Conrad denounced Pastore for many years, but in other interviews he admitted that it probably was time to cancel the series because he felt that he and the stunt men were pushing their luck. He also felt the role had hurt his craft. "In so many roles I was a tough guy and I never advanced much, '' Conrad explained. "Wild Wild West was action adventure. I jumped off roofs and spent all my time with the stuntmen instead of other actors. I thought that 's what the role demanded. That role had no dimension other than what it was -- a caricature of a performance. It was a comic strip character. ''
In the summer of 1970, CBS reran several episodes of The Wild Wild West on Mondays at 10 p.m. as a summer replacement for the Carol Burnett Show. These episodes were "The Night of the Bleak Island '' (aired July 6); "The Night of the Big Blackmail '' (July 13); "The Night of the Kraken '' (July 20); "The Night of the Diva '' (July 27); "The Night of the Simian Terror '' (August 3); "The Night of the Bubbling Death '' (August 11); "The Night of the Returning Dead '' (August 17); "The Night of the Falcon '' (August 24); "The Night of the Underground Terror '' (August 31); and "The Night of the Sedgewick Curse '' (September 7). Curiously, none of these featured Dr. Loveless.
TV critic Lawrence Laurent wrote, "The return of Wild Wild West even for a summer re-run is n't surprising. CBS - TV was never really very eager to cancel this series, since over a four - year run that began in 1965 the Wild Wild West had been a solid winner in the ratings. Cancellation came mainly because CBS officials were concerned about the criticism over televised violence and to a lesser degree because Robert Conrad had grown slightly weary of the role of James West. Ever since last fall 's ratings started rolling in, CBS has wished that it had kept Wild Wild West. None of the replacements have done nearly as well and, as a result, all of the Friday programs suffered. ''
That fall, CBS put the program into syndication, giving it new life on local stations across the country. This further antagonized the anti-violence lobby, since the program was now broadcast weekdays and often after school. One group, The Foundation to Improve Television (FIT), filed a suit on November 12, 1970, to prevent WTOP in Washington, D.C., from airing The Wild Wild West weekday afternoons at 4 pm. The suit was brought in Washington, D.C., specifically to gain government and media attention. The suit said the series "contains fictionalized violence and horror harmful to the mental health and well - being of minor children '', and should not air before 9 pm. WTOP 's vice president and general manager, John R. Corporan, was quoted as saying, "Since programs directed specifically at children are broadcast in the late afternoon by three other TV stations, it is our purpose to counter-program with programming not directed specifically at children. '' US District Court Judge John J. Sirica, who later presided over the trial of the Watergate burglars and ordered US President Richard Nixon to turn over White House recordings, dismissed the lawsuit in January 1971, referring FIT to take their complaint to the FCC. FIT appealed, but a year and a half later the U.S. Court of Appeals upheld the district court decision dismissing the suit on the grounds that FIT had not exhausted the administrative remedies available to them. By then, WTOP had stopped broadcasting the series altogether. At that time, the show was in reruns on about 57 other local stations across the country, including WOR in New York and WFLD in Chicago.
In October 1973 the Los Angeles - based National Association for Better Broadcasting (NABB) reached a landmark agreement with KTTV, a local station, to purge 42 violent cartoon programs, including Mighty Mouse, Magilla Gorilla, Speed Racer, and Gigantor. Additionally, the NABB cited 81 syndicated live - action shows that "may have a detrimental influence on some children who are exposed to such programming without parental guidance or perspective '' when they are telecast before 8: 30 p.m. This list included The Wild Wild West, The Avengers, Batman, Man from UNCLE, Roy Rogers, Wanted Dead or Alive, and The Lone Ranger. In Los Angeles, such shows opened with a cautionary announcement: "Parents -- we wish to advise that because of violence or other possible harmful elements, certain portions of the following program may not be suitable for young children. '' The NABB hoped to use the cartoon ban and warning announcement as a model for similar agreements with other local stations.
By then The Wild Wild West was running on 99 local stations. Its ongoing popularity throughout the 1970s prompted two television movies, The Wild Wild West Revisited (1979) and More Wild Wild West (1980) (see below). By the spring of 1985 the original series was still carried on 74 local stations.
In the late 1980s the series was still seen on local stations in Boston, Hartford, Philadelphia, Pittsburgh and Los Angeles, among other cities. Significantly, WGN (Chicago), which carried the show at 10 a.m. on Sundays, became available nationally through cable television.
In 1994, The Wild Wild West began running on Saturdays at 10 a.m. on Turner Network Television (TNT), which preferred the color episodes to the black and white shows. The series was dropped from WGN soon after. Hallmark Channel aired the series in 2005 as part of its slate of Saturday afternoon Westerns but dropped it after only a few weeks. In 2011 the series began running weekdays and / or weekends on MeTV, then Sundays on the Heroes and Icons digital channel. In 2016 The Wild Wild West returned to MeTV on Saturday afternoons. On January 1, 2018, MeTV began running the series weekday afternoons again, starting with second season (color) episodes. It also airs in the United Kingdom (as of 2015) on the Horror Channel on Sky channel 319, Virgin channel 149, Freeview channel 70 and Freesat channel 138.
Conrad and Martin reunited for two television movies, The Wild Wild West Revisited (aired May 9, 1979) and More Wild Wild West (aired October 7 -- 8, 1980). Revisited introduced Paul Williams as Miguelito Loveless Jr., the son of the agents ' nemesis. (Michael Dunn, who played Dr. Loveless in the original show, had died in 1973.) Loveless planned to substitute clones for the crowned heads of Europe and the President of the United States. (This plot is similar to the second - season episode "The Night of the Brain ''.)
Most of the exterior filming took place at Old Tucson Studios where there were still many "Old West '' buildings and a functioning steam train and tracks. Interiors were shot at CBS Studio Center. Ross Martin said, "We worked on a lot of the same sets at the studio, including the interiors of the old train. We used the same guns and gimmicks and wardrobes -- with the waistlines let out a little bit. The script, unlike the old shows, is played strictly for comedy. It calls for us to be ten years older than when we were last seen. There are a lot more laughs than adventure. ''
More Wild Wild West was initially conceived as a rematch between the agents and Miguelito Jr., but Williams was unavailable for the film; his character was changed to Albert Paradine II and played by Jonathan Winters -- this explains why the story begins with various clones of Paradine being murdered (the first film ends with Loveless having cloned himself and placed the doubles around the world). Paradine planned world conquest using a formula for invisibility (recalling the first - season episode "The Night of the Burning Diamond '').
Both TV films were campier than the TV series, although Conrad and Martin played their roles straight. Both films were directed by veteran comedy Western director Burt Kennedy and written by William Bowers (in the latter case with Tony Kayden, from a story by Bowers); neither Kennedy nor Bowers worked on the original series. The Wild Wild West Revisited takes the agents to a town called Wagon Gap. This was a nod to the Abbott and Costello film, The Wistful Widow of Wagon Gap (1947), which was based on a treatment by Bowers and D.D. Beauchamp of a short story by Beauchamp.
Conrad once revealed that CBS intended to do yearly TV revivals of The Wild Wild West. Variety, in its review of the first TV movie, concurred: "A couple of more movies in this vein, sensibly spaced, could work in the future. '' Ross Martin 's death in 1981, however, put an end to the idea.
Conrad was later quoted in Cinefantastique about these films: "We all got along fine with each other when we did these, but I was n't happy with them only because CBS imposed a lot of restrictions on us. They never came up to the level of what we had done before. ''
The first season of The Wild Wild West was released on DVD in North America on June 6, 2006 by CBS Home Entertainment (distributed by Paramount Home Entertainment). Although it was touted as a special 40th anniversary edition, it appeared 41 years after the show 's 1965 debut. Robert Conrad recorded audio introductions for all 28 first - season episodes, plus a commentary track for the pilot. The set also featured audio interviews by Susan Kesler (for her book, The Wild Wild West: The Series), and 1970s era footage of Conrad and Martin on a daytime talk show. The second season was released on DVD on March 20, 2007; the third season was released on November 20, 2007; and the fourth and final season was released on March 18, 2008. None of the later season sets contained bonus material. A 27 - disc complete series set was released on November 4, 2008. It contains all 104 episodes of the series as well as both reunion telefilms.
On May 12, 2015, CBS Home Entertainment released a repackaged version of the complete series set, at a lower price, but did not include the bonus disc that was part of the original complete series set. On June 13, 2016, the bonus disc was released as a standalone item.
In France, where the series (known locally as Les Mystères de l'Ouest) was a big hit, all four seasons were released in a DVD boxed set before their US release. The French set, released by TF1 Video, includes many of the extras on the US season one set, and many others. "The Night of the Inferno '' is presented twice -- as a regular episode in English with Conrad 's audio commentary, and in a French - dubbed version. All of the episodes are presented in English with French subtitles, and several episode titles differ in translation from the original English titles. For example, "The Night of the Gypsy Peril '', "The Night of the Simian Terror '' and "The Night of Jack O'Diamonds '' respectively translate as "The Night of the White Elephant '', "The Night of the Beast '' and "The Night of the Thoroughbred ''. Both TV movies are included as extras, but only in French - dubbed versions. The set also features a 1999 interview with Robert Conrad at the Mirande Country Music Festival in France.
The series spawned several merchandising spin - offs, including a seven - issue comic book series by Gold Key Comics, and a paperback novel, Richard Wormser 's The Wild Wild West, published in 1966 by Signet (ISBN 0 - 451 - 02836 - 8), which adapted the episode "The Night of the Double - Edged Knife ''.
In 1988, Arnett Press published The Wild Wild West: The Series by Susan E. Kesler (ISBN 0 - 929360 - 00 - 1), a thorough production history and episode guide.
In 1998, Berkeley Books published three novels by author Robert Vaughan -- The Wild Wild West (ISBN 0 - 425 - 16372 - 5), The Night of the Death Train (ISBN 0 - 425 - 16449 - 7), and The Night of the Assassin (ISBN 0 - 425 - 16517 - 5).
In 1990, Millennium Publications produced a four - part comic book series ("The Night of the Iron Tyrants '') scripted by Mark Ellis with art by Darryl Banks. A sequel to the TV series, it involved Dr. Loveless in a conspiracy to assassinate President Grant and the President of Brazil and put the Knights of the Golden Circle into power. The characters of Voltaire and Antoinette were prominent here, despite their respective early departures from Dr. Loveless ' side in the original program. A review from the Mile High Comics site states: "This mini-series perfectly captures the fun mixture of western and spy action that marked the ground - breaking 1960s TV series. '' The storyline of the comics mini-series was optioned for motion picture development.
In the 75th volume of the French comic book series Lucky Luke (L'Homme de Washington), published in 2008, both James West and Artemus Gordon have a minor guest appearance, albeit the names have been changed to "James East '' and "Artémius Gin ''.
When Robert Conrad hosted Saturday Night Live on NBC (January 23, 1982), he appeared in a parody of The Wild Wild West. President Lincoln states his famous quip that, if General U.S. Grant is a drunk, he should send whatever he 's drinking to his other less successful generals. Lincoln dispatches West and Gordon (Joe Piscopo) to find out what Grant drinks. They discover that Grant is held captive by Velvet Jones (Eddie Murphy).
On July 11, 2017, La - La Land Records released a limited edition 4 - disc set of music from the series, featuring Richard Markowitz 's theme, episode scores by Markowitz, Robert Drasnin, Dave Grusin, Richard Shores, Harry Geller, Walter Scharf, Jack Pleis and Fred Steiner, and Dimitri Tiomkin 's unused theme music.
In January 1992, Variety reported that Warner Bros. was planning a theatrical version of The Wild Wild West directed by Richard Donner, written by Shane Black, and starring Mel Gibson as James West (Donner directed three episodes of the original series). Finally, in 1999, a theatrical motion picture loosely based on the series was released as Wild Wild West (without the definite article used in the series title). Directed by Barry Sonnenfeld, the film made substantial changes to the characters of the series, reimagining James West as a black man (played by Will Smith), and Artemus Gordon (played by Kevin Kline) as egotistical and bitterly competitive with West. Additionally, significant changes were made to Dr. Loveless (Kenneth Branagh in the film). No longer a dwarf, he was portrayed as a double amputee confined to a steam - powered wheelchair (similar to that employed by the villain in "The Night of the Brain ''). Loveless ' first name was changed from Miguelito to Arliss, and he was given the motive of a bitter Southerner who sought revenge on the North after the Civil War.
Robert Conrad reportedly was offered the role of President Grant, but turned it down. He was outspoken in his criticism of the new film, which he regarded as little more than a comedic Will Smith showcase with virtually no relationship to the action - adventure series. In a New York Post interview (July 3, 1999), Conrad stated that he disliked the movie and that contractually he was owed a share of money on merchandising that he was not paid. He had a long - standing feud with producer Jon Peters, which may have colored his opinion. He was offended at the racial aspects of the film, as well as the casting of Branagh as a double amputee, rather than a little - person actor, in the role of Loveless.
Conrad took special delight in accepting the Razzie Awards for the motion picture in 1999. It was awarded Worst Picture, Worst Screenplay, and Worst Original Song.
In 2009, Will Smith apologized publicly to Conrad while doing promotion for Seven Pounds:
I made a mistake on Wild Wild West. That could have been better... No, it 's funny because I could never understand why Robert Conrad was so upset with Wild Wild West. And now I get it. It 's like, ' That 's my baby! I put my blood, sweat and tears into that! ' So I 'm going to apologize to Mr. Conrad for that because I did n't realize. I was young and immature. So much pain and joy went into (my series) The Fresh Prince that my greatest desire would be that it 's left alone.
As with many television series, The Wild Wild West had a number of merchandise tie - ins during its run; several of these are described above.
On October 5, 2010, Entertainment Weekly 's website reported that Ron Moore and Naren Shankar were developing a remake of The Wild Wild West for television, but the project apparently stalled. In December 2013, Moore told Wired, "Wild Wild West and Star Trek were two of my great loves. I watched both in syndication in the ' 70s. Wild Wild West was really interesting, that combination of genres -- a Western and secret agent, and they dabbled in the occult and paranormal. I really wanted to do a new version for CBS. I still think it 's a great property. Someday I hope to go back to it. ''
A new fan - produced webseries, Back to the Wild Wild West, began production in November 2011, but apparently has stalled.
|
what is the structure and function of the neuromuscular junction | Neuromuscular junction - wikipedia
A neuromuscular junction (or myoneural junction) is a chemical synapse formed by the contact between a motor neuron and a muscle fiber. It is at the neuromuscular junction that a motor neuron is able to transmit a signal to the muscle fiber, causing muscle contraction.
Muscles require innervation to function -- and even just to maintain muscle tone, avoiding atrophy. Synaptic transmission at the neuromuscular junction begins when an action potential reaches the presynaptic terminal of a motor neuron, which activates voltage - dependent calcium channels to allow calcium ions to enter the neuron. Calcium ions bind to sensor proteins (synaptotagmin) on synaptic vesicles, triggering vesicle fusion with the cell membrane and subsequent neurotransmitter release from the motor neuron into the synaptic cleft. In vertebrates, motor neurons release acetylcholine (ACh), a small molecule neurotransmitter, which diffuses across the synaptic cleft and binds to nicotinic acetylcholine receptors (nAChRs) on the cell membrane of the muscle fiber, also known as the sarcolemma. nAChRs are ionotropic receptors, meaning they serve as ligand - gated ion channels. The binding of ACh to the receptor can depolarize the muscle fiber, causing a cascade that eventually results in muscle contraction.
Neuromuscular junction diseases can be of genetic and autoimmune origin. Genetic disorders, such as Duchenne muscular dystrophy, can arise from mutated structural proteins that comprise the neuromuscular junction, whereas autoimmune diseases, such as myasthenia gravis, occur when antibodies are produced against nicotinic acetylcholine receptors on the sarcolemma.
The neuromuscular junction differs from chemical synapses between neurons. Presynaptic motor axons stop 30 nanometers from the sarcolemma, the cell membrane of a muscle cell. This 30 - nanometer space forms the synaptic cleft through which signalling molecules are released. The sarcolemma has invaginations called postjunctional folds, which increase the surface area of the membrane exposed to the synaptic cleft. These postjunctional folds form what is referred to as the motor endplate, which possesses nicotinic acetylcholine receptors (nAChRs) at a density of 10,000 receptors per square micrometer in skeletal muscle. The presynaptic axons form bulges called terminal boutons (or presynaptic terminals) that project into the postjunctional folds of the sarcolemma. The presynaptic terminals have active zones that contain vesicles, also called quanta, full of acetylcholine molecules. These vesicles can fuse with the presynaptic membrane and release ACh molecules into the synaptic cleft via exocytosis after depolarization. AChRs are localized opposite the presynaptic terminals by protein scaffolds at the postjunctional folds of the sarcolemma. Dystrophin, a structural protein, connects the sarcomere, sarcolemma, and extracellular matrix components. Rapsyn is another protein that docks AChRs and structural proteins to the cytoskeleton. Also present is the receptor tyrosine kinase protein MuSK, a signaling protein involved in the development of the neuromuscular junction, which is also held in place by rapsyn.
The neuromuscular junction is where a neuron activates a muscle to contract. Upon the arrival of an action potential at the presynaptic neuron terminal, voltage - dependent calcium channels open and Ca2+ ions flow from the extracellular fluid into the presynaptic neuron 's cytosol. This influx of Ca2+ causes neurotransmitter - containing vesicles to dock and fuse to the presynaptic neuron 's cell membrane through SNARE proteins. Fusion of the vesicular membrane with the presynaptic cell membrane results in the emptying of the vesicle 's contents (acetylcholine) into the synaptic cleft, a process known as exocytosis. Acetylcholine diffuses into the synaptic cleft and can bind to the nicotinic acetylcholine receptors on the motor endplate.
Acetylcholine is a neurotransmitter synthesized from dietary choline and acetyl - CoA (ACoA), and is involved in the stimulation of muscle tissue in vertebrates as well as in some invertebrate animals. In vertebrate animals, the acetylcholine receptor subtype that is found at the neuromuscular junction of skeletal muscles is the nicotinic acetylcholine receptor (nAChR), which is a ligand - gated ion channel. Each subunit of this receptor has a characteristic "cys - loop, '' which is composed of a cysteine residue followed by 13 amino acid residues and another cysteine residue. The two cysteine residues form a disulfide linkage which results in the "cys - loop '' receptor that is capable of binding acetylcholine and other ligands. These cys - loop receptors are found only in eukaryotes, but prokaryotes possess ACh receptors with similar properties. Not all species use a cholinergic neuromuscular junction; e.g. crayfish and fruit flies have a glutamatergic neuromuscular junction.
AChRs at the skeletal neuromuscular junction form heteropentamers composed of two α, one β, one ɛ, and one δ subunits. When a single ACh ligand binds to one of the α subunits of the ACh receptor it induces a conformational change at the interface with the second AChR α subunit. This conformational change results in the increased affinity of the second α subunit for a second ACh ligand. AChRs therefore exhibit a sigmoidal dissociation curve due to this cooperative binding. The presence of the inactive, intermediate receptor structure with a single - bound ligand keeps ACh in the synapse that might otherwise be lost by cholinesterase hydrolysis or diffusion. The persistence of these ACh ligands in the synapse can cause a prolonged post-synaptic response.
The development of the neuromuscular junction requires signaling from both the motor neuron 's terminal and the muscle cell 's central region. During development, muscle cells produce acetylcholine receptors (AChRs) and express them in the central regions in a process called prepatterning. Agrin, a heparin proteoglycan, and MuSK kinase are thought to help stabilize the accumulation of AChR in the central regions of the myocyte. MuSK is a receptor tyrosine kinase -- meaning that it induces cellular signaling by attaching phosphate groups to tyrosine residues on itself and on other target proteins in the cytoplasm. Upon activation by its ligand agrin, MuSK signals via two proteins called "Dok - 7 '' and "rapsyn '' to induce "clustering '' of acetylcholine receptors. ACh release by developing motor neurons produces postsynaptic potentials in the muscle cell that positively reinforce the localization and stabilization of the developing neuromuscular junction.
These findings were demonstrated in part by mouse "knockout '' studies. In mice which are deficient for either agrin or MuSK, the neuromuscular junction does not form. Further, mice deficient in Dok - 7 did not form either acetylcholine receptor clusters or neuromuscular synapses.
The development of neuromuscular junctions is mostly studied in model organisms, such as rodents. In addition, in 2015 an all - human neuromuscular junction has been created in vitro using human embryonic stem cells and somatic muscle stem cells. In this model presynaptic motor neurons are activated by optogenetics and in response synaptically connected muscle fibers twitch upon light stimulation.
José del Castillo and Bernard Katz used ionophoresis to determine the location and density of nicotinic acetylcholine receptors (nAChRs) at the neuromuscular junction. With this technique, a microelectrode was placed inside the motor endplate of the muscle fiber, and a micropipette filled with acetylcholine (ACh) was placed directly in front of the endplate in the synaptic cleft. A positive voltage was applied to the tip of the micropipette, which caused a burst of positively charged ACh molecules to be released from the pipette. These ligands flowed into the space representing the synaptic cleft and bound to AChRs. The intracellular microelectrode monitored the amplitude of the depolarization of the motor endplate in response to ACh binding to nicotinic (ionotropic) receptors. Katz and del Castillo showed that the amplitude of the depolarization (excitatory postsynaptic potential) depended on the proximity of the micropipette releasing the ACh ions to the endplate. The farther the micropipette was from the motor endplate, the smaller the depolarization was in the muscle fiber. This allowed the researchers to determine that the nicotinic receptors were localized to the motor endplate in high density.
Toxins are also used to determine the location of acetylcholine receptors at the neuromuscular junction. α - Bungarotoxin is a toxin found in the snake species Bungarus multicinctus that acts as an ACh antagonist and binds to AChRs irreversibly. By coupling assayable enzymes such as horseradish peroxidase (HRP) or fluorescent proteins such as green fluorescent protein (GFP) to the α - bungarotoxin, AChRs can be visualized and quantified.
Nerve gases (such as sarin) and heavy alcohol use can damage this area; nerve agents act by inhibiting acetylcholinesterase, so that acetylcholine accumulates in the synaptic cleft and produces sustained depolarization of the muscle fiber.
Botulinum toxin (also known as botulinum neurotoxin, BoNT, and sold under the trade name Botox) inhibits the release of acetylcholine at the neuromuscular junction by interfering with SNARE proteins. This toxin crosses into the nerve terminal through the process of endocytosis and subsequently interferes with SNARE proteins, which are necessary for ACh release. By doing so, it induces a transient flaccid paralysis and chemical denervation localized to the striated muscle that it has affected. The inhibition of ACh release does not set in until approximately two weeks after the injection is made. Three months after the inhibition occurs, neuronal function begins to partially recover, and at six months, complete neuronal function is regained.
Tetanus toxin, also known as tetanospasmin, is a potent neurotoxin produced by Clostridium tetani that causes the disease tetanus. The LD50 of this toxin has been measured to be approximately 1 ng / kg, making it second only to botulinum toxin D as the deadliest toxin in the world. It functions very similarly to botulinum neurotoxin (BoNT) by attaching to and endocytosing into the presynaptic nerve terminal and interfering with SNARE protein complexes. It differs from BoNT in a few ways, most apparently in its end state, wherein tetanospasmin produces a rigid / spastic paralysis as opposed to the flaccid paralysis produced by BoNT.
Latrotoxin (α - latrotoxin), found in the venom of widow spiders, also affects the neuromuscular junction by causing the release of acetylcholine from the presynaptic cell. Its mechanisms of action include binding to receptors on the presynaptic cell, which activates the IP3 / DAG pathway and releases calcium from intracellular stores, and pore formation, which results in a direct influx of calcium ions. Either mechanism increases calcium in the presynaptic cell, which then leads to the release of synaptic vesicles of acetylcholine. Latrotoxin causes pain and muscle contraction and, if untreated, potentially paralysis and death.
Snake venoms act as toxins at the neuromuscular junction and can induce weakness and paralysis. Venoms can act as both presynaptic and postsynaptic neurotoxins.
Presynaptic neurotoxins, commonly known as β - neurotoxins, affect the presynaptic regions of the neuromuscular junction. The majority of these neurotoxins act by inhibiting the release of neurotransmitters, such as acetylcholine, into the synapse between neurons. However, some of these toxins have also been known to enhance neurotransmitter release. Those that inhibit neurotransmitter release create a neuromuscular blockade that prevents signaling molecules from reaching their postsynaptic target receptors. As a result, the victims of these snake bites suffer from profound weakness. Such neurotoxins do not respond well to anti-venoms. One hour after inoculation with these toxins, including notexin and taipoxin, many of the affected nerve terminals show signs of irreversible physical damage, leaving them devoid of any synaptic vesicles.
Postsynaptic neurotoxins, otherwise known as α - neurotoxins, act oppositely to the presynaptic neurotoxins by binding to the postsynaptic acetylcholine receptors. This prevents interaction between the acetylcholine released by the presynaptic terminal and the receptors on the postsynaptic cell. In effect, the opening of sodium channels associated with these acetylcholine receptors is prohibited, resulting in a neuromuscular blockade, similar to the effects seen due to presynaptic neurotoxins. This causes paralysis in the muscles involved in the affected junctions. Unlike presynaptic neurotoxins, postsynaptic toxins are more easily affected by anti-venoms, which accelerate the dissociation of the toxin from the receptors, ultimately causing a reversal of paralysis. These neurotoxins experimentally and qualitatively aid in the study of acetylcholine receptor density and turnover, as well as in studies observing the direction of antibodies toward the affected acetylcholine receptors in patients diagnosed with myasthenia gravis.
Any disorder that compromises the synaptic transmission between a motor neuron and a muscle cell is categorized under the umbrella term of neuromuscular diseases. These disorders can be inherited or acquired and can vary in their severity and mortality. In general, most of these disorders tend to be caused by mutations or autoimmune disorders. Autoimmune disorders, in the case of neuromuscular diseases, tend to be humoral mediated, B cell mediated, and result in an antibody improperly created against a motor neuron or muscle fiber protein that interferes with synaptic transmission or signaling.
Myasthenia gravis is an autoimmune disorder where the body makes antibodies against either the acetylcholine receptor (AChR) (in 80 % of cases), or against postsynaptic muscle - specific kinase (MuSK) (0 -- 10 % of cases). In seronegative myasthenia gravis, low - density lipoprotein receptor - related protein 4 is targeted by IgG1, which acts as a competitive inhibitor of its ligand, preventing the ligand from binding its receptor. It is not known if seronegative myasthenia gravis will respond to standard therapies.
Neonatal MG is an autoimmune disorder that affects 1 in 8 children born to mothers who have been diagnosed with Myasthenia gravis (MG). MG can be transferred from the mother to the fetus by the movement of AChR antibodies through the placenta. Signs of this disease at birth include weakness, which responds to anticholinesterase medications, as well as fetal akinesia, or the lack of fetal movement. This form of the disease is transient, lasting for about three months. However, in some cases, neonatal MG can lead to other health effects, such as arthrogryposis and even fetal death. These conditions are thought to be initiated when maternal AChR antibodies are directed to the fetal AChR and can last until the 33rd week of gestation, when the γ subunit of AChR is replaced by the ε subunit.
Lambert - Eaton myasthenic syndrome (LEMS) is an autoimmune disorder that affects the presynaptic portion of the neuromuscular junction. This rare disease can be marked by a unique triad of symptoms: proximal muscle weakness, autonomic dysfunction, and areflexia. Proximal muscle weakness is a product of pathogenic autoantibodies directed against P / Q - type voltage - gated calcium channels, which in turn leads to a reduction of acetylcholine release from motor nerve terminals on the presynaptic cell. Examples of autonomic dysfunction caused by LEMS include erectile dysfunction in men, constipation, and, most commonly, dry mouth. Less common dysfunctions include dry eyes and altered perspiration. Areflexia is a condition in which tendon reflexes are reduced and it may subside temporarily after a period of exercise.
50 -- 60 % of patients diagnosed with LEMS also present with an associated tumor, which is typically small - cell lung carcinoma (SCLC). This type of tumor also expresses voltage - gated calcium channels. Oftentimes, LEMS also occurs alongside myasthenia gravis.
Treatment for LEMS consists of using 3, 4 - diaminopyridine as a first measure, which serves to increase the compound muscle action potential as well as muscle strength by lengthening the time that voltage - gated calcium channels remain open after blocking voltage - gated potassium channels. In the US, treatment with 3, 4 - diaminopyridine for eligible LEMS patients is available at no cost under an expanded access program. Further treatment includes the use of prednisone and azathioprine in the event that 3, 4 - diaminopyridine does not aid in treatment.
Neuromyotonia (NMT), otherwise known as Isaac 's syndrome, is unlike many other diseases present at the neuromuscular junction. Rather than causing muscle weakness, NMT leads to the hyperexcitation of motor nerves. NMT causes this hyperexcitation by producing longer depolarizations by down - regulating voltage - gated potassium channels, which causes greater neurotransmitter release and repetitive firing. This increase in rate of firing leads to more active transmission and as a result, greater muscular activity in the affected individual. NMT is also believed to be of autoimmune origin due to its associations with autoimmune symptoms in the individual affected.
Congenital myasthenic syndromes (CMS) are very similar to both MG and LEMS in their functions, but the primary difference between CMS and those diseases is that CMS is of genetic origins. Specifically, these syndromes are diseases incurred due to mutations, typically recessive, in 1 of at least 10 genes that affect presynaptic, synaptic, and postsynaptic proteins in the neuromuscular junction. Such mutations usually arise in the ε - subunit of AChR, thereby affecting the kinetics and expression of the receptor itself. Single nucleotide substitutions or deletions may cause loss of function in the subunit. Other mutations, such as those affecting acetylcholinesterase and acetyltransferase, can also cause the expression of CMS, with the latter being associated specifically with episodic apnea. These syndromes can present themselves at different times within the life of an individual. They may arise during the fetal phase, causing fetal akinesia, or the perinatal period, during which certain conditions, such as arthrogryposis, ptosis, hypotonia, ophthalmoplegia, and feeding or breathing difficulties, may be observed. They could also activate during adolescence or adult years, causing the individual to develop slow - channel syndrome.
Treatment for particular subtypes of CMS (postsynaptic fast - channel CMS) is similar to treatment for other neuromuscular disorders. 3, 4 - Diaminopyridine, the first - line treatment for LEMS, is under development as an orphan drug for CMS in the US, and available to eligible patients under an expanded access program at no cost.
Bulbospinal muscular atrophy, also known as Kennedy 's disease, is a rare, recessive trinucleotide - repeat (polyglutamine) disorder that is linked to the X chromosome. Because of its linkage to the X chromosome, it is typically transmitted through females. However, Kennedy 's disease is only present in adult males and the onset of the disease is typically later in life. This disease is specifically caused by the expansion of a CAG tandem repeat in exon 1 of the androgen - receptor (AR) gene on chromosome Xq11 - 12. Poly - Q - expanded AR accumulates in the nuclei of cells, where it begins to fragment. After fragmentation, degradation of the cell begins, leading to a loss of both motor neurons and dorsal root ganglia.
Symptoms of Kennedy 's disease include weakness and wasting of the facial bulbar and extremity muscles, as well as sensory and endocrinological disturbances, such as gynecomastia and reduced fertility. Other symptoms include elevated testosterone and other sexual hormone levels, development of hyper - CK - emia, abnormal conduction through motor and sensory nerves, and neuropathic or in rare cases myopathic alterations on biopsies of muscle cells.
Duchenne muscular dystrophy is an X-linked genetic disorder that results in the absence of the structural protein dystrophin at the neuromuscular junction. It affects 1 in 3,600 -- 6,000 males and frequently causes death by the age of 30. The absence of dystrophin causes muscle degeneration, and patients present with the following symptoms: abnormal gait, hypertrophy in the calf muscles, and elevated creatine kinase. If left untreated, patients may suffer from respiratory distress, which can lead to death.
|
how old must you be to drive in france | List of countries by minimum Driving age - wikipedia
The minimum driving age is the minimum age at which a person may obtain a driver 's licence to lawfully drive a motor vehicle on public roads. That age is determined by and for each jurisdiction and is most commonly set at 18 years of age, but learner drivers may be permitted on the road at an earlier age under supervision. Before reaching the minimum age for a driver 's licence or anytime afterwards, the person wanting the licence would normally be tested for driving ability and knowledge of road rules before being issued with a licence, provided he or she is above the minimum driving age. Countries with the lowest driving ages (17 and below) are Canada, El Salvador, Iceland, Israel, India, Malaysia, New Zealand, the Philippines, the United Kingdom (Mainland), United States and Zimbabwe. In several jurisdictions in the United States and Canada, drivers can be as young as 14.
Most jurisdictions recognise driver 's licences issued by another jurisdiction, which may result in a young person who obtains a licence in a jurisdiction with a low minimum driving age being permitted to drive in a jurisdiction which normally has a higher driving age.
The minimum age may vary depending on vehicle type. This list refers to the minimum driving age for a motor vehicle with a maximum authorized mass not exceeding 3,500 kg and designed and constructed for the carriage of no more than eight passengers in addition to the driver (not including a trailer).
16 for motorcycles
16 (mopeds and tractors)
16 (All other states and territories)
|
universities in the coastal plain of north carolina | List of colleges and universities in North Carolina - wikipedia
The following is a list of colleges and universities in the U.S. state of North Carolina.
|
when did the half penny stop being legal tender | Halfpenny (British decimal coin) - wikipedia
The British decimal halfpenny (1⁄2p) coin was introduced in February 1971, at the time of decimalisation, and was worth one two - hundredth of a pound sterling. It was ignored in banking transactions, which were carried out in units of 1p.
The decimal halfpenny had the same value as 1.2 pre-decimal pence, and was introduced to enable the prices of some low - value items to be more accurately translated to the new decimal currency. The possibility of setting prices including an odd half penny also made it more practical to retain the pre-decimal sixpence in circulation (with a value of 21⁄2 new pence) alongside the new decimal coinage.
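As a quick check, the 1.2 d and 2 1⁄2 p figures follow from nothing more than the two definitions of the pound -- 240 old pence (d) before decimalisation and 100 new pence (p) after -- as the worked conversion below shows:

$$1\,\text{p} = \frac{240\,\text{d}}{100} = 2.4\,\text{d}, \qquad \tfrac{1}{2}\,\text{p} = 0.5 \times 2.4\,\text{d} = 1.2\,\text{d}, \qquad 6\,\text{d} = \frac{6}{2.4}\,\text{p} = 2.5\,\text{p}.$$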
The halfpenny coin 's obverse featured the profile of Queen Elizabeth II; the reverse featured an image of St Edward 's Crown. It was minted in bronze (like the 1p and 2p coins). It was the smallest decimal coin in both size and value. The size was in proportion to the 1p and 2p coins. It soon became Britain 's least favourite coin. The Treasury had continued to argue that the halfpenny was important in the fight against inflation (preventing prices from being rounded up). The coin was demonetised and withdrawn from circulation in December 1984.
The reverse of the coin, designed by Christopher Ironside, was a representation of St Edward 's Crown, with the numeral "1⁄2 '' below the crown, and either NEW PENNY (1971 -- 1981) or HALF PENNY (1982 -- 1984) above the crown.
Only one design of obverse was used on the halfpenny coin. The inscription around the portrait on the obverse was ELIZABETH II D.G. REG. F.D. 19xx, where 19xx was the year of minting. Both sides of the coin are encircled by dots, a common feature on coins, known as beading.
As on all decimal coins produced before 1984, the portrait of Queen Elizabeth II by Arnold Machin appeared on the obverse; in this portrait the Queen wears the ' Girls of Great Britain and Ireland ' Tiara.
Annual number of coins released into general circulation (excludes proof sets)
A decimal quarter - penny coin (to be struck in aluminium) was also proposed (which would have allowed the pre-decimal threepence to continue to circulate with a value of 1.25 new pence), but was never produced.
|
what is the difference between an ice age and an interglacial period | Glacial period - wikipedia
A glacial period (alternatively glacial or glaciation) is an interval of time (thousands of years) within an ice age that is marked by colder temperatures and glacier advances. Interglacials, on the other hand, are periods of warmer climate between glacial periods. The last glacial period ended about 15,000 years ago. The Holocene epoch is the current interglacial. A time when there are no glaciers on Earth is considered a greenhouse climate state.
Within the Quaternary glaciation (2.6 Ma to present), there have been a number of glacials and interglacials.
The last glacial period was the most recent glacial period within the current ice age. It occurred during the Pleistocene epoch, beginning about 110,000 years ago and ending about 15,000 years ago. The glaciations that occurred during this glacial period covered many areas of the Northern Hemisphere and have different names, depending on their geographic distributions: Wisconsin (in North America), Devensian (in Great Britain), Midlandian (in Ireland), Würm (in the Alps), Weichsel (in northern central Europe), Dali (in East China), Beiye (in North China), Taibai (in Shaanxi), Luojishan (in Southwest Sichuan), Zagunao (in Northwest Sichuan), Tianchi (in Tianshan Mountains), Qomolangma (in Himalaya), and Llanquihue (in Chile). The glacial advance reached its maximum extent about 18,000 BP. In Europe, the ice sheet reached Northern Germany. In the last 650,000 years, there were, on average, seven cycles of glacial advance and retreat.
Since orbital variations are predictable, computer models that relate orbital variations to climate can predict future climate possibilities. Two caveats are necessary: that anthropogenic effects (human - assisted global warming) are likely to exert a larger influence over the short term; and that the mechanism by which orbital forcing influences climate is not well understood. Work by Berger and Loutre suggests that the current warm climate may last another 50,000 years.
|
which section of the united states ignored the embargo act and continued to trade with great britain | Embargo Act of 1807 - wikipedia
The Embargo Act of 1807 was a general embargo enacted by the United States Congress against Great Britain and France during the Napoleonic Wars.
The embargo was imposed in response to violations of United States neutrality, in which American merchantmen and their cargo were seized as contraband of war by the belligerent European navies. The British Royal Navy, in particular, resorted to impressment, forcing thousands of American seamen into service on their warships. Britain and France, engaged in the Napoleonic Wars, rationalized the plunder of U.S. shipping as incidental to war and necessary for their survival. Americans saw the Chesapeake - Leopard Affair as a particularly egregious example of a British violation of American neutrality. Perceived diplomatic insults and unwarranted official orders issued in support of these actions by European powers were argued by some to be grounds for a U.S. declaration of war.
President Thomas Jefferson acted with restraint as these antagonisms mounted, weighing public support for retaliation. He recommended that Congress respond with commercial warfare, rather than with military mobilization. The Embargo Act was signed into law on December 22, 1807. The anticipated effect of this measure -- economic hardship for the belligerent nations -- was expected to chasten Great Britain and France, and force them to end their molestation of American shipping, respect U.S. neutrality, and cease the policy of impressment. The embargo turned out to be impractical as a coercive measure, and was a failure both diplomatically and economically. As implemented, the legislation inflicted devastating burdens on the U.S. economy and the American people.
Widespread evasion of the maritime and inland trade restrictions by American merchants, as well as loopholes in the legislation, greatly reduced the impact of the embargo on the intended targets in Europe. British merchant marine appropriated the lucrative trade routes relinquished by U.S. shippers due to the embargo. Demand for English goods rose in South America, offsetting losses suffered as a result of Non-Importation Acts. The embargo undermined national unity in the U.S., provoking bitter protests, especially in New England commercial centers. The issue vastly increased support for the Federalist Party and led to huge gains in their representation in Congress and in the electoral college in 1808. The embargo had the effect of simultaneously undermining American citizens ' faith that their government could execute its own laws fairly, and strengthening the conviction among America 's enemies that its republican form of government was inept and ineffectual. At the end of 15 months, the embargo was revoked on March 1, 1809, in the last days of Jefferson 's presidency. Tensions with Britain continued to grow, leading to the War of 1812.
After the short truce in 1802 -- 1803 the European wars resumed and continued until the defeat of Napoleon in 1814. The war caused American relations with both Britain and France to deteriorate rapidly. There was grave risk of war with one or the other. With Britain supreme on the sea, and France on the land, the war developed into a struggle of blockade and counterblockade. This commercial war peaked in 1806 and 1807. Britain 's Royal Navy shut down most European harbors to American ships unless they first traded through British ports. France declared a paper blockade of Britain (which it lacked a navy to enforce) and seized American ships that obeyed British regulations. The Royal Navy needed large numbers of sailors, and saw the U.S. merchant fleet as a haven for British sailors.
The British system of impressment humiliated and dishonored the U.S. because it was unable to protect its ships and their sailors. This British practice of taking British deserters, and often Americans, from American ships and forcing them into the Royal Navy increased greatly after 1803, and caused bitter anger in the United States.
On June 22, 1807, the American warship USS Chesapeake was attacked and boarded on the high seas off the coast of Norfolk, Virginia, by the British warship HMS Leopard. Three Americans were killed and 18 wounded; the British impressed four seamen with American papers as alleged deserters. The outraged nation demanded action; President Jefferson ordered all British ships out of American waters.
Passed on December 22, 1807, the Act:
This shipping embargo was a cumulative addition to the Non-importation Act of 1806 (2 Stat. 379), this earlier act being a "Prohibition of the Importation of certain Goods and Merchandise from the Kingdom of Great Britain ''; the prohibited imports were defined as goods whose chief value consists of leather, silk, hemp or flax, tin or brass, wool, or glass, as well as paper goods, nails, hats, clothing, and beer.
The Embargo Act of 1807 is codified at 2 Stat. 451 and formally titled "An Embargo laid on Ships and Vessels in the Ports and Harbours of the United States ''. The bill was drafted at the request of President Thomas Jefferson and subsequently passed by the Tenth U.S. Congress, on December 22, 1807, during Session 1; Chapter 5. Congress initially acted to enforce a bill prohibiting imports, but supplements to the bill eventually banned exports as well.
The embargo, which lasted from December 1807 to March 1809 effectively throttled American overseas trade. All areas of the United States suffered. In commercial New England and the Middle Atlantic states, ships rotted at the wharves, and in the agricultural areas, particularly in the South, farmers and planters could not sell their crops on the international market. For New England, and especially for the Middle Atlantic states, there was some consolation, for the scarcity of European goods meant that a definite stimulus was given to the development of American industry.
The embargo was a financial disaster for the Americans because the British were still able to export goods to America: initial loopholes overlooked smuggling by coastal vessels from Canada, whaling ships and privateers from overseas; and widespread disregard of the law meant enforcement was difficult.
A 2005 study by economic historian Douglas Irwin estimates that the embargo cost about 5 percent of America 's 1807 GNP.
A case study of Rhode Island shows the embargo devastated shipping - related industries, wrecked existing markets, and caused an increase in opposition to the Democratic - Republican Party. Smuggling was widely endorsed by the public, who viewed the embargo as a violation of their rights. Public outcry continued, helping the Federalists regain control of the state government in 1808 -- 09. The case is a rare example of American national foreign policy altering local patterns of political allegiance. Despite its unpopular nature, the Embargo Act did have some limited, unintended benefits to the Northeastern region especially as it drove capital and labor into New England textile and other manufacturing industries, lessening America 's reliance on the British.
In Vermont, the embargo was doomed to failure on the Lake Champlain - Richelieu River water route because of Vermont 's dependence on a Canadian outlet for produce. At St. John, Lower Canada, £ 140,000 worth of goods smuggled by water were recorded there in 1808 -- a 31 % increase over 1807. Shipments of ashes (used to make soap) nearly doubled to £ 54,000, but lumber dropped 23 % to £ 11,200. Manufactured goods, which had expanded to £ 50,000 since Jay 's Treaty of 1795, fell over 20 %, especially articles made near Tidewater. Newspapers and manuscripts recorded more lake activity than usual, despite the theoretical reduction in shipping that should accompany an embargo. The smuggling was not restricted to water routes, as herds were readily driven across the uncontrollable land border. Southbound commerce gained two - thirds overall, but furs dropped a third. Customs officials maintained a stance of vigorous enforcement throughout and Gallatin 's Enforcement Act (1809) was a party issue. Many Vermonters preferred the embargo 's exciting game of revenuers versus smugglers, bringing high profits, versus mundane, low - profit normal trade.
The New England merchants who evaded the embargo were imaginative, daring, and versatile in their violation of federal law. Gordinier (2001) examines how the merchants of New London, Connecticut, organized and managed the cargoes purchased and sold, and the vessels used during the years before, during, and after the embargo. Trade routes and cargoes, both foreign and domestic, along with the vessel types, and the ways their ownership and management were organized show the merchants of southeastern Connecticut evinced versatility in the face of crisis.
Gordinier (2001) concludes the versatile merchants sought alternative strategies for their commerce, and to a lesser extent, for their navigation. They tried extra-legal activities, a reduction in the size of the foreign fleet, and the re-documentation of foreign trading vessels into domestic carriage. Most importantly, they sought new domestic trading partners, and took advantage of the political power of Jedidiah Huntington, the Customs Collector. Huntington was an influential member of the Connecticut leadership class (called "the Standing Order ''); he allowed scores of embargoed vessels to depart for foreign ports under the guise of "special permission. '' Old modes of sharing vessel ownership in order to share the risk proved to be hard to modify. Instead established relationships continued through the embargo crisis, in spite of numerous bankruptcies.
Jefferson 's Secretary of the Treasury Albert Gallatin was against the entire embargo, foreseeing correctly the impossibility of enforcing the policy and the negative public reaction. "As to the hope that it may... induce England to treat us better, '' wrote Gallatin to Jefferson shortly after the bill had become law, "I think is entirely groundless... government prohibitions do always more mischief than had been calculated; and it is not without much hesitation that a statesman should hazard to regulate the concerns of individuals as if he could do it better than themselves. ''
Since the bill hindered U.S. ships from leaving American ports bound for foreign trade, it had the side - effect of hindering American exploration.
Just weeks later, on January 8, 1808, legislation again passed the Tenth U.S. Congress, Session 1; Chapter 8: "An Act supplementary... '' to the Embargo Act (2 Stat. 453). As historian Forrest McDonald wrote, "A loophole had been discovered '' in the initial enactment, "namely that coasting vessels, and fishing and whaling boats '' had been exempt from the embargo, and they had been circumventing it, primarily via Canada. This supplementary act extended the bonding provision (i.e. Section 2 of the initial Embargo Act) to those of purely domestic trades:
Meanwhile, Jefferson requested authorization from Congress to raise 30,000 troops from the current standing army of 2,800. Congress refused. With their harbors for the most part unusable in the winter anyway, New England and the north ports of the mid-Atlantic states had paid little notice to the previous embargo acts. That was to change with the spring thaw, and the passing of yet another embargo act.
With the coming of the spring, the effect of the previous acts were immediately felt throughout the coastal states, especially in New England. An economic downturn turned into a depression and caused increasing unemployment. Protests occurred up and down the eastern coast. Most merchants and shippers simply ignored the laws. On the Canada -- US border, especially in upstate New York and Vermont, the embargo laws were openly flouted. Federal officials believed parts of Maine, such as Passamaquoddy Bay on the border with British - held New Brunswick, were in open rebellion. By March, an increasingly frustrated Jefferson was resolved to enforce the embargo to the letter.
On March 12, 1808, Congress passed and Jefferson signed into law yet another supplement to the Embargo Act. This supplement prohibited, for the first time, all exports of any goods, whether by land or by sea. Violators were subject to a fine of US $10,000, plus forfeiture of goods, per offense. It granted the President broad discretionary authority to enforce, deny, or grant exceptions to the embargo. Port authorities were authorized to seize cargoes without a warrant and to try any shipper or merchant who was thought to have merely contemplated violating the embargo.
Despite the added penalties, citizens and shippers openly ignored the embargo. Protests continued to grow; and so it was that the Jefferson administration requested and Congress rendered yet another embargo act.
The Embargo was hurting the United States as much as Britain or France. Britain, expecting to suffer most from the American regulations, built up a new South American market for its exports, and the British shipowners were pleased that American competition had been removed by the action of the U.S. government.
Jefferson placed himself in a strange position with his Embargo policy. Though he had so frequently and eloquently argued for as little government intervention as possible, he now found himself assuming extraordinary powers in an attempt to enforce his policy. The presidential election of 1808, in which James Madison defeated Charles Cotesworth Pinckney, showed that the Federalists were regaining strength, and helped to convince Jefferson and Madison that the Embargo would have to be removed.
Shortly before leaving office, in March 1809, Jefferson signed the repeal of the failed Embargo. Despite its unpopular nature, the Embargo Act did have some limited, unintended benefits, especially as entrepreneurs and workers responded by bringing in fresh capital and labor into New England textile and other manufacturing industries, lessening America 's reliance on the British merchants.
On March 1, 1809, Congress passed the Non-Intercourse Act, a law that enabled the President, once the wars of Europe ended, to declare the country sufficiently safe and to allow foreign trade with certain nations.
In 1810 the government was ready to try yet another tactic of economic coercion, in the desperate measure known as Macon 's Bill Number 2. This bill became law on May 1, 1810, and replaced the Non-Intercourse Act. It was an acknowledgment of the failure of economic pressure to coerce the European powers. Trade with both Britain and France was now thrown open, and the United States attempted to bargain with the two belligerents. If either power would remove her restrictions on American commerce, the United States would reapply non-intercourse against the power that had not so acted. Napoleon quickly took advantage of this opportunity. He promised that his Berlin and Milan Decrees would be repealed, and Madison reinstated non-intercourse against Britain in the fall of 1810. Though Napoleon did not fulfill his promise, strained Anglo - American relations prevented his being brought to task for his duplicity.
The attempt of Jefferson and Madison to resist aggression by peaceful means gained a belated success in June 1812 when Britain finally promised to repeal her Orders in Council. The British concession was too late, for by the time the news reached America the United States had already declared the War of 1812 against Britain.
The entire series of events was ridiculed in the press as Dambargo, Mob - Rage, Go - bar - ' em or O - grab - me ("Embargo '' spelled backward); there was a cartoon ridiculing the Act as a snapping turtle, named "O ' grab me '', grabbing at American shipping.
America 's declaration of war, in mid-June 1812, was followed shortly by the Enemy Trade Act of 1812 on July 6, which employed similar restrictions as previous legislation; it was likewise ineffective and tightened in December 1813, and debated for further tightening in December 1814. After existing embargoes expired with the onset of war, the Embargo Act of 1813 was signed into law December 17, 1813. Four new restrictions were included: An embargo prohibiting all American ships and goods from leaving port; a complete ban on certain commodities customarily produced in the British Empire; a ban against foreign ships trading in American ports unless 75 % of the crew were citizens of the ship 's flag; and a ban on ransoming ships. The Embargo of 1813 was the nation 's last great trade restriction. Never again would the US government cut off all its trade to achieve a foreign policy objective. The act particularly hurt the northeastern states, since the British kept a tighter blockade on the south, and thus encouraged American opposition to the administration. To make his point, the act was not lifted by Madison until after the defeat of Napoleon, and the point was moot. On February 15, 1815, Madison signed the Enemy Trade Act of 1815; it was tighter than any previous trade restriction including the Enforcement Act of 1809 (January 9) and the Embargo of 1813, but it would expire two weeks later when official word of peace from Ghent was received.
|
where was the movie while you were sleeping filmed | While you were sleeping (film) - wikipedia
While You Were Sleeping is a 1995 romantic comedy film directed by Jon Turteltaub and written by Daniel G. Sullivan and Fredric Lebow. It stars Sandra Bullock as Lucy, a Chicago Transit Authority token collector, and Bill Pullman as Jack, the brother of a man whose life she saves, along with Peter Gallagher as Peter, the man who is saved, Peter Boyle and Glynis Johns as members of Peter 's family, and Jack Warden as longtime family friend and neighbor.
Lucy Eleanor Moderatz (Sandra Bullock) is a lonely fare token collector for the Chicago Transit Authority. She has a secret crush on a handsome commuter named Peter Callaghan (Peter Gallagher), although they are complete strangers. On Christmas Day, she rescues him from the oncoming Chicago "L '' train after a group of muggers push him onto the tracks. He falls into a coma, and she accompanies him to the hospital, where a nurse overhears her musing aloud, "I was going to marry him. '' Misinterpreting her, the nurse tells his family that she is his fiancée.
At first she is too caught up in the panic to explain the truth. She winds up keeping the secret for a number of reasons: she is embarrassed, Peter 's grandmother Elsie (Glynis Johns) has a heart condition, and Lucy quickly comes to love being a part of Peter 's big, loving family. One night, thinking she is alone while visiting Peter, she confesses about her predicament. Peter 's godfather Saul (Jack Warden) overhears the truth and later confronts her, but tells her he will keep her secret, because the accident has brought the family closer.
With no family and few friends, Lucy becomes so captivated with the quirky Callaghans and their unconditional love for her that she can not bring herself to hurt them by revealing that Peter does not even know her. She spends a belated Christmas with them and then meets Peter 's younger brother Jack (Bill Pullman), who is supposed to take over his father 's furniture business. He is suspicious of her at first, but he falls in love with her as they spend time together. They develop a close friendship and soon she falls in love with him as well.
After New Year 's Eve, Peter wakes up. He does not know Lucy, so it is assumed that he must have amnesia. She and Peter spend time together, and Saul persuades Peter to propose to her "again ''; she agrees even though she is in love with Jack. When Jack visits her the day before the wedding, she gives him a chance to change her mind, asking him if he can give her a reason not to marry Peter. He replies that he can not, leaving her disappointed.
On the day of the wedding, just as the priest begins the ceremony, Lucy finally confesses everything and tells the family she loves Jack rather than Peter. At this point, Peter 's real fiancée Ashley Bartlett Bacon (Ally Walker), who happens to be married herself, arrives and also demands the wedding be stopped. As the family argues, Lucy slips out unnoticed, unsure of her future.
Some time later, while Lucy is at work, Jack places an engagement ring in the token tray of her booth. She lets him into the booth, and with the entire Callaghan family watching, he proposes to her. In the last scenes of the film, they kiss at the end of their wedding, then leave on a CTA train for their honeymoon. She narrates that he fulfilled her dream of going to Florence, Italy, and explains that, when Peter asked when she fell in love with Jack, she replied, "it was while you were sleeping. ''
Julia Roberts was offered the role of Lucy Moderatz but turned it down.
The film was a tremendous success, grossing a total of $182,057,016 worldwide against an estimated $17,000,000 budget. It made $9,288,915 on its opening weekend of April 21 -- 23, 1995. It was the thirteenth - highest grosser of 1995 in the United States. Also, the film is recognized by American Film Institute in these lists:
|
when was universal declaration of human rights adopted | Universal Declaration of Human Rights - wikipedia
The Universal Declaration of Human Rights (UDHR) is a historic document that was adopted by the United Nations General Assembly at its third session on 10 December 1948 as Resolution 217 at the Palais de Chaillot in Paris, France. Of the then 58 members of the United Nations, 48 voted in favor, none against, eight abstained, and two did not vote.
The Declaration consists of 30 articles affirming an individual 's rights which, although not legally binding in themselves, have been elaborated in subsequent international treaties, economic transfers, regional human rights instruments, national constitutions, and other laws. The Declaration was the first step in the process of formulating the International Bill of Human Rights, which was completed in 1966 and came into force in 1976, after a sufficient number of countries had ratified its constituent covenants.
Some legal scholars have argued that because countries have constantly invoked the Declaration for more than 50 years, it has become binding as a part of customary international law. However, in the United States, the Supreme Court in Sosa v. Alvarez - Machain (2004), concluded that the Declaration "does not of its own force impose obligations as a matter of international law. '' Courts of other countries have also concluded that the Declaration is not in and of itself part of domestic law.
The underlying structure of the Universal Declaration was introduced in its second draft, which was prepared by René Cassin. Cassin worked from a first draft, which was prepared by John Peters Humphrey. The structure was influenced by the Code Napoléon, including a preamble and introductory general principles. Cassin compared the Declaration to the portico of a Greek temple, with a foundation, steps, four columns, and a pediment.
The Declaration consists of a preamble and thirty articles:
These articles are concerned with the duty of the individual to society and the prohibition of use of rights in contravention of the purposes of the United Nations Organisation.
During World War II, the Allies adopted the Four Freedoms -- freedom of speech, freedom of religion, freedom from fear, and freedom from want -- as their basic war aims. The United Nations Charter "reaffirmed faith in fundamental human rights, and dignity and worth of the human person '' and committed all member states to promote "universal respect for, and observance of, human rights and fundamental freedoms for all without distinction as to race, sex, language, or religion ''.
When the atrocities committed by Nazi Germany became fully apparent after World War II, the consensus within the world community was that the United Nations Charter did not sufficiently define the rights to which it referred. A universal declaration that specified the rights of individuals was necessary to give effect to the Charter 's provisions on human rights.
In June 1946, the UN Economic and Social Council established the Commission on Human Rights, comprising 18 members from various nationalities and political backgrounds. The Commission, a standing body of the United Nations, was constituted to undertake the work of preparing what was initially conceived as an International Bill of Rights.
The Commission established a special Universal Declaration of Human Rights Drafting Committee, chaired by Eleanor Roosevelt, to write the articles of the Declaration. The Committee met in two sessions over the course of two years.
Canadian John Peters Humphrey, Director of the Division of Human Rights within the United Nations Secretariat, was called upon by the United Nations Secretary - General to work on the project and became the Declaration 's principal drafter. At the time, Humphrey was newly appointed as Director of the Division of Human Rights within the United Nations Secretariat.
Other well - known members of the drafting committee included René Cassin of France, Charles Malik of Lebanon, and P.C. Chang of the Republic of China (Taiwan). Humphrey provided the initial draft which became the working text of the Commission.
According to Allan Carlson, the Declaration 's pro-family phrases were the result of the Christian Democratic movement 's influence on Cassin and Malik.
Once the Committee finished its work in May 1948, the draft was further discussed by the Commission on Human Rights, the Economic and Social Council, the Third Committee of the General Assembly before being put to vote in December 1948. During these discussions many amendments and propositions were made by UN Member States.
British representatives were extremely frustrated that the proposal had moral but no legal obligation. (It was not until 1976 that the International Covenant on Civil and Political Rights came into force, giving a legal status to most of the Declaration.)
The Universal Declaration was adopted by the General Assembly as Resolution 217 on 10 December 1948. Of the then 58 members of the United Nations, 48 voted in favor, none against, eight abstained and Honduras and Yemen failed to vote or abstain.
The meeting record provides firsthand insight into the debate. South Africa 's position can be seen as an attempt to protect its system of apartheid, which clearly violated several articles in the Declaration. The Saudi Arabian delegation 's abstention was prompted primarily by two of the Declaration 's articles: Article 18, which states that everyone has the right "to change his religion or belief ''; and Article 16, on equal marriage rights. The six communist countries ' abstentions centred on the view that the Declaration did not go far enough in condemning fascism and Nazism. Eleanor Roosevelt attributed the abstention of Soviet bloc countries to Article 13, which provided the right of citizens to leave their countries.
The 48 countries which voted in favour of the Declaration are:
8 countries abstained:
Other countries only gained sovereignty and joined the United Nations later, which explains the relatively small number of states entitled to the historical vote, and in no way reflects opposition to the universal principles.
The Declaration of Human Rights Day is commemorated every year on December 10, the anniversary of the adoption of the Universal Declaration, and is known as Human Rights Day or International Human Rights Day. The commemoration is observed by individuals, community and religious groups, human rights organizations, parliaments, governments, and the United Nations. Decadal commemorations are often accompanied by campaigns to promote awareness of the Declaration and human rights. 2008 marked the 60th anniversary of the Declaration, and was accompanied by year - long activities around the theme "Dignity and justice for all of us ''.
In 1948, the UN Resolution A / RES / 217 (III) (A) adopted the Declaration on a bilingual document in English and French, and official translations in Chinese, Russian and Spanish. In 2009, the Guinness Book of Records described the Declaration as the world 's "Most Translated Document '' (370 different languages and dialects). The Unicode Consortium stores 431 of the 503 official translations available at the OHCHR (as of June 2017).
In its preamble, governments commit themselves and their people to progressive measures which secure the universal and effective recognition and observance of the human rights set out in the Declaration. Eleanor Roosevelt supported the adoption of the Declaration as a declaration rather than as a treaty because she believed that it would have the same kind of influence on global society as the United States Declaration of Independence had within the United States. In this, she proved to be correct. Even though it is not legally binding, the Declaration has been adopted in or has influenced most national constitutions since 1948. It has also served as the foundation for a growing number of national laws, international laws, and treaties, as well as for a growing number of regional, sub national, and national institutions protecting and promoting human rights.
For the first time in international law, the term "the rule of law '' was used in the preamble of the Declaration. The third paragraph of the preamble of the Declaration reads as follows: "Whereas it is essential, if man is not to be compelled to have recourse, as a last resort, to rebellion against tyranny and oppression, that human rights should be protected by the rule of law. ''
While not a treaty itself, the Declaration was explicitly adopted for the purpose of defining the meaning of the words "fundamental freedoms '' and "human rights '' appearing in the United Nations Charter, which is binding on all member states. For this reason, the Universal Declaration of Human Rights is a fundamental constitutive document of the United Nations. In addition, many international lawyers believe that the Declaration forms part of customary international law and is a powerful tool in applying diplomatic and moral pressure to governments that violate any of its articles. The 1968 United Nations International Conference on Human Rights advised that the Declaration "constitutes an obligation for the members of the international community '' to all persons. The Declaration has served as the foundation for two binding UN human rights covenants: the International Covenant on Civil and Political Rights and the International Covenant on Economic, Social and Cultural Rights. The principles of the Declaration are elaborated in international treaties such as the International Convention on the Elimination of All Forms of Racial Discrimination, the International Convention on the Elimination of Discrimination Against Women, the United Nations Convention on the Rights of the Child, the United Nations Convention Against Torture, and many more. The Declaration continues to be widely cited by governments, academics, advocates, and constitutional courts, and by individuals who appeal to its principles for the protection of their recognised human rights.
The Universal Declaration has received praise from a number of notable people. The Lebanese philosopher and diplomat Charles Malik called it "an international document of the first order of importance '', while Eleanor Roosevelt -- first chairwoman of the Commission on Human Rights (CHR) that drafted the Declaration -- stated that it "may well become the international Magna Carta of all men everywhere. '' In a speech on 5 October 1995, Pope John Paul II called the Declaration "one of the highest expressions of the human conscience of our time '' but the Vatican never adopted the Declaration. In a statement on 10 December 2003 on behalf of the European Union, Marcello Spatafora said that the Declaration "placed human rights at the centre of the framework of principles and obligations shaping relations within the international community. ''
Turkey -- which was a secular state with an overwhelmingly Muslim population -- signed the Declaration in 1948. However, the same year, Saudi Arabia abstained from the ratification vote on the Declaration, claiming that it violated Sharia law. Pakistan -- which had signed the declaration -- disagreed and critiqued the Saudi position. Pakistani minister Muhammad Zafarullah Khan strongly argued in favor of including freedom of religion. In 1982, the Iranian representative to the United Nations, Said Rajaie - Khorassani, said that the Declaration was "a secular understanding of the Judeo - Christian tradition '' which could not be implemented by Muslims without conflict with Sharia. On 30 June 2000, members of the Organisation of the Islamic Conference (now the Organisation of Islamic Cooperation) officially resolved to support the Cairo Declaration on Human Rights in Islam, an alternative document that says people have "freedom and right to a dignified life in accordance with the Islamic Shari'ah '', without any discrimination on grounds of "race, colour, language, sex, religious belief, political affiliation, social status or other considerations ''.
Some Muslim diplomats would go on later to help draft other UN human rights treaties. For example, Iraqi diplomat Bedia Afnan 's insistence on wording that recognized gender equality resulted in Article 3 within the ICCPR and ICESCR. Pakistani diplomat Shaista Suhrawardy Ikramullah also spoke in favor of recognizing women 's rights.
A number of scholars in different fields have expressed concerns with the Declaration 's alleged Western bias. These include Irene Oh, Abdulaziz Sachedina, Riffat Hassan, and Faisal Kutty. Hassan has argued:
What needs to be pointed out to those who uphold the Universal Declaration of Human Rights to be the highest, or sole, model, of a charter of equality and liberty for all human beings, is that given the Western origin and orientation of this Declaration, the "universality '' of the assumptions on which it is based is -- at the very least -- problematic and subject to questioning. Furthermore, the alleged incompatibility between the concept of human rights and religion in general, or particular religions such as Islam, needs to be examined in an unbiased way.
Irene Oh argues that one solution is to approach the issue from the perspective of comparative (descriptive) ethics.
Kutty writes: "A strong argument can be made that the current formulation of international human rights constitutes a cultural structure in which western society finds itself easily at home... It is important to acknowledge and appreciate that other societies may have equally valid alternative conceptions of human rights. ''
Ironically, a number of Islamic countries that as of 2014 are among the most resistant to UN intervention in domestic affairs, played an invaluable role in the creation of the Declaration, with countries such as Syria and Egypt having been strong proponents of the universality of human rights and the right of countries to self - determination.
Groups such as Amnesty International and War Resisters International have advocated for "The Right to Refuse to Kill '' to be added to the Universal Declaration. War Resisters International has stated that the right to conscientious objection to military service is primarily derived from -- but not yet explicit in -- Article 18 of the UDHR: the right to freedom of thought, conscience, and religion.
Steps have been taken within the United Nations to make this right more explicit, but -- to date (2017) -- those steps have been limited to less significant United Nations documents. Sean MacBride -- Assistant Secretary - General of the United Nations and Nobel Peace Prize laureate -- has said: "To the rights enshrined in the Universal Declaration of Human Rights one more might, with relevance, be added. It is ' The Right to Refuse to Kill '. ''
The American Anthropological Association criticized the UDHR while it was in its drafting process. The AAA warned that the document would be defining universal rights from a Western paradigm which would be unfair to countries outside of that scope. They further argued that the West 's history of colonialism and evangelism made them a problematic moral representative for the rest of the world. They proposed three notes for consideration with underlying themes of cultural relativism: "1. The individual realizes his personality through his culture, hence respect for individual differences entails a respect for cultural differences '', "2. Respect for differences between cultures is validated by the scientific fact that no technique of qualitatively evaluating cultures has been discovered '', and "3. Standards and values are relative to the culture from which they derive so that any attempt to formulate postulates that grow out of the beliefs or moral codes of one culture must to that extent detract from the applicability of any Declaration of Human Rights to mankind as a whole. ''
During the lead up to the World Conference on Human Rights held in 1993, ministers from Asian states adopted the Bangkok Declaration, reaffirming their governments ' commitment to the principles of the United Nations Charter and the Universal Declaration of Human Rights. They stated their view of the interdependence and indivisibility of human rights and stressed the need for universality, objectivity, and non-selectivity of human rights. However, at the same time, they emphasized the principles of sovereignty and non-interference, calling for greater emphasis on economic, social, and cultural rights -- in particular, the right to economic development over civil and political rights. The Bangkok Declaration is considered to be a landmark expression of the Asian values perspective, which offers an extended critique of human rights universalism.
The International Federation for Human Rights (FIDH) is nonpartisan, nonsectarian, and independent of any government, and its core mandate is to promote respect for all the rights set out in the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights, and the International Covenant on Economic, Social and Cultural Rights.
In 1988, director Stephen R. Johnson and 41 international animators, musicians, and producers created a 20 - minute video for Amnesty International to celebrate the 40th Anniversary of the Universal Declaration. The video was to bring to life the Declaration 's 30 articles.
Amnesty International celebrated Human Rights Day and the 60th anniversary of the Universal Declaration all over the world by organizing the "Fire Up! '' event.
The Unitarian Universalist Service Committee (UUSC) is a non-profit, nonsectarian organization whose work around the world is guided by the values of Unitarian Universalism and the Universal Declaration of Human Rights. It works to provide disaster relief and promote human rights and social justice around the world.
The Quaker United Nations Office and the American Friends Service Committee work on many human rights issues, including improving education on the Universal Declaration of Human Rights. They have developed a Curriculum to help introduce High School students to the Universal Declaration of Human Rights.
In 1997, the council of the American Library Association (ALA) endorsed Article 19 from the Universal Declaration of Human Rights. Along with Article 19, Article 18 and 20 are also fundamentally tied to the ALA Universal Right to Free Expression and the Library Bill of Rights. Censorship, the invasion of privacy, and interference of opinions are human rights violations according to the ALA.
In response to violations of human rights, the ALA asserts the following principles:
The American Library Association condemns any governmental effort to involve libraries and librarians in restrictions on the right of any individual to hold opinions without interference, and to seek, receive, and impart information and ideas. Such restrictions, whether enforced by statutes or regulations, contractual stipulations, or voluntary agreements, pervert the function of the library and violate the professional responsibilities of librarians.
The American Library Association rejects censorship in any form. Any action that denies the inalienable human rights of individuals only damages the will to resist oppression, strengthens the hand of the oppressor, and undermines the cause of justice.
The American Library Association will not abrogate these principles. We believe that censorship corrupts the cause of justice, and contributes to the demise of freedom.
Youth for Human Rights International (YHRI) is a non-profit organization founded in 2001 by Mary Shuttleworth, an educator born and raised in apartheid South Africa, where she witnessed firsthand the devastating effects of discrimination and the lack of basic human rights. The purpose of YHRI is to teach youth about human rights, specifically the United Nations Universal Declaration of Human Rights, and inspire them to become advocates for tolerance and peace. YHRI has now grown into a global movement, including hundreds of groups, clubs and chapters around the world.
|
where did the name hacksaw ridge come from | Hacksaw Ridge - wikipedia
Hacksaw Ridge is a 2016 biographical war drama film directed by Mel Gibson and written by Andrew Knight and Robert Schenkkan, based on the 2004 documentary The Conscientious Objector. The film focuses on the World War II experiences of Desmond Doss, an American pacifist combat medic who was a Seventh - day Adventist Christian, refusing to carry or use a firearm or weapons of any kind. Doss became the first conscientious objector to be awarded the Medal of Honor, for service above and beyond the call of duty during the Battle of Okinawa. Andrew Garfield stars as Doss, with Sam Worthington, Luke Bracey, Teresa Palmer, Hugo Weaving, Rachel Griffiths, and Vince Vaughn in supporting roles.
The film was released in the United States on November 4, 2016, grossing $175.3 million worldwide and received positive reviews, with Gibson 's direction and Garfield 's performance earning notable praise. Hacksaw Ridge was chosen by the American Film Institute as one of its top ten Movies of the Year, and has received numerous awards and nominations. The film received six Oscar nominations at the 89th Academy Awards, including Best Picture, Best Director, Best Actor for Garfield and Best Sound Editing, winning the awards for Best Sound Mixing and Best Film Editing. It also received Golden Globe nominations for Best Picture, Best Director and Best Actor, and 12 AACTA Awards nominations, winning the majority, including Best Film, Best Direction, Best Original Screenplay, Best Actor for Garfield, and Best Supporting Actor for Weaving.
As a young boy, Desmond Doss nearly kills his younger brother Hal. This experience and his Seventh - day Adventist upbringing reinforce Desmond 's belief in the commandment "Thou shalt not kill ''. Years later, Doss takes an injured man to the hospital and meets a nurse, Dorothy Schutte. The two begin a relationship and Doss tells Dorothy of his interest in medical work.
After the Japanese attack on Pearl Harbor, Doss is motivated to enlist in the Army and intends to serve as a combat medic. His father Tom, a troubled World War I veteran, is deeply upset by the decision. Before leaving for Fort Jackson, he asks for Dorothy 's hand in marriage and she accepts.
Doss is placed under the command of Sergeant Howell. He excels physically but becomes an outcast among his fellow soldiers for refusing to handle a rifle and train on Saturdays. Howell and Captain Glover attempt to discharge Doss for psychiatric reasons but are overruled, as Doss ' religious beliefs do not constitute mental illness. They subsequently torment Doss by putting him through grueling labor, intending to get Doss to leave of his own accord. Despite being beaten one night by his fellow soldiers, he refuses to identify his attackers and continues training.
Doss ' unit completes basic training and is released on leave, during which Doss intends to marry Dorothy, but his refusal to carry a firearm leads to an arrest for insubordination. Captain Glover and Dorothy visit Doss in jail and try to convince him to plead guilty so that he can be released without charge but Doss refuses to compromise his beliefs. At his trial, Doss pleads not guilty, but before he is sentenced, his father barges into the tribunal with a letter from a former commanding officer stating that his son 's pacifism is protected by an Act of Congress. The charges against Doss are dropped, and he and Dorothy are married.
Doss ' unit is assigned to the 77th Infantry Division and deployed to the Pacific theater. During the Battle of Okinawa, Doss ' unit is informed that they are to relieve the 96th Infantry Division, which was tasked with ascending and securing the Maeda Escarpment ("Hacksaw Ridge ''). In the initial fight, both sides sustain heavy losses, and Doss successfully saves several soldiers, including those with severe injuries. The Americans camp for the night, which Doss spends in a foxhole with Smitty, a squadmate who was the first to call Doss a coward. Doss reveals that his aversion to holding a firearm stems from nearly shooting his drunken father, who threatened his mother with a gun. Smitty apologizes for doubting his courage and the two make amends.
The next morning, the Japanese launch a massive counterattack and drive the Americans off the escarpment. Smitty is killed, while Howell and several of Doss ' squad mates are left injured on the battlefield. Doss hears the cries of the dying soldiers and decides to run back into the carnage. He starts carrying wounded soldiers to the cliff 's edge and belaying them down by rope, each time praying to save one more. The arrival of dozens of wounded once presumed dead comes as a shock to the rest of the unit below. When day breaks, Doss rescues Howell and the two finally escape Hacksaw under enemy fire.
Captain Glover tells Doss that the men have been inspired by his miraculous efforts, and that they will not launch the next attack without him. Despite the next day being the Sabbath day, he joins his fellow soldiers after finishing his prayers. With reinforcements, they turn the tide of battle. During an ambush set by Japanese soldiers feigning surrender, Doss manages to save Glover and others by knocking away enemy grenades. Doss is eventually wounded by a grenade blast, but the battle is won. Doss descends the cliff, clutching the Bible Dorothy gave him.
The film switches to real - life photos and archive footage showing that after rescuing 75 soldiers at Hacksaw Ridge, Doss was awarded the Medal of Honor by President Harry S. Truman. Doss stayed married to Dorothy until her death in 1991. He died on March 23, 2006, at the age of 87.
The project was in development hell for 14 years. Numerous producers had tried for decades to film Doss ' story, including decorated war hero Audie Murphy and Hal B. Wallis (producer of Casablanca).
In 2001, after finally convincing Doss that making a movie on his remarkable life was the right thing to do, screenwriter / producer Gregory Crosby (grandson of Bing Crosby) wrote the treatment and brought the project to film producer David Permut, of Permut Presentations, through the early efforts of Stan Jensen of the Seventh - day Adventist Church, which ultimately led to the film being financed.
In 2004, director Terry Benedict won the rights to make a documentary about Doss, The Conscientious Objector, and secured the dramatic film rights in the process. However, Doss died in 2006, after which producer Bill Mechanic acquired and then sold the rights to Walden Media, which developed the project along with producer David Permut. Co-producers of the film are Gregory Crosby and Steve Longi. Walden Media insisted on a PG - 13 version of the battle, and Mechanic spent years working to buy the rights back.
After acquiring the rights, Mechanic approached Mel Gibson, and wanted him to create a concoction of violence and faith, as he did with The Passion of the Christ (2004). Gibson turned down the offer twice, as he previously did with Braveheart (1995). Then nearly a decade later, Gibson finally agreed to helm the film, a decision announced in November 2014. The same month, Andrew Garfield was confirmed to play the role of Desmond Doss.
With a budget of $40 million, the team still faced many challenges. Hacksaw Ridge became an international co-production, with key players and firms located in both the United States and Australia. When Australian tax incentives were taken off the table, the film had to qualify as Australian to receive government subsidies. Despite being American - born, Gibson 's early years in Australia helped the film qualify, along with most of the cast being Australian, including Rachel Griffiths (Doss ' mother), Teresa Palmer (Doss ' wife), Sam Worthington (unit leader), Hugo Weaving (as Doss ' father), Richard Roxburgh (as a colonel) and Luke Bracey (as Smitty, one of Doss ' most antagonistic unit members). Rounding out the cast was American actor Vince Vaughn.
On February 9, 2015, IM Global closed a deal to finance the film, and also sold the film into the international markets. On the same day, Lionsgate acquired the North American distribution rights to the film. Chinese distribution rights were acquired by Bliss Media, a Shanghai - based film production and distribution company.
Hacksaw Ridge is the first film directed by Gibson since Apocalypto in 2006, and marks a departure from his previous films, such as Apocalypto and Braveheart, in which the protagonists acted violently.
Robert Schenkkan and Randall Wallace wrote the script, and Wallace was previously attached to direct the film. Andrew Knight polished the original script. Gibson 's partner Bruce Davey also produced the film, along with Paul Currie.
The cast -- Andrew Garfield, Vince Vaughn, Sam Worthington, Luke Bracey, Teresa Palmer, Rachel Griffiths, Richard Roxburgh, Luke Pegler, Richard Pyros, Ben Mingay, Firass Dirani, Nico Cortez, Michael Sheasby, Goran Kleut, Jacob Warner, Harry Greenwood, Damien Thomlinson, Ben O'Toole, Benedict Hardie, Robert Morgan, Ori Pfeffer, Milo Gibson, and Nathaniel Buzolic, Hugo Weaving, and Ryan Corr -- was announced between November 2014 and October 2015. The younger Doss was played by Darcy Bryce.
Garfield plays Desmond Doss, a US Army medic awarded the Medal of Honor by President Harry S. Truman for saving lives during the Battle of Okinawa in World War II. Garfield had high regards for Doss, and venerated him for his act of bravery, hailing him as a "wonderful symbol of embodying the idea of live and let live no matter what your ideology is, no matter what your value system is, just to allow other people to be who they are and allow yourself to be who you are. '' He found the idea of playing a real superhero, as compared to his past roles playing Spider - Man in The Amazing Spider - Man and its sequel, much more inspiring. Garfield admitted that he cried the first time he read the screenplay. He visited Doss ' hometown and touched his various tools. Gibson was drawn to Garfield the first time he saw his performance in The Social Network.
Palmer wanted a role in the film so badly that she auditioned via phone, and sent the recording to Gibson. She heard nothing back for three months, until Gibson called Palmer to tell her in a Skype chat that he had cast her in the role of Dorothy, Doss ' wife.
Principal photography started on September 29, 2015, and lasted for 59 days, ending in December of that year. Filming took place entirely in Australia. The film was based at Fox Studios in Sydney, after producers vigorously scouted for locations around the country. Filming took place mostly in the state of New South Wales -- where Gibson spent much of his early years -- in and around Sydney such as in Richmond, Bringelly, and Oran Park. Gibson moved to the state in July 2015, two months before filming began. The graveyard scene was shot at a set - constructed cemetery in Sydney 's Centennial Park. The grounds of Newington Armory at Sydney Olympic Park were used as Fort Jackson. Filming in Bringelly required the team to clear and deforest over 500 hectares of land, which evoked the ire of some environmentalists. However, the producers had complete approval and clearance to do so. Also conditions were imposed to replant and rehabilitate part of the land after filming ceased. According to Troy Grant, New South Wales ' deputy premier and minister for the arts, the film brought 720 jobs and US $19 million to regional and rural New South Wales.
Altogether, three jeeps, two trucks and a tank were featured in the film. Bulldozers and backhoes were used to transform a dairy pasture near Sydney to re-create the Okinawa battlefield. A berm had to be raised around the perimeter so cameras could turn 360 degrees without getting any eucalyptus trees in the background. Gibson did not want to rely heavily on computer visual effects, either on the screen or in pre-visualizing the battle scenes. Visual effects were used only during bloody scenes, like napalm - burnt soldiers. During filming of the war scenes, Gibson incorporated his past war - movie experiences, and would yell to the actors, reminding them constantly of what they were fighting for.
The film has been described as an anti-war film, with pacifist themes. It also incorporates recurring religious imagery, such as baptism and ascension.
After the war, Doss turned down many requests for books and film versions of his actions, because he was wary of whether his life, wartime experiences, and Seventh - day Adventist beliefs would be portrayed inaccurately or sensationally. Doss ' only child, Desmond Doss Jr., stated: "The reason he declined is that none of them adhered to his one requirement: that it be accurate. And I find it remarkable, the level of accuracy in adhering to the principle of the story in this movie. '' Producer David Permut stated that the filmmakers took great care in maintaining the integrity of the story, since Doss was very religious.
However, the filmmakers did change some details, notably the backstory about his father, the incident with the gun Doss took out of his alcoholic father 's hands, and the circumstances of his first marriage. The film also omits his prior combat service in the Battle of Guam and Battle of Leyte (Doss was awarded the Bronze Star Medal for extraordinary bravery in both battles), and leaves the impression that Doss ' actions at Okinawa took place over a period of a few days, though his Medal of Honor citation covered his actions over a period of about three weeks.
The film 's accompanying score was provided by Rupert Gregson - Williams and was recorded at Abbey Road Studios in London, with an orchestra of 70 musicians, and a 36 - voice choir.
The world premiere of Hacksaw Ridge occurred on September 4, 2016, at the 73rd Venice Film Festival, where it received a 10 - minute standing ovation. The film was released in Australia on November 3, 2016, by Icon Film Distribution, and in the United States on November 4, 2016, by Lionsgate / Summit Entertainment. It was released by Bliss Media in China in November, and in the United Kingdom in 2017, with IM Global handling international sales.
In August 2016, Gibson appeared at Pastor Greg Laurie 's SoCal Harvest in Anaheim, California, to promote the film.
Hacksaw Ridge grossed $67.1 million in the United States and Canada and $108.2 million in other countries for a worldwide total of $175.3 million, against a production budget of $40 million.
The film opened alongside Doctor Strange and Trolls, and was projected to gross around $12 million from 2,886 theaters. It was expected to play very well among faith - based, Midwest, and Southern audiences. It made $5.2 million on its first day and $15.2 million in its opening weekend, finishing third at the box office. The debut was on par with the $15 million opening of Gibson 's last directorial effort, Apocalypto, in 2006. In its second weekend, the film grossed $10.8 million (a drop of just 29.1 %), finishing 5th at the box office.
The film also opened successfully in China, grossing over $16 million in its first four days at the box office.
On review aggregator Rotten Tomatoes, the film has an approval rating of 86 %, based on 232 reviews, with an average rating of 7.2 / 10. The site 's critical consensus reads, "Hacksaw Ridge uses a real - life pacifist 's legacy to lay the groundwork for a gripping wartime tribute to faith, valor, and the courage of remaining true to one 's convictions. '' On Metacritic, which assigns a weighted average to reviews, the film has a score of 71 out of 100, based on reviews from 47 critics, indicating "generally favorable reviews ''. Audiences polled by CinemaScore gave the film an average grade of "A '' on an A+ to F scale.
The Milford Daily News called the film a "masterpiece '', adding that it "is going to end up on many 2016 Top 10 lists, that should get Oscar nominations for Best Actor, Best Director and Best Picture. '' Maggie Stancu of Movie Pilot wrote that "Gibson made some of his most genius directing choices in Hacksaw Ridge, and Garfield has given his best performance yet. With amazing performances by Vince Vaughn, Teresa Palmer, Sam Worthington and Hugo Weaving, it is absolutely one of 2016 's must - see films. '' Mick LaSalle of SFGate called the film "a brilliant return for Mel Gibson, which confirms his position as a director with a singular talent for spectacle and a sure way with actors. '' In The Film Lawyers, Samar Khan called Hacksaw Ridge "fantastic, '' and emphasised "just how wonderful it is to have Gibson back in a more prominent position in Hollywood, hopefully with the demons of his past behind him. If Hacksaw Ridge is any indication, we are poised for a future filled with great films from the visionary director. '' The Telegraph awarded the film four stars, and added: "Hacksaw Ridge is a fantastically moving and bruising war film that hits you like a raw topside of beef in the face -- a kind of primary - coloured Guernica that flourishes on a big screen with a crowd. ''
The Guardian also awarded the film four stars, and stated that Gibson had "absolutely hit Hacksaw Ridge out of the park. '' The Australian 's reviewer was equally positive, stating that, as a director, "Gibson 's approach is bold and fearless; this represents his best work to date behind the camera. '' Rex Reed of Observer rated the film with four stars, and called it "the best war film since Saving Private Ryan... (I) t is violent, harrowing, heartbreaking and unforgettable. And yes, it was directed by Mel Gibson. He deserves a medal, too '' Michael Smith of Tulsa World called Hacksaw Ridge a "moving character study '' and praised both the direction and acting. He observed: "It 's truly remarkable how Gibson can film scenes of such heartfelt emotion with such sweet subtlety as easily as he stages some of the most vicious, visual scenes of violence that you will ever see... Hacksaw Ridge is beautiful and brutal, and that 's a potent combination for a movie about a man determined to serve his country, as well as his soul. '' IGN critic Alex Welch gave the film a score of 8 / 10, praising it as "one of the most successful war films of recent memory, '' and "at times horrifying, inspiring, and heart - wrenching. '' Mike Ryan of Uproxx gave the film a positive review, praising Gibson 's direction and saying, "There are two moments during the second half of Mel Gibson 's Hacksaw Ridge when I literally jumped out of my seat in terror. The film 's depiction of war is the best I 've seen since Saving Private Ryan. '' Peter Travers of Rolling Stone gave the film 3.5 stars, writing, "Thanks to some of the greatest battle scenes ever filmed, Gibson once again shows his staggering gifts as a filmmaker, able to juxtapose savagery with aching tenderness. '' In contrast, Matt Zoller Seitz for RogerEbert.com gave the film 2.5 stars, and described the film as "a movie at war with itself. '' Guy Westwell, writing for The Conversation, criticized the depiction of Doss ' pacifism as contributing to the jingoism of the film.
Hacksaw Ridge won Best Film Editing and Best Sound Mixing and was nominated for Best Picture, Best Director, Best Actor for Garfield and Best Sound Editing at the Academy Awards. The film won Best Editing and was nominated for Best Actor in a Leading Role for Garfield, Best Adapted Screenplay, Best Sound and Best Makeup and Hair at the British Academy Film Awards. The film won Best Action Movie and Best Actor in an Action Movie for Garfield and was nominated for Best Picture, Best Director, Best Actor for Garfield, Best Editing and Best Hair and Makeup at the Critics ' Choice Awards. The film received three nominations at the Golden Globe Awards, including Best Motion Picture -- Drama, Best Actor -- Motion Picture Drama for Garfield and Best Director. The film won Best Actor for Garfield, Best Film Editing and Best Sound and was nominated for Best Film, Best Director, Best Adapted Screenplay, Best Cinematography, Best Original Score and Best Art Direction and Production Design at the Satellite Awards.
|
error in 4th order runge kutta method is of the order of | Runge -- Kutta methods - wikipedia
In numerical analysis, the Runge -- Kutta methods are a family of implicit and explicit iterative methods, which includes the well - known routine called the Euler method, used in temporal discretization for the approximate solutions of ordinary differential equations. These methods were developed around 1900 by the German mathematicians Carl Runge and Wilhelm Kutta.
See the article on numerical methods for ordinary differential equations for more background and other methods. See also List of Runge -- Kutta methods.
The most widely known member of the Runge -- Kutta family is generally referred to as "RK4 '', "classical Runge -- Kutta method '' or simply as "the Runge -- Kutta method ''.
Let an initial value problem be specified as follows:
dy/dt = f(t, y), y(t_0) = y_0.
Here y is an unknown function (scalar or vector) of time t, which we would like to approximate; we are told that dy/dt, the rate at which y changes, is a function of t and of y itself. At the initial time t_0 the corresponding y value is y_0. The function f and the data t_0, y_0 are given.
Now pick a step size h > 0 and define
y_{n+1} = y_n + (h/6)(k_1 + 2 k_2 + 2 k_3 + k_4),
t_{n+1} = t_n + h,
for n = 0, 1, 2, 3, ..., using
k_1 = f(t_n, y_n),
k_2 = f(t_n + h/2, y_n + h k_1 / 2),
k_3 = f(t_n + h/2, y_n + h k_2 / 2),
k_4 = f(t_n + h, y_n + h k_3).
Here y_{n+1} is the RK4 approximation of y(t_{n+1}), and the next value (y_{n+1}) is determined by the present value (y_n) plus the weighted average of four increments, where each increment is the product of the size of the interval, h, and an estimated slope specified by function f on the right-hand side of the differential equation.
In averaging the four increments, greater weight is given to the increments at the midpoint. If f is independent of y, so that the differential equation is equivalent to a simple integral, then RK4 is Simpson 's rule.
The RK4 method is a fourth-order method, meaning that the local truncation error is on the order of O(h^5), while the total accumulated error is on the order of O(h^4).
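To make the RK4 update above concrete, here is a minimal sketch in Python; the function name rk4_step and the test problem y' = y are illustrative choices, not part of the article.

```python
def rk4_step(f, t, y, h):
    """One step of the classical RK4 method for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    # Weighted average of the four slopes; the two midpoint slopes get double weight.
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)


# Assumed test problem: y' = y, y(0) = 1, whose exact solution is e^t.
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = rk4_step(lambda t, y: y, t, y, h)
    t += h
print(y)  # close to e ≈ 2.718281828; the accumulated error is O(h^4)
```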
The family of explicit Runge -- Kutta methods is a generalization of the RK4 method mentioned above. It is given by
y_{n+1} = y_n + h (b_1 k_1 + b_2 k_2 + ... + b_s k_s),
where
k_1 = f(t_n, y_n),
k_2 = f(t_n + c_2 h, y_n + h a_{21} k_1),
k_3 = f(t_n + c_3 h, y_n + h (a_{31} k_1 + a_{32} k_2)),
...
k_s = f(t_n + c_s h, y_n + h (a_{s1} k_1 + a_{s2} k_2 + ... + a_{s,s-1} k_{s-1})).
To specify a particular method, one needs to provide the integer s (the number of stages), and the coefficients a_{ij} (for 1 ≤ j < i ≤ s), b_i (for i = 1, 2, ..., s) and c_i (for i = 2, 3, ..., s). The matrix (a_{ij}) is called the Runge -- Kutta matrix, while the b_i and c_i are known as the weights and the nodes. These data are usually arranged in a mnemonic device, known as a Butcher tableau (after John C. Butcher):
The Runge -- Kutta method is consistent if
Σ_{j=1}^{i-1} a_{ij} = c_i for i = 2, ..., s.
There are also accompanying requirements if one requires the method to have a certain order p, meaning that the local truncation error is O(h^{p+1}). These can be derived from the definition of the truncation error itself. For example, a two-stage method has order 2 if b_1 + b_2 = 1, b_2 c_2 = 1/2, and a_{21} = c_2.
In general, if an explicit s-stage Runge -- Kutta method has order p, then s ≥ p, and if p ≥ 5, then s > p. The minimum s required for an explicit s-stage Runge -- Kutta method to have order p is an open problem. Some values which are known are:
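The tableau data can drive a single generic stepper. The sketch below evaluates one step of an arbitrary explicit Runge -- Kutta method from its coefficients a_{ij}, b_i, c_i; the helper name explicit_rk_step and the scalar-only representation are assumptions made for illustration, and the tableau shown is the standard classical RK4 one.

```python
def explicit_rk_step(f, t, y, h, A, b, c):
    """One step of an explicit Runge-Kutta method defined by a Butcher tableau.

    A is strictly lower triangular (s x s); b and c have length s (scalar y assumed).
    """
    s = len(b)
    k = [0.0] * s
    for i in range(s):
        # Only the already computed slopes k_1 .. k_{i-1} enter an explicit stage.
        y_stage = y + h * sum(A[i][j] * k[j] for j in range(i))
        k[i] = f(t + c[i] * h, y_stage)
    return y + h * sum(b[i] * k[i] for i in range(s))


# Classical RK4 expressed as a Butcher tableau (a standard, widely published choice).
A = [[0, 0, 0, 0],
     [0.5, 0, 0, 0],
     [0, 0.5, 0, 0],
     [0, 0, 1, 0]]
b = [1 / 6, 1 / 3, 1 / 3, 1 / 6]
c = [0, 0.5, 0.5, 1]

print(explicit_rk_step(lambda t, y: y, 0.0, 1.0, 0.1, A, b, c))  # one RK4 step of y' = y
```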
The RK4 method falls in this framework. Its tableau is
A slight variation of "the '' Runge -- Kutta method is also due to Kutta in 1901 and is called the 3 / 8 - rule. The primary advantage this method has is that almost all of the error coefficients are smaller than in the popular method, but it requires slightly more FLOPs (floating - point operations) per time step. Its Butcher tableau is
However, the simplest Runge -- Kutta method is the (forward) Euler method, given by the formula y_{n+1} = y_n + h f(t_n, y_n). This is the only consistent explicit Runge -- Kutta method with one stage. The corresponding tableau is
An example of a second - order method with two stages is provided by the midpoint method:
The corresponding tableau is
The midpoint method is not the only second-order Runge -- Kutta method with two stages; there is a family of such methods, parameterized by α and given by the formula
y_{n+1} = y_n + h ((1 − 1/(2α)) f(t_n, y_n) + (1/(2α)) f(t_n + αh, y_n + αh f(t_n, y_n))).
Its Butcher tableau is
In this family, α = 1/2 gives the midpoint method, and α = 1 is Heun 's method.
As an example, consider the two-stage second-order Runge -- Kutta method with α = 2/3, also known as the Ralston method. It is given by the tableau
with the corresponding equations
k_1 = f(t_n, y_n),
k_2 = f(t_n + (2/3) h, y_n + (2/3) h k_1),
y_{n+1} = y_n + h ((1/4) k_1 + (3/4) k_2).
This method is used to solve the initial - value problem
with step size h = 0.025, so the method needs to take four steps.
The method proceeds as follows:
The numerical solutions correspond to the underlined values.
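The article 's worked example (its specific initial-value problem and the table of underlined values) is not reproduced above, so the sketch below only demonstrates the mechanics of the α = 2/3 method with the stated step size h = 0.025 and four steps; the test problem y' = y, y(0) = 1 is an assumed stand-in, not the article 's data.

```python
def ralston_step(f, t, y, h):
    """One step of the two-stage, second-order method with alpha = 2/3 (Ralston)."""
    k1 = f(t, y)
    k2 = f(t + (2 / 3) * h, y + (2 / 3) * h * k1)
    # Weights 1/4 and 3/4 follow from the alpha-family formula with alpha = 2/3.
    return y + h * (0.25 * k1 + 0.75 * k2)


# Assumed test problem standing in for the article's example: y' = y, y(0) = 1.
t, y, h = 0.0, 1.0, 0.025  # step size and number of steps as stated in the text
for _ in range(4):
    y = ralston_step(lambda t, y: y, t, y, h)
    t += h
print(t, y)  # y(0.1) ≈ e**0.1 ≈ 1.10517
```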
The adaptive methods are designed to produce an estimate of the local truncation error of a single Runge -- Kutta step. This is done by having two methods in the tableau, one with order p and one with order p − 1.
The lower-order step is given by
y*_{n+1} = y_n + h Σ_{i=1}^{s} b*_i k_i,
where k_i are the same as for the higher-order method. Then the error is
e_{n+1} = y_{n+1} − y*_{n+1} = h Σ_{i=1}^{s} (b_i − b*_i) k_i,
which is O(h^p). The error estimate is used to control the step size. The Butcher tableau for this kind of method is extended to give the values of b*_i:
The Runge -- Kutta -- Fehlberg method has two methods of orders 5 and 4. Its extended Butcher tableau is:
However, the simplest adaptive Runge -- Kutta method involves combining Heun 's method, which is order 2, with the Euler method, which is order 1. Its extended Butcher tableau is:
Other adaptive Runge -- Kutta methods are the Bogacki -- Shampine method (orders 3 and 2), the Cash -- Karp method and the Dormand -- Prince method (both with orders 5 and 4).
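A minimal sketch of the embedded-pair idea, using the Heun (order 2) / Euler (order 1) combination mentioned above: both solutions reuse the same stage values, and their difference serves as the error estimate that controls the step size. The accept/reject rule, tolerance, and safety factors are illustrative assumptions rather than anything prescribed by the article.

```python
def heun_euler_adaptive_step(f, t, y, h, tol=1e-6):
    """One adaptive step using the embedded Heun (order 2) / Euler (order 1) pair."""
    while True:
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_high = y + h * (k1 + k2) / 2   # Heun's method, order 2
        y_low = y + h * k1               # Euler method, order 1
        err = abs(y_high - y_low)        # error estimate from the embedded pair
        if err <= tol:
            # Accept the step and propose a modestly larger h for the next one.
            h_next = h * min(2.0, 0.9 * (tol / max(err, 1e-16)) ** 0.5)
            return t + h, y_high, h_next
        # Reject: shrink the step and retry.
        h *= 0.5


# Assumed test problem: y' = -2y, y(0) = 1, integrated to t = 1.
t, y, h = 0.0, 1.0, 0.1
while t < 1.0:
    t, y, h = heun_euler_adaptive_step(lambda t, y: -2 * y, t, y, min(h, 1.0 - t))
print(y)  # approximately exp(-2) ≈ 0.1353
```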
A Runge -- Kutta method is said to be nonconfluent if all the c_i, i = 1, 2, ..., s, are distinct.
All Runge -- Kutta methods mentioned up to now are explicit methods. Explicit Runge -- Kutta methods are generally unsuitable for the solution of stiff equations because their region of absolute stability is small; in particular, it is bounded. This issue is especially important in the solution of partial differential equations.
The instability of explicit Runge -- Kutta methods motivates the development of implicit methods. An implicit Runge -- Kutta method has the form
y_{n+1} = y_n + h Σ_{i=1}^{s} b_i k_i,
where
k_i = f(t_n + c_i h, y_n + h Σ_{j=1}^{s} a_{ij} k_j), for i = 1, ..., s.
The difference with an explicit method is that in an explicit method, the sum over j only goes up to i − 1. This also shows up in the Butcher tableau: the coefficient matrix a_{ij} of an explicit method is strictly lower triangular. In an implicit method, the sum over j goes up to s and the coefficient matrix is not triangular, yielding a Butcher tableau of the form
See Adaptive Runge -- Kutta methods above for the explanation of the b* row.
The consequence of this difference is that at every step, a system of algebraic equations has to be solved. This increases the computational cost considerably. If a method with s stages is used to solve a differential equation with m components, then the system of algebraic equations has ms components. This can be contrasted with implicit linear multistep methods (the other big family of methods for ODEs): an implicit s - step linear multistep method needs to solve a system of algebraic equations with only m components, so the size of the system does not increase as the number of steps increases.
The simplest example of an implicit Runge -- Kutta method is the backward Euler method:

y_{n+1} = y_n + h f(t_n + h, y_{n+1}).

The Butcher tableau for this is simply:

1 | 1
--+---
  | 1

This Butcher tableau corresponds to the formulae

k_1 = f(t_n + h, y_n + h k_1)   and   y_{n+1} = y_n + h k_1,
which can be re-arranged to get the formula for the backward Euler method listed above.
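A minimal sketch of how the implicit relation can actually be solved in practice for a scalar equation: each backward Euler step applies a few Newton iterations to y_{n+1} = y_n + h f(t_n + h, y_{n+1}). The stiff test equation and the fixed iteration count are illustrative assumptions, not taken from the text.

```python
import math

def backward_euler_step(f, dfdy, t, y, h, newton_iters=5):
    """One step of the backward Euler method for a scalar ODE y' = f(t, y).

    Solves g(z) = z - y - h*f(t + h, z) = 0 for z = y_{n+1} by Newton's method,
    starting from the explicit Euler predictor.
    """
    z = y + h * f(t, y)                    # predictor
    for _ in range(newton_iters):
        g = z - y - h * f(t + h, z)
        dg = 1.0 - h * dfdy(t + h, z)      # derivative g'(z)
        z -= g / dg
    return z

# Illustrative stiff test equation y' = -50*(y - cos(t)) (an assumption):
f = lambda t, y: -50.0 * (y - math.cos(t))
dfdy = lambda t, y: -50.0
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = backward_euler_step(f, dfdy, t, y, h)
    t += h
print(t, y)
```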
Another example of an implicit Runge -- Kutta method is the trapezoidal rule. Its Butcher tableau is:

0 | 0      0
1 | 1/2    1/2
--+------------
  | 1/2    1/2
The trapezoidal rule is a collocation method (as discussed in that article). All collocation methods are implicit Runge -- Kutta methods, but not all implicit Runge -- Kutta methods are collocation methods.
The Gauss -- Legendre methods form a family of collocation methods based on Gauss quadrature. A Gauss -- Legendre method with s stages has order 2s (thus, methods with arbitrarily high order can be constructed). The method with two stages (and thus order four) has Butcher tableau:

1/2 − √3/6 | 1/4           1/4 − √3/6
1/2 + √3/6 | 1/4 + √3/6    1/4
-----------+-------------------------
           | 1/2           1/2
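One possible implementation of a single step of this two-stage Gauss -- Legendre method for a scalar ODE is sketched below; the stage equations are solved by plain fixed-point iteration, which is adequate only for non-stiff or mildly stiff problems (a Newton solve would normally be used). The test equation at the end is an illustrative assumption.

```python
import math

# Standard two-stage Gauss-Legendre coefficients (order 4)
S3 = math.sqrt(3.0)
A = [[0.25, 0.25 - S3 / 6.0],
     [0.25 + S3 / 6.0, 0.25]]
B = [0.5, 0.5]
C = [0.5 - S3 / 6.0, 0.5 + S3 / 6.0]

def gauss_legendre_step(f, t, y, h, iters=20):
    """One implicit step: the stage values k1, k2 are found by fixed-point iteration."""
    k = [f(t, y), f(t, y)]                      # initial guess for the stages
    for _ in range(iters):
        k = [f(t + C[i] * h, y + h * sum(A[i][j] * k[j] for j in range(2)))
             for i in range(2)]
    return y + h * sum(B[i] * k[i] for i in range(2))

# Illustrative use on y' = -y, y(0) = 1, integrated to t = 1 (compare with exp(-1)):
y, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    y = gauss_legendre_step(lambda t, y: -y, t, y, h)
    t += h
print(y, math.exp(-1.0))
```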
The advantage of implicit Runge -- Kutta methods over explicit ones is their greater stability, especially when applied to stiff equations. Consider the linear test equation y' = λy. A Runge -- Kutta method applied to this equation reduces to the iteration y_{n+1} = r(hλ) y_n, with r given by

r(z) = 1 + z b^T (I − zA)^{−1} e,
where e stands for the vector of ones. The function r is called the stability function. It follows from the formula that r is the quotient of two polynomials of degree s if the method has s stages. Explicit methods have a strictly lower triangular matrix A, which implies that det (I − zA) = 1 and that the stability function is a polynomial.
The numerical solution to the linear test equation decays to zero if |r(z)| < 1 with z = hλ. The set of such z is called the domain of absolute stability. In particular, the method is said to be A-stable if all z with Re(z) < 0 are in the domain of absolute stability. The stability function of an explicit Runge -- Kutta method is a polynomial, so explicit Runge -- Kutta methods can never be A-stable.
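To make the contrast concrete, the sketch below compares the stability function of the classical explicit fourth-order method, r(z) = 1 + z + z²/2 + z³/6 + z⁴/24, with that of the backward Euler method, r(z) = 1/(1 − z), at a few points on the negative real axis. The specific sample points are arbitrary illustrative choices.

```python
# Stability functions: the explicit RK4 method has a polynomial r(z),
# while the backward Euler method has a rational one.
def r_rk4(z):
    # Polynomial stability function of the classical fourth-order explicit method
    return 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

def r_backward_euler(z):
    # r(z) = 1 / (1 - z); |r(z)| < 1 for every z with Re(z) < 0, so the method is A-stable
    return 1.0 / (1.0 - z)

for z in [-1.0, -2.78, -5.0, -50.0]:         # sample points on the negative real axis
    print(z, abs(r_rk4(z)), abs(r_backward_euler(z)))
# |r_rk4(z)| exceeds 1 once z is sufficiently negative (bounded stability interval),
# while |1/(1 - z)| stays below 1 for all z < 0.
```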
If the method has order p, then the stability function satisfies r(z) = e^z + O(z^{p+1}) as z → 0. Thus, it is of interest to study quotients of polynomials of given degrees that approximate the exponential function the best. These are known as Padé approximants. A Padé approximant with numerator of degree m and denominator of degree n is A-stable if and only if m ≤ n ≤ m + 2.
The Gauss -- Legendre method with s stages has order 2s, so its stability function is the Padé approximant with m = n = s. It follows that the method is A-stable. This shows that A-stable Runge -- Kutta methods can have arbitrarily high order. In contrast, the order of A-stable linear multistep methods cannot exceed two.
The A-stability concept for the solution of differential equations is related to the linear autonomous equation y' = λy. Dahlquist proposed the investigation of stability of numerical schemes when applied to nonlinear systems that satisfy a monotonicity condition. The corresponding concepts were defined as G-stability for multistep methods (and the related one-leg methods) and B-stability (Butcher, 1975) for Runge -- Kutta methods. A Runge -- Kutta method applied to the non-linear system y' = f(y), which verifies ⟨f(y) − f(z), y − z⟩ < 0, is called B-stable if this condition implies ‖y_{n+1} − z_{n+1}‖ ≤ ‖y_n − z_n‖ for two numerical solutions.
Let B, M and Q be three s × s matrices defined by

A Runge -- Kutta method is said to be algebraically stable if the matrices B and M are both non-negative definite. A sufficient condition for B-stability is: B and Q are non-negative definite.
In general a Runge -- Kutta method of order s can be written as:
where:
are increments obtained by evaluating the derivatives of y_t at the i-th order.

We develop the derivation for the Runge -- Kutta fourth-order method using the general formula with s = 4, evaluated, as explained above, at the starting point, the midpoint and the end point of any interval (t, t + h); thus, we choose:

and β_{ij} = 0 otherwise. We begin by defining the following quantities:

where y^1_{t+h/2} = (y_t + y^1_{t+h}) / 2 and y^2_{t+h/2} = (y_t + y^2_{t+h}) / 2. If we define:

and from the previous relations we can show that the following equalities hold up to O(h²):
where:
is the total derivative of f with respect to time.
If we now express the general formula using what we just derived we obtain:
and comparing this with the Taylor series of y_{t+h} around y_t:
we obtain a system of constraints on the coefficients:
which when solved gives a = 1/6, b = 1/3, c = 1/3, d = 1/6 as stated above.
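These coefficients are those of the classical fourth-order Runge -- Kutta method. A minimal sketch of that method in code follows; the test problem y' = y, y(0) = 1 is an assumed example so the result can be compared with e.

```python
import math

def rk4_step(f, t, y, h):
    """One step of the classical fourth-order Runge-Kutta method,
    using the weights 1/6, 1/3, 1/3, 1/6 derived above."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Illustrative use on y' = y, y(0) = 1, integrated to t = 1; compare with exp(1):
f = lambda t, y: y
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = rk4_step(f, t, y, h)
    t += h
print(y, math.exp(1.0))
```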
|
select all that apply. which of the following were not part of the missouri compromise | Missouri Compromise - wikipedia
The Missouri Compromise is the title generally attached to the legislation passed by the 16th United States Congress in March 1820. The measures provided for the admission of Maine as a free state along with Missouri as a slave state, thus maintaining the balance of power between North and South. As part of the compromise, slavery was prohibited north of the 36° 30′ parallel, excluding Missouri. President James Monroe signed the legislation on March 6, 1820.
Earlier, in February 1819, Representative James Tallmadge Jr., a Jeffersonian Republican from New York, submitted two amendments to Missouri's request for statehood, which included restrictions on slavery. Southerners objected to any bill which imposed federal restrictions on slavery, believing that slavery was a state issue settled by the Constitution. However, with the Senate evenly split at the opening of the debates, both sections possessing 11 states, the admission of Missouri would give the South an advantage. Northern critics, including Federalists and Democratic-Republicans, objected to the expansion of slavery into the Louisiana Purchase territory on the grounds of the constitutional inequalities of the three-fifths rule, which conferred on the South additional representation in the federal government, derived from a state's slave population. Jeffersonian Republicans in the North ardently maintained that a strict interpretation of the Constitution required that Congress act to limit the spread of slavery on egalitarian grounds. "(Northern) Republicans rooted their antislavery arguments, not on expediency, but in egalitarian morality"; and "The Constitution, (said northern Jeffersonians) strictly interpreted, gave the sons of the founding generation the legal tools to hasten (the) removal (of slavery), including the refusal to admit additional slave states."
When free-soil Maine offered its petition for statehood, the Senate quickly linked the Maine and Missouri bills, making Maine's admission a condition for Missouri entering the Union with slavery unrestricted. Senator Jesse B. Thomas of Illinois added a compromise proviso, excluding slavery from all remaining lands of the Louisiana Purchase north of the 36° 30' parallel. The combined measures passed the Senate, only to be voted down in the House by those Northern representatives who held out for a free Missouri. Speaker of the House Henry Clay of Kentucky, in a desperate bid to break the deadlock, divided the Senate bills. Clay and his pro-compromise allies succeeded in pressuring half the anti-restrictionist House Southerners to submit to the passage of the Thomas proviso, while maneuvering a number of restrictionist House Northerners to acquiesce in supporting Missouri as a slave state. The Missouri question in the 15th Congress had ended in stalemate on March 4, 1819, the House sustaining its northern antislavery position and the Senate blocking a slavery-restricted statehood.
The Missouri Compromise was controversial at the time, as many worried that the country had become lawfully divided along sectional lines. The bill was effectively repealed in the Kansas -- Nebraska Act of 1854, and declared unconstitutional in Dred Scott v. Sandford (1857). This increased tensions over slavery and eventually led to the Civil War.
The Era of Good Feelings, closely associated with the administration of President James Monroe (1817 -- 1825), was characterized by the dissolution of national political identities. With the discredited Federalists in decline nationally, the "amalgamated '' or hybridized Republicans adopted key Federalist economic programs and institutions, further erasing party identities and consolidating their victory.
The economic nationalism of the Era of Good Feelings that would authorize the Tariff of 1816 and incorporate the Second Bank of the United States portended an abandonment of the Jeffersonian political formula for strict construction of the constitution, a limited central government and commitments to the primacy of Southern agrarian interests. The end of opposition parties also meant the end of party discipline and the means to suppress internecine factional animosities. Rather than produce political harmony, as President James Monroe had hoped, amalgamation had led to intense rivalries among Jeffersonian Republicans.
It was amid the "good feelings '' of this period -- during which Republican Party discipline was in abeyance -- that the Tallmadge Amendment surfaced.
The immense Louisiana Purchase territories had been acquired through federal executive action, followed by Republican legislative authorization in 1803 during the Thomas Jefferson administration.
Prior to its purchase in 1803, the governments of Spain and France had sanctioned slavery in the region. In 1812, the state of Louisiana, a major cotton producer and the first to be carved from the Louisiana Purchase, had entered the Union as a slave state. Predictably, Missourians were adamant that slave labor should not be molested by the federal government. In the years following the War of 1812, the region, now known as Missouri Territory, experienced rapid settlement, led by slaveholding planters.
Agriculturally, the land comprising the lower reaches of the Missouri River, from which that new state would be formed, had no prospects as a major cotton producer. The land was suited for diversified farming, and the only crop regarded as promising for slave labor was hemp culture. On that basis, southern planters immigrated with their chattel to Missouri, the slave population rising from 3,100 in 1810 to 10,000 in 1820. In a total population of 67,000, slaves represented about 15 percent.
By 1818, the population of Missouri territory was approaching the threshold that would qualify it for statehood. An enabling act was provided to Congress empowering territorial residents to select convention delegates and draft a state constitution. The admission of Missouri territory as a slave state was expected to be more or less routine.
When the Missouri statehood bill was opened for debate in the House of Representatives on February 13, 1819, early exchanges on the floor proceeded without serious incident. In the course of these proceedings, however, Representative James Tallmadge Jr. of New York "tossed a bombshell into the Era of Good Feelings" with the following amendments:
Provided, that the further introduction of slavery or involuntary servitude be prohibited, except for the punishment of crimes, whereof the party shall have been fully convicted; and that all children born within the said State, after the admission thereof into the Union, shall be free at the age of twenty-five years.
A political outsider, the 41 - year old Tallmadge conceived his amendment based on a personal aversion to slavery. He had played a leading role in accelerating emancipation of the remaining slaves in New York in 1817. Moreover, he had campaigned against Illinois ' Black Codes: though ostensibly free - soil, the new Illinois state constitution permitted indentured servitude and a limited form of slavery. As a New York Republican, Tallmadge maintained an uneasy association with Governor DeWitt Clinton, a former Republican who depended on support from ex-Federalists. Clinton 's faction was hostile to Tallmadge for his spirited defense of General Andrew Jackson over his contentious invasion of Florida.
Tallmadge had backing from a fellow New York Republican, Congressman John W. Taylor (not to be confused with legislator John Taylor of Caroline County, Virginia). Taylor also had antislavery credentials: in February 1819, he had proposed similar slave restrictions on Arkansas territory in the House, but failed 89-87. He would lead the pro-Tallmadge antislavery forces during the 16th Congress in 1820.
The amendment instantly exposed the polarization among Jeffersonian Republicans over the future of slavery in the nation. Northern Jeffersonian Republicans formed a coalition across factional lines with remnants of the Federalists. Southern Jeffersonian Republicans united in almost unanimous opposition. The ensuing debates pitted the northern "restrictionists" (antislavery legislators who wished to bar slavery from the Louisiana territories) against the southern "anti-restrictionists" (proslavery legislators who rejected any interference by Congress inhibiting slavery expansion).
The sectional "rupture '' over slavery among Jeffersonian Republicans, first exposed in the Missouri crisis, had its roots in the Revolutionary generation.
The Missouri crisis marked a rupture in the Republican Ascendency -- the national association of Jeffersonian Republicans that dominated national politics in the post-War of 1812 period.
The Founders had inserted both principled and pragmatic elements in the establishing documents. The Declaration of Independence of 1776 was grounded on the claim that liberty established a moral ideal that made universal equality a common right. The Revolutionary War generation had formed a government of limited powers in 1787 to embody the principles in the Declaration, but "burdened with the one legacy that defied the principles of 1776 '': human bondage. In a pragmatic commitment to form the Union, the federal apparatus would forego any authority to directly interfere with the institution of slavery where it existed under local control within the states. This acknowledgment of state sovereignty provided for the participation of those states most committed to slave labor. With this understanding, slaveholders had cooperated in authorizing the Northwest Ordinance in 1787, and to outlawing the trans - Atlantic slave trade in 1808. Though the Founders sanctioned slavery, they did so with the implicit understanding that the slaveholding states would take steps to relinquish the institution as opportunities arose.
Southern states, after the War for Independence, had regarded slavery as an institution in decline (with the exception of Georgia and South Carolina). This was manifest in the shift towards diversified farming in the Upper South, and in the gradual emancipation of slaves in New England, and more significantly, in the mid-Atlantic states. Beginning in the 1790s, with the introduction of the cotton gin, and by 1815, with the vast increase in demand for cotton internationally, slave - based agriculture underwent an immense revival, spreading the institution westward to the Mississippi River. Slavery opponents in the South vacillated, as did their hopes for the imminent demise of human bondage.
However rancorous the disputes among Southerners themselves over the virtues of a slave - based society, they united as a section when confronted by external challenges to their institution. The free states were not to meddle in the affairs of the slaveholders. Southern leaders -- of whom virtually all identified as Jeffersonian Republicans -- denied that Northerners had any business encroaching on matters related to slavery. Northern attacks on the institution were condemned as incitements to riot among the slave populations -- deemed a dire threat to white southern security.
Northern Jeffersonian Republicans embraced the Jeffersonian antislavery legacy during the Missouri debates, explicitly citing the Declaration of Independence as an argument against expanding the institution. Southern leaders, seeking to defend slavery, would renounce the document 's universal egalitarian applications and its declaration that "all men are created equal. ''
Article One, Section Two of the US Constitution supplemented legislative representation in those states where residents owned slaves. Known as the three - fifths clause or the "federal ratio '', three - fifths (60 %) of the slave population was numerically added to the free population. This sum was used to calculate Congressional districts per state and the number of delegates to the Electoral College. The federal ratio produced a significant number of legislative victories for the South in the years preceding the Missouri crisis, as well as augmenting its influence in party caucuses, the appointment of judges and the distribution of patronage. It is unlikely that the three - fifths clause, prior to 1820, was decisive in affecting legislation on slavery. Indeed, with the rising northern representation in the House, the South 's share of the membership had declined since the 1790s.
Hostility to the federal ratio had historically been the object of the now nationally ineffectual Federalists; they blamed their collective decline on the "Virginia Dynasty '', expressed in partisan terms rather than in moral condemnation of slavery. The pro-De Witt Clinton - Federalist faction carried on the tradition, posing as antirestrictionists, for the purpose of advancing their fortunes in New York politics.
Senator Rufus King of New York, a Clinton associate, was the last Federalist icon still active on the national stage, a fact irksome to Southern Republicans. A signatory to the US Constitution, he had strongly opposed the three - fifths rule in 1787. In the 1819 15th Congress debates, he revived his critique as a complaint that New England and the Mid-Atlantic States suffered unduly from the federal ratio, declaring himself "degraded '' (politically inferior) to the slaveholders. Federalists, North and South, preferred to mute antislavery rhetoric, but during the 1820 debates in the 16th Congress, King and other old Federalists would expand their critique to include moral considerations of slavery.
Republican James Tallmadge, Jr. and the Missouri restrictionists deplored the three - fifths clause because it had translated into political supremacy for the South. They had no agenda to remove it from the founding document, only to prevent its further application west of the Mississippi River.
As determined as Southern Republicans were to secure Missouri statehood with slavery, the three - fifths clause failed to provide the margin of victory in the 15th Congress. Blocked by Northern Republicans -- largely on egalitarian grounds -- with sectional support from Federalists, the bill would die in the upper house, where the federal ratio had no relevance. The "balance of power '' between the sections, and the maintenance of Southern preeminence on matters related to slavery resided in the Senate.
Northern voting majorities in the lower house did not translate into political dominance. The fulcrum for proslavery forces resided in the upper house of Congress. There, constitutional compromise in 1787 had provided for exactly two senators per state, regardless of its population: the South, with its small white demographic relative to the North, benefited from this arrangement. Since 1815, sectional parity in the Senate had been achieved through paired admissions, leaving the North and South, at the time of Missouri territory application for statehood, at eleven states each.
The South, voting as a bloc on measures that challenged slaveholding interests and augmented by defections from Free State Senators with Southern sympathies, was able to tally majorities. The Senate stood as the bulwark and source of the Slave Power -- a power that required admission of slave states to the Union to preserve its national primacy.
Missouri statehood, with the Tallmadge amendment approved, would set a trajectory towards a Free State trans - Mississippi and a decline in Southern political authority. The question as to whether the Congress could lawfully restrain the growth of slavery in Missouri took on great importance among the slave states. The moral dimensions of the expansion of human bondage would be raised by Northern Republicans on constitutional grounds.
The Tallmadge amendment was "the first serious challenge to the extension of slavery" and raised questions concerning the interpretation of the republic's founding documents.
Jeffersonian Republicans justified Tallmadge 's slavery restrictions on the grounds that Congress possessed the authority to impose territorial statutes which would remain in force after statehood was established. Representative John W. Taylor pointed to Indiana and Illinois, where their Free State status conformed to the antislavery provisions in the Northwest Ordinance.
Further, antislavery legislators invoked Article Four, Section Four of the Constitution, which required that states provide a republican form of government. As the Louisiana Territory was not part of the United States in 1787, they argued, introducing slavery into Missouri would thwart the egalitarian intent of the Founders.
Proslavery Republicans countered that the Constitution had long been interpreted as having relinquished any claim to restricting slavery within the states. The free inhabitants of Missouri, either in the territorial phase or during statehood, had the right to establish slavery -- or disestablish it -- exclusive of central government interference. As to the Northwest Ordinance, Southerners denied that this could serve as a lawful antecedent for the territories of the Louisiana Purchase, as the ordinance had been issued originally under the Articles of Confederation, not under the US Constitution.
As a legal precedent, they offered the treaty acquiring the Louisiana lands in 1803: the document included a provision (Article 3) that extended the rights of US citizens to all inhabitants of the new territory, including the protection of property in slaves. When slaveholders embraced Jeffersonian constitutional strictures on a limited central government, they were reminded that Jefferson, as US President in 1803, had deviated from these precepts when he wielded federal executive power to double the size of the United States (including the lands under consideration for Missouri statehood). In doing so, he set a constitutional precedent that would serve to rationalize Tallmadge's federally imposed slavery restrictions.
The 15th Congress debates, focusing as they did on constitutional questions, largely avoided the moral dimensions raised by the topic of slavery. That the unmentionable subject had been raised publicly was deeply offensive to Southern congressmen, and violated the long-held sectional understanding between free state and slave state legislators.
Missouri statehood confronted Southern Jeffersonians with the prospect of applying the egalitarian principles espoused by the Revolutionary generation. This would require halting the spread of slavery westward, and confine the institution to where it already existed. Faced with a population of 1.5 million slaves, and the lucrative production of cotton, the South would abandon hopes for containment. Slaveholders in the 16th Congress, in an effort to come to grips with this paradox, would resort to a theory that called for extending slavery geographically so as to encourage its decline: "diffusion ''.
On February 16, 1819, the House Committee of the Whole voted to link Tallmadge 's provisions with the Missouri enabling legislation, approving the move 79 - 67. Following the committee vote, debates resumed over the merits of each of Tallmadge 's provisions in the enabling act. The debates in the House 's 2nd session in 1819 lasted only three days. They have been characterized as "rancorous '', "fiery '', "bitter '', "blistering '', "furious '' and "bloodthirsty ''.
Northern representatives outnumbered the South in House membership 105 to 81. When each of the restrictionist provisions was put to the vote, it passed along sectional lines: 87 to 76 in favor of the prohibition on further slave migration into Missouri (Table 1) and 82 to 78 in favor of emancipating slave offspring at age twenty-five.
15th Congress 2nd Session, February 16, 1819.
The enabling bill was passed to the Senate, where both parts of the bill were rejected: 22 to 16 opposed to restricting new slaves in Missouri (supported by five northerners, two of whom were the proslavery legislators from the free state of Illinois); and 31 to 7 against gradual emancipation for slave children born post-statehood. House antislavery restrictionists refused to concur with the Senate proslavery anti-restrictionists: Missouri statehood would devolve upon the 16th Congress in December 1819.
The Missouri Compromise debates stirred suspicions among proslavery interests that the underlying purpose of the Tallmadge amendments had little to do with opposition to slavery expansion. The accusation was first leveled in the House by the Republican anti-restrictionist John Holmes from the District of Maine. He suggested that Senator Rufus King's "warm" support for the Tallmadge amendment concealed a conspiracy to organize a new antislavery party in the North -- a party composed of old Federalists in combination with disaffected antislavery Republicans. The fact that King, in the Senate, and Tallmadge and Taylor, in the House -- all New Yorkers -- were among the vanguard for slavery restriction in Missouri lent credibility to these charges. When King was re-elected to the US Senate in January 1820, during the 16th Congress debates, and with bipartisan support, suspicions deepened and would persist throughout the crisis. Southern Jeffersonian Republican leadership, including President Monroe and former President Thomas Jefferson, considered it an article of faith that Federalists, given the chance, would destabilize the Union so as to re-impose monarchical rule in North America, and "consolidate" political control over the people by expanding the functions of the central government. Jefferson, at first unperturbed by the Missouri question, soon became convinced that a northern conspiracy was afoot, with Federalists and crypto-Federalists posing as Republicans, using Missouri statehood as a pretext.
Due to the disarray of the Republican Ascendency brought about by amalgamation, fears abounded among Southerners that a Free State party might take shape in the event that Congress failed to reach an understanding over Missouri and slavery: Such a party would threaten Southern preeminence. Secretary of State John Quincy Adams of Massachusetts surmised that the political configuration for just such a sectional party already existed. That the Federalists were anxious to regain a measure of political participation in national politics is indisputable. There was no basis, however, for the charge that Federalists had directed Tallmadge in his antislavery measures, nor was there anything to indicate that a New York - based King - Clinton alliance sought to erect an antislavery party on the ruins of the Republican Party. The allegations by Southern proslavery interests of a "plot '' or that of "consolidation '' as a threat to the Union misapprehended the forces at work in the Missouri crisis: the core of the opposition to slavery in the Louisiana Purchase were informed by Jeffersonian egalitarian principles, not a Federalist resurgence.
To balance the number of "slave states '' and "free states '', the northern region of what was then Massachusetts, the District of Maine, ultimately gained admission into the United States as a free state to become Maine. This only occurred as a result of a compromise involving slavery in Missouri, and in the federal territories of the American West. The admission of another slave state would increase the South 's power at a time when northern politicians had already begun to regret the Constitution 's Three - Fifths Compromise. Although more than 60 percent of whites in the United States lived in the North, by 1818 northern representatives held only a slim majority of congressional seats. The additional political representation allotted to the South as a result of the Three - Fifths Compromise gave southerners more seats in the House of Representatives than they would have had if the number was based on just free population. Moreover, since each state had two Senate seats, Missouri 's admission as a slave state would result in more southern than northern senators. A bill to enable the people of the Missouri Territory to draft a constitution and form a government preliminary to admission into the Union came before the House of Representatives in Committee of the Whole, on February 13, 1819. James Tallmadge of New York offered an amendment, named the Tallmadge Amendment, that forbade further introduction of slaves into Missouri, and mandated that all children of slave parents born in the state after its admission should be free at the age of 25. The committee adopted the measure and incorporated it into the bill as finally passed on February 17, 1819, by the house. The United States Senate refused to concur with the amendment, and the whole measure was lost.
During the following session (1819 -- 1820), the House passed a similar bill with an amendment, introduced on January 26, 1820, by John W. Taylor of New York, allowing Missouri into the union as a slave state. The question had been complicated by the admission in December of Alabama, a slave state, making the number of slave and free states equal. In addition, there was a bill in passage through the House (January 3, 1820) to admit Maine as a free state.
The Senate decided to connect the two measures. It passed a bill for the admission of Maine with an amendment enabling the people of Missouri to form a state constitution. Before the bill was returned to the House, a second amendment was adopted on the motion of Jesse B. Thomas of Illinois, excluding slavery from the Louisiana Territory north of the parallel 36 ° 30 ′ north (the southern boundary of Missouri), except within the limits of the proposed state of Missouri.
The vote in the Senate was 24 for the compromise, to 20 against. The amendment and the bill passed in the Senate on February 17 and February 18, 1820. The House then approved the Senate compromise amendment, on a vote of 90 to 87, with those 87 votes coming from free state representatives opposed to slavery in the new state of Missouri. The House then approved the whole bill, 134 to 42, with opposition from the southern states.
The two houses were at odds not only on the issue of the legality of slavery but also on the parliamentary question of the inclusion of Maine and Missouri within the same bill. A conference committee recommended the enactment of two laws, one for the admission of Maine, the other an enabling act for Missouri. It recommended against having restrictions on slavery but for including the Thomas amendment. Both houses agreed, and the measures were passed on March 5, 1820, and were signed by President James Monroe on March 6.
The question of the final admission of Missouri came up during the session of 1820 -- 1821. The struggle was revived over a clause in Missouri 's new constitution (written in 1820) requiring the exclusion of "free negroes and mulattoes '' from the state. Through the influence of Kentucky Senator Henry Clay "The Great Compromiser '', an act of admission was finally passed, upon the condition that the exclusionary clause of the Missouri constitution should "never be construed to authorize the passage of any law '' impairing the privileges and immunities of any U.S. citizen. This deliberately ambiguous provision is sometimes known as the Second Missouri Compromise.
During the decades following 1820, Americans hailed the 1820 agreement as an essential compromise almost on the sacred level of the Constitution itself. Although the Civil War broke out in 1861, historians often say the Compromise helped postpone the war.
These disputes involved the competition between the southern and northern states for power in Congress and for control over future territories. There were also the same factions emerging as the Democratic - Republican party began to lose its coherence. In an April 22 letter to John Holmes, Thomas Jefferson wrote that the division of the country created by the Compromise Line would eventually lead to the destruction of the Union:
... but this momentous question, like a fire bell in the night, awakened and filled me with terror. I considered it at once as the knell of the Union. it is hushed indeed for the moment. but this is a reprieve only, not a final sentence. A geographical line, coinciding with a marked principle, moral and political, once conceived and held up to the angry passions of men, will never be obliterated; and every new irritation will mark it deeper and deeper.
The debate over admission of Missouri also raised the issue of sectional balance, for the country was equally divided between slave and free states with eleven each. To admit Missouri as a slave state would tip the balance in the Senate (made up of two senators per state) in favor of the slave states. For this reason, northern states wanted Maine admitted as a free state. Maine was admitted in 1820 and Missouri in 1821, but no further states were added until 1836, when Arkansas was admitted.
From the constitutional standpoint, the Compromise of 1820 was important as the example of Congressional exclusion of slavery from U.S. territory acquired since the Northwest Ordinance. Nevertheless, the Compromise was deeply disappointing to African - Americans in both the North and South, as it stopped the southern progression of gradual emancipation at Missouri 's southern border and legitimized slavery as a southern institution.
The provisions of the Missouri Compromise forbidding slavery in the former Louisiana Territory north of the parallel 36 ° 30 ′ north were effectively repealed by Stephen A. Douglas 's Kansas -- Nebraska Act of 1854. The repeal of the Compromise caused outrage in the North and sparked the return to politics of Abraham Lincoln, who criticized slavery and excoriated Douglas 's act in his "Peoria Speech '' (October 16, 1854).
|
reasons for implementing a land reform programme in south africa | Land reform in South Africa - Wikipedia
The Native Lands Act of 1913 "prohibited the establishment of new farming operations, sharecropping or cash rentals by blacks outside of the reserves '' where they were forced to live. "Land restitution '' was one of the promises made by the African National Congress when it came to power in South Africa in 1994.
These property rights are extremely important as, not only do they empower farmer workers (who now have the opportunity to become farmers) and reduce inequality, but they also increase production due to inverse farm size productivity. Farmers with smaller plots who live on the farm often use family members for labor, making these farms efficient. Their transaction costs are less than larger plots with hired labor. Since many of these family members were unemployed it allows previously unemployed people to now participate in the economy and better the country 's economic growth.
Despite this view, it is also argued by some that the opposite has happened. Many South Africans and foreign commentators have voiced alarm over the failure of the redistribution policy. Around 50 % of farms are said to be failing, whilst the South African government has said the figure could be as high as 90 %. Critics use these figures to suggest that the ANC government 's current policy will be detrimental to the South African agricultural industry.
The Land Reform Process focused on three areas: restitution, land tenure reform and land redistribution. Restitution, where the government compensates (monetary) individuals who had been forcefully removed, has been very unsuccessful and the policy has now shifted to redistribution with secure land tenure. Land tenure reform is a system of recognising people 's right to own land and therefore control of the land.
Redistribution is the most important component of land reform in South Africa. Initially, land was bought from its owners (willing seller) by the government (willing buyer) and redistributed, in order to maintain public confidence in the land market.
Although this system has worked in various countries in the world, in South Africa it has proved to be very difficult to implement. This is because many owners do not actually see the land they are purchasing and are not involved in the important decisions made at the beginning of the purchase and negotiation.
In 2000 the South African Government decided to review and change the redistribution and tenure process to a more decentralised and area based planning process. The idea is to have local integrated development plans in 47 districts. This will hopefully mean more community participation and more redistribution taking place, but there are also various concerns and challenges with this system too.
These include the use of third parties, agents accredited by the state, and who are held accountable to the government. The result has been local land holding elites dominating the system in many of these areas. The government still hopes that with "improved identification and selection of beneficiaries, better planning of land and ultimately greater productivity of the land acquired... '' the land reform process will begin moving faster.
As of early 2006, the ANC government announced that it will start expropriating the land, although according to the country 's chief land - claims commissioner, Tozi Gwanya, unlike Zimbabwe there will be compensation to those whose land is expropriated, "but it must be a just amount, not inflated sums. ''
Despite these moves towards decentralisation, these improved practices and government promises are not very evident. South Africa still remains hugely unequal, with black South Africans still dispossessed of land and many still homeless. The challenge for the incumbent politicians is to improve the various bureaucratic processes, and find solutions to giving more South Africans secure land tenure.
In South Africa, the main model of land reform that was implemented was based on the market-led agrarian reform (MLAR) approach. Within MLAR, the strategic partnership (SP) model was implemented in seven claimant communities in Levubu in the Limpopo province. The SP model was implemented between 2005 and 2008 and ended in a fiasco, leading to conflict among several interested parties.
Various researchers have identified a number of challenges facing land and agrarian reform in South Africa. The following are amongst the challenges identified by Hall (2004) and the Department of Rural Development and Land Reform (2008):
Unresolved land claims, which are largely rural claims, are mostly affected by a number of challenges such as:
As of 2016 the South African government has pumped more than R60bn into land reform projects since 1994. Despite this investment, the land reform programme has not stimulated development in the targeted rural areas. A report by the South African Government 's Financial and Fiscal Commission shows that land reform as a mechanism for agricultural development and job creation has failed. A survey by the commission in Limpopo province, KwaZulu - Natal and the Eastern Cape found that most land reform farms show little or no agricultural activity; the land reform beneficiaries earn little to no income, and the majority of these beneficiaries seek work on surrounding commercial farms instead of actively farming their own land. Where farming is taking place on land reform farms, these farms operate below their full agricultural potential and are mainly used for subsistence agriculture. On average, crop production had decreased by 79 % since conversion to land reform. In the three provinces surveyed, job losses averaged 84 %, with KwaZulu - Natal suffering a 94 % job haemorrhage.
|
what are yakko wakko and dot supposed to be | Animaniacs - wikipedia
Animaniacs is an American animated comedy television series created by Tom Ruegger. It is the second animated series produced by Amblin Entertainment in association with Warner Bros. Animation during the animation renaissance of the late 1980s and early 1990s. Animaniacs first aired on Fox Kids from 1993 to 1995 and new episodes later appeared on The WB from 1995 to 1998 as part of its Kids ' WB afternoon programming block. The series had a total of 99 episodes and one film, Wakko 's Wish.
Animaniacs is a variety show, with short skits featuring a large cast of characters. While the show had no set format, the majority of episodes were composed of three short mini-episodes, each starring a different set of characters, and bridging segments. Hallmarks of the series included its music, character catchphrases, and humor directed at an adult audience.
A reboot of the series was announced by Hulu in January 2018, with two seasons to be produced in conjunction with Amblin Entertainment and Warner Bros. Animation, expected to air starting in 2020.
The Warner siblings and the other characters live in Burbank, California. However, characters from the series had episodes in various places and periods of time. The Animaniacs characters interacted with famous people and creators of the past and present as well as mythological characters and characters from contemporary pop culture and television. Andrea Romano, the casting and recording director of Animaniacs, said that the Warner siblings functioned to "tie the show together, '' by appearing in and introducing other characters ' segments. Each Animaniacs episode usually consisted of two or three cartoon shorts. Animaniacs segments ranged in time, from bridging segments less than a minute long to episodes spanning the entire show length; writer Peter Hastings said that the varying episode lengths gave the show a "sketch comedy '' atmosphere.
Animaniacs had a large cast of characters, separated into individual segments, with each pair or set of characters acting in its own plot. The Warner kids, Yakko, Wakko, and Dot, were three cartoon stars from the 1930s that were locked away in the Warner Bros. Water Tower until the 1990s, when they escaped. After their escape, they often interacted with Warner Bros. studio workers, including Ralph the Security Guard; Dr. Otto Scratchansniff, the studio psychiatrist, and his assistant Hello Nurse. Pinky and the Brain are two genetically altered laboratory mice who continuously plot and attempt to take over the world. Slappy Squirrel is an octogenarian cartoon star who can easily outwit antagonists and uses her wiles to educate her nephew, Skippy Squirrel, about cartoon techniques. Additional principal characters included Rita and Runt, Buttons and Mindy, Chicken Boo, Flavio and Marita (The Hip Hippos), Katie Ka - Boom, and a trio of pigeons known as The Goodfeathers.
The Animaniacs cast of characters had a variety of inspiration, from celebrities to writers ' family members to other writers. Executive producer Steven Spielberg said that the irreverence in Looney Tunes cartoons inspired the Animaniacs cast. Tom Ruegger created Pinky and the Brain, a series Sherri Stoner had also written for, after being inspired by the personalities of two of his Tiny Toon Adventures colleagues, Eddie Fitzgerald and Tom Minton. Ruegger thought of the premise for Pinky and the Brain when wondering what would happen if Minton and Fitzgerald tried to take over the world. Deanna Oliver contributed The Goodfeathers scripts and the character Chicken Boo, while Nicholas Hollander based Katie Kaboom on his teenage daughter.
Ruegger modeled the Warners ' personalities heavily after his three sons. Because the Warners were portrayed as cartoon stars from the early 1930s, Ruegger and other artists for Animaniacs made the images of the Warners similar to cartoon characters of the early 1930s. Simple black and white drawings were very common in cartoons of the 1920s and 1930s, such as Buddy, Felix the Cat, Oswald the Lucky Rabbit, and the early versions of Mickey Mouse and Minnie Mouse.
Sherri Stoner created Slappy the Squirrel when another writer and friend of Stoner, John McCann, made fun of Stoner 's career in TV movies playing troubled teenagers. When McCann joked that Sherri would be playing troubled teenagers when she was fifty years old, the latter developed the idea of Slappy 's characteristics as an older person acting like a teenager. Stoner liked the idea of an aged cartoon character because an aged cartoon star would know the secrets of other cartoons and "have the dirt on (them) ''.
Steven Spielberg served as executive producer, under his Amblin Television label. Showrunner and senior producer Tom Ruegger led the overall production and the writers' room. Producers Peter Hastings, Sherri Stoner, Rusty Mills, and Rich Arons contributed scripts for many of the episodes and had an active role during group discussions in the writers' room as well.
The writers and animators of Animaniacs used the experience gained from the previous series to create new animated characters that were cast in the mold of Chuck Jones and Tex Avery 's creations. Additional writers for the series included Liz Holzman, Paul Rugg, Deanna Oliver, John McCann, Nicholas Hollander, Charlie Howell, Gordon Bressack, Jeff Kwitny, Earl Kress, Tom Minton, and Randy Rogel. Hastings, Rugg, Stoner, McCann, Howell, and Bressack were involved in writing sketch comedy while others, including Kress, Minton, and Rogel, came from cartoon backgrounds.
Made - up stories did not exclusively comprise Animaniacs writing, as Hastings remarked: "We were n't really there to tell compelling stories... (As a writer) you could do a real story, you could recite the Star - Spangled Banner, or you could parody a commercial... you could do all these kinds of things, and we had this tremendous freedom and a talent to back it up. '' Writers for the series wrote into Animaniacs stories that happened to them; the episodes "Ups and Downs, '' "Survey Ladies, '' and "I Got Yer Can '' were episodes based on true stories that happened to Rugg, Deanna Oliver, and Stoner, respectively. Another episode, "Bumbie 's Mom, '' both parodied the film Bambi and was a story based on Stoner 's childhood reaction to the film.
In an interview, the writers explained how Animaniacs allowed for non-restrictive and open writing. Hastings said that the format of the series had the atmosphere of a sketch comedy show because Animaniacs segments could widely vary in both time and subject, while Stoner described how the Animaniacs writing staff worked well as a team in that writers could consult other writers on how to write or finish a story, as was the case in the episode "The Three Muska - Warners ''. Rugg, Hastings and Stoner also mentioned how the Animaniacs writing was free in that the writers were allowed to write about parody subjects that would not be touched on other series.
Animaniacs featured Rob Paulsen as Yakko, Pinky and Dr. Otto von Scratchansniff, Tress MacNeille as Dot, Jess Harnell as Wakko, Sherri Stoner as Slappy the Squirrel, Maurice LaMarche as the Brain, Squit and the belching segments "The Great Wakkorotti '' (Harnell said that he himself is commonly mistaken for the role), and veteran voice actor Frank Welker as Ralph the Security Guard, Thaddeus Plotz and Runt. Andrea Romano said that the casters wanted Paulsen to play the role of Yakko: "We had worked with Rob Paulsen before on a couple of other series and we wanted him to play Yakko. '' Romano said that the casters had "no trouble '' choosing the role of Dot, referring to MacNeille as "just hilarious... And yet (she had) that edge. '' Before Animaniacs, Harnell had little experience in voice acting other than minor roles for Disney which he "fell into ''. Harnell revealed that at the audition for the show, he did a John Lennon impression and the audition "went great ''. Stoner commented that when she gave an impression of what the voice would be to Spielberg, he said she should play Slappy. According to Romano, she personally chose Bernadette Peters to play Rita. Other voices were provided by Jim Cummings, Paul Rugg, Vernee Watson - Johnson, Jeff Bennett and Gail Matthius (from Tiny Toon Adventures). Tom Ruegger 's three sons also played roles on the series. Nathan Ruegger voiced Skippy Squirrel, nephew to Slappy, throughout the duration of the series; Luke Ruegger voiced The Flame in historical segments on Animaniacs; and Cody Ruegger voiced Birdie from Wild Blue Yonder.
Animation work on Animaniacs was farmed out to several different studios, both American and international, over the course of the show 's production. The animation companies included Tokyo Movie Shinsha (now known as TMS Entertainment), StarToons, Wang Film Productions, Freelance Animators New Zealand, and AKOM, and most Animaniacs episodes frequently had animation from different companies in each episode 's respective segments.
Animaniacs was made with a higher production value than standard television animation; the show had a higher cel count than most TV cartoons. The Animaniacs characters often move fluidly, and do not regularly stand still and speak, as in other television cartoons.
Animaniacs utilized a heavy musical score for an animated program, with every episode featuring at least one original score. The idea for an original musical score in every episode came from Steven Spielberg. Animaniacs used a 35 - piece orchestra, and seven composers were contracted to write original underscore for the series run: Richard Stone, Steve Bernstein, Julie Bernstein, Carl Johnson, J. Eric Schmidt, Gordon Goodwin, and Tim Kelly. The use of the large orchestra in modern Warner Bros. animation began with Animaniacs predecessor, Tiny Toon Adventures, but Spielberg pushed for its use even more in Animaniacs. Although the outcome was a very expensive show to produce, "the sound sets us apart from everyone else in animation, '' said Jean MacCurdy, the executive in charge of production for the series. According to Steve and Julie Bernstein, not only was the Animaniacs music written in the same style as that of Looney Tunes composer Carl Stalling, but that the music was recorded at the Eastwood Scoring Stage, which was used by Stalling as well as its piano. Senior producer Tom Ruegger said that writers Randy Rogel, Nicholas Hollander, and Deanna Oliver wrote "a lot of music '' for the series.
The humor of Animaniacs varied in type, ranging from parody to cartoon violence. The series made parodies of television shows and films. In an interview, Spielberg defended the "irreverence '' of Animaniacs, saying that the Animaniacs crew has "a point of view '' and does not "sit back passively and play both sides equally ''. Spielberg also said that Animaniacs ' humor of social commentary and irreverence were inspired by the Marx Brothers and Looney Tunes cartoons. Animaniacs, among other Spielberg - produced shows, had a large amount of cartoon violence. Spielberg defended the violence in Animaniacs by saying that the series had a balance of both violent humor and educational segments, so the series would never become either too violent or "benign ''. Animaniacs also made use of catchphrases, recurring jokes and segments, and "adult '' humor.
Characters on Animaniacs had catchphrases, with some characters having more than one. Notable catchphrases include Yakko 's "Goodnight, everybody! '' often said following adult humor, Wakko 's "Faboo! '' and Dot 's frequent assertions of her cuteness. The most prominent catchphrase that was said by all the Warners was "Hello - o-o, nurse! '' Tom Ruegger said that the "Hello - o-o, Nurse! '' line was intended to be a catchphrase much like Bugs Bunny 's line, "What 's up, doc? '' Before the theme song for each "Pinky and the Brain '' segment, Pinky asks, "Gee, Brain, what do you want to do tonight? '' Brain replies, "The same thing we do every night, Pinky: try to take over the world! '' During these episodes, Brain often asks Pinky, "Are you pondering what I 'm pondering? '' and Pinky replies with a silly non sequitur that changes every episode. Writer Peter Hastings said that he unintentionally created these catchphrases when he wrote the episode "Win Big, '' and then Producer Sherri Stoner used them and had them put into later episodes.
Running gags and recurring segments were very common in the show. The closing credits for each episode always included one joke credit and ended with a water tower gag similar to The Simpsons couch gag. Director Rusty Mills and senior producer Tom Ruegger said that recurring segments like the water tower gag and another segment titled "The Wheel of Morality '' (which, in Yakko 's words, "adds boring educational value to what would otherwise be an almost entirely entertaining program '', and ends with a "moral '' that makes no sense) eased the production of episodes because the same animated scenes could be used more than once (and, in the case of the Wheel segments, enabled the producers to add a segment in where there was not room for anything else in the episode).
A great deal of Animaniacs 's humor and content was aimed at an adult audience. Animaniacs parodied the film A Hard Day 's Night and the Three Tenors, references that The New York Times wrote were "appealing to older audiences ''. The comic operas of Gilbert and Sullivan Pirates of Penzance and H.M.S. Pinafore were parodied in episode 3, "HMS Yakko ''. The Warners ' personalities were made similar to those of the Marx Brothers and Jerry Lewis, in that they, according to writer Peter Hastings, "wreak havoc, '' in "serious situations ''. In addition, the show 's recurring Goodfeathers segment was populated with characters based on characters from The Godfather and Goodfellas, R - rated crime dramas neither marketed nor intended for children. Some content of Animaniacs was not only aimed at an adult audience but was suggestive in nature; one character, Minerva Mink, had episodes that network censors considered too sexually suggestive for the show 's intended audience, for which she was soon de-emphasized as a featured character. Jokes involving such innuendo would often end with Yakko telling "Goodnight, everybody! '' as a punchline.
Animaniacs parodied popular TV shows and movies and caricatured celebrities. Animaniacs made fun of celebrities, major motion pictures, television shows for adults (Seinfeld and Friends, among others), television shows for children, and trends in the US. One episode even made fun of the competing show Power Rangers, and another episode caricatured Animaniacs' own Internet fans. Animaniacs also took potshots at Disney films, creating parodies of such films as The Lion King, Beauty and the Beast, Pocahontas, Bambi, and others. Animaniacs director Russell Calabrese said that not only did it become a compliment to be parodied on Animaniacs but also that being parodied on the series would be taken as a "badge of honor".
Animaniacs had a variety of music types. Many Animaniacs songs were parodies of classical or folk music with educational lyrics. Notable ones include "Yakko 's World '', in which Yakko sings the names of all 200 - some nations of the world to the tune of the "Mexican Hat Dance ''. "Wakko 's America '' listed all the United States and their capitals to the tune of "Turkey in the Straw ''. Another song, titled "The Presidents '', named every US president (up to Bill Clinton, due to production date) to the tune of the "William Tell Overture '' (with brief snippets of the tunes "Mademoiselle from Armentieres '' and "Dixie ''). Non-educational songs included parodies such as the segment "Slippin ' on the Ice '' and a parody of "Singin ' in the Rain ''. Most of the groups of characters even had their own theme songs for their segment on the show.
The Animaniacs series theme song, performed by the Warners, was a very important part of the show. In the series ' first season, the theme won an Emmy Award for best song. Stone composed the music for the title sequence and Ruegger wrote the lyrics. Several Animaniacs albums and sing - along VHS tapes were released, including the CDs Animaniacs, Yakko 's World, and Variety Pack, and the tapes Animaniacs Sing - Along: Yakko 's World and Animaniacs Sing - Along: Mostly in Toon.
Shorts featuring Rita and Runt would also incorporate songs for Bernadette Peters to sing.
Animaniacs was a successful show, gathering both child and adult fans. The series received ratings higher than its competitors and won eight Daytime Emmy Awards and one Peabody Award.
During its run, Animaniacs became the second-most popular children 's show in both the 2 -- 11 and 6 -- 11 age demographics (behind Mighty Morphin Power Rangers). Animaniacs, along with other animated series, helped give the "Fox Kids '' block ratings much larger than those of the channel 's competitors. In November 1993, Animaniacs and Tiny Toon Adventures almost doubled the ratings of their rival shows, Darkwing Duck and Goof Troop, in both the 2 -- 11 and 6 -- 11 demographics that are very important to children 's networks. On Kids ' WB, Animaniacs gathered about one million child viewers every week.
While Animaniacs was popular among younger viewers (the target demographic for Warner Bros. ' TV cartoons), adults also responded positively to the show; in 1995, more than one - fifth of the weekday (4 p.m., Monday through Friday) and Saturday morning (8 a.m.) audience viewers were 25 years or older. The large adult fanbase even led to one of the first Internet - based fandom cultures. During the show 's prime, the Internet newsgroup alt. tv. animaniacs was an active gathering place for fans of the show (most of whom were adults) to post reference guides, fan fiction, and fan - made artwork about Animaniacs. The online popularity of the show did not go unnoticed by the show 's producers, and twenty of the most active participants on the newsgroup were invited to the Warner Bros. Animation studios for a gathering in August 1995 dubbed by those fans Animania IV.
Animaniacs ' first major award came in 1993, when the series won a Peabody Award in its debuting season. In 1994, Animaniacs was nominated for two Annie Awards, one for "Best Animated Television Program '', and the other for "Best Achievement for Voice Acting '' (Frank Welker). Animaniacs also won two Daytime Emmy Awards for "Outstanding Achievement in Music Direction and Composition '' and "Outstanding Original Song '' (Animaniacs Main Title Theme). In 1995, Animaniacs was nominated four times for the Annie Awards, once for "Best Animated Television Program '', twice for "Voice Acting in the Field of Animation '' (Tress MacNeille and Rob Paulsen), and once for "Best Individual Achievement for Music in the Field of Animation '' (Richard Stone). In 1996, Animaniacs won two Daytime Emmy Awards, one for "Outstanding Animated Children 's Program '' and the other for "Outstanding Achievement in Animation ''. In 1997, Animaniacs was nominated for an Annie Award for "Best Individual Achievement: Directing in a TV Production '' (Charles Visser for the episode "Noel ''). Animaniacs also won two more Daytime Emmy Awards, one for "Outstanding Animated Children 's Program '' and the other for "Outstanding Music Direction and Composition ''. In 1998, the last year in which new episodes of Animaniacs were produced, Animaniacs was nominated for an Annie Award in "Outstanding Achievement in an Animated Daytime Television Program ''. Animaniacs also won a Daytime Emmy Award in "Outstanding Music Direction and Composition '' (for the episode "The Brain 's Apprentice ''). In 1999, Animaniacs won a Daytime Emmy Award for "Outstanding Achievement in Music Direction and Composition ''. When Animaniacs won this award, it set a record for most Daytime Emmy Awards in the field of "Outstanding Achievement in Music Direction and Composition '' for any individual animation studio. In 2009, IGN named Animaniacs the 17th - best animated television series. On September 24, 2013, Animaniacs was listed among TV Guide 's "60 Greatest TV Cartoons of All Time ''.
Before Animaniacs was put into production in 1991, various collaboration and brainstorming sessions were held to create both the characters and the premise of the series. Among the ideas that were discarded were Rita and Runt being the hosts of the show and the Warners being duck characters that senior producer Tom Ruegger had drawn in his college years. After the characters for the series were created, they were all shown to executive producer Steven Spielberg, who would decide which characters would make it into Animaniacs (the characters Buttons and Mindy were chosen by Spielberg 's daughter). The characters ' designs came from various sources, including caricatures of other writers, designs based on early cartoon characters, and characters that simply had a more modern design.
Animaniacs premiered on September 13, 1993, on the Fox Kids programming block of the Fox network, and ran there until September 8, 1995; new episodes aired from the 1993 through 1994 seasons. Animaniacs aired with a 65 - episode first season because these episodes were ordered by Fox all at once. While on Fox Kids, Animaniacs gained fame for its name and became the second-most popular show among children ages 2 -- 11 and children ages 6 -- 11, second to Mighty Morphin Power Rangers (which began that same year). On March 30, 1994, Yakko, Wakko, and Dot first theatrically appeared in the animated short, "I 'm Mad '', which opened nationwide alongside the full - length animated feature, Thumbelina. The musical short featured Yakko, Wakko, and Dot bickering during a car trip. Producers Steven Spielberg, Tom Ruegger, and Jean MacCurdy wanted "I 'm Mad '' to be the first of a series of shorts to bring Animaniacs to a wider audience. However, "I 'm Mad '' was the only Animaniacs theatrical short produced. The short was later incorporated into Animaniacs episode 69. Following the 65th episode of the series, Animaniacs continued to air in reruns on Fox Kids. The only new episodes during this time included a short, four - episode long second season that was quickly put together from unused scripts. After Fox Kids aired Animaniacs reruns for a year, the series switched to the new Warner Bros. children 's programming block, Kids ' WB.
The series was popular enough for Warner Bros. Animation to invest in additional episodes of Animaniacs past the traditional 65 - episode marker for syndication. Animaniacs premiered on the new Kids ' WB line - up on September 9, 1995, with a new season of 13 episodes. At this time, the show 's popular cartoon characters, Pinky and the Brain, were spun off from Animaniacs into their own TV series. Warner Bros. stated in a press release that Animaniacs gathered over one million children viewers every week.
Despite the series ' success on Fox Kids, Animaniacs on Kids ' WB was only successful in an unintended way, bringing in adult viewers and viewers outside the Kids ' WB target demographic of young children. This unintended result of adult viewers and not enough young viewers put pressure on the WB network from advertisers and caused dissatisfaction from the WB network towards Animaniacs. Slowly, orders from the WB for more Animaniacs episodes dwindled and Animaniacs had a couple more short seasons, relying on leftover scripts and storyboards. The fourth season had eight episodes, which was reduced from 18 because of Warner Bros. ' dissatisfaction with Animaniacs. The 99th and final Animaniacs episode aired on November 14, 1998.
The Chicago Tribune reported in 1999 that production of new Animaniacs episodes had ceased and that the direct - to - video film Wakko 's Wish would serve as a finale for the series. Animation World Network reported that Warner Bros. laid off over 100 artists, contributing to the reduced production of the original series. Producer Tom Ruegger explained that rather than produce new episodes, Warner Bros. instead decided to use the back - catalog of Animaniacs episodes until "someone clamors for more. '' Animaniacs segments were shown along with segments from other cartoons as part of The Cat&Birdy Warneroonie PinkyBrainy Big Cartoonie Show. Ruegger said at the time that the hiatus was "temporary ''. Following the end of the series, the Animaniacs team developed Wakko 's Wish, which Warner Bros. released on December 21, 1999. In 2016, Ruegger said on his Reddit AMA that the decline of Animaniacs and other series was the result of Warner Bros. ' investment in the much cheaper anime series Pokémon. After Warner Bros. acquired the rights to distribute that cheaper and successful anime, the network chose to invest less in original programming like Animaniacs.
After Animaniacs, Spielberg collaborated with Warner Bros. Animation again to produce the short - lived series Steven Spielberg Presents Freakazoid, along with the Animaniacs spin - off series Pinky and the Brain, from which Pinky, Elmyra & the Brain was later spun off. Warner Bros. also produced two other comedy animated series in the later half of the decade titled Histeria! and Detention, which were short - lived and unsuccessful compared to the earlier series. Later, Warner Bros. cut back the size of its animation studio because the show Histeria! went over its budget, and most production on further Warner Bros. animated comedy series ceased.
Animaniacs, along with Tiny Toon Adventures, continued to rerun in syndication through the 1990s into the early 2000s after production of new episodes ceased. In the US, Animaniacs aired on Cartoon Network, originally as a one - off airing on January 31, 1997, and then on the regular schedule from August 31, 1998 until the spring of 2001, when Nickelodeon bought the rights to air the series beginning on September 1, 2001. Nickelodeon transferred the series to its newly launched sister channel Nicktoons on May 1, 2002, and aired there until July 7, 2005. Animaniacs aired on Hub Network from January 7, 2013 until October 10, 2014.
Paulsen, Harnell, and MacNeille have announced plans to tour in 2016 to perform songs from Animaniacs along with a full orchestra. Among the songs will be an updated version of "Yakko 's World '' by Randy Rogel with a new verse covering nations that have been formed since the song 's original airing, such as those from the break - up of the Soviet Union.
The Warners starred in the feature - length, direct - to - video movie Wakko 's Wish. The movie takes place in the fictional town of Acme Falls, in which the Warners and the rest of the Animaniacs cast are under the rule of a greedy king who conquered their home country from a neighboring country. When the Warners find out about a star that will grant a wish to the first person that touches it, the Warners, the villagers (the Animaniacs cast), and the king race to get to it first. Although children and adults rated Wakko 's Wish highly in test - screenings, Warner Bros. decided to release it direct - to - video, rather than spend money on advertising. Warner Bros. released the movie on VHS on December 21, 1999; the film was then released on DVD on October 7, 2014.
Episodes of the show have been released on DVD and VHS during and after the series run.
VHS tapes of Animaniacs were released in the United States and in the United Kingdom. All of these tapes are out of production, but they are still available from online sellers. The episodes featured are selected at random and are in no particular order relative to the series ' broadcast order. Each video featured four to five episodes, accompanied by a handful of shorter skits, with a running time of about 45 minutes.
Beginning on July 25, 2006, Warner Home Video released DVD volume sets of Animaniacs episodes in order of the episodes ' original airdates. Volume one of Animaniacs sold very well; with over half of the product sold in the first week, it became one of the fastest - selling animation DVD sets that Warner Home Video had ever put out.
An Animaniacs comic book, published by DC Comics, ran from 1995 to 2000 (59 regular monthly issues, plus two specials). Initially, these featured all the characters except for Pinky and the Brain, who were published in their own comic series, though cameos were possible. The Animaniacs comic series was later renamed Animaniacs! Featuring Pinky and the Brain. The Animaniacs comic series, like the show, parodied TV and comics standards, such as Pulp Fiction and The X-Files, among others.
Animaniacs was soon brought into the video game industry, with several games produced based on the series.
Because Animaniacs had many songs, record labels Rhino Entertainment and Time Warner Kids produced albums featuring songs from the show. These albums include Animaniacs (1993), Yakko 's World (1994), A Christmas Plotz (1995), Animaniacs Variety Pack (1995), A Hip - Opera Christmas (1997), The Animaniacs Go Hollywood (1999), The Animaniacs Wacky Universe (1999), and the compilation album, The Animaniacs Faboo! Collection (1995).
According to IndieWire in May 2017, Amblin Television and Warner Bros. Animation were in the early stages of developing a reboot of Animaniacs. The interest in the reboot was driven by a surge of popularity for the show when it was made available on Netflix in 2016, plus numerous successful projects that have revived interest in older shows, such as Fuller House.
The reboot was officially announced by the streaming service Hulu in January 2018 in partnership with Spielberg and Warner Bros. Domestic Television Distribution. The reboot will consist of at least two seasons, to start airing in 2020. Yakko, Wakko, and Dot will return, along with appearances by Pinky and the Brain each episode; however, no voice actors were announced at this point. The deal includes rights for Hulu to stream all episodes of Animaniacs, Pinky & the Brain, Pinky, Elmyra and the Brain, and Tiny Toon Adventures. Spielberg will return to serve as executive producer, alongside Sam Register, president of Warner Bros. Animation and Warner Digital Series, and Amblin Television co-presidents Justin Falvey and Darryl Frank. The show will be produced by Amblin Television and Warner Bros. Animation. Hulu considers the show its first original series targeted for families.
Wellesley Wild will serve as the showrunner for the series, while Carl Faruolo will serve as supervising director.
a. Sources vary on the size of the Animaniacs orchestra. On the "Animaniacs Live! '' featurette, host Maurice LaMarche refers to the orchestra as "35 - piece ''. A 1995 Warner Bros. Press release refers to the orchestra as "30 - piece '', while an article of the New York Times reads that the orchestra was a much smaller "20 - piece ''. In an interview for "The Cartoon Music Book '', Animaniacs composer Richard Stone said that the number of people in the orchestra varied, depending on the episode and the type of music needed, but said that "I do n't think we ever had more than thirty - two (pieces) ''.
|
how many base pairs in diploid human genome | Human genome - wikipedia
3,234.83 Mb (Mega-basepairs) per haploid genome
The human genome is the complete set of nucleic acid sequences for humans, encoded as DNA within the 23 chromosome pairs in cell nuclei and in a small DNA molecule found within individual mitochondria. Human genomes include both protein - coding DNA genes and noncoding DNA. Haploid human genomes, which are contained in germ cells (the egg and sperm gamete cells created in the meiosis phase of sexual reproduction before fertilization creates a zygote) consist of three billion DNA base pairs, while diploid genomes (found in somatic cells) have twice the DNA content. While there are significant differences among the genomes of human individuals (on the order of 0.1 %), these are considerably smaller than the differences between humans and their closest living relatives, the chimpanzees (approximately 4 %) and bonobos.
The Human Genome Project produced the first complete sequences of individual human genomes, with the first draft sequence and initial analysis being published on February 12, 2001. The human genome was the first of all vertebrates to be completely sequenced. As of 2012, thousands of human genomes have been completely sequenced, and many more have been mapped at lower levels of resolution. The resulting data are used worldwide in biomedical science, anthropology, forensics and other branches of science. There is a widely held expectation that genomic studies will lead to advances in the diagnosis and treatment of diseases, and to new insights in many fields of biology, including human evolution.
Although the sequence of the human genome has been (almost) completely determined by DNA sequencing, it is not yet fully understood. Most (though probably not all) genes have been identified by a combination of high throughput experimental and bioinformatics approaches, yet much work still needs to be done to further elucidate the biological functions of their protein and RNA products. Recent results suggest that most of the vast quantities of noncoding DNA within the genome have associated biochemical activities, including regulation of gene expression, organization of chromosome architecture, and signals controlling epigenetic inheritance.
There are an estimated 19,000 - 20,000 human protein - coding genes. The estimate of the number of human genes has been repeatedly revised down from initial predictions of 100,000 or more as genome sequence quality and gene finding methods have improved, and could continue to drop further. Protein - coding sequences account for only a very small fraction of the genome (approximately 1.5 %), and the rest is associated with non-coding RNA molecules, regulatory DNA sequences, LINEs, SINEs, introns, and sequences for which as yet no function has been determined.
In June 2016, scientists formally announced HGP - Write, a plan to synthesize the human genome.
The total length of the human genome is over 3 billion base pairs. The genome is organized into 22 paired chromosomes, plus the X chromosome (one in males, two in females) and, in males only, one Y chromosome. These are all large linear DNA molecules contained within the cell nucleus. The genome also includes the mitochondrial DNA, a comparatively small circular molecule present in each mitochondrion. Basic information about these molecules and their gene content, based on a reference genome that does not represent the sequence of any specific individual, are provided in the following table. (Data source: Ensembl genome browser release 87, December 2016 for most values; Ensembl genome browser release 68, July 2012 for miRNA, rRNA, snRNA, snoRNA.)
Table 1 (above) summarizes the physical organization and gene content of the human reference genome, with links to the original analysis, as published in the Ensembl database at the European Bioinformatics Institute (EBI) and Wellcome Trust Sanger Institute. Chromosome lengths were estimated by multiplying the number of base pairs by 0.34 nanometers, the distance between base pairs in the DNA double helix. The number of proteins is based on the number of initial precursor mRNA transcripts, and does not include products of alternative pre-mRNA splicing, or modifications to protein structure that occur after translation.
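As a rough illustration of that conversion, the following Python sketch (illustrative only; the variable names are invented here) applies the 0.34 nm spacing to the 3,234.83 Mb haploid genome size quoted at the top of this article:

# Rough length estimate for the haploid human genome laid end to end,
# using the 0.34 nm base-pair spacing described above.
BP_SPACING_NM = 0.34
haploid_bp = 3_234.83e6                         # base pairs per haploid genome (from this article)
length_m = haploid_bp * BP_SPACING_NM * 1e-9    # nanometres converted to metres
print(f"Haploid genome stretched out: ~{length_m:.2f} m")   # ~1.10 m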
Variations are unique DNA sequence differences that have been identified in the individual human genome sequences analyzed by Ensembl as of December, 2016. The number of identified variations is expected to increase as further personal genomes are sequenced and analyzed. In addition to the gene content shown in this table, a large number of non-expressed functional sequences have been identified throughout the human genome (see below). Links open windows to the reference chromosome sequences in the EBI genome browser.
Small non-coding RNAs are RNAs of as many as 200 bases that do not have protein - coding potential. These include: microRNAs, or miRNAs (post-transcriptional regulators of gene expression), small nuclear RNAs, or snRNAs (the RNA components of spliceosomes), and small nucleolar RNAs, or snoRNAs (involved in guiding chemical modifications to other RNA molecules). Long non-coding RNAs are RNA molecules longer than 200 bases that do not have protein - coding potential. These include: ribosomal RNAs, or rRNAs (the RNA components of ribosomes), and a variety of other long RNAs that are involved in regulation of gene expression, epigenetic modifications of DNA nucleotides and histone proteins, and regulation of the activity of protein - coding genes. Small discrepancies between total - small - ncRNA numbers and the numbers of specific types of small ncRNAs result from the former values being sourced from Ensembl release 87 and the latter from Ensembl release 68.
Although the human genome has been completely sequenced for all practical purposes, there are still hundreds of gaps in the sequence. A recent study noted more than 160 euchromatic gaps, of which 50 were subsequently closed. However, numerous gaps remain in the heterochromatic parts of the genome, which are much harder to sequence due to numerous repeats and other intractable sequence features.
The human reference genome (GRCh38) has been successfully compressed ~ 5.2-fold (to marginally less than 550 MB) in 155 minutes using a desktop computer with 6.4 GB of RAM.
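As a back - of - the - envelope check (a sketch only; the one - byte - per - base comparison is an assumption for illustration, not a figure from the cited work), the implied uncompressed size can be recovered from the numbers above:

# Implied uncompressed size behind the ~5.2-fold compression of GRCh38.
compressed_mb = 550
ratio = 5.2
implied_uncompressed_gb = compressed_mb * ratio / 1000   # decimal GB
print(f"Implied uncompressed size: ~{implied_uncompressed_gb:.1f} GB")   # ~2.9 GB
# Roughly what ~3.1 billion bases occupy as plain one-byte-per-base text,
# which is consistent with the size of the reference assembly.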
The content of the human genome is commonly divided into coding and noncoding DNA sequences. Coding DNA is defined as those sequences that can be transcribed into mRNA and translated into proteins during the human life cycle; these sequences occupy only a small fraction of the genome (< 2 %). Noncoding DNA is made up of all of those sequences (ca. 98 % of the genome) that are not used to encode proteins.
Some noncoding DNA contains genes for RNA molecules with important biological functions (noncoding RNA, for example ribosomal RNA and transfer RNA). The exploration of the function and evolutionary origin of noncoding DNA is an important goal of contemporary genome research, including the ENCODE (Encyclopedia of DNA Elements) project, which aims to survey the entire human genome, using a variety of experimental tools whose results are indicative of molecular activity.
Because non-coding DNA greatly outnumbers coding DNA, the concept of the sequenced genome has become a more focused analytical concept than the classical concept of the DNA - coding gene.
The mutation rate of the human genome is a very important factor in calculating evolutionary time points. Researchers have counted the genetic differences between humans and apes and, by dividing that number by the age of the fossil of the most recent common ancestor of humans and apes, calculated the mutation rate. Recent studies using next - generation sequencing technologies have concluded that the mutation rate is slower than previously assumed, which does not align with established time points for human migration patterns and suggests a new evolutionary time scale. Human fossils found in Israel that are about 100,000 years old have compounded this newfound uncertainty in the human migration timeline.
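A minimal Python sketch of that calculation is shown below; the 1.23 % human - chimpanzee divergence figure appears later in this article, while the ~ 6.5 - million - year split time is an assumed round number used only for illustration:

# Illustrative phylogenetic mutation-rate estimate (divergence / time).
divergence_per_site = 0.0123     # human-chimpanzee sequence divergence (quoted later in this article)
split_time_years = 6.5e6         # assumed age of the most recent common ancestor (illustrative)
# Mutations accumulate along both lineages, so divide by twice the split time.
rate_per_site_per_year = divergence_per_site / (2 * split_time_years)
print(f"~{rate_per_site_per_year:.1e} mutations per site per year")   # ~9.5e-10
# Direct parent-offspring sequencing gives a lower rate, which is the
# discrepancy with migration time points discussed above.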
Protein - coding sequences represent the most widely studied and best understood component of the human genome. These sequences ultimately lead to the production of all human proteins, although several biological processes (e.g. DNA rearrangements and alternative pre-mRNA splicing) can lead to the production of many more unique proteins than the number of protein - coding genes.
The complete modular protein - coding capacity of the genome is contained within the exome, and consists of DNA sequences encoded by exons that can be translated into proteins. Because of its biological importance, and the fact that it constitutes less than 2 % of the genome, sequencing of the exome was the first major milepost of the Human Genome Project.
Number of protein - coding genes. About 20,000 human proteins have been annotated in databases such as Uniprot. Historically, estimates for the number of protein genes have varied widely, ranging up to 2,000,000 in the late 1960s, but several researchers pointed out in the early 1970s that the estimated mutational load from deleterious mutations placed an upper limit of approximately 40,000 for the total number of functional loci (this includes protein - coding and functional non-coding genes).
The number of human protein - coding genes is not significantly larger than that of many less complex organisms, such as the roundworm and the fruit fly. This difference may result from the extensive use of alternative pre-mRNA splicing in humans, which provides the ability to build a very large number of modular proteins through the selective incorporation of exons.
Protein - coding capacity per chromosome. Protein - coding genes are distributed unevenly across the chromosomes, ranging from a few dozen to more than 2000, with an especially high gene density within chromosomes 19, 11, and 1 (Table 1). Each chromosome contains various gene - rich and gene - poor regions, which may be correlated with chromosome bands and GC - content. The significance of these nonrandom patterns of gene density is not well understood.
Size of protein - coding genes. The size of protein - coding genes within the human genome shows enormous variability (Table 2). The median size of a protein - coding gene is 26,288 bp (mean = 66,577 bp). For example, the gene for histone H1a (HIST1H1A) is relatively small and simple, lacking introns and encoding an mRNA of 781 nt and a 215 amino acid protein (648 nt open reading frame). Dystrophin (DMD) is the largest protein - coding gene in the human reference genome, spanning a total of 2.2 Mb, while Titin (TTN) has the longest coding sequence (114,414 bp), the largest number of exons (363), and the longest single exon (17,106 bp). Over the whole genome, the median size of an exon is 122 bp (mean = 145 bp), the median number of exons is 7 (mean = 8.8), and the median coding sequence encodes 367 amino acids (mean = 447 amino acids).
Table 2. Examples of human protein - coding genes. Chrom, chromosome. Alt splicing, alternative pre-mRNA splicing. (Data source: Ensembl genome browser release 68, July 2012)
Recently, a systematic meta - analysis of updated human genome data found that the largest protein - coding gene in the human reference genome is RBFOX1 (RNA binding protein, fox - 1 homolog 1), spanning a total of 2.47 Mb. Over the whole genome, considering a curated set of protein - coding genes, the median size of an exon is currently estimated to be 133 bp (mean = 309 bp), the median number of exons is currently estimated to be 8 (mean = 11), and the median coding sequence is currently estimated to encode 425 amino acids (mean = 553 amino acids).
Noncoding DNA is defined as all of the DNA sequences within a genome that are not found within protein - coding exons, and so are never represented within the amino acid sequence of expressed proteins. By this definition, more than 98 % of the human genome is composed of ncDNA.
Numerous classes of noncoding DNA have been identified, including genes for noncoding RNA (e.g. tRNA and rRNA), pseudogenes, introns, untranslated regions of mRNA, regulatory DNA sequences, repetitive DNA sequences, and sequences related to mobile genetic elements.
Numerous sequences that are included within genes are also defined as noncoding DNA. These include genes for noncoding RNA (e.g. tRNA, rRNA), and untranslated components of protein - coding genes (e.g. introns, and 5 ' and 3 ' untranslated regions of mRNA).
Protein - coding sequences (specifically, coding exons) constitute less than 1.5 % of the human genome. In addition, about 26 % of the human genome is introns. Aside from genes (exons and introns) and known regulatory sequences (8 -- 20 %), the human genome contains regions of noncoding DNA. The exact amount of noncoding DNA that plays a role in cell physiology has been hotly debated. Recent analysis by the ENCODE project indicates that 80 % of the entire human genome is either transcribed, binds to regulatory proteins, or is associated with some other biochemical activity.
It remains controversial, however, whether all of this biochemical activity contributes to cell physiology, or whether a substantial portion of it is the result of transcriptional and biochemical noise, which must be actively filtered out by the organism. Excluding protein - coding sequences, introns, and regulatory regions, much of the remaining non-coding DNA is composed of repetitive sequences and the other classes described below. Many DNA sequences that do not play a role in gene expression have important biological functions. Comparative genomics studies indicate that about 5 % of the genome contains sequences of noncoding DNA that are highly conserved, sometimes on time - scales representing hundreds of millions of years, implying that these noncoding regions are under strong evolutionary pressure and positive selection.
Many of these sequences regulate the structure of chromosomes by limiting the regions of heterochromatin formation and regulating structural features of the chromosomes, such as the telomeres and centromeres. Other noncoding regions serve as origins of DNA replication. Finally, several regions are transcribed into functional noncoding RNAs that regulate the expression of protein - coding genes, mRNA translation and stability (see miRNA), chromatin structure (including histone modifications), DNA methylation, and DNA recombination, and that cross-regulate other noncoding RNAs. It is also likely that many transcribed noncoding regions do not serve any role and that this transcription is the product of non-specific RNA polymerase activity.
Pseudogenes are inactive copies of protein - coding genes, often generated by gene duplication, that have become nonfunctional through the accumulation of inactivating mutations. Table 1 shows that the number of pseudogenes in the human genome is on the order of 13,000, and in some chromosomes is nearly the same as the number of functional protein - coding genes. Gene duplication is a major mechanism through which new genetic material is generated during molecular evolution.
For example, the olfactory receptor gene family is one of the best - documented examples of pseudogenes in the human genome. More than 60 percent of the genes in this family are non-functional pseudogenes in humans. By comparison, only 20 percent of genes in the mouse olfactory receptor gene family are pseudogenes. Research suggests that this is a species - specific characteristic, as the most closely related primates all have proportionally fewer pseudogenes. This genetic discovery helps to explain the less acute sense of smell in humans relative to other mammals.
Noncoding RNA molecules play many essential roles in cells, especially in the many reactions of protein synthesis and RNA processing. Noncoding RNA include tRNA, ribosomal RNA, microRNA, snRNA and other non-coding RNA genes including about 60,000 long non coding RNAs (lncRNAs). Although the number of reported lncRNA genes continues to rise and the exact number in the human genome is yet to be defined, many of them are argued to be non-functional.
Many ncRNAs are critical elements in gene regulation and expression. Noncoding RNA also contributes to epigenetics, transcription, RNA splicing, and the translational machinery. The role of RNA in genetic regulation and disease offers a new potential level of unexplored genomic complexity.
In addition to the ncRNA molecules that are encoded by discrete genes, the initial transcripts of protein coding genes usually contain extensive noncoding sequences, in the form of introns, 5 ' - untranslated regions (5 ' - UTR), and 3 ' - untranslated regions (3 ' - UTR). Within most protein - coding genes of the human genome, the length of intron sequences is 10 - to 100 - times the length of exon sequences (Table 2).
The human genome has many different regulatory sequences which are crucial to controlling gene expression. Conservative estimates indicate that these sequences make up 8 % of the genome; however, extrapolations from the ENCODE project suggest that 20 - 40 % of the genome is gene regulatory sequence. Some types of non-coding DNA are genetic "switches '' that do not encode proteins but regulate when and where genes are expressed (such sequences are called enhancers).
Regulatory sequences have been known since the late 1960s. The first identification of regulatory sequences in the human genome relied on recombinant DNA technology. Later with the advent of genomic sequencing, the identification of these sequences could be inferred by evolutionary conservation. The evolutionary branch between the primates and mouse, for example, occurred 70 -- 90 million years ago. So computer comparisons of gene sequences that identify conserved non-coding sequences will be an indication of their importance in duties such as gene regulation.
Other genomes have been sequenced with the same intention of aiding conservation - guided methods, for example the pufferfish genome. However, regulatory sequences disappear and re-evolve during evolution at a high rate.
As of 2012, the efforts have shifted toward finding interactions between DNA and regulatory proteins by the technique ChIP - Seq, or gaps where the DNA is not packaged by histones (DNase hypersensitive sites), both of which tell where there are active regulatory sequences in the investigated cell type.
Repetitive DNA sequences comprise approximately 50 % of the human genome.
About 8 % of the human genome consists of tandem DNA arrays or tandem repeats, low complexity repeat sequences that have multiple adjacent copies (e.g. "CAGCAGCAG... ''). The tandem sequences may be of variable lengths, from two nucleotides to tens of nucleotides. These sequences are highly variable, even among closely related individuals, and so are used for genealogical DNA testing and forensic DNA analysis.
Repeated sequences of fewer than ten nucleotides (e.g. the dinucleotide repeat (AC)) are termed microsatellite sequences. Among the microsatellite sequences, trinucleotide repeats are of particular importance, as they sometimes occur within coding regions of protein genes and may lead to genetic disorders. For example, Huntington 's disease results from an expansion of the trinucleotide repeat (CAG) within the Huntingtin gene on human chromosome 4. Telomeres (the ends of linear chromosomes) end with a microsatellite hexanucleotide repeat of the sequence (TTAGGG).
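For illustration, the short Python sketch below scans a toy sequence for a run of CAG triplets of the kind described above; the sequence is invented for the example and is not real gene sequence.

import re

# Find the longest run of consecutive CAG triplets in a toy sequence.
seq = "GCGACC" + "CAG" * 23 + "TTCGAA"          # invented example sequence
tracts = re.findall(r"(?:CAG){2,}", seq)
longest = max(tracts, key=len) if tracts else ""
print(f"Longest (CAG)n tract: n = {len(longest) // 3}")   # n = 23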
Tandem repeats of longer sequences (arrays of repeated sequences 10 -- 60 nucleotides long) are termed minisatellites.
Transposable genetic elements, DNA sequences that can replicate and insert copies of themselves at other locations within a host genome, are an abundant component in the human genome. The most abundant transposon lineage, Alu, has about 50,000 active copies, and can be inserted into intragenic and intergenic regions. One other lineage, LINE - 1, has about 100 active copies per genome (the number varies between people). Together with non-functional relics of old transposons, they account for over half of total human DNA. Sometimes called "jumping genes '', transposons have played a major role in sculpting the human genome. Some of these sequences represent endogenous retroviruses, DNA copies of viral sequences that have become permanently integrated into the genome and are now passed on to succeeding generations.
Mobile elements within the human genome can be classified into LTR retrotransposons (8.3 % of total genome), SINEs (13.1 % of total genome) including Alu elements, LINEs (20.4 % of total genome), SVAs and Class II DNA transposons (2.9 % of total genome).
With the exception of identical twins, all humans show significant variation in genomic DNA sequences. The human reference genome (HRG) is used as a standard sequence reference.
There are several important points concerning the human reference genome.
Most studies of human genetic variation have focused on single - nucleotide polymorphisms (SNPs), which are substitutions in individual bases along a chromosome. Most analyses estimate that SNPs occur 1 in 1000 base pairs, on average, in the euchromatic human genome, although they do not occur at a uniform density. Thus follows the popular statement that "we are all, regardless of race, genetically 99.9 % the same '', although this would be somewhat qualified by most geneticists. For example, a much larger fraction of the genome is now thought to be involved in copy number variation. A large - scale collaborative effort to catalog SNP variations in the human genome is being undertaken by the International HapMap Project.
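The arithmetic behind the "99.9 % the same '' statement can be restated in a few lines (a sketch using only the one - SNP - per - 1,000 - bases figure from the paragraph above):

# One SNP per ~1,000 base pairs means ~0.1% of positions differ
# between two individuals, i.e. ~99.9% single-nucleotide identity.
snp_density = 1 / 1000
identity = 1 - snp_density
print(f"Pairwise identity from SNPs alone: {identity:.1%}")   # 99.9%
# As noted above, copy-number variation adds differences that this
# single-nucleotide figure does not capture.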
The genomic loci and length of certain types of small repetitive sequences are highly variable from person to person, which is the basis of DNA fingerprinting and DNA paternity testing technologies. The heterochromatic portions of the human genome, which total several hundred million base pairs, are also thought to be quite variable within the human population (they are so repetitive and so long that they can not be accurately sequenced with current technology). These regions contain few genes, and it is unclear whether any significant phenotypic effect results from typical variation in repeats or heterochromatin.
Most gross genomic mutations in gamete germ cells probably result in inviable embryos; however, a number of human diseases are related to large - scale genomic abnormalities. Down syndrome, Turner Syndrome, and a number of other diseases result from nondisjunction of entire chromosomes. Cancer cells frequently have aneuploidy of chromosomes and chromosome arms, although a cause and effect relationship between aneuploidy and cancer has not been established.
Whereas a genome sequence lists the order of every DNA base in a genome, a genome map identifies the landmarks. A genome map is less detailed than a genome sequence and aids in navigating around the genome.
An example of a variation map is the HapMap being developed by the International HapMap Project. The HapMap is a haplotype map of the human genome, "which will describe the common patterns of human DNA sequence variation. '' It catalogs the patterns of small - scale variations in the genome that involve single DNA letters, or bases.
Researchers published the first sequence - based map of large - scale structural variation across the human genome in the journal Nature in May 2008. Large - scale structural variations are differences in the genome among people that range from a few thousand to a few million DNA bases; some are gains or losses of stretches of genome sequence and others appear as re-arrangements of stretches of sequence. These variations include differences in the number of copies individuals have of a particular gene, deletions, translocations and inversions.
Single - nucleotide polymorphisms (SNPs) do not occur homogeneously across the human genome. In fact, there is enormous diversity in SNP frequency between genes, reflecting different selective pressures on each gene as well as different mutation and recombination rates across the genome. However, because studies on SNPs are biased towards coding regions, the data generated from them are unlikely to reflect the overall distribution of SNPs throughout the genome. Therefore, the SNP Consortium protocol was designed to identify SNPs with no bias towards coding regions, and the Consortium 's 100,000 SNPs generally reflect sequence diversity across the human chromosomes. The SNP Consortium aims to expand the number of SNPs identified across the genome to 300,000 by the end of the first quarter of 2001.
Changes in non-coding sequence and synonymous changes in coding sequence are generally more common than non-synonymous changes, reflecting greater selective pressure reducing diversity at positions dictating amino acid identity. Transitional changes are more common than transversions, with CpG dinucleotides showing the highest mutation rate, presumably due to deamination.
A personal genome sequence is a (nearly) complete sequence of the chemical base pairs that make up the DNA of a single person. Because medical treatments have different effects on different people due to genetic variations such as single - nucleotide polymorphisms (SNPs), the analysis of personal genomes may lead to personalized medical treatment based on individual genotypes.
The first personal genome sequence to be determined was that of Craig Venter in 2007. Personal genomes had not been sequenced in the public Human Genome Project to protect the identity of volunteers who provided DNA samples. That sequence was derived from the DNA of several volunteers from a diverse population. However, early in the Venter - led Celera Genomics genome sequencing effort the decision was made to switch from sequencing a composite sample to using DNA from a single individual, later revealed to have been Venter himself. Thus the Celera human genome sequence released in 2000 was largely that of one man. Subsequent replacement of the early composite - derived data and determination of the diploid sequence, representing both sets of chromosomes, rather than the haploid sequence originally reported, allowed the release of the first personal genome. In April 2008, that of James Watson was also completed. Since then, hundreds of personal genome sequences have been released, including those of Desmond Tutu and of a Paleo - Eskimo. In November 2013, a Spanish family made their personal genomics data publicly available under a Creative Commons public domain license. The work was led by Manuel Corpas, and the data were obtained by direct - to - consumer genetic testing with 23andMe and the Beijing Genomics Institute. This is believed to be the first such public genomics dataset for a whole family.
The sequencing of individual genomes further unveiled levels of genetic complexity that had not been appreciated before. Personal genomics helped reveal the significant level of diversity in the human genome attributed not only to SNPs but structural variations as well. However, the application of such knowledge to the treatment of disease and in the medical field is only in its very beginnings. Exome sequencing has become increasingly popular as a tool to aid in diagnosis of genetic disease because the exome contributes only 1 % of the genomic sequence but accounts for roughly 85 % of mutations that contribute significantly to disease.
In humans, gene knockouts naturally occur as heterozygous or homozygous loss - of - function gene knockouts. These knockouts are often difficult to distinguish, especially within heterogeneous genetic backgrounds. They are also difficult to find as they occur in low frequencies.
Populations with high rates of consanguinity, such as countries with high rates of first - cousin marriages, display the highest frequencies of homozygous gene knockouts. Such populations include Pakistan, Iceland, and Amish populations. These populations with a high level of parental - relatedness have been subjects of human knock out research which has helped to determine the function of specific genes in humans. By distinguishing specific knockouts, researchers are able to use phenotypic analyses of these individuals to help characterize the gene that has been knocked out.
Knockouts in specific genes can cause genetic diseases, potentially have beneficial effects, or even result in no phenotypic effect at all. However, determining a knockout 's phenotypic effect in humans can be challenging. Challenges to characterizing and clinically interpreting knockouts include difficulty in calling DNA variants, determining disruption of protein function (annotation), and considering the amount of influence mosaicism has on the phenotype.
One major study that investigated human knockouts is the Pakistan Risk of Myocardial Infarction study. It was found that individuals possessing a heterozygous loss - of - function gene knockout for the APOC3 gene had lower triglycerides in the blood after consuming a high fat meal as compared to individuals without the mutation. However, individuals possessing homozygous loss - of - function gene knockouts of the APOC3 gene displayed the lowest level of triglycerides in the blood after the fat load test, as they produce no functional APOC3 protein.
Most aspects of human biology involve both genetic (inherited) and non-genetic (environmental) factors. Some inherited variation influences aspects of our biology that are not medical in nature (height, eye color, ability to taste or smell certain compounds, etc.). Moreover, some genetic disorders only cause disease in combination with the appropriate environmental factors (such as diet). With these caveats, genetic disorders may be described as clinically defined diseases caused by genomic DNA sequence variation. In the most straightforward cases, the disorder can be associated with variation in a single gene. For example, cystic fibrosis is caused by mutations in the CFTR gene, and is the most common recessive disorder in caucasian populations with over 1,300 different mutations known.
Disease - causing mutations in specific genes are usually severe in terms of gene function and are fortunately rare; thus, genetic disorders are similarly individually rare. However, since there are many genes that can vary to cause genetic disorders, in aggregate they constitute a significant component of known medical conditions, especially in pediatric medicine. Molecularly characterized genetic disorders are those for which the underlying causal gene has been identified; currently, approximately 2,200 such disorders are annotated in the OMIM database.
Studies of genetic disorders are often performed by means of family - based studies. In some instances population based approaches are employed, particularly in the case of so - called founder populations such as those in Finland, French - Canada, Utah, Sardinia, etc. Diagnosis and treatment of genetic disorders are usually performed by a geneticist - physician trained in clinical / medical genetics. The results of the Human Genome Project are likely to provide increased availability of genetic testing for gene - related disorders, and eventually improved treatment. Parents can be screened for hereditary conditions and counselled on the consequences, the probability it will be inherited, and how to avoid or ameliorate it in their offspring.
As noted above, there are many different kinds of DNA sequence variation, ranging from complete extra or missing chromosomes down to single nucleotide changes. It is generally presumed that much naturally occurring genetic variation in human populations is phenotypically neutral, i.e. has little or no detectable effect on the physiology of the individual (although there may be fractional differences in fitness defined over evolutionary time frames). Genetic disorders can be caused by any or all known types of sequence variation. To molecularly characterize a new genetic disorder, it is necessary to establish a causal link between a particular genomic sequence variant and the clinical disease under investigation. Such studies constitute the realm of human molecular genetics.
With the advent of the Human Genome and International HapMap Project, it has become feasible to explore subtle genetic influences on many common disease conditions such as diabetes, asthma, migraine, schizophrenia, etc. Although some causal links have been made between genomic sequence variants in particular genes and some of these diseases, often with much publicity in the general media, these are usually not considered to be genetic disorders per se as their causes are complex, involving many different genetic and environmental factors. Thus there may be disagreement in particular cases whether a specific medical condition should be termed a genetic disorder. The categorized table below provides the prevalence as well as the genes or chromosomes associated with some human genetic disorders.
Comparative genomics studies of mammalian genomes suggest that approximately 5 % of the human genome has been conserved by evolution since the divergence of extant lineages approximately 200 million years ago, containing the vast majority of genes. The published chimpanzee genome differs from that of the human genome by 1.23 % in direct sequence comparisons. Around 20 % of this figure is accounted for by variation within each species, leaving only ~ 1.06 % consistent sequence divergence between humans and chimps at shared genes. This nucleotide by nucleotide difference is dwarfed, however, by the portion of each genome that is not shared, including around 6 % of functional genes that are unique to either humans or chimps.
In other words, the considerable observable differences between humans and chimps may be due as much or more to genome - level variation in the number, function and expression of genes as to DNA sequence changes in shared genes. Indeed, even within humans, there has been found to be a previously unappreciated amount of copy number variation (CNV) which can make up as much as 5 -- 15 % of the human genome. In other words, between humans, there could be ± 500,000,000 base pairs of DNA, some being active genes, others inactivated, or active at different levels. The full significance of this finding remains to be seen. On average, a typical human protein - coding gene differs from its chimpanzee ortholog by only two amino acid substitutions; nearly one third of human genes have exactly the same protein translation as their chimpanzee orthologs. A major difference between the two genomes is human chromosome 2, which is equivalent to a fusion product of chimpanzee chromosomes 12 and 13 (later renamed chromosomes 2A and 2B, respectively).
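The "± 500,000,000 base pairs '' figure follows from applying the 5 -- 15 % copy - number - variation range to a genome of roughly 3.2 billion base pairs; the rounded genome size below is an assumption consistent with the sizes given earlier in this article:

# How much DNA the 5-15% copy-number-variation range corresponds to.
genome_bp = 3.2e9                 # rounded haploid genome size (assumption)
for fraction in (0.05, 0.15):
    print(f"{fraction:.0%} of the genome: ~{fraction * genome_bp / 1e6:,.0f} million bp")
# Output: 5% -> ~160 million bp, 15% -> ~480 million bp; the upper end is
# the basis of the ~500 million figure quoted above.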
Humans have undergone an extraordinary loss of olfactory receptor genes during our recent evolution, which explains our relatively crude sense of smell compared to most other mammals. Evolutionary evidence suggests that the emergence of color vision in humans and several other primate species has diminished the need for the sense of smell.
In September 2016, scientists reported that, based on human DNA genetic studies, all non-Africans in the world today can be traced to a single population that exited Africa between 50,000 and 80,000 years ago.
The human mitochondrial DNA is of tremendous interest to geneticists, since it undoubtedly plays a role in mitochondrial disease. It also sheds light on human evolution; for example, analysis of variation in the human mitochondrial genome has led to the postulation of a recent common ancestor for all humans on the maternal line of descent (see Mitochondrial Eve).
Due to the lack of a system for checking for copying errors, mitochondrial DNA (mtDNA) has a more rapid rate of variation than nuclear DNA. This 20-fold higher mutation rate allows mtDNA to be used for more accurate tracing of maternal ancestry. Studies of mtDNA in populations have allowed ancient migration paths to be traced, such as the migration of Native Americans from Siberia or Polynesians from southeastern Asia. It has also been used to show that there is no trace of Neanderthal DNA in the European gene mixture inherited through purely maternal lineage. Due to the restrictive all or none manner of mtDNA inheritance, this result (no trace of Neanderthal mtDNA) would be likely unless there were a large percentage of Neanderthal ancestry, or there was strong positive selection for that mtDNA (for example, going back 5 generations, only 1 of your 32 ancestors contributed to your mtDNA, so if one of these 32 was pure Neanderthal you would expect that ~ 3 % of your autosomal DNA would be of Neanderthal origin, yet you would have a ~ 97 % chance to have no trace of Neanderthal mtDNA).
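The parenthetical five - generation example above can be made explicit with a few lines of arithmetic (a sketch; the numbers are the ones used in the text):

# Five generations back: 2**5 = 32 ancestors, exactly one of whom
# (the strictly maternal line) contributed your mtDNA.
generations = 5
ancestors = 2 ** generations                     # 32
autosomal_share = 1 / ancestors                  # expected autosomal contribution of one ancestor
p_not_maternal_line = 1 - 1 / ancestors          # chance that ancestor is off the mtDNA line
print(f"Expected autosomal share from one ancestor: {autosomal_share:.1%}")        # ~3.1%
print(f"Chance that ancestor left no trace in mtDNA: {p_not_maternal_line:.1%}")   # ~96.9%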
Epigenetics describes a variety of features of the human genome that transcend its primary DNA sequence, such as chromatin packaging, histone modifications and DNA methylation, and which are important in regulating gene expression, genome replication and other cellular processes. Epigenetic markers strengthen and weaken transcription of certain genes but do not affect the actual sequence of DNA nucleotides. DNA methylation is a major form of epigenetic control over gene expression and one of the most highly studied topics in epigenetics. During development, the human DNA methylation profile experiences dramatic changes. In early germ line cells, the genome has very low methylation levels. These low levels generally describe active genes. As development progresses, parental imprinting tags lead to increased methylation activity.
Epigenetic patterns can be identified between tissues within an individual as well as between individuals themselves. Identical genes that have differences only in their epigenetic state are called epialleles. Epialleles can be placed into three categories: those directly determined by an individual 's genotype, those influenced by genotype, and those entirely independent of genotype. The epigenome is also influenced significantly by environmental factors. Diet, toxins, and hormones impact the epigenetic state. Studies in dietary manipulation have demonstrated that methyl - deficient diets are associated with hypomethylation of the epigenome. Such studies establish epigenetics as an important interface between the environment and the genome.
|
who has won the most world cups list | List of FIFA World Cup winners - wikipedia
This is a list of all teams, players and coaches who have won the FIFA World Cup tournament since its inception in 1930.
The 21 World Cup tournaments have been won by eight different nations. Brazil has won the most titles, five. The current champion is France, who won the title in 2018.
Participating teams have to register squads for the World Cup, which consisted of 22 players until 1998 and of 23 players from 2002 onwards.
Since 1978, winners ' medals are given to all members of the winning squads. Prior to that, only players who were on the pitch during the final matches received medals. FIFA decided in 2007 to retroactively award winners ' medals to all members of the winning squads between 1930 and 1974.
A total of 445 players have been in the winning team in the World Cup. Brazil 's Pelé is the only one to have won three times, while another 20 have won twice.
20 different coaches have won the World Cup, Italy 's Vittorio Pozzo being the only one to win twice.
Four other coaches finished as winners once and runners - up once: West Germany 's Helmut Schön (winner in 1974, runner - up in 1966) and Franz Beckenbauer (winner in 1990, runner - up in 1986), Argentina 's Carlos Bilardo (winner in 1986, runner - up in 1990), and Brazil 's Mário Zagallo (winner in 1970, runner - up in 1998).
Zagallo (twice), Beckenbauer and France 's Didier Deschamps also won the title as players.
Every edition of the tournament has been won by a coach leading the team of his own country.
|
aerosmith i don't wanna miss a thing wiki | I Do n't Want to Miss a Thing - wikipedia
"I Do n't Want to Miss a Thing '' is a power ballad performed by American hard rock band Aerosmith for the 1998 film Armageddon which Steven Tyler 's daughter Liv Tyler starred in. Written by Diane Warren, the song debuted at number one on the U.S. Billboard Hot 100 (the first # 1 for the band after 28 years together). It is one of three songs performed by the band for the film, the other two being "What Kind of Love Are You On '' and "Sweet Emotion ''. The song stayed at number one for four weeks from September 5 to 26, 1998. The song also stayed at number 1 for several weeks in several other countries. It sold over a million copies in the UK and reached number four on the UK Singles Chart.
This song was also featured in the 2013 video game Saints Row IV. The track was also heard in an episode of "Jewelpet Sunshine '', the third season of the Jewelpet franchise.
This song was Aerosmith 's biggest hit, debuting at number 1 on the U.S. Billboard Hot 100, where it stayed for four weeks in September, and reaching number 1 in many countries around the world, including Australia, the Philippines, Germany, Ireland, Austria, Norway, Italy, the Netherlands, and Switzerland.
The chorus of the song is highly reminiscent of an earlier song Diane Warren co-wrote, "Just Like Jesse James '', which appeared on Cher 's 1989 album Heart of Stone.
The song helped open up Aerosmith to a new generation and remains a slow dance staple.
It was one of many songs written by Warren in that time period. The original version was a collaboration with Chicago musician Phil Kosch of Treaty of Paris and Super Happy Fun Club, a nephew of chart-topping writer Lou Bega. Bega introduced the two, and they penned the initial track, but ultimately Kosch was left uncredited.
The song is notable for having been nominated for both an Academy Award for Best Original Song and a Golden Raspberry Award for Worst Original Song.
The song appeared on the Argentine version and a European re-released version of the album Nine Lives. It also appeared on the Japanese version of Just Push Play.
"Crash '' and the original "Pink '' appeared as tracks 9 and 11, respectively, on all versions of Nine Lives.
The music video for this song was shot at the Minneapolis Armory in 1998 and was directed by Francis Lawrence. It features the band playing the song intertwined with scenes from the film Armageddon. It features an appearance by Steven Tyler 's daughter Liv, who plays Grace Stamper in the film. Steven Tyler injured his knee the day before the shoot, so they used a lot of close - ups because his movement was limited.
The video begins with shots of the moon in orbit and several asteroids passing by safely, then a view of Earth before zooming in to show Steven Tyler singing. The shots interchange between the band and Mission Control viewing the band singing via their monitors. As the video progresses, it reveals that the band is playing in front of what appears to be the fictional Space Shuttle Freedom. Along with Aerosmith, a full orchestra plays in sync with the melody. Then smoke surrounds the orchestra and the members of Aerosmith as Freedom takes off from the launch pad. Finally, the screen goes out as a tearful Grace touches one of the monitors to reach out to her father (real-life father Steven Tyler in the video; on-screen father Harry Stamper, played by Bruce Willis, in the film).
The video was highly successful and greatly contributed to the song's success, receiving heavy airplay on MTV and going on to become the second most popular video of 1998, behind only Brandy and Monica's "The Boy Is Mine". It won the MTV Video Music Award for Best Video from a Film and Best Video at the Boston Music Awards.
In late 1998, country music artist Mark Chesnutt recorded a cover version of the song. His rendition is the first single and title track from his 1999 album of the same name. Chesnutt 's cover spent two weeks at number one on the US Billboard Hot Country Singles & Tracks (now Hot Country Songs) charts in early 1999, and is the last of his eight number ones on that chart. It is also the first of only two singles in his career to reach the Billboard Hot 100, where it peaked at number 17 in early 1999.
|
which of the following is one of the six critical areas of emergency management | Emergency management - Wikipedia
Emergency management or disaster management is the organization and management of the resources and responsibilities for dealing with all humanitarian aspects of emergencies (preparedness, response, and recovery). The aim is to reduce the harmful effects of all hazards, including disasters. It should not be equated to "disaster management ''.
The World Health Organization defines an emergency as the state in which normal procedures are interrupted and immediate measures need to be taken to prevent that state from turning into a disaster. Thus, emergency management is crucial to prevent the disruption from becoming a disaster, which is even harder to recover from. Disaster management means helping people recover from a disaster that has occurred in a particular region. Emergency management is also known as disaster management.
If possible, emergency planning should aim to prevent emergencies from occurring, and failing that, should develop a good action plan to mitigate the results and effects of any emergencies. As time goes on, and more data become available, usually through the study of emergencies as they occur, a plan should evolve. The development of emergency plans is a cyclical process, common to many risk management disciplines, such as Business Continuity and Security Risk Management, as set out below:
There are a number of guidelines and publications regarding Emergency Planning, published by various professional organizations such as ASIS, National Fire Protection Association (NFPA), and the International Association of Emergency Managers (IAEM). There are very few Emergency Management specific standards, and emergency management as a discipline tends to fall under business resilience standards.
In order to avoid, or reduce significant losses to a business, emergency managers should work to identify and anticipate potential risks, hopefully to reduce their probability of occurring. In the event that an emergency does occur, managers should have a plan prepared to mitigate the effects of that emergency, as well as to ensure Business Continuity of critical operations post-incident. It is essential for an organization to include procedures for determining whether an emergency situation has occurred and at what point an emergency management plan should be activated. An emergency plan must be regularly maintained, in a structured and methodical manner, to ensure it is up - to - date in the event of an emergency. Emergency managers generally follow a common process to anticipate, assess, prevent, prepare, respond and recover from an incident.
Cleanup during disaster recovery involves many occupational hazards. Often these hazards are exacerbated by the conditions of the local environment as a result of the natural disaster. While individual workers should be aware of these potential hazards, employers are responsible to minimize exposure to these hazards and protect workers, when possible. This includes identification and thorough assessment of potential hazards, application of appropriate personal protective equipment (PPE), and the distribution of other relevant information in order to enable safe performance of the work. Maintaining a safe and healthy environment for these workers ensures that the effectiveness of the disaster recovery is unaffected.
Flood - associated injuries: Flooding disasters often expose workers to trauma from sharp and blunt objects hidden under murky waters causing lacerations, as well as open and closed fractures. These injuries are further exacerbated with exposure to the often contaminated waters, leading to increased risk for infection. When working around water, there is always the risk of drowning. In addition, the risk of hypothermia significantly increases with prolonged exposure to water temperatures less than 75 degrees Fahrenheit. Non-infectious skin conditions may also occur including miliaria, immersion foot syndrome (including trench foot), and contact dermatitis.
Earthquake-associated injuries: The predominant exposures are related to building structural components, including falling debris with possible crush injury, being trapped under rubble, burns, and electric shock.
Chemicals can pose a risk to human health when humans are exposed to them in certain quantities. After a natural disaster, certain chemicals can become more prominent in the environment; these hazardous materials can be released directly or indirectly. Chemical hazards released directly after a natural disaster often occur concurrently with the event, so little to no mitigation can take place. For example, airborne magnesium, chloride, phosphorus, and ammonia can be generated by droughts, and dioxins and silica can be emitted by forest fires. Indirect releases of hazardous chemicals can be either intentional or unintentional. An example of intentional release is the use of insecticides after a flood or the chlorine treatment of water after a flood. An unintentional release occurs when a hazardous chemical is not deliberately released; the chemical is often toxic and serves no beneficial purpose when released into the environment. These chemicals can be controlled through engineering to minimize their release when a natural disaster strikes. Examples include agrochemicals from inundated storehouses or manufacturing facilities poisoning the floodwaters, or asbestos fibers released from a building collapse during a hurricane. This classification is adapted from research performed by Stacy Young et al.
Exposure limits
TLV-TWA, PEL, and IDLH exposure limit values exist for common chemicals workers are exposed to after a natural disaster. These chemicals fall into three groups:
Direct release: magnesium, phosphorus, ammonia, silica.
Intentional release: insecticides, chlorine dioxide.
Unintentional release: crude oil components (benzene, n-hexane, hydrogen sulfide, cumene, ethylbenzene, naphthalene, toluene, xylenes, PCBs), agrochemicals, asbestos.
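The TLV-TWA and PEL limits referred to above are time-weighted averages over an 8-hour work shift. Below is a minimal sketch of the standard 8-hour TWA calculation; the concentration readings and the limit value are hypothetical placeholders rather than actual published limits.

```python
# Minimal sketch: compute an 8-hour time-weighted average (TWA) exposure
# and compare it against a limit value. All numbers are hypothetical
# placeholders, not official PEL/TLV figures.

def eight_hour_twa(samples):
    """samples: list of (concentration_ppm, duration_hours) pairs."""
    total_exposure = sum(c * t for c, t in samples)
    return total_exposure / 8.0  # TWA is normalized to an 8-hour shift

# Hypothetical readings over a work shift: (ppm, hours)
readings = [(20.0, 2.0), (35.0, 4.0), (10.0, 2.0)]
hypothetical_limit_ppm = 50.0  # placeholder limit for illustration only

twa = eight_hour_twa(readings)
print(f"8-hour TWA: {twa:.1f} ppm")
print("Within limit" if twa <= hypothetical_limit_ppm else "Limit exceeded")
```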
Exposure routes
When a toxicant is prominent in an environment after a natural disaster, it is important to determine the route of exposure in order to protect disaster management workers. The three components are the source of exposure, the pathway of the chemical, and the receptor. Questions to ask about the chemical source concern the material itself, how it is used, how much is used, how often it is used, its temperature and vapor pressure, and the physical processes involved. The physical state of the chemical is also important to identify. If working indoors, room ventilation and room volume need to be noted to help mitigate health effects from the chemical. Lastly, to ensure worker safety, routes of entry for the chemical should be determined and relevant personal protective equipment should be worn.
Respirators
According to the CDC, "If you need to collect belongings or do basic clean up in your previously flooded home, you do not usually need to use a respirator (a mask worn to prevent breathing in harmful substances)." A respirator should be worn when performing an operation in an enclosed environment, such as a house, that creates ample amounts of dust. These activities could include sweeping dust, using power saws and equipment, or cleaning up mold. If you encounter dust, the CDC advises limiting contact with the dust as much as possible: use wet mops or vacuums with HEPA filters instead of dry sweeping, and wear a respirator that protects against dust in the air. A respirator approved by CDC/NIOSH, such as the N95, is a good piece of personal protective equipment against airborne dust and mold following a natural disaster.
Mold exposures: Exposure to mold is commonly seen after a natural disaster such as a flood, hurricane, tornado or tsunami. Mold growth can occur on both the exterior and interior of residential or commercial buildings. Warm and humid conditions encourage mold growth; therefore, standing water and excess moisture after a natural disaster provide an ideal environment for mold growth, especially in tropical regions. While the exact number of mold species is unknown, some examples of commonly found indoor molds are Aspergillus, Cladosporium, Alternaria and Penicillium. Reactions to mold differ between individuals and can range from mild symptoms such as eye irritation and cough to severe, life-threatening asthmatic or allergic reactions. People with a history of chronic lung disease, asthma, allergy or other breathing problems, or those who are immunocompromised, could be more sensitive to mold and may develop fungal pneumonia.
The most effective approach to control mold growth after a natural disaster is to control moisture level. Some ways to prevent mold growth after a natural disaster include opening all doors and windows, using fans to dry out the building, positioning fans to blow air out of the windows and cleaning up the building within the first 24 -- 48 hours. All wet items that can not be properly cleaned and dried within the first 48 hours should be promptly removed and discarded from the building. If mold growth is found in the building, it is important to concurrently remove the molds and fix the underlying moisture problem. When removing molds, N - 95 masks or respirators with a higher protection level should be used to prevent inhalation of molds into the respiratory system. Molds can be removed from hard surfaces by soap and water, a diluted bleach solution or commercial products.
Human remains: According to the Centers for Disease Control and Prevention (CDC), "There is no direct risk of contagion or infectious disease from being near human remains for people who are not directly involved in recovery or other efforts that require handling dead bodies." Most viruses and bacteria perish along with the human body after death, so no excessive measures are necessary when handling human remains indirectly. However, workers in direct contact with human remains should exercise universal precautions in order to prevent unnecessary exposure to blood-borne viruses and bacteria. Relevant PPE includes eye protection, a face mask or shield, and gloves. The predominant health risks are gastrointestinal infections through fecal-oral contamination, so hand hygiene is paramount to prevention. Mental health support should also be available to workers who endure psychological stress during and after recovery.
Flood-associated skin infections: Flood waters are often contaminated with bacteria and waste, and occasionally with chemicals. Prolonged, direct contact with these waters leads to an increased risk of skin infection, especially with open wounds in the skin or a history of a previous skin condition, such as atopic dermatitis or psoriasis. These infections are exacerbated in people with a compromised immune system and in the aging population. The most common bacterial skin infections are usually with Staphylococcus and Streptococcus. One of the most uncommon but well-known bacterial infections is from Vibrio vulnificus, which causes a rare but often fatal infection called necrotizing fasciitis.
Other salt-water Mycobacterium infections include the slow-growing M. marinum and the fast-growing M. fortuitum, M. chelonae, and M. abscessus. Fresh-water bacterial infections include Aeromonas hydrophila, Burkholderia pseudomallei causing melioidosis, Leptospira interrogans causing leptospirosis, and Chromobacterium violaceum. Fungal infections may lead to chromoblastomycosis, blastomycosis, mucormycosis, and dermatophytosis. Numerous other arthropod, protozoal, and parasitic infections have been described. A worker can reduce the risk of flood-associated skin infections by avoiding the water if an open wound is present or, at minimum, by covering the open wound with a waterproof bandage. Should contact with flood water occur, the open wound should be washed thoroughly with soap and clean water.
Providing disaster recovery assistance is both rewarding and stressful. According to the CDC, "Sources of stress for emergency responders may include witnessing human suffering, risk of personal harm, intense workloads, life - and - death decisions, and separation from family. '' These stresses need to be prevented or effectively managed in order to optimize assistance without causing danger to oneself. Preparation as an emergency responder is key, in addition to establishing care for responsibilities at home. During the recovery efforts, it is critical to understand and recognize burnout and sources of stress. After the recovery, it is vital to take time away from the disaster scene and slowly re-integrate back to the normal work environment. Substance Abuse and Mental Health Services Administration (SAMHSA) provides stress prevention and management resources for disaster recovery responders.
The Federal Emergency Management Agency (FEMA) advises those who desire to assist go through organized volunteer organizations and not to self - deploy to affected locations. The National Volunteer Organizations Active in Disaster (VOAD) serves as the primary point of contact for volunteer organization coordination. All states have their own state VOAD organization. As a volunteer, since an employer does not have oversight, one must be vigilant and protect against possible physical, chemical, biological, and psychosocial exposures. Furthermore, there must be defined roles with relevant training available. Proper tools and PPE may or may not be available, so safety and liability should always be considered.
Every employer is required to maintain a safe and healthy workplace for its employees. When an emergency situation occurs, employers are expected to protect workers from all harm resulting from any potential hazard, including physical, chemical, and biological exposure. In addition, an employer should provide pre-emergency training and build an emergency action plan.
An emergency action plan is a written document setting out what actions employers and employees should take when responding to an emergency situation. According to OSHA regulation 1910.38, an employer must have an emergency action plan whenever an OSHA standard in this part requires one. To develop an emergency action plan, an employer should start with a workplace evaluation. Typically, most occupational emergency management can be divided into worksite evaluation, exposure monitoring, hazard control, work practices, and training.
Worksite evaluation involves identifying the source and location of potential hazards, such as falls, noise, cold, heat, hypoxia, infectious materials, and toxic chemicals, that each worker may encounter during emergency situations.
After identifying the source and location of the hazard(s), it is essential to monitor how employees may be exposed to these dangers. Employers should conduct task-specific exposure monitoring when they meet the following requirements:
To effectively acquire the above information, an employer can ask workers how they perform the task or use direct reading instruments to identify the exposure level and exposure route.
Employers can conduct hazard control by:
Employers should train their employees annually before an emergency action plan is implemented. (29 CFR 1910.38 (e)) The purpose of training is to inform employees of their responsibilities and / or plan of action during emergency situations. The training program should include the types of emergencies that may occur, the appropriate response, evacuation procedure, warning / reporting procedure, and shutdown procedures. Training requirements are different depending on the size of workplace and workforce, processes used, materials handled, available resources and who will be in charge during an emergency.
The training program should address the following information:
After the emergency action plan is completed, employer and employees should review the plan carefully and post it in a public area that is accessible to everyone. In addition, another responsibility of the employer is to keep a record of any injury or illness of workers according to OSHA / State Plan Record - keeping regulations.
Emergency management plans and procedures should include the identification of appropriately trained staff members responsible for decision - making when an emergency occurs. Training plans should include internal people, contractors and civil protection partners, and should state the nature and frequency of training and testing.
Testing of a plan's effectiveness should occur regularly. In instances where several businesses or organisations occupy the same space, joint emergency plans, formally agreed to by all parties, should be put into place.
Drills and exercises in preparation for foreseeable hazards are often held, with the participation of the services that will be involved in handling the emergency, and people who will be affected. Drills are held to prepare for the hazards of fires, tornadoes, lockdown for protection, earthquakes, etc.
Communication is one of the key issues during any emergency, so pre-planning of communications is critical. Miscommunication can easily result in emergency events escalating unnecessarily.
Once an emergency has been identified, a comprehensive assessment evaluating the level of impact and its financial implications should be undertaken. Following assessment, the appropriate plan or response to be activated will depend on specific pre-set criteria within the emergency plan. The steps necessary should be prioritized to ensure critical functions are operational as soon as possible; the critical functions are those that make the plan untenable if they are not operationalized.
The communication policy must be well known and rehearsed, and all targeted audiences, publics and individuals must be alerted. All communication infrastructure must be as prepared as possible, with all information on groupings clearly identified.
Emergency management consists of five phases: prevention, mitigation, preparedness, response and recovery (see http://www.fema.gov/mission-areas).
Prevention focuses on preventing hazards to humans, primarily from potential natural disasters or terrorist attacks. Preventive measures are taken on both the domestic and international levels, designed to provide permanent protection from disasters. The risk of loss of life and injury can also be mitigated with good evacuation plans, environmental planning and design standards. In January 2005, 167 governments adopted a 10-year global plan for natural disaster risk reduction called the Hyogo Framework.
Preventing or reducing the impacts of disasters on our communities is a key focus for emergency management efforts today. Prevention and mitigation also help reduce the financial costs of disaster response and recovery. Public Safety Canada is working with provincial and territorial governments and stakeholders to promote disaster prevention and mitigation using a risk - based and all - hazards approach. In 2009, Federal / Provincial / Territorial Ministers endorsed a National Disaster Mitigation Strategy.
Disaster mitigation measures are those that eliminate or reduce the impacts and risks of hazards through proactive measures taken before an emergency or disaster occurs.
Preventive or mitigation measures take different forms for different types of disasters. In earthquake-prone areas, these preventive measures might include structural changes such as the installation of an earthquake valve to instantly shut off the natural gas supply, seismic retrofits of property, and the securing of items inside a building. The latter may include the mounting of furniture, refrigerators, water heaters and breakables to the walls, and the addition of cabinet latches. In flood-prone areas, houses can be built on poles or stilts. In areas prone to prolonged electricity blackouts, the installation of a generator ensures continuation of electrical service. The construction of storm cellars and fallout shelters is a further example of personal mitigative action.
On a national level, governments might implement large scale mitigation measures. After the monsoon floods of 2010, the Punjab government subsequently constructed 22 ' disaster - resilient ' model villages, comprising 1885 single - storey homes, together with schools and health centres.
One of the best known examples of investment in disaster mitigation is the Red River Floodway. The building of the Floodway was a joint provincial / federal undertaking to protect the City of Winnipeg and reduce the impact of flooding in the Red River Basin. It cost $62.7 million to build in the 1960s. Since then, the floodway has been used over 20 times. Its use during the 1997 Red River Flood alone saved an estimated $4.5 billion in costs from potential damage to the city. The Floodway was expanded in 2006 as a joint provincial / federal initiative.
Preparedness focuses on preparing equipment and procedures for use when a disaster occurs. This equipment and these procedures can be used to reduce vulnerability to disaster, to mitigate the impacts of a disaster or to respond more efficiently in an emergency. The Federal Emergency Management Agency (FEMA) has set out a basic four-stage vision of preparedness flowing from mitigation to preparedness to response to recovery and back to mitigation in a circular planning process. This circular, overlapping model has been modified by other agencies, taught in emergency management classes and discussed in academic papers.
FEMA also operates a Building Science Branch that develops and produces multi-hazard mitigation guidance focused on creating disaster-resilient communities to reduce loss of life and property. FEMA advises citizens to prepare their homes with some emergency essentials in case food distribution lines are interrupted. FEMA has subsequently prepared for this contingency by purchasing hundreds of thousands of freeze-dried emergency meals ready to eat (MREs) to dispense to communities where emergency shelter and evacuations are implemented.
Some guidelines for household preparedness have been put online by the State of Colorado, on the topics of water, food, tools, and so on.
Emergency preparedness can be difficult to measure. CDC focuses on evaluating the effectiveness of its public health efforts through a variety of measurement and assessment programs.
Local Emergency Planning Committees (LEPCs) are required by the United States Environmental Protection Agency under the Emergency Planning and Community Right - to - Know Act to develop an emergency response plan, review the plan at least annually, and provide information about chemicals in the community to local citizens. This emergency preparedness effort focuses on hazards presented by use and storage of extremely hazardous and toxic chemicals. Particular requirements of LEPCs include
According to the EPA, "Many LEPCs have expanded their activities beyond the requirements of EPCRA, encouraging accident prevention and risk reduction, and addressing homeland security in their communities '' and the Agency offers advice on how to evaluate the effectiveness of these committees.
Preparedness measures can take many forms, ranging from a focus on individual people, locations, or incidents to broader, government-based "all hazard" planning. There are a number of preparedness stages between "all hazard" and individual planning, generally involving some combination of mitigation and response planning. Business continuity planning encourages businesses to have a disaster recovery plan. Community- and faith-based organizations' mitigation efforts promote field response teams and inter-agency planning.
School - based response teams cover everything from live shooters to gas leaks and nearby bank robberies. Educational institutions plan for cyberattacks and windstorms. Industry specific guidance exists for horse farms, boat owners and more.
Family preparedness for disaster is fairly unusual. A 2013 survey found that only 19% of American families felt that they were "very prepared" for a disaster. Still, there are many resources available for family disaster planning. The Department of Homeland Security's Ready.gov page includes a Family Emergency Plan Checklist, has a whole webpage devoted to readiness for kids, complete with cartoon-style superheroes, and ran a Thunderclap campaign in 2014. The Centers for Disease Control and Prevention has a Zombie Apocalypse website.
Disasters take a variety of forms, including earthquakes, tsunamis, and ordinary structure fires. That a disaster or emergency is not large scale in terms of population impacted, acreage impacted, or duration does not make it any less of a disaster for the people or area affected, and much can be learned about preparedness from so-called small disasters. The Red Cross states that it responds to nearly 70,000 disasters a year, the most common of which is a single-family fire.
Preparedness starts with an individual 's everyday life and involves items and training that would be useful in an emergency. What is useful in an emergency is often also useful in everyday life. From personal preparedness, preparedness continues on a continuum through family preparedness, community preparedness and then business, non-profit and governmental preparedness. Some organizations blend these various levels. For example, the International Red Cross and Red Crescent Movement has a webpage on disaster training as well as offering training on basic preparedness such as Cardiopulmonary resuscitation and First Aid. Other non-profits such as Team Rubicon bring specific groups of people into disaster preparedness and response operations. FEMA breaks down preparedness into a pyramid, with citizens on the foundational bottom, on top of which rests local government, state government and federal government in that order.
The basic theme behind preparedness is to be ready for an emergency and there are a number of different variations of being ready based on an assessment of what sort of threats exist. Nonetheless, there is basic guidance for preparedness that is common despite an area 's specific dangers. FEMA recommends that everyone have a three - day survival kit for their household. Because individual household sizes and specific needs might vary, FEMA 's recommendations are not item specific, but the list includes:
Along similar lines, but not exactly the same, CDC has its own list for a proper disaster supply kit.
Children are a special population when considering emergency preparedness, and many resources are directly focused on supporting them. SAMHSA has a list of tips for talking to children during infectious disease outbreaks, including being a good listener, encouraging children to ask questions and modeling self-care by setting routines, eating healthy meals, getting enough sleep and taking deep breaths to handle stress. FEMA has similar advice, noting that "Disasters can leave children feeling frightened, confused, and insecure" whether a child has experienced a disaster first hand, had it happen to a friend or simply seen it on television. In the same publication, FEMA further notes, "Preparing for disaster helps everyone in the family accept the fact that disasters do happen, and provides an opportunity to identify and collect the resources needed to meet basic needs after disaster. Preparation helps; when people feel prepared, they cope better and so do children."
To help people assess what threats they might face, in order to augment their emergency supplies or improve their disaster response skills, FEMA has published a booklet called the "Threat and Hazard Identification and Risk Assessment Guide" (THIRA). This guide, which outlines the THIRA process, emphasizes "whole community involvement," not just governmental agencies, in preparedness efforts. In this guide, FEMA breaks down hazards into three categories, natural, technological and human-caused, and notes that each hazard should be assessed for both its likelihood and its significance. According to FEMA, "Communities should consider only those threats and hazards that could plausibly occur" and "Communities should consider only those threats and hazards that would have a significant effect on them." To develop threat and hazard context descriptions, communities should take into account the time, place, and conditions in which threats or hazards might occur.
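As an illustration of the screening logic described above, in which each hazard is assessed for likelihood and significance and only plausible, significant hazards are carried forward, here is a minimal sketch; the numeric 1-5 scoring scale, the thresholds, and the example hazards are illustrative assumptions, not part of FEMA's THIRA guidance.

```python
# Minimal sketch of hazard screening by likelihood and significance.
# The 1-5 scoring scale, thresholds, and example entries are illustrative
# assumptions, not part of FEMA's THIRA methodology.

from dataclasses import dataclass

@dataclass
class Hazard:
    name: str
    category: str      # "natural", "technological", or "human-caused"
    likelihood: int    # 1 (rare) to 5 (frequent)
    significance: int  # 1 (minor) to 5 (catastrophic)

hazards = [
    Hazard("river flood", "natural", 4, 4),
    Hazard("chemical plant release", "technological", 2, 5),
    Hazard("meteor strike", "natural", 1, 5),
]

# Keep only hazards that are both plausible and significant for the community.
LIKELIHOOD_MIN, SIGNIFICANCE_MIN = 2, 3
screened = [h for h in hazards
            if h.likelihood >= LIKELIHOOD_MIN and h.significance >= SIGNIFICANCE_MIN]

# Rank the remaining hazards by a simple risk score (likelihood x significance).
for h in sorted(screened, key=lambda h: h.likelihood * h.significance, reverse=True):
    print(f"{h.name} ({h.category}): risk score {h.likelihood * h.significance}")
```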
Not all preparedness efforts and discussions involve the government or established NGOs like the Red Cross. Emergency preparation discussions are active on the internet, with many blogs and websites dedicated to discussing various aspects of preparedness. On - line sales of items such as survival food, medical supplies and heirloom seeds allow people to stock basements with cases of food and drinks with 25 year shelf lives, sophisticated medical kits and seeds that are guaranteed to sprout even after years of storage.
One group of people who put a lot of effort into disaster preparations is called Doomsday Preppers. This subset of preparedness-minded people often share a belief that the FEMA or Red Cross emergency preparation suggestions and training are not extensive enough. Sometimes called survivalists, Doomsday Preppers are often preparing for The End Of The World As We Know It, abbreviated as TEOTWAWKI. With a motto some of them hold, "The Future Belongs to those who Prepare," this preparedness subset has its own set of Murphy's Rules, including "Rule Number 1: Food, you still don't have enough" and "Rule Number 26: People who thought the Government would save them, found out that it didn't."
Not all emergency preparation efforts revolve around food, guns and shelters, though these items help address the needs in the bottom two sections of Maslow's hierarchy of needs. The American Preppers Network has an extensive list of items that might be useful in less apparent ways than a first aid kit, or that help add 'fun' to challenging times. These items include:
Emergency preparedness goes beyond immediate family members. For many people, pets are an integral part of their families and emergency preparation advice includes them as well. It is not unknown for pet owners to die while trying to rescue their pets from a fire or from drowning. CDC 's Disaster Supply Checklist for Pets includes:
Emergency preparedness also includes more than physical items and skill - specific training. Psychological preparedness is also a type of emergency preparedness and specific mental health preparedness resources are offered for mental health professionals by organizations such as the Red Cross. These mental health preparedness resources are designed to support both community members affected by a disaster and the disaster workers serving them. CDC has a website devoted to coping with a disaster or traumatic event. After such an event, the CDC, through the Substance Abuse and Mental Health Services Administration (SAMHSA), suggests that people seek psychological help when they exhibit symptoms such as excessive worry, crying frequently, an increase in irritability, anger, and frequent arguing, wanting to be alone most of the time, feeling anxious or fearful, overwhelmed by sadness, confused, having trouble thinking clearly and concentrating, and difficulty making decisions, increased alcohol and / or substance use, increased physical (aches, pains) complaints such as headaches and trouble with "nerves. ''
Sometimes emergency supplies are kept in what is called a bug-out bag. While FEMA does not actually use the term "bug-out bag," calling it instead some variation of a "Go Kit," the idea of having emergency items in a quickly accessible place is common to both FEMA and CDC, though online discussions of what items a "bug-out bag" should include sometimes cover items such as firearms and large knives that are not specifically suggested by FEMA or CDC. The theory behind a "bug-out bag" is that emergency preparations should include the possibility of emergency evacuation. Whether fleeing a burning building or hastily packing a car to escape an impending hurricane, flood or dangerous chemical release, rapid departure from a home or workplace environment is always a possibility, and FEMA suggests having a Family Emergency Plan for such occasions. Because family members may not be together when disaster strikes, this plan should include reliable contact information for friends or relatives who live outside of what would be the disaster area, so that household members can report that they are safe or otherwise communicate with each other. Along with the contact information, FEMA suggests having well-understood local gathering points if a house must be evacuated quickly, to avoid the dangers of re-entering a burning home. Family and emergency contact information should be printed on cards and put in each family member's backpack or wallet. If family members spend a significant amount of time in a specific location, such as at work or school, FEMA suggests learning the emergency preparation plans for those places. FEMA has a specific form, in English and in Spanish, to help people put together these emergency plans, though it lacks lines for email contact information.
Like children, people with disabilities and other special needs have special emergency preparation needs. While "disability" has a specific meaning for specific organizations, such as when collecting Social Security benefits, for the purposes of emergency preparedness the Red Cross uses the term in a broader sense to include people with physical, medical, sensory or cognitive disabilities, as well as the elderly and other special needs populations. Depending on the particular disability, specific emergency preparations might be required. FEMA's suggestions for people with disabilities include having copies of prescriptions, charging devices for medical equipment such as motorized wheelchairs, and a week's supply of medication readily available or in a "go stay kit." In some instances, a lack of competency in English may lead to special preparation requirements and communication efforts for both individuals and responders.
FEMA notes that long-term power outages can cause damage beyond the original disaster, damage that can be mitigated with emergency generators or other power sources providing an emergency power system. The United States Department of Energy states that "homeowners, business owners, and local leaders may have to take an active role in dealing with energy disruptions on their own." This active role may include installing or otherwise procuring generators that are either portable or permanently mounted and run on fuels such as propane, natural gas or gasoline. Concerns about carbon monoxide poisoning, electrocution, flooding, fuel storage and fire lead even small property owners to consider professional installation and maintenance. Major institutions like hospitals, military bases and educational institutions often have, or are considering, extensive backup power systems. Instead of, or in addition to, fuel-based power systems, solar, wind and other alternative power sources may be used. Standalone batteries, large or small, are also used to provide backup charging for electrical systems and devices ranging from emergency lights to computers to cell phones.
Emergency preparedness does not stop at home or at school. The United States Department of Health and Human Services addresses specific emergency preparedness issues that hospitals may have to respond to, including maintaining a safe temperature, providing adequate electricity for life support systems, and even carrying out evacuations under extreme circumstances. FEMA encourages all businesses to have an emergency response plan, and the Small Business Administration specifically advises small business owners to also focus on emergency preparedness and provides a variety of different worksheets and resources.
FEMA cautions that emergencies happen while people are travelling as well, and provides guidance on emergency preparedness for a range of travelers, including commuters (through a Commuter Emergency Plan) and holiday travelers. In particular, Ready.gov has a number of emergency preparations specifically designed for people with cars. These preparations include having a full gas tank, maintaining adequate windshield wiper fluid and other basic car maintenance tips. Items specific to an emergency include:
In addition to emergency supplies and training for various situations, FEMA offers advice on how to mitigate disasters. The Agency gives instructions on how to retrofit a home to minimize hazards from a Flood, to include installing a Backflow prevention device, anchoring fuel tanks and relocating electrical panels.
Given the explosive danger posed by natural gas leaks, Ready.gov states unequivocally that "It is vital that all household members know how to shut off natural gas" and that property owners must ensure they have any special tools needed for their particular gas hookups. Ready.gov also notes that "It is wise to teach all responsible household members where and how to shut off the electricity," cautioning that individual circuits should be shut off before the main circuit. Ready.gov further states that "It is vital that all household members learn how to shut off the water at the main house valve" and cautions that rusty valves might require replacement.
The response phase of an emergency may commence with Search and Rescue but in all cases the focus will quickly turn to fulfilling the basic humanitarian needs of the affected population. This assistance may be provided by national or international agencies and organizations. Effective coordination of disaster assistance is often crucial, particularly when many organizations respond and local emergency management agency (LEMA) capacity has been exceeded by the demand or diminished by the disaster itself. The National Response Framework is a United States government publication that explains responsibilities and expectations of government officials at the local, state, federal, and tribal levels. It provides guidance on Emergency Support Functions that may be integrated in whole or parts to aid in the response and recovery process.
On a personal level the response can take the shape either of a shelter in place or an evacuation.
In a shelter - in - place scenario, a family would be prepared to fend for themselves in their home for many days without any form of outside support. In an evacuation, a family leaves the area by automobile or other mode of transportation, taking with them the maximum amount of supplies they can carry, possibly including a tent for shelter. If mechanical transportation is not available, evacuation on foot would ideally include carrying at least three days of supplies and rain - tight bedding, a tarpaulin and a bedroll of blankets.
Donations are often sought during this period, especially for large disasters that overwhelm local capacity. Due to efficiencies of scale, money is often the most cost - effective donation if fraud is avoided. Money is also the most flexible, and if goods are sourced locally then transportation is minimized and the local economy is boosted. Some donors prefer to send gifts in kind, however these items can end up creating issues, rather than helping. One innovation by Occupy Sandy volunteers is to use a donation registry, where families and businesses impacted by the disaster can make specific requests, which remote donors can purchase directly via a web site.
Medical considerations will vary greatly based on the type of disaster and secondary effects. Survivors may sustain a multitude of injuries, including lacerations, burns, near drowning, or crush syndrome.
The recovery phase starts after the immediate threat to human life has subsided. The immediate goal of the recovery phase is to bring the affected area back to normalcy as quickly as possible. During reconstruction it is recommended to consider the location or construction material of the property.
The most extreme home confinement scenarios include war, famine and severe epidemics and may last a year or more. Then recovery will take place inside the home. Planners for these events usually buy bulk foods and appropriate storage and preparation equipment, and eat the food as part of normal life. A simple balanced diet can be constructed from vitamin pills, whole - meal wheat, beans, dried milk, corn, and cooking oil. Vegetables, fruits, spices and meats, both prepared and fresh - gardened, are included when possible.
Professional emergency managers can focus on government and community preparedness, or private business preparedness. Training is provided by local, state, federal and private organizations and ranges from public information and media relations to high - level incident command and tactical skills.
In the past, the field of emergency management has been populated mostly by people with a military or first responder background. Currently, the field has become more diverse, with many managers coming from a variety of backgrounds other than the military or first responder fields. Educational opportunities are increasing for those seeking undergraduate and graduate degrees in emergency management or a related field. There are over 180 schools in the US with emergency management - related programs, but only one doctoral program specifically in emergency management.
Professional certifications such as Certified Emergency Manager (CEM) and Certified Business Continuity Professional (CBCP) are becoming more common as professional standards are raised throughout the field, particularly in the United States. There are also professional organizations for emergency managers, such as the National Emergency Management Association and the International Association of Emergency Managers.
In 2007, Dr. Wayne Blanchard of FEMA 's Emergency Management Higher Education Project, at the direction of Dr. Cortez Lawrence, Superintendent of FEMA 's Emergency Management Institute, convened a working group of emergency management practitioners and academics to consider principles of emergency management. This was the first time the principles of the discipline were to be codified. The group agreed on eight principles that will be used to guide the development of a doctrine of emergency management. Below is a summary:
A fuller description of these principles can be found at
In recent years the continuity feature of emergency management has resulted in a new concept, Emergency Management Information Systems (EMIS). For continuity and inter-operability between emergency management stakeholders, EMIS supports an infrastructure that integrates emergency plans at all levels of government and non-government involvement for all four phases of emergencies. In the healthcare field, hospitals utilize the Hospital Incident Command System (HICS), which provides structure and organization in a clearly defined chain of command.
Practitioners in emergency management come from an increasing variety of backgrounds. Professionals from memory institutions (e.g., museums, historical societies, etc.) are dedicated to preserving cultural heritage -- objects and records. This has been an increasingly major component within this field as a result of the heightened awareness following the September 11 attacks in 2001, the hurricanes in 2005, and the collapse of the Cologne Archives.
To increase the potential successful recovery of valuable records, a well - established and thoroughly tested plan must be developed. This plan should emphasize simplicity in order to aid in response and recovery: employees should perform similar tasks in the response and recovery phase that they perform under normal conditions. It should also include mitigation strategies such as the installation of sprinklers within the institution. Professional associations hold regular workshops to keep individuals up to date with tools and resources in order to minimize risk and maximize recovery.
In 2008, the U.S. Agency for International Development created a web-based tool for estimating populations impacted by disasters. Called Population Explorer, the tool uses LandScan population data, developed by Oak Ridge National Laboratory, to distribute population at a resolution of 1 km for all countries in the world. Used by USAID's FEWS NET Project to estimate populations vulnerable to or impacted by food insecurity, Population Explorer is gaining wide use in a range of emergency analysis and response actions, including estimating populations impacted by floods in Central America and the Pacific Ocean tsunami event in 2009.
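Conceptually, a population-impact estimate like the one described above amounts to overlaying a disaster footprint on a gridded population dataset and summing the cells that fall inside it. The sketch below illustrates that idea on a small synthetic grid; it is not the actual Population Explorer or LandScan implementation.

```python
# Minimal sketch: estimate the population inside an affected area by summing
# gridded population counts under a boolean impact mask. The grid values and
# the mask are synthetic; this is not the Population Explorer implementation.

import numpy as np

# Synthetic 1 km population grid (people per cell) for a small region.
population_grid = np.array([
    [120,  80,  10,   0],
    [300, 450,  60,   5],
    [ 90, 200, 150,  40],
])

# Boolean mask marking cells inside the disaster footprint (e.g., a flood extent).
impact_mask = np.array([
    [False, True,  True,  False],
    [False, True,  True,  True ],
    [False, False, True,  False],
])

affected_population = int(population_grid[impact_mask].sum())
print(f"Estimated affected population: {affected_population}")
```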
In 2007, a checklist for veterinarians was published in the Journal of the American Veterinary Medical Association; it had two sets of questions for a professional to ask themselves before assisting with an emergency:
Absolute requirements for participation:
Incident participation:
While written for veterinarians, this checklist is applicable for any professional to consider before assisting with an emergency.
The International Emergency Management Society (TIEMS) is an international non-profit NGO registered in Belgium. TIEMS is a Global Forum for Education, Training, Certification and Policy in Emergency and Disaster Management. TIEMS' goal is to develop and bring modern emergency management tools and techniques into practice through the exchange of information, methodology innovations and new technologies.
TIEMS provides a platform for stakeholders to meet, network and learn about new technical and operational methodologies. TIEMS focuses on ensuring that cultural differences are understood and included in the society's events, education and research programs. This is achieved by establishing local chapters worldwide. Today, TIEMS has chapters in Benelux, Romania, Finland, Italy, the Middle East and North Africa (MENA), Iraq, India, Korea, Japan and China.
The International Association of Emergency Managers (IAEM) is a non-profit educational organization aimed at promoting the goals of saving lives and property protection during emergencies. The mission of IAEM is to serve its members by providing information, networking and professional opportunities, and to advance the emergency management profession.
It has seven councils around the world: Asia, Canada, Europa, International, Oceania, Student and USA.
The Air Force Emergency Management Association, affiliated by membership with the IAEM, provides emergency management information and networking for U.S. Air Force Emergency Management personnel.
The International Recovery Platform (IRP) was conceived at the World Conference on Disaster Reduction (WCDR) in Kobe, Hyogo, Japan in January 2005, as part of the Hyogo Framework for Action (HFA) 2005 -- 2015. The HFA is a global plan for disaster risk reduction adopted by 168 governments.
The key role of IRP is to identify gaps in post disaster recovery and to serve as a catalyst for the development of tools and resources for recovery efforts.
The International Federation of Red Cross and Red Crescent Societies (IFRC) works closely with National Red Cross and Red Crescent societies in responding to emergencies, many times playing a pivotal role. In addition, the IFRC may deploy assessment teams, e.g. Field Assessment and Coordination Teams (FACT), to the affected country if requested by the national society. After assessing the needs, Emergency Response Units (ERUs) may be deployed to the affected country or region. They are specialized in the response component of the emergency management framework.
Baptist Global Response (BGR) is a disaster relief and community development organization. BGR and its partners respond globally to people with critical needs worldwide, whether those needs arise from chronic conditions or acute crises such as natural disasters. While BGR is not an official entity of the Southern Baptist Convention, it is rooted in Southern Baptist life and is the international partnership of Southern Baptist Disaster Relief teams, which operate primarily in the US and Canada.
Within the United Nations system, responsibility for emergency response rests with the Resident Coordinator in the affected country. In practice, however, the UN response will be coordinated by the UN Office for the Coordination of Humanitarian Affairs (UN-OCHA), by deploying a UN Disaster Assessment and Coordination (UNDAC) team in response to a request by the affected country's government. Finally, UN-SPIDER is designed as a networking hub to support disaster management through the application of satellite technology.
Since 1980, the World Bank has approved more than 500 projects related to disaster management, dealing with both disaster mitigation as well as reconstruction projects, amounting to more than US $40 billion. These projects have taken place all over the world, in countries such as Argentina, Bangladesh, Colombia, Haiti, India, Mexico, Turkey and Vietnam.
Prevention and mitigation projects include forest fire prevention measures, such as early warning measures and education campaigns; early - warning systems for hurricanes; flood prevention mechanisms (e.g. shore protection, terracing, etc.); and earthquake - prone construction. In a joint venture with Columbia University under the umbrella of the ProVention Consortium the World Bank has established a Global Risk Analysis of Natural Disaster Hotspots.
In June 2006, the World Bank, in response to the HFA, established the Global Facility for Disaster Reduction and Recovery (GFDRR), a partnership with other aid donors to reduce disaster losses. GFDRR helps developing countries fund development projects and programs that enhance local capacities for disaster prevention and emergency preparedness.
In 2001, the EU adopted the Community Mechanism for Civil Protection to facilitate co-operation in the event of major emergencies requiring urgent response actions. The Mechanism also applies to situations where there may be an imminent threat of such an emergency.
The heart of the Mechanism is the Monitoring and Information Center (MIC), part of the European Commission's Directorate-General for Humanitarian Aid & Civil Protection. Accessible 24 hours a day, it gives countries access to a one-stop shop of the civil protection means available amongst all the participating states. Any country inside or outside the Union affected by a major disaster can make an appeal for assistance through the MIC. It acts as a communication hub and provides useful and updated information on the actual status of an ongoing emergency.
Natural disasters are part of life in Australia. Heatwaves have killed more Australians than any other type of natural disaster in the 20th century. Australia's emergency management processes embrace the concept of the prepared community. The principal government agency in achieving this is Emergency Management Australia.
Public Safety Canada is Canada's national emergency management agency. Each province is required to have both legislation for dealing with emergencies and a provincial emergency management agency, typically called an "Emergency Measures Organization" (EMO). Public Safety Canada co-ordinates and supports the efforts of federal organizations as well as other levels of government, first responders, community groups, the private sector, and other nations. The Public Safety and Emergency Preparedness Act defines the powers, duties and functions of Public Safety Canada. Other acts are specific to individual fields such as corrections, law enforcement, and national security.
In Germany the Federal Government controls the German Katastrophenschutz (disaster relief), the Technisches Hilfswerk (Federal Agency for Technical Relief, THW), and the Zivilschutz (civil protection) programs coordinated by the Federal Office of Civil Protection and Disaster Assistance. Local fire department units, the German Armed Forces (Bundeswehr), the German Federal Police and the 16 state police forces (Länderpolizei) are also deployed during disaster relief operations.
There are several private organizations in Germany that also deal with emergency relief. Among these are the German Red Cross, Johanniter-Unfall-Hilfe (the German equivalent of the St. John Ambulance), the Malteser-Hilfsdienst, and the Arbeiter-Samariter-Bund. As of 2006, there is a program of study at the University of Bonn leading to the degree "Master in Disaster Prevention and Risk Governance". As a support function, radio amateurs provide additional emergency communication networks with frequent training.
The National Disaster Management Authority is the primary government agency responsible for planning and capacity - building for disaster relief. Its emphasis is primarily on strategic risk management and mitigation, as well as developing policies and planning. The National Institute of Disaster Management is a policy think - tank and training institution for developing guidelines and training programs for mitigating disasters and managing crisis response.
The National Disaster Response Force is the government agency primarily responsible for emergency management during natural and man - made disasters, with specialized skills in search, rescue and rehabilitation. The Ministry of Science and Technology also contains an agency that brings the expertise of earth scientists and meteorologists to emergency management. The Indian Armed Forces also plays an important role in the rescue / recovery operations after disasters.
Aniruddha 's Academy of Disaster Management (AADM) is a non-profit organization in Mumbai, India with ' disaster management ' as its principal objective.
In Malaysia, The National Disaster Management Agency (NADMA Malaysia) under the Prime Minister 's Department was established on 2 October 2015 following the Flood Disaster in 2014 and taken over the roles previously National Security Council. NADMA Malaysia is the focal point in managing disaster in Malaysia. Ministry of Home Affairs Malaysia, Ministry of Health Malaysia and Ministry of Housing, Urban Wellbeing and Local Government Malaysia are also having responsibility in managing emergency. Several agencies are involved in emergency managements are Royal Malaysian Police, Malaysian Fire and Rescue Department, Malaysian Civil Defence Force, Ministry of Health Malaysia and Malaysian Maritime Enforcement Agency. There were also some voluntary organisation who involved themselves in emergency / disaster management such as St. John Ambulance of Malaysia, Malaysian Red Crescent Society and so on.
The Nepal Risk Reduction Consortium (NRRC) is based on the Hyogo Framework and Nepal 's National Strategy for Disaster Risk Management. This arrangement unites humanitarian and development partners with the Government of Nepal and has identified five flagship priorities for sustainable disaster risk management.
In New Zealand, depending on the scope of the emergency or disaster, responsibility may be handled at either the local or national level. Within each region, local governments are organized into 16 Civil Defence Emergency Management Groups (CDEMGs). If local arrangements are overwhelmed, pre-existing mutual - support arrangements are activated. Central government has the authority to coordinate the response through the National Crisis Management Centre (NCMC), operated by the Ministry of Civil Defence & Emergency Management (MCDEM). These structures are defined by regulation, and explained in The Guide to the National Civil Defence Emergency Management Plan 2006, roughly equivalent to the U.S. Federal Emergency Management Agency 's National Response Framework.
New Zealand uses unique terminology for emergency management. The term emergency management is rarely used; many government publications retain the term civil defence. For example, the Minister of Civil Defence is responsible for the MCDEM. Civil Defence Emergency Management is a term in its own right, defined by statute. Disaster rarely appears in official publications; emergency and incident are the preferred terms, with the term event also used. For example, publications refer to the Canterbury Snow Event 2002.
"4Rs '' is the emergency management cycle used in New Zealand, its four phases are known as:
Disaster management in Pakistan revolves around flood disasters, focusing on rescue and relief.
Federal Flood Commission was established in 1977 under Ministry of Water and Power to manage the issues of flood management on country - wide basis.
The National Disaster Management Ordinance, 2006 and the National Disaster Management Act, 2010 were enacted after the 2005 Kashmir earthquake and the 2010 Pakistan floods respectively to deal with disaster management. The primary central authority mandated to deal with the whole spectrum of disasters and their management in the country is the National Disaster Management Authority.
In addition, each province, along with FATA, Gilgit - Baltistan and Pakistani - administered Kashmir, has its own provincial disaster management authority responsible for implementing policies and plans for disaster management in the province.
Each district has its own District Disaster Management Authority, the planning, coordinating and implementing body for disaster management in the district; it takes all measures for the purposes of disaster management in accordance with the guidelines laid down by the National Authority and the Provincial Authority.
In the Philippines, the National Disaster Risk Reduction and Management Council (NDRRMC) is responsible for the protection and welfare of people during disasters or emergencies. It is a working group composed of various government, non-government, civil sector and private sector organizations of the Government of the Republic of the Philippines. Headed by the Secretary of National Defense (under the Office of Civil Defense, the NDRRMC 's implementing organization), it coordinates all the executive branches of government, the presidents of the leagues of local government units throughout the country, the Armed Forces of the Philippines, the Philippine National Police, the Bureau of Fire Protection (an agency under the Department of the Interior and Local Government), and the public and private medical services in responding to natural and manmade disasters, as well as the planning, coordination, and training of these responsible units. Non-governmental organizations such as the Philippine Red Cross also provide manpower and material support for the NDRRMC.
Regional, provincial, city, municipal, and barangay emergency management are handled by Local Disaster Risk Reduction Management Councils (LDRRMCs), which are the functional arm of the local government unit (LGU). Each LDRRMC is headed by a Chief DRRM Officer, and each Office is tasked to organize their own emergency teams and command - and - control centers when activated at times of local emergencies (identified on the regional, provincial, city, municipal, and / or barangay level).
In Russia, the Ministry of Emergency Situations (EMERCOM) is engaged in fire fighting, civil defense, and search and rescue after both natural and human - made disasters.
In Somalia, the Federal Government announced in May 2013 that the Cabinet had approved draft legislation on a new Somali Disaster Management Agency (SDMA), which had originally been proposed by the Ministry of Interior. According to the Prime Minister 's Media Office, the SDMA will lead and coordinate the government 's response to various natural disasters. It is part of a broader effort by the federal authorities to re-establish national institutions. The Federal Parliament is now expected to deliberate on the proposed bill for endorsement after any amendments.
In the Netherlands, the Ministry of Security and Justice is responsible for emergency preparedness and emergency management at the national level and operates a national crisis centre (NCC). The country is divided into 25 safety regions (veiligheidsregio). Each safety region has four components: the regional fire department, the regional department for medical care (ambulances, psycho - sociological care, etc.), the regional dispatch and a section for risk and crisis management. The regional dispatch operates for the police, the fire department and the regional medical care; combining these three services into one dispatch allows the best multi-coordinated response to an incident or an emergency, and also facilitates information management, emergency communication and care of citizens. These services form the main structure for a response to an emergency. For a specific emergency, co-operation with another service may be needed, for instance the Ministry of Defence, a water board or Rijkswaterstaat. The veiligheidsregio can integrate these other services into its structure by adding them to specific conferences at the operational or administrative level.
All regions operate according to the Coordinated Regional Incident Management system.
Following the 2000 fuel protests and severe flooding that same year, as well as the foot - and - mouth crisis in 2001, the United Kingdom passed the Civil Contingencies Act 2004 (CCA). The CCA defined some organisations as Category 1 and 2 Responders, setting responsibilities regarding emergency preparedness and response. It is managed by the Civil Contingencies Secretariat through Regional Resilience Forums and local authorities.
Disaster Management training is generally conducted at the local level, and consolidated through professional courses that can be taken at the Emergency Planning College. Diplomas, undergraduate and postgraduate qualifications can be gained at universities throughout the country. The Institute of Emergency Management is a charity, established in 1996, providing consulting services for the government, media and commercial sectors. There are a number of professional societies for Emergency Planners including the Emergency Planning Society and the Institute of Civil Protection and Emergency Management.
One of the largest emergency exercises in the UK was carried out on 20 May 2007 near Belfast, Northern Ireland: a simulated plane crash - landing at Belfast International Airport. Staff from five hospitals and three airports participated in the drill, and almost 150 international observers assessed its effectiveness.
Disaster management in the United States has utilized the functional All - Hazards approach for over 20 years, in which managers develop processes (such as communication & warning or sheltering) rather than developing single - hazard or threat focused plans (e.g., a tornado plan). Processes are then mapped to specific hazards or threats, with the manager looking for gaps, overlaps, and conflicts between processes.
Given these notions, emergency managers must identify, contemplate, and assess possible man - made threats and natural threats that may affect their respective locales. Because of geographical differences throughout the nation, a variety of different threats affect communities among the states. Thus, although similarities may exist, no two emergency plans will be completely identical. Additionally, each locale has different resources and capacities (e.g., budgets, personnel, equipment, etc.) for dealing with emergencies. Each individual community must craft its own unique emergency plan that addresses potential threats that are specific to the locality.
This creates a plan more resilient to unique events because all common processes are defined, and it encourages planning done by the stakeholders who are closer to the individual processes, such as a traffic management plan written by a public works director. This type of planning can lead to conflict with non-emergency management regulatory bodies, which require the development of hazard / threat specific plans, such as the development of specific H1N1 flu plans and terrorism - specific plans.
In the United States, all disasters are initially local, with local authorities, usually a police, fire, or EMS agency, taking charge. Many local municipalities may also have a separate dedicated office of emergency management (OEM), along with personnel and equipment. If the event overwhelms the local government, the state 's emergency management organization (the state being the primary government structure of the United States) becomes the controlling emergency management agency. The Federal Emergency Management Agency (FEMA), part of the Department of Homeland Security (DHS), is the lead federal agency for emergency management. The United States and its territories are divided into ten regions for FEMA 's emergency management purposes. FEMA supports, but does not override, state authority.
The Citizen Corps is an organization of volunteer service programs, administered locally and coordinated nationally by DHS, which seek to mitigate disasters and prepare the population for emergency response through public education, training, and outreach. Most disaster response is carried out by volunteer organizations. In the US, the Red Cross is chartered by Congress to coordinate disaster response services. It is typically the lead agency handling shelter and feeding of evacuees. Religious organizations, with their ability to provide volunteers quickly, are usually integral during the response process. The largest are the Salvation Army, with a primary focus on chaplaincy and rebuilding, and the Southern Baptists, who focus on food preparation and distribution, as well as cleaning up after floods and fires, chaplaincy, mobile shower units, chainsaw crews and more. With over 65,000 trained volunteers, Southern Baptist Disaster Relief is one of the largest disaster relief organizations in the US. Similar services are also provided by Methodist Relief Services, the Lutherans, and Samaritan 's Purse. Unaffiliated volunteers show up at most large disasters. To prevent abuse by criminals, and for the safety of the volunteers, procedures have been implemented within most response agencies to manage and effectively use these ' SUVs ' (Spontaneous Unaffiliated Volunteers).
The US Congress established the Center for Excellence in Disaster Management and Humanitarian Assistance (COE) as the principal agency to promote disaster preparedness in the Asia - Pacific region.
The National Tribal Emergency Management Council (NEMC) is a non-profit educational organization developed for tribal organizations to share information and best practices, as well as to discuss issues regarding public health and safety, emergency management and homeland security affecting those under Indian sovereignty. The NEMC is organized into regions based on the FEMA 10 - region system. It was founded by the Northwest Tribal Emergency Management Council (NWTEMC), a consortium of 29 Tribal Nations and Villages in Washington, Idaho, Oregon, and Alaska.
If a disaster or emergency is declared to be terror related or an "Incident of National Significance, '' the Secretary of Homeland Security will initiate the National Response Framework (NRF). The NRF allows the integration of federal resources with local, county, state, or tribal entities, with management of those resources to be handled at the lowest possible level, utilizing the National Incident Management System (NIMS).
The Centers for Disease Control and Prevention offer information for specific types of emergencies, such as disease outbreaks, natural disasters and severe weather, chemical and radiation accidents, etc. The Emergency Preparedness and Response Program of the National Institute for Occupational Safety and Health develops resources to address responder safety and health during responder and recovery operations.
The Emergency Management Institute (EMI) serves as the national focal point for the development and delivery of emergency management training to enhance the capabilities of state, territorial, local, and tribal government officials; volunteer organizations; FEMA 's disaster workforce; other Federal agencies; and the public and private sectors to minimize the impact of disasters and emergencies on the American public. EMI curricula are structured to meet the needs of this diverse audience with an emphasis on separate organizations working together in all - hazards emergencies to save lives and protect property. Particular emphasis is placed on governing doctrine such as the National Response Framework (NRF), National Incident Management System (NIMS), and the National Preparedness Guidelines. EMI is fully accredited by the International Association for Continuing Education and Training (IACET) and the American Council on Education (ACE).
Approximately 5,500 participants attend resident courses each year while 100,000 individuals participate in non-resident programs sponsored by EMI and conducted locally by state emergency management agencies under cooperative agreements with FEMA. Another 150,000 individuals participate in EMI - supported exercises, and approximately 1,000 individuals participate in the Chemical Stockpile Emergency Preparedness Program (CSEPP).
The independent study program at EMI consists of free courses offered to United States citizens in Comprehensive Emergency Management techniques. Course IS - 1 is entitled "Emergency Manager: An Orientation to the Position '' and provides background information on FEMA and the role of emergency managers in agency and volunteer organization coordination. The EMI Independent Study (IS) Program, a Web - based distance learning program open to the public, delivers extensive online training with approximately 200 courses. It has trained more than 2.8 million individuals. The EMI IS Web site receives 2.5 to 3 million visitors a day.
In emergency and disaster management, the SMAUG model is an effective tool for identifying and prioritizing the risks of hazards associated with natural and technological threats. SMAUG stands for Seriousness, Manageability, Acceptability, Urgency and Growth, the criteria used to prioritize hazard risks. The model provides an effective means of prioritizing hazard risks against these criteria so that effective mitigation, reduction, response and recovery methods can be applied.
Seriousness can be defined as "the relative impact in terms of people and dollars ''. This includes the potential for lives to be lost and for injury, as well as the physical, social and, as mentioned, economic losses that may be incurred.
Manageability can be defined as "the relative ability to mitigate or reduce the hazard (through managing the hazard, or the community or both) ''. Hazards presenting a high risk and as such requiring significant amounts of risk reduction initiatives will be rated high.
Acceptability -- The degree to which the risk of hazard is acceptable in terms of political, environmental, social and economic impact
Urgency -- This is related to the probability of risk of hazard and is defined in terms of how imperative it is to address the hazard
Growth -- This is the potential for the hazard or event to expand or increase in either probability or risk to community or both. Should vulnerability increase, potential for growth may also increase.
Numerical ratings are assigned to each of the five criteria and combined to rank hazards.
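The original rating table is not reproduced here. As a rough sketch of how SMAUG ratings might be combined into a ranking, the Python snippet below scores a few hypothetical hazards on the five criteria and sorts them; the hazard names, the 1 - 5 scale, the ratings and the equal weighting are illustrative assumptions, not values from this article.

```python
# Hypothetical SMAUG prioritization sketch: each hazard is rated 1 (low) to 5 (high)
# for its contribution to priority on Seriousness, Manageability, Acceptability,
# Urgency and Growth, then ranked by total score. All values below are made up.

CRITERIA = ("seriousness", "manageability", "acceptability", "urgency", "growth")

def smaug_score(ratings: dict) -> int:
    """Sum the five criterion ratings into a single priority score."""
    return sum(ratings[c] for c in CRITERIA)

hazards = {
    "river flooding": dict(seriousness=5, manageability=4, acceptability=4, urgency=5, growth=3),
    "industrial fire": dict(seriousness=3, manageability=3, acceptability=3, urgency=4, growth=2),
    "earthquake": dict(seriousness=5, manageability=2, acceptability=5, urgency=2, growth=1),
}

# Highest total score = highest priority for mitigation and response planning.
for name, ratings in sorted(hazards.items(), key=lambda kv: smaug_score(kv[1]), reverse=True):
    print(f"{name}: {smaug_score(ratings)}")
```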
NGOs:
|
when was the 3 point line added in nba | Three - point field goal - wikipedia
A three - point field goal (also 3 - pointer or informally, trey) is a field goal in a basketball game made from beyond the three - point line, a designated arc surrounding the basket. A successful attempt is worth three points, in contrast to the two points awarded for field goals made within the three - point line and the one point for each made free throw.
The distance from the basket to the three - point line varies by competition level: in the National Basketball Association (NBA) the arc is 23 feet 9 inches (7.24 m) from the basket; in FIBA and the WNBA (the latter uses FIBA 's three - point line standard) the arc is 6.75 metres (22 ft 1 3⁄4 in) from the basket; and in the National Collegiate Athletic Association (NCAA) the arc is 20 feet 9 inches (6.32 m) from the basket. In the NBA and FIBA / WNBA, the three - point line becomes parallel to each sideline at the points where the arc is 3 feet (0.91 m) from each sideline; as a result the distance from the basket gradually decreases to a minimum of 22 feet (6.71 m). In the NCAA the arc is continuous for 180 ° around the basket. There are more variations (see main article).
In 3x3, a FIBA - sanctioned variant of the half - court 3 - on - 3 game, the "three - point '' line exists, but shots from behind the line are only worth 2 points. All other shots are worth 1 point.
The three - point line was first tested at the collegiate level in a 1945 NCAA game between Columbia and Fordham but it was not kept as a rule. At the direction of Abe Saperstein, the American Basketball League became the first basketball league to institute the rule in 1961. Its three - point line was a radius of 25 feet (7.62 m) from the baskets, except along the sides. The Eastern Professional Basketball League followed in its 1963 -- 64 season.
The three - point shot later became popularized by the American Basketball Association (ABA), introduced in its inaugural 1967 -- 68 season. ABA commissioner George Mikan stated the three - pointer "would give the smaller player a chance to score and open up the defense to make the game more enjoyable for the fans. '' During the 1970s, the ABA used the three - point shot, along with the slam dunk, as a marketing tool to compete with the National Basketball Association (NBA).
In the 1979 -- 80 season, after having tested it in the previous pre-season, the NBA adopted the three - point line despite the view of many that it was a gimmick. Chris Ford of the Boston Celtics is widely credited with making the first three - point shot in NBA history on October 12, 1979, a game more noted for the debut of Larry Bird (and two new head coaches). Rick Barry of the Houston Rockets, in his final season, also made one in the same game, and Kevin Grevey of the Washington Bullets made one that Friday night as well.
The sport 's international governing body, FIBA, introduced the three - point line in 1984, at 6.25 m (20 ft 6 in).
The NCAA 's Southern Conference became the first collegiate conference to use the three - point rule, adopting a 22 - foot (6.71 m) line for the 1980 -- 81 season. Ronnie Carr of Western Carolina University was the first to score a three - point field goal in college basketball history on November 29, 1980. Over the following five years, NCAA conferences differed in their use of the rule and distance required for a three - pointer. The line was as close as 17 ft 9 in (5.41 m) in the Atlantic Coast Conference (ACC), and as far away as 22 ft (6.71 m) in the Big Sky.
Used only in conference play for several years, it was adopted by the NCAA in April 1986 for the 1986 -- 87 season at 19 ft 9 in (6.02 m), and was first used in the NCAA Tournament in 1987. In the same 1986 -- 87 season, the NCAA adopted the three - pointer in women 's basketball on an experimental basis, using the same distance, and made its use mandatory beginning in 1987 -- 88. In 2007, the NCAA lengthened the men 's distance by a foot to 20 ft 9 in (6.32 m), effective with the 2008 -- 09 season, and the women 's line was moved to match the men 's in 2011 -- 12. American high schools, along with elementary and middle schools, adopted a 19 ft 9 in (6.02 m) line nationally in 1987, a year after the NCAA. The NCAA used the FIBA three - point line (see below) in the National Invitation Tournament (NIT) in 2018.
During the 1994 -- 95, 1995 -- 96, and 1996 -- 97 seasons, the NBA attempted to address decreased scoring by shortening the distance of the line from 23 ft 9 in (7.24 m) (22 ft (6.71 m) at the corners) to a uniform 22 ft (6.71 m) around the basket. From the 1997 -- 98 season on, the NBA reverted the line to its original distance of 23 ft 9 in (22 ft at the corners, a 21 - inch differential). Ray Allen is currently the NBA all - time leader in career made three - pointers with 2,973.
In 2008, FIBA announced that the distance would be increased by 50 cm (19.69 in) to 6.75 m (22 ft 1 3⁄4 in), with the change being phased in beginning in October 2010. In December 2012, the WNBA announced that it would be using FIBA 's distance, too, as of the 2013 season. The NBA has discussed adding a four - point line, according to president Rod Thorn.
In the NBA, three - point field goals have become increasingly frequent over the years, with effectiveness increasing slightly. The 1979 -- 80 season had an average of 2.2 three - point goals per game on 6.6 attempts (33 % effectiveness). The 1989 -- 90 season had an average of 4.8 three - point goals per game on 13.7 attempts (35 % effectiveness). The 2009 -- 10 season had an average of 6.4 three - point goals per game on 18.1 attempts (36 % effectiveness). The 2016 -- 17 season had an average of 9.7 three - point goals per game on 27.0 attempts (36 % effectiveness).
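As a quick arithmetic check of the percentages quoted above, effectiveness is simply made three - pointers divided by attempts; a minimal sketch using the season averages given in the text:

```python
# Three-point effectiveness = makes per game / attempts per game, using the
# season averages quoted above.
seasons = {
    "1979-80": (2.2, 6.6),
    "1989-90": (4.8, 13.7),
    "2009-10": (6.4, 18.1),
    "2016-17": (9.7, 27.0),
}

for season, (makes, attempts) in seasons.items():
    print(f"{season}: {makes / attempts:.0%}")   # roughly 33%, 35%, 35%, 36%
```

Because the per - game averages are themselves rounded, the computed figures can differ by about a percentage point from those quoted in the text.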
A three - point line consists of an arc at a set radius measured from the point on the floor directly below the center of the basket, and two parallel lines equidistant from each sideline extending from the nearest end line to the point at which they intersect the arc. In the NBA and FIBA standard, the arc spans the width of the court until it is a specified minimum distance from each sideline. The three - point line then becomes parallel to the sidelines from those points to the baseline. The unusual formation of the three - point line at these levels allows players some space from which to attempt a three - point shot at the corners of the court; the arc would be less than 2 feet (0.61 m) from each sideline at the corners if it were a continuous arc. In the NCAA and American high school standards, the arc spans 180 ° around the basket, then becomes parallel to the sidelines from the plane of the basket center to the baseline (5 feet 3 inches or 1.60 metres). The distance of the three - point line to the center of the hoop varies by level, as listed above; a sketch of the NBA geometry follows.
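To make the NBA geometry concrete, here is a minimal Python sketch in a coordinate frame centred on the point on the floor below the basket. It assumes a 50 ft court width (so the straight segments sit 22 ft from the centre line, matching the corner distance in the text); positions near the baseline behind the basket are treated simply, and the function name is ours, not an official definition.

```python
import math

# Basket-centred frame: x runs toward the sidelines, y toward half court (feet).
ARC_RADIUS = 23 + 9 / 12            # 23 ft 9 in
CORNER_X = 25 - 3                   # lines run 3 ft inside each sideline -> 22 ft
BREAK_Y = math.sqrt(ARC_RADIUS**2 - CORNER_X**2)   # arc meets the straight lines (~8.9 ft)

def is_three_point_attempt(x: float, y: float) -> bool:
    """True if the shooter's feet at (x, y) are on or behind the three-point line."""
    if y <= BREAK_Y:
        # Corner region: behind the straight line parallel to the sideline.
        return abs(x) >= CORNER_X
    # Elsewhere: on or behind the 23 ft 9 in arc.
    return math.hypot(x, y) >= ARC_RADIUS

print(is_three_point_attempt(22.0, 2.0))    # True  - corner three at 22 ft
print(is_three_point_attempt(0.0, 23.75))   # True  - straight on, at the arc
print(is_three_point_attempt(0.0, 20.0))    # False - inside the arc
```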
A player 's feet must be completely behind the three - point line at the time of the shot or jump in order to make a three - point attempt; if the player 's feet are on or in front of the line, it is a two - point attempt. A player is allowed to jump from outside the line and land inside the line to make a three - point attempt, as long as the ball is released in mid-air.
An official raises his or her arm with three fingers extended to signal the shot attempt. If the attempt is successful, the official raises the other arm with all fingers fully extended, in a manner similar to a football official signaling a successful field goal, to indicate the three - point goal. The official must recognize it for it to count as three points. Instant replay has sometimes been used, depending on league rules. The NBA, WNBA, FIBA and the NCAA specifically allow replay for this purpose. In NBA, FIBA, and WNBA games, video replay does not have to occur immediately following a shot; play can continue and the officials can adjust the scoring later in the game, after reviewing the video. However, in late game situations, play may be paused pending a review.
If a shooter is fouled while attempting a three - pointer and subsequently misses the shot, the shooter is awarded three free - throw attempts. If a player completes a three - pointer while being fouled, the player is awarded one free - throw for a possible 4 - point play. Conceivably, if a player completed a three - pointer while being fouled, and that foul was ruled as either a Flagrant 1 or a Flagrant 2 foul, the player would be awarded two free throws for a possible 5 - point play.
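The scoring combinations just described can be tabulated in a short sketch; the function, its inputs and the foul labels are illustrative only, and it assumes every awarded free throw is made.

```python
from typing import Optional

def max_points(shot_made: bool, foul: Optional[str]) -> int:
    """Maximum points available on a (possibly fouled) three-point attempt,
    assuming every awarded free throw is made."""
    if shot_made:
        if foul is None:
            return 3
        if foul in ("flagrant 1", "flagrant 2"):
            return 3 + 2      # made three plus two free throws: possible 5-point play
        return 3 + 1          # made three plus one free throw: possible 4-point play
    if foul is not None:
        return 3              # missed attempt while fouled: three free throws
    return 0

print(max_points(True, "common"))      # 4
print(max_points(True, "flagrant 1"))  # 5
print(max_points(False, "common"))     # 3
```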
Major League Lacrosse features a two - point line which forms a 15 - yard (14 m) arc around the front of the goal. Shots taken from behind this line count for two points, as opposed to the standard one point.
In gridiron football, a standard field goal is worth three points; various professional and semi-pro leagues have experimented with four - point field goals. NFL Europe and the Stars Football League adopted a rule similar to basketball 's three - point line in which an additional point was awarded for longer field goals; in both leagues any field goal of 50 yards (46 m) or more was worth four points. The Arena Football League awards four points for any successful drop kicked field goal (like the three - point shot, the drop kick is more challenging than a standard place kick, as the bounce of the ball makes a kick less predictable, and arena football also uses narrower goal posts for all kicks than the outdoor game does).
During the existence of the World Hockey Association in the 1970s, there were proposals for two - point hockey goals for shots taken beyond an established distance (one proposal was a 44 - foot (13.4 m) arc, which would have intersected the faceoff circles), but this proposal gained little support and faded after the WHA merged with the NHL. It was widely believed that long - distance shots in hockey had little direct relation to skill (usually resulting more from goalies ' vision being screened or obscured), plus with the lower scoring intrinsic to the sport a two - point goal was seen as disruptive of the structure of the game.
The Super Goal is a similar concept in Australian rules football, in which a 50 - meter (55 yd) arc determines the value of a goal; within the arc, it is the usual 6 points, but 9 points are scored for a "super goal '' scored from outside the arc. To date the super goal is only used in pre-season games and not in the season proper.
The National Professional Soccer League II, which awarded two points for all goals except those on the power play, also used a three - point line, drawn 45 feet (14 m) from the goal. It has since been adopted by some other indoor soccer leagues.
|
what was the main point of lincoln's second inaugural address | Abraham Lincoln 's second inaugural address - wikipedia
Abraham Lincoln delivered his second inaugural address on March 4, 1865, during his second inauguration as President of the United States. At a time when victory over secessionists in the American Civil War was within days and slavery in all of the Union was near an end, Lincoln did not speak of happiness, but of sadness. Some see this speech as a defense of his pragmatic approach to Reconstruction, in which he sought to avoid harsh treatment of the defeated South by reminding his listeners of how wrong both sides had been in imagining what lay before them when the war began four years earlier. Lincoln balanced that rejection of triumphalism, however, with recognition of the unmistakable evil of slavery. The address is inscribed, along with the Gettysburg Address, in the Lincoln Memorial.
Lincoln used his Second Inaugural Address to touch on the question of Divine providence. He wondered what God 's will might have been in allowing the war to come, and why it had assumed the terrible dimensions it had taken. He endeavored to address some of these dilemmas, using allusions taken from the Bible.
Lincoln reiterates the cause of the war, slavery, in saying "slaves constituted a peculiar and powerful interest. All knew that this interest was somehow the cause of the war ''.
The words "wringing their bread from the sweat of other men 's faces '' are an allusion to the Fall of Man in the Book of Genesis. As a result of Adam 's sin, God tells Adam that henceforth "In the sweat of thy face shalt thou eat bread, till thou return unto the ground; for out of it wast thou taken: for dust thou art, and unto dust shalt thou return '' (Genesis 3: 19).
Lincoln 's phrase, "but let us judge not, that we be not judged, '' is an allusion to the words of Jesus in Matthew 7: 1 which in the King James Version reads, "Judge not, that ye be not judged. ''
Lincoln quotes another of Jesus ' sayings: "Woe unto the world because of offenses; for it must needs be that offenses come, but woe to that man by whom the offense cometh. '' Lincoln 's quoted language comes from Matthew 18: 7; a similar discourse by Jesus appears in Luke 17: 1.
Lincoln suggests that the death and destruction wrought by the war was divine retribution to the U.S. for possessing slavery, saying that God may will that the war continue "until every drop of blood drawn with the lash shall be paid by another drawn with the sword '', and that the war was the country 's "woe due ''. The quotation "the judgments of the Lord are true and righteous altogether '' is from Psalm 19: 9.
The closing paragraph contains two additional glosses from scripture: "let us strive on to... bind up the nation 's wounds '' is a reworking of Psalm 147: 3. Also, "to care for him who shall have borne the battle and for his widow and his orphan '' relies on James 1: 27.
Lincoln 's point seems to be that God 's purposes are not directly knowable to humans, and represents a theme that he had expressed earlier. After Lincoln 's death, his secretaries found among his papers an undated manuscript now generally known as the "Meditations on the Divine Will. '' In that manuscript, Lincoln wrote:
Lincoln 's sense that the divine will was unknowable stood in marked contrast to sentiments popular at the time. In the popular mind, both sides of the Civil War assumed that they could read God 's will and assumed His favor in their opposing causes. Julia Ward Howe 's "Battle Hymn of the Republic '' expressed sentiments common among the supporters of the Union cause, that the Union was waging a righteous war that served God 's purposes. Similarly, the Confederacy chose Deo vindice as its motto, often translated as "God will vindicate us. '' Lincoln, responding to compliments from Thurlow Weed on the speech, said that "... I believe it is not immediately popular. Men are not flattered by being shown that there has been a difference of purpose between the Almighty and them. ''
Fellow - Countrymen:
At this second appearing to take the oath of the Presidential office there is less occasion for an extended address than there was at the first. Then a statement somewhat in detail of a course to be pursued seemed fitting and proper. Now, at the expiration of four years, during which public declarations have been constantly called forth on every point and phase of the great contest which still absorbs the attention and engrosses the energies of the nation, little that is new could be presented. The progress of our arms, upon which all else chiefly depends, is as well known to the public as to myself, and it is, I trust, reasonably satisfactory and encouraging to all. With high hope for the future, no prediction in regard to it is ventured.
On the occasion corresponding to this four years ago all thoughts were anxiously directed to an impending civil war. All dreaded it, all sought to avert it. While the inaugural address was being delivered from this place, devoted altogether to saving the Union without war, insurgent agents were in the city seeking to destroy it without war -- seeking to dissolve the Union and divide effects by negotiation. Both parties deprecated war, but one of them would make war rather than let the nation survive, and the other would accept war rather than let it perish, and the war came.
One - eighth of the whole population were colored slaves, not distributed generally over the Union, but localized in the southern part of it. These slaves constituted a peculiar and powerful interest. All knew that this interest was somehow the cause of the war. To strengthen, perpetuate, and extend this interest was the object for which the insurgents would rend the Union even by war, while the Government claimed no right to do more than to restrict the territorial enlargement of it. Neither party expected for the war the magnitude or the duration which it has already attained. Neither anticipated that the cause of the conflict might cease with or even before the conflict itself should cease. Each looked for an easier triumph, and a result less fundamental and astounding. Both read the same Bible and pray to the same God, and each invokes His aid against the other. It may seem strange that any men should dare to ask a just God 's assistance in wringing their bread from the sweat of other men 's faces, but let us judge not, that we be not judged. The prayers of both could not be answered. That of neither has been answered fully. The Almighty has His own purposes. "Woe unto the world because of offenses; for it must needs be that offenses come, but woe to that man by whom the offense cometh. '' If we shall suppose that American slavery is one of those offenses which, in the providence of God, must needs come, but which, having continued through His appointed time, He now wills to remove, and that He gives to both North and South this terrible war as the woe due to those by whom the offense came, shall we discern therein any departure from those divine attributes which the believers in a living God always ascribe to Him? Fondly do we hope, fervently do we pray, that this mighty scourge of war may speedily pass away. Yet, if God wills that it continue until all the wealth piled by the bondsman 's two hundred and fifty years of unrequited toil shall be sunk, and until every drop of blood drawn with the lash shall be paid by another drawn with the sword, as was said three thousand years ago, so still it must be said "the judgments of the Lord are true and righteous altogether. ''
With malice toward none, with charity for all, with firmness in the right as God gives us to see the right, let us strive on to finish the work we are in, to bind up the nation 's wounds, to care for him who shall have borne the battle and for his widow and his orphan, to do all which may achieve and cherish a just and lasting peace among ourselves and with all nations.
|
when can sublimation be applied as a process of separation | Sublimation (phase transition) - wikipedia
Sublimation is the phase transition of a substance directly from the solid to the gas phase without passing through the intermediate liquid phase. Sublimation is an endothermic process that occurs at temperatures and pressures below a substance 's triple point in its phase diagram. The reverse process of sublimation is deposition or desublimation, in which a substance passes directly from a gas to a solid phase. Sublimation has also been used as a generic term to describe a solid - to - gas transition (sublimation) followed by a gas - to - solid transition (deposition).
At normal pressures, most chemical compounds and elements possess three different states at different temperatures. In these cases, the transition from the solid to the gaseous state requires an intermediate liquid state. The pressure referred to is the partial pressure of the substance, not the total (e.g. atmospheric) pressure of the entire system. So, all solids that possess an appreciable vapor pressure at a certain temperature usually can sublime in air (e.g. water ice just below 0 ° C). For some substances, such as carbon and arsenic, sublimation is much easier than evaporation from the melt, because the pressure of their triple point is very high, and it is difficult to obtain them as liquids.
The term sublimation refers to a physical change of state and is not used to describe transformation of a solid to a gas in a chemical reaction. For example, the dissociation on heating of solid ammonium chloride into hydrogen chloride and ammonia is not sublimation but a chemical reaction. Similarly the combustion of candles, containing paraffin wax, to carbon dioxide and water vapor is not sublimation but a chemical reaction with oxygen.
Sublimation requires additional energy and is an endothermic change. The enthalpy of sublimation (also called heat of sublimation) can be calculated by adding the enthalpy of fusion and the enthalpy of vaporization.
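In symbols, and with approximate handbook values for water near 0 °C used purely as an illustration (these numbers are not taken from the text above):

```latex
\Delta H_{\text{sub}} = \Delta H_{\text{fus}} + \Delta H_{\text{vap}}
\qquad \text{e.g. for ice near } 0~^{\circ}\mathrm{C}:\quad
\Delta H_{\text{sub}} \approx 6.0~\mathrm{kJ\,mol^{-1}} + 45.1~\mathrm{kJ\,mol^{-1}} \approx 51~\mathrm{kJ\,mol^{-1}}
```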
Solid carbon dioxide (dry ice) sublimes everywhere along the line below the triple point (e.g., at the temperature of − 78.5 ° C (194.65 K, − 104.2 ° F) at atmospheric pressure), whereas its melting into liquid CO₂ can occur only along the line at pressures and temperatures above the triple point (i.e., 5.2 atm, − 56.4 ° C).
Snow and ice sublime, although more slowly, at temperatures below the freezing / melting point temperature line at 0 ° C for most pressures; see line below triple point. In freeze - drying, the material to be dehydrated is frozen and its water is allowed to sublime under reduced pressure or vacuum. The loss of snow from a snowfield during a cold spell is often caused by sunshine acting directly on the upper layers of the snow. Ablation is a process that includes sublimation and erosive wear of glacier ice.
Naphthalene, an organic compound commonly found in pesticides such as mothballs, also sublimes. It sublimes easily because it is made of non-polar molecules that are held together only by van der Waals intermolecular forces. Naphthalene is a solid that sublimes at standard atmospheric temperature, with the sublimation point at around 80 ° C or 176 ° F. At low temperatures its vapour pressure is high enough (1 mmHg at 53 ° C) for the solid form of naphthalene to evaporate into gas. On a cool surface, the sublimed vapour solidifies to form needle - like crystals.
Iodine produces fumes on gentle heating. It is possible to obtain liquid iodine at atmospheric pressure by controlling the temperature at just above the melting point of iodine. In forensic science, iodine vapor can reveal latent fingerprints on paper. Arsenic can also sublime at high temperatures.
Sublimation is a technique used by chemists to purify compounds. A solid is typically placed in a sublimation apparatus and heated under vacuum. Under this reduced pressure, the solid volatilizes and condenses as a purified compound on a cooled surface (cold finger), leaving a non-volatile residue of impurities behind. Once heating ceases and the vacuum is removed, the purified compound may be collected from the cooling surface. For even higher purification efficiencies a temperature gradient is applied, which also allows for the separation of different fractions. Typical setups use an evacuated glass tube that is gradually heated in a controlled manner. The material flow is from the hot end, where the initial material is placed, to the cold end that is connected to a pump stand. By controlling temperatures along the length of the tube the operator can control the zones of recondensation, with very volatile compounds being pumped out of the system completely (or caught by a separate cold trap), moderately volatile compounds recondensating along the tube according to their different volatilities, and non-volatile compounds remaining in the hot end. Vacuum sublimation of this type is also the method of choice for purification of organic compounds for the use in the organic electronics industry, where very high purities (often > 99.99 %) are needed to satisfy the standards for consumer electronics and other applications.
In ancient alchemy, a protoscience that contributed to the development of modern chemistry and medicine, alchemists developed a structure of basic laboratory techniques, theory, terminology, and experimental methods. Sublimation was used to refer to the process in which a substance is heated to a vapor, then immediately collects as sediment on the upper portion and neck of the heating medium (typically a retort or alembic), but can also be used to describe other similar non-laboratory transitions. It is mentioned by alchemical authors such as Basil Valentine and George Ripley, and in the Rosarium philosophorum, as a process necessary for the completion of the magnum opus. Here, the word sublimation is used to describe an exchange of "bodies '' and "spirits '' similar to laboratory phase transition between solids and gases. Valentine, in his Triumphal Chariot of Antimony (published 1678) makes a comparison to spagyrics in which a vegetable sublimation can be used to separate the spirits in wine and beer. Ripley uses language more indicative of the mystical implications of sublimation, indicating that the process has a double aspect in the spiritualization of the body and the corporalizing of the spirit. He writes:
And Sublimations we make for three causes, The first cause is to make the body spiritual. The second is that the spirit may be corporeal, And become fixed with it and consubstantial. The third cause is that from its filthy original. It may be cleansed, and its saltiness sulphurious May be diminished in it, which is infectious.
The enthalpy of sublimation has commonly been predicted using the equipartition theorem. If the lattice energy is assumed to be approximately half the packing energy, then the following thermodynamic corrections can be applied to predict the enthalpy of sublimation. Assuming a 1 molar ideal gas gives a correction for the thermodynamic environment (pressure and volume) in which pV = RT, hence a correction of 1RT. Additional corrections for the vibrations, rotations and translation then need to be applied. From the equipartition theorem, gaseous rotation and translation contribute 1.5RT each to the final state, therefore a + 3RT correction. Crystalline vibrations and rotations contribute 3RT each to the initial state, hence − 6RT. Summing the RT corrections: − 6RT + 3RT + RT = − 2RT. This leads to the approximate sublimation enthalpy ΔH_sublimation = − U_lattice − 2RT. A similar approximation can be found for the entropy term if rigid bodies are assumed.
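Restating the bookkeeping from the paragraph above as a single expression (no new physics, just the corrections written out term by term):

```latex
\Delta H_{\text{sublimation}}
  \approx -U_{\text{lattice}}
  + \underbrace{RT}_{pV}
  + \underbrace{3RT}_{\text{gas rotation + translation}}
  - \underbrace{6RT}_{\text{crystal vibrations + rotations}}
  = -U_{\text{lattice}} - 2RT
```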
|
what force did the romans use to move water | Roman aqueduct - wikipedia
The Romans constructed aqueducts throughout their Empire, to bring water from outside sources into cities and towns. Aqueduct water supplied public baths, latrines, fountains, and private households; it also supported mining operations, milling, farms, and gardens.
Aqueducts moved water through gravity alone, along a slight overall downward gradient within conduits of stone, brick, or concrete; the steeper the gradient, the faster the flow. Most conduits were buried beneath the ground and followed the contours of the terrain; obstructing peaks were circumvented or, less often, tunneled through. Where valleys or lowlands intervened, the conduit was carried on bridgework, or its contents fed into high - pressure lead, ceramic, or stone pipes and siphoned across. Most aqueduct systems included sedimentation tanks, which helped reduce any water - borne debris. Sluices and castella aquae (distribution tanks) regulated the supply to individual destinations. In cities and towns, the run - off water from aqueducts scoured the drains and sewers.
Rome 's first aqueduct was built in 312 BC, and supplied a water fountain at the city 's cattle market. By the 3rd century AD, the city had eleven aqueducts, sustaining a population of over a million in a water - extravagant economy; most of the water supplied the city 's many public baths. Cities and towns throughout the Roman Empire emulated this model, and funded aqueducts as objects of public interest and civic pride, "an expensive yet necessary luxury to which all could, and did, aspire ''.
Most Roman aqueducts proved reliable, and durable; some were maintained into the early modern era, and a few are still partly in use. Methods of aqueduct surveying and construction are noted by Vitruvius in his work De Architectura (1st century BC). The general Frontinus gives more detail in his official report on the problems, uses and abuses of Imperial Rome 's public water supply. Notable examples of aqueduct architecture include the supporting piers of the Aqueduct of Segovia, and the aqueduct - fed cisterns of Constantinople.
Dionysius of Halicarnassus, Roman Antiquities
Before the development of aqueduct technology, Romans, like most of their contemporaries in the ancient world, relied on local water sources such as springs and streams, supplemented by groundwater from privately or publicly owned wells, and by seasonal rain - water drained from rooftops into storage jars and cisterns. The reliance of ancient communities upon such water resources restricted their potential growth. Rome 's aqueducts were not strictly Roman inventions -- their engineers would have been familiar with the water - management technologies of Rome 's Etruscan and Greek allies -- but they proved conspicuously successful. By the early Imperial era, the city 's aqueducts supported a population of over a million, and an extravagant water supply for public amenities had become a fundamental part of Roman life. The run - off of aqueduct water scoured the sewers of cities and towns. Water from aqueducts was also used to supply villas, ornamental urban and suburban gardens, market gardens, farms, and agricultural estates, the latter being the core of Rome 's economy and wealth.
Rome had several springs within its perimeter walls but its groundwater was notoriously unpalatable; water from the river Tiber was badly affected by pollution and waterborne diseases. The city 's demand for water had probably long exceeded its local supplies by 312 BC, when the city 's first aqueduct, the Aqua Appia, was commissioned by the censor Appius Claudius Caecus. The Aqua Appia was one of two major public projects of the time; the other was a military road between Rome and Capua, the first leg of the so - called Appian Way. Both projects had significant strategic value, as the Third Samnite War had been under way for some thirty years by that point. The road allowed rapid troop movements; and by design or fortunate coincidence, most of the Aqua Appia ran within a buried conduit, relatively secure from attack. It was fed by a spring 16.4 km from Rome, and dropped 10 metres over its length to discharge approximately 75,500 cubic metres of water each day into a fountain at Rome 's cattle market, the Forum Boarium, one of the city 's lowest - lying public spaces.
A second aqueduct, the Aqua Anio Vetus, was commissioned some forty years later, funded by treasures seized from Pyrrhus of Epirus. Its flow was more than twice that of the Aqua Appia, and it entered the city on raised arches, supplying water to higher elevations of the city.
By 145 BC, the city had again outgrown its combined supplies. An official commission found the aqueduct conduits decayed, their water depleted by leakage and illegal tapping. The praetor Quintus Marcius Rex restored them, and introduced a third, "more wholesome '' supply, the Aqua Marcia, Rome 's longest aqueduct and high enough to supply the Capitoline Hill. The works cost 180,000,000 sesterces, and took two years to complete. As demand grew still further, more aqueducts were built, including the Aqua Tepula in 127 BC and the Aqua Julia in 33 BC. Aqueduct - building programmes reached a peak in the Imperial Era. Augustus ' reign saw the building of the Aqua Virgo, and the short Aqua Alsietina that supplied Trastevere 's artificial lake with water for staged sea - fights to entertain the populace. Another short Augustan aqueduct supplemented the Aqua Marcia with water of "excellent quality ''. The emperor Caligula added or began two aqueducts completed by his successor Claudius; the 69 km (42.8 mile) Aqua Claudia, which gave good quality water but failed on several occasions; and the Anio Novus, highest of all Rome 's aqueducts and one of the most reliable but prone to muddy, discoloured waters, particularly after rain, despite its use of settling tanks.
Most of Rome 's aqueducts drew on various springs in the valley and highlands of the Anio, the modern river Aniene, east of the Tiber. A complex system of aqueduct junctions, tributary feeds and distribution tanks supplied every part of the city. Trastevere, the city region west of the Tiber, was primarily served by extensions of several of the city 's eastern aqueducts, carried across the river by lead pipes buried in the roadbed of the river bridges, thus forming an inverted siphon. Whenever this cross-river supply had to be shut down for routine repair and maintenance works, the "positively unwholesome '' waters of the Aqua Alsietina were used to supply Trastevere 's public fountains. The situation was finally ameliorated when the emperor Trajan built the Aqua Traiana in 109 AD, bringing clean water directly to Trastavere from aquifers around Lake Bracciano.
By the late 3rd century AD, the city was supplied with water by 11 state - funded aqueducts. Their combined conduit length is estimated between 780 and a little over 800 kilometres, of which approximately 47 km (29 mi) were carried above ground level, on masonry supports. They supplied around 1 million cubic metres (300 million gallons) a day: a capacity 126 % of the current water supply of the city of Bangalore, which has a population of 10 million.
Hundreds of similar aqueducts were built throughout the Roman Empire. Many of them have since collapsed or been destroyed, but a number of intact portions remain. The Zaghouan Aqueduct is 92.5 km (57.5 mi) in length. It was built in the 2nd century to supply Carthage (in modern Tunisia). Surviving aqueduct bridges include the Pont du Gard in France and the Aqueduct of Segovia in Spain. The longest single conduit, at over 240 km, is associated with the Valens Aqueduct of Constantinople (Mango 1995). "The known system is at least two and half times the length of the longest recorded Roman aqueducts at Carthage and Cologne, but perhaps more significantly it represents one of the most outstanding surveying achievements of any pre-industrial society ''. Rivalling this in terms of length and possibly equaling or exceeding it in cost and complexity, is the provincial Aqua Augusta that supplied an entire region, which contained at least eight cities, including the major ports at Naples and Misenum; sea voyages by traders and the Roman navy required copious supplies of fresh water.
Whether state - funded or privately built, aqueducts were protected and regulated by law. Any proposed aqueduct had to be submitted to the scrutiny of civil authorities. Permission (from the senate or local authorities) was granted only if the proposal respected the water rights of other citizens; on the whole, Roman communities took care to allocate shared water resources according to need. The land on which a state - funded aqueduct was built might be state land (ager publicus) or privately owned, but in either case was subject to restrictions on usage and encroachment that might damage the fabric of the aqueduct. To this end, state funded aqueducts reserved a wide corridor of land, up to 15 feet each side of the aqueduct 's outer fabric. Ploughing, planting and building were prohibited within this boundary. Such regulation was necessary to the aqueduct 's long - term integrity and maintenance but was not always readily accepted or easily enforced at a local level, particularly when ager publicus was understood to be common property. Some privately built or smaller municipal aqueducts may have required less stringent and formal arrangements.
Springs were by far the most common sources for aqueduct water; for example, most of Rome 's supply came from various springs in the Anio valley and its uplands. Spring - water was fed into a stone or concrete springhouse, then entered the aqueduct conduit. Scattered springs would require several branch conduits feeding into a main channel. Some systems drew water from open, purpose - built, dammed reservoirs, such as the two (still in use) that supplied the aqueduct at the provincial city of Emerita Augusta.
The territory over which the aqueduct ran had to be carefully surveyed to ensure the water would flow at an acceptable gradient for the entire distance. Roman engineers used various surveying tools to plot the course of aqueducts across the landscape. They checked horizontal levels with a chorobates, a flat - bedded wooden frame fitted with a water level. Courses and angles could be plotted and checked using a groma, a relatively simple apparatus that was probably displaced by the more sophisticated dioptra, precursor of the modern theodolite. In Book 8 of his De Architectura, Vitruvius describes the need to ensure a constant supply, methods of prospecting, and tests for potable water.
Greek and Roman physicians knew the association between stagnant or tainted waters and water - borne disease. They also knew the adverse health effects of lead on those who mined and processed it, and for this reason, ceramic pipes were preferred over lead. Where lead pipes were used, a continuous water - flow and the inevitable deposition of water - borne minerals within the pipes somewhat reduced the water 's contamination by soluble lead. Nevertheless, the level of lead in this water was 100 times higher than in local spring waters.
Most Roman aqueducts were flat - bottomed, arch - section conduits that ran 0.5 to 1 m beneath the ground surface, with inspection - and - access covers at regular intervals. Conduits above ground level were usually slab - topped. Early conduits were ashlar - built but from around the late Republican era, brick - faced concrete was often used instead. The concrete used for conduit linings was usually waterproof. The flow of water depended on gravity alone. The volume of water transported within the conduit depended on the catchment hydrology -- rainfall, absorption, and runoff -- the cross section of the conduit, and its gradient; most conduits ran about two - thirds full. The conduit 's cross section was also determined by maintenance requirements; workmen must be able to enter and access the whole, with minimal disruption to its fabric.
Vitruvius recommends a low gradient of not less than 1 in 4800 for the channel, presumably to prevent damage to the structure through erosion and water pressure. This value agrees well with the measured gradients of surviving masonry aqueducts. The gradient of the Pont du Gard is only 34 cm per km, descending only 17 m vertically in its entire length of 50 km (31 mi): it could transport up to 20,000 cubic metres a day. The gradients of temporary aqueducts used for hydraulic mining could be considerably greater, as at Dolaucothi in Wales (with a maximum gradient of about 1: 700) and Las Medulas in northern Spain. Where sharp gradients were unavoidable in permanent conduits, the channel could be stepped downwards, widened or discharged into a receiving tank to disperse the flow of water and reduce its abrasive force. The use of stepped cascades and drops also helped re-oxygenate and thus "freshen '' the water.
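As a quick arithmetic check on the gradients quoted above (a sketch only; the helper function is ours, not a term from the source):

```python
# Cross-checking the conduit gradients quoted above.
def drop_metres(gradient_m_per_km: float, length_km: float) -> float:
    """Total vertical drop for a conduit of the given length and gradient."""
    return gradient_m_per_km * length_km

# Pont du Gard conduit: 34 cm per km over roughly 50 km gives about 17 m of total drop.
print(drop_metres(0.34, 50))        # 17.0

# Vitruvius' minimum of 1 in 4800 expressed in the same units:
print(1000 / 4800)                  # ~0.208 m per km, i.e. about 21 cm per km
```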
Some aqueduct conduits were supported across valleys or hollows on arches of masonry, brick or concrete; the Pont du Gard, one of the most impressive surviving examples of a massive masonry multiple - piered conduit, spanned the Gardon river - valley some 48.8 m (160 ft) above the Gardon itself. Where particularly deep or lengthy depressions had to be crossed, inverted siphons could be used, instead of arched supports; the conduit fed water into a header tank, which fed it into pipes. The pipes crossed the valley at lower level, supported by a low "venter '' bridge, then rose to a receiving tank at a slightly lower elevation. This discharged into another conduit; the overall gradient was maintained. Siphon pipes were usually made of soldered lead, sometimes reinforced by concrete encasements or stone sleeves. Less often, the pipes themselves were stone or ceramic, jointed as male - female and sealed with lead. Vitruvius describes the construction of siphons and the problems of blockage, blow - outs and venting at their lowest levels, where the pressures were greatest. Nonetheless, siphons were versatile and effective if well - built and well - maintained. A horizontal section of high - pressure siphon tubing in the Aqueduct of the Gier was ramped up on bridgework to clear a navigable river, using nine lead pipes in parallel, cased in concrete. Modern hydraulic engineers use similar techniques to enable sewers and water pipes to cross depressions. At Arles, a minor branch of the main aqueduct supplied a local suburb via a lead siphon whose "belly '' was laid across a riverbed, eliminating any need for supporting bridgework.
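For a sense of why pressure mattered at the low point of an inverted siphon, here is a back - of - the - envelope hydrostatic sketch. The 50 m depth is an assumed round figure of the same order as the 48.8 m height of the Pont du Gard quoted above, and the density and gravity constants are standard values, not figures from the text.

```python
# Hydrostatic gauge pressure at the low point of an inverted siphon crossing a valley.
RHO_WATER = 1000.0   # kg/m^3, density of water
G = 9.81             # m/s^2, gravitational acceleration

def gauge_pressure_bar(depth_m: float) -> float:
    """Gauge pressure (in bar) of a static water column of the given depth."""
    return RHO_WATER * G * depth_m / 1e5

print(round(gauge_pressure_bar(50), 1))   # ~4.9 bar at the bottom of the 'venter'
```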
Roman aqueducts required a comprehensive system of regular maintenance. The "clear corridors '' created to protect the fabric of underground and overground conduits were regularly patrolled for unlawful ploughing, planting, roadways and buildings. Frontinus describes the penetration of conduits by tree - roots as particularly damaging. The aqueduct conduits would have been regularly inspected and maintained by working patrols, to reduce algal fouling, repair accidental breaches, clear the conduits of gravel and other loose debris, and remove channel - narrowing accretions of calcium carbonate in systems fed by hard water sources. Inspection and access points were provided at regular intervals on the standard, buried conduits. Accretions within siphons could drastically reduce flow rates, due to the already narrow diameter of their pipes. Some had sealed openings that might have been used as rodding eyes, possibly using a pull - through device. In Rome, where a hard - water supply was the norm, mains pipework was shallowly buried beneath road kerbs, for ease of access; the accumulation of calcium carbonate in these pipes would have necessitated their frequent replacement.
The aqueducts were under the overall care and governance of a water commissioner (curator aquarum). It was a high - status, high - profile appointment. In 97, Frontinus served both as consul and as curator aquarum, under the emperor Nerva. Little is known of the day - to - day business of aqueduct maintenance teams (aquarii). Under the emperor Claudius, Rome 's contingent of imperial aquarii comprised a familia aquarum of 700 persons, both slave and free, funded through a combination of Imperial largesse and water taxes. They were supervised by an Imperial freedman, who held office as procurator aquarum. Theirs was probably a never - ending routine of patrol, inspection and cleaning, punctuated by occasional emergencies. Full closure of any aqueduct for servicing would have been a rare event, kept as brief as possible, with repairs preferably made when water demand was lowest, which was presumably at night. The water supply could be shut off at its aqueduct outlet when small or local repairs were needed, but substantial maintenance and repairs to the aqueduct conduit itself required the complete diversion of water at a point upstream or at the spring - head itself.
Aqueduct mains could be directly tapped, but they more usually fed into public distribution terminals, known as castella aquae, which supplied various branches and spurs, usually via large - bore lead or ceramic pipes. Thereafter, the supply could be further subdivided. Licensed, fee - paying private users would have been registered, along with the bore of pipe that led from the public water supply to their private property -- the wider the pipe, the greater the flow and the higher the fee. Tampering and fraud to avoid or reduce payment were commonplace; methods included the fitting of unlicensed outlets, additional outlets, and the illegal widening of lead pipes; any of which might involve the bribery or connivance of unscrupulous aqueduct officials or workers. Official lead pipes carried inscriptions with information on the pipe 's manufacturer, its fitter, and probably on its subscriber and their entitlement. During the Imperial era, lead production became an Imperial monopoly, and the granting of rights to draw water for private use from state - funded aqueducts was made an imperial privilege.
Rome 's first aqueduct (312 BC) discharged at very low pressure and at a more - or-less constant rate in the city 's main trading centre and cattle - market, probably into a low - level, cascaded series of troughs or basins; the upper for household use, the lower for watering the livestock traded there. Most Romans would have filled buckets and storage jars at the basins, and carried the water to their apartments; the better off would have sent slaves to perform the same task. The outlet 's elevation was too low to offer any city household or building a direct supply; the overflow drained into Rome 's main sewer, and from there into the Tiber. At this time, Rome had no public baths. The first were probably built in the next century, based on precursors in neighboring Campania; a limited number of private baths and small, street - corner public baths would have had a private water supply, but once aqueduct water was brought to the city 's higher elevations, large and well - appointed public baths were built throughout the city, and drinking water was delivered to public fountains at high pressure. Public baths and fountains became distinctive features of Roman civilization, and the baths in particular became important social centres.
The majority of urban Romans lived in multi-storeyed blocks of flats (insulae). Some blocks offered water services, but only to tenants on the more expensive, lower floors; the rest would have drawn their water gratis from public fountains.
Between 65 and 90 % of the Roman Empire 's population was involved in some form of agricultural work. Farmers whose villas or estates were near a public aqueduct could draw, under license, a specified quantity of aqueduct water for summer irrigation at a predetermined time; this was intended to limit the depletion of the water supply to users further down the gradient, and help ensure a fair distribution among competitors at the time when water was most needed and scarce. Water was possibly the most important variable in the agricultural economy of the Mediterranean world. Roman Italy 's natural water sources -- springs, streams, rivers and lakes -- were unevenly distributed across the landscape, and water tended to be scarce when it was most needed, during the warm, dry summer growing season. Columella recommends that any farm should contain a spring, stream or river, but acknowledges that not every farm did.
Farmland without a reliable summer water - source was virtually worthless. During the growing season, a "modest local '' irrigation system might consume as much water as the city of Rome; and the livestock whose manure fertilised the fields had to be fed and watered all year round. At least some Roman landowners and farmers relied in part or in whole on aqueduct water to raise crops as their primary or sole source of income, but the fraction of aqueduct water involved can only be guessed at. More certainly, the creation of municipal and city aqueducts brought a growth in the intensive and efficient suburban market - farming of fragile, perishable commodities such as flowers (for perfumes, and for festival garlands), grapes, vegetables and orchard fruits; and of small livestock such as pigs and chickens, close to the municipal and urban markets.
A licensed right to aqueduct water on farmland could lead to increased productivity, a cash income through the sale of surplus foodstuffs, and an increase in the value of the land itself. In the countryside, permissions to draw aqueduct water for irrigation were particularly hard to get; the exercise and abuse of such rights were subject to various known legal disputes and judgements, and at least one political campaign: during his censorship in the early 2nd century BC, Cato tried to block all unlawful rural outlets, especially those owned by the landed elite - "Look how much he bought the land for, where he is channeling the water! '' His attempted reform proved impermanent at best. Though illegal tapping could be punished by seizure of assets, including the illegally watered land and its produce, this law seems never to have been used, and was probably impracticable; food surpluses kept prices low, while grain shortages could lead to famine and social unrest. Any practical solution had to strike a balance between the water - needs of urban populations and grain producers, tax the latter 's profits, and secure sufficient grain at reasonable cost for the Roman poor (the so - called "corn dole '') and the army. Rather than seek to impose unproductive and probably unenforceable bans, the authorities issued individual water grants (though seldom in rural areas) and licenses, and regulated water outlets, with variable success. In the 1st century AD, Pliny the Elder, like Cato, could fulminate against grain producers who continued to wax fat on profits from public water and public land.
Some landholders avoided such restrictions and entanglements by buying water access rights to distant springs, not necessarily on their own land. A few, of high wealth and status, built their own aqueducts to transport such water from source to field or villa; Mummius Niger Valerius Vegetus bought the rights to a spring and its water from his neighbour, and access rights to a corridor of intervening land, then built an aqueduct of just under 10 kilometres, connecting the springhead to his own villa. The senatorial permission for this "Aqua Vegetiana '' was given only when the project seemed not to impinge on the water rights of other citizens.
Some aqueducts supplied water to industrial sites, usually via an open channel cut into the ground, clay lined or wood - shuttered to reduce water loss. Most such leats were designed to operate at the steep gradients that could deliver the high water volumes needed in mining operations. Water was used in hydraulic mining to strip the overburden and expose the ore by hushing, to fracture and wash away metal - bearing rock already heated and weakened by fire - setting, and to power water - wheel driven stamps and trip - hammers that crushed ore for processing. Evidence of such leats and machines has been found at Dolaucothi in south - west Wales.
Mining sites such as Dolaucothi (in Wales) and Las Medulas (in northwest Spain) show multiple aqueducts that fed water from local rivers to the mine head. The channels may have deteriorated rapidly, or become redundant as the nearby ore was exhausted. Las Medulas shows at least seven such leats, and Dolaucothi at least five. At Dolaucothi, the miners used holding reservoirs as well as hushing tanks, and sluice gates to control flow, as well as drop chutes for diversion of water supplies. The remaining traces (see palimpsest) of such channels allow the mining sequence to be inferred.
A number of other sites fed by several aqueducts have not yet been thoroughly explored or excavated, such as those at Longovicium near Lanchester, south of Hadrian 's Wall, where the water supplies may have been used to power trip - hammers for forging iron.
At Barbegal in Roman Gaul, a reservoir fed an aqueduct that drove a cascaded series of 15 or 16 overshot water mills, grinding flour for the Arles region. Similar arrangements, though on a lesser scale, have been found in Caesarea, Venafrum and Roman - era Athens. Rome 's Aqua Traiana drove a flour - mill at the Janiculum, west of the Tiber. A mill in the basement of the Baths of Caracalla was driven by aqueduct overspill; this was but one of many city mills driven by aqueduct water, with or without official permission. A law of the 5th century forbade the illicit use of aqueduct water for milling.
During the fall of the Roman Empire, some aqueducts were deliberately cut by enemies but more fell into disuse because of deteriorating Roman infrastructure and lack of maintenance, such as the Eifel aqueduct. Observations made by the Spaniard Pedro Tafur, who visited Rome in 1436, reveal misunderstandings of the very nature of the Roman aqueducts:
Through the middle of the city runs a river, which the Romans brought there with great labour and set in their midst, and this is the Tiber. They made a new bed for the river, so it is said, of lead, and channels at one and the other end of the city for its entrances and exits, both for watering horses and for other services convenient to the people, and anyone entering it at any other spot would be drowned.
During the Renaissance, the standing remains of the city 's massive masonry aqueducts inspired architects, engineers and their patrons; Pope Nicholas V renovated the main channels of the Roman Aqua Virgo in 1453. Many aqueducts in Rome 's former empire were kept in good repair. The 15th - century rebuilding of the aqueduct at Segovia in Spain shows advances on the Pont du Gard by using fewer arches of greater height, and so greater economy in its use of raw materials. The skill in building aqueducts was not lost, especially for the smaller, more modest channels used to supply water wheels. Most such mills in Britain were developed in the medieval period for bread production, and used methods similar to those developed by the Romans, with leats tapping local rivers and streams.
|
who played apollo creed's son in creed | Adonis Creed - wikipedia
Adonis "Donnie '' Creed, also known as Adonis "Donnie '' Johnson, is the main protagonist and title character from the Rocky spin - off and sequel Creed. Adonis is the illegitimate son of Apollo Creed; the result of an affair by the former heavyweight champion and a woman with the surname Johnson. Adonis spends the first several years of his life in foster care and juvenile hall, until he is subsequently adopted by Mary Anne Creed, Apollo 's widow. He lives a life of luxury and maintains a stable white collar job, only to abandon it to pursue a lifelong dream of becoming a professional boxer. He goes to Philadelphia and convinces his late father 's friend Rocky Balboa to train and mentor him.
In 1998, Adonis "Donnie '' Johnson is shown in a juvenile hall detention center fighting with other children. He is separated and taken back to his cell. Mary Anne Creed (Phylicia Rashad), Apollo 's widow, meets with Adonis and adopts him; informing him that he is Apollo Creed 's son (sired from an extramarital affair). In the present day, Adonis (using his biological mother 's last name Johnson) is a wealthy young college graduate working at a securities firm. However, on weekends, he sneaks out to Tijuana to fight professional boxing matches against unheralded opponents, and maintains an undefeated 15 - 0 record. Soon Adonis quits his job to pursue his dream of becoming a boxer. Mary Anne vehemently objects, remembering how her husband was killed in the ring 30 years earlier (Rocky IV) and how he suffered after every fight before then. He finds it hard to get anyone in Los Angeles to train him due to his father 's death in the ring, particularly after he suffers an embarrassing loss in a sparring match to light heavyweight contender Danny "Stuntman '' Wheeler (Andre Ward). Undaunted, Adonis decides to travel to Philadelphia in hopes of seeking out his father 's best friend and former rival, Rocky Balboa (Sylvester Stallone).
Once in Philadelphia, Adonis meets Rocky and tries to get the elder boxer to be his trainer. Having given up boxing, and believing Apollo would n't want his son being a fighter, Rocky refuses. However, Adonis ' persistence eventually wins Rocky over. He forms a strong bond with Rocky and regards him as an uncle, even going so far as to call him "Unc '' and introduce him to people as such. Meanwhile, Adonis forms a relationship with his downstairs neighbor, Bianca (Tessa Thompson), a singer - songwriter with progressive hearing loss. Donnie gets a match with Leo "The Lion '' Sporino (Gabriel Rosado), the son of a trainer who originally wanted Rocky to coach his son. He moves in with Rocky to train for the fight. Rocky takes Donnie -- now known as "Hollywood '' -- to the Front Street Gym to prepare with the help of several of Rocky 's longtime friends, where Adonis markedly improves his hand speed, stamina, and defense. Prior to the fight, Sporino 's father learns that Adonis is in fact Apollo Creed 's son. After Adonis wins the fight in a 2nd - round KO, Sporino 's father alerts the media to Adonis ' parentage. Meanwhile, world heavyweight champion "Pretty '' Ricky Conlan (Tony Bellew), who faces prison on gun charges that will effectively end his career, is gearing up for his final fight against Danny "Stuntman '' Wheeler. After Conlan breaks Wheeler 's jaw at the weigh - in for their title fight, Conlan 's manager Tommy Holiday (Graham McTavish) decides that the best way to end his career would be a fight against the son of Apollo Creed. Conlan is against it, but reluctantly agrees. Holiday meets with Rocky and Adonis, demanding that Adonis use the name Creed if he wants a shot at the title. Adonis is reluctant due to his desire to forge his own legacy. He only consents after Bianca persuades him to use Apollo 's surname.
During one intense night of training, Rocky becomes sick and is taken to the hospital. He is diagnosed with Non-Hodgkin lymphoma, but refuses to undergo chemotherapy, remembering that it did n't save his wife Adrian. Adonis discovers pamphlets about the disease in Rocky 's coat pocket after another late night training session, where Rocky tells Adonis that they are not family and that he has nothing to live for now that Adrian, his best friend and brother - in - law Paulie, Apollo, and his old coach Mickey have died, and his son has moved away to Vancouver. Feeling angry and dejected, Adonis gets into a fight with a rapper before one of Bianca 's shows, and is thrown in jail. When Rocky comes to bail him out, Adonis tells off the former heavyweight champion, accusing him of getting Apollo killed in the ring.
Later, Adonis goes to Bianca 's to apologize and explain the situation but she takes out her hearing aids and ignores him. He then meets up with Rocky, explaining that he 's going to use the name Creed and fight against Conlan, but only if Rocky gets treatment to fight his illness. While Rocky is sick, he still trains Adonis in the hospital room and back in his house.
The fight takes place in Conlan 's hometown of Liverpool, England, where Adonis is antagonized by Conlan at the press conference. Before the fight, Bianca comes to their hotel at Rocky 's behest, and the two lovers reconcile. Mary Anne sends Adonis his father 's iconic American flag boxing shorts; the back of the shorts bearing the name Johnson and the front bearing the name Creed. After some early struggles, Adonis shocks the world by giving Conlan all he can handle. He ultimately goes the distance, even managing to knock Conlan down for the first time in his career as Bianca and the once - antagonistic crowd begin to cheer him on. Although he loses by split decision, Adonis gains the respect and admiration of Conlan and everyone watching.
Back in Philadelphia, Adonis and a recovering Rocky go up the Rocky Steps, representing a victory for both Rocky and Adonis in fighting their respective battles.
Michael B. Jordan and Ryan Coogler had previously worked on Fruitvale Station together in 2013. Coogler contacted Jordan and presented him with the role of Apollo Creed 's son. Jordan did n't use a body double in his scenes. Of his experience, Jordan stated, "I did n't get knocked out or anything like that, but yeah there were definitely some slips, some jabs, some body punches. '' Jordan had a strict diet in preparation for the role: "I stripped down my diet completely. Grilled chicken, brown rice, broccoli and a lot of water. I worked out two to three times a day, six days a week. And... if you do that consistently for about 10 months your body will change. '' Ryan Coogler stated that "That was (Jordan) taking real punches '', and Jordan became "routinely bloodied, bruised and dizzy '' from his fight scenes with Andre Ward, Gabriel Rosado, and Tony Bellew, all of whom are professional boxers. Gabriel Rosado stated of Jordan 's boxing skills, "Michael can throw down man. If you sleep on him in the street he might put you to sleep. ''
Ryan Coogler was inspired to make Creed from his experiences with his own father: "He used to play Rocky before I had football games to pump me up, and he would get really emotional watching the movies. He used to watch Rocky II with his mom while she was sick and dying of cancer. She passed away when he was 18 years old. And so when he got sick he was losing his strength because he had a muscular condition. He was having trouble getting around, having trouble carrying stuff. I started thinking about this idea of my dad 's mortality. For me he was kind of like this mythical figure, my father, similar to what Rocky was for him. Going through it inspired me to make a film that told a story about his hero going through something similar to kind of motivate him and cheer him up. That 's how I came up with the idea for this movie. '' Although Sylvester Stallone was initially reluctant to help out with the film, he changed his mind upon meeting with Coogler and Jordan. In discussing Stallone 's advice to him, Jordan said that he "taught me how to throw punches and hit me in my chest a couple times. ''
Adonis is torn between trying to preserve his father 's legacy and building his own. A.O. Scott of The New York Times wrote that "Adonis is a complex character with a complex fate. He is at once a rich kid and a street kid, the proud carrier of an illustrious heritage and an invisible man. His relationship with Rocky is complicated, too. The older fighter is a mentor and a father figure, to be sure, but he also needs someone to take care of him, especially when illness adds a melodramatic twist to the plot. '' Adonis has been described as "arrogant ''. Although Adonis ' circumstances change after he is adopted by Mary Anne Creed, his late father 's widow, he retains his fiery personality. Adonis is short - tempered and impulsive, but good - natured, and it is his tenacity that convinces Rocky to train him. Michael O ' Sullivan of The Washington Post analyzes that Adonis ' "struggles with his temper '' are "a coping mechanism that helps him deal with the fear of not living up to the name Creed. '' Jordan states of Adonis, "My character is living in the shadow of his dad, who is arguably the greatest fighter that ever lived, and he really has to embrace that to move forward. I could understand wanting to have your own legacy and trying to find your own lane ''. Adonis ' hubris initially causes him to refuse to embrace the name Creed, instead using his mother 's surname Johnson. Only with his girlfriend Bianca 's encouragement does Adonis eventually come to accept the name. When Adonis meets Rocky and reveals to him that he is Apollo 's son and that he wants the elder former boxer to train him, Rocky questions, "Why would you pick a fighter 's life when you do n't need to? '' He immediately notes that Adonis is well - educated and comes from a wealthy background, which contrasts with Rocky 's own upbringing. However, Rocky sees in Adonis the drive and determination that he and Apollo had when they were younger, and agrees to train him. After Rocky is diagnosed with non-Hodgkin lymphoma, it is Adonis who motivates him and teaches him to fight again: "Adonis is there to push (Rocky) the same way Rocky pushes him in the gym and in the ring. '' Adonis pays tribute to Rocky, his father, and his country by wearing, in his title fight against Conlan, the classic American flag shorts that Apollo and Balboa sported in their bouts against Ivan Drago. However, Adonis ' shorts have the name Johnson on the back and Creed on the front, symbolizing that he can both preserve his father 's legacy and still make his own.
Although Apollo Creed 's boxing style was based on Muhammad Ali 's, Adonis uses a different style. Adonis has an orthodox stance like his father, yet Michael B. Jordan and Ryan Coogler wanted Adonis ' style to be "unorthodox ''. They modeled Adonis ' style on Timothy Bradley, a real life boxer from California. Jordan described Bradley 's style as "pretty wild '' and "kind of a brawler ''. He added that since Adonis "taught himself to box '', his style would be rough and untamed. Being trained by Rocky, Adonis learns to absorb a lot of punishment and has a strong chin, but he is generally much faster than Rocky and more prone to dodge punches. Unlike Bradley, who is a welterweight, or Rocky, a heavyweight, Adonis is a light heavyweight boxer.
The character of Adonis and Jordan 's portrayal of the character have received critical acclaim. Aisha Harris of Slate stated, "I feared signing on to Creed might derail Coogler 's and Jordan 's careers. Instead, this revitalizing crowd - pleaser solidifies my belief that these two have the potential to create really great art. '' "In a performance that should help his fans forget Fantastic Four, Jordan is flat - out terrific '', said reviewer Calvin Wilson. A.O. Scott of The New York Times noted that "Mr. Jordan 's limitations... have yet to be discovered. With every role, he seems to delight in the unfolding of his talent, and to pass his excitement along to the audience. '' David Sims of The Atlantic said, "Coogler and Jordan... create a protagonist of color who avoids the stereotypes of many of Hollywood 's black heroes while still being celebrated as one. Adonis is an easy hero for everyone to cheer for, but he 's not thinly painted. Scenes where he runs through Philadelphia followed by cheering kids on bikes are especially memorable -- they celebrate the film 's myth - making without putting the hero on an unreachable pedestal. '' Reviewers praised Adonis ' relationship with Bianca; Peter Travers of Rolling Stone stated that, "the romance Donny has with his own Adrian, R&B singer Bianca (a terrific Tessa Thompson), feels sexually frisky and freshly conceived. '' Stephanie Zacharek of Time says, "Jordan 's face, in particular, is the kind you feel protective of. He 's charismatic in a totally carefree way -- you never catch him trying too hard, and his scenes with Thompson have a lovely, bantering lyricism. ''
|
how old do you have to be to carry in texas | Gun laws in Texas - wikipedia
Gun laws in Texas regulate the sale, possession, and use of firearms and ammunition in the U.S. state of Texas.
Texas has no laws restricting the possession of firearms on the basis of age alone for persons without felony convictions; all existing restrictions in State law mirror Federal law. A person of any age, except certain felons, can possess a firearm, for example at a firing range. Texas and Federal law only regulate the ownership of firearms to persons 18 years of age or older, and restrict the transfer of handguns by FFL dealers to persons 21 years or older. However, a private citizen may sell, gift, lease, etc. a handgun to anyone over 18 who is not a felon. NFA weapons are also only subject to Federal restrictions; no State regulations exist. Municipal and county ordinances on possession and carry are generally overridden (preempted) due to the wording of the Texas Constitution, which gives the Texas Legislature (and it alone) the power to "regulate the wearing of arms, with a view to prevent crime ''. Penal Code Section 1.08 also prohibits local jurisdictions from enacting or enforcing any law that conflicts with State statute. Local ordinances restricting the discharge of a firearm are generally allowed, as State law has little or no specification thereof, but such restrictions do not preempt State law concerning justification of use of force and deadly force.
In Texas, a convicted felon may possess a firearm in the residence in which he lives once five years have elapsed from his release from prison or parole, whichever is later. Under Texas Penal Code § § 12.33, 46.04, the unlawful possession of a firearm is a third degree felony with a punishment range of two to ten years for a defendant with one prior felony conviction and a fine of up to $10,000.
There is no legal statute specifically prohibiting the carry of a firearm other than a handgun (pre-1899 black powder weapons, and replicas of such, are not legally firearms in Texas). However, if the firearm is displayed in a manner "calculated to cause alarm, '' then it is "disorderly conduct ''. Open carry of a handgun in public had long been illegal in Texas, except when the carrier was on property he / she owned or had lawful control over, was legally hunting, or was participating in some gun - related public event such as a gun show. However, the 2015 Texas Legislature passed a bill to allow concealed handgun permit holders to begin carrying handguns openly. The bill was signed into law on June 13, 2015, and took effect on January 1, 2016. A License to Carry (LTC) is still required to carry a handgun openly or concealed in public.
The Texas handgun carry permit was previously called a "Concealed Handgun License '' or CHL. This changed on January 1, 2016, to the LTC ("License to Carry ''), and at the same time the laws changed to include open carry. Permits are issued on a non-discretionary ("shall - issue '') basis to all eligible, qualified applicants. Texas has full reciprocity agreements with 30 states, not including Vermont (which is an "unrestricted '' state and neither issues nor requires permits); most of these agreements have some residency restrictions (the holder must either be a resident of Texas or a non-resident of the reciprocal state for the Texas license to be valid in that state). Texas recognizes an additional 11 states ' concealed - carry permits unilaterally; those states do not recognize Texas ' own permit as valid within their jurisdiction, usually due to some lesser requirement of the Texas permit compared to their own.
The handgun licensing law sets out the eligibility criteria that must be met. For example, an applicant must be eligible to purchase a handgun under State and Federal law (including an age restriction of 21); however, an exception is granted to active members of the military who are age 18 and over. Additionally, a number of factors may make a person ineligible (temporarily or permanently) to obtain a license, including:
This last category, though having little to do with a person 's ability to own a firearm, is in keeping with Texas policy for any licensing; those who are delinquent or in default on State - regulated debts or court judgments are generally barred from obtaining or renewing any State - issued license (including driver licenses), as an incentive to settle those debts.
"Information regarding any psychiatric, drug, alcohol, or criminal history '' is required only from new users. Renewals are required every five years, but are granted without further inquiry into or update of this information.
An eligible person wishing to obtain an LTC (formerly CHL) must take a State - set instruction course taught by a licensed instructor for a minimum of 4 hours and a maximum of 6 hours, covering topics such as applicable laws, conflict resolution, criminal / civil liability, and handgun safety, and pass a practical qualification at a firing range with a handgun. The caliber requirement was repealed on September 1, 2017. Such courses vary in cost, but are typically around $100 -- $125 for new applicants (usually not including the cost of ammunition and other shooting supplies; the practical qualification requires firing 50 rounds of ammunition). They may then apply, providing a picture, fingerprints, other documentation, and a $40 application fee (as of September 1, 2017; previously $140, and $70 for renewals) -- active and discharged military are eligible for discounts -- to the DPS, which processes the application, runs a federal background check, and, if all is well, issues the permit. Permits are valid for five years, and allow resident holders to carry in 29 other states (nonresidents may carry in all but four of those), due to reciprocity agreements. Discounted LTC fees vary from $0 for active duty military (through one year after discharge) to $25 for military veterans.
Public four - year universities (as of August 1, 2016) and public two - year colleges (as of August 1, 2017) must allow concealed carry in campus buildings as well. Universities will be allowed to designate certain sensitive areas as "gun free zones ''; these will be subject to legislative analysis.
While a permitted resident of Texas (or a nonresident holding a recognized permit) is generally authorized to carry in most public places, there are State and Federal laws that still restrict a permit holder from carrying a weapon in certain situations. These include:
TPC section 30.06 covers "Trespass by a person licensed to carry a concealed handgun ''. It allows a residential or commercial landowner to post signage that preemptively bars licensed persons from entering the premises while carrying concealed. It is a Class A misdemeanor to fail to heed compliant signage. As of January 1, 2016, the charge for failing to heed signage has been reduced to a Class C misdemeanor, unless it can be shown at trial that the actor was given oral notice and failed to depart, in which case it remains a Class A misdemeanor. This is a significant difference, in that conviction of a Class A or B misdemeanor will result in the loss of your handgun license for at least 5 years, this is not the case if convicted of a Class C misdemeanor.
Signs posted in compliance with TPC 30.06 are colloquially called "30.06 signs '' or "30.06 signage ''.
"Pursuant to Section 30.06, Penal Code (trespass by license holder with a concealed handgun), a person licensed under Subchapter H, Chapter 411, Government Code (handgun licensing law), may not enter this property with a concealed handgun ''
As of January 1, 2016, holders of a Texas CHL or LTC are able to openly carry handguns in the same places that allow concealed carry with some exceptions. Openly carried handguns must be in a shoulder or belt holster.
Existing CHL holders may continue to carry with a valid license. New applicants will be required to complete training on the use of restraint holsters and methods to ensure the secure carry of openly carried handguns.
Exceptions to Open Carry
This law took effect January 1, 2016, and covers the new Open Carry law. Section 30.07 is substantially similar to Section 30.06 which covers concealed carry.
TPC section 30.07 covers "Trespass by license holder with an openly carried handgun '' This allows a residential or commercial landowner to post signage that preemptively bars licensed persons from entering the premises while openly carrying a handgun. An offense under section 30.07 is a Class C misdemeanor, unless it is shown at trial that the actor was given oral notice and failed to depart, in which case the offense is a Class A misdemeanor. The 30.07 sign differs from the 30.06 sign in that it must be displayed at each entrance to the property. (In both cases the sign must be "clearly visible to the public ''.)
The law states that notice may be given orally by the owner of the property, or someone with apparent authority to act for the owner, or by written communication.
Written communication consists of a card or other document consisting the 30.07 language below, or a sign posted on the property.
Both written communication and a posted sign must contain language identical to the following (30.07 notice):
On March 27, 2007, Governor Rick Perry signed Senate Bill 378 into law, making Texas a "Castle Doctrine '' state which came into effect September 1, 2007. Residents lawfully occupying a dwelling may use deadly force against a person who "unlawfully, and with force, enters or attempts to enter the dwelling '', or who unlawfully and with force removes or attempts to remove someone from that dwelling, or who commits or attempts to commit a "qualifying '' felony such as "aggravated kidnapping, murder, sexual assault, aggravated sexual assault, robbery, or aggravated robbery '' (TPC 9.32 (b)).
Senate Bill 378 also contains a "Stand Your Ground '' clause: a person who has a legal right to be wherever he / she is at the time of a defensive shooting has no "duty to retreat '' before being justified in shooting. The "trier of fact '' (the jury in a jury trial, otherwise the judge) may not consider whether the person retreated when deciding whether the person was justified in shooting (TPC 9.32 (c, d)).
In addition, two statutes of the Texas Civil Practice And Remedies Code protect people who justifiably threaten or use deadly force. Chapter 86 prohibits a person convicted of a misdemeanor or felony from filing suit to recover any damages suffered as a result of the criminal act or any justifiable action taken by others to prevent the criminal act or to apprehend the person, including the firing of a weapon. Chapter 83 of the same code states that a person who used force or deadly force against an individual that is justified under TPC Chapter 9 is immune from liability for personal injury or death of the individual against whom the force was used. This does not relieve a person from liability for use of force or deadly force on someone against whom the force would not be justified, such as a bystander hit by an errant shot.
This law does not prevent a person from being sued for using deadly force. The civil court will determine if the defendant was justified under chapter 9 of the Penal Code.
Gov. Perry also signed H.B. 1815 after passage by the 2007 Legislature, a bill that allows any Texas resident to carry a handgun in the resident 's motor vehicle without a CHL or other permit. The bill revised Chapter 46, Section 2 of the Penal Code to state that it is in fact not "Unlawful Carry of a Weapon '', as defined by the statute, for a person to carry a handgun while in a motor vehicle they own or control, or to carry while heading directly from the person 's home to that car. However, lawful carry while in a vehicle requires these four critical qualifiers: (1) the weapon must not be in plain sight (in Texas law, "plain sight '' and "concealed '' are mutually exclusive opposing terms); (2) the carrier can not be involved in criminal activities, other than Class C traffic misdemeanors; (3) the carrier can not be prohibited by state or federal law from possessing a firearm; and (4) the carrier can not be a member of a criminal gang.
Previous legislation (H.B. 823) enacted in the 2005 session of the Legislature had modified TPC 46.15 ("Non-Applicability '') to include the "traveller assumption ''; a law enforcement officer who encounters a firearm in a vehicle was required to presume that the driver of that vehicle was "travelling '' under a pre-existing provision of 46.15, and thus the Unlawful Carry statute did not apply, absent evidence that the person was engaged in criminal activity, a member of a gang, or prohibited from possessing a firearm. However, attorneys and law enforcement officials in several municipalities including DA Chuck Rosenthal of Houston stated that they would continue to prosecute individuals found transporting firearms in their vehicles despite this presumption, leading to the more forceful statement of non-applicability in the 2007 H.B. 1815.
Possession of destructive devices, automatic firearms (machine guns), short - barrel shotguns (SBS), short - barrel rifles (SBR), suppressors, smoothbore pistols and other such NFA - restricted weapons is permitted by Texas law as long the owner has registered the item (s) into the NFA registry. This registration is legal if the owner possesses the proper forms, processed in accordance with the National Firearms Act which includes a paid tax stamp and approval by the NFA branch of the BATFE.
Certain local preemptions exist within the state of Texas which prohibit city or county governments from passing ordinances further restricting the lawful carrying of handguns in certain instances beyond that of state law.
In 2015, the legislature passed Government Code section 411.209, which prohibits an agency or political subdivision from excluding a concealed handgun license holder carrying a gun from government property unless firearms are prohibited on the premises by state law.
Furthermore, Texas Local Government Code section 235.023 states that the commissioners court of a county (i.e. the county legislative body) is not authorized to regulate the transfer, ownership, possession, or transportation of firearms or require the registration of firearms.
More general prohibitions on local restrictions on the sale, transfer, ownership, possession, etc. of firearms and knives are listed under Local Government Code Title 7, Subtitle A, Chapter 229, Subchapter A.
Further Attorney General decisions exist as well. For example, in August 1995, in opinion DM - 0364, Attorney General Dan Morales concluded that municipalities are prohibited from regulating the carrying of concealed handguns in city parks by persons licensed to carry a handgun: "... we believe that a municipality does not have the power to prohibit licensees from carrying handguns in city parks but that a county does have such power over county parks ''.
Dan Morales also held that municipal housing authorities are subject to the preemption statute and this precludes authorities from adopting a regulation to evict a tenant for the otherwise legal possession of a firearm.
|
who sings the song time has come today | Time Has Come Today - wikipedia
"Time Has Come Today '' is a hit single by the American soul group the Chambers Brothers, written by Willie & Joe Chambers. The song was recorded in 1966, released on the album The Time Has Come in November 1967, and as a single in December 1967. Although the single was a Top 10 near - miss in America, spending five weeks at No. 11 on the Billboard Hot 100 in the fall of 1968, it is now considered one of the landmark rock songs of the psychedelic era.
The song has been described as psychedelic rock, psychedelic soul and acid rock, and features a fuzz guitar twinned with a clean one. Various other effects were employed in its recording and production, including the alternate striking of two cow bells producing a "tick - tock '' sound, warped throughout most of the song by reverb, echo and changes in tempo. It quotes several bars from "The Little Drummer Boy '' at 5: 40 in the long version.
The song has appeared in many films. Director Hal Ashby used the full 11: 06 version as the backdrop to the climactic scene in which Captain Robert Hyde (Bruce Dern) "comes home '' to an unfaithful wife (Jane Fonda) in the 1978 Academy Award - winning film Coming Home.
Other films it has also been used in include the following:
The song has also appeared on television episodes:
The song was also featured in the final mission of the video game Homefront, which was developed by Kaos Studios and published by THQ.
Howard Stern proclaimed his love for the song on The Howard Stern Show, November 20, 2013.
Pearl Jam used the song as an intro tag to their cover of the Neil Young song "Rockin ' in the Free World '' during their August 22, 2016 concert at Wrigley Field in Chicago.
In 2010, Anthony Bourdain said that this song "saved his life ''.
The song was also featured in the trailer for the 2017 film Geostorm.
|
when did the ar-15 first go on sale | ArmaLite AR - 15 - wikipedia
The ArmaLite AR - 15 was a select - fire, air - cooled, gas - operated, magazine - fed assault rifle manufactured in the United States between 1959 and 1964. Designed by American gun manufacturer ArmaLite in 1956, it was based on its AR - 10 rifle. The ArmaLite AR - 15 was designed to be a lightweight assault rifle and to fire a new high - velocity, lightweight, small - caliber cartridge to allow the infantrymen to carry more ammunition.
In 1959, ArmaLite sold its rights to the AR - 10 and AR - 15 to Colt due to financial difficulties. After modifications (most notably, the charging handle was re-located from under the carrying handle, as on the AR - 10, to the rear of the receiver), Colt rebranded it the Colt ArmaLite AR - 15. Colt marketed the redesigned rifle to various military services around the world, and it was subsequently adopted by the U.S. military as the M16 rifle, which went into production in March 1964.
Colt continued to use the AR - 15 trademark for its line of semi-automatic - only rifles marketed to civilian and law - enforcement customers, known as the Colt AR - 15. The ArmaLite AR - 15 is the parent of a variety of Colt AR - 15 & M16 rifle variants.
After World War II, the United States military started looking for a single automatic rifle to replace the M1 Garand, M1 / M2 Carbines, M1918 Browning Automatic Rifle, M3 "Grease Gun '' and Thompson submachine gun. However, early experiments with select - fire versions of the M1 Garand proved disappointing. During the Korean War, the select - fire M2 Carbine largely replaced the submachine gun in US service and became the most widely used Carbine variant. However, combat experience suggested that the. 30 Carbine round was under - powered. American weapons designers concluded that an intermediate round was necessary, and recommended a small - caliber, high - velocity cartridge.
However, senior American commanders, having faced fanatical enemies and experienced major logistical problems during WWII and the Korean War, insisted that a single powerful. 30 caliber cartridge be developed that could be used not only by the new automatic rifle, but also by the new general - purpose machine gun (GPMG) in concurrent development. This culminated in the development of the 7.62 × 51mm NATO cartridge.
The United States Army then began testing several rifles to replace the obsolete M1 Garand. Springfield Armory 's T44E4 and heavier T44E5 were essentially updated versions of the Garand chambered for the new 7.62 mm round, while Fabrique Nationale submitted their FN FAL as the T48. ArmaLite entered the competition late, hurriedly submitting several AR - 10 prototype rifles in the fall of 1956 to the United States Army 's Springfield Armory for testing.
The AR - 10 featured an innovative straight - line barrel / stock design, forged aluminum alloy receivers and phenolic composite stocks. It had rugged elevated sights, an oversized aluminum flash suppressor and recoil compensator, and an adjustable gas system. The final prototype featured an upper and lower receiver with the now - familiar hinge and takedown pins, and the charging handle was on top of the receiver, placed inside of the carry handle. For a 7.62 mm NATO rifle, the AR - 10 was incredibly lightweight at only 6.85 lbs. empty. Initial comments by Springfield Armory test staff were favorable, and some testers commented that the AR - 10 was the best lightweight automatic rifle ever tested by the Armory.
In the end, the United States Army chose the T44, now called the M14 rifle, which was an improved M1 Garand with a 20 - round magazine and automatic fire capability. The U.S. also adopted the M60 general purpose machine gun (GPMG). Its NATO partners adopted the FN FAL and HK G3 rifles, as well as the FN MAG and Rheinmetall MG3 GPMGs.
The first confrontations between the AK - 47 and the M14 came in the early part of the Vietnam War. Battlefield reports indicated that the M14 was uncontrollable in full - auto and that soldiers could not carry enough ammo to maintain fire superiority over the AK - 47. Meanwhile, the M2 Carbine, while offering a high rate of fire, was under - powered and ultimately outclassed by the AK - 47. A replacement was needed: a medium between the traditional preference for high - powered rifles such as the M14 and the lightweight firepower of the M2 Carbine.
As a result, the Army was forced to reconsider a 1957 request by General Willard G. Wyman, commander of the U.S. Continental Army Command (CONARC), to develop a. 223 caliber (5.56 mm) select - fire rifle weighing 6 lb (2.7 kg) when loaded with a 20 - round magazine. The 5.56 mm round had to penetrate a standard U.S. M1 helmet at 500 yards (460 meters) and retain a velocity in excess of the speed of sound, while matching or exceeding the wounding ability of the. 30 Carbine cartridge. This request ultimately resulted in the development of a scaled - down version of the ArmaLite AR - 10, called the ArmaLite AR - 15 rifle.
In 1958, ArmaLite submitted ten AR - 15 rifles and one hundred 25 - round magazines for CONARC testing. The tests found that a 5 - to 7 - man team armed with AR - 15s had the same firepower as an 11 - man team armed with M14s, that soldiers armed with AR - 15s could carry three times as much ammunition as those armed with M14s (649 rounds vs 220 rounds), and that the AR - 15 was three times more reliable than the M14 rifle. However, General Maxwell Taylor, then Army Chief of Staff, "vetoed '' the AR - 15 in favor of the M14. In 1959, ArmaLite, now frustrated with the lack of results and suffering ongoing financial difficulties, sold its rights to the AR - 10 and AR - 15 to Colt.
After acquiring the AR - 15, Colt promptly redesigned the rifle, based on the final ArmaLite design, to facilitate mass production. Most notably, the charging handle was re-located from under the carrying handle, as on the earlier AR - 10, to the rear of the receiver, as on the later M16 rifle. Colt then renamed and rebranded the rifle the "Colt ArmaLite AR - 15 Model 01 ''. After a Far East tour, Colt made its first sale of Colt ArmaLite AR - 15 rifles to Malaya on September 30, 1959. Colt manufactured its first batch of 300 Colt ArmaLite AR - 15 rifles in December 1959. Colt would go on to market the Colt ArmaLite AR - 15 rifle to military services around the world.
In July 1960, General Curtis LeMay, then Vice Chief of Staff of the United States Air Force, was impressed by a demonstration of the AR - 15 and ordered 8,500 rifles. In the meantime, the Army continued testing the AR - 15, finding that the rifle, chambered for the intermediate. 223 (5.56 mm) cartridge, was much easier to shoot than the standard 7.62 mm NATO M14 rifle. In 1961 marksmanship testing, the U.S. Army found that 43 % of AR - 15 shooters achieved Expert, while only 22 % of M14 rifle shooters did so. The lower recoil impulse also allows for more controllable automatic weapons fire.
In the summer of 1961, General LeMay was promoted to Chief of Staff of the United States Air Force, and requested an additional 80,000 AR - 15s. However, General Maxwell D. Taylor, now Chairman of the Joint Chiefs of Staff (who repeatedly clashed with LeMay), advised President John F. Kennedy that having two different calibers within the military system at the same time would be problematic, and the request was rejected. In October 1961, William Godel, a senior man at the Advanced Research Projects Agency, sent 10 AR - 15s to South Vietnam. The reception was enthusiastic, and in 1962, another 1,000 AR - 15s were sent. United States Army Special Forces personnel filed battlefield reports lavishly praising the AR - 15 and the stopping - power of the 5.56 mm cartridge, and pressed for its adoption.
The damage caused by the 5.56 mm bullet was originally believed to be caused by "tumbling '' due to the slow 1 in 14 - inch (360 mm) rifling twist rate. However, any pointed lead - core bullet will "tumble '' after penetration in flesh, because the center of gravity is towards the rear of the bullet. The large wounds observed by soldiers in Vietnam were actually caused by bullet fragmentation, which was created by a combination of the bullet 's velocity and construction. These wounds were so devastating that the photographs remained classified into the 1980s.
However, despite overwhelming evidence that the AR - 15 could bring more firepower to bear than the M14, the Army opposed the adoption of the new rifle. U.S. Secretary of Defense Robert McNamara now had two conflicting views: the USAF 's (General LeMay 's) repeated requests for additional AR - 15s and the ARPA report favoring the AR - 15, versus the Army 's position favoring the M14. Even President Kennedy expressed concern, so McNamara ordered Secretary of the Army Cyrus Vance to test the M14, the AR - 15 and the AK - 47. The Army reported that only the M14 was suitable for service, but Vance wondered about the impartiality of those conducting the tests. He ordered the Army Inspector General to investigate the testing methods used; the Inspector General confirmed that the testers were biased towards the M14.
In January 1963, Secretary McNamara received reports that M14 production was insufficient to meet the needs of the armed forces and ordered a halt to M14 production. At the time, the AR - 15 was the only rifle that could fulfill a requirement of a "universal '' infantry weapon for issue to all services. McNamara ordered its adoption, despite receiving reports of several deficiencies, most notably the lack of a chrome - plated chamber.
After minor modifications, the new redesigned rifle was renamed the Rifle, Caliber 5.56 mm, M16. Meanwhile, the Army relented and recommended the adoption of the M16 for jungle warfare operations. However, the Army insisted on the inclusion of a forward assist to help push the bolt into battery in the event that a cartridge failed to seat into the chamber. The Air Force, Colt and Eugene Stoner believed that the addition of a forward assist was an unjustified expense. As a result, the design was split into two variants: the Air Force 's M16 without the forward assist, and the XM16E1 (AKA: M16A1) with the forward assist for the other service branches.
In November 1963, McNamara approved the U.S. Army 's order of 85,000 XM16E1s; and to appease General LeMay, the Air Force was granted an order for another 19,000 M16s. In March 1964, the M16 rifle went into production and the Army accepted delivery of the first batch of 2129 rifles later that year, and an additional 57,240 rifles the following year.
The Colt ArmaLite AR - 15 was discontinued with the adoption of the M16 rifle. Most AR - 15 rifles in U.S. service have long ago been upgraded to M16 configuration. The Colt ArmaLite AR - 15 was also used by the United States Secret Service and other U.S. federal, state and local law enforcement agencies.
Shortly after the United States military adopted the M16 rifle, Colt introduced its line of semi-automatic - only Colt AR - 15 rifles, which it markets to civilians and law enforcement. Colt continues to use the AR - 15 name for these rifles in order to pay homage to their predecessor, the ArmaLite AR - 15.
The AR - 15 is a select - fire, 5.56 × 45mm, air - cooled, direct impingement gas - operated, magazine - fed rifle, with a rotating bolt and straight - line recoil design. It was designed to be manufactured with the extensive use of aluminium and synthetic materials by state of the art Computer Numerical Control (CNC) automated machinery.
The AR - 15 is a Modular Weapon System. It is easy to assemble, modify and repair using a few simple hand tools and a flat surface to work on. The AR - 15 's upper receiver incorporates the fore stock, the charging handle, the gas operating system, the barrel, and the bolt and bolt carrier assembly. The lower receiver incorporates the magazine well, the pistol grip and the buttstock. The lower receiver also contains the trigger, disconnector, hammer and fire selector (collectively known as the fire control group). The AR - 15 's "duckbill '' flash suppressor had three tines or prongs and was designed to preserve the shooter 's night vision by disrupting the flash. Early AR - 15s had a 25 - round magazine. Later model AR - 15s used a 20 - round waffle - patterned magazine that was meant to be a lightweight, disposable item. As such, it is made of pressed / stamped aluminum and was not designed to be durable.
The AR - 15 's most distinctive ergonomic feature is the carrying handle and rear sight assembly on top of the receiver. This is a by - product of the design, where the carry handle serves to protect the charging handle. The AR - 15 rifle has a 500 mm (19.75 inches) sight radius. The AR - 15 uses an L - type flip aperture rear sight, adjustable between two settings: 0 to 300 meters and 300 to 400 meters. The front sight is a post adjustable for elevation. The rear sight can be adjusted for windage. The sights can be adjusted with a bullet tip or pointed tool.
"The (AR - 15 's) Stoner system provides a very symmetric design that allows straight line movement of the operating components. This allows recoil forces to drive straight to the rear. Instead of connecting or other mechanical parts driving the system, high pressure gas performs this function, reducing the weight of moving parts and the rifle as a whole. '' The AR - 15 's straight - line recoil design, where the recoil spring is located in the stock directly behind the action, and serves the dual function of operating spring and recoil buffer. The stock being in line with the bore also reduces muzzle rise, especially during automatic fire. Because recoil does not significantly shift the point of aim, faster follow - up shots are possible and user fatigue is reduced.
Colt 's first two models produced after the acquisition of the rifle from ArmaLite were the 601 and 602, and these rifles were in many ways clones of the original ArmaLite rifle (in fact, these rifles were often found stamped Colt ArmaLite AR - 15, Property of the U.S. Government, caliber .223, with no reference to them being M16s).
The 601 and 602 are virtually identical to the later M16 rifle without the forward - assist. Like the later M16 rifle, their charging handle was relocated from under the carrying handle (as on the AR - 10) to the rear of the receiver. They were equipped with triangular fore - stocks and occasionally green or brown furniture. Their front sight had a more triangular shape. They had flat lower receivers without raised surfaces around the magazine well. Their bolt hold open device lacked a raised lower engagement surface; its slanted and serrated surface had to be engaged with a bare thumb, index finger, or thumbnail. Their fire - selector was also changed from upward = safe, backward = semi-auto and forward = full - auto, to the now familiar forward = safe, upward = semi-auto, and backward = full - auto of the M16 rifle.
The only major difference between the 601 and 602 is the switch from the original 1: 14 - inch rifling twist to the more common 1: 12 - inch twist. The 601 was first adopted by the United States Air Force, and was quickly supplemented with the 602s (AKA: XM16s) and later the 604s (AKA: M16s). Over time, the 601s and 602s were converted to M16 rifle configuration. The USAF continued to use ArmaLite AR - 15 marked rifles well into the 1990s.
|
supernatural season 7 what episode does castiel come back | Supernatural (season 7) - Wikipedia
The seventh season of Supernatural, an American fantasy horror television series created by Eric Kripke, premiered September 23, 2011, and concluded May 18, 2012, airing 23 episodes. The season focuses on protagonists Sam (Jared Padalecki) and Dean Winchester (Jensen Ackles) facing a new enemy, the Leviathans, who are stronger than anything the brothers have encountered so far and who render their usual weapons useless.
On January 12, 2012, the season won two People 's Choice Awards, including Best Network TV Drama. This is the second and final season with Sera Gamble as showrunner, with Jeremy Carver taking over the role for season eight. Warner Home Video released the season on DVD and Blu - ray in Region 1 on September 18, 2012, in Region 2 on November 5, 2012, and in Region 4 on October 31, 2012. The seventh season had an average viewership of 1.73 million U.S. viewers.
In this table, the number in the first column refers to the episode 's number within the entire series, whereas the number in the second column indicates the episode 's number within this particular season. "U.S. viewers in millions '' refers to how many Americans watched the episode live or on the day of broadcast.
The series was renewed for a seventh season on April 26, 2011, and remained on Fridays at 9: 00 pm (ET).
The CW announced on August 20, 2011, that the season would be increased to 23 episodes, up one episode from the 22 - episode pickup the series had previously received.
On January 11, 2012, it was announced by the executive producer of the show, Robert Singer, that there was another cliffhanger ending planned for season seven.
Critical reception to the season has generally been positive. On the review aggregator website Rotten Tomatoes, the season was reported as having a 100 % approval rating with an average rating of 8.8 / 10 based on 5 reviews. However, one criticism from reviewers was of the lack of an emotional link between the Leviathans as a whole and the Winchester brothers, an element which had been present in previous seasons. The lack of threat from the monsters was also noted as a downside to the season, though James Patrick Stuart 's portrayal of Dick Roman, using corporate mannerisms and charm mixed with his own self - confidence, was pointed to as a high point of the story arc.
The turn taken by Mark Sheppard 's character Crowley in the final moments of the season was very surprising to critics. Crowley 's successful separation of the Winchester brothers, by taking advantage of Dean 's imprisonment in Purgatory and kidnapping both Kevin and Meg, was a good cliffhanger going into the next season, opening up many possibilities and questions. Another well - received point was the return of the Impala at the end of the season, much to the appreciation of the fans, as was Misha Collins ' portrayal of the resurrected and traumatized Castiel, which brought a new element to the chemistry between the brothers.
|
what is a co curricular activity at the school | Co-curricular activity (Singapore) - wikipedia
In Singapore, a co-curricular activity (CCA), previously known as an extracurricular activity (ECA), is a non-academic activity that all students, regardless of nationality, must participate in. This policy was introduced by the Ministry of Education (MOE).
Through CCAs, students discover their interests and talents while developing values and competencies that will prepare them for a rapidly changing world. CCA also promotes friendships among students from diverse backgrounds as they learn, play and grow together. Participation in CCA fosters social integration and deepens students ' sense of belonging to their school, community and nation.
CCAs also give students in their early teens actual public responsibilities. Red Cross and St John members, for example, are often required to render first aid at public events. Most uniformed groups require precision, management and organisational skills, providing training to prepare students for the outside world.
However, CCA records are rarely considered by potential employers.
CCA choices vary widely from school to school, although schools at each education level are required to conform to national standards prescribed for that level.
In primary schools, CCAs are often introduced to students at Primary Three. Not all primary schools make CCA participation compulsory. In primary schools, Brownies are likened to junior Girl Guides.
In secondary schools, CCAs are treated more seriously. Students are required to pick at least one Core CCA to join at Secondary One. Belonging to a Core CCA is compulsory, and the students may choose a second CCA if they wish. At the end of the fourth / fifth year, 1 to 2 ' O ' Level points are removed from the examination aggregate (a lower aggregate indicates better marks). Although the marks are few, it is believed by many that they may make a difference when the students are considered for the most popular post-secondary educational institutions. For example, one minimum prerequisite for admission to Raffles Institution at Year Five, via the ' O ' Levels, is an already perfect score with the maximum of 4 points removed.
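As a purely hypothetical illustration of how the deduction works: a student whose raw ' O ' Level examination aggregate is 10 and who qualifies for 2 CCA bonus points would be considered on a net aggregate of 8; since a lower aggregate indicates better marks, the deduction can only improve a student 's standing when competing for popular post-secondary places.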
CCAs are held outside of curriculum hours and the activities partaken depend on the nature of CCA. For example, Uniformed groups do foot drills and team - building exercises while competitive sportsmen spend most of the time training and learning techniques from their instructors.
CCA groups typically feature an Executive Committee. In musical groups and CCAs catering to specific interests, the Executive Committee typically consists of a Chairman, Secretary and Treasurer, among other positions.
Many former students return to their alma mater after graduation to help impart what they have learned to their juniors. Some do so within a formal framework, such as those in the uniformed groups (where ex-cadets are appointed as cadet officers), or the Voluntary Adult Leader scheme (for those above age 20). Others do so on a casual basis.
Many CCA - related competitions are held in Singapore, creating a competitive environment which provide CCA groups an objective to work towards.
The Ministry of Education organises competitions for competitive sports at the zonal and national level, respectively the yearly Zonal and National Schools Competitions. MOE also organises the biennial Singapore Youth Festival (SYF) for the Aesthetics CCAs.
Note that Band may count as either a uniformed group or a performing arts group.
In some schools, instead of separate clubs for Language, Debate and Drama (and even Culture), these domains are grouped under the heading of Language Debate and Drama Societies, an example of which is the English Literary Drama and Debate Society (ELDDS).
|
when will season 2 of skylanders academy come out on netflix | Skylanders Academy - wikipedia
Skylanders Academy is a French - American animated web television series produced by TeamTO and Activision Blizzard Studios based on the Skylanders series. The first season debuted on Netflix on October 28, 2016, with a second season slated to debut on October 6, 2017. A trailer for the series debuted on October 12, 2016.
In the world of Skylands, Spyro the Dragon, Stealth Elf, and Eruptor are new graduates at Skylanders Academy. Under the teachings of Master Eon and Jet - Vac, the three of them will learn what it means to be a Skylander while fighting the evil Kaos and other villains of Skylands.
The voice cast consists of:
At an Investor Day presentation on November 6, 2015, Activision Blizzard announced the formation of Activision Blizzard Studios, a film production subsidiary dedicated to creating original films and television series. Headed by former Walt Disney Company executive Nick van Dyk, Activision Blizzard Studios would look to produce and adapt Skylanders into a film and a television series; the latter, called Skylanders Academy, started airing on October 28, 2016 on Netflix.
The series is a spin - off, said to be separate from the storyline of the games with no direct tie - ins to sequels. In addition, the different voice cast is intended to give the show its own separate identity. However, Richard Horvitz, Jonny Rees, and Bobcat Goldthwait are the only actors from the games to reprise their roles as Kaos, Jet - Vac, and Pop Fizz, respectively.
|
diabetes mellitus is a group of metabolic diseases all of which present with | Diabetes mellitus - wikipedia
Diabetes mellitus (DM), commonly referred to as diabetes, is a group of metabolic disorders in which there are high blood sugar levels over a prolonged period. Symptoms of high blood sugar include frequent urination, increased thirst, and increased hunger. If left untreated, diabetes can cause many complications. Acute complications can include diabetic ketoacidosis, hyperosmolar hyperglycemic state, or death. Serious long - term complications include cardiovascular disease, stroke, chronic kidney disease, foot ulcers, and damage to the eyes.
Diabetes is due to either the pancreas not producing enough insulin or the cells of the body not responding properly to the insulin produced. There are three main types of diabetes mellitus: type 1 DM, which results from the pancreas 's failure to produce enough insulin due to loss of beta cells; type 2 DM, which begins with insulin resistance, a condition in which cells fail to respond to insulin properly, and which may progress to a lack of insulin as well; and gestational diabetes, which occurs when pregnant women without a previous history of diabetes develop high blood sugar levels.
Prevention and treatment involve maintaining a healthy diet, regular physical exercise, a normal body weight, and avoiding use of tobacco. Control of blood pressure and maintaining proper foot care are important for people with the disease. Type 1 DM must be managed with insulin injections. Type 2 DM may be treated with medications with or without insulin. Insulin and some oral medications can cause low blood sugar. Weight loss surgery is sometimes an effective measure in those with obesity and type 2 DM. Gestational diabetes usually resolves after the birth of the baby.
As of 2015, an estimated 415 million people had diabetes worldwide, with type 2 DM making up about 90 % of the cases. This represents 8.3 % of the adult population, with equal rates in both women and men. As of 2014, trends suggested the rate would continue to rise. Diabetes at least doubles a person 's risk of early death. From 2012 to 2015, approximately 1.5 to 5.0 million deaths each year resulted from diabetes. The global economic cost of diabetes in 2014 was estimated to be US $612 billion. In the United States, diabetes cost $245 billion in 2012.
The classic symptoms of untreated diabetes are weight loss, polyuria (increased urination), polydipsia (increased thirst), and polyphagia (increased hunger). Symptoms may develop rapidly (weeks or months) in type 1 DM, while they usually develop much more slowly and may be subtle or absent in type 2 DM.
Several other signs and symptoms can mark the onset of diabetes although they are not specific to the disease. In addition to the known ones above, they include blurry vision, headache, fatigue, slow healing of cuts, and itchy skin. Prolonged high blood glucose can cause glucose absorption in the lens of the eye, which leads to changes in its shape, resulting in vision changes. A number of skin rashes that can occur in diabetes are collectively known as diabetic dermadromes.
Low blood sugar is common in persons with type 1 and type 2 DM. Most cases are mild and are not considered medical emergencies. Effects can range from feelings of unease, sweating, trembling, and increased appetite in mild cases to more serious issues such as confusion, changes in behavior such as aggressiveness, seizures, unconsciousness, and (rarely) permanent brain damage or death in severe cases. Moderate hypoglycemia may easily be mistaken for drunkenness; rapid breathing and sweating, cold, pale skin are characteristic of hypoglycemia but not definitive. Mild to moderate cases are self - treated by eating or drinking something high in sugar. Severe cases can lead to unconsciousness and must be treated with intravenous glucose or injections with glucagon.
People (usually with type 1 DM) may also experience episodes of diabetic ketoacidosis, a metabolic disturbance characterized by nausea, vomiting and abdominal pain, the smell of acetone on the breath, deep breathing known as Kussmaul breathing, and in severe cases a decreased level of consciousness.
A rare but equally severe possibility is hyperosmolar hyperglycemic state, which is more common in type 2 DM and is mainly the result of dehydration.
All forms of diabetes increase the risk of long - term complications. These typically develop after many years (10 -- 20) but may be the first symptom in those who have otherwise not received a diagnosis before that time.
The major long - term complications relate to damage to blood vessels. Diabetes doubles the risk of cardiovascular disease and about 75 % of deaths in diabetics are due to coronary artery disease. Other "macrovascular '' diseases are stroke, and peripheral artery disease.
The primary complications of diabetes due to damage in small blood vessels include damage to the eyes, kidneys, and nerves. Damage to the eyes, known as diabetic retinopathy, is caused by damage to the blood vessels in the retina of the eye, and can result in gradual vision loss and blindness. Damage to the kidneys, known as diabetic nephropathy, can lead to tissue scarring, urine protein loss, and eventually chronic kidney disease, sometimes requiring dialysis or kidney transplantation. Damage to the nerves of the body, known as diabetic neuropathy, is the most common complication of diabetes. The symptoms can include numbness, tingling, pain, and altered pain sensation, which can lead to damage to the skin. Diabetes - related foot problems (such as diabetic foot ulcers) may occur, and can be difficult to treat, occasionally requiring amputation. Additionally, proximal diabetic neuropathy causes painful muscle atrophy and weakness.
There is a link between cognitive deficit and diabetes. Compared to those without diabetes, those with the disease have a 1.2 to 1.5-fold greater rate of decline in cognitive function. Being diabetic, especially when on insulin, increases the risk of falls in older people.
Diabetes mellitus is classified into four broad categories: type 1, type 2, gestational diabetes, and "other specific types ''. The "other specific types '' are a collection of a few dozen individual causes. Diabetes is a more variable disease than once thought and people may have combinations of forms. The term "diabetes '', without qualification, usually refers to diabetes mellitus.
Type 1 diabetes mellitus is characterized by loss of the insulin - producing beta cells of the pancreatic islets, leading to insulin deficiency. This type can be further classified as immune - mediated or idiopathic. The majority of type 1 diabetes is of the immune - mediated nature, in which a T cell - mediated autoimmune attack leads to the loss of beta cells and thus insulin. It causes approximately 10 % of diabetes mellitus cases in North America and Europe. Most affected people are otherwise healthy and of a healthy weight when onset occurs. Sensitivity and responsiveness to insulin are usually normal, especially in the early stages. Type 1 diabetes can affect children or adults, but was traditionally termed "juvenile diabetes '' because a majority of these diabetes cases were in children.
"Brittle '' diabetes, also known as unstable diabetes or labile diabetes, is a term that was traditionally used to describe the dramatic and recurrent swings in glucose levels, often occurring for no apparent reason in insulin - dependent diabetes. This term, however, has no biologic basis and should not be used. Still, type 1 diabetes can be accompanied by irregular and unpredictable high blood sugar levels, frequently with ketosis, and sometimes with serious low blood sugar levels. Other complications include an impaired counterregulatory response to low blood sugar, infection, gastroparesis (which leads to erratic absorption of dietary carbohydrates), and endocrinopathies (e.g., Addison 's disease). These phenomena are believed to occur no more frequently than in 1 % to 2 % of persons with type 1 diabetes.
Type 1 diabetes is partly inherited, with multiple genes, including certain HLA genotypes, known to influence the risk of diabetes. In genetically susceptible people, the onset of diabetes can be triggered by one or more environmental factors, such as a viral infection or diet. Several viruses have been implicated, but to date there is no stringent evidence to support this hypothesis in humans. Among dietary factors, data suggest that gliadin (a protein present in gluten) may play a role in the development of type 1 diabetes, but the mechanism is not fully understood.
Type 2 DM is characterized by insulin resistance, which may be combined with relatively reduced insulin secretion. The defective responsiveness of body tissues to insulin is believed to involve the insulin receptor. However, the specific defects are not known. Diabetes mellitus cases due to a known defect are classified separately. Type 2 DM is the most common type of diabetes mellitus.
In the early stage of type 2, the predominant abnormality is reduced insulin sensitivity. At this stage, high blood sugar can be reversed by a variety of measures and medications that improve insulin sensitivity or reduce the liver 's glucose production.
Type 2 DM is primarily due to lifestyle factors and genetics. A number of lifestyle factors are known to be important to the development of type 2 DM, including obesity (defined by a body mass index of greater than 30), lack of physical activity, poor diet, stress, and urbanization. Excess body fat is associated with 30 % of cases in those of Chinese and Japanese descent, 60 -- 80 % of cases in those of European and African descent, and 100 % of Pima Indians and Pacific Islanders. Even those who are not obese often have a high waist -- hip ratio.
Dietary factors also influence the risk of developing type 2 DM. Consumption of sugar - sweetened drinks in excess is associated with an increased risk. The type of fats in the diet is also important, with saturated fat and trans fats increasing the risk and polyunsaturated and monounsaturated fat decreasing the risk. Eating lots of white rice also may increase the risk of diabetes. A lack of physical activity is believed to cause 7 % of cases.
Gestational diabetes mellitus (GDM) resembles type 2 DM in several respects, involving a combination of relatively inadequate insulin secretion and responsiveness. It occurs in about 2 -- 10 % of all pregnancies and may improve or disappear after delivery. However, after pregnancy approximately 5 -- 10 % of women with gestational diabetes are found to have diabetes mellitus, most commonly type 2. Gestational diabetes is fully treatable, but requires careful medical supervision throughout the pregnancy. Management may include dietary changes, blood glucose monitoring, and in some cases, insulin may be required.
Though it may be transient, untreated gestational diabetes can damage the health of the fetus or mother. Risks to the baby include macrosomia (high birth weight), congenital heart and central nervous system abnormalities, and skeletal muscle malformations. Increased levels of insulin in a fetus 's blood may inhibit fetal surfactant production and cause respiratory distress syndrome. A high blood bilirubin level may result from red blood cell destruction. In severe cases, perinatal death may occur, most commonly as a result of poor placental perfusion due to vascular impairment. Labor induction may be indicated with decreased placental function. A Caesarean section may be performed if there is marked fetal distress or an increased risk of injury associated with macrosomia, such as shoulder dystocia.
Maturity onset diabetes of the young (MODY) is an autosomal dominant inherited form of diabetes, due to one of several single - gene mutations causing defects in insulin production. It is significantly less common than the three main types. The name of this disease refers to early hypotheses as to its nature. Being due to a defective gene, this disease varies in age at presentation and in severity according to the specific gene defect; thus there are at least 13 subtypes of MODY. People with MODY often can control it without using insulin.
Prediabetes indicates a condition that occurs when a person 's blood glucose levels are higher than normal but not high enough for a diagnosis of type 2 DM. Many people destined to develop type 2 DM spend many years in a state of prediabetes.
Latent autoimmune diabetes of adults (LADA) is a condition in which type 1 DM develops in adults. Adults with LADA are frequently initially misdiagnosed as having type 2 DM, based on age rather than cause.
Some cases of diabetes are caused by the body 's tissue receptors not responding to insulin (even when insulin levels are normal, which is what separates it from type 2 diabetes); this form is very uncommon. Genetic mutations (autosomal or mitochondrial) can lead to defects in beta cell function. Abnormal insulin action may also have been genetically determined in some cases. Any disease that causes extensive damage to the pancreas may lead to diabetes (for example, chronic pancreatitis and cystic fibrosis). Diseases associated with excessive secretion of insulin - antagonistic hormones can cause diabetes (which is typically resolved once the hormone excess is removed). Many drugs impair insulin secretion and some toxins damage pancreatic beta cells. The ICD - 10 (1992) diagnostic entity, malnutrition - related diabetes mellitus (MRDM or MMDM, ICD - 10 code E12), was deprecated by the World Health Organization when the current taxonomy was introduced in 1999.
Other forms of diabetes mellitus include congenital diabetes, which is due to genetic defects of insulin secretion, cystic fibrosis - related diabetes, steroid diabetes induced by high doses of glucocorticoids, and several forms of monogenic diabetes.
"Type 3 diabetes '' has been suggested as a term for Alzheimer 's disease as the underlying processes may involve insulin resistance by the brain.
The following is a comprehensive list of other causes of diabetes:
Insulin is the principal hormone that regulates the uptake of glucose from the blood into most cells of the body, especially liver, adipose tissue and muscle, except smooth muscle, in which insulin acts via the IGF - 1. Therefore, deficiency of insulin or the insensitivity of its receptors plays a central role in all forms of diabetes mellitus.
The body obtains glucose from three main places: the intestinal absorption of food; the breakdown of glycogen, the storage form of glucose found in the liver; and gluconeogenesis, the generation of glucose from non-carbohydrate substrates in the body. Insulin plays a critical role in balancing glucose levels in the body. Insulin can inhibit the breakdown of glycogen or the process of gluconeogenesis, it can stimulate the transport of glucose into fat and muscle cells, and it can stimulate the storage of glucose in the form of glycogen.
Insulin is released into the blood by beta cells (β - cells), found in the islets of Langerhans in the pancreas, in response to rising levels of blood glucose, typically after eating. Insulin is used by about two - thirds of the body 's cells to absorb glucose from the blood for use as fuel, for conversion to other needed molecules, or for storage. Lower glucose levels result in decreased insulin release from the beta cells and in the breakdown of glycogen to glucose. This process is mainly controlled by the hormone glucagon, which acts in the opposite manner to insulin.
If the amount of insulin available is insufficient, if cells respond poorly to the effects of insulin (insulin insensitivity or insulin resistance), or if the insulin itself is defective, then glucose will not be absorbed properly by the body cells that require it, and it will not be stored appropriately in the liver and muscles. The net effect is persistently high levels of blood glucose, poor protein synthesis, and other metabolic derangements, such as acidosis.
When the glucose concentration in the blood remains high over time, the kidneys will reach a threshold of reabsorption, and glucose will be excreted in the urine (glycosuria). This increases the osmotic pressure of the urine and inhibits reabsorption of water by the kidney, resulting in increased urine production (polyuria) and increased fluid loss. Lost blood volume will be replaced osmotically from water held in body cells and other body compartments, causing dehydration and increased thirst (polydipsia).
Diabetes mellitus is characterized by recurrent or persistent high blood sugar, and is diagnosed by demonstrating any one of the following: a fasting plasma glucose level at or above 7.0 mmol / l (126 mg / dl); a plasma glucose at or above 11.1 mmol / l (200 mg / dl) two hours after a 75 g oral glucose load as in a glucose tolerance test; symptoms of high blood sugar together with a casual (random) plasma glucose at or above 11.1 mmol / l (200 mg / dl); or a glycated hemoglobin (HbA1c) at or above 6.5 %.
A positive result, in the absence of unequivocal high blood sugar, should be confirmed by a repeat of any of the above methods on a different day. It is preferable to measure a fasting glucose level because of the ease of measurement and the considerable time commitment of formal glucose tolerance testing, which takes two hours to complete and offers no prognostic advantage over the fasting test. According to the current definition, two fasting glucose measurements above 126 mg / dl (7.0 mmol / l) are considered diagnostic for diabetes mellitus.
Per the World Health Organization, people with fasting glucose levels from 6.1 to 6.9 mmol / l (110 to 125 mg / dl) are considered to have impaired fasting glucose. People with plasma glucose at or above 7.8 mmol / l (140 mg / dl), but not over 11.1 mmol / l (200 mg / dl), two hours after a 75 g oral glucose load are considered to have impaired glucose tolerance. Of these two prediabetic states, the latter in particular is a major risk factor for progression to full - blown diabetes mellitus, as well as cardiovascular disease. The American Diabetes Association has since 2003 used a slightly different range for impaired fasting glucose, of 5.6 to 6.9 mmol / l (100 to 125 mg / dl).
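As a rough illustrative sketch only (not a clinical tool, and not taken from any cited guideline document), the WHO fasting ranges described above can be expressed as a simple classification; the function name and structure below are hypothetical:

```python
def classify_fasting_glucose(mmol_per_l):
    """Bucket a single fasting plasma glucose reading (mmol/l) into the WHO
    ranges described above; an actual diagnosis requires repeat confirmation."""
    mg_per_dl = mmol_per_l * 18.0  # approximate conversion implied above (7.0 mmol/l ~ 126 mg/dl)
    if mmol_per_l >= 7.0:
        category = "diabetic range"
    elif mmol_per_l >= 6.1:
        category = "impaired fasting glucose"
    else:
        category = "below the impaired fasting glucose threshold"
    return category, round(mg_per_dl)

# Example: 6.5 mmol/l falls in the impaired fasting glucose band (about 117 mg/dl).
print(classify_fasting_glucose(6.5))
```

Under the American Diabetes Association 's lower cutoff of 5.6 mmol / l mentioned above, the middle band would simply start lower.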
Glycated hemoglobin is better than fasting glucose for determining risks of cardiovascular disease and death from any cause.
There is no known preventive measure for type 1 diabetes. Type 2 diabetes -- which accounts for 85 - 90 % of all cases -- can often be prevented or delayed by maintaining a normal body weight, engaging in physical activity, and consuming a healthful diet. Higher levels of physical activity (more than 90 minutes per day) reduce the risk of diabetes by 28 %. Dietary changes known to be effective in helping to prevent diabetes include maintaining a diet rich in whole grains and fiber, and choosing good fats, such as the polyunsaturated fats found in nuts, vegetable oils, and fish. Limiting sugary beverages and eating less red meat and other sources of saturated fat can also help prevent diabetes. Tobacco smoking is also associated with an increased risk of diabetes and its complications, so smoking cessation can be an important preventive measure as well.
The relationship between type 2 diabetes and the main modifiable risk factors (excess weight, unhealthy diet, physical inactivity and tobacco use) is similar in all regions of the world. There is growing evidence that the underlying determinants of diabetes are a reflection of the major forces driving social, economic and cultural change: globalization, urbanization, population aging, and the general health policy environment.
Diabetes mellitus is a chronic disease, for which there is no known cure except in very specific situations. Management concentrates on keeping blood sugar levels as close to normal as possible, without causing low blood sugar. This can usually be accomplished with a healthy diet, exercise, weight loss, and use of appropriate medications (insulin in the case of type 1 diabetes; oral medications, as well as possibly insulin, in type 2 diabetes).
Learning about the disease and actively participating in the treatment is important, since complications are far less common and less severe in people who have well - managed blood sugar levels. The goal of treatment is an HbA1c level of 6.5 %, but it should not be lower than that, and may be set higher. Attention is also paid to other health problems that may accelerate the negative effects of diabetes. These include smoking, elevated cholesterol levels, obesity, high blood pressure, and lack of regular exercise. Specialized footwear is widely used to reduce the risk of ulceration, or re-ulceration, in at - risk diabetic feet. Evidence for the efficacy of this remains equivocal, however.
People with diabetes can benefit from education about the disease and treatment, good nutrition to achieve a normal body weight, and exercise, with the goal of keeping both short - term and long - term blood glucose levels within acceptable bounds. In addition, given the associated higher risks of cardiovascular disease, lifestyle modifications are recommended to control blood pressure.
There is no single dietary pattern that is best for all people with diabetes. For overweight people with type 2 diabetes, any diet that the person will adhere to and achieve weight loss on is effective.
Medications used to treat diabetes do so by lowering blood sugar levels. There are a number of different classes of anti-diabetic medications. Some are available by mouth, such as metformin, while others are only available by injection such as GLP - 1 agonists. Type 1 diabetes can only be treated with insulin, typically with a combination of regular and NPH insulin, or synthetic insulin analogs.
Metformin is generally recommended as a first line treatment for type 2 diabetes, as there is good evidence that it decreases mortality. It works by decreasing the liver 's production of glucose. Several other groups of drugs, mostly given by mouth, may also decrease blood sugar in type II DM. These include agents that increase insulin release, agents that decrease absorption of sugar from the intestines, and agents that make the body more sensitive to insulin. When insulin is used in type 2 diabetes, a long - acting formulation is usually added initially, while continuing oral medications. Doses of insulin are then increased to effect.
Since cardiovascular disease is a serious complication associated with diabetes, some have recommended blood pressure levels below 130 / 80 mmHg. However, evidence supports less than or equal to somewhere between 140 / 90 mmHg to 160 / 100 mmHg; the only additional benefit found for blood pressure targets beneath this range was an isolated decrease in stroke risk, and this was accompanied by an increased risk of other serious adverse events. A 2016 review found potential harm to treating lower than 140 mmHg. Among medications that lower blood pressure, angiotensin converting enzyme inhibitors (ACEIs) improve outcomes in those with DM while the similar medications angiotensin receptor blockers (ARBs) do not. Aspirin is also recommended for people with cardiovascular problems, however routine use of aspirin has not been found to improve outcomes in uncomplicated diabetes.
A pancreas transplant is occasionally considered for people with type 1 diabetes who have severe complications of their disease, including end stage kidney disease requiring kidney transplantation.
Weight loss surgery in those with obesity and type 2 diabetes is often an effective measure. Many are able to maintain normal blood sugar levels with little or no medication following surgery, and long - term mortality is decreased. There is, however, a short - term mortality risk of less than 1 % from the surgery. The body mass index cutoffs for when surgery is appropriate are not yet clear. It is recommended that this option be considered in those who are unable to get both their weight and blood sugar under control.
In countries using a general practitioner system, such as the United Kingdom, care may take place mainly outside hospitals, with hospital - based specialist care used only in case of complications, difficult blood sugar control, or research projects. In other circumstances, general practitioners and specialists share care in a team approach. Home telehealth support can be an effective management technique.
As of 2016, 422 million people have diabetes worldwide, up from an estimated 382 million people in 2013 and from 108 million in 1980. Accounting for the shifting age structure of the global population, the prevalence of diabetes is 8.5 % among adults, nearly double the rate of 4.7 % in 1980. Type 2 makes up about 90 % of the cases. Some data indicate rates are roughly equal in women and men, but male excess in diabetes has been found in many populations with higher type 2 incidence, possibly due to sex - related differences in insulin sensitivity, consequences of obesity and regional body fat deposition, and other contributing factors such as high blood pressure, tobacco smoking, and alcohol intake.
The World Health Organization (WHO) estimates that diabetes mellitus resulted in 1.5 million deaths in 2012, making it the 8th leading cause of death. However another 2.2 million deaths worldwide were attributable to high blood glucose and the increased risks of cardiovascular disease and other associated complications (e.g. kidney failure), which often lead to premature death and are often listed as the underlying cause on death certificates rather than diabetes. For example, in 2014, the International Diabetes Federation (IDF) estimated that diabetes resulted in 4.9 million deaths worldwide, using modeling to estimate the total number of deaths that could be directly or indirectly attributed to diabetes.
Diabetes mellitus occurs throughout the world but is more common (especially type 2) in more developed countries. The greatest increase in rates has however been seen in low - and middle - income countries, where more than 80 % of diabetic deaths occur. The fastest prevalence increase is expected to occur in Asia and Africa, where most people with diabetes will probably live in 2030. The increase in rates in developing countries follows the trend of urbanization and lifestyle changes, including increasingly sedentary lifestyles, less physically demanding work and the global nutrition transition, marked by increased intake of foods that are high energy - dense but nutrient - poor (often high in sugar and saturated fats, sometimes referred to as the "Western - style '' diet).
Diabetes was one of the first diseases described, with an Egyptian manuscript from c. 1500 BCE mentioning "too great emptying of the urine ''. The Ebers papyrus includes a recommendation for a drink to be taken in such cases. The first described cases are believed to be of type 1 diabetes. Indian physicians around the same time identified the disease and classified it as madhumeha or "honey urine '', noting the urine would attract ants.
The term "diabetes '' or "to pass through '' was first used in 230 BCE by the Greek Apollonius of Memphis. The disease was considered rare during the time of the Roman empire, with Galen commenting he had only seen two cases during his career. This is possibly due to the diet and lifestyle of the ancients, or because the clinical symptoms were observed during the advanced stage of the disease. Galen named the disease "diarrhea of the urine '' (diarrhea urinosa).
The earliest surviving work with a detailed reference to diabetes is that of Aretaeus of Cappadocia (2nd or early 3rd century CE). He described the symptoms and the course of the disease, which he attributed to the moisture and coldness, reflecting the beliefs of the "Pneumatic School ''. He hypothesized a correlation of diabetes with other diseases, and he discussed differential diagnosis from the snakebite which also provokes excessive thirst. His work remained unknown in the West until 1552, when the first Latin edition was published in Venice.
Type 1 and type 2 diabetes were identified as separate conditions for the first time by the Indian physicians Sushruta and Charaka in 400 -- 500 CE with type 1 associated with youth and type 2 with being overweight. The term "mellitus '' or "from honey '' was added by the Briton John Rolle in the late 1700s to separate the condition from diabetes insipidus, which is also associated with frequent urination. Effective treatment was not developed until the early part of the 20th century, when Canadians Frederick Banting and Charles Herbert Best isolated and purified insulin in 1921 and 1922. This was followed by the development of the long - acting insulin NPH in the 1940s.
The word diabetes (/ ˌdaɪ. əˈbiːtiːz / or / ˌdaɪ. əˈbiːtɪs /) comes from Latin diabētēs, which in turn comes from Ancient Greek διαβήτης (diabētēs), which literally means "a passer through; a siphon ''. Ancient Greek physician Aretaeus of Cappadocia (fl. 1st century CE) used that word, with the intended meaning "excessive discharge of urine '', as the name for the disease. Ultimately, the word comes from Greek διαβαίνειν (diabainein), meaning "to pass through, '' which is composed of δια - (dia -), meaning "through '' and βαίνειν (bainein), meaning "to go ''. The word "diabetes '' is first recorded in English, in the form diabete, in a medical text written around 1425.
The word mellitus (/ məˈlaɪtəs / or / ˈmɛlɪtəs /) comes from the classical Latin word mellītus, meaning "mellite '' (i.e. sweetened with honey; honey - sweet). The Latin word comes from mell -, which comes from mel, meaning "honey ''; sweetness; pleasant thing, and the suffix - ītus, whose meaning is the same as that of the English suffix "- ite ''. It was Thomas Willis who in 1675 added "mellitus '' to the word "diabetes '' as a designation for the disease, when he noticed the urine of a diabetic had a sweet taste (glycosuria). This sweet taste had been noticed in urine by the ancient Greeks, Chinese, Egyptians, Indians, and Persians.
The 1989 "St. Vincent Declaration '' was the result of international efforts to improve the care accorded to those with diabetes. Doing so is important not only in terms of quality of life and life expectancy but also economically -- expenses due to diabetes have been shown to be a major drain on health -- and productivity - related resources for healthcare systems and governments.
Several countries established more and less successful national diabetes programmes to improve treatment of the disease.
People with diabetes who have neuropathic symptoms such as numbness or tingling in feet or hands are twice as likely to be unemployed as those without the symptoms.
In 2010, diabetes - related emergency room (ER) visit rates in the United States were higher among people from the lowest income communities (526 per 10,000 population) than from the highest income communities (236 per 10,000 population). Approximately 9.4 % of diabetes - related ER visits were for the uninsured.
The term "type 1 diabetes '' has replaced several former terms, including childhood - onset diabetes, juvenile diabetes, and insulin - dependent diabetes mellitus (IDDM). Likewise, the term "type 2 diabetes '' has replaced several former terms, including adult - onset diabetes, obesity - related diabetes, and noninsulin - dependent diabetes mellitus (NIDDM). Beyond these two types, there is no agreed - upon standard nomenclature.
Diabetes mellitus is also occasionally known as "sugar diabetes '' to differentiate it from diabetes insipidus.
In animals, diabetes is most commonly encountered in dogs and cats. Middle - aged animals are most commonly affected. Female dogs are twice as likely to be affected as males, while according to some sources, male cats are also more prone than females. In both species, all breeds may be affected, but some small dog breeds are particularly likely to develop diabetes, such as Miniature Poodles. The symptoms may relate to fluid loss and polyuria, but the course may also be insidious. Diabetic animals are more prone to infections. The long - term complications recognized in humans are much rarer in animals. The principles of treatment (weight loss, oral antidiabetics, subcutaneous insulin) and management of emergencies (e.g. ketoacidosis) are similar to those in humans.
Inhalable insulin has been developed. The original products were withdrawn due to side effects. Afrezza, under development by the pharmaceuticals company MannKind Corporation, was approved by the FDA for general sale in June 2014. An advantage to inhaled insulin is that it may be more convenient and easy to use.
Transdermal insulin in the form of a cream has been developed and trials are being conducted on people with type 2 diabetes.
|
where was the movie ramona and beezus filmed | Ramona and Beezus - wikipedia
Ramona and Beezus is a 2010 American family adventure comedy film adaptation based on the Ramona series of novels written by Beverly Cleary. It was directed by Elizabeth Allen, co-produced by Dune Entertainment, Di Novi Pictures, and Walden Media, written by Laurie Craig and Nick Pustay, and produced by Denise Di Novi and Alison Greenspan with music by Mark Mothersbaugh. The film stars Joey King, Selena Gomez, Hutch Dano, Ginnifer Goodwin, John Corbett, Bridget Moynahan, Josh Duhamel, Jason Spevack, Sandra Oh, Sierra McCormick, Patti Allan, Lynda Boyd, and Aila and Zanti McCubbing. Though the film 's title is derived from Beezus and Ramona, the first of Cleary 's Ramona books, the plot is mostly based on the sequels Ramona Forever and Ramona 's World. Fox 2000 Pictures released the film on July 23, 2010. Ramona and Beezus earned generally positive reviews from critics and grossed $27,293,743.
The adventurous and creative third - grader Ramona Quimby (Joey King) often finds herself in trouble at school and at home, usually with her best friend, Howie (Jason Spevack). When her father Robert (John Corbett) loses his job and the family falls into severe debt, Ramona 's efforts to earn money end up backfiring in humorous ways. She repeatedly embarrasses her older sister, Beatrice (Selena Gomez), calling her by her family nickname, "Beezus '', in front of Beatrice 's crush, the paperboy Henry Huggins (Hutch Dano). After working as an executive in a storage company since Beezus 's birth, Robert causes quarrels with his wife and the girls ' mother Dorothy (Bridget Moynahan) when he decides to pursue a creative career.
Meanwhile, Ramona 's visiting aunt Bea (Ginnifer Goodwin) is one of the few people who accept Ramona despite all her eccentricities. After a car - painting accident involving Bea 's old flame Hobart (Josh Duhamel), Ramona gives up her money - making schemes. The next day, Ramona ruins her school portrait by cracking a raw egg in her hair and responding with disgust when the photographer asks her to say "Peas '' instead of "Cheese. '' Ramona 's worries increase the following day, when her classmate Susan (Sierra McCormick) reveals that after her own father lost his job, her parents divorced and her father moved to Tacoma. The news makes Ramona sick, and Robert has to pick her up early from school, interfering with a sudden job interview. Instead of being angry, Robert decides to spend the rest of his day drawing a mural with Ramona.
Ramona and Beezus attempt to make dinner for their parents, but the pan catches fire while Beezus is on the phone with Henry. During the ensuing argument, Henry overhears that Beezus loves him. Still upset, Ramona goes to feed her cat Picky - Picky but is devastated to find the cat dead. The girls ' private funeral for Picky - Picky helps them reconcile. A job offer for Robert in Oregon leads Ramona 's parents to decide to sell their house. As the family touches up the garden during an open house, Ramona inadvertently initiates a water fight with the neighbors, which floods the neighbors ' backyard and exposes a box that Hobart buried there years ago. The box contains mementos of Bea and Hobart 's teenage romance, and in light of their rekindling relationship, he proposes to her. Hesitantly, Bea accepts, and the family begins planning the impromptu wedding. Furious that her aunt broke her promise not to get "reeled in, '' Ramona rushes home and seeks solace in the attic. The fragile rafters break, leaving Ramona 's legs dangling from the ceiling during the open house. After the open house clears out, Robert scolds Ramona for her lack of maturity. He then receives a phone call from her teacher, Mrs. Meachum (Sandra Oh). Feeling unwanted, Ramona decides to run away. Unable to convince Ramona not to leave, her mother helps her pack her suitcase. Opening the heavy suitcase at a bus stop, Ramona discovers that her mother made it heavy on purpose to keep Ramona from traveling far. Inside, her mother has packed a book of Robert 's sketches of Ramona. Her family finds her soon afterward and everyone is happily reunited.
At Bea and Hobart 's wedding, Ramona saves the day when she finds the wedding ring Howie dropped. During the reception, Beezus and Henry share a kiss and dance together. Robert also receives a job offer from Ramona 's school; Mrs. Meachum recommended Robert to the school 's board as its new art teacher after she saw the mural that he and Ramona made. Ramona is delighted that the family will not have to move and that Robert and Dorothy reconcile. Before Bea and Hobart leave for their honeymoon in Alaska, Ramona gives Bea a locket with her school picture, and Bea tells Ramona that she 's "extraordinary. ''
On January 10, 2010, it was announced that Elizabeth Allen would direct a film adaptation of the Ramona series of novels written by Beverly Cleary. The film, Ramona and Beezus, would be released in cinemas on July 23, 2010. Denise Di Novi and Alison Greenspan spent $15 million to produce the film with writers Laurie Craig and Nick Pustay. Joey King, Selena Gomez, Hutch Dano, Ginnifer Goodwin, John Corbett, Bridget Moynahan, Josh Duhamel, Jason Spevack, Sandra Oh, Sierra McCormick, Patti Allan, Lynda Boyd, and Aila and Zanti McCubbing starred in the film. Fox 2000 Pictures acquired distribution rights to the film. Mark Mothersbaugh composed the music for the film. Dune Entertainment, Walden Media, and Di Novi Pictures co-produced the film.
Ramona and Beezus was released in theaters on July 23, 2010 by 20th Century Fox and Walden Media to 2,719 theaters nationwide. The film was rated G by the MPAA, becoming the studio 's fourth film to be rated G since 1997 's Anastasia. (The next film from 20th Century Fox to receive that rating was 2015 's The Peanuts Movie.) The trailer was released on March 18, 2010, and was shown in theaters along with How to Train Your Dragon, The Last Song, Despicable Me, Toy Story 3, and 20th Century Fox 's other films, including Diary of a Wimpy Kid and Marmaduke. The film premiered in New York City on July 20, 2010. It was released in Irish and British cinemas on October 22, 2010.
Ramona and Beezus earned generally positive reviews. Review aggregator Rotten Tomatoes reports that of 68 critical reviews, 70 % have been positive for an average rating of 6.3 / 10. Among Rotten Tomatoes ' "Top Critics '', consisting of notable critics from the top newspapers and websites, the film holds an overall approval rating of 73 %, based on 26 reviews. The film holds a 56 rating on Metacritic, based on 28 reviews. Eric Snider of Film.com said that "The resulting story is a jumble, and there are too many side characters, but golly if it is n't pretty darned infectious. '' Jason Anderson of the Toronto Star gave Ramona and Beezus a good review, saying that "(Ramona and Beezus) is a lively affair, largely thanks to the sweet and snappy screenplay by Laurie Craig and Nick Pustay and to the appealing performances by the cast. ''
The film opened at # 4, grossing under $3 million. It brought in $7.8 million during its opening weekend, earning it # 6 at the box office. Over its first week, it earned nearly $12.7 million. As of November 20, 2010, its total gross stands at $26,645,939, surpassing its $15 million budget. The film made £ 84,475 on its first weekend in the UK (information based on the UK film council).
The film was released on DVD and Blu - ray combo pack on November 9, 2010.
The film 's soundtrack includes "Live Like There 's No Tomorrow '', performed by Selena Gomez & the Scene. The song was digitally released as a soundtrack single on July 13, 2010 and appears on the band 's second album, A Year Without Rain. Other songs in the movie include "A Place in This World '' by Taylor Swift, "Say Hey (I Love You) '' by Michael Franti & Spearhead, "Here It Goes Again '' by OK Go, a cover of "Walking on Sunshine '' by Aly & AJ, "Eternal Flame '' by The Bangles, and "(Let 's Get Movin ') Into Action '' by Skye Sweetnam featuring Tim Armstrong.
"Live Like There 's No Tomorrow '', performed by Selena Gomez & the Scene, was released as a single from the Ramona and Beezus soundtrack album on July 13, 2010 and appears as the final track of the band 's album A Year Without Rain.
Kim Gillespie gave A Year Without Rain 3.5 out of 5 stars in the Wairarapa Times - Age. Bill Lamb of About.com praised "Live Like There 's No Tomorrow '' as "an inspirational ballad that truly does uplift by the end. '' MusicOMH editor David Welsh wrote a mixed opinion, stating that the song "stumbles upon emotional resonance thanks in no small part to Gomez 's most impressive performance to date. ''
|
dodge viper hennessey venom 1000 tt top speed | Hennessey Viper Venom 1000 Twin Turbo - wikipedia
The Hennessey Viper Venom 1000TT (Twin Turbo) is an upgraded version of the Dodge Viper produced by Hennessey Performance Engineering, also known as HPE, that can be purchased as a complete car or as an upgrade package. The car can be had as a coupe or a convertible. It has a theoretical maximum production run of 24 vehicles.
As tested in 2006 by Motor Trend magazine, the coupe variant weighed 3469 lb, cost $187,710, and had a 0.52 drag coefficient.
The Venom 1000TT is powered by an 8.5 liter V10 motor from a 2003 Viper that originally produced 500 brake horsepower (370 kW) and 525 foot pounds (712 N m) but has been modified to produce 1000 bhp and 1100 ft lbf (1,490 N m) of torque. The engine has been stroked from 8.4 to 8.55 liters, and has had the compression ratio lowered to 9.0: 1. It also has been equipped with Twin Garrett ball bearing turbochargers and a front - mounted air - to - air intercooler.
According to testing by Motor Trend in 2005, the Venom failed California emissions testing due to excess carbon monoxide, but is legal in the other 49 states. However, the car had been tuned for a standing - mile race on a closed circuit, Motor Trend surprised Hennessey with this emissions test, and the car only failed by a very small margin, which could have easily been remedied with a less aggressive tune.
0 - 60 mph: 3.25 seconds
Top Speed: 215 mph (346 km / h)
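The metric figures quoted in parentheses above (kW, N m, km / h) follow from standard unit conversions rather than separate measurements; as a rough sketch, using standard conversion constants that are not Hennessey data, they can be reproduced as follows:

```python
# Standard conversion factors (not specific to this car).
HP_TO_KW = 0.7457        # 1 mechanical horsepower ~ 0.7457 kW
FTLBF_TO_NM = 1.3558     # 1 foot-pound force ~ 1.3558 N m
MPH_TO_KMH = 1.609344    # 1 mile per hour = 1.609344 km/h

print(round(500 * HP_TO_KW))      # ~373 kW, quoted above as 370 kW
print(round(525 * FTLBF_TO_NM))   # ~712 N m, matching the stock torque figure
print(round(1100 * FTLBF_TO_NM))  # ~1491 N m, quoted above as 1,490 N m
print(round(215 * MPH_TO_KMH))    # ~346 km/h, matching the quoted top speed
```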
|
where was the carthage located and why did it compete with rome | Carthage - wikipedia
Carthage (/ ˈkɑːrθɪdʒ /, from Latin: Carthāgō; Phoenician: 𐤒𐤓𐤕𐤇𐤃𐤔𐤕 Qart - ḥadašt "New City '') was the centre or capital city of the ancient Carthaginian civilization, on the eastern side of the Lake of Tunis in what is now the Tunis Governorate in Tunisia.
The city developed from a Phoenician colony into the capital of an empire dominating the Mediterranean during the first millennium BC. The apocryphal queen Dido is regarded as the founder of the city, though her historicity has been questioned. According to accounts by Timaeus of Tauromenium, she purchased from a local tribe the amount of land that could be covered by an oxhide. Cutting the skin into strips, she laid out her claim and founded an empire that would become, through the Punic Wars, the only existential threat to the Roman Empire until the coming of the Vandals several centuries later.
The ancient city was destroyed by the Roman Republic in the Third Punic War in 146 BC and then re-developed as Roman Carthage, which became the major city of the Roman Empire in the province of Africa. The Roman city was in turn destroyed in the course of the Muslim conquest of the Maghreb in 698. The site remained uninhabited, the regional power shifting to the Medina of Tunis in the medieval period, until the early 20th century, when it began to develop into a coastal suburb of Tunis, incorporated as Carthage municipality in 1919.
The archaeological site was first surveyed in 1830, by Danish consul Christian Tuxen Falbe. Excavations were performed in the second half of the 19th century by Charles Ernest Beulé and by Alfred Louis Delattre. The Carthage National Museum was founded in 1875 by Cardinal Charles Lavigerie. Excavations performed by French archaeologists in the 1920s attracted an extraordinary amount of attention because of the evidence they produced for child sacrifice, in Greco - Roman and Biblical tradition associated with the Canaanite god Baal Hammon. The open - air Carthage Paleo - Christian Museum has exhibits excavated under the auspices of UNESCO from 1975 to 1984.
The name Carthage / ˈkarθɪdʒ / is the Early Modern anglicisation of French Carthage / kaʁ. taʒ /, from Latin Carthāgō, derived via Greek Karkhēdōn (Καρχηδών) and Etruscan * Carθaza, from the Punic qrt - ḥdšt (𐤒𐤓𐤕 𐤇𐤃𐤔𐤕) "new city '', implying it was a "new Tyre ''. The Latin Carthāgō, Carthāginis is an n - stem, as reflected in the English adjective Carthaginian. The Latin adjective pūnicus, a variant of the word "Phoenician '', is reflected in English in some borrowings from Latin -- notably the Punic Wars and the Punic language.
The Modern Standard Arabic form قرطاج (Qarṭāj) is an adoption of French Carthage, replacing an older local toponym reported as Cartagenna that directly continued the Latin name.
Carthage was built on a promontory with sea inlets to the north and the south. The city 's location made it master of the Mediterranean 's maritime trade. All ships crossing the sea had to pass between Sicily and the coast of Tunisia, where Carthage was built, affording it great power and influence. Two large, artificial harbors were built within the city, one for harboring the city 's massive navy of 220 warships and the other for mercantile trade. A walled tower overlooked both harbors. The city had massive walls, 37 km (23 mi) in length, longer than the walls of comparable cities. Most of the walls were located on the shore, thus could be less impressive, as Carthaginian control of the sea made attack from that direction difficult. The 4.0 to 4.8 km (2.5 to 3 mi) of wall on the isthmus to the west were truly massive and were never penetrated. The city had a huge necropolis or burial ground, religious area, market places, council house, towers, and a theater, and was divided into four equally sized residential areas with the same layout. Roughly in the middle of the city stood a high citadel called the Byrsa.
Carthage was one of the largest cities of the Hellenistic period and was among the largest cities in preindustrial history. Whereas by AD 14, Rome had at least 750,000 inhabitants and in the following century may have reached 1 million, the cities of Alexandria and Antioch numbered only a few hundred thousand or less. According to the not always reliable history of Herodian, Carthage rivaled Alexandria for second place in the Roman empire.
On top of Byrsa hill, the location of the Roman Forum, a residential area from the last century of existence of the Punic city (early second century BCE) was excavated by the French archaeologist Serge Lancel. The neighborhood, with its houses, shops, and private spaces, is significant for what it reveals about daily life there over 2100 years ago.
The remains have been preserved under embankments, the substructures of the later Roman forum, whose foundation piles dot the district. The housing blocks are separated by a grid of straight streets about 6 m (20 ft) wide, with a roadway consisting of clay; in situ stairs compensate for the slope of the hill. Construction of this type presupposes organization and political will, and has inspired the name of the neighborhood, "Hannibal district '', referring to the legendary Punic general or sufet (consul) at the beginning of the second century BCE.
The habitat is typical, even stereotypical. The street was often used as a storefront / shopfront; cisterns were installed in basements to collect water for domestic use, and a long corridor on the right side of each residence led to a courtyard containing a sump, around which various other elements may be found. In some places, the ground is covered with mosaics called punica pavement, sometimes using a characteristic red mortar.
The merchant harbor at Carthage was developed, after settlement of the nearby Punic town of Utica. Eventually the surrounding countryside was brought into the orbit of the Punic urban centres, first commercially, then politically. Direct management over cultivation of neighbouring lands by Punic owners followed. A 28 - volume work on agriculture written in Punic by Mago, a retired army general (c. 300), was translated into Latin and later into Greek. The original and both translations have been lost; however, some of Mago 's text has survived in other Latin works. Olive trees (e.g., grafting), fruit trees (pomegranate, almond, fig, date palm), viniculture, bees, cattle, sheep, poultry, implements, and farm management were among the ancient topics which Mago discussed. As well, Mago addresses the wine - maker 's art (here a type of sherry).
In Punic farming society, according to Mago, the small estate owners were the chief producers. They were, two modern historians write, not absentee landlords. Rather, the likely reader of Mago was "the master of a relatively modest estate, from which, by great personal exertion, he extracted the maximum yield. '' Mago counselled the rural landowner, for the sake of their own ' utilitarian ' interests, to treat their managers and farm workers, or their overseers and slaves, carefully and well. Yet elsewhere these writers suggest that rural land ownership provided also a new power base among the city 's nobility, for those resident in their country villas. By many, farming was viewed as an alternative endeavour to an urban business. Another modern historian opines that more often it was the urban merchant of Carthage who owned rural farming land to some profit, and also to retire there during the heat of summer. It may seem that Mago anticipated such an opinion, and instead issued this contrary advice (as quoted by the Roman writer Columella):
"The man who acquires an estate must sell his house, lest he prefer to live in the town rather than in the country. Anyone who prefers to live in a town has no need of an estate in the country. '' "One who has bought land should sell his town house, so that he will have no desire to worship the household gods of the city rather than those of the country; the man who takes greater delight in his city residence will have no need of a country estate. ''
The issues involved in rural land management also reveal underlying features of Punic society, its structure and stratification. The hired workers might be considered ' rural proletariat ', drawn from the local Berbers. Whether or not there remained Berber landowners next to Punic - run farms is unclear. Some Berbers became sharecroppers. Slaves acquired for farm work were often prisoners of war. In lands outside Punic political control, independent Berbers cultivated grain and raised horses on their lands. Yet within the Punic domain that surrounded the city - state of Carthage, there were ethnic divisions in addition to the usual quasi feudal distinctions between lord and peasant, or master and serf. This inherent instability in the countryside drew the unwanted attention of potential invaders. Yet for long periods Carthage was able to manage these social difficulties.
The many amphorae with Punic markings subsequently found about ancient Mediterranean coastal settlements testify to Carthaginian trade in locally made olive oil and wine. Carthage 's agricultural production was held in high regard by the ancients, and rivaled that of Rome -- they were once competitors, e.g., over their olive harvests. Under Roman rule, however, grain production (wheat and barley) for export increased dramatically in ' Africa '; yet these later fell with the rise in Roman Egypt 's grain exports. Thereafter olive groves and vineyards were re-established around Carthage. Visitors to the several growing regions that surrounded the city wrote admiringly of the lush green gardens, orchards, fields, irrigation channels, hedgerows (as boundaries), as well as the many prosperous farming towns located across the rural landscape.
Accordingly, the Greek author and compiler Diodorus Siculus (fl. 1st century BCE), who enjoyed access to ancient writings later lost, and on which he based most of his writings, described agricultural land near the city of Carthage circa 310 BC:
"It was divided into market gardens and orchards of all sorts of fruit trees, with many streams of water flowing in channels irrigating every part. There were country homes everywhere, lavishly built and covered with stucco... Part of the land was planted with vines, part with olives and other productive trees. Beyond these, cattle and sheep were pastured on the plains, and there were meadows with grazing horses. ''
The Chora (farm lands of Carthage) encompassed a limited area: the north coastal tell, the lower Bagradas river valley (inland from Utica), Cape Bon, and the adjacent sahel on the east coast. Punic culture here achieved the introduction of agricultural sciences first developed for lands of the eastern Mediterranean, and their adaptation to local African conditions.
The urban landscape of Carthage is known in part from ancient authors, augmented by modern digs and surveys conducted by archeologists. The "first urban nucleus '' dating to the seventh century, in area about ten hectares (about 25 acres), was apparently located on low - lying lands along the coast (north of the later harbors). As confirmed by archaeological excavations, Carthage was a "creation ex nihilo '', built on ' virgin ' land, and situated at the end of a peninsula (per the ancient coastline). Here among "mud brick walls and beaten clay floors '' (recently uncovered) were also found extensive cemeteries, which yielded evocative grave goods like clay masks. "Thanks to this burial archaeology we know more about archaic Carthage than about any other contemporary city in the western Mediterranean. '' Already in the eighth century, fabric dyeing operations had been established, evident from crushed shells of murex (from which the ' Phoenician purple ' was derived). Nonetheless, only a "meager picture '' of the cultural life of the earliest pioneers in the city can be conjectured, and not much about housing, monuments or defenses. The Roman poet Virgil (70 -- 19 BC) imagined early Carthage, when his legendary character Aeneas had arrived there:
"Aeneas found, where lately huts had been, marvelous buildings, gateways, cobbled ways, and din of wagons. There the Tyrians were hard at work: laying courses for walls, rolling up stones to build the citadel, while others picked out building sites and plowed a boundary furrow. Laws were being enacted, magistrates and a sacred senate chosen. Here men were dredging harbors, there they laid the deep foundations of a theatre, and quarried massive pillars... ``
The two inner harbours (called in Punic cothon) were located in the southeast; one being commercial, and the other for war. Their definite functions are not entirely known, probably for the construction, outfitting, or repair of ships, perhaps also loading and unloading cargo. Larger anchorages existed to the north and south of the city. North and west of the cothon were located several industrial areas, e.g., metalworking and pottery (e.g., for amphora), which could serve both inner harbours, and ships anchored to the south of the city.
Our knowledge of the Byrsa, the citadel area to the north, is patchy considering its importance. Its prominent heights were the scene of fierce combat during the fiery destruction of the city in 146 BC. The Byrsa was the reported site of the Temple of Eshmun (the healing god), at the top of a stairway of sixty steps. A temple of Tanit (the city 's queen goddess) was likely situated on the slope of the ' lesser Byrsa ' immediately to the east, which runs down toward the sea. Also situated on the Byrsa were luxury homes.
South of the citadel, near the cothon (the inner harbours) was the tophet, a special and very old cemetery, which when begun lay outside the city 's boundaries. Here the Salammbô was located, the Sanctuary of Tanit, not a temple but an enclosure for placing stone stelae. These were mostly short and upright, carved for funeral purposes. Evidence from here may indicate the occurrence of child sacrifice. Probably the tophet burial fields were "dedicated at an early date, perhaps by the first settlers. ''
Between the sea - filled cothon for shipping and the Byrsa heights lay the agora (Greek: "market ''), the city - state 's central marketplace for business and commerce. The agora was also an area of public squares and plazas, where the people might formally assemble, or gather for festivals. It was the site of religious shrines, and the location of whatever were the major municipal buildings of Carthage. Here beat the heart of civic life. In this district of Carthage, most probably, the ruling suffets presided, the council of elders convened, the tribunal of the 104 met, and justice was dispensed at trials in the open air.
Early residential districts wrapped around the Byrsa from the south to the north east. Houses usually were whitewashed and blank to the street, but within were courtyards open to the sky. In these neighborhoods multistory construction later became common, some up to six stories tall according to an ancient Greek author. Several architectural floorplans of homes have been revealed by recent excavations, as well as the general layout of several city blocks. Stone stairs were set in the streets, and drainage was planned, e.g., in the form of soakways leaching into the sandy soil. Along the Byrsa 's southern slope were located not only fine old homes, but also many of the earliest grave - sites, juxtaposed in small areas, interspersed with daily life.
Artisan workshops were located in the city at sites north and west of the harbours. The location of three metal workshops (implied from iron slag and other vestiges of such activity) were found adjacent to the naval and commercial harbours, and another two were further up the hill toward the Byrsa citadel. Sites of pottery kilns have been identified, between the agora and the harbours, and further north. Earthenware often used Greek models. A fuller 's shop for preparing woolen cloth (shrink and thicken) was evidently situated further to the west and south, then by the edge of the city. Carthage also produced objects of rare refinement. During the 4th and 3rd centuries, the sculptures of the sarcophagi became works of art. "Bronze engraving and stone - carving reached their zenith. ''
The elevation of the land at the promontory on the seashore to the north - east (now called Sidi Bou Saïd), was twice as high above sea level as that at the Byrsa (100 m and 50 m). In between runs a ridge, several times reaching 50 m; it continues northwestward along the seashore, and forms the edge of a plateau - like area between the Byrsa and the sea. Newer urban developments lay here in these northern districts.
Surrounding Carthage were walls "of great strength '' said in places to rise above 13 m, being nearly 10 m thick, according to ancient authors. To the west, three parallel walls were built. The walls altogether ran for about 33 kilometres (21 miles) to encircle the city. The heights of the Byrsa were additionally fortified; this area being the last to succumb to the Romans in 146 BC. Originally the Romans had landed their army on the strip of land extending southward from the city.
Greek cities contested with Carthage for the Western Mediterranean culminating in the Sicilian Wars and the Pyrrhic War over Sicily, while the Romans fought three wars against Carthage, known as the Punic Wars.
The Carthaginian republic was one of the longest - lived and largest states in the ancient Mediterranean. Reports relay several wars with Syracuse and finally, Rome, which eventually resulted in the defeat and destruction of Carthage in the Third Punic War. The Carthaginians were Phoenician settlers originating in the Mediterranean coast of the Near East. They spoke Canaanite, a Semitic language, and followed a local variety of the ancient Canaanite religion.
The fall of Carthage came at the end of the Third Punic War in 146 BC at the Battle of Carthage. Despite initial devastating Roman naval losses and Rome 's recovery from the brink of defeat after the terror of a 15 - year occupation of much of Italy by Hannibal, the end of the series of wars resulted in the end of Carthaginian power and the complete destruction of the city by Scipio Aemilianus. The Romans pulled the Phoenician warships out into the harbor and burned them before the city, and went from house to house, capturing and enslaving the people. About 50,000 Carthaginians were sold into slavery. The city was set ablaze and razed to the ground, leaving only ruins and rubble. After the fall of Carthage, Rome annexed the majority of the Carthaginian colonies, including other North African locations such as Volubilis, Lixus, Chellah, and Mogador.
The legend that the city was sown with salt remains widely accepted despite lacking evidence among ancient historical accounts; R.T. Ridley found that the earliest such claim is attributed to B.L. Hallward 's chapter in Cambridge Ancient History, published in 1930. Ridley contended that Hallward 's claim may have gained traction due to historical evidence of other salted - earth instances such as Abimelech 's salting of Shechem in Judges 9: 45. Many historians have since issued retractions acknowledging Ridley. B.H. Warmington similarly admitted fault in repeating Hallward 's error, but posited that the legend precedes 1930 and inspired repetitions of the practice. For this reason, Warmington suggested that the symbolic value of the legend is so great and enduring that it mitigates the deficiency of concrete evidence that it happened and is useful to understand how subsequent historical narratives have been framed.
Starting in the 19th century, various texts claim that the Roman general Scipio Aemilianus Africanus plowed over and sowed the city of Carthage with salt after defeating it in the Third Punic War (146 BC), sacking it, and forcing the survivors into slavery. However, no ancient sources exist documenting the salting itself. The Carthage story is a later invention, probably modeled on the story of Shechem. The ritual of symbolically drawing a plow over the site of a city is, however, mentioned in ancient sources, though not in reference to Carthage specifically. When Pope Boniface VIII destroyed Palestrina in 1299, he issued a papal bull that it be plowed "following the old example of Carthage in Africa '', and also salted. "I have run the plough over it, like the ancient Carthage of Africa, and I have had salt sown upon it... ''
When Carthage fell, its nearby rival Utica, a Roman ally, was made capital of the region and replaced Carthage as the leading center of Punic trade and leadership. It had the advantageous position of being situated on the outlet of the Medjerda River, Tunisia 's only river that flowed all year long. However, grain cultivation in the Tunisian mountains caused large amounts of silt to erode into the river. This silt accumulated in the harbor until it became useless, and Rome was forced to rebuild Carthage.
By 122 BC, Gaius Gracchus founded a short - lived colony, called Colonia Iunonia, after the Latin name for the Punic goddess Tanit, Iuno Caelestis. The purpose was to obtain arable lands for impoverished farmers. The Senate abolished the colony some time later, to undermine Gracchus ' power.
After this ill - fated attempt, a new city of Carthage was built on the same land by Julius Caesar in the period from 49 to 44 BC, and by the first century, it had grown to be the second - largest city in the western half of the Roman Empire, with a peak population of 500,000. It was the center of the province of Africa, which was a major breadbasket of the Empire. Among its major monuments was an amphitheater.
Carthage also became a center of early Christianity (see Carthage (episcopal see)). In the first of a string of rather poorly reported councils at Carthage a few years later, no fewer than 70 bishops attended. Tertullian later broke with the mainstream that was increasingly represented in the West by the primacy of the Bishop of Rome, but a more serious rift among Christians was the Donatist controversy, which Augustine of Hippo spent much time and parchment arguing against. At the Council of Carthage (397), the biblical canon for the western Church was confirmed.
The political fallout from the deep disaffection of African Christians is supposedly a crucial factor in the ease with which Carthage and the other centers were captured in the fifth century by Genseric, king of the Vandals, who defeated the Roman general Bonifacius and made the city the capital of the Vandal Kingdom. Genseric was considered a heretic, too, an Arian, and though Arians commonly despised Catholic Christians, a mere promise of toleration might have caused the city 's population to accept him.
After a failed attempt to recapture the city in the fifth century, the Eastern Roman Empire finally subdued the Vandals in the Vandalic War in 533 -- 534. Thereafter, the city became the seat of the praetorian prefecture of Africa, which was made into an exarchate during the emperor Maurice 's reign, as was Ravenna on the Italian Peninsula. These two exarchates were the western bulwarks of the Byzantine Empire, all that remained of its power in the West. In the early seventh century Heraclius the Elder, the exarch of Carthage, overthrew the Byzantine emperor Phocas, whereupon his son Heraclius succeeded to the imperial throne.
The Roman Exarchate of Africa was not able to withstand the seventh - century Muslim conquest of the Maghreb. The Umayyad Caliphate under Abd al - Malik ibn Marwan in 686 sent a force led by Zuhayr ibn Qais, who won a battle over the Romans and Berbers led by King Kusaila of the Kingdom of Altava on the plain of Kairouan, but he could not follow that up. In 695, Hasan ibn al - Nu'man captured Carthage and advanced into the Atlas Mountains. An imperial fleet arrived and retook Carthage, but in 698, Hasan ibn al - Nu'man returned and defeated Emperor Tiberios III at the 698 Battle of Carthage. Roman imperial forces withdrew from all of Africa except Ceuta. Roman Carthage was destroyed -- its walls torn down, its water supply cut off, and its harbors made unusable. The destruction of the Exarchate of Africa marked a permanent end to the Byzantine Empire 's influence in the region.
The Medina of Tunis, originally a Berber settlement, was established as the new regional center under the Umayyad Caliphate in the early 8th century. Under the Aghlabids, the people of Tunis revolted numerous times, but the city profited from economic improvements and quickly became the second most important in the kingdom. It was briefly the national capital, from the end of the reign of Ibrahim II in 902, until 909, when the Shi'ite Berbers took over Ifriqiya and founded the Fatimid Caliphate.
Carthage remained a residential see until the high medieval period, mentioned in two letters of Pope Leo IX dated 1053, written in reply to consultations regarding a conflict between the bishops of Carthage and Gummi. In each of the two letters, Pope Leo declares that, after the Bishop of Rome, the first archbishop and chief metropolitan of the whole of Africa is the bishop of Carthage. Later, an archbishop of Carthage named Cyriacus was imprisoned by the Arab rulers because of an accusation by some Christians. Pope Gregory VII wrote him a letter of consolation, repeating the hopeful assurances of the primacy of the Church of Carthage, "whether the Church of Carthage should still lie desolate or rise again in glory ''. By 1076, Cyriacus was set free, but there was only one other bishop in the province. These are the last of whom there is mention in that period of the history of the see.
Carthage is some 15 kilometres (9.3 miles) east - northeast of Tunis; the settlements nearest to Carthage were the town of Sidi Bou Said to the north and the village of Le Kram to the south. Sidi Bou Said was a village which had grown around the tomb of the eponymous sufi saint (d. 1231), which had been developed into a town under Ottoman rule in the 18th century. Le Kram was developed in the late 19th century under French administration as a settlement close to the port of La Goulette.
In 1881, Tunisia became a French protectorate, and in the same year Charles Lavigerie, who was archbishop of Algiers, became apostolic administrator of the vicariate of Tunis. In the following year, Lavigerie became a cardinal. He "saw himself as the reviver of the ancient Christian Church of Africa, the Church of Cyprian of Carthage '', and, on 10 November 1884, was successful in his great ambition of having the metropolitan see of Carthage restored, with himself as its first archbishop. In line with the declaration of Pope Leo IX in 1053, Pope Leo XIII acknowledged the revived Archdiocese of Carthage as the primatial see of Africa and Lavigerie as primate.
The Acropolium of Carthage (Saint Louis Cathedral of Carthage) was erected on Byrsa hill in 1884.
The Danish consul Christian Tuxen Falbe conducted a first survey of the topography of the archaeological site (published in 1833). Antiquarian interest was intensified following the publication of Flaubert 's Salammbô in 1858. Charles Ernest Beulé performed some preliminary excavations of Roman remains on Byrsa hill in 1860. A more systematic survey of both Punic and Roman - era remains is due to Alfred Louis Delattre, who was sent to Tunis by cardinal Charles Lavigerie in 1875 on both an apostolic and an archaeological mission. Audollent (1901, p. 203) cites Delattre and Lavigerie to the effect that in the 1880s, locals still knew the area of the ancient city under the name of Cartagenna (i.e. reflecting the Latin n - stem Carthāgine).
Auguste Audollent divides the area of Roman Carthage into four quarters, Cartagenna, Dermèche, Byrsa and La Malga. Cartagenna and Dermèche correspond with the lower city, including the site of Punic Carthage; Byrsa is associated with the upper city, which in Punic times was a walled citadel above the harbour; and La Malga is linked with the more remote parts of the upper city in Roman times.
French - led excavations at Carthage began in 1921, and from 1923 reported finds of a large quantity of urns containing a mixture of animal and children 's bones. René Dussaud identified a 4th - century BC stela found in Carthage as depicting a child sacrifice.
A temple at Amman (1400 -- 1250 BC) excavated and reported upon by J.B. Hennessy in 1966, shows the possibility of animal and human sacrifice by fire. While evidence of child sacrifice in Canaan was the object of academic disagreement, with some scholars arguing that merely children 's cemeteries had been unearthed in Carthage, the mixture of children 's with animal bones as well as associated epigraphic evidence involving mention of mlk led to a consensus that, at least in Carthage, child sacrifice was indeed common practice.
In 2016, an ancient Carthaginian individual, who was excavated from a Punic tomb in Byrsa Hill, was found to belong to the rare U5b2c1 maternal haplogroup. The Young Man of Byrsa specimen dates from the late 6th century BCE, and his lineage is believed to represent early gene flow from Iberia to the Maghreb.
In 1920, the first seaplane base was built on the Lake of Tunis for the seaplanes of Compagnie Aéronavale. The Tunis Airfield opened in 1938, serving around 5,800 passengers annually on the Paris - Tunis route. During World War II, the airport was used by the United States Army Air Force Twelfth Air Force as a headquarters and command control base for the Italian Campaign of 1943. Construction on the Tunis - Carthage Airport, which was fully funded by France, began in 1944, and in 1948 the airport became the main hub for Tunisair.
In the 1950s the Lycée Français de Carthage was established to serve French families in Carthage. In 1961 it was given to the Tunisian government as part of the Independence of Tunisia, so the nearby Collège Maurice Cailloux in La Marsa, previously an annex of the Lycée Français de Carthage, was renamed to the Lycée Français de La Marsa and began serving the lycée level. It is currently the Lycée Gustave Flaubert.
After Tunisian independence in 1956, the Tunis conurbation gradually extended around the airport, and Carthage (قرطاج Qarṭāj) is now a suburb of Tunis, covering the area between Sidi Bou Said and Le Kram. Its population as of January 2013 was estimated at 21,276; the suburb attracts mostly wealthier residents. Though not the capital, Carthage tends to be the political pole, a "place of emblematic power '' according to Sophie Bessis, leaving to Tunis the economic and administrative roles. The Carthage Palace (the Tunisian presidential palace) is located on the coast.
The suburb has six train stations of the TGM line between Le Kram and Sidi Bou Said: Carthage Salammbo (named for Salambo, the fictional daughter of Hamilcar), Carthage Byrsa (named for Byrsa hill), Carthage Dermech (Dermèche), Carthage Hannibal (named for Hannibal), Carthage Présidence (named for the Presidential Palace) and Carthage Amilcar (named for Hamilcar).
The merchants of Carthage were in part heirs of the Mediterranean trade developed by Phoenicia, and so also heirs of the rivalry with Greek merchants. Business activity was accordingly both stimulated and challenged. Cyprus had been an early site of such commercial contests. The Phoenicians then had ventured into the western Mediterranean, founding trading posts, including Utica and Carthage. The Greeks followed, entering the western seas where the commercial rivalry continued. Eventually it would lead, especially in Sicily, to several centuries of intermittent war. Although Greek - made merchandise was generally considered superior in design, Carthage also produced trade goods in abundance. That Carthage came to function as a manufacturing colossus was shown during the Third Punic War with Rome. Carthage, which had previously disarmed, then was made to face the fatal Roman siege. The city "suddenly organised the manufacture of arms '' with great skill and effectiveness. According to Strabo (63 BC -- AD 21) in his Geographica:
"(Carthage) each day produced one hundred and forty finished shields, three hundred swords, five hundred spears, and one thousand missiles for the catapults... Furthermore, (Carthage although surrounded by the Romans) built one hundred and twenty decked ships in two months... for old timber had been stored away in readiness, and a large number of skilled workmen, maintained at public expense. ''
The textiles industry in Carthage probably started in private homes, but the existence of professional weavers indicates that a sort of factory system later developed. Products included embroidery, carpets, and use of the purple murex dye (for which the Carthaginian isle of Djerba was famous). Metalworkers developed specialized skills, i.e., making various weapons for the armed forces, as well as domestic articles, such as knives, forks, scissors, mirrors, and razors (all articles found in tombs). Artwork in metals included vases and lamps in bronze, also bowls, and plates. Other products came from such crafts as the potters, the glassmakers, and the goldsmiths. Inscriptions on votive stele indicate that many were not slaves but ' free citizens '.
Phoenician and Punic merchant ventures were often run as a family enterprise, putting to work its members and its subordinate clients. Such family - run businesses might perform a variety of tasks: (a) own and maintain the ships, providing the captain and crew; (b) do the negotiations overseas, either by barter or buy and sell, of (i) their own manufactured commodities and trade goods, and (ii) native products (metals, foodstuffs, etc.) to carry and trade elsewhere; and (c) send their agents to stay at distant outposts in order to make lasting local contacts, and later to establish a warehouse of shipped goods for exchange, and eventually perhaps a settlement. Over generations, such activity might result in the creation of a wide - ranging network of trading operations. Ancillary would be the growth of reciprocity between different family firms, foreign and domestic.
State protection was extended to its sea traders by the Phoenician city of Tyre and later likewise by the daughter city - state of Carthage. Stéphane Gsell, the well - regarded French historian of ancient North Africa, summarized the major principles guiding the civic rulers of Carthage with regard to its policies for trade and commerce.
Both the Phoenicians and the Carthaginians were well known in antiquity for their secrecy in general, and especially pertaining to commercial contacts and trade routes. Both cultures excelled in commercial dealings. Strabo (63 BC -- AD 21), the Greek geographer, wrote that before its fall (in 146 BC) Carthage enjoyed a population of 700,000, and directed an alliance of 300 cities. The Greek historian Polybius (c. 203 -- 120) referred to Carthage as "the wealthiest city in the world ''.
A "suffet '' (possibly two) was elected by the citizens, and held office with no military power for a one - year term. Carthaginian generals marshalled mercenary armies and were separately elected. From about 550 to 450 the Magonid family monopolized the top military position; later the Barcid family acted similarly. Eventually it came to be that, after a war, the commanding general had to testify justifying his actions before a court of 104 judges.
Aristotle (384 -- 322) discusses Carthage in his work, Politica; he begins: "The Carthaginians are also considered to have an excellent form of government. '' He briefly describes the city as a "mixed constitution '', a political arrangement with cohabiting elements of monarchy, aristocracy, and democracy, i.e., a king (Gk: basileus), a council of elders (Gk: gerusia), and the people (Gk: demos). Later Polybius of Megalopolis (c. 204 -- 122, Greek) in his Histories would describe the Roman Republic in more detail as a mixed constitution in which the Consuls were the monarchy, the Senate the aristocracy, and the Assemblies the democracy.
Evidently Carthage also had an institution of elders who advised the Suffets, similar to a Greek gerusia or the Roman Senate. We do not have a Punic name for this body. At times its members would travel with an army general on campaign. Members also formed permanent committees. The institution had several hundred members drawn from the wealthiest class who held office for life. Vacancies were probably filled by recruitment from among the elite, i.e., by co-option. From among its members were selected the 104 Judges mentioned above. Later the 104 would come to evaluate not only army generals but other office holders as well. Aristotle regarded the 104 as most important; he compared it to the ephorate of Sparta with regard to control over security. In Hannibal 's time, such a Judge held office for life. At some stage there also came to be independent self - perpetuating boards of five who filled vacancies and supervised (non-military) government administration.
Popular assemblies also existed at Carthage. When deadlocked the Suffets and the quasi-senatorial institution of elders might request the assembly to vote; also, assembly votes were requested in very crucial matters in order to achieve political consensus and popular coherence. The assembly members had no legal wealth or birth qualification. How its members were selected is unknown, e.g., whether by festival group or urban ward or another method.
The Greeks were favourably impressed by the constitution of Carthage; Aristotle had a separate study of it made which unfortunately is lost. In his Politica he states: "The government of Carthage is oligarchical, but they successfully escape the evils of oligarchy by enriching one portion of the people after another by sending them to their colonies. '' "(T)heir policy is to send some (poorer citizens) to their dependent towns, where they grow rich. '' Yet Aristotle continues, "(I)f any misfortune occurred, and the bulk of the subjects revolted, there would be no way of restoring peace by legal means. '' Aristotle remarked also:
"Many of the Carthaginian institutions are excellent. The superiority of their constitution is proved by the fact that the common people remain loyal to the constitution; the Carthaginians have never had any rebellion worth speaking of, and have never been under the rule of a tyrant. ''
Here one may remember that the city - state of Carthage, whose citizens were mainly Libyphoenicians (of Phoenician ancestry born in Africa), dominated and exploited an agricultural countryside composed mainly of native Berber sharecroppers and farmworkers, whose affiliations to Carthage were open to divergent possibilities. Beyond these more settled Berbers and the Punic farming towns and rural manors, lived the independent Berber tribes, who were mostly pastoralists.
In the brief, uneven review of government at Carthage found in his Politica Aristotle mentions several faults. Thus, "that the same person should hold many offices, which is a favorite practice among the Carthaginians. '' Aristotle disapproves, mentioning the flute - player and the shoemaker. Also, that "magistrates should be chosen not only for their merit but for their wealth. '' Aristotle 's opinion is that focus on pursuit of wealth will lead to oligarchy and its evils.
"(S) urely it is a bad thing that the greatest offices... should be bought. The law which allows this abuse makes wealth of more account than virtue, and the whole state becomes avaricious. For, whenever the chiefs of the state deem anything honorable, the other citizens are sure to follow their example; and, where virtue has not the first place, their aristocracy can not be firmly established. ''
In Carthage the people seemed politically satisfied and submissive, according to the historian Warmington. They in their assemblies only rarely exercised the few opportunities given them to assent to state decisions. Popular influence over government appears not to have been an issue at Carthage. Being a commercial republic fielding a mercenary army, the people were not conscripted for military service, an experience which can foster the feel for popular political action. But perhaps this misunderstands the society; perhaps the people, whose values were based on small - group loyalty, felt themselves sufficiently connected to their city 's leadership by the very integrity of the person - to - person linkage within their social fabric. Carthage was very stable; there were few openings for tyrants. Only after defeat by Rome devastated Punic imperial ambitions did the people of Carthage seem to question their governance and to show interest in political reform.
In 196, following the Second Punic War (218 -- 201), Hannibal Barca, still greatly admired as a Barcid military leader, was elected suffet. When his reforms were blocked by a financial official about to become a judge for life, Hannibal rallied the populace against the 104 judges. He proposed a one - year term for the 104, as part of a major civic overhaul. Additionally, the reform included a restructuring of the city 's revenues, and the fostering of trade and agriculture. The changes rather quickly resulted in a noticeable increase in prosperity. Yet his incorrigible political opponents cravenly went to Rome, to charge Hannibal with conspiracy, namely, plotting war against Rome in league with Antiochus the Hellenic ruler of Syria. Although the Roman Scipio Africanus resisted such a manoeuvre, eventually intervention by Rome forced Hannibal to leave Carthage. Thus, corrupt city officials effectively blocked Hannibal Barca in his efforts to reform the government of Carthage.
Mago (6th century) was King of Carthage: head of state, war leader, and religious figurehead. His family was considered to possess a sacred quality. Mago 's office was somewhat similar to that of a pharaoh, but although kept within a family it was not hereditary; it was limited by legal consent. Picard, accordingly, believes that the council of elders and the popular assembly are late institutions. Carthage was founded by the king of Tyre who had a royal monopoly on this trading venture. Thus it was the royal authority stemming from this traditional source of power that the King of Carthage possessed. Later, as other Phoenician ship companies entered the trading region, and so associated with the city - state, the King of Carthage had to keep order among a rich variety of powerful merchants in their negotiations among themselves and over risky commerce across the Mediterranean. Under these circumstances, the office of king began to be transformed. Yet it was not until the aristocrats of Carthage became wealthy owners of agricultural lands in Africa that a council of elders was institutionalized at Carthage.
Most ancient literature concerning Carthage comes from Greek and Roman sources as Carthage 's own documents were destroyed by the Romans. Apart from inscriptions, hardly any Punic literature has survived, and none in its own language and script. A brief catalogue would include:
"(F) rom the Greek author Plutarch ((c. 46 -- c. 120)) we learn of the ' sacred books ' in Punic safeguarded by the city 's temples. Few Punic texts survive, however. '' Once "the City Archives, the Annals, and the scribal lists of suffets '' existed, but evidently these were destroyed in the horrific fires during the Roman capture of the city in 146 BC.
Yet some Punic books (Latin: libri punici) from the libraries of Carthage reportedly did survive the fires. These works were apparently given by Roman authorities to the newly augmented Berber rulers. Over a century after the fall of Carthage, the Roman politician - turned - author Gaius Sallustius Crispus or Sallust (86 -- 34) reported his having seen volumes written in Punic, which books were said to be once possessed by the Berber king, Hiempsal II (r. 88 -- 81). By way of Berber informants and Punic translators, Sallust had used these surviving books to write his brief sketch of Berber affairs.
Probably some of Hiempsal II 's libri punici, that had escaped the fires that consumed Carthage in 146 BC, wound up later in the large royal library of his grandson Juba II (r. 25 BC - AD 24). Juba II not only was a Berber king, and husband of Cleopatra 's daughter, but also a scholar and author in Greek of no less than nine works. He wrote for the Mediterranean - wide audience then enjoying classical literature. The libri punici inherited from his grandfather surely became useful to him when composing his Libyka, a work on North Africa written in Greek. Unfortunately, only fragments of Libyka survive, mostly from quotations made by other ancient authors. It may have been Juba II who ' discovered ' the five - centuries - old ' log book ' of Hanno the Navigator, called the Periplus, among library documents saved from fallen Carthage.
In the end, however, most Punic writings that survived the destruction of Carthage "did not escape the immense wreckage in which so many of Antiquity 's literary works perished. '' Accordingly, the long and continuous interactions between Punic citizens of Carthage and the Berber communities that surrounded the city have no local historian. Their political arrangements and periodic crises, their economic and work life, the cultural ties and social relations established and nourished (infrequently as kin), are not known to us directly from ancient Punic authors in written accounts. Neither side has left us their stories about life in Punic - era Carthage.
Regarding Phoenician writings, few remain and these seldom refer to Carthage. The more ancient and most informative are cuneiform tablets, ca. 1600 -- 1185, from ancient Ugarit, located to the north of Phoenicia on the Syrian coast; it was a Canaanite city politically affiliated with the Hittites. The clay tablets tell of myths, epics, rituals, medical and administrative matters, and also correspondence. The highly valued works of Sanchuniathon, an ancient priest of Beirut, who reportedly wrote on Phoenician religion and the origins of civilization, are themselves completely lost, but some little content endures twice removed. Sanchuniathon was said to have lived in the 11th century, which is considered doubtful. Much later a Phoenician History by Philo of Byblos (64 -- 141) reportedly existed, written in Greek, but only fragments of this work survive. An explanation proffered for why so few Phoenician works endured: early on (11th century) archives and records began to be kept on papyrus, which does not long survive in a moist coastal climate. Also, both Phoenicians and Carthaginians were well known for their secrecy.
Thus, of their ancient writings we have little of major interest left to us by Carthage, or by Phoenicia the country of origin of the city founders. "Of the various Phoenician and Punic compositions alluded to by the ancient classical authors, not a single work or even fragment has survived in its original idiom. '' "Indeed, not a single Phoenician manuscript has survived in the original (language) or in translation. '' We can not therefore access directly the line of thought or the contour of their worldview as expressed in their own words, in their own voice. Ironically, it was the Phoenicians who "invented or at least perfected and transmitted a form of writing (the alphabet) that has influenced dozens of cultures including our own. ''
As noted, the celebrated ancient work on agriculture written by Mago of Carthage survives only via quotations in Latin from several later Roman works.
|
sa re ga ma pa 2017 grand jury names with images | Sa Re Ga Ma Pa L'il Champs 2017 - wikipedia
Sa Re Ga Ma Pa L'il Champs 2017 (stylised as Saregamapa Li'l Champs) is a singing competition television series, which premiered on February 25, 2017 on Zee TV. The series airs every Saturday and Sunday nights at 9 pm. In a ' Back to School ' format that lends itself to a spirited atmosphere of camaraderie and fun, the L'il Champs will represent their schools.
For the first time in L'il Champs, they have incorporated the format created for Sa Re Ga Ma Pa 2016. There is a 30 - member Grand Jury which grades the contestants and the average percentage of their scores is displayed. Himesh Reshammiya, Neha Kakkar & Javed Ali are the mentors in the show, whereas Aditya Narayan is the host.
Children aged 5 - 14 years participate in a singing competition. In the auditions round, they have 100 seconds to impress the three judges and the 30 - members of the grand jury. If two of the three judges say YES and they secure at least 50 per cent of the support of the Grand Jury, then the contestant progresses to the next round.
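The audition rule described above is simply a conjunction of two thresholds. As a minimal illustrative sketch only (not an official scoring implementation; the function and parameter names here are hypothetical), it could be expressed as:

```python
def advances_to_next_round(judge_yes_votes: int, jury_yes_votes: int, jury_size: int = 30) -> bool:
    """Hypothetical sketch of the stated rule: at least two of the three
    judges say YES and at least 50 per cent of the Grand Jury approves."""
    jury_support_percent = 100.0 * jury_yes_votes / jury_size
    return judge_yes_votes >= 2 and jury_support_percent >= 50.0

# Example: 2 judges say YES and 18 of the 30 jury members approve (60 per cent).
print(advances_to_next_round(judge_yes_votes=2, jury_yes_votes=18))  # True
```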
In the Gala round, the judges select the ' Student of the Week ' who is the best performer according to them. The contestant gets to sit on a ' flying sofa ', which indicates that the contestant is safe from next week 's elimination.
The audience vote will come in later in the competition. The contestant with the highest votes in the final round will win the competition.
The show has a panel of 30 jury members who are experts from the music industry. They closely assess the contestants from the audition stage wherein they help the mentors in the selection process and are present in the show during the studio rounds too.
Some of them are:
|
who says what a piece of work is man | What a piece of work is a man - wikipedia
"What a piece of work is man! '' is a phrase within a monologue by Hamlet in William Shakespeare 's eponymous play. Hamlet is reflecting, at first admiringly, and then despairingly, on the human condition.
The monologue, spoken in the play by Prince Hamlet to Rosencrantz and Guildenstern in Act II, Scene 2, follows in its entirety. Rather than appearing in blank verse, the typical mode of composition of Shakespeare 's plays, the speech appears in straight prose:
I will tell you why; so shall my anticipation prevent your discovery, and your secrecy to the King and queene: moult no feather. I have of late, (but wherefore I know not) lost all my mirth, forgone all custom of exercises; and indeed, it goes so heavily with my disposition; that this goodly frame the earth, seems to me a sterile promontory; this most excellent canopy the air, look you, this brave o'er hanging firmament, this majestical roof, fretted with golden fire: why, it appeareth no other thing to me, than a foul and pestilent congregation of vapours. What a piece of work is man, How noble in reason, how infinite in faculty, In form and moving how express and admirable, In action how like an Angel, In apprehension how like a god, The beauty of the world, The paragon of animals. And yet to me, what is this quintessence of dust? Man delights not me; no, nor Woman neither; though by your smiling you seem to say so.
Hamlet is saying that although humans may appear to think and act "nobly '' they are really essentially "dust ''. Hamlet is expressing his melancholy to his old friends over the difference between the best that men aspire to be, and how they actually behave; the great divide that depresses him.
The speech was fully omitted from Nicholas Ling 's 1603 First Quarto, which reads simply:
Yes faith, this great world you see contents me not, No nor the spangled heauens, nor earth, nor sea, No nor Man that is so glorious a creature, Contents not me, no nor woman too, though you laugh.
This version has been argued to have been a bad quarto, a tourbook copy, or an initial draft. By the 1604 Second Quarto, the speech is essentially present but punctuated differently:
What a piece of work is a man, how noble in reason, how infinite in faculties, in form and moving, how express and admirable in action, how like an angel in apprehension, how like a god!
Then, by the 1623 First Folio, it appeared as:
What a piece of worke is a man! how Noble in Reason? how infinite in faculty? in forme and mouing how expresse and admirable? in Action, how like an Angel? in apprehension, how like a God?...
J. Dover Wilson, in his notes in the New Shakespeare edition, observed that the Folio text "involves two grave difficulties '', namely that according to Elizabethan thought angels could apprehend but not act, making "in action how like an angel '' nonsensical, and that "express '' (which as an adjective means "direct and purposive '') makes sense applied to "action '', but goes very awkwardly with "form and moving ''.
These difficulties are remedied if we read it thus:
What a piece of worke is a man! how Noble in Reason? how infinite in faculty, in forme, and mouing how expresse and admirable in Action, how like an Angel in apprehension, how like a God?
Scholars have pointed out this section 's similarities to lines written by Montaigne:
Qui luy a persuadé que ce branle admirable de la voute celeste, la lumiere eternelle de ces flambeaux roulans si fierement sur sa teste, les mouvemens espouventables de ceste mer infinie, soyent establis et se continuent tant de siecles, pour sa commodité et pour son service? Est - il possible de rien imaginer si ridicule, que ceste miserable et chetive creature, qui n'est pas seulement maistresse de soy, exposée aux offences de toutes choses, se die maistresse et emperiere de l'univers?
Who have persuaded (man) that this admirable moving of heavens vaults, that the eternal light of these lampes so fiercely rowling over his head, that the horror - moving and continuall motion of this infinite vaste ocean were established, and continue so many ages for his commoditie and service? Is it possible to imagine any thing so ridiculous as this miserable and wretched creature, which is not so much as master of himselfe, exposed and subject to offences of all things, and yet dareth call himself Master and Emperor.
However, rather than being a direct influence on Shakespeare, Montaigne may have merely been reacting to the same general atmosphere of the time, making the source of these lines one of context rather than direct influence.
|
what does x and y mean in science | XY sex - determination system - wikipedia
The XY sex - determination system is the sex - determination system found in humans, most other mammals, some insects (Drosophila), some snakes, and some plants (Ginkgo). In this system, the sex of an individual is determined by a pair of sex chromosomes. Females typically have two of the same kind of sex chromosome (XX), and are called the homogametic sex. Males typically have two different kinds of sex chromosomes (XY), and are called the heterogametic sex. Exceptions to this are cases of XX males or XY females, or other syndromes.
The XY system contrasts in several ways with the ZW sex - determination system found in birds, some insects, many reptiles, and various other animals, in which the heterogametic sex is female. It had been thought for several decades that in all snakes sex was determined by the ZW system, but there had been observations of unexpected effects in the genetics of species in the families Boidae and Pythonidae; for example, parthenogenic reproduction produced only females rather than males, which is the opposite of what is to be expected in the ZW system. In the early years of the 21st century such observations prompted research that demonstrated that all pythons and boas so far investigated definitely have the XY system of sex determination.
A temperature - dependent sex determination system is found in some reptiles.
All animals have a set of DNA coding for genes present on chromosomes. In humans, most mammals, and some other species, two of the chromosomes, called the X chromosome and Y chromosome, code for sex. In these species, one or more genes are present on their Y chromosome that determine maleness. In this process, an X chromosome and a Y chromosome act to determine the sex of offspring, often due to genes located on the Y chromosome that code for maleness. Offspring have two sex chromosomes: an offspring with two X chromosomes will develop female characteristics, and an offspring with an X and a Y chromosome will develop male characteristics.
In humans, half of the spermatozoa carry an X chromosome and the other half a Y chromosome. A single gene (SRY) present on the Y chromosome acts as a signal to set the developmental pathway towards maleness. Presence of this gene starts off the process of virilization. This and other factors result in the sex differences in humans. The cells in females, with two X chromosomes, undergo X-inactivation, in which one of the two X chromosomes is inactivated. The inactivated X chromosome remains within a cell as a Barr body.
Humans, as well as some other organisms, can have a chromosomal arrangement that is contrary to their phenotypic sex; for example, XX males or XY females (see androgen insensitivity syndrome). Additionally, an abnormal number of sex chromosomes (aneuploidy) may be present, such as Turner 's syndrome, in which a single X chromosome is present, and Klinefelter 's syndrome, in which two X chromosomes and a Y chromosome are present, XYY syndrome and XXYY syndrome. Other less common chromosomal arrangements include: triple X syndrome, 48, XXXX, and 49, XXXXX.
In most mammals, sex is determined by presence of the Y chromosome. "Female '' is the default sex, due to the absence of the Y chromosome. In the 1930s, Alfred Jost determined that the presence of testosterone was required for Wolffian duct development in the male rabbit.
SRY is a sex - determining gene on the Y chromosome in the therians (placental mammals and marsupials). Non-human mammals use several genes on the Y chromosome. Not all male - specific genes are located on the Y chromosome. Other species (including most Drosophila species) use the presence of two X chromosomes to determine femaleness. One X chromosome gives putative maleness. The presence of Y chromosome genes is required for normal male development.
Birds and many insects have a similar system of sex determination (ZW sex - determination system), in which it is the females that are heterogametic (ZW), while males are homogametic (ZZ).
Many insects of the order Hymenoptera instead have a system (the haplo - diploid sex - determination system), where the males are haploid individuals (which have just one chromosome of each type), while the females are diploid (with chromosomes appearing in pairs). Some other insects have the X0 sex - determination system, where just one chromosome type appears in pairs for the female but alone in the males, while all other chromosomes appear in pairs in both sexes.
It has long been believed that the female form was the default template for the mammalian fetuses of both sexes. After the discovery of the testis - determining gene SRY, many scientists shifted to the theory that the genetic mechanism that causes a fetus to develop into a male form was initiated by the SRY gene, which was thought to be responsible for the production of testosterone and its overall effects on body and brain development. This perspective still shares the classical way of thinking: that in order to produce two sexes, nature has developed a default female pathway and an active pathway by which male genes would initiate the process of determining a male sex, as something that is developed in addition to and based on the default female form. However, in an interview for the Rediscovering Biology website, researcher Eric Vilain described how the paradigm changed since the discovery of the SRY gene:
For a long time we thought that SRY would activate a cascade of male genes. It turns out that the sex determination pathway is probably more complicated and SRY may in fact inhibit some anti-male genes.
The idea is instead of having a simplistic mechanism by which you have pro-male genes going all the way to make a male, in fact there is a solid balance between pro-male genes and anti-male genes and if there is a little too much of anti-male genes, there may be a female born and if there is a little too much of pro-male genes then there will be a male born.
We (are) entering this new era in molecular biology of sex determination where it 's a more subtle dosage of genes, some pro-males, some pro-females, some anti-males, some anti-females that all interplay with each other rather than a simple linear pathway of genes going one after the other, which makes it very fascinating but very complicated to study.
In mammals, including humans, the SRY gene is responsible for triggering the development of non-differentiated gonads into testes, rather than ovaries. However, there are cases in which testes can develop in the absence of an SRY gene (see sex reversal). In these cases, the SOX9 gene, involved in the development of testes, can induce their development without the aid of SRY. In the absence of SRY and SOX9, no testes can develop and the path is clear for the development of ovaries. Even so, the absence of the SRY gene or the silencing of the SOX9 gene are not enough to trigger sexual differentiation of a fetus in the female direction. A recent finding suggests that ovary development and maintenance is an active process, regulated by the expression of a "pro-female '' gene, FOXL2. In an interview for the TimesOnline edition, study co-author Robin Lovell - Badge explained the significance of the discovery:
We take it for granted that we maintain the sex we are born with, including whether we have testes or ovaries. But this work shows that the activity of a single gene, FOXL2, is all that prevents adult ovary cells turning into cells found in testes.
Looking into the genetic determinants of human sex can have wide - ranging consequences. Scientists have been studying different sex determination systems in fruit flies and animal models in an attempt to understand how the genetics of sexual differentiation can influence biological processes like reproduction, ageing and disease.
In humans and many other species of animals, the father determines the sex of the child. In the XY sex - determination system, the female - provided ovum contributes an X chromosome and the male - provided sperm contributes either an X chromosome or a Y chromosome, resulting in female (XX) or male (XY) offspring, respectively.
Hormone levels in the male parent affect the sex ratio of sperm in humans. Maternal influences also impact which sperm are more likely to achieve conception.
Human ova, like those of other mammals, are covered with a thick translucent layer called the zona pellucida, which the sperm must penetrate to fertilize the egg. Although it was once viewed simply as an impediment to fertilization, recent research indicates that the zona pellucida may instead function as a sophisticated biological security system that chemically controls the entry of the sperm into the egg and protects the fertilized egg from additional sperm.
Recent research indicates that human ova may produce a chemical which appears to attract sperm and influence their swimming motion. However, not all sperm are positively impacted; some appear to remain uninfluenced and some actually move away from the egg.
Maternal influences may also affect sex determination in such a way as to produce fraternal twins equally weighted between one male and one female.
The time at which insemination occurs during the oestrus cycle has been found to affect the sex ratio of the offspring of humans, cattle, hamsters, and other mammals. Hormonal and pH conditions within the female reproductive tract vary with time, and this affects the sex ratio of the sperm that reach the egg.
Sex - specific mortality of embryos also occurs.
Since ancient times, people believed that the sex of an infant was determined by how much heat a man 's sperm had during insemination. Aristotle wrote:
... the semen of the male differs from the corresponding secretion of the female in that it contains a principle within itself of such a kind as to set up movements also in the embryo and to concoct thoroughly the ultimate nourishment, whereas the secretion of the female contains material alone. If, then, the male element prevails it draws the female element into itself, but if it is prevailed over it changes into the opposite or is destroyed.
Aristotle claimed that the male principle was the driver behind sex determination, such that if the male principle was insufficiently expressed during reproduction, the fetus would develop as a female.
Nettie Stevens and Edmund Beecher Wilson are credited with independently discovering, in 1905, the chromosomal XY sex - determination system, i.e. the fact that males have XY sex chromosomes and females have XX sex chromosomes.
The first clues to the existence of a factor that determines the development of testis in mammals came from experiments carried out by Alfred Jost, who castrated embryonic rabbits in utero and noticed that they all developed as female.
In 1959, C.E. Ford and his team, in the wake of Jost 's experiments, discovered that the Y chromosome was needed for a fetus to develop as male when they examined patients with Turner 's syndrome, who grew up as phenotypic females, and found them to be X0 (hemizygous for X and no Y). At the same time, Jacob & Strong described a case of a patient with Klinefelter syndrome (XXY), which implicated the presence of a Y chromosome in development of maleness.
All these observations led to a consensus that a dominant gene that determines testis development (TDF) must exist on the human Y chromosome. The search for this testis - determining factor (TDF) led a team of scientists in 1990 to discover a region of the Y chromosome that is necessary for male sex determination, which was named SRY (sex - determining region of the Y chromosome).
|
where is most of the water in the body located | Body water - wikipedia
In physiology, body water is the water content of an animal body that is contained in the tissues, the blood, the bones and elsewhere. The percentages of body water contained in various fluid compartments add up to total body water (TBW). This water makes up a significant fraction of the human body, both by weight and by volume. Ensuring the right amount of body water is part of fluid balance, an aspect of homeostasis.
By weight, the average human adult male is approximately 60 % water and the average adult female is approximately 50 %. There can be considerable variation in body water percentage based on a number of factors like age, health, weight, and sex. In a large study of adults of all ages and both sexes, the adult human body averaged ~ 65 % water. However, this varied substantially by age, sex, and adiposity (amount of fat in body composition). The figure for water fraction by weight in this sample was found to be 58 ± 8 % water for males and 48 ± 6 % for females. The body water constitutes as much as 73 % of the body weight of a newborn infant, whereas some obese people are as little as 45 % water by weight. This is because fat tissue does not retain water as well as lean tissue. These statistical averages will vary with factors such as type of population, age of people sampled, number of people sampled, and methodology. So there is not, and can not be, a figure that is exactly the same for all people, for this or any other physiological measure.
Most animal body water is contained in various body fluids. These include intracellular fluid; extracellular fluid; plasma; interstitial fluid; and transcellular fluid. Water is also contained inside organs, in gastrointestinal, cerebrospinal, peritoneal, and ocular fluids. Adipose tissue contains about 10 % water, while muscle tissue contains about 75 %.
In Netter 's Atlas of Human Physiology, body water is broken down into the following compartments:
Total body water can be determined using Flowing afterglow mass spectrometry (FA - MS) measurement of deuterium abundance in breath samples from individuals. A known dose of deuterated water (heavy water, D₂O) is ingested and allowed to equilibrate within the body water. The FA - MS instrument then measures the deuterium - to - hydrogen (D: H) ratio in the exhaled breath water vapour. The total body water is then accurately measured from the increase in breath deuterium content in relation to the volume of D₂O ingested.
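The calculation behind such a dilution measurement is simple in principle: the ingested deuterium is diluted by the whole body water pool, so the size of the pool can be recovered from how little the breath D: H ratio rises. The Python sketch below illustrates that arithmetic; the function name, the example numbers and the ~ 4 % correction for deuterium exchanging with non-aqueous hydrogen are illustrative assumptions rather than values taken from this article, and real protocols apply further calibration.

MOLAR_MASS_WATER = 18.015  # g/mol
MOLAR_MASS_D2O = 20.028    # g/mol

def total_body_water_kg(dose_g, baseline_ratio, post_dose_ratio, exchange_correction=1.04):
    """Estimate total body water from a D2O dilution measurement.

    dose_g              -- mass of D2O ingested, in grams
    baseline_ratio      -- D:H ratio in breath water before dosing
    post_dose_ratio     -- D:H ratio after the dose has equilibrated
    exchange_correction -- assumed ~4 % correction for deuterium that
                           exchanges with non-aqueous hydrogen
    """
    # Each D2O molecule carries two deuterium atoms.
    d_atoms_added = 2 * dose_g / MOLAR_MASS_D2O
    # Rise in deuterium abundance relative to hydrogen (small-ratio approximation).
    delta = post_dose_ratio - baseline_ratio
    if delta <= 0:
        raise ValueError("post-dose enrichment must exceed baseline")
    # The hydrogen pool that diluted the dose; each water molecule has two H atoms.
    hydrogen_atoms = d_atoms_added / delta
    water_moles = hydrogen_atoms / 2
    water_kg = water_moles * MOLAR_MASS_WATER / 1000
    return water_kg / exchange_correction

# Example: a 30 g dose raising the breath D:H ratio from ~1.56e-4 to ~8.3e-4
print(round(total_body_water_kg(30.0, 1.557e-4, 8.3e-4), 1))  # ~38.5 kg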
Different substances can be used to measure different fluid compartments:
Intracellular fluid may then be estimated by subtracting extracellular fluid from total body water.
Another method of determining total body water percentage (TBW %) is via Bioelectrical Impedance Analysis (BIA). In the traditional BIA method, a person lies on a cot and spot electrodes are placed on the hands and bare feet. Electrolyte gel is applied first, and then a weak alternating current at a frequency of 50 kHz is introduced. This AC waveform can create a current inside the body via the highly capacitive skin without causing a DC flow or burns, and the current is limited to roughly 20 mA for safety.
BIA has emerged as a promising technique because of its simplicity, low cost, high reproducibility and noninvasiveness. BIA prediction equations can be either generalized or population - specific, allowing this method to be potentially very accurate. Selecting the appropriate equation is important to determining the quality of the results.
For clinical purposes, scientists are developing a multi-frequency BIA method that may further improve the method 's ability to predict a person 's hydration level. New segmental BIA equipment that uses more electrodes may lead to more precise measurements of specific parts of the body.
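BIA converts the measured impedance into a water estimate through a regression (prediction) equation. The sketch below uses the common generic template based on the height² / resistance "impedance index" plus weight; the specific coefficients are placeholders for illustration only and are not a validated equation, which is exactly why the choice of population - specific equation matters.

def bia_total_body_water_l(height_cm, weight_kg, resistance_ohm, coeffs=(0.37, 0.12, 2.0)):
    """Generic single-frequency BIA prediction equation of the form
    TBW = a * height^2 / R + b * weight + c.
    The default coefficients are placeholders, not a validated equation.
    """
    a, b, c = coeffs
    impedance_index = height_cm ** 2 / resistance_ohm  # cm^2 per ohm
    return a * impedance_index + b * weight_kg + c

# Example: 175 cm, 70 kg, measured resistance of 500 ohms
print(round(bia_total_body_water_l(175, 70, 500), 1))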
In humans, total body water can be estimated based on the premorbid (or ideal) body weight and correction factor.
TBW = weight × C
C is a coefficient for the expected percentage of weight made up of free water. For adult, non-elderly males, C = 0.6. For adult elderly males, malnourished males, or females, C = 0.5. For adult elderly or malnourished females C = 0.45. A total body water deficit (TBWD) can then be approximated by the following formula:
Where (Na)t = target sodium concentration (usually 140 mEq / L), and (Na)m = measured sodium concentration.
The resultant value is the approximate volume of free water required to correct a hypernatremic state. In practice, the value rarely approximates the actual amount of free water required to correct a deficit due to insensible losses, urinary output, and differences in water distribution among patients.
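As a worked illustration of the weight - coefficient estimate above, the short Python sketch below computes TBW and then one common clinical form of the free water deficit, TBW × ((Na)m / (Na)t − 1). The exact deficit formula is not spelled out in this article and varies between references, so the function below should be read as an assumption - laden example rather than the definitive calculation.

def total_body_water_l(weight_kg, coefficient):
    # TBW = weight * C, with C chosen from the categories listed above.
    return weight_kg * coefficient

def free_water_deficit_l(weight_kg, coefficient, measured_na, target_na=140.0):
    # Assumed form: deficit = TBW * (measured Na / target Na - 1).
    tbw = total_body_water_l(weight_kg, coefficient)
    return tbw * (measured_na / target_na - 1.0)

# Example: a 70 kg non-elderly adult male (C = 0.6) with a measured sodium of 154 mEq/L
print(round(total_body_water_l(70, 0.6), 1))         # 42.0 L
print(round(free_water_deficit_l(70, 0.6, 154), 1))  # 4.2 L

In practice, as noted above, the computed volume is only a starting point; insensible losses, urinary output, and individual differences in water distribution mean the true requirement usually differs.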
Water in the animal body performs a number of functions: as a solvent for transportation of nutrients; as a medium for excretion; a means for heat control; as a lubricant for joints; and for shock absorption.
The usual way of adding water to a body is by drinking. Water also enters the body with foods, especially those rich in water, such as plants, raw meat, and fish.
The amount of this water that is retained in animals is affected by several factors. Water amounts vary with the age of the animal. The older the vertebrate animal, the higher its relative bone mass and the lower its body water content.
In diseased states, where body water is affected, the fluid compartment or compartments that have changed can give clues to the nature of the problem, or problems. Body water is regulated by hormones, including anti-diuretic hormone, aldosterone and atrial natriuretic peptide.
Volume contraction is a decrease in body fluid volume, with or without a concomitant loss of osmolytes. The loss of the body water component of body fluid is specifically termed dehydration.
Sodium loss approximately correlates with fluid loss from extracellular fluid, since sodium has a much higher concentration in extracellular fluid (ECF) than in intracellular fluid (ICF). In contrast, K + has a much higher concentration in ICF than ECF, and therefore its loss correlates instead with fluid loss from ICF, since K + loss from ECF causes the K + in ICF to diffuse out of the cells, dragging water with it by osmosis.
|
where does candia nh go to high school | Candia, New Hampshire - wikipedia
Candia is a town in Rockingham County, New Hampshire, United States. The population was 3,909 at the 2010 census. The town includes the villages of Candia, Candia Four Corners and East Candia.
Settled in 1743, Candia was once part of Chester and known as "Charmingfare '', probably because of the many bridle paths or "parades '' through the pleasant scenery. Charmingfare was incorporated in 1763 and named "Candia '' by Colonial Governor Benning Wentworth, possibly after the old name under Venetian domination of the principal city of Crete, which he had visited after graduation from Harvard. Another account holds, "The town received its present name in compliment to Governor Benning Wentworth, who was once a prisoner on the island of Candia, in the Mediterranean Sea. ''
Candia was served by the Portsmouth & Concord Railroad, which stretched between its namesake cities. In 1862 the segment between Candia and Suncook was abandoned, coinciding with the opening of a new segment between Manchester and Candia. Therefore, the new line ran from Manchester to Portsmouth via Candia. In 1895 ownership of the line passed to the Boston & Maine Railroad who made it their Portsmouth Branch. Passenger service ended in 1954. The last trains passed through Candia in the early 1980s. The track was abandoned in 1982 and removed between 1983 and 1985. Today the railbed is part of the Rockingham Recreational Trail.
According to the United States Census Bureau, the town has a total area of 30.6 square miles (79 km²). 30.3 square miles (78 km²) of it is land and 0.2 square miles (0.52 km²) of it is water, comprising 0.79 % of the town. The town is bordered by Deerfield to the north, Hooksett (in Merrimack County) to the west, Auburn and Chester to the south, and Raymond to the east. Notable villages in the town include Candia proper, near the town 's northern border; Candia Four Corners, closer to the geographic center of the town; and East Candia, near the town 's eastern border.
Candia is drained by the North Branch River, a tributary of the Lamprey River. The town lies almost fully within the Piscataqua River watershed except for the western and southern edges, which are in the Merrimack River watershed. The highest point in town is Hall Mountain, at 941 feet (287 m) above sea level, located in Bear Brook State Park in the northwestern part of the town. (The main entrance to the state park and most of its facilities are in neighboring Allenstown.)
Candia is bisected by two state highways, Route 43 running north from Route 101 through the Candia Four Corners to the Deerfield town line, and Route 27, running east / west from the Hooksett town line through the Candia Four Corners to the Raymond town line. Route 101 is the major east / west thoroughfare through southern New Hampshire and travels through the south part of Candia.
As of the census of 2000, there were 3,911 people, 1,359 households, and 1,108 families residing in the town. The population density was 129.0 per square mile (49.8 / km2). There were 1,384 housing units at an average density of 45.6 per square mile (17.6 / km2). The racial makeup of the town was 98.11 % White, 0.43 % African American, 0.46 % Native American, 0.59 % Asian, 0.03 % Pacific Islander, 0.10 % from other races, and 0.28 % from two or more races. Hispanic or Latino of any race were 0.87 % of the population.
There were 1,359 households out of which 40.0 % had children under the age of 18 living with them, 72.0 % were married couples living together, 5.2 % had a female householder with no husband present, and 18.4 % were non-families. 12.7 % of all households were made up of individuals and 3.2 % had someone living alone who was 65 years of age or older. The average household size was 2.88 and the average family size was 3.14.
In the town, the population was spread out with 26.6 % under the age of 18, 6.0 % from 18 to 24, 33.6 % from 25 to 44, 26.5 % from 45 to 64, and 7.2 % who were 65 years of age or older. The median age was 38 years. For every 100 females there were 102.1 males. For every 100 females age 18 and over, there were 102.0 males.
The median income for a household in the town was $61,389, and the median income for a family was $67,163. Males had a median income of $43,260 versus $31,127 for females. The per capita income for the town was $25,267. About 2.3 % of families and 2.6 % of the population were below the poverty line, including 2.3 % of those under age 18 and 5.3 % of those age 65 or over.
Candia is part of School Administrative Unit 15, along with Hooksett and Auburn. There is one public school in Candia, the Henry W. Moore School for kindergarten through eighth grade, located near the Candia Four Corners on Deerfield Road. High school students from Candia attend school outside of the district, currently under contract at Manchester Central High School, but are transitioning to a choice between Manchester Central and Pinkerton Academy in Derry. Candia is also home to Jesse Remington High School, a private Christian school that offers grades 9 - 12. Some Candia residents send their children to other private high schools in the area, including Trinity High School in Manchester.
Gallery: Fitts Museum; McDonald Mill, c. 1915; gate, Candia Congregational Cemetery.
Fire and Emergency Medical Services are provided by the Candia Volunteer Fire Department, an all - volunteer department organized in 1925. This department provides fire suppression, rescue, and first - responder Emergency Medical Services to the citizens of Candia and the surrounding communities. The closest hospitals are the Elliot Hospital, a Level Two trauma center, and Catholic Medical Center, one of the most advanced cardiac care centers in New England. Both of these facilities are located approximately 20 minutes away in Manchester. Exeter Hospital is also located about 20 minutes away in Exeter.
Police protection is provided by the Candia Police Department, assisted by the New Hampshire State Police and other local municipal police departments.
|
where is loch lomond and the trossachs national park located | Loch Lomond and the Trossachs National park - wikipedia
Loch Lomond and The Trossachs National Park (Scottish Gaelic: Pàirc Nàiseanta Loch Laomainn is nan Tròisichean) is a national park in Scotland centred on Loch Lomond, and includes several ranges of hills and the Trossachs. It was the first of the two national parks established by the Scottish Parliament in 2002, the second being the Cairngorms National Park.
The park is the fourth largest in the British Isles, with a total area of 1,865 km² (720 sq mi) and a boundary of some 350 km (220 mi) in length. It includes 21 Munros (including Ben Lomond, Ben Lui, Beinn Challuim, Ben More and two peaks called Ben Vorlich), 19 Corbetts, two forest parks (Queen Elizabeth and Argyll) and 57 designated special nature conservation sites.
15,600 people live in the park, which is customarily split into four sections: Breadalbane, Loch Lomond, The Trossachs, and Argyll Forest Park.
The park consists of many mountains and lochs, and the principal attractions are scenery, walking, and wildlife.
For walkers seeking a challenge, the West Highland Way passes through the park, while the mountains of Ben Lomond in Dunbartonshire and The Cobbler in the Arrochar Alps on the Cowal Peninsula attract most hikers. Less intrepid visitors can detour from the A82 to view the Falls of Dochart.
There is a national park visitor centre at the southern end of Loch Lomond, called Loch Lomond Shores in Balloch, which includes a visitor information centre at the most popular gateway to the park, as well as an aquarium, shops and restaurants.
On Loch Katrine, visitors can travel on the historic steamship SS Sir Walter Scott, while cruises on Loch Lomond can be taken from Tarbet, Argyll and Bute and Balloch; there is also an extensive water taxi service between most lochside communities.
A list of mountains over 3,000 feet (914 m) within the park and the closest village:
There are 21 Munros in the National Park and 16 of them are within Breadalbane. Ben Lomond remains the most popular mountain in Scotland to be climbed.
|
when does mac os high sierra get released | MacOS High Sierra - Wikipedia
macOS High Sierra (version 10.13) is the fourteenth major release of macOS, Apple Inc. 's desktop operating system for Macintosh computers. The successor to macOS Sierra, it was announced at WWDC 2017 on June 5, 2017.
The name "High Sierra '' refers to the High Sierra region in California. As with Snow Leopard, Mountain Lion and El Capitan, the name also alludes to its status as a refinement of its predecessor, focused on performance improvements and technical updates rather than user features. Among the apps with notable changes are Photos and Safari.
macOS High Sierra has the same requirements as its predecessor:
A workaround exists to install macOS High Sierra on some Mac computers that are no longer officially supported.
HEVC hardware acceleration requires a Mac with a sixth - generation Intel processor or newer:
Apple File System (APFS) replaces HFS Plus as the default file system in macOS. It supports 64 - bit inode numbers, is designed for flash technology, and is designed to make common tasks like duplicating a file and finding the size of a folder 's contents faster. It also has built - in encryption, crash - safe protections, and simplified data backup on the go.
Metal, Apple 's low - level graphics API, has been updated to Metal 2. It includes virtual - reality and machine - learning features, as well as support for external GPUs. The system 's windowing system, Quartz Compositor, supports Metal 2.
macOS High Sierra adds support for High Efficiency Video Coding (HEVC), with hardware acceleration where available, as well as support for High Efficiency Image File Format (HEIF). Macs with the Intel Kaby Lake processor offer hardware support for Main 10 profile 10 - bit hardware decoding, those with the Intel Skylake processor support Main profile 8 - bit hardware decoding, and those with AMD Radeon 400 series graphics also support full HEVC decoding. In addition, audio codecs FLAC and Opus will also be supported, but not in iTunes.
Kernel extensions ("kexts '') will require explicit approval by the user before being able to run.
The Low Battery notification and its icon were replaced by a flatter modern look.
The time service ntpd was replaced with timed for time synchronization.
macOS High Sierra gives Photos an updated sidebar and new editing tools. Photos synchronizes tagged People with iOS 11.
Mail has improved Spotlight search with Top Hits. Mail also uses 35 % less storage space due to optimizations, and Mail 's compose window can now be used in split screen mode.
Safari has a new "Intelligent Tracking Prevention '' feature which uses machine learning to block third parties from tracking the user 's actions. Safari can also block autoplaying videos from playing. The "Reader Mode '' can be made to be always turned on. In addition, Safari 11 will also support WebAssembly.
The Notes app allows the user to add tables to a note. A note can be pinned to the top of the list.
Siri now uses a more natural and expressive voice. It also uses machine learning to understand the user better. Siri synchronizes information across iOS and Mac devices so the Siri experience is the same regardless of the product being used.
|
nba team with most first round draft picks | List of first overall NBA draft picks - wikipedia
The National Basketball Association 's first overall pick is the player who is selected first among all eligible draftees by a team during the annual National Basketball Association (NBA) draft. The first pick is awarded to the team that wins the NBA draft lottery; in most cases, that team had a losing record in the previous season. The team with the first pick attracts significant media attention, as does the player who is selected with that pick.
Eleven first picks have won the NBA Most Valuable Player Award: Oscar Robertson, Kareem Abdul - Jabbar (record six - time winner), Bill Walton, Magic Johnson (three - time winner), Hakeem Olajuwon, David Robinson, Shaquille O'Neal, Allen Iverson, Tim Duncan (two - time winner), LeBron James (four - time winner), and Derrick Rose (youngest winner).
Since the advent of the draft lottery in 1985, seven number one overall picks have won an NBA title. They are David Robinson, Shaquille O'Neal, Glenn Robinson, Tim Duncan, LeBron James, Andrew Bogut, and Kyrie Irving.
China 's Yao Ming (2002) and Italy 's Andrea Bargnani (2006) are the only two players without competitive experience in the United States to be drafted first overall. Ten other international players with U.S. college experience have been drafted first overall -- Mychal Thompson (Bahamas) in 1978, Hakeem Olajuwon (Nigeria) in 1984, Patrick Ewing (Jamaica) in 1985, Tim Duncan (U.S. Virgin Islands) in 1997, Michael Olowokandi (Nigeria) in 1998, Andrew Bogut (Australia) in 2005, Kyrie Irving (Australia) in 2011, Anthony Bennett (Canada) in 2013, Andrew Wiggins (Canada) in 2014, and Ben Simmons (Australia) in 2016. Duncan is an American citizen, but is considered an "international '' player by the NBA because he was not born in one of the fifty states or the District of Columbia. Ewing had dual Jamaican - American citizenship when he was drafted and Irving and Simmons had dual Australian - American citizenship when they were drafted.
Note that the drafts between 1947 and 1949 were held by the Basketball Association of America (BAA). The Basketball Association of America became the National Basketball Association after absorbing teams from the National Basketball League in the fall of 1949. Official NBA publications include the BAA Drafts as part of the NBA 's draft history.
|
what term refers to higher grades given for the same work | Grade inflation - wikipedia
Grade inflation is used in two senses: (1) grading leniency: the awarding of higher grades than students deserve, which yields a higher average grade given to students; and (2) the tendency to award progressively higher academic grades for work that would have received lower grades in the past.
This article is about grade inflation in the second sense. Higher grades in themselves do not prove grade inflation, and many believe there is no such problem; it is also necessary to demonstrate that the grades are not deserved.
Grade inflation is frequently discussed in relation to education in the United States, and to GCSEs and A levels in England and Wales. It is also an issue in many other nations, such as Canada, Australia, New Zealand, France, Germany, South Korea and India.
Louis Goldman, professor at Wichita State University, states that an increase of 0.404 points was reported from a survey of 134 colleges from 1965 to 1973. A second study of 180 colleges showed a 0.432 GPA increase from 1960 to 1974, both indicating grade inflation.
Stuart Rojstaczer, a retired geophysics professor at Duke University, has collected historical data from over 400 four - year schools, in some cases dating back to the 1920s, showing evidence of nationwide grade inflation over time, and regular differences between classes of schools and departments.
Harvey Mansfield, a professor of government at Harvard University, argues that just denying the existence of grading inflation at Harvard proves that the problem is serious. He states that students are given easy grades by some professors to be popular, and these professors will be forgotten; only the ones challenging students will be remembered.
Main historical trends identified include:
The average at private schools is currently 3.3, while at public schools it is 3.0. This difference is partly but not entirely attributed to differences in quality of student body, as measured by standardized test scores or selectivity. After correcting for these factors, private schools grade on average 0.1 or 0.2 points higher than comparable public schools, depending on which measure is used.
There is significant variation in grading between different schools, and across disciplines. Between classes of schools, engineering schools grade lower by an average of 0.15 points, while public flagship schools grade somewhat higher. Across disciplines, science departments grade on average 0.4 points below humanities and 0.2 points below social sciences. While engineering schools grade lower on average, engineering departments grade comparably to social sciences departments, about 0.2 points above science departments. These differences between disciplines have been present for at least 40 years, and sparse earlier data suggests that they date back 70 years or more.
Until recently, the evidence for grade inflation in the US has been sparse, largely anecdotal, and sometimes even contradictory; firm data on this issue was not abundant, nor was it easily attainable or amenable for analysis. National surveys in the 1990s generally showed rising grades at American colleges and universities, but a survey of college transcripts by a senior research analyst in the U.S. Department of Education found that grades declined slightly in the 1970s and 1980s. Data for American high schools were lacking.
Recent data leave little doubt that grades are rising at American colleges, universities and high schools. An evaluation of grading practices in US colleges and universities written in 2003, shows that since the 1960s, grades in the US have risen at a rate of 0.15 per decade on a 4.0 scale. The study included over 80 institutions with a combined enrollment of over 1,000,000 students. An annual national survey of college freshmen indicates that students are studying less in high school, yet an increasing number report high school grades of A − or better. Studies are being made in order to figure out who is more likely to inflate grades, and why an instructor would inflate a grade.
In an attempt to combat the grade inflation prevalent at many top US institutions, Princeton began in the autumn of 2004 to employ guidelines for grading distributions across departments. Under the new guidelines, departments have been encouraged to re-evaluate and clarify their grading policies. The administration suggests that, averaged over the course of several years in an individual department, A-range grades should constitute 35 % of grades in classroom work, and 55 % of grades in independent work such as Senior Theses. These guidelines are enforced by the academic departments. Since the policy 's inception, A-range grades have declined significantly in Humanities departments, while remaining nearly constant in the Natural Science departments, which were typically at or near the 35 % guideline already.
In 2009, it was confirmed that the policy implemented in 2004 had brought undergraduate grades within the ranges targeted by the initiative. In 2008 -- 09, A grades (A+, A, A −) accounted for 39.7 % of grades in undergraduate courses across the University, the first time that A grades have fallen below 40 % since the policy was approved. The results were in marked contrast to those from 2002 -- 03, when As accounted for a high of 47.9 % of all grades.
Deflation has varied by division, with the social sciences and natural sciences largely holding steady for the last four years. During that period, A grades have ranged from 37.1 to 37.9 % in the social sciences and from 35.1 to 35.9 % in the natural sciences. In the humanities and engineering, where deflation has been slower, 2008 -- 09 brought significant movement. A 's accounted for 42.5 % of grades in the humanities last year and 40.6 % of grades in engineering, both down two percentage points compared to 2007 -- 08. In the period from fall 2006 through spring 2009, the most recent three - year period under the new grading policy, A 's accounted for 40.1 % of grades in undergraduate courses, down from 47.0 % in 2001 -- 04, the three years before the faculty adopted the policy. The 2006 -- 09 results also mark continued deflation from those reported a year ago, when A 's accounted for 40.4 % of undergraduate grades in the 2005 -- 08 period. In humanities departments, A 's accounted for 44.1 % of the grades in undergraduate courses in 2006 -- 09, down from 55.6 % in 2001 -- 04. In the social sciences, there were 37.7 % A grades in 2006 -- 09, down from 43.3 % in 2001 -- 04. In the natural sciences, there were 35.6 % A grades in 2006 -- 09, compared to 37.2 % in 2001 -- 04. In engineering, the figures were 41.7 % A 's in 2006 -- 09, down from 50.2 % in 2001 -- 04.
Grade inflation is often equated with lax academic standards. For example, the following quote about lax standards from a Harvard University report in 1894 has been used to claim that grade inflation has been a longstanding issue: "Grades A and B are sometimes given too readily -- Grade A for work of no very high merit, and Grade B for work not far above mediocrity... insincere students gain passable grades by sham work. '' Issues of standards in American education have been longstanding. However, rising grades did not become a major issue in American education until the 1960s. For example, in 1890 Harvard 's average GPA was 2.27. In 1950, its average GPA was 2.55. By 2004, its GPA, as a result of dramatic rises in the 1960s and gradual rises since, had risen to 3.48.
Harvard graduate and professor Harvey Mansfield is a longtime vocal opponent of grade inflation at his alma mater. In 2013, Mansfield, after hearing from a dean that "the most frequent grade is an A '', claimed to give students two grades: one for their transcript, and the one he thinks they deserve. He commented, "I did n't want my students to be punished by being the only ones to suffer for getting an accurate grade ''. In response, Nathaniel Stein published a satirical "leaked '' grading rubric in The New York Times, which included such grades as an A++ and A+++, or "A+ with garlands ''. In his 2001 article in the Chronicle of Higher Education, Mansfield blames grade inflation on affirmative action and unqualified African American students: "I said that when grade inflation got started, in the late 60 's and early 70 's, white professors, imbibing the spirit of affirmative action, stopped giving low or average grades to black students and, to justify or conceal it, stopped giving those grades to white students as well. '' He also claimed that he did not have figures to back up his claims, but he is convinced that grade inflation began with African American students getting grades which were too high, "Because I have no access to the figures, I have to rely on what I saw and heard at the time. Although it is not so now, it was then utterly commonplace for white professors to overgrade black students. Any professor who did not overgrade black students either felt the impulse to do so or saw others doing it. From that, I inferred a motive for overgrading white students, too. ''
The University of Alabama has been cited as a recent case of grade inflation. In 2003, Robert Witt, president of the university, responded to criticism that his administration encouraged grade inflation on campus by shutting down access to the records of the Office of Institutional Research, which, until that year, had made grade distribution data freely available. It is however, still available on the Greek Affairs website. The Alabama Scholars Organization, and its newspaper, the Alabama Observer, had been instrumental in exposing the situation and recommending that the Witt administration adopt public accountability measures. The paper had revealed that several departments awarded more than 50 percent "A '' s in introductory courses and that one department, Women 's Studies, handed out 90 percent "A '' s (the vast majority of those being "A + ''). Grades had grown consistently higher during the period examined, from 1973 to 2003.
UC Berkeley has a reputation for rigorous grading policies. Engineering departmental guidelines state that no more than 17 % of the students in any given class may be awarded A grades, and that the class GPA should be in the range of 2.7 to 2.9 out of a maximum of 4.0 grade points. Some departments, however, are not adhering to such strict guidelines, as data from the UCB 's Office of Student Research indicates that the average overall undergraduate GPA was about 3.25 in 2006. Other campuses have stricter grading policies. For example, the average undergraduate GPA of UC San Diego is 3.05, and fewer students have GPA > 3.5 in science majors. UC Irvine 's average GPA is 3.01.
A small liberal arts college in New Hampshire, Saint Anselm College has received national attention and recognition for attempting to buck the trend of grade inflation seen on the campuses of many American colleges and universities. At Saint Anselm, the top 25 % of the class has a 3.1 GPA; the median grade at the college is around a 2.50 GPA. Some professors and administrators believe that inflating grades makes it harder for students to realize their academic strengths and weaknesses and may encourage students to take classes based on grade expectation. The practice also makes it harder for parents and students to determine whether or not the grade was earned. Because of this, at Saint Anselm College, a curriculum committee was set up in 1980 to meet with the academic dean and review the grading policies on a monthly basis. This committee fights the practice of inflation by joining the administration and faculty in an effort to mend them into a working force against grade inflation. The former president of the college, Father Jonathan DeFelice, is quoted as saying, "I can not speak for everyone, but if I 'm headed for the operating room, I will take the surgeon who earned his or her "A '' the honest way, '' in support of Saint Anselm 's stringent grading system.
Other colleges such as Washington and Lee University, University of Rochester, Middlebury College, The College of William and Mary, Fordham University, Swarthmore College, Bates College, Cornell University, the University of Chicago and Boston University are also known for their rigorous grading practices. However, data indicate that even schools known for their traditionally rigorous grading practices have experienced grade inflation, and these claims may now be overstated. Washington and Lee had an average GPA of 3.27 in 2006 and Swarthmore 's graduates had a mean GPA of 3.24 in 1997. At some schools there are concerns about different grading practices in different departments; engineering and science departments at schools such as Northwestern University are reputed to have more rigorous standards than departments in other disciplines. To clarify the grades on its graduates ' transcripts, Reed College includes a card, the current edition of which reports that "The average GPA for all students in 2013 -- 14 was 3.15 on a 4.00 scale. This figure has increased by less than 0.2 of a grade point in the past 30 years. During that period, only eleven students have graduated from Reed with perfect 4.00 grade averages. '' Wellesley College implemented a maximum per - class grade cap of 3.33 in 2004, though professors could award a higher average grade by filing a written explanation. Grades were lowered to comply with the cap, and student evaluations of professors also declined. The number of students majoring in economics increased while the number in other social sciences decreased, though this may have been part of larger general trends at the time.
A January 7, 2009 article in the Pittsburgh Post-Gazette used the term "grade inflation '' to describe how some people viewed a grading policy in the Pittsburgh public school district. According to the article, the policy sets 50 % as the minimum score that a student can get on any given school assignment. The article also stated that some students said they would rather get a score of 50 % than do the school work. A March 2, 2009 follow - up article in the same newspaper said that the policy had been amended so that students who refuse to do the work will receive a grade of zero, and that the minimum grade of 50 % will only apply to students who make a "good - faith effort ''. A March 3, 2009, article in the same newspaper quoted Bill Hileman, a Pittsburgh Federation of Teachers staff representative, as saying, "The No. 1 problem with the 50 percent minimum was the negative impact on student behavior. '' The same article also said that the school district was planning to adopt a new grading scale in at least two schools by the end of the month. The article stated that under the original grading scale, the minimum scores required to earn an A, B, C, D, or F, were, respectively, 90 %, 80 %, 70 %, 60 %, and 0 %. Under the new 5 - point grading scale, the minimum scores required to earn an A, B, C, D, or F would be changed, respectively, to 4.0, 3.0, 2.0, 1.0, and 0.
James Côté and Anton L. Allahar, both professors of sociology at the University of Western Ontario conducted a rigorous empirical study of grade inflation in Canada, particularly of the province of Ontario. Up until the 1960s, grading in Ontario had been borne out of the British system, in which no more than 5 % of students were given As, and 30 % given Bs. In the 1960s, average performers in Ontario were C - students, while A-students were considered exceptional. As of 2007, 90 % of Ontario students have a B average or above. In Ontario, high school grades began to rise with the abolition of province - wide standardized exams in 1967.
The abolition of province - wide exams meant that student marks were entirely assigned by individual teachers. In 1983, 38 % of students registering in universities had an average that was higher than 80 %. By 1992, this figure was 44 %. According to the Council of Ontario Universities, 52.6 % of high school graduates applying to Ontario universities in 1995 had an A average. In 2004, this figure had risen 61 %. In 1995, 9.4 percent of high school graduates reported an A+ average. In 2003, this figure had risen to a high of 14.9 %. The average grade of university applicants was 80 % in 1997, and this percentage has steadily increased each year since.
In 2004, Quebec 's McGill University admitted that students from Ontario were given a higher cutoff grade than students from other provinces, because of concerns about grade inflation originating from the fact that Ontario does not have standardized provincial testing as a key component of high school graduation requirements.
In 2007, the Atlantic Institute for Market Studies released a report on grade inflation in Atlantic Canada. Mathematics scores in New Brunswick francophone high schools indicate that teacher - assigned marks are inflated in relation to marks achieved on provincial exams. It was found that the average school marks and the average provincial exam marks were discordant. When looking at marks for school years 2001 -- 2002 to 2003 -- 2004, it was found that school marks in all 21 high schools were higher than the provincial exam marks. The provincial average for school marks is 73.7 % while the average for provincial exam marks is 60.1 % over the three years.
In the context of provincial exams and teacher assigned grades, grade inflation is defined as the difference between the teacher - assigned marks and the results on a provincial exam for that particular course. It was found that higher grade inflation points to lower provincial exam results. Of the 21 high schools, École Marie - Gaëtane had the highest grade inflation, at 24.7 %. With a provincial exam average of 52.3 % this school is also the least achieving school in the province. In contrast, schools Polyvalente Louis - J - Robichaud, Polyvalente Mathieu - Martin, École Grande - Rivière and Polyvalente Roland - Pépin had the lowest grade inflation with values ranging from − 0.7 % to 9.3 %. They were the four top performing schools on the grade 11 mathematics provincial exams. Similar results were found for Anglophone New Brunswick high schools, as well as for Newfoundland and Labrador schools. Despite the high marks assigned by teachers, Atlantic Canadian high school students have consistently ranked poorly in pan Canadian and international assessments.
In 2008, in British Columbia, the University of Victoria (UVic) and the University of British Columbia (UBC) reduced the number of Grade 12 provincial exams that high school students were required to write in order to gain admission to those universities. Prior to 2008, high school students applying to UVic and UBC were required to write 4 provincial exams, including Grade 12 English. In 2008, this standard was reduced so that students were only required to write the provincial exam for Grade 12 English. A UVic administrator claimed that the rationale for this reduction in standards is that it allows the university to better compete with central Canadian universities (i.e. Ontario and Québec universities) for students, and prevent enrollment from falling. Universities in central Canada do not require high school students to write provincial exams, and can offer early admission based on class marks alone. A Vancouver high school principal criticized the change in requirements by charging that it would become difficult to detect grade inflation. The president of the University Presidents ' Council of British Columbia also criticized the move and said the provincial exams are "the great equalizer ''. The British Columbia Teachers Federation supported the change because in the past some students avoided certain subjects for fear that poor marks on provincial exams would bring down their average.
In the fall of 2009, Simon Fraser University (SFU) changed its requirements so that high school students only need to pass the English 12 provincial exam. Previously, students were required to pass 4 provincial exams, including English 12, in order to apply. This change brought SFU into line with UVic and UBC. Administrators claimed that this reduction of standards was necessary so that SFU could better compete with UBC and UVic for students. The change was criticized on the ground that it leads to "a race to the bottom ''.
As of 2007, 40 % of Ontario high school graduates leave with A averages -- 8 times as many as would be awarded in the traditional British system. In Alberta, as of 2007, just over 20 % of high school graduates leave with an A average. This discrepancy may be explained that all Alberta high school students must write province - wide standardized exams, Diploma exams, in core subjects, in order to graduate.
The Alberta Diploma exams are given in grade 12, covering core subjects such as biology, chemistry, English, math, physics and social studies. The exams are worth 30 percent of a grade 12 student 's final mark. Quebec also requires its students to write Diploma Exams for graduating students. Saskatchewan, Manitoba, and Nova Scotia have similar tests. British Columbia has a mandatory English proficiency test in grade 12, provincial tests in other subjects are optional.
Alberta 's focus on standardized exams keeps grade inflation in check, but can put Albertan high school students at a disadvantage relative to students in other provinces. However, Alberta has the highest standards in Canada and produces students who are among the best in international comparisons. By preventing grade inflation, Albertan high schools have been able to greatly ameliorate the problem of compressing students with different abilities into the same category (i.e. inflating grades so that a student in the 98th percentile, for example, can not be distinguished from one in the 82nd percentile).
In relation to grade inflation at the university level, the research of the aforementioned Professors Côté and Allahar concluded that: "We find significant evidence of grade inflation in Canadian universities in both historical and comparative terms, as well as evidence that it is continuing beyond those levels at some universities so as to be comparable with levels found in some American universities. It is also apparent that the inflated grades at Canadian universities are now taken for granted as normal, or as non-inflated, by many people, including professors who never knew the traditional system, have forgotten it, or are in denial ''.
A 2000 study of grade patterns over 20 years at seven Ontario universities (Brock, Guelph, McMaster, Ottawa, Trent, Wilfrid Laurier and Windsor) found that grade point averages rose in 11 of 12 arts and sciences courses between 1973 -- 74 and 1993 -- 94. In addition, it was found that a higher percentage of students received As and Bs and fewer got Cs, Ds and Fs.
A 2006 study by the Canadian Undergraduate Survey Consortium released earlier in 2007 found students at the University of Toronto Scarborough got lower marks on average than their counterparts at Carleton and Ryerson. Marking, not ability, was determined to be the reason.
In 2009 a presentation by Greg Mayer on Grade Inflation at the University of Waterloo reported that grade inflation was occurring there. The study initially stated that there was "no consensus on how Grade Inflation is defined... I will define GI as an increase in grades in one or more academic departments over time ''. From 1988 / 89 to 2006 / 07 it was determined that there had been an 11.02 % increase in undergraduate A grades, with the rate of increase being 0.656 % per year. In 100 level Math for the year 2006 / 07, the grade distribution of 11,042 assigned grades was: 31.9 % A, 22.0 % B, 18 % C, 16.3 % D, 11.8 % F. In 400 level Fine Arts courses for 2006 / 07, the distribution of 50 assigned grades was: 100 % A. In relation to increased scores in first - year mathematics, there was no evidence of better preparedness of UW students. A possible source of grade inflation may have been pressure from administrators to raise grades. A case was documented in which a math dean adjusted grades without the consent or authorization of the instructor.
When comparing the 1988 -- 1993 school years with that of the years from 2002 -- 2007, it was discovered that the percentage of As assigned in 400 levels in the Faculty of Arts had risen as follows for every department (first figure is percentage of As for 1988 -- 1993 years, second is percentage of As for 2002 -- 2007 years): Music 65 % / 93 %, Fine Art 51 % / 84 %, Sociology 54 % / 73 %, History 66 % / 71 %, Philosophy 63 % / 69 %, Anthropology 63 % / 68 %, Drama 39 % / 63 %, Political Science 46 % / 57 %, English 43 % / 57 %, French 39 % / 56 %, Economics 36 % / 51 %, Business 28 % / 47 %, Psychology 80 % / 81 %. It is important to note that this study examined only 400 - level courses and conclusions regarding grade inflation should not be generalized to courses at other levels.
Annual grade inflation has been a continuing feature of the UK public examination system for several decades. In April 2012 Glenys Stacey, the chief executive of Ofqual, the UK public examinations regulator, acknowledged its presence and announced a series of measures to restrict further grade devaluation.
Since the turn of the millennium the percentage of pupils obtaining 5 or more good GCSEs has increased by about 30 %, while independent tests performed as part of the OECD PISA and IES TIMSS studies have reported Literacy, Maths and Science scores in England and Wales having fallen by about 6 %, based on their own tests
In September 2009 and June 2012, The Daily Mail and The Telegraph respectively reported that teenagers ' maths skills are no better than 30 years ago, despite soaring GCSE passes. The articles are based on a 2009 paper by Jeremy Hodgen, of King 's College London, who compared the results of 3,000 fourteen - year - olds sitting a mathematics paper containing questions identical to one set in 1976. He found similar overall levels of attainment between the two cohorts. The articles suggest rising GCSE scores owe more to ' teaching to the test ' and grade inflation than to real gains in mathematical understanding.
Between 1975, with the introduction of the national alphabetic grades to the O - Level, and the replacement of both the O - Level and CSE with the GCSE in 1988, approximately 36 % of pupils entered for a Mathematics exam sat the O - Level and 64 % the CSE paper. Grades were allocated on a normative basis, with approximately 53 % (10 % A, 15 % B, 25 -- 30 % C) obtaining a C or above at O - Level and 10 % obtaining the O - Level C equivalent Grade 1 CSE; a proportion were entered for neither paper. The percentage of the population obtaining at least a grade "C '' or equivalent in maths at O - Level remained fixed in the 22 -- 26 % band.
Note: Historically an:
With the replacement of the previous exams with the GCSE and a move from a normative to a criterion referencing grade system, reliant on examiner judgement, the percentage obtaining at least a grade C, in mathematics, has risen to 58.4 %, in 2012.
An analysis of the GCSE awards to pupils achieving the average YELLIS ability test score of 45, between 1996 -- 2006, identified a general increase in awards over the 10 years, ranging from 0.2 (Science) to 0.8 (Maths) of a GCSE grade.
It has also been suggested that the incorporation of GCSE awards into school league tables, and the setting of school - level targets at above national average levels of attainment, may be a driver of GCSE grade inflation. At the time of introduction the E grade was intended to be equivalent to the CSE grade 4, and so obtainable by a candidate of average / median ability. Sir Keith Joseph set schools a target to have 90 % of their pupils obtain a minimum of a grade F (which was the ' average ' grade achieved in 1988); the target was achieved nationally in the summer of 2005. David Blunkett went further and set schools the goal of ensuring 50 % of 16 - year - olds gained 5 GCSEs or equivalent at grade C and above, requiring schools to devise a means for 50 % of their pupils to achieve the grades previously only obtained by the top 30 %; this was achieved by the summer of 2004 with the help of equivalent and largely vocational qualifications. Labelling schools as failing if they are unable to achieve at least 5 Cs, including English and Maths at GCSE, for 40 % of their pupils has also been criticised, as it essentially requires 40 % of each intake to achieve the grades only obtained by the top 20 % at the time of the qualification 's introduction.
A number of reports have also suggested the licensing of competing commercial entities to award GCSEs may be contributing to the increasing pass rates, with schools that aggressively switch providers appearing to obtain an advantage in exam pass rates.
The five exam boards that certify examinations have little incentive to uphold higher standards than their competitors, although an independent regulator, Ofqual, is in place to guard against lowering standards. Nevertheless, there remain strong incentives for "gaming '' and "teaching to the test ''.
In response to allegations of grade inflation, a number of schools have switched to other exams, such as the International GCSE, or the International Baccalaureate middle years programme.
Source: Joint Council for General Qualifications via Brian Stubbs. Note:
Sources: Hansard, DfE Gender and education: the evidence on pupils in England, Brian Stubbs, Expanding Higher Education in the UK, Comparing Educational Performance, by C Banford and T Schuller, School Curriculum and Assessment Authority (SCAA 1996a) GCSE Results Analysis: an analysis of the 1995 GCSE results and trends over time
Between 1963 and 1986 A-Level grades were awarded according to norm - referenced percentile quotas (A ≤ 10 %, B = 15 %, C = 10 %, D = 15 %, E = 20 %, O / N = 20 %, F / U ≥ 10 % of candidates). The validity of this system was questioned in the early 1980s because, rather than reflecting a standard, norm referencing might have simply maintained a specific proportion of candidates at each grade. In small cohorts this could lead to grades which only indicated a candidate 's relative performance against others sitting that particular paper, and so would not be comparable between cohorts (e.g., if one year only 11 candidates were entered for A-Level English nationally, and the next year only 12, this would raise doubt as to whether the single A awarded in year one was equivalent to the single A awarded in year two). In 1984 the Secondary Examinations Council decided to replace norm referencing with criteria referencing, wherein grades would be awarded on "examiner judgement ''. The criteria referencing scheme came into effect in June 1987, and since its introduction examiner judgement, along with the merger of the E and O / N grades and a change to a resitable modular format from June 2002, has increased the percentage of A grade awards from 10 % to more than 25 %, and of A-E awards from 70 % to more than 98 %.
In 2007 Robert Coe, of Durham University, published a report analysing the historic A-Level awards to candidates who 'd obtained the average norm - referenced ALIS TDA / ITDA test scores, he noted:
From 1988 until 2006 the achievement levels have risen by about an average of 2 grades in each subject. Exceptionally, from 1988 the rise appears to be about 3.5 grades for Mathematics.
This suggests that a candidate rejected with a U classification in mathematics in 1988 would likely be awarded a B / C grade in 2012, while in all subjects a 1980s C candidate would now be awarded an A * / A.
The OECD noted in 2012, that the same competing commercial entities are licensed to award A-Levels as GCSEs (see above).
An educationalist at Buckingham University thinks grades inflate when examiners check scripts that lie on boundaries between grades. Every year some are pushed up but virtually none down, resulting in a subtle year - on - year shift.
Sources: JCQ statistics for 2004-2012; BBC News; the Guardian.
Note: norm * -- June 1963 -- 1986 grades allocated per the norm - referenced percentile quotas described above.
The Higher Education Statistics Agency gathers and publishes annual statistics relating to the higher qualifications awarded in the UK. The Students and Qualifiers data sets indicate that the percentage of "good" first-degree classifications has increased annually since 1995. For example, 7% of all first-degree students who graduated in the academic year 1995/96 achieved first class honours; by 2008/09 this had risen to 14%.
Between 1995 and 2011, the proportion of upper second class honours awarded for first degree courses increased from 40.42 % to 48.38 %, whilst lower second class honours dropped from 34.97 % to 28.9 %. The number of third class honours, "ordinary '' (i.e. pass), and unclassified awards dropped substantially during the same period. During this time, the total number of first degrees awarded in the UK increased by 56 %, from 212,000 to 331,000.
The UK Press frequently carries reports and publishes statistics that demonstrate the grade inflation since the early 1980s.
Note: The doubling of institutions and quadrupling of student numbers following the Further and Higher Education Act 1992 makes any direct comparison of pre- and post-1995 awards non-trivial, if not meaningless.
Source: Sunday Times Good University Guide, 1983 -- 4 (1st Ed), 1984 -- 5 (2nd Ed), 2006, 2008, 2012
Source: Higher Education Statistics Agency Universities ' Statistical Record, 1972 / 73 - 1993 / 94: Undergraduate Records
Between 2005 and 2016, the proportion of students receiving an honours grade (mention) in the French general baccalauréat doubled.
In CBSE, a 95 per cent aggregate is 21 times as prevalent today as it was in 2004, and a 90 per cent close to nine times as prevalent. In the ISC Board, a 95 per cent is almost twice as prevalent today as it was in 2012. CBSE called a meeting of all 40 school boards early in 2017 to urge them to discontinue "artificial spiking of marks ''. CBSE decided to lead by example and promised not to inflate its results. But although the 2017 results have seen a small correction, the board has clearly not discarded the practice completely. Almost 6.5 per cent of mathematics examinees in 2017 scored 95 or more -- 10 times higher than in 2004 -- and almost 6 per cent of physics examinees scored 95 or more, 35 times more than in 2004.
Grade inflation is a specific instance of a broader phenomenon of ratings or reputation inflation, where rating decisions are made by individuals. This has been occurring in peer-to-peer services such as Uber.
|
what is the most attended concert in history | List of highest - attended concerts - wikipedia
This page lists the highest-attended concerts of all time. The oldest 100,000-crowd concert reported to Billboard Boxscore is Grateful Dead's gig at Raceway Park, Englishtown, New Jersey on September 3, 1977. The concert was attended by 107,019 people, which remains the largest ticketed concert in the United States to date. Frank Sinatra, Tina Turner, and Paul McCartney successively broke the record at Maracanã Stadium. With an audience of over 184,000 people on April 21, 1990, McCartney held the record for 27 years. Italian singer Vasco Rossi surpassed McCartney's record with his solo concert on July 1, 2017, which celebrated his 40 years of career.
Although attendance figures for free concerts are known to be exaggerated, several concerts have been reported to have drawn audiences of a million or more. According to Guinness World Records, Rod Stewart's show on Copacabana Beach, Rio de Janeiro, remains the highest-attended free concert, with an estimated audience of 3.5 million.
The following are the highest-attended single-artist ticketed concerts (excluding music festivals), each with an attendance of 100,000 people or more.
The following are free concerts with a reported attendance of one million people or more. The list also includes multi-artist festivals, which may not be directly comparable with single-artist concerts. Attendance numbers for many of the kinds of events listed here rely on estimates from the promoters and are known to be exaggerated.
|
is french or italian the language of love | Romance languages - Wikipedia
The Romance languages (sometimes called the Romanic languages, Latin languages, or Neo-Latin languages) are the modern languages that evolved from Vulgar Latin between the sixth and ninth centuries and that thus form a branch of the Italic languages within the Indo - European language family.
Today, around 800 million people are native speakers worldwide, mainly in Europe, Africa and the Americas, but also elsewhere. Additionally, the major Romance languages have many non-native speakers and are in widespread use as lingua francas. This is especially the case for French, which is in widespread use throughout Central and West Africa, Madagascar, Mauritius and the Maghreb.
The five most widely spoken Romance languages by number of native speakers are Spanish (470 million), Portuguese (250 million), French (150 million), Italian (60 million), and Romanian (25 million).
Because of the difficulty of imposing boundaries on a continuum, various counts of the modern Romance languages are given; for example, Dalby lists 23 based on mutual intelligibility. The following, more extensive list, includes 35 current, living languages, and one recently extinct language, Dalmatian:
Romance languages are the continuation of Vulgar Latin, the popular and colloquial sociolect of Latin spoken by soldiers, settlers, and merchants of the Roman Empire, as distinguished from the classical form of the language spoken by the Roman upper classes, the form in which the language was generally written. Between 350 BC and 150 AD, the expansion of the Empire, together with its administrative and educational policies, made Latin the dominant native language in continental Western Europe. Latin also exerted a strong influence in southeastern Britain, the Roman province of Africa, western Germany, Pannonia and the Balkans north of the Jireček Line.
During the Empire 's decline, and after its fragmentation and collapse in the fifth century, varieties of Latin began to diverge within each local area at an accelerated rate and eventually evolved into a continuum of recognizably different typologies. The colonial empires established by Portugal, Spain, and France from the fifteenth century onward spread their languages to the other continents to such an extent that about two - thirds of all Romance language speakers today live outside Europe.
Despite other influences (e.g. substratum from pre-Roman languages, especially Continental Celtic languages; and superstratum from later Germanic or Slavic invasions), the phonology, morphology, and lexicon of all Romance languages consist mainly of evolved forms of Vulgar Latin. However, some notable differences occur between today 's Romance languages and their Roman ancestor. With only one or two exceptions, Romance languages have lost the declension system of Latin and, as a result, have SVO sentence structure and make extensive use of prepositions.
The term Romance comes from the Vulgar Latin adverb romanice, derived from Romanicus: for instance, in the expression romanice loqui, "to speak in Roman '' (that is, the Latin vernacular), contrasted with latine loqui, "to speak in Latin '' (Medieval Latin, the conservative version of the language used in writing and formal contexts or as a lingua franca), and with barbarice loqui, "to speak in Barbarian '' (the non-Latin languages of the peoples living outside the Roman Empire). From this adverb the noun romance originated, which applied initially to anything written romanice, or "in the Roman vernacular ''.
The word ' romance ' with the modern sense of romance novel or love affair has the same origin. In the medieval literature of Western Europe, serious writing was usually in Latin, while popular tales, often focusing on love, were composed in the vernacular and came to be called "romances ''.
Lexical and grammatical similarities among the Romance languages, and between Latin and each of them, are apparent from the following examples having the same meaning in various Romance lects:
English: She always closes the window before she dines / before dining.
Centro Cadore: La sera sempre la fenestra gnante de disna. Auronzo di Cadore: La sera sempro la fenestra davoi de disnà.
Some of the divergence comes from semantic change: where the same root word has developed different meanings. For example, the Portuguese word fresta is descended from Latin fenestra "window '' (and is thus cognate to French fenêtre, Italian finestra, Romanian fereastră and so on), but now means "skylight '' and "slit ''. Cognates may exist but have become rare, such as finiestra in Spanish, or dropped out of use entirely. The Spanish and Portuguese terms defenestrar meaning "to throw through a window '' and fenestrado meaning "replete with windows '' also have the same root, but are later borrowings from Latin.
Likewise, Portuguese also has the word cear, a cognate of Italian cenare and Spanish cenar, but uses it in the sense of "to have a late supper '' in most varieties, while the preferred word for "to dine '' is jantar (related to archaic Spanish yantar "to eat '') because of semantic changes in the 19th century. Galician has both fiestra (from medieval fẽestra, the ancestor of standard Portuguese fresta) and the less frequently used ventá and xanela.
As an alternative to lei (originally the genitive form), Italian has the pronoun ella, a cognate of the other words for "she '', but it is hardly ever used in speaking.
Spanish, Asturian, and Leonese ventana and Mirandese and Sardinian bentana come from Latin ventus "wind '' (cf. English window, etymologically ' wind eye '), and Portuguese janela, Galician xanela, Mirandese jinela from Latin * ianuella "small opening '', a derivative of ianua "door ''.
Sardinian balcone (alternative for ventàna / bentàna) comes from Old Italian and is similar to other Romance languages such as French balcon (from Italian balcone), Portuguese balcão, Romanian balcon, Spanish balcón, Catalan balcó and Corsican balconi (alternative for purtellu).
Documentary evidence is limited about Vulgar Latin for the purposes of comprehensive research, and the literature is often hard to interpret or generalize. Many of its speakers were soldiers, slaves, displaced peoples, and forced resettlers, more likely to be natives of conquered lands than natives of Rome. In Western Europe, Latin gradually replaced Celtic and Italic languages, which were related to it by a shared Indo - European origin. Commonalities in syntax and vocabulary facilitated the adoption of Latin.
Vulgar Latin is believed to have already had most of the features shared by all Romance languages, which distinguish them from Classical Latin, such as the almost complete loss of the Latin grammatical case system and its replacement by prepositions; the loss of the neuter grammatical gender and comparative inflections; replacement of some verb paradigms by innovations (e.g. the synthetic future gave way to an originally analytic strategy now typically formed by infinitive + evolved present indicative forms of ' have '); the use of articles; and the initial stages of the palatalization of the plosives / k /, / g /, and / t /.
To some scholars, this suggests that the form of Vulgar Latin that evolved into the Romance languages was around during the time of the Roman Empire (from the end of the first century BC), and was spoken alongside the written Classical Latin which was reserved for official and formal occasions. Other scholars argue that the distinctions are more rightly viewed as indicative of sociolinguistic and register differences normally found within any language. The two were mutually intelligible as one and the same language until roughly the second half of the 7th century. Within two hundred years, however, Latin had become a dead language, since "the Romanized people of Europe could no longer understand texts that were read aloud or recited to them"; that is, Latin had ceased to be a first language and had become a foreign language that had to be learned, if the label Latin is restricted to a state of the language frozen in past time and to linguistic features for the most part typical of higher registers.
During the political decline of the Western Roman Empire in the fifth century, there were large - scale migrations into the empire, and the Latin - speaking world was fragmented into several independent states. Central Europe and the Balkans were occupied by the Germanic and Slavic tribes, as well as by the Huns, which isolated the Vlachs from the rest of Romance - speaking Europe.
British and African Romance, the forms of Vulgar Latin used in southeastern Britain and the Roman province of Africa, where it had been spoken by much of the urban population, disappeared in the Middle Ages (as did Pannonian Romance in what is now Hungary and Moselle Romance in Germany). But the Germanic tribes that had penetrated Roman Italy, Gaul, and Hispania eventually adopted Latin / Romance and the remnants of the culture of ancient Rome alongside existing inhabitants of those regions, and so Latin remained the dominant language there.
Over the course of the fourth to eighth centuries, Vulgar Latin, by this time highly dialectalized, broke up into discrete languages that were no longer mutually intelligible. Clear evidence of Latin change comes from the Reichenau Glosses, an eighth-century compilation of about 1,200 words from the fourth-century Vulgate of Jerome that were no longer intelligible, along with their eighth-century equivalents in proto-Franco-Provençal. The following are some examples with reflexes in several modern, closely related Romance languages for comparison:
In all of the above examples, the words appearing in the fourth century Vulgate are the same words as would have been used in Classical Latin of c. 50 BC. It is likely that some of these words had already disappeared from casual speech; but if so, they must have been still widely understood, as there is no recorded evidence that the common people of the time had difficulty understanding the language.
By the 8th century, the situation was very different. During the late 8th century, Charlemagne, holding that "Latin of his age was by classical standards intolerably corrupt '', successfully imposed Classical Latin as an artificial written vernacular for Western Europe. Unfortunately, this meant that parishioners could no longer understand the sermons of their priests, forcing the Council of Tours in 813 to issue an edict that priests needed to translate their speeches into the rustica romana lingua, an explicit acknowledgement of the reality of the Romance languages as separate languages from Latin. By this time, and possibly as early as the 6th century according to Price (1984), the Romance lects had split apart enough to be able to speak of separate Gallo - Romance, Ibero - Romance, Italo - Romance and Eastern Romance languages. Some researchers have postulated that the major divergences in the spoken dialects began in the 5th century, as the formerly widespread and efficient communication networks of the Western Roman Empire rapidly broke down, leading to the total disappearance of the Western Roman Empire by the end of the century. The critical period between the 5th -- 10th centuries AD is poorly documented because little or no writing from the chaotic "Dark Ages '' of the 5th -- 8th centuries has survived, and writing after that time was in consciously classicized Medieval Latin, with vernacular writing only beginning in earnest in the 11th or 12th centuries.
Between the 10th and 13th centuries, some local vernaculars developed a written form and began to supplant Latin in many of its roles. In some countries, such as Portugal, this transition was expedited by force of law; whereas in others, such as Italy, many prominent poets and writers used the vernacular of their own accord -- some of the most famous in Italy being Giacomo da Lentini and Dante Alighieri.
The invention of the printing press brought a tendency towards greater uniformity of standard languages within political boundaries, at the expense of other Romance languages and dialects less favored politically. In France, for instance, the dialect spoken in the region of Paris gradually spread to the entire country, and the Occitan of the south lost ground.
The Romance language most widely spoken natively today is Spanish (Castilian), followed by Portuguese, French, Italian and Romanian, which together cover a vast territory in Europe and beyond, and work as official and national languages in dozens of countries.
French, Italian, Portuguese, Spanish, and Romanian are also official languages of the European Union. Spanish, Portuguese, French, Italian, Romanian, and Catalan are the official languages of the Latin Union; and French and Spanish are two of the six official languages of the United Nations. Outside Europe, French, Portuguese and Spanish are spoken and enjoy official status in various countries that emerged from the respective colonial empires. Spanish is an official language in nine countries of South America, home to about half that continent 's population; in six countries of Central America (all except Belize); and in Mexico. In the Caribbean, it is official in Cuba, the Dominican Republic, and Puerto Rico. In all these countries, Latin American Spanish is the vernacular language of the majority of the population, giving Spanish the most native speakers of any Romance language. In Africa it is the official language of Equatorial Guinea, but has few native speakers there.
Portuguese, in its original homeland, Portugal, is spoken by virtually the entire population of 10 million. As the official language of Brazil, it is spoken by more than 200 million people in that country, as well as by neighboring residents of eastern Paraguay and northern Uruguay, accounting for a little more than half the population of South America. It is the official language of six African countries (Angola, Cape Verde, Guinea - Bissau, Mozambique, Equatorial Guinea, and São Tomé and Príncipe), and is spoken as a first language by perhaps 30 million residents of that continent. In Asia, Portuguese is co-official with other languages in East Timor and Macau, while most Portuguese - speakers in Asia -- some 400,000 -- are in Japan due to return immigration of Japanese Brazilians. In North America 1,000,000 people speak Portuguese as their home language. In Oceania, Portuguese is the second most spoken Romance language, after French, due mainly to the number of speakers in East Timor. Its closest relative, Galician, has official status in the autonomous community of Galicia in Spain, together with Spanish.
Outside Europe, French is spoken natively most in the Canadian province of Quebec, and in parts of New Brunswick and Ontario. Canada is officially bilingual, with French and English being the official languages. In parts of the Caribbean, such as Haiti, French has official status, but most people speak creoles such as Haitian Creole as their native language. French also has official status in much of Africa, but relatively few native speakers. In France 's overseas possessions, native use of French is increasing.
Although Italy also had some colonial possessions before World War II, its language did not remain official after the end of the colonial domination. As a result, Italian outside of Italy and Switzerland is now spoken only as a minority language by immigrant communities in North and South America and Australia. In some former Italian colonies in Africa -- namely Libya, Eritrea and Somalia -- it is spoken by a few educated people in commerce and government. Romania did not establish a colonial empire, but beyond its native territory in Southeastern Europe, the Romanian language also spread to other countries on the Mediterranean (especially the other Romance countries, most notably Italy and Spain), and elsewhere such as Israel, where it is the native language of five percent of the population, and is spoken by many more as a secondary language; this is due to the large numbers of Romanian-born Jews who moved to Israel after World War II. Some 2.6 million people in the former Soviet republic of Moldova speak a variety of Romanian, called variously Moldovan or Romanian by them.
The total native speakers of Romance languages are divided as follows (with their ranking within the languages of the world in brackets):
Catalan is the official language of Andorra. In Spain, it is co-official with Spanish (Castilian) in Catalonia, the Valencian Community, and the Balearic Islands, and it is recognized, but not official, in La Franja, in Aragon. In addition, it is spoken by many residents of Alghero, on the island of Sardinia, and it is co-official in that city. Galician, with more than a million native speakers, is official together with Spanish in Galicia, and has legal recognition in neighbouring territories in Castilla y León. A few other languages have official recognition on a regional or otherwise limited level; for instance, Asturian and Aragonese in Spain; Mirandese in Portugal; Friulan, Sardinian and Franco - Provençal in Italy; and Romansh in Switzerland.
The remaining Romance languages survive mostly as spoken languages for informal contact. National governments have historically viewed linguistic diversity as an economic, administrative or military liability, as well as a potential source of separatist movements; therefore, they have generally fought to eliminate it, by extensively promoting the use of the official language, restricting the use of the "other '' languages in the media, characterizing them as mere "dialects '', or even persecuting them. As a result, all of these languages are considered endangered to varying degrees according to the UNESCO Red Book of Endangered Languages, ranging from "vulnerable '' (e.g. Sicilian and Venetian) to "severely endangered '' (Arpitan, most of the Occitan varieties). Since the late twentieth and early twenty - first centuries, increased sensitivity to the rights of minorities has allowed some of these languages to start recovering their prestige and lost rights. Yet it is unclear whether these political changes will be enough to reverse the decline of minority Romance languages.
The classification of the Romance languages is inherently difficult, because most of the linguistic area is a dialect continuum, and in some cases political biases can come into play. Along with Latin (which is not included among the Romance languages) and a few extinct languages of ancient Italy, they make up the Italic branch of the Indo - European family.
There are various schemes used to subdivide the Romance languages. Three of the most common schemes are as follows:
The main subfamilies that have been proposed by Ethnologue within the various classification schemes for Romance languages are:
This controversial three - way division is made primarily based on the outcome of Vulgar Latin (Proto - Romance) vowels:
Italo - Western is in turn split along the so - called La Spezia -- Rimini Line in northern Italy, which divides the central and southern Italian languages from the so - called Western Romance languages to the north and west. The primary characteristics dividing the two are:
In fact, the reality is somewhat more complex. All of the "southeast '' characteristics apply to all languages southeast of the line, and all of the "northwest '' characteristics apply to all languages in France and (most of) Spain. However, the Gallo - Italic languages are somewhere in between. All of these languages do have the "northwest '' characteristics of lenition and loss of gemination. However:
On top of this, the ancient Mozarabic language in southern Spain, at the far end of the "northwest '' group, had the "southeast '' characteristics of lack of lenition and palatalization of / k / to / tʃ /. Certain languages around the Pyrenees (e.g. some highland Aragonese dialects) also lack lenition, and northern French dialects such as Norman and Picard have palatalization of / k / to / tʃ / (although this is possibly an independent, secondary development, since / k / between vowels, i.e. when subject to lenition, developed to / dz / rather than / dʒ /, as would be expected for a primary development).
The usual solution to these issues is to create various nested subgroups. Western Romance is split into the Gallo - Iberian languages, in which lenition happens and which include nearly all the Western Romance languages, and the Pyrenean - Mozarabic group, which includes the remaining languages without lenition (and is unlikely to be a valid clade; probably at least two clades, one for Mozarabic and one for Pyrenean). Gallo - Iberian is split in turn into the Iberian languages (e.g. Spanish and Portuguese), and the larger Gallo - Romance languages (stretching from eastern Spain to northeast Italy).
Probably a more accurate description, however, would be to say that there was a focal point of innovation located in central France, from which a series of innovations spread out as areal changes. The La Spezia -- Rimini Line represents the farthest point to the southeast that these innovations reached, corresponding to the northern chain of the Apennine Mountains, which cuts straight across northern Italy and forms a major geographic barrier to further language spread.
This would explain why some of the "northwest '' features (almost all of which can be characterized as innovations) end at differing points in northern Italy, and why some of the languages in geographically remote parts of Spain (in the south, and high in the Pyrenees) are lacking some of these features. It also explains why the languages in France (especially standard French) seem to have innovated earlier and more completely than other Western Romance languages.
Many of the "southeast '' features also apply to the Eastern Romance languages (particularly, Romanian), despite the geographic discontinuity. Examples are lack of lenition, maintenance of intertonic vowels, use of vowel - changing plurals, and palatalization of / k / to / tʃ /. (Gemination is missing, which may be an independent development, and / kt / develops into / pt / rather than either of the normal Italo - Western developments.) This has led some researchers to postulate a basic two - way East - West division, with the "Eastern '' languages including Romanian and central and southern Italian.
Despite being the first Romance language to evolve from Vulgar Latin, Sardinian does not fit well at all into this sort of division. It is clear that Sardinian became linguistically independent from the remainder of the Romance languages at an extremely early date, possibly already by the first century BC. Sardinian contains a large number of archaic features, including total lack of palatalization of /k/ and /g/ and a large amount of vocabulary preserved nowhere else, including some items already archaic by the time of Classical Latin (first century BC). Sardinian has plurals in /s/, but post-vocalic lenition of voiceless consonants is normally limited to the status of an allophonic rule (e.g. (k)ane 'dog' but su (g)ane or su (ɣ)ane 'the dog'), and there are a few innovations unseen elsewhere, such as a change of /au/ to /a/. Use of su < ipsum as an article is a retained archaic feature that also exists in the Catalan of the Balearic Islands and that used to be more widespread in Occitano-Romance, and is known as the article salat (literally the "salted article"), while Sardinian shares delabialization of earlier /kw/ and /gw/ with Romanian: Sard. abba, Rom. apă 'water'; Sard. limba, Rom. limbă 'language' (cf. Italian acqua, lingua).
Gallo - Romance can be divided into the following subgroups:
The following groups are also sometimes considered part of Gallo - Romance:
The Gallo - Romance languages are generally considered the most innovative (least conservative) among the Romance languages. Characteristic Gallo - Romance features generally developed earliest and appear in their most extreme manifestation in the Langue d'oïl, gradually spreading out along riverways and transalpine roads.
In some ways, however, the Gallo - Romance languages are conservative. The older stages of many of the languages preserved a two - case system consisting of nominative and oblique, fully marked on nouns, adjectives and determiners, inherited almost directly from the Latin nominative and accusative and preserving a number of different declensional classes and irregular forms. The languages closest to the oïl epicenter preserve the case system the best, while languages at the periphery lose it early.
Notable characteristics of the Gallo - Romance languages are:
Some Romance languages have developed varieties which seem dramatically restructured as to their grammars or to be mixtures with other languages. It is not always clear whether they should be classified as Romance, pidgins, creole languages, or mixed languages. Some other languages, such as English, are sometimes thought of as creoles of semi-Romance ancestry. There are several dozens of creoles of French, Spanish, and Portuguese origin, some of them spoken as national languages in former European colonies.
Latin and the Romance languages have also served as the inspiration and basis of numerous auxiliary and constructed languages, so - called "neo-romantic languages ''.
The concept was first developed in 1903 by Italian mathematician Giuseppe Peano, under the title Latino sine flexione. He wanted to create a naturalistic international language, as opposed to an autonomous constructed language like Esperanto or Volapuk which were designed for maximal simplicity of lexicon and derivation of words. Peano used Latin as the base of his language, because at the time of his flourishing it was the de facto international language of scientific communication.
Other languages developed since include Idiom Neutral, Occidental, Lingua Franca Nova, and most famously and successfully, Interlingua. Each of these languages has attempted to varying degrees to achieve a pseudo-Latin vocabulary as common as possible to living Romance languages.
There are also languages created for artistic purposes only, such as Talossan. Because Latin is a very well attested ancient language, some amateur linguists have even constructed Romance languages that mirror real languages that developed from other ancestral languages. These include Brithenig (which mirrors Welsh), Breathanach (mirrors Irish), Wenedyk (mirrors Polish), Þrjótrunn (mirrors Icelandic), and Helvetian (mirrors German).
Romance languages have a number of shared features across all languages:
The most significant changes between Classical Latin and Proto - Romance (and hence all the modern Romance languages) relate to the reduction or loss of the Latin case system, and the corresponding syntactic changes that were triggered.
The case system was drastically reduced from the vigorous six - case system of Latin. Although four cases can be constructed for Proto - Romance nouns (nominative, accusative, combined genitive / dative, and vocative), the vocative is marginal and present only in Romanian (where it may be an outright innovation), and of the remaining cases, no more than two are present in any one language. Romanian is the only modern Romance language with case marking on nouns, with a two - way opposition between nominative / accusative and genitive / dative. Some of the older Gallo - Romance languages (in particular, Old French, Old Occitan, Old Sursilvan and Old Friulian, and in traces Old Catalan and Old Venetian) had an opposition between nominative and general oblique, and in Ibero - Romance languages, such as Spanish and Portuguese, as well as in Italian (see under Case), a couple of examples are found which preserve the old nominative. As in English, case is preserved better on pronouns.
Concomitant with the loss of cases, freedom of word order was greatly reduced. Classical Latin had a generally verb - final (SOV) but overall quite free word order, with a significant amount of word scrambling and mixing of left - branching and right - branching constructions. The Romance languages eliminated word scrambling and nearly all left - branching constructions, with most languages developing a rigid SVO, right - branching syntax. (Old French, however, had a freer word order due to the two - case system still present, as well as a predominantly verb - second word order developed under the influence of the Germanic languages.) Some freedom, however, is allowed in the placement of adjectives relative to their head noun. In addition, some languages (e.g. Spanish, Romanian) have an "accusative preposition '' (Romanian pe, Spanish "personal a '') along with clitic doubling, which allows for some freedom in ordering the arguments of a verb.
The Romance languages developed grammatical articles where Latin had none. Articles are often introduced around the time a robust case system falls apart in order to disambiguate the remaining case markers (which are usually too ambiguous by themselves) and to serve as parsing clues that signal the presence of a noun (a function that used to be served by the case endings themselves).
This was the pattern followed by the Romance languages: In the Romance languages that still preserved a functioning nominal case system (e.g., Romanian and Old French), only the combination of article and case ending serves to uniquely identify number and case (compare the similar situation in modern German). All Romance languages have a definite article (originally developed from ipse "self '' but replaced in nearly all languages by ille "that (over there) '') and an indefinite article (developed from ūnus "one ''). Many also have a partitive article (dē "of '' + definite article).
Latin had a large number of syntactic constructions expressed through infinitives, participles, and similar nominal constructs. Examples are the ablative absolute, the accusative - plus - infinitive construction used for reported speech, gerundive constructions, and the common use of reduced relative clauses expressed through participles. All of these are replaced in the Romance languages by subordinate clauses expressed with finite verbs, making the Romance languages much more "verbal '' and less "nominal '' than Latin. Under the influence of the Balkan sprachbund, Romanian has progressed the furthest, largely eliminating the infinitive. (It is being revived, however, due to the increasing influence of other Romance languages.)
Every language has a different set of vowels from every other. Common characteristics are as follows:
Most Romance languages have similar sets of consonants. The following is a combined table of the consonants of the five major Romance languages (French, Spanish, Italian, Portuguese, Romanian).
Most instances of most of the sounds below that occur (or used to occur, as described above) in all of the languages are cognate. However:
Word stress was rigorously predictable in classical Latin except in a very few exceptional cases, either on the penultimate syllable (second from last) or antepenultimate syllable (third from last), according to the syllable weight of the penultimate syllable. Stress in the Romance Languages mostly remains on the same syllable as in Latin, but various sound changes have made it no longer so predictable. Minimal pairs distinguished only by stress exist in some languages, e.g. Italian Papa (ˈpa. pa) "Pope '' vs. papà (pa. ˈpa) "daddy '', or Spanish límite (ˈli. mi. te) "(a) limit '', present subjunctive limite (li. ˈmi. te) "(that) (I / he) limit '' and preterite limité (li. mi. ˈte) "(I) limited ''.
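The classical Latin rule referred to here is simple enough to state procedurally. The sketch below is my own illustration (not from the source): given a word's syllables and whether its penultimate syllable is heavy, it returns the position of the stress under the classical rule, using two words (amāre, fabrica) that come up in the following paragraph.

```python
# Minimal sketch of the classical Latin stress rule: stress the penult if it is
# heavy, otherwise the antepenult; words of one or two syllables take initial
# stress. Syllable weight must be supplied by the caller ("heavy" = long vowel/
# diphthong or closed syllable).

def latin_stress(syllables, penult_is_heavy):
    """Return the 0-based index of the stressed syllable."""
    n = len(syllables)
    if n <= 2:
        return 0                          # mono- and disyllables: initial stress
    return n - 2 if penult_is_heavy else n - 3

# amāre "to love": heavy penult (long ā) -> a-MĀ-re
print(latin_stress(["a", "mā", "re"], penult_is_heavy=True))     # 1
# fabrica "factory": light penult -> FA-bri-ca (antepenultimate stress)
print(latin_stress(["fa", "bri", "ca"], penult_is_heavy=False))  # 0
```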
Erosion of unstressed syllables following the stress has caused most Spanish and Portuguese words to have either penultimate or ultimate stress: e.g. Latin trēdecim "thirteen '' > Spanish trece, Portuguese treze; Latin amāre "to love '' > Spanish / Portuguese amar. Most words with antepenultimate stress are learned borrowings from Latin, e.g. Spanish / Portuguese fábrica "factory '' (the corresponding inherited word is Spanish fragua, Portuguese frágua "forge ''). This process has gone even farther in French, with deletion of all post-stressed vowels, leading to consistent, predictable stress on the last syllable: e.g. Latin Stephanum "Stephen '' > Old French Estievne > French Étienne / e. ˈtjɛn /; Latin juvenis "young '' > Old French juevne > French jeune / ʒœn /. This applies even to borrowings: e.g. Latin fabrica > French borrowing fabrique / fa. ˈbʀik / (the inherited word in this case being monosyllabic forge < Pre-French * fauriga).
Other than French (with consistent final stress), the position of the stressed syllable generally falls on one of the last three syllables. Exceptions may be caused by clitics or (in Italian) certain verb endings, e.g. Italian telefonano (teˈlɛ.fo.na.no) "they telephone ''; Spanish entregándomelo (en. tɾe. ˈɣan. do. me. lo) "delivering it to me ''; Italian mettiamocene (meˈtːjaː.mo.tʃe.ne) "let 's put some of it in there ''; Portuguese dávamo - vo - lo (ˈda. vɐ.mu.vu.lu) "we were giving it to you ''. Stress on verbs is almost completely predictable in Spanish and Portuguese, but less so in Italian.
Nouns, adjectives, and pronouns can be marked for gender, number and case. Adjectives and pronouns must agree in all features with the noun they are bound to.
The Romance languages inherited from Latin two grammatical numbers, singular and plural; the only trace of a dual number comes from Latin ambō > Spanish and Portuguese ambos, Old Romanian îmbi > Romanian ambii, Old French ambe, Italian ambedue, entrambi.
Most Romance languages have two grammatical genders, masculine and feminine. The gender of animate nouns is generally natural (i.e. nouns referring to men are generally masculine, and vice versa), but for nonanimate nouns it is arbitrary.
Although Latin had a third gender (neuter), there is little trace of this in most languages. The biggest exception is Romanian, where there is a productive class of "neuter '' nouns, which include the descendants of many Latin neuter nouns and which behave like masculines in the singular and feminines in the plural, both in the endings used and in the agreement of adjectives and pronouns (e.g. un deget "one finger '' vs. două degete "two fingers '', cf. Latin digitus, pl. digiti).
Such nouns arose because of the identity of the Latin neuter singular - um with the masculine singular, and the identity of the Latin neuter plural - a with the feminine singular. A similar class exists in Italian, although it is no longer productive (e.g. il dito "the finger '' vs. le dita "the fingers '', l'uovo "the egg '' vs. le uova "the eggs ''). A similar phenomenon may be observed in Albanian (which is heavily Romance - influenced), and the category remains highly productive with a number of new words loaned or coined in the neuter ((një) hotel one hotel (m) vs. (tri) hotele three hotels (f)). (A few isolated nouns in Latin had different genders in the singular and plural, but this was an unrelated phenomenon; this is similarly the case with a few French nouns, such as amour, délice, orgue.)
Spanish also has vestiges of the neuter in the demonstrative adjectives: esto, eso, aquello, the pronoun ello (meaning "it '') and the article lo (used to intensify adjectives). Portuguese also has neuter demonstrative adjectives: "isto '', "isso '', "aquilo '' (meaning "this (near me) '', "this / that (near you) '', "that (far from the both of us) '').
Remnants of the neuter, interpretable now as "a sub-class of the non-feminine gender '' (Haase 2000: 233), are vigorous in Italy in an area running roughly from Ancona to Matera and just north of Rome to Naples. Oppositions with masculine typically have been recategorized, so that neuter signifies the referent in general, while masculine indicates a more specific instance, with the distinction marked by the definite article. In Southeast Umbrian, for example, neuter lo pane is ' the bread ', while masculine lu pane refers to an individual piece or loaf of bread. Similarly, neuter lo vinu is wine in general, while masculine lu vinu is a specific sort of wine, with the consequence that mass lo vinu has no plural counterpart, but lu vinu can take a sortal plural form li vini, referring to different types of wine. Phonological forms of articles vary by locale.
Latin had an extensive case system, where all nouns were declined in six cases (nominative, vocative, accusative, dative, genitive, and ablative) and two numbers. Many adjectives were additionally declined in three genders, leading to a possible 6 × 2 × 3 = 36 endings per adjective (although this was rarely the case). In practice, some category combinations had identical endings to other combinations, but a basic adjective like bonus "good '' still had 14 distinct endings.
In all Romance languages, this system was drastically reduced. In most modern Romance languages, in fact, case is no longer marked at all on nouns, adjectives and determiners, and most forms are derived from the Latin accusative case. Much like English, however, case has survived somewhat better on pronouns.
Most pronouns have distinct nominative, accusative, genitive and possessive forms (cf. English "I, me, mine, my ''). Many also have a separate dative form, a disjunctive form used after prepositions, and (in some languages) a special form used with the preposition con "with '' (a conservative feature inherited from Latin forms such as mēcum, tēcum, nōbīscum).
The system of inflectional classes is also drastically reduced. The basic system is most clearly indicated in Spanish, where there are only three classes, corresponding to the first, second and third declensions in Latin: plural in - as (feminine), plural in - os (masculine), plural in - es (either masculine or feminine). The singular endings exactly track the plural, except the singular - e is dropped after certain consonants.
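As a rough illustration of these three classes (my own sketch, not a rule set taken from the source), a regular Spanish noun can be pluralized by looking only at its ending; orthographic side effects such as accent shifts are ignored here.

```python
# Sketch of the three basic Spanish inflectional classes described above:
# plural in -as, -os, or -es, with the singular tracking the plural except
# that the singular -e is dropped after certain consonants.

def spanish_plural(singular):
    """Return the plural of a regular Spanish noun by its inflectional class."""
    if singular.endswith(("a", "o", "e")):
        return singular + "s"      # casa -> casas, lobo -> lobos, padre -> padres
    return singular + "es"         # singular dropped the -e: ciudad -> ciudades

for noun in ["casa", "lobo", "padre", "ciudad", "mes"]:
    print(noun, "->", spanish_plural(noun))
```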
The same system underlies many other modern Romance languages, such as Portuguese, French and Catalan. In these languages, however, further sound changes have resulted in various irregularities. In Portuguese, for example, loss of /l/ and /n/ between vowels (with nasalization in the latter case) produces various irregular plurals (nação -- nações "nation(s)"; hotel -- hotéis "hotel(s)").
In French and Catalan, loss of / o / and / e / in most unstressed final syllables has caused the - os and - es classes to merge. In French, merger of remaining / e / with final / a / into (ə), and its subsequent loss, has completely obscured the original Romance system, and loss of final / s / has caused most nouns to have identical pronunciation in singular and plural, although they are still marked differently in spelling (e.g. femme -- femmes "woman -- women '', both pronounced / fam /).
Noun inflection has survived in Romanian somewhat better than elsewhere. Determiners are still marked for two cases (nominative / accusative and genitive / dative) in both singular and plural, and feminine singular nouns have separate endings for the two cases. In addition, there is a separate vocative case, enriched with native development and Slavic borrowings (see some examples here) and the combination of noun with a following clitic definite article produces a separate set of "definite '' inflections for nouns.
The inflectional classes of Latin have also survived more in Romanian than elsewhere, e.g. om -- oameni "man -- men '' (Latin homo -- homines); corp -- corpuri "body -- bodies '' (Latin corpus -- corpora). (Many other exceptional forms, however, are due to later sound changes or analogy, e.g. casă -- case "house (s) '' vs. lună -- luni "moon (s) ''; frate -- fraţi "brother (s) '' vs. carte -- cărţi "book (s) '' vs. vale -- văi "valley (s) ''.)
In Italian, the situation is somewhere in between Spanish and Romanian. There are no case endings and relatively few classes, as in Spanish, but noun endings are generally formed with vowels instead of / s /, as in Romanian: amico -- amici "friend (s) (masc.) '', amica -- amiche "friend (s) (fem.) ''; cane -- cani "dog (s) ''. The masculine plural amici is thought to reflect the Latin nominative plural - ī rather than accusative plural - ōs (Spanish - os); however, the other plurals are thought to stem from special developments of Latin - ās and - ēs.
A different type of noun inflection survived into the medieval period in a number of western Romance languages (Old French, Old Occitan, and the older forms of a number of Rhaeto - Romance languages). This inflection distinguished nominative from oblique, grouping the accusative case with the oblique, rather than with the nominative as in Romanian.
The oblique case in these languages generally inherits from the Latin accusative; as a result, masculine nouns have distinct endings in the two cases while most feminine nouns do not.
A number of different inflectional classes are still represented at this stage. For example, the difference in the nominative case between masculine li voisins "the neighbor '' and li pere "the father '', and feminine la riens "the thing '' vs. la fame "the woman '', faithfully reflects the corresponding Latin inflectional differences (vicīnus vs. pater, fēmina vs. rēs).
A number of synchronically quite irregular differences between nominative and oblique reflect direct inheritances of Latin third-declension nouns with two different stems (one for the nominative singular, one for all other forms), most of which had a stress shift between the nominative and the other forms: li ber -- le baron "baron" (barō -- barōnem); la suer -- la seror "sister" (soror -- sorōrem); li prestre -- le prevoire "priest" (presbyter -- presbyterem); li sire -- le seigneur "lord" (senior -- seniōrem); li enfes -- l'enfant "child" (infāns -- infantem).
A few of these multi-stem nouns derive from Latin forms without stress shift, e.g. li om -- le ome "man '' (homō -- hominem). All of these multi-stem nouns refer to people; other nouns with stress shift in Latin (e.g. amor -- amōrem "love '') have not survived. Some of the same nouns with multiple stems in Old French or Old Occitan have come down in Italian in the nominative rather than the accusative (e.g. uomo "man '' < homō, moglie "wife '' < mulier), suggesting that a similar system existed in pre-literary Italian.
The modern situation in Sursilvan (one of the Rhaeto - Romance languages) is unique in that the original nominative / oblique distinction has been reinterpreted as a predicative / attributive distinction:
As described above, case marking on pronouns is much more extensive than for nouns. Determiners (e.g. words such as "a '', "the '', "this '') are also marked for case in Romanian.
Most Romance languages have the following sets of pronouns and determiners:
Unlike in English, a separate neuter personal pronoun ("it '') generally does not exist, but the third - person singular and plural both distinguish masculine from feminine. Also, as described above, case is marked on pronouns even though it is not usually on nouns, similar to English. As in English, there are forms for nominative case (subject pronouns), oblique case (object pronouns), and genitive case (possessive pronouns); in addition, third - person pronouns distinguish accusative and dative. There is also an additional set of possessive determiners, distinct from the genitive case of the personal pronoun; this corresponds to the English difference between "my, your '' and "mine, yours ''.
The Romance languages do not retain the Latin third - person personal pronouns, but have innovated a separate set of third - person pronouns by borrowing the demonstrative ille ("that (over there) ''), and creating a separate reinforced demonstrative by attaching a variant of ecce "behold! '' (or "here is... '') to the pronoun.
Similarly, in place of the genitive of the Latin pronouns, most Romance languages adopted the reflexive possessive, which then serves indifferently as both reflexive and non-reflexive possessive. Note that the reflexive, and hence the third - person possessive, is unmarked for the gender of the person being referred to. Hence, although gendered possessive forms do exist -- e.g. Portuguese seu (masc.) vs. sua (fem.) -- these refer to the gender of the object possessed, not the possessor.
The gender of the possessor needs to be made clear by a collocation such as French la voiture à lui / elle, Portuguese o carro dele / dela, literally "the car of him / her ''. (In spoken Brazilian Portuguese, these collocations are the usual way of expressing the third - person possessive, since the former possessive seu carro now has the meaning "your car ''.)
The same demonstrative ille was borrowed to create the definite article (see below), which explains the similarity in form between personal pronoun and definite article. When the two are different, it is usually because of differing degrees of phonetic reduction. Generally, the personal pronoun is unreduced (beyond normal sound change), while the article has suffered various amounts of reduction, e.g. Spanish ella "she '' < illa vs. la "the (fem.) '' < - la < illa.
Object pronouns in Latin were normal words, but in the Romance languages they have become clitic forms, which must stand adjacent to a verb and merge phonologically with it. Originally, object pronouns could come either before or after the verb; sound change would often produce different forms in these two cases, with numerous additional complications and contracted forms when multiple clitic pronouns cooccurred.
Catalan still largely maintains this system with a highly complex clitic pronoun system. Most languages, however, have simplified this system by undoing some of the clitic mergers and requiring clitics to stand in a particular position relative to the verb (usually after imperatives, before other finite forms, and either before or after non-finite forms depending on the language).
When a pronoun can not serve as a clitic, a separate disjunctive form is used. These result from dative object pronouns pronounced with stress (which causes them to develop differently from the equivalent unstressed pronouns), or from subject pronouns.
Most Romance languages are null subject languages. The subject pronouns are used only for emphasis and take the stress, and as a result are not clitics. In French, however (as in Friulian and in some Gallo - Italian languages of northern Italy), verbal agreement marking has degraded to the point that subject pronouns have become mandatory, and have turned into clitics. These forms can not be stressed, so for emphasis the disjunctive pronouns must be used in combination with the clitic subject forms. Friulian and the Gallo - Italian languages have actually gone further than this and merged the subject pronouns onto the verb as a new type of verb agreement marking, which must be present even when there is a subject noun phrase. (Some non-standard varieties of French treat disjunctive pronouns as arguments and clitic pronouns as agreement markers.)
In medieval times, most Romance languages developed a distinction between familiar and polite second - person pronouns (a so - called T-V distinction), similar to the former English distinction between familiar "thou '' and polite "you ''. This distinction was determined by the relationship between the speakers. As in English, this generally developed by appropriating the plural second - person pronoun to serve in addition as a polite singular. French is still at this stage, with familiar singular tu vs. formal or plural vous. In cases like this, the pronoun requires plural agreement in all cases whenever a single affix marks both person and number (as in verb agreement endings and object and possessive pronouns), but singular agreement elsewhere where appropriate (e.g. vous - même "yourself '' vs. vous - mêmes "yourselves '').
Many languages, however, innovated further in developing an even more polite pronoun, generally composed of a noun phrase (e.g. Portuguese vossa mercê "your mercy '', progressively reduced to vossemecê, vosmecê and finally você) and taking third - person singular agreement. A plural equivalent was created at the same time or soon after (Portuguese vossas mercês, reduced to vocês), taking third - person plural agreement. Spanish innovated similarly, with usted (es) from earlier vuestra (s) merced (es).
In Portuguese and Spanish (as in other languages with similar forms), the "extra-polite" forms in time came to be the normal polite forms, and the former polite (or plural) second-person vos was downgraded to a familiar form, either becoming a familiar plural (as in European Spanish) or a familiar singular (as in many varieties of Latin American Spanish). In the latter case, it either competes with the original familiar singular tu (as in Guatemala), displaces it entirely (as in Argentina), or is itself displaced (as in Mexico, except in Chiapas). In American Spanish, the gap created by the loss of the familiar plural vos was filled by the originally polite ustedes, with the result that there is no familiar/polite distinction in the plural, just as in the original tu/vos system.
A similar path was followed by Italian and Romanian. Romanian uses dumneavoastră "your lordship", while in Italian the former polite phrase sua eccellenza "your excellency" has simply been supplanted by the corresponding pronoun Ella or Lei (literally "she", but capitalized when meaning "you"). As in European Spanish, the original second-person plural voi serves as the familiar plural. (In Italy, during the fascist period leading up to World War II, voi was resurrected as a polite singular, and discarded again afterwards, although it remains in some southern dialects.)
Portuguese innovated again in developing a new extra-polite pronoun o senhor "the sir '', which in turn downgraded você. Hence, modern European Portuguese has a three - way distinction between "familiar '' tu, "equalizing '' você and "polite '' o senhor. (The original second - person plural vós was discarded centuries ago in speech, and is used today only in translations of the Bible, where tu and vós serve as universal singular and plural pronouns, respectively.)
Brazilian Portuguese, however, has diverged from this system, and most dialects simply use você (and plural vocês) as a general - purpose second - person pronoun, combined with te (from tu) as the clitic object pronoun. The form o senhor (and feminine a senhora) is sometimes used in speech, but only in situations where an English speaker would say "sir '' or "ma'am ''. The result is that second - person verb forms have disappeared, and the whole pronoun system has been radically realigned. However that is the case only in the spoken language of central and northern Brazil, with the northeastern and southern areas of the country still largely preserving the second - person verb form and the "tu '' and "você '' distinction.
Latin had no articles as such. The closest definite article was the non-specific demonstrative is, ea, id meaning approximately "this / that / the ''. The closest indefinite articles were the indefinite determiners aliquī, aliqua, aliquod "some (non-specific) '' and certus "a certain ''.
Romance languages have both indefinite and definite articles, but none of the above words form the basis for either of these. Usually the definite article is derived from the Latin demonstrative ille ("that ''), but some languages (e.g. Sardinian, and some dialects spoken around the Pyrenees) have forms from ipse (emphatic, as in "I myself ''). The indefinite article everywhere is derived from the number ūnus ("one '').
Some languages, e.g. French and Italian, have a partitive article that approximately translates as "some". This is used either with mass nouns or with plural nouns -- both cases where the indefinite article cannot occur. A partitive article is used (and in French, required) whenever a bare noun refers to a specific (but unspecified or unknown) quantity of the noun, but not when a bare noun refers to a class in general. For example, the partitive would be used in a sentence such as "Men arrived today", which (presumably) means "some specific men arrived today" rather than "men, as a general class, arrived today" (which would mean that there were no men before today). It would not be used in a sentence such as "I hate men", which means "I hate men, as a general class" rather than "I hate some specific men".
As in many other cases, French has developed the farthest from Latin in its use of articles. In French, nearly all nouns, singular and plural, must be accompanied by an article (either indefinite, definite, or partitive) or demonstrative pronoun.
Due to pervasive sound changes in French, most nouns are pronounced identically in the singular and plural, and there is often heavy homophony between nouns and identically pronounced words of other classes. For example, all of the following are pronounced / sɛ̃ /: sain "healthy ''; saint "saint, holy ''; sein "breast ''; ceins "(you) put on, gird ''; ceint "(he) puts on, girds ''; ceint "put on, girded ''; and the equivalent noun and adjective plural forms sains, saints, seins, ceints. The article helps identify the noun forms saint or sein, and distinguish singular from plural; likewise, the mandatory subject of verbs helps identify the verb ceint. In more conservative Romance languages, neither articles nor subject pronouns are necessary, since all of the above words are pronounced differently. In Italian, for example, the equivalents are sano, santo, seno, cingi, cinge, cinto, sani, santi, seni, cinti, where all vowels and consonants are pronounced as written, and ⟨ s ⟩ / s / and ⟨ c ⟩ / t͡ʃ / are clearly distinct from each other.
Latin, at least originally, had a three - way distinction among demonstrative pronouns distinguished by distal value: hic ' this ', iste ' that (near you) ', ille ' that (over there) ', similar to the distinction that used to exist in English as "this '' vs. "that '' vs. "yon (der) ''. In urban Latin of Rome, iste came to have a specifically derogatory meaning, but this innovation apparently did not reach the provinces and is not reflected in the modern Romance languages. A number of these languages still have such a three - way distinction, although hic has been lost and the other pronouns have shifted somewhat in meaning. For example, Spanish has este "this '' vs. ese "that (near you) '' vs. aquel (fem. aquella) "that (over yonder) ''. The Spanish pronouns derive, respectively, from Latin iste ipse accu - ille, where accu - is an emphatic prefix derived from eccum "behold (it!) '' (still vigorous in Italy as Ecco! ' Behold! '), possibly with influence from atque "and ''.
Reinforced demonstratives such as accu - ille arose as ille came to be used as an article as well as a demonstrative. Such forms were often created even when not strictly needed to distinguish otherwise ambiguous forms. Italian, for example, has both questo "this '' (eccu - istum) and quello "that '' (eccu - illum), in addition to dialectal codesto "that (near you) '' (* eccu - tē - istum). French generally prefers forms derived from bare ecce "behold '', as in the pronoun ce "this one / that one '' (earlier ço, from ecce - hoc; cf. Italian ciò ' that ') and the determiner ce / cet "this / that '' (earlier cest, from ecce - istum).
Reinforced forms are likewise common in locative adverbs (words such as English here and there), based on related Latin forms such as hic "this '' vs. hīc "here '', hāc "this way '', and ille "that '' vs. illīc "there '', illāc "that way ''. Here again French prefers bare ecce while Spanish and Italian prefer eccum (French ici "here '' vs. Spanish aquí, Italian qui). In western languages such as Spanish, Portuguese and Catalan, doublets and triplets arose such as Portuguese aqui, acá, cá "(to) here '' (accu - hīc, accu - hāc, eccu - hāc). From these, a prefix a - was extracted, from which forms like aí "there (near you) '' (a - (i) bi) and ali "there (over yonder) '' (a - (i) llīc) were created; compare Catalan neuter pronouns açò (acce - hoc) "this '', això (a - (i) psum - hoc) "that (near you) '', allò (a - (i) llum - hoc) "that (yonder) ''.
Subsequent changes often reduced the number of demonstrative distinctions. Standard Italian, for example, has only a two - way distinction "this '' vs. "that '', as in English, with second - person and third - person demonstratives combined. In Catalan, however, a former three - way distinction aquest, aqueix, aquell has been reduced differently, with first - person and second - person demonstratives combined. Hence aquest means either "this '' or "that (near you) ''; on the phone, aquest is used to refer both to speaker and addressee.
Old French had a similar distinction to Italian (cist / cest vs. cil / cel), both of which could function as either adjectives or pronouns. Modern French, however, has no distinction between "this '' and "that '': ce / cet, cette < cest, ceste is only an adjective, and celui, celle < cel lui, celle is only a pronoun, and both forms indifferently mean either "this '' or "that ''. (The distinction between "this '' and "that '' can be made, if necessary, by adding the suffixes - ci "here '' or - là "there '', e.g. cette femme - ci "this woman '' vs. cette femme - là "that woman '', but this is rarely done except when specifically necessary to distinguish two entities from each other.)
Verbs have many conjugations; in most languages these mark person, number, tense, aspect and mood, alongside non-finite forms such as infinitives, participles and gerunds.
Several tenses and aspects, especially of the indicative mood, have been preserved with little change in most languages, as shown in the following table for the Latin verb dīcere (to say), and its descendants.
The main tense and mood distinctions that were made in classical Latin are generally still present in the modern Romance languages, though many are now expressed through compound rather than simple verbs. The passive voice, which was mostly synthetic in classical Latin, has been completely replaced with compound forms.
For a more detailed illustration of how the verbs have changed with respect to classical Latin, see Romance verbs.
Note that in Catalan, the synthetic preterite is predominantly a literary tense, except in Valencian; but an analytic preterite (formed using an auxiliary vadō, which in other languages signals the future) persists in speech, with the same meaning. In Portuguese, a morphological present perfect does exist but has a different meaning (closer to "I have been doing '').
The following are common features of the Romance languages (inherited from Vulgar Latin) that are different from Classical Latin:
Romance languages have borrowed heavily, though mostly from other Romance languages. However, some, such as Spanish, Portuguese, Romanian, and French, have borrowed heavily from other language groups. Vulgar Latin borrowed first from indigenous languages of the Roman empire, and during the Germanic folk movements, from Germanic languages, especially Gothic; for Eastern Romance languages, during Bulgarian Empires, from Slavic languages, especially Bulgarian. Notable examples are * blancus "white '', replacing native albus (but Romansh alv, Dalmatian jualb, Romanian alb); * guerra "war '', replacing native bellum; and the words for the cardinal directions, where cognates of English "north '', "south '', "east '' and "west '' replaced the native words septentriō, merīdiēs (also "noon; midday nap ''; cf. Romanian meriză), oriens, and occidens. (See History of French -- The Franks.) Some Celtic words were incorporated into the core vocabulary, partly for words with no Latin equivalent (betulla "birch '', camisia "shirt '', cerevisia "beer ''), but in some cases replacing Latin vocabulary (gladius "sword '', replacing ensis; cambiāre "to exchange '', replacing mūtāre except in Romanian and Portuguese; carrus "cart '', replacing currus; pettia "piece '', largely displacing pars (later resurrected) and eliminating frustum). Many Greek loans also entered the lexicon, e.g. spatha "sword '' (Greek: σπάθη spáthē, replacing gladius which shifted to "iris '', cf. French épée, Spanish espada, Italian spada and Romanian spată); cara "face '' (Greek: κάρα kára, partly replacing faciēs); colpe "blow '' (Greek: κόλαφος kólaphos, replacing ictus, cf. Spanish golpe, French coup); cata "each '' (Greek: κατά katá, replacing quisque); common suffixes * - ijāre / - izāre (Greek: - ίζειν - izein, French oyer / - iser, Spanish - ear / - izar, Italian - eggiare / - izzare, etc.), - ista (Greek: - ιστής - istes).
Many basic nouns and verbs, especially those that were short or had irregular morphology, were replaced by longer derived forms with regular morphology. Nouns, and sometimes adjectives, were often replaced by diminutives, e.g. auris "ear '' > auricula (orig. "outer ear '') > oricla (Sardinian origra, Italian orecchia / o, Portuguese orelha, etc.); avis "bird '' > avicellus (orig. "chick, nestling '') > aucellu (Occitan aucèl, Friulan ucel, French oiseau, etc.); caput "head '' > capitium (Portuguese cabeça, Spanish cabeza, French chevet "headboard ''; but reflexes of caput were retained also, sometimes without change of meaning, as in Italian capo "head '', alongside testa); vetus "old '' > vetulus > veclus (Dalmatian vieklo, Italian vecchio, Portuguese velho, etc.). Sometimes augmentative constructions were used instead: piscis "fish '' > Old French peis > peisson (orig. "big fish '') > French poisson. Verbs were often replaced by frequentative constructions: canere "to sing '' > cantāre; iacere "to throw '' > iactāre > * iectāre (Italian gettare, Portuguese jeitar, Spanish echar, etc.); iuvāre > adiūtāre (Italian aiutare, Spanish ayudar, French aider, etc., meaning "help '', alongside e.g. iuvāre > Italian giovare "to be of use ''); vēnārī "hunt '' (Romanian "vâna '', Aromanian "avin, avinari '') > replaced by * captiāre "to hunt '', frequentative of capere "to seize '' (Italian cacciare, Portuguese caçar, Romansh catschar, French chasser, etc.).
Many Classical Latin words became archaic or poetic and were replaced by more colloquial terms: equus "horse '' > caballus (orig. "nag '') (but equa "mare '' remains, cf. Spanish yegua, Portuguese égua, Sardinian ebba, Romanian iapă); domus "house '' > casa (orig. "hut ''); ignis "fire '' > focus (orig. "hearth ''); strāta "street '' > rūga (orig. "furrow '') or callis (orig. "footpath '') (but strāta is continued in Italian strada, Romanian stradă and secondarily in e.g. Spanish / Portuguese estrada "causeway, paved road ''). In some cases, terms from common occupations became generalized: invenīre "to find '' replaced by Ibero - Romance afflāre (orig. "to sniff out '', in hunting, cf. Spanish hallar, Portuguese achar, Romanian afla (to find out)); advenīre "to arrive '' gave way to Ibero - Romance plicāre (orig. "to fold (sails; tents) '', cf. Spanish llegar, Portuguese chegar; Romanian pleca), elsewhere arripāre (orig. "to harbor at a riverbank '', cf. Italian arrivare, French arriver) (advenīre is continued with the meaning "to achieve, manage to do '' as in Middle French aveindre, or "to happen '' in Italian avvenire). The same thing sometimes happened to religious terms, due to the pervasive influence of Christianity: loquī "to speak '' succumbed to parabolāre (orig. "to tell parables '', cf. Occitan parlar, French parler, Italian parlare) or fabulārī (orig. "to tell stories '', cf. Spanish hablar, Portuguese falar), based on Jesus ' way of speaking in parables.
Many prepositions were used as verbal particles to make new roots and verb stems, e.g. Italian estrarre, Aromanian astragu, astradziri "to extract '' from Latin ex - "out of '' and trahere "to pull '' (Italian trarre "draw, pull ''), or to augment already existing words, e.g. French coudre, Italian cucire, Portuguese coser "to sew '', from cōnsuere "to sew up '', from suere "to sew '', with total loss of the bare stem. Many prepositions commonly became compounded, e.g. de ex > French dès "as of '', ab ante > Italian avanti "forward ''. Some words derived from phrases, e.g. Portuguese agora, Spanish ahora "now '' < hāc hōrā "at this hour ''; French avec "with '' (prep.) < Old French avuec (adv.) < apud hoc ("near that ''); Spanish tamaño, Portuguese tamanho "size '' < tam magnum "so big ''; Italian codesto "this, that '' (near you) < Old Italian cotevesto < eccum tibi istum approx. "here 's that thing of yours ''; Portuguese você "you '' < vosmecê < vossemecê < Galician - Portuguese vossa mercee "your mercy ''.
A number of common Latin words that have disappeared in many or most Romance languages have survived either in the periphery or in remote corners (especially Sardinia and Romania), or as secondary terms, sometimes differing in meaning. For example, Latin caseum "cheese '' in the more outer places (Portuguese queijo, Spanish queso, Romansh caschiel, Sardinian càsu, Romanian caş), but in the central areas has been replaced by formāticum, originally "moulded (cheese) '' (French fromage, Occitan / Catalan formatge, Italian formaggio, with, however, cacio also available; similarly (com) edere "to eat (up) '', which survives as Spanish / Portuguese comer but elsewhere is replaced by mandūcāre, originally "to chew '' (French manger, Italian mangiare, Catalan menjar, but Spanish / Portuguese noun manjar "food '' or "uplifting meal ''). In some cases, one language happens to preserve a word displaced elsewhere, e.g. Italian ogni "each, every '' < omnes, displaced elsewhere by tōtum, originally "whole '' or by a reflex of Greek κατά (e.g. Italian ognuno, Catalan tothom "everyone ''; Italian ogni giorno, Spanish cada día "every day ''); Friulan vaî "to cry '' < flere "to weep ''; Vegliote otijemna "fishing pole '' < antenna "yardarm ''; Aromanian "sprunã '' (warm ashes) < pruna (burning coal). Sardinian even preserves some words that were already archaic in Classical Latin, e.g. àchina "grape '' < acinam, also found in Sicilian ràcina.
During the Middle Ages, scores of words were borrowed directly from Classical Latin (so - called Latinisms), either in their original form (learned loans) or in a somewhat nativized form (semi-learned loans). These resulted in many doublets -- pairs of inherited and learned words -- such as those in the table below:
Sometimes triplets arise: Latin articulus "joint '' > Portuguese artículo "joint, knuckle '' (learned), artigo "article '' (semi-learned), artelho "ankle '' (inherited; archaic and dialectal). In many cases, the learned word simply displaced the original popular word: e.g. Spanish crudo "crude, raw '' (Old Spanish cruo); French légume "vegetable '' (Old French leüm); Portuguese flor "flower '' (Galician - Portuguese chor). The learned loan always looks more like the original than the inherited word does, because regular sound change has been bypassed; and likewise, the learned word usually has a meaning closer to that of the original. In French, however, the stress of the learned loan may be on the "wrong '' syllable, whereas the stress of the inherited word always corresponds to the Latin stress: e.g. Latin vipera vs. French vipère, learned loan, and guivre / vouivre, inherited.
Borrowing from Classical Latin has produced a large number of suffix doublets. Examples from Spanish (learned form first): - ción vs. - zon; - cia vs. - za; - ificar vs. - iguar; - izar vs. - ear; - mento vs. - miento; - tud (< nominative - tūdō) vs. - dumbre (< accusative - tūdine); - ículo vs. - ejo; etc. Similar examples can be found in all the other Romance languages.
This borrowing also introduced large numbers of classical prefixes in their original form (dis -, ex -, post -, trans -) and reinforced many others (re -, popular Spanish / Portuguese des - < dis -, popular French dé - < dis -, popular Italian s - < ex -). Many Greek prefixes and suffixes (hellenisms) also found their way into the lexicon: tele -, poli - / poly -, meta -, pseudo -, - scope / scopo, - logie / logia / logía, etc.
Significant sound changes affected the consonants of the Romance languages.
There was a tendency to eliminate final consonants in Vulgar Latin, either by dropping them (apocope) or adding a vowel after them (epenthesis).
Many final consonants were rare, occurring only in certain prepositions (e.g. ad "towards '', apud "at, near (a person) ''), conjunctions (sed "but ''), demonstratives (e.g. illud "that (over there) '', hoc "this ''), and nominative singular noun forms, especially of neuter nouns (e.g. lac "milk '', mel "honey '', cor "heart ''). Many of these prepositions and conjunctions were replaced by others, while the nouns were regularized into forms based on their oblique stems that avoided the final consonants (e.g. * lacte, * mele, * core).
Final - m was dropped in Vulgar Latin. Even in Classical Latin, final - am, - em, - um (inflectional suffixes of the accusative case) were often elided in poetic meter, suggesting the m was weakly pronounced, probably marking the nasalisation of the vowel before it. This nasal vowel lost its nasalization in the Romance languages except in monosyllables, where it became / n / e.g. Spanish quien < quem "whom '', French rien "anything '' < rem "thing ''; note especially French and Catalan mon < meum "my (m.sg.) '' pronounced as one syllable (/ meu̯m / > * / meu̯n /, / mun /) but Spanish mío and Portuguese and Catalan meu < meum pronounced as two (/ ˈme. um / > * / ˈme. o /).
As a result, only the following final consonants occurred in Vulgar Latin:
Final - t was eventually dropped in many languages, although this often occurred several centuries after the Vulgar Latin period. For example, the reflex of - t was dropped in Old French and Old Spanish only around 1100. In Old French, this occurred only when a vowel still preceded the t (generally / ə / < Latin a). Hence amat "he loves '' > Old French aime but venit "he comes '' > Old French vient: the / t / was never dropped and survives into Modern French in liaison, e.g. vient - il? "is he coming? '' / vjɛ̃ti (l) / (the corresponding / t / in aime - t - il? is analogical, not inherited). Old French also kept the third - person plural ending - nt intact.
In Italo - Romance and the Eastern Romance languages, eventually all final consonants were either dropped or protected by an epenthetic vowel, except in clitic forms (e.g. prepositions con, per). Modern Standard Italian still has almost no consonant - final words, although Romanian has resurfaced them through later loss of final / u / and / i /. For example, amās "you love '' > ame > Italian ami; amant "they love '' > * aman > Ital. amano. On the evidence of "sloppily written '' Lombardic language documents, however, the loss of final / s / in Italy did not occur until the 7th or 8th century, after the Vulgar Latin period, and the presence of many former final consonants is betrayed by the syntactic gemination (raddoppiamento sintattico) that they trigger. It is also thought that after a long vowel / s / became / j / rather than simply disappearing: nōs > noi "we '', se (d) ēs > sei "you are '', crās > crai "tomorrow '' (southern Italian). In unstressed syllables, the resulting diphthongs were simplified: canēs > / ˈkanej / > cani "dogs ''; amīcās > / aˈmikaj / > amiche / aˈmike / "(female) friends '', where nominative amīcae should produce * * amice rather than amiche (note masculine amīcī > amici not * * amichi).
Central Western Romance languages eventually regained a large number of final consonants through the general loss of final / e / and / o /, e.g. Catalan llet "milk '' < lactem, foc "fire '' < focum, peix "fish '' < piscem. In French, most of these secondary final consonants (as well as primary ones) were lost before around 1700, but tertiary final consonants later arose through the loss of / ə / < - a. Hence masculine frīgidum "cold '' > Old French freit / frwεt / > froid / fʁwa /, feminine frigidam > Old French freide / frwεdə / > froide / fʁwad /.
Palatalization was one of the most important processes affecting consonants in Vulgar Latin. This eventually resulted in a whole series of "palatal '' and postalveolar consonants in most Romance languages, e.g. Italian / ʃ /, / ʒ /, / tʃ /, / dʒ /, / ts /, / dz /, / ɲ /, / ʎ /.
The following historical stages occurred:
Note how the environments become progressively less "palatal '', and the languages affected become progressively fewer.
The outcomes of palatalization depended on the historical stage, the consonants involved, and the languages involved. The primary division is between the Western Romance languages, with / ts / resulting from palatalization of / k /, and the remaining languages (Italo - Dalmatian and Eastern Romance), with / tʃ / resulting. It is often suggested that / tʃ / was the original result in all languages, with / tʃ / > / ts / a later innovation in the Western Romance languages. Evidence of this is the fact that Italian has both / ttʃ / and / tts / as outcomes of palatalization in different environments, while Western Romance has only / (t) ts /. Even more suggestive is the fact that the Mozarabic language in al - Andalus (modern southern Spain) had / tʃ / as the outcome despite being in the "Western Romance '' area and geographically disconnected from the remaining / tʃ / areas; this suggests that Mozarabic was an outlying "relic '' area where the change / tʃ / > / ts / failed to reach. (Northern French dialects, such as Norman and Picard, also had / tʃ /, but this may be a secondary development, i.e. due to a later sound change / ts / > / tʃ /.) Note that / ts, dz, dʒ / eventually became / s, z, ʒ / in most Western Romance languages. Thus Latin caelum (sky, heaven), pronounced (ˈkai̯lu (m)) with an initial (k), became Italian cielo (ˈtʃɛlo), Romanian cer (tʃer), Spanish cielo (ˈθjelo) / (ˈsjelo), French ciel (sjɛl), Catalan cel (ˈsɛɫ), and Portuguese céu (ˈsɛw).
The outcome of palatalized / d / and / ɡ / is less clear:
This suggests that palatalized / d / > / dj / > either / j / or / dz / depending on location, while palatalized / ɡ / > / j /; after this, / j / > / (d) dʒ / in most areas, but Spanish and Gascon (originating from isolated districts behind the western Pyrenees) were relic areas unaffected by this change.
In French, the outcomes of / k, ɡ / palatalized by / e, i, j / and by / a, au / were different: centum "hundred '' > cent / sɑ̃ / but cantum "song '' > chant / ʃɑ̃ /. French also underwent palatalization of labials before / j /: Vulgar Latin / pj, bj ~ vj, mj / > Old French / tʃ, dʒ, ndʒ / (sēpia "cuttlefish '' > seiche, rubeus "red '' > rouge, sīmia "monkey '' > singe).
The original outcomes of palatalization must have continued to be phonetically palatalized even after they had developed into alveolar / postalveolar / etc. consonants. This is clear from French, where all originally palatalized consonants triggered the development of a following glide / j / in certain circumstances (most visible in the endings - āre, - ātum / ātam). In some cases this / j / came from a consonant palatalized by an adjoining consonant after the late loss of a separating vowel. For example, mansiōnātam > / masjoˈnata / > masjˈnada / > / masjˈnjæðə / > early Old French maisnieḍe / maisˈniɛðə / "household ''. Similarly, mediētātem > / mejeˈtate / > / mejˈtade / > / mejˈtæðe / > early Old French meitieḍ / mejˈtjɛθ / > modern French moitié / mwaˈtje / "half ''. In both cases, phonetic palatalization must have remained in primitive Old French at least through the time when unstressed intertonic vowels were lost (? c. 8th century), well after the fragmentation of the Romance languages.
The effect of palatalization is indicated in the writing systems of almost all Romance languages, where the letters have the "hard '' pronunciation (k, ɡ) in most situations, but a "soft '' pronunciation (e.g. French / Portuguese (s, ʒ), Italian / Romanian (tʃ, dʒ)) before ⟨ e, i, y ⟩. (This orthographic trait has passed into Modern English through Norman French - speaking scribes writing Middle English; this replaced the earlier system of Old English, which had developed its own hard - soft distinction with the soft ⟨ c, g ⟩ representing (tʃ, j ~ dʒ).) This has the effect of keeping the modern spelling similar to the original Latin spelling, but complicates the relationship between sound and letter. In particular, the hard sounds must be written differently before ⟨ e, i, y ⟩ (e.g. Italian ⟨ ch, gh ⟩, Portuguese ⟨ qu, gu ⟩), and likewise for the soft sounds when not before these letters (e.g. Italian ⟨ ci, gi ⟩, Portuguese ⟨ ç, j ⟩). Furthermore, in Spanish, Catalan, Occitan and Brazilian Portuguese, the use of digraphs containing ⟨ u ⟩ to signal the hard pronunciation before ⟨ e, i, y ⟩ means that a different spelling is also needed to signal the sounds / kw, ɡw / before these vowels (Spanish ⟨ cu, gü ⟩, Catalan, Occitan and Brazilian Portuguese ⟨ qü, gü ⟩). This produces a number of orthographic alternations in verbs whose pronunciation is entirely regular. The following are examples of corresponding first - person plural indicative and subjunctive in a number of regular Portuguese verbs: marcamos, marquemos "we mark ''; caçamos, cacemos "we hunt ''; chegamos, cheguemos "we arrive ''; averiguamos, averigüemos "we verify ''; adequamos, adeqüemos "we adapt ''; oferecemos, ofereçamos "we offer ''; dirigimos, dirijamos "we drive '' erguemos, ergamos "we raise ''; delinquimos, delincamos "we commit a crime ''. In the case of Italian, the convention of digraphs < ch > and < gh > to represent / k / and / g / before written < e, i > results in similar orthographic alternations, such as dimentico ' I forget ', dimentichi ' you forget ', baco ' worm ', bachi ' worms ' with (k) or pago ' I pay ', paghi ' you pay ' and lago ' lake ', laghi ' lakes ' with (g). The use in Italian of < ci > and < gi > to represent / tʃ / or / dʒ / before vowels written < a, o, u > neatly distinguishes dico ' I say ' with / k / from dici ' you say ' with / tʃ / or ghiro ' dormouse ' / g / and giro ' turn, revolution ' / dʒ /, but with orthographic < ci > and < gi > also representing the sequence of / tʃ / or / dʒ / and the actual vowel / i / (/ ditʃi / dici, / dʒiro / giro), and no generally observed convention of indicating stress position, the status of i when followed by another vowel in spelling can be unrecognizable. For example, the written forms offer no indication that < cia > in camicia ' shirt ' represents a single unstressed syllable / tʃa / with no / i / at any level (/ kaˈmitʃa / → (kaˈmiːtʃa) ~ (kaˈmiːʃa)), but that underlying the same spelling < cia > in farmacia ' pharmacy ' is a bisyllabic sequence of / tʃ / and stressed / i / (/ farmaˈtʃia / → (farmaˈtʃiːa) ~ (farmaˈʃiːa)).
Stop consonants shifted by lenition in Vulgar Latin.
The voiced labial consonants / b / and / w / (represented by ⟨ b ⟩ and ⟨ v ⟩, respectively) both developed a fricative (β) as an intervocalic allophone. This is clear from the orthography; in medieval times, the spelling of a consonantal ⟨ v ⟩ is often used for what had been a ⟨ b ⟩ in Classical Latin, or the two spellings were used interchangeably. In many Romance languages (Italian, French, Portuguese, Romanian, etc.), this fricative later developed into a / v /; but in others (Spanish, Galician, some Catalan and Occitan dialects, etc.) reflexes of / b / and / w / simply merged into a single phoneme.
Several other consonants were "softened '' in intervocalic position in Western Romance (Spanish, Portuguese, French, Northern Italian), but normally not phonemically in the rest of Italy (except some cases of "elegant '' or Ecclesiastical words), nor apparently at all in Romanian. The dividing line between the two sets of dialects is called the La Spezia -- Rimini Line and is one of the most important isoglosses of the Romance dialects. The changes (instances of diachronic lenition) are as follows:
Single voiceless plosives became voiced: - p -, - t -, - c - > - b -, - d -, - g -. Subsequently, in some languages they were further weakened, either becoming fricatives or approximants, (β̞), (ð̞), (ɣ ˕) (as in Spanish) or disappearing entirely (as / t / and / k /, but not / p /, in French). The following example shows progressive weakening of original / t /: e.g. vītam > Italian vita (ˈvita), Portuguese vida (ˈvidɐ) (European Portuguese (ˈviðɐ)), Spanish vida (ˈbiða) (Southern Peninsular Spanish (ˈbia)), and French vie (vi). Some have speculated that these sound changes may be due in part to the influence of Continental Celtic languages.
Consonant length is no longer phonemically distinctive in most Romance languages. However some languages of Italy (Italian, Sardinian, Sicilian, and numerous other varieties of central and southern Italy) do have long consonants like / ɡɡ /, / dd /, / bb /, / kk /, / tt /, / pp /, / ll /, / mm /, / nn /, / ss /, / rr /, etc., where the doubling indicates either actual length or, in the case of plosives and affricates, a short hold before the consonant is released, in many cases with distinctive lexical value: e.g. note / ˈnɔ. te / (notes) vs. notte / ˈnɔt. te / (night), cade / ˈka. de / (s / he, it falls) vs. cadde / ˈkad. de / (s / he, it fell), caro / ˈka. ro / (dear, expensive) vs. carro / ˈkar. ro / (cart). They may even occur at the beginning of words in Romanesco, Neapolitan, Sicilian and other southern varieties, and are occasionally indicated in writing, e.g. Sicilian cchiù (more), and ccà (here). In general, the consonants / b /, / ts /, and / dz / are long at the start of a word, while the archiphoneme R is realised as a trill / r / in the same position. In much of central and southern Italy, the affricates / t͡ʃ / and / d͡ʒ / weaken synchronically to fricative (ʃ) and (ʒ) between vowels, while their geminate congeners do not, e.g. cacio / ˈka. t͡ʃo / → (ˈkaːʃo) (cheese) vs. caccio / ˈkat. t͡ʃo / → (ˈkat. t͡ʃo) (I chase).
A few languages have regained secondary geminate consonants. The double consonants of Piedmontese exist only after stressed / ə /, written ë, and are not etymological: vëdde (Latin vidēre, to see), sëcca (Latin sicca, dry, feminine of sech). In standard Catalan and Occitan, there exists a geminate sound / lː / written l·l (Catalan) or ll (Occitan), but it is usually pronounced as a simple sound in colloquial (and even some formal) speech in both languages.
In Western Romance, an epenthetic or prosthetic vowel was inserted at the beginning of any word that began with / s / and another consonant: spatha "sword '' > Spanish / Portuguese espada, Catalan espasa, Old French espeḍe > modern épée; Stephanum "Stephen '' > Spanish Esteban, Catalan Esteve, Portuguese Estêvão, Old French Estievne > modern Étienne; status "state '' > Spanish / Portuguese estado, Catalan estat, Old French estat > modern état; spiritus "spirit '' > Spanish espíritu, Portuguese espírito, Catalan esperit, French esprit. Epenthetic / e / in Western Romance languages was also probably influenced by Continental Celtic languages. While Western Romance words undergo word - initial epenthesis (prothesis), cognates in Italian do not: spatha > spada, Stephanum > Stefano, status > stato, spiritus > spirito. In Italian, syllabification rules were preserved instead by vowel - final articles, thus feminine spada as la spada, but instead of rendering the masculine * il spaghetto, lo spaghetto came to be the norm. Though receding at present, Italian once had an epenthetic / i / if a consonant preceded such clusters, so that ' in Switzerland ' was in (i) Svizzera. Some speakers still use the prothetic (i) productively, and it is fossilized in a few set phrases as per iscritto ' in writing ' (although in this case its survival may be due partly to the influence of the separate word iscritto < Latin īnscrīptus).
One profound change that affected Vulgar Latin was the reorganisation of its vowel system. Classical Latin had five short vowels, ă, ĕ, ĭ, ŏ, ŭ, and five long vowels, ā, ē, ī, ō, ū, each of which was an individual phoneme (see the table on the right for their likely pronunciation in IPA), and four diphthongs, ae, oe, au and eu (five according to some authors, including ui). There were also long and short versions of y, representing the rounded vowel / y (ː) / in Greek borrowings, which however probably came to be pronounced / i (ː) / even before Romance vowel changes started.
There is evidence that in the imperial period all the short vowels except a differed by quality as well as by length from their long counterparts. So, for example ē was pronounced close - mid / eː / while ĕ was pronounced open - mid / ɛ /, and ī was pronounced close / iː / while ĭ was pronounced near - close / ɪ /.
During the Proto - Romance period, phonemic length distinctions were lost. Vowels came to be automatically pronounced long in stressed, open syllables (i.e. when followed by only one consonant), and pronounced short everywhere else. This situation is still maintained in modern Italian: cade (ˈkaːde) "he falls '' vs. cadde (ˈkadde) "he fell ''.
The Proto - Romance loss of phonemic length originally produced a system with nine different quality distinctions in monophthongs, where only original / ă ā / had merged. Soon, however, many of these vowels coalesced:
The Proto - Romance allophonic vowel - length system was rephonemicized in the Gallo - Romance languages as a result of the loss of many final vowels. Some northern Italian languages (e.g. Friulan) still maintain this secondary phonemic length, but most languages dropped it by either diphthongizing or shortening the new long vowels.
French phonemicized a third vowel length system around AD 1300 as a result of the sound change / VsC / > / VhC / > / VːC / (where V is any vowel and C any consonant). This vowel length was eventually lost by around AD 1700, but the former long vowels are still marked with a circumflex. A fourth vowel length system, still non-phonemic, has now arisen: All nasal vowels as well as the oral vowels / ɑ o ø / (which mostly derive from former long vowels) are pronounced long in all stressed closed syllables, and all vowels are pronounced long in syllables closed by the voiced fricatives / v z ʒ ʁ vʁ /. This system in turn has been phonemicized in some non-standard dialects (e.g. Haitian Creole), as a result of the loss of final / ʁ /.
The Latin diphthongs ae and oe, pronounced / ai / and / oi / in earlier Latin, were early on monophthongized.
ae became / ɛː / by the 1st century a.d. at the latest. Although this sound was still distinct from all existing vowels, the neutralization of Latin vowel length eventually caused its merger with / ɛ / < short e: e.g. caelum "sky '' > French ciel, Spanish / Italian cielo, Portuguese céu / sɛw /, with the same vowel as in mele "honey '' > French / Spanish miel, Italian miele, Portuguese mel / mɛl /. Some words show an early merger of ae with / eː /, as in praeda "booty '' > * prēda / preːda / > French proie (vs. expected * * priée), Italian preda (not * * prieda) "prey ''; or faenum "hay '' > * fēnum (feːnũ) > Spanish heno, French foin (but Italian fieno / fjɛno /).
oe generally merged with / eː /: poenam "punishment '' > Romance * / pena / > Spanish / Italian pena, French peine; foedus "ugly '' > Romance * / fedo / > Spanish feo, Portuguese feio. There are relatively few such outcomes, since oe was rare in Classical Latin (most original instances had become Classical ū, as in Old Latin oinos "one '' > Classical ūnus) and so oe was mostly limited to Greek loanwords, which were typically learned (high - register) terms.
au merged with ō / oː / in the popular speech of Rome already by the 1st century b.c. A number of authors remarked on this explicitly, e.g. Cicero 's taunt that the populist politician Publius Clodius Pulcher had changed his name from Claudius to ingratiate himself with the masses. This change never penetrated far from Rome, however, and the pronunciation / au / was maintained for centuries in the vast majority of Latin - speaking areas, although it eventually developed into some variety of o in many languages. For example, Italian and French have / ɔ / as the usual reflex, but this post-dates diphthongization of / ɔ / and the French - specific palatalization / ka / > / tʃa / (hence causa > French chose, Italian cosa / kɔza / not * * cuosa). Spanish has / o /, but Portuguese spelling maintains ⟨ ou ⟩, which has developed to / o / (and still remains as / ou / in some dialects, and / oi / in others). Occitan, Romanian, southern Italian languages, and many other minority Romance languages still have / au /. A few common words, however, show an early merger with ō / oː /, evidently reflecting a generalization of the popular Roman pronunciation: e.g. French queue, Italian coda / koda /, Occitan co (d) a, Romanian coadă (all meaning "tail '') must all derive from cōda rather than Classical cauda (but notice Portuguese cauda). Similarly, Portuguese orelha, French oreille, Romanian ureche, and Sardinian olícra, orícla "ear '' must derive from ōric (u) la rather than Classical auris (Occitan aurelha was probably influenced by the unrelated ausir < audīre "to hear ''), and the form oricla is in fact reflected in the Appendix Probi.
An early process that operated in all Romance languages to varying degrees was metaphony (vowel mutation), conceptually similar to the umlaut process so characteristic of the Germanic languages. Depending on the language, certain stressed vowels were raised (or sometimes diphthongized) either by a final / i / or / u / or by a directly following / j /. Metaphony is most extensive in the Italo - Romance languages, and applies to nearly all languages in Italy; however, it is absent from Tuscan, and hence from standard Italian. In many languages affected by metaphony, a distinction exists between final / u / (from most cases of Latin - um) and final / o / (from Latin - ō, - ud and some cases of - um, esp. masculine "mass '' nouns), and only the former triggers metaphony.
Some examples:
A number of languages diphthongized some of the free vowels, especially the open - mid vowels / ɛ ɔ /:
These diphthongizations had the effect of reducing or eliminating the distinctions between open - mid and close - mid vowels in many languages. In Spanish and Romanian, all open - mid vowels were diphthongized, and the distinction disappeared entirely. Portuguese is the most conservative in this respect, keeping the seven - vowel system more or less unchanged (but with changes in particular circumstances, e.g. due to metaphony). Other than before palatalized consonants, Catalan keeps / ɔ o / intact, but / ɛ e / split in a complex fashion into / ɛ e ə / and then coalesced again in the standard dialect (Eastern Catalan) in such a way that most original / ɛ e / have reversed their quality to become / e ɛ /.
In French and Italian, the distinction between open - mid and close - mid vowels occurred only in closed syllables. Standard Italian more or less maintains this. In French, / e / and / ɛ / merged by the twelfth century or so, and the distinction between / ɔ / and / o / was eliminated without merging by the sound changes / u / > / y /, / o / > / u /. Generally this led to a situation where both (e, o) and (ɛ, ɔ) occur allophonically, with the close - mid vowels in open syllables and the open - mid vowels in closed syllables. This is still the situation in modern Spanish, for example. In French, however, both (e / ɛ) and (o / ɔ) were partly rephonemicized: Both / e / and / ɛ / occur in open syllables as a result of / aj / > / ɛ /, and both / o / and / ɔ / occur in closed syllables as a result of / al / > / au / > / o /.
Old French also had numerous falling diphthongs resulting from diphthongization before palatal consonants or from a fronted / j / originally following palatal consonants in Proto - Romance or later: e.g. pācem / patsje / "peace '' > PWR * / padzje / (lenition) > OF paiz / pajts /; * punctum "point '' > Gallo - Romance * / ponjto / > * / pojɲto / (fronting) > OF point / põjnt /. During the Old French period, preconsonantal / l / (ɫ) vocalized to / w /, producing many new falling diphthongs: e.g. dulcem "sweet '' > PWR * / doltsje / > OF dolz / duɫts / > douz / duts /; fallet "fails, is deficient '' > OF falt > faut "is needed ''; bellus "beautiful '' > OF bels (bɛɫs) > beaus (bɛaws). By the end of the Middle French period, all falling diphthongs either monophthongized or switched to rising diphthongs: proto - OF / aj ɛj jɛj ej jej wɔj oj uj al ɛl el il ɔl ol ul / > early OF / aj ɛj i ej yj oj yj aw ɛaw ew i ɔw ow y / > modern spelling ⟨ ai ei i oi ui oi ui au eau eu i ou ou u ⟩ > mod. French / ɛ ɛ i wa ɥi wa ɥi o o ø i u u y /.
In both French and Portuguese, nasal vowels eventually developed from sequences of a vowel followed by a nasal consonant (/ m / or / n /). Originally, all vowels in both languages were nasalized before any nasal consonants, and nasal consonants not immediately followed by a vowel were eventually dropped. In French, nasal vowels before remaining nasal consonants were subsequently denasalized, but not before causing the vowels to lower somewhat, e.g. dōnat "he gives '' > OF dune / dunə / > donne / dɔn /, fēminam > femme / fam /. Other vowels remained nasalized, and were dramatically lowered: fīnem "end '' > fin / fɛ̃ / (often pronounced (fæ̃)); linguam "tongue '' > langue / lɑ̃ɡ /; ūnum "one '' > un / œ̃ /, / ɛ̃ /.
In Portuguese, / n / between vowels was dropped, and the resulting hiatus eliminated through vowel contraction of various sorts, often producing diphthongs: manum, * manōs > PWR * manu, ˈmanos "hand (s) '' > mão, mãos / mɐ̃w̃, mɐ̃w̃s /; canem, canēs "dog (s) '' > PWR * kane, ˈkanes > * can, ˈcanes > cão, cães / kɐ̃w̃, kɐ̃j̃s /; ratiōnem, ratiōnēs "reason (s) '' > PWR * raˈdjzjone, raˈdjzjones > * raˈdzon, raˈdzones > razão, razões / χaˈzɐ̃w̃, χaˈzõj̃s / (Brazil), / ʁaˈzɐ̃ũ, ʁɐˈzõj̃ʃ / (Portugal). Sometimes the nasalization was eliminated: lūna "moon '' > Galician - Portuguese lũa > lua; vēna "vein '' > Galician - Portuguese vẽa > veia. Nasal vowels that remained actually tend to be raised (rather than lowered, as in French): fīnem "end '' > fim / fĩ /; centum "hundred '' > PWR tjsjɛnto > cento / ˈsẽtu /; pontem "bridge '' > PWR pɔnte > ponte / ˈpõtʃi / (Brazil), / ˈpõtɨ / (Portugal). In Portugal, vowels before a nasal consonant have become denasalized, but in Brazil they remain heavily nasalized.
Characteristic of the Gallo - Romance languages and Rhaeto - Romance languages are the front rounded vowels / y ø œ /. All of these languages show an unconditional change / u / > / y /, e.g. lūnam > French lune / lyn /, Occitan / ˈlyno /. Many of the languages in Switzerland and Italy show the further change / y / > / i /. Also very common is some variation of the French development / ɔː oː / (lengthened in open syllables) > / we ew / > / œ œ /, with mid back vowels diphthongizing in some circumstances and then re-monophthongizing into mid-front rounded vowels. (French has both / ø / and / œ /, with / ø / developing from / œ / in certain circumstances.)
There was more variability in the result of the unstressed vowels. Originally in Proto - Romance, the same nine vowels developed in unstressed as stressed syllables, and in Sardinian, they coalesced into the same five vowels in the same way.
In Italo - Western Romance, however, vowels in unstressed syllables were significantly different from stressed vowels, with yet a third outcome for final unstressed syllables. In non-final unstressed syllables, the seven - vowel system of stressed syllables developed, but then the low - mid vowels / ɛ ɔ / merged into the high - mid vowels / e o /. This system is still preserved, largely or completely, in all of the conservative Romance languages (e.g. Italian, Spanish, Portuguese, Catalan).
In final unstressed syllables, results were somewhat complex. One of the more difficult issues is the development of final short - u, which appears to have been raised to / u / rather than lowered to / o /, as happened in all other syllables. However, it is possible that in reality, final / u / comes from long * - ū < - um, where original final - m caused vowel lengthening as well as nasalization. Evidence of this comes from Rhaeto - Romance, in particular Sursilvan, which preserves reflexes of both final - us and - um, and where the latter, but not the former, triggers metaphony. This suggests the development - us > / ʊs / > / os /, but - um > / ũː / > / u /.
The original five - vowel system in final unstressed syllables was preserved as - is in some of the more conservative central Italian languages, but in most languages there was further coalescence:
Various later changes happened in individual languages, e.g.:
The so - called intertonic vowels are word - internal unstressed vowels, i.e. not in the initial, final, or tonic (i.e. stressed) syllable, hence intertonic. Intertonic vowels were the most subject to loss or modification. Already in Vulgar Latin intertonic vowels between a single consonant and a following / r / or / l / tended to drop: vétulum "old '' > veclum > Dalmatian vieklo, Sicilian vecchiu, Portuguese velho. But many languages ultimately dropped almost all intertonic vowels.
Generally, those languages south and east of the La Spezia -- Rimini Line (Romanian and Central - Southern Italian) maintained intertonic vowels, while those to the north and west (Western Romance) dropped all except / a /. Standard Italian generally maintained intertonic vowels, but typically raised unstressed / e / > / i /. Examples:
Portuguese is more conservative in maintaining some intertonic vowels other than / a /: e.g. * offerḗscere "to offer '' > Portuguese oferecer vs. Spanish ofrecer, French offrir (< * offerīre). French, on the other hand, drops even intertonic / a / after the stress: Stéphanum "Stephen '' > Spanish Esteban but Old French Estievne > French Étienne. Many cases of / a / before the stress also ultimately dropped in French: sacraméntum "sacrament '' > Old French sairement > French serment "oath ''.
The Romance languages for the most part have kept the writing system of Latin, adapting it to their evolution. One exception was Romanian before the nineteenth century, where, after the Roman retreat, literacy was reintroduced through the Romanian Cyrillic alphabet, a Slavic influence. A Cyrillic alphabet was also used for Romanian (Moldovan) in the USSR. The non-Christian populations of Spain also used the scripts of their religions (Arabic and Hebrew) to write Romance languages such as Ladino and Mozarabic in aljamiado.
The Romance languages are written with the classical Latin alphabet of 23 letters -- A, B, C, D, E, F, G, H, I, K, L, M, N, O, P, Q, R, S, T, V, X, Y, Z -- subsequently modified and augmented in various ways. In particular, the single Latin letter V split into V (consonant) and U (vowel), and the letter I split into I and J. The Latin letter K and the new letter W, which came to be widely used in Germanic languages, are seldom used in most Romance languages -- mostly for unassimilated foreign names and words. Indeed, in Italian prose kilometro is properly chilometro. Catalan eschews importation of "foreign '' letters more than most languages. Thus Wikipedia is Viquipèdia in Catalan but Wikipedia in Spanish.
While most of the 23 basic Latin letters have maintained their phonetic value, for some of them it has diverged considerably; and the new letters added since the Middle Ages have been put to different uses in different scripts. Some letters, notably H and Q, have been variously combined in digraphs or trigraphs (see below) to represent phonetic phenomena that could not be recorded with the basic Latin alphabet, or to get around previously established spelling conventions. Most languages added auxiliary marks (diacritics) to some letters, for these and other purposes.
The spelling rules of most Romance languages are fairly simple, and consistent within any language. Since the spelling systems are based on phonemic structures rather than phonetics, however, the actual pronunciation of what is represented in standard orthography can be subject to considerable regional variation, as well as to allophonic differentiation by position in the word or utterance. Among the letters representing the most conspicuous phonological variations, between Romance languages or with respect to Latin, are the following:
Otherwise, letters that are not combined as digraphs generally represent the same phonemes as suggested by the International Phonetic Alphabet (IPA), whose design was, in fact, greatly influenced by Romance spelling systems.
Since most Romance languages have more sounds than can be accommodated in the Roman Latin alphabet, they all resort to the use of digraphs and trigraphs -- combinations of two or three letters with a single phonemic value. The concept (but not the actual combinations) is derived from Classical Latin, which used, for example, TH, PH, and CH when transliterating the Greek letters "θ '', "φ '', and "χ ''. These were once aspirated sounds in Greek before changing to corresponding fricatives, and the H represented what sounded to the Romans like an / h / following / t /, / p /, and / k / respectively. Some of the digraphs used in modern scripts are:
While the digraphs CH, PH, RH and TH were at one time used in many words of Greek origin, most languages have now replaced them with C / QU, F, R and T. Only French has kept these etymological spellings, which now represent / k / or / ʃ /, / f /, / ʀ / and / t /, respectively.
Gemination, in the languages where it occurs, is usually indicated by doubling the consonant, except when it does not contrast phonemically with the corresponding short consonant, in which case gemination is not indicated. In Jèrriais, long consonants are marked with an apostrophe: S 'S is a long / zz /, SS 'S is a long / ss /, and T 'T is a long / tt /. Phonemic contrast of geminates vs. single consonants is widespread in Italian, and normally indicated in the traditional orthography: fatto / fatto / ' done ' vs. fato / fato / ' fate, destiny '; cadde / kadde / ' s / he, it fell ' vs. cade / kade / ' s / he, it falls '. The double consonants in French orthography, however, are merely etymological. In Catalan, gemination of the l is marked with a punt volat ("flying point ''): l·l.
Romance languages also introduced various marks (diacritics) that may be attached to some letters, for various purposes. In some cases, diacritics are used as an alternative to digraphs and trigraphs; namely to represent a larger number of sounds than would be possible with the basic alphabet, or to distinguish between sounds that were previously written the same. Diacritics are also used to mark word stress, to indicate exceptional pronunciation of letters in certain words, and to distinguish words with same pronunciation (homophones).
Depending on the language, some letter - diacritic combinations may be considered distinct letters, e.g. for the purposes of lexical sorting. This is the case, for example, of Romanian ș ((ʃ)) and Spanish ñ ((ɲ)).
The following are the most common uses of diacritics in Romance languages.
Most languages are written with a mixture of two distinct but phonetically identical variants or "cases '' of the alphabet: majuscule ("uppercase '' or "capital letters ''), derived from Roman stone - carved letter shapes, and minuscule ("lowercase ''), derived from Carolingian writing and Medieval quill pen handwriting which were later adapted by printers in the fifteenth and sixteenth centuries.
In particular, all Romance languages capitalize (use uppercase for the first letter of) the following words: the first word of each complete sentence, most words in names of people, places, and organizations, and most words in titles of books. The Romance languages do not follow the German practice of capitalizing all nouns including common ones. Unlike English, the names of months, days of the weeks, and derivatives of proper nouns are usually not capitalized: thus, in Italian one capitalizes Francia ("France '') and Francesco ("Francis ''), but not francese ("French '') or francescano ("Franciscan ''). However, each language has some exceptions to this general rule.
The tables below provide a vocabulary comparison that illustrates a number of examples of sound shifts that have occurred between Latin and Romance languages, along with a selection of minority languages. Words are given in their conventional spellings. In addition, for French the actual pronunciation is given, due to the dramatic differences between spelling and pronunciation. (French spelling approximately reflects the pronunciation of Old French, c. 1200 AD.)
|
ruby don't take your love to town album | Ruby, Do n't Take Your Love to Town (album) - Wikipedia
Ruby, Do n't Take Your Love to Town is the fourth album by the group The First Edition. This was the first album to credit the group as Kenny Rogers & The First Edition. The title song reached number 6 on the Billboard Hot 100 chart in the United States (and was a success mirrored world - wide), while "Reuben James '' became a top - 30 hit in 1970.
|
when does season 5 of the fosters come out | List of the Fosters episodes - wikipedia
The Fosters is an American drama series on Freeform (then known as ABC Family) that premiered on June 3, 2013. The series follows the lives of Lena Adams (Sherri Saum) and Stef Foster (Teri Polo), the Fosters, who are an interracial, lesbian couple living in the Mission Bay area of San Diego, raising Stef 's biological son, Brandon (David Lambert), along with adopted twins, Jesus (Jake T. Austin / Noah Centineo) and Mariana (Cierra Ramirez). Lena is a charter school vice principal and Stef is a police officer. Lena decides to take in "troubled '' teenager Callie (Maia Mitchell) and then later her younger brother Jude (Hayden Byerly), who have endured a series of abusive foster homes.
On January 10, 2017, Freeform announced that the series was renewed for a fifth season, which began on July 11, 2017. As of September 5, 2017, 91 episodes of The Fosters have aired.
At the start of the second part of the first season of The Fosters, a five - episode web series called The Fosters: Girls United was confirmed by ABC Family. It stars Maia Mitchell, Daffany Clark, Cherinda Kincherlow, Annamarie Kenoyer, Alicia Sixtos, Hayley Kiyoko, and Angela Gibbs. Girls United premiered on ABC Family 's official YouTube account on February 3, 2014. Each episode runs approximately 5 minutes long.
|
when was the electoral college created and by whom | Electoral College (United states) - wikipedia
The United States Electoral College is the mechanism established by the United States Constitution for the indirect election of the president of the United States and vice president of the United States. Citizens of the United States vote in each state and the District of Columbia at a general election to choose a slate of "electors '' pledged to vote for a particular party 's candidate.
The Twelfth Amendment requires each elector to cast one vote for president and another vote for vice president. In each state and the District of Columbia, electors are chosen every four years on the Tuesday after the first Monday in November, and then meet to cast ballots on the first Monday after the second Wednesday in December. The candidates who receive a majority of electoral votes among the states are elected president and vice president of the United States when the Electoral College vote is certified by Congress in January.
Each state chooses electors, equal in number to that state 's combined total of senators and representatives. There are a total of 538 electors, corresponding to the 435 representatives and 100 senators, plus the three electors for the District of Columbia as provided by the Twenty - third Amendment. The Constitution bars any federal official, elected or appointed, from being an elector. The Office of the Federal Register is charged with administering the Electoral College. Since the mid-19th century when all electors have been popularly chosen, the Electoral College has elected the candidate who received the most popular votes nationwide, except in four elections: 1876, 1888, 2000, and 2016. In 1824, there were six states in which electors were legislatively appointed, rather than popularly elected, so the true national popular vote is uncertain; the electors failed to select a winning candidate, so the matter was decided by the House of Representatives.
All states except California (before 1913), Maine, and Nebraska have chosen electors on a "winner - take - all '' basis since the 1880s. Under the winner - take - all system, the state 's electors are awarded to the candidate with the most votes in that state, thus maximizing the state 's influence in the national election. Maine and Nebraska use the "congressional district method, '' selecting one elector within each congressional district by popular vote and awarding two electors by a statewide popular vote. Although no elector is required by federal law to honor his pledge, there have been very few occasions when an elector voted contrary to a pledge, and never once has it impacted the final outcome of a national election.
If no candidate for president receives a majority of electoral votes for president, the Twelfth Amendment provides that the House of Representatives will select the president, with each of the fifty state delegations casting one vote. If no candidate for vice president receives a majority of electoral votes for vice president, then the Senate will select the vice president, with each of the 100 senators having one vote.
The Constitutional Convention in 1787 used the Virginia Plan as the basis for discussions, as the Virginia delegation had proposed it first. The Virginia Plan called for the Congress to elect the president. Delegates from a majority of states agreed to this mode of election. However, a committee formed to work out various details including the mode of election of the president, recommended instead the election be by a group of people apportioned among the states in the same numbers as their representatives in Congress (the formula for which had been resolved in lengthy debates resulting in the Connecticut Compromise and Three - Fifths Compromise), but chosen by each state "in such manner as its Legislature may direct. '' Committee member Gouverneur Morris explained the reasons for the change; among others, there were fears of "intrigue '' if the president were chosen by a small group of men who met together regularly, as well as concerns for the independence of the president if he were elected by the Congress. However, once the Electoral College had been decided on, several delegates (Mason, Butler, Morris, Wilson, and Madison) openly recognized its ability to protect the election process from cabal, corruption, intrigue, and faction. Some delegates, including James Wilson and James Madison, preferred popular election of the executive. Madison acknowledged that while a popular vote would be ideal, it would be difficult to get consensus on the proposal given the prevalence of slavery in the South:
There was one difficulty however of a serious nature attending an immediate choice by the people. The right of suffrage was much more diffusive in the Northern than the Southern States; and the latter could have no influence in the election on the score of Negroes. The substitution of electors obviated this difficulty and seemed on the whole to be liable to the fewest objections.
The Convention approved the Committee 's Electoral College proposal, with minor modifications, on September 6, 1787. Delegates from states with smaller populations or limited land area, such as Connecticut, New Jersey, and Maryland, generally favored the Electoral College, which gave some weight to the states as units. Under the compromise providing for a runoff among the top five candidates, the small states supposed that the House of Representatives, with each state delegation casting one vote, would decide most elections.
In The Federalist Papers, James Madison explained his views on the selection of the president and the Constitution. In Federalist No. 39, Madison argued the Constitution was designed to be a mixture of state - based and population - based government. Congress would have two houses: the state - based Senate and the population - based House of Representatives. Meanwhile, the president would be elected by a mixture of the two modes.
Alexander Hamilton in Federalist No. 68 laid out what he believed were the key advantages to the Electoral College. The electors would come directly from the people, chosen by them alone, for that purpose only and for that time only. This avoided a party - run legislature, or a permanent body that could be influenced by foreign interests before each election. Hamilton explained the election was to take place among all the states, so no corruption in any state could taint "the great body of the people '' in their selection. The choice was to be made by a majority of the Electoral College, as majority rule is critical to the principles of republican government. Hamilton argued that electors meeting in the state capitals were able to have information unavailable to the general public. Hamilton also argued that since no federal officeholder could be an elector, none of the electors would be beholden to any presidential candidate.
Another consideration was the decision would be made without "tumult and disorder, '' as it would be a broad - based one made simultaneously in various locales where the decision - makers could deliberate reasonably, not in one place where decision - makers could be threatened or intimidated. If the Electoral College did not achieve a decisive majority, then the House of Representatives was to choose the president from among the top five candidates, ensuring that the person selected to administer the laws would have both ability and good character. Hamilton was also concerned about somebody unqualified, but with a talent for "low intrigue, and the little arts of popularity, '' attaining high office.
Additionally, in the Federalist No. 10, James Madison argued against "an interested and overbearing majority '' and the "mischiefs of faction '' in an electoral system. He defined a faction as "a number of citizens whether amounting to a majority or minority of the whole, who are united and actuated by some common impulse of passion, or of interest, adverse to the rights of other citizens, or to the permanent and aggregate interests of the community. '' What was then called republican government (i.e., federalism, as opposed to direct democracy), with its varied distribution of voter rights and powers, would countervail against factions. Madison further postulated in the Federalist No. 10 that the greater the population and expanse of the Republic, the more difficulty factions would face in organizing due to such issues as sectionalism.
Although the United States Constitution refers to "Electors '' and "electors, '' neither the phrase "Electoral College '' nor any other name is used to describe the electors collectively. It was not until the early 19th century the name "Electoral College '' came into general usage as the collective designation for the electors selected to cast votes for president and vice president. The phrase was first written into federal law in 1845 and today the term appears in 3 U.S.C. § 4, in the section heading and in the text as "college of electors. ''
Article II, Section 1, Clause 2 of the Constitution states:
Each State shall appoint, in such Manner as the Legislature thereof may direct, a Number of Electors, equal to the whole Number of Senators and Representatives to which the State may be entitled in the Congress: but no Senator or Representative, or Person holding an Office of Trust or Profit under the United States, shall be appointed an Elector.
Article II, Section 1, Clause 4 of the Constitution states:
The Congress may determine the Time of chusing (sic) the Electors, and the Day on which they shall give their Votes; which Day shall be the same throughout the United States.
Article II, Section 1, Clause 3 of the Constitution provided the original plan by which the electors chose the president and vice president. Under the original plan, the candidate who received a majority of votes from the electors would become president; the candidate receiving the second most votes would become vice president.
The original plan of the Electoral College was based upon several assumptions and anticipations of the Framers of the Constitution:
According to the text of Article II, however, each state government was free to have its own plan for selecting its electors, and the Constitution does not explicitly require states to popularly elect their electors. Several different methods for selecting electors are described at length below.
The emergence of political parties and nationally coordinated election campaigns soon complicated matters in the elections of 1796 and 1800. In 1796, Federalist Party candidate John Adams won the presidential election. Finishing in second place was Democratic - Republican Party candidate Thomas Jefferson, the Federalists ' opponent, who became the vice president. This resulted in the president and vice president not being of the same political party.
In 1800 the Democratic - Republican Party again nominated Jefferson for president and also nominated Aaron Burr for vice president. After the election, Jefferson and Burr both obtained a majority of electoral votes but tied one another with 73 votes each. Since ballots did not distinguish between votes for president and votes for vice president, every ballot cast for Burr technically counted as a vote for him to become president, despite Jefferson clearly being his party 's first choice. Lacking a clear winner by constitutional standards, the election had to be decided by the House of Representatives pursuant to the Constitution 's contingency election provision.
Having already lost the presidential contest, Federalist Party representatives in the lame duck House session seized upon the opportunity to embarrass their opposition and attempted to elect Burr over Jefferson. The House deadlocked for 35 ballots as neither candidate received the necessary majority vote of the state delegations in the House (the votes of nine states were needed for an election). Jefferson achieved electoral victory on the 36th ballot, but only after Federalist Party leader Alexander Hamilton -- who disfavored Burr 's personal character more than Jefferson 's policies -- had made known his preference for Jefferson.
Responding to the problems from those elections, the Congress proposed the Twelfth Amendment in 1803 -- prescribing electors cast separate ballots for president and vice president -- to replace the system outlined in Article II, Section 1, Clause 3. By June 1804, the states had ratified the amendment in time for the 1804 election.
Alexander Hamilton described the framers ' view of how electors would be chosen: "A small number of persons, selected by their fellow - citizens from the general mass, will be most likely to possess the information and discernment requisite to such complicated (tasks). '' The founders assumed this would take place district by district. That plan was carried out by many states until the 1880s. For example, in Massachusetts in 1820, the rule stated "the people shall vote by ballot, on which shall be designated who is voted for as an Elector for the district. '' In other words, the people did not place the name of a candidate for a president on the ballot, instead they voted for their local elector, whom they trusted later to cast a responsible vote for president.
Some states reasoned that the favorite presidential candidate among the people in their state would have a much better chance if all of the electors selected by their state were sure to vote the same way -- a "general ticket '' of electors pledged to a party candidate. So the slate of electors chosen by the state were no longer free agents, independent thinkers, or deliberative representatives. They became "voluntary party lackeys and intellectual non-entities. '' Once one state took that strategy, the others felt compelled to follow suit in order to compete for the strongest influence on the election.
When James Madison and Hamilton, two of the most important architects of the Electoral College, saw this strategy being taken by some states, they protested strongly. Madison and Hamilton both made it clear this approach violated the spirit of the Constitution. According to Hamilton, the selection of the president should be "made by men most capable of analyzing the qualities adapted to the station (of president). '' According to Hamilton, the electors were to analyze the list of potential presidents and select the best one. He also used the term "deliberate. '' Hamilton considered a pre-pledged elector to violate the spirit of Article II of the Constitution insofar as such electors could make no "analysis '' or "deliberate '' concerning the candidates. Madison agreed entirely, saying that when the Constitution was written, all of its authors assumed individual electors would be elected in their districts and it was inconceivable a "general ticket '' of electors dictated by a state would supplant the concept. Madison wrote to George Hay,
The district mode was mostly, if not exclusively in view when the Constitution was framed and adopted; & was exchanged for the general ticket (many years later).
The Founders assumed that electors would be elected by the citizens of their district and that elector was to be free to analyze and deliberate regarding who is best suited to be president.
Madison and Hamilton were so upset by what they saw as a distortion of the framers ' original intent that they advocated a constitutional amendment to prevent anything other than the district plan: "the election of Presidential Electors by districts, is an amendment very proper to be brought forward, '' Madison told George Hay in 1823. Hamilton went further. He actually drafted an amendment to the Constitution mandating the district plan for selecting electors.
In 1789, at - large popular vote, the winner - take - all method, began with Pennsylvania and Maryland; Virginia and Delaware used a district plan by popular vote, and in the five other states participating in the election (Connecticut, Georgia, New Hampshire, New Jersey, and South Carolina), state legislatures chose. By 1800, Virginia and Rhode Island voted at - large; Kentucky, Maryland, and North Carolina voted popularly by district; and eleven states voted by state legislature. Beginning in 1804 there was a definite trend towards the winner - take - all system for statewide popular vote.
States using their state legislature to choose presidential electors have included fourteen states from all regions of the country. By 1832, only South Carolina used the state legislature, and it abandoned the method after 1860. States using popular vote by district have included ten states from all regions of the country. By 1832 there was only Maryland, and from 1836 district plans fell out of use until the 20th century, though Michigan used a district plan for 1892 only.
Since 1836, statewide, winner - take - all popular voting for electors has been the almost universal practice. As of 2016, Maine (from 1972) and Nebraska (from 1996) use the district plan, with two at - large electors assigned to support the winner of the statewide popular vote.
Section 2 of the Fourteenth Amendment allows for a state 's representation in the House of Representatives to be reduced if a state unconstitutionally denies people the right to vote. The reduction is in keeping with the proportion of people denied a vote. This amendment refers to "the right to vote at any election for the choice of electors for President and Vice President of the United States, '' among other elections, the only place in the Constitution mentioning electors being selected by popular vote.
On May 8, 1866, during a debate on the Fourteenth Amendment, Thaddeus Stevens, the leader of the Republicans in the House of Representatives, delivered a speech on the amendment 's intent. Regarding Section 2, he said:
The second section I consider the most important in the article. It fixes the basis of representation in Congress. If any State shall exclude any of her adult male citizens from the elective franchise, or abridge that right, she shall forfeit her right to representation in the same proportion. The effect of this provision will be either to compel the States to grant universal suffrage or so shear them of their power as to keep them forever in a hopeless minority in the national Government, both legislative and executive.
Federal law (2 U.S.C. § 6) implements Section 2 's mandate.
Even though the aggregate national popular vote is calculated by state officials, media organizations, and the Federal Election Commission, the people only indirectly elect the president, as the national popular vote is not the basis for electing the president or vice president. The president and vice president of the United States are elected by the Electoral College, which consists of 538 presidential electors from the fifty states and Washington, D.C. Presidential electors are selected on a state - by - state basis, as determined by the laws of each state. Since the election of 1824, most states have appointed their electors on a winner - take - all basis, based on the statewide popular vote on Election Day. Maine and Nebraska are the only two current exceptions, as both states use the congressional district method. Although ballots list the names of the presidential and vice presidential candidates (who run on a ticket), voters actually choose electors when they vote for president and vice president. These presidential electors in turn cast electoral votes for those two offices. Electors usually pledge to vote for their party 's nominee, but some "faithless electors '' have voted for other candidates or refrained from voting.
A candidate must receive an absolute majority of electoral votes (currently 270) to win the presidency or the vice presidency. If no candidate receives a majority in the election for president or vice president, the election is determined via a contingency procedure established by the Twelfth Amendment. In such a situation, the House chooses one of the top three presidential electoral vote - winners as the president, while the Senate chooses one of the top two vice presidential electoral vote - winners as vice president.
A state 's number of electors equals its number of representatives in the United States Congress plus two for its two senators. The number of representatives is based on the respective populations, determined every 10 years by the United States Census. Each representative represents on average 711,000 persons.
Under the Twenty - third Amendment, Washington, D.C., is allocated as many electors as it would have if it were a state, but no more electors than the least populous state. The least populous state (which is Wyoming, according to the 2010 census) has three electors; thus, D.C. can not have more than three electors. Even if D.C. were a state, its population would entitle it to only three electors; based on its population per electoral vote, D.C. has the second highest per capita Electoral College representation, after Wyoming.
Currently, there are 538 electors in total, corresponding to the 435 representatives and 100 senators, plus the three electors allocated to Washington, D.C. The six states with the most electors are California (55), Texas (38), New York (29), Florida (29), Illinois (20), and Pennsylvania (20). The seven smallest states by population -- Alaska, Delaware, Montana, North Dakota, South Dakota, Vermont, and Wyoming -- have three electors each. This is because each of these states is entitled to one representative and two senators.
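As a worked example of this apportionment, the short Python sketch below recomputes the elector counts cited above from each state's House delegation (taken here simply as the cited elector total minus two for the senators); the figures and helper function are illustrative only, not an official data source.

```python
# Worked example of the apportionment described above: a state's electors
# equal its House seats plus two for its senators; Washington, D.C. gets three.
# House-seat figures below are just the elector counts cited in this article
# minus two -- illustrative, not an official dataset.

house_seats = {
    "California": 53,  # 53 + 2 = 55 electors
    "Texas": 36,       # 36 + 2 = 38
    "New York": 27,    # 27 + 2 = 29
    "Wyoming": 1,      # 1 + 2 = 3 (the minimum)
}

def electors(seats: int) -> int:
    """Electoral votes for a state: House seats plus two for its senators."""
    return seats + 2

for state, seats in house_seats.items():
    print(f"{state}: {electors(seats)} electoral votes")

# National total: 435 representatives + 100 senators + 3 for Washington, D.C.
print("Total electors:", 435 + 100 + 3)  # 538
```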
Candidates for elector are nominated by state chapters of nationally oriented political parties in the months prior to Election Day. In some states, the electors are nominated by voters in primaries, the same way other presidential candidates are nominated. In some states, such as Oklahoma, Virginia and North Carolina, electors are nominated in party conventions. In Pennsylvania, the campaign committee of each candidate names their respective electoral college candidates (an attempt to discourage faithless electors). Varying by state, electors may also be elected by state legislatures, or appointed by the parties themselves.
Article II, Section 1, Clause 2 of the Constitution requires each state legislature to determine how electors for the state are to be chosen, but it disqualifies any person holding a federal office, either elected or appointed, from being an elector. Under Section 3 of the Fourteenth Amendment, any person who has sworn an oath to support the United States Constitution in order to hold either a state or federal office, and later rebelled against the United States directly or by giving assistance to those doing so, is disqualified from being an elector. However, the Congress may remove this disqualification by a two - thirds vote in each House.
Since the Civil War, all states have chosen presidential electors by popular vote. This process has been normalized to the point the names of the electors appear on the ballot in only eight states: Rhode Island, Tennessee, Louisiana, Arizona, Idaho, Oklahoma, North Dakota and South Dakota.
The Tuesday following the first Monday in November has been fixed as the day for holding federal elections, called the Election Day. In 48 states and Washington, D.C., the "winner - takes - all method '' is used (electors selected as a single bloc). Maine and Nebraska use the "congressional district method '', selecting one elector within each congressional district by popular vote and selecting the remaining two electors by a statewide popular vote. This method has been used in Maine since 1972 and in Nebraska since 1996.
The current system of choosing electors is called the "short ballot ''. In most states, voters choose a slate of electors, and only a few states list on the ballot the names of proposed electors. In some states, if a voter wants to write in a candidate for president, the voter is also required to write in the names of proposed electors.
After the election, each state prepares seven Certificates of Ascertainment, each listing the candidates for president and vice president, their pledged electors, and the total votes each candidacy received. One certificate is sent, as soon after Election Day as practicable, to the National Archivist in Washington D.C. The Certificates of Ascertainment are mandated to carry the State Seal, and the signature of the Governor (in the case of the District of Columbia, the Certificate is signed by the Mayor of the District of Columbia.)
The Electoral College never meets as one body. Electors meet in their respective state capitals (electors for the District of Columbia meet within the District) on the Monday after the second Wednesday in December, at which time they cast their electoral votes on separate ballots for president and vice president.
Although procedures in each state vary slightly, the electors generally follow a similar series of steps, and the Congress has constitutional authority to regulate the procedures the states follow. The meeting is opened by the election certification official -- often that state 's secretary of state or equivalent -- who reads the Certificate of Ascertainment. This document sets forth who was chosen to cast the electoral votes. The attendance of the electors is taken and any vacancies are noted in writing. The next step is the selection of a president or chairman of the meeting, sometimes also with a vice chairman. The electors sometimes choose a secretary, often not himself an elector, to take the minutes of the meeting. In many states, political officials give short speeches at this point in the proceedings.
When the time for balloting arrives, the electors choose one or two people to act as tellers. Some states provide for the placing in nomination of a candidate to receive the electoral votes (the candidate for president of the political party of the electors). Each elector submits a written ballot with the name of a candidate for president. In New Jersey, the electors cast ballots by checking the name of the candidate on a pre-printed card; in North Carolina, the electors write the name of the candidate on a blank card. The tellers count the ballots and announce the result. The next step is the casting of the vote for vice president, which follows a similar pattern.
Each state 's electors must complete six Certificates of Vote. Each Certificate of Vote must be signed by all of the electors and a Certificate of Ascertainment must be attached to each of the Certificates of Vote. Each Certificate of Vote must include the names of those who received an electoral vote for either the office of president or of vice president. The electors certify the Certificates of Vote, and copies are then distributed as follows: one is sent by registered mail to the President of the Senate, two to the state 's secretary of state, two to the Archivist of the United States, and one to the judge of the federal district court for the district in which the electors met.
A staff member of the President of the Senate collects the Certificates of Vote as they arrive and prepares them for the joint session of the Congress. The Certificates are arranged -- unopened -- in alphabetical order and placed in two special mahogany boxes. Alabama through Missouri (including the District of Columbia) are placed in one box and Montana through Wyoming are placed in the other box. Before 1950, the Secretary of State 's office oversaw the certifications, but since then the Office of Federal Register in the Archivist 's office reviews them to make sure the documents sent to the archive and Congress match and that all formalities have been followed, sometimes requiring states to correct the documents.
Faithless electors are those who either cast electoral votes for someone other than the candidate of the party that they pledged to vote for or who abstain. Twenty - nine states plus the District of Columbia have passed laws to punish faithless electors, although none have ever been enforced. Many constitutional scholars claim that state restrictions would be struck down if challenged based on Article II and the Twelfth Amendment. In 1952, the constitutionality of state pledge laws was brought before the Supreme Court in Ray v. Blair, 343 U.S. 214 (1952). The Court ruled in favor of state laws requiring electors to pledge to vote for the winning candidate, as well as removing electors who refuse to pledge. As stated in the ruling, electors are acting as a functionary of the state, not the federal government. Therefore, states have the right to govern the process of choosing electors. The constitutionality of state laws punishing electors for actually casting a faithless vote, rather than refusing to pledge, has never been decided by the Supreme Court. However, in his dissent in Ray v. Blair, Justice Robert Jackson wrote: "no one faithful to our history can deny that the plan originally contemplated what is implicit in its text -- that electors would be free agents, to exercise an independent and nonpartisan judgment as to the men best qualified for the Nation 's highest offices. ''
While many laws punish a faithless elector only after the fact, states like Michigan also specify a faithless elector 's vote be voided.
As electoral slates are typically chosen by the political party or the party 's presidential nominee, electors usually have high loyalty to the party and its candidate: a faithless elector runs a greater risk of party censure than of criminal charges.
In 2000, elector Barbara Lett - Simmons of Washington, D.C., chose not to vote, rather than voting for Al Gore as she had pledged to do. In 2016, seven electors voted contrary to their pledges. Faithless electors have never changed the outcome of any presidential election.
The Twelfth Amendment mandates Congress assemble in joint session to count the electoral votes and declare the winners of the election. The session is ordinarily required to take place on January 6 in the calendar year immediately following the meetings of the presidential electors. Since the Twentieth Amendment, the newly elected Congress declares the winner of the election; all elections before 1936 were determined by the outgoing House.
The meeting is held at 1 p.m. in the Chamber of the U.S. House of Representatives. The sitting vice president is expected to preside, but in several cases the president pro tempore of the Senate has chaired the proceedings. The vice president and the Speaker of the House sit at the podium, with the vice president in the seat of the Speaker of the House. Senate pages bring in the two mahogany boxes containing each state 's certified vote and place them on tables in front of the senators and representatives. Each house appoints two tellers to count the vote (normally one member of each political party). Relevant portions of the Certificate of Vote are read for each state, in alphabetical order.
Members of Congress can object to any state 's vote count, provided the objection is presented in writing and is signed by at least one member of each house of Congress. An objection supported by at least one senator and one representative will be followed by the suspension of the joint session and by separate debates and votes in each House of Congress; after both Houses deliberate on the objection, the joint session is resumed. A state 's certificate of vote can be rejected only if both Houses of Congress vote to accept the objection. In that case, the votes from the State in question are simply ignored. The votes of Arkansas and Louisiana were rejected in the presidential election of 1872.
Objections to the electoral vote count are rarely raised, although it did occur during the vote count in 2001 after the close 2000 presidential election between Governor George W. Bush of Texas and the vice president of the United States, Al Gore. Gore, who as vice president was required to preside over his own Electoral College defeat (by five electoral votes), denied the objections, all of which were raised by only several representatives and would have favored his candidacy, after no senators would agree to jointly object. Objections were again raised in the vote count of the 2004 elections, and on that occasion the document was presented by one representative and one senator. Although the joint session was suspended, the objections were quickly disposed of and rejected by both Houses of Congress. If there are no objections or all objections are overruled, the presiding officer simply includes a state 's votes, as declared in the certificate of vote, in the official tally.
After the certificates from all states are read and the respective votes are counted, the presiding officer simply announces the final result of the vote and, provided the required absolute majority of votes was achieved, declares the names of the persons elected president and vice president. This announcement concludes the joint session and formalizes the recognition of the president - elect and of the vice president - elect. The senators then depart from the House Chamber. The final tally is printed in the Senate and House journals.
The Twelfth Amendment requires the House of Representatives to go into session immediately to vote for a president if no candidate for president receives a majority of the electoral votes (since 1964, 270 of the 538 electoral votes).
In this event, the House of Representatives is limited to choosing from among the three candidates who received the most electoral votes for president. Each state delegation votes en bloc -- each delegation having a single vote; the District of Columbia does not receive a vote. A candidate must receive an absolute majority of state delegation votes (i.e., at present, a minimum of 26 votes) in order for that candidate to become the president - elect. Additionally, delegations from at least two thirds of all the states must be present for voting to take place. The House continues balloting until it elects a president.
The House of Representatives has chosen the president only twice: in 1801 under Article II, Section 1, Clause 3; and in 1825 under the Twelfth Amendment.
If no candidate for vice president receives an absolute majority of electoral votes, then the Senate must go into session to elect a vice president. The Senate is limited to choosing from the two candidates who received the most electoral votes for vice president. Normally this would mean two candidates, one less than the number of candidates available in the House vote. However, the text is written in such a way that all candidates with the most and second most electoral votes are eligible for the Senate election -- this number could theoretically be larger than two. The Senate votes in the normal manner in this case (i.e., ballots are individually cast by each senator, not by state delegations). However, two - thirds of the senators must be present for voting to take place.
Additionally, the Twelfth Amendment states a "majority of the whole number '' of senators (currently 51 of 100) is necessary for election. Further, the language requiring an absolute majority of Senate votes precludes the sitting vice president from breaking any tie that might occur, although some academics and journalists have speculated to the contrary.
The only time the Senate chose the vice president was in 1837. In that instance, the Senate adopted an alphabetical roll call and voting aloud. The rules further stated, "(I) f a majority of the number of senators shall vote for either the said Richard M. Johnson or Francis Granger, he shall be declared by the presiding officer of the Senate constitutionally elected Vice President of the United States ''; the Senate chose Johnson.
Section 3 of the Twentieth Amendment specifies if the House of Representatives has not chosen a president - elect in time for the inauguration (noon EST on January 20), then the vice president - elect becomes acting president until the House selects a president. Section 3 also specifies Congress may statutorily provide for who will be acting president if there is neither a president - elect nor a vice president - elect in time for the inauguration. Under the Presidential Succession Act of 1947, the Speaker of the House would become acting president until either the House selects a president or the Senate selects a vice president. Neither of these situations has ever occurred.
Source: Presidential Elections 1789 -- 2000 at Psephos (Adam Carr 's Election Archive). Note: In 1788, 1792, 1796, and 1800, each elector cast two votes for president.
Before the advent of the short ballot in the early 20th century, as described above, the most common means of electing the presidential electors was through the general ticket. The general ticket is quite similar to the current system and is often confused with it. In the general ticket, voters cast ballots for individuals running for presidential elector (while in the short ballot, voters cast ballots for an entire slate of electors). In the general ticket, the state canvass would report the number of votes cast for each candidate for elector, a complicated process in states like New York with multiple positions to fill. Both the general ticket and the short ballot are often considered at - large or winner - takes - all voting. The short ballot was adopted by the various states at different times; it was adopted for use by North Carolina and Ohio in 1932. Alabama was still using the general ticket as late as 1960 and was one of the last states to switch to the short ballot.
The question of the extent to which state constitutions may constrain the legislature 's choice of a method of choosing electors has been touched on in two U.S. Supreme Court cases. In McPherson v. Blacker, 146 U.S. 1 (1892), the Court cited Article II, Section 1, Clause 2 which states that a state 's electors are selected "in such manner as the legislature thereof may direct '' and wrote these words "operat (e) as a limitation upon the state in respect of any attempt to circumscribe the legislative power ''. In Bush v. Palm Beach County Canvassing Board, 531 U.S. 70 (2000), a Florida Supreme Court decision was vacated (not reversed) based on McPherson. On the other hand, three dissenting justices in Bush v. Gore, 531 U.S. 98 (2000), wrote: "(N) othing in Article II of the Federal Constitution frees the state legislature from the constraints in the State Constitution that created it. ''
In the earliest presidential elections, state legislative choice was the most common method of choosing electors. A majority of the state legislatures selected presidential electors in both 1792 (9 of 15) and 1800 (10 of 16), and half of them did so in 1812. Even in the 1824 election, a quarter of state legislatures (6 of 24) chose electors. In that election, Andrew Jackson lost in spite of having pluralities of both the popular and electoral votes, with the outcome being decided by the six state legislatures choosing the electors. Some state legislatures simply chose electors, while other states used a hybrid method in which state legislatures chose from a group of electors elected by popular vote. By 1828, with the rise of Jacksonian democracy, only Delaware and South Carolina used legislative choice. Delaware ended its practice the following election (1832), while South Carolina continued using the method until it seceded from the Union in December 1860. South Carolina used the popular vote for the first time in the 1868 election.
Excluding South Carolina, legislative appointment was used in only four situations after 1832:
Legislative appointment was brandished as a possibility in the 2000 election. Had the recount continued, the Florida legislature was prepared to appoint the Republican slate of electors to avoid missing the federal safe - harbor deadline for choosing electors.
The Constitution gives each state legislature the power to decide how its state 's electors are chosen and it can be easier and cheaper for a state legislature to simply appoint a slate of electors than to create a legislative framework for holding elections to determine the electors. As noted above, the two situations in which legislative choice has been used since the Civil War have both been because there was not enough time or money to prepare for an election. However, appointment by state legislature can have negative consequences: bicameral legislatures can deadlock more easily than the electorate. This is precisely what happened to New York in 1789 when the legislature failed to appoint any electors.
Another method used early in U.S. history was to divide the state into electoral districts. By this method, voters in each district would cast their ballots for the electors they supported and the winner in each district would become the elector. This was similar to how states are currently separated by congressional districts. However, the difference stems from the fact every state always had two more electoral districts than congressional districts. As with congressional districts, moreover, this method is vulnerable to gerrymandering.
Under such a system, electors would be selected in proportion to the votes cast for their candidate or party, rather than being selected by the statewide plurality vote.
There are two versions of the congressional district method: one has been implemented in Maine and Nebraska; another has been proposed in Virginia. Under the implemented congressional district method, the electoral votes are distributed based on the popular vote winner within each of the states ' congressional districts; the statewide popular vote winner receives two additional electoral votes.
In 2013, a different version of the congressional district method was proposed in Virginia. This version would distribute Virginia 's electoral votes based on the popular vote winner within each of Virginia 's congressional districts; the two statewide electoral votes would be awarded based on which candidate won the most congressional districts, rather than on who won Virginia 's statewide popular vote.
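To make the two allocation rules concrete, the minimal Python sketch below applies both winner-take-all and the implemented Maine/Nebraska congressional district method to a hypothetical five-elector state; the party labels and district results are invented for illustration.

```python
from collections import Counter

# Hypothetical state with 3 congressional districts, hence 3 + 2 = 5 electoral
# votes (the same shape as Nebraska). Party labels and results are invented.
district_winners = ["R", "R", "D"]   # popular-vote winner in each district
statewide_winner = "R"               # winner of the statewide popular vote

def winner_take_all(statewide, n_districts):
    """Rule used by 48 states and D.C.: every elector goes to the statewide winner."""
    return Counter({statewide: n_districts + 2})

def district_method(district_wins, statewide):
    """Maine / Nebraska rule: one elector per district winner, plus two
    at-large electors for the statewide popular-vote winner."""
    votes = Counter(district_wins)
    votes[statewide] += 2
    return votes

print(winner_take_all(statewide_winner, len(district_winners)))  # Counter({'R': 5})
print(district_method(district_winners, statewide_winner))       # Counter({'R': 4, 'D': 1})
```

Under the district rule the hypothetical state splits its electors 4 - 1, the same kind of split Nebraska actually produced in 2008.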
The congressional district method can more easily be implemented than other alternatives to the winner - takes - all method, in view of major party resistance to relatively enabling third parties under the proportional method. State legislation is sufficient to use this method. Advocates of the congressional district method believe the system would encourage higher voter turnout and incentivize presidential candidates to broaden their campaigns in non-competitive states. Winner - take - all systems ignore thousands of popular votes; in Democratic California there are Republican districts, in Republican Texas there are Democratic districts. Because candidates have an incentive to campaign in competitive districts, with a district plan, candidates have an incentive to actively campaign in over thirty states versus seven "swing '' states. Opponents of the system, however, argue candidates might only spend time in certain battleground districts instead of the entire state and cases of gerrymandering could become exacerbated as political parties attempt to draw as many safe districts as they can.
Unlike simple congressional district comparisons, the district plan popular vote bonus in the 2008 election would have given Obama 56 % of the Electoral College versus the 68 % he did win; this result "would have more closely approximated the percentage of the popular vote won (53 %) ''.
Of the 43 states whose electoral votes could be affected by the congressional district method, only Maine and Nebraska apply it today. Maine has four electoral votes, based on its two representatives and two senators. Nebraska has two senators and three representatives, giving it five electoral votes. Maine began using the congressional district method in the election of 1972. Nebraska has used the congressional district method since the election of 1992 (Maralee Schwartz, "Nebraska 's Vote Change '', The Washington Post, April 7, 1991). Michigan used the system for the 1892 presidential election, and several other states used various forms of the district plan before 1840: Virginia, Delaware, Maryland, Kentucky, North Carolina, Massachusetts, Illinois, Maine, Missouri, and New York.
The congressional district method allows a state the chance to split its electoral votes between multiple candidates. Prior to 2008, neither Maine nor Nebraska had ever split their electoral votes. Nebraska split its electoral votes for the first time in 2008, giving John McCain its statewide electors and those of two congressional districts, while Barack Obama won the electoral vote of Nebraska 's 2nd congressional district. Following the 2008 split, some Nebraska Republicans made efforts to discard the congressional district method and return to the winner - takes - all system. In January 2010, a bill was introduced in the Nebraska legislature to revert to a winner - take - all system; the bill died in committee in March 2011. Republicans had also passed bills in 1995 and 1997 to eliminate the congressional district method in Nebraska, but those bills were vetoed by Democratic Governor Ben Nelson.
In 2010, Republicans in Pennsylvania, who controlled both houses of the legislature as well as the governorship, put forward a plan to change the state 's winner - takes - all system to a congressional district method system. Pennsylvania had voted for the Democratic candidate in the five previous presidential elections, so some saw this as an attempt to take away Democratic electoral votes. Although Democrat Barack Obama won Pennsylvania in 2008, he won only 55 % of Pennsylvania 's popular vote. The district plan would have awarded him 11 of its 21 electoral votes (52.4 %), a share closer to his popular vote even while overcoming Republican gerrymandering of the districts. The plan later lost support. Other Republicans, including Michigan state representative Pete Lund, RNC Chairman Reince Priebus, and Wisconsin Governor Scott Walker, have floated similar ideas.
Arguments between proponents and opponents of the current electoral system include four separate but related topics: indirect election, disproportionate voting power by some states, the winner - takes - all distribution method (as chosen by 48 of the 50 states), and federalism. Arguments against the Electoral College in common discussion focus mostly on the allocation of the voting power among the states. Gary Bugh 's research of congressional debates over proposed constitutional amendments to abolish the Electoral College reveals reform opponents have often appealed to a traditional version of representation, whereas reform advocates have tended to reference a more democratic view.
The elections of 1876, 1888, 2000, and 2016 produced an Electoral College winner who did not receive at least a plurality of the nationwide popular vote. In 1824, there were six states in which electors were legislatively appointed, rather than popularly elected, so it is uncertain what the national popular vote would have been if all presidential electors had been popularly elected. When no candidate received a majority of electoral votes in 1824, the election was decided by the House of Representatives and so could be considered distinct from the latter four elections in which all of the states had popular selection of electors. The true national popular vote was also uncertain in the 1960 election, and the plurality for the winner depends on how votes for Alabama electors are allocated.
Opponents of the Electoral College claim such outcomes do not logically follow the normative concept of how a democratic system should function. One view is the Electoral College violates the principle of political equality, since presidential elections are not decided by the one - person one - vote principle. Outcomes of this sort are attributable to the federal nature of the system. Supporters of the Electoral College argue candidates must build a popular base that is geographically broader and more diverse in voter interests than either a simple national plurality or majority. Nor is this feature attributable to the indirect election of presidents as such; it is caused instead by the winner - takes - all method of allocating each state 's slate of electors. Allocation of electors in proportion to the state 's popular vote could reduce this effect.
Elections where the winning candidate loses the national popular vote typically result when the winner builds the requisite configuration of states (and thus captures their electoral votes) by small margins, but the losing candidate secures large voter margins in the remaining states. In this case, the very large margins secured by the losing candidate in the other states would aggregate to a plurality of the ballots cast nationally. However, commentators question the legitimacy of this national popular vote, pointing out that the national popular vote observed under the Electoral College system does not reflect the popular vote that would be observed under a National Popular Vote system, as each electoral institution produces different incentives for, and strategy choices by, presidential campaigns. Because the national popular vote is irrelevant under the electoral college system, it is generally presumed candidates base their campaign strategies around the existence of the Electoral College; any close race has candidates campaigning to maximize electoral votes by focusing their get - out - the - vote efforts in crucially needed swing states and not attempting to maximize national popular vote totals by using limited campaign resources to run up margins or close up gaps in states considered "safe '' for themselves or their opponents, respectively. Conversely, the institutional structure of a national popular vote system would encourage candidates to pursue voter turnout wherever votes could be found, even in "safe '' states they are already expected to win, and in "safe '' states they have no hope of winning.
Educational YouTuber CGP Grey, who has produced several short videos criticizing the Electoral College, has illustrated how it is technically possible to win the necessary 270 electoral votes while winning only 22 % of the overall popular vote, by winning the barest simple majority of the 40 smallest states and the District of Columbia. Though the current political geography of the United States makes such an election unlikely (it would require winning both reliably Democratic jurisdictions like Massachusetts and D.C. and reliably Republican states like Utah and Alaska), he argues that a system in which such a result is even remotely possible is "indefensible. ''
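The arithmetic behind such a scenario can be sketched as a simple greedy calculation: win the states that are cheapest in ballots per electoral vote by a single vote each, and lose every other state outright. The Python below uses invented turnout figures purely to show the mechanics; it is not a model of any real election.

```python
# Greedy sketch of a "minimal popular vote" winning coalition: carry the
# cheapest states (fewest ballots per electoral vote) by one vote each and
# lose everywhere else. Turnout numbers are invented placeholders.

def minimal_popular_share(states, target):
    total_ballots = sum(turnout for _, turnout in states.values())
    coalition_ev = 0
    winning_ballots = 0
    # Prefer states where each electoral vote "costs" the fewest ballots.
    for _, (ev, turnout) in sorted(states.items(),
                                   key=lambda kv: kv[1][1] / kv[1][0]):
        if coalition_ev >= target:
            break
        coalition_ev += ev
        winning_ballots += turnout // 2 + 1   # barest majority in that state
    return coalition_ev, winning_ballots / total_ballots

# Toy example: three small, low-turnout states and one large one.
toy_states = {
    "A": (3, 200_000),
    "B": (3, 250_000),
    "C": (4, 400_000),
    "D": (10, 5_000_000),
}
ev, share = minimal_popular_share(toy_states, target=10)
print(ev, f"{share:.1%}")   # 10 electoral votes won with roughly 7% of all ballots
```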
The United States is the only country that elects a politically powerful president via an electoral college and the only one in which a candidate can become president without having obtained the highest number of votes in the sole or final round of popular voting.
According to this criticism, the Electoral College encourages political campaigners to focus on a few so - called "swing states '' while ignoring the rest of the country. Populous states in which pre-election poll results show no clear favorite are inundated with campaign visits, saturation television advertising, get - out - the - vote efforts by party organizers and debates, while "four out of five '' voters in the national election are "absolutely ignored '', according to one assessment. Since most states use a winner - takes - all arrangement in which the candidate with the most votes in that state receives all of the state 's electoral votes, there is a clear incentive to focus almost exclusively on only a few key undecided states; in recent elections, these states have included Pennsylvania, Ohio, and Florida in 2004 and 2008, and also Colorado in 2012. In contrast, states with large populations such as California, Texas, and New York, have in recent elections been considered "safe '' for a particular party -- Democratic for California and New York and Republican for Texas -- and therefore campaigns spend less time and money there. Many small states are also considered to be "safe '' for one of the two political parties and are also generally ignored by campaigners: of the 13 smallest states, six are reliably Democratic, six are reliably Republican, and only New Hampshire is considered as a swing state, according to critic George C. Edwards III. In the 2008 election, campaigns did not mount nationwide efforts but rather focused on select states.
Except in closely fought swing states, voter turnout is largely insignificant due to entrenched political party domination in most states. The Electoral College decreases the advantage a political party or campaign might gain for encouraging voters to turn out, except in those swing states. If the presidential election were decided by a national popular vote, in contrast, campaigns and parties would have a strong incentive to work to increase turnout everywhere. Individuals would similarly have a stronger incentive to persuade their friends and neighbors to turn out to vote. The differences in turnout between swing states and non-swing states under the current electoral college system suggest that replacing the Electoral College with direct election by popular vote would likely increase turnout and participation significantly.
According to this criticism, the electoral college reduces elections to a mere count of electors for a particular state, and, as a result, it obscures any voting problems within a particular state. For example, if a particular state blocks some groups from voting, perhaps by voter suppression methods such as imposing reading tests, poll taxes, registration requirements, or legally disfranchising specific minority groups, then voting inside that state would be reduced, but as the state 's electoral count would be the same, disenfranchisement would have no effect on the overall electoral tally. Critics contend that such disenfranchisement is partially obscured by the Electoral College. A related argument is the Electoral College may have a dampening effect on voter turnout: there is no incentive for states to reach out to more of their citizens to include them in elections because the state 's electoral count remains fixed in any event. According to this view, if elections were by popular vote, then states would be motivated to include more citizens in elections since the state would then have more political clout nationally. Critics contend the electoral college system insulates states from negative publicity as well as possible federal penalties for disenfranchising subgroups of citizens.
Legal scholars Akhil Amar and Vikram Amar have argued the original Electoral College compromise was enacted partially because it enabled the southern states to disenfranchise their slave populations. It permitted southern states to disfranchise large numbers of slaves while allowing these states to maintain political clout within the federation by using the three - fifths compromise. They noted that constitutional Framer James Madison believed the question of counting slaves had presented a serious challenge but that "the substitution of electors obviated this difficulty and seemed on the whole to be liable to the fewest objections. '' Akhil and Vikram Amar added that
The founders ' system also encouraged the continued disfranchisement of women. In a direct national election system, any state that gave women the vote would automatically have doubled its national clout. Under the Electoral College, however, a state had no such incentive to increase the franchise; as with slaves, what mattered was how many women lived in a state, not how many were empowered... a state with low voter turnout gets precisely the same number of electoral votes as if it had a high turnout. By contrast, a well - designed direct election system could spur states to get out the vote.
Territories of the United States, such as Puerto Rico, the Northern Mariana Islands, the U.S. Virgin Islands, American Samoa, and Guam, are not entitled to electors in presidential elections. Constitutionally, only U.S. states (per Article II, Section 1, Clause 2) and Washington, D.C. (per the Twenty - third Amendment) are entitled to electors. Guam has held non-binding straw polls for president since the 1980s to draw attention to this fact. This means that roughly 4 million Americans do not have the right to vote in presidential elections. Various scholars consequently conclude that the U.S. national - electoral process is not fully democratic.
Researchers have variously attempted to measure which states ' voters have the greatest impact in such an indirect election.
Each state gets a minimum of three electoral votes, regardless of population, which gives low - population states a disproportionate number of electors per capita. For example, an electoral vote represents nearly four times as many people in California as in Wyoming. Sparsely populated states are likely to be increasingly overrepresented in the electoral college over time, because Americans are increasingly moving to big cities, most of which are in big states. This analysis gives a strong advantage to the smallest states, but ignores any extra influence that comes from larger states ' ability to deliver their votes as a single bloc.
Countervailing analyses which do take into consideration the sizes of the electoral voting blocs, such as the Banzhaf power index (BPI) model based on probability theory, lead to very different conclusions about voters ' relative power. In 1968, John F. Banzhaf III (who developed the Banzhaf power index) determined that a voter in the state of New York had, on average, 3.312 times as much voting power in presidential elections as a voter in any other U.S. state. It was found that based on 1990 census and districting, individual voters in California, the largest state, had 3.3 times more individual power to choose a President than voters of Montana, the largest of the minimum 3 elector states. Because Banzhaf 's method ignores the demographic makeup of the states, it has been criticized for treating votes like independent coin - flips. More empirically based models of voting yield results that seem to favor larger states less.
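For readers unfamiliar with the Banzhaf index mentioned above, the brute-force sketch below computes it for a toy weighted-voting game in which each "state" casts its electors as a bloc; the weights and quota are invented and are not Electoral College figures.

```python
from itertools import combinations

# Brute-force Banzhaf power index for a toy weighted-voting game.
# A player's score counts the winning coalitions in which removing that
# player's bloc would turn the win into a loss; scores are then normalized.

def banzhaf(weights, quota):
    players = list(weights)
    swings = {p: 0 for p in players}
    for r in range(1, len(players) + 1):
        for coalition in combinations(players, r):
            total = sum(weights[p] for p in coalition)
            if total >= quota:
                for p in coalition:
                    if total - weights[p] < quota:  # p is critical here
                        swings[p] += 1
    total_swings = sum(swings.values())
    return {p: round(swings[p] / total_swings, 3) for p in players}

# Four blocs with weights 7, 5, 3 and 3; 10 of the 18 votes are needed to win.
print(banzhaf({"W": 7, "X": 5, "Y": 3, "Z": 3}, quota=10))
# {'W': 0.5, 'X': 0.167, 'Y': 0.167, 'Z': 0.167}
```

In this toy game the largest bloc controls about 39 % of the weight but half of the swing power, the kind of disproportion such indices are designed to expose.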
In practice, the winner - take - all manner of allocating a state 's electors generally decreases the importance of minor parties. However, it has been argued the Electoral College is not a cause of the two - party system, and that it had a tendency to improve the chances of third - party candidates in some situations.
Proponents of the Electoral College claim that it prevents a candidate from winning the presidency by simply winning in heavily populated urban areas, and pushes candidates to make a wider geographic appeal than they would if they simply had to win the national popular vote. They believe that adoption of the popular vote would shift the disproportionate focus to large cities at the expense of rural areas.
Proponents of a national popular vote for president dismiss such arguments, pointing out that the combined population of the 50 biggest cities (not including metropolitan areas) amounts to only 15 % of the population, and that candidates in popular vote elections for governor and U.S. Senate, and for statewide allocation of electoral votes, do not ignore voters in less populated areas. In addition, it is already possible to win the required 270 electoral votes by winning only the 11 most populous states; what currently prevents such a result is the organic political diversity between those states (three reliably Republican states, four swing states, and four reliably Democratic states), not any inherent quality of the Electoral College itself. If all of those states came to lean reliably for one party, then the Electoral College itself would bring about an urban - centric victory.
The United States of America is a federal coalition that consists of component states. Proponents of the current system argue the collective opinion of even a small state merits attention at the federal level greater than that given to a small, though numerically equivalent, portion of a very populous state. The system also allows each state the freedom, within constitutional bounds, to design its own laws on voting and enfranchisement without an undue incentive to maximize the number of votes cast.
For many years early in the nation 's history, up until the Jacksonian Era, many states appointed their electors by a vote of the state legislature, and proponents argue that, in the end, the election of the president must still come down to the decisions of each state, or the federal nature of the United States will give way to a single massive, centralized government.
In his book A More Perfect Constitution, Professor Larry Sabato elaborated on this advantage of the Electoral College, arguing to "mend it, do n't end it, '' in part because of its usefulness in forcing candidates to pay attention to lightly populated states and reinforcing the role of the state in federalism.
Instead of decreasing the power of minority groups by depressing voter turnout, proponents argue that by making the votes of a given state an all - or - nothing affair, minority groups can provide the critical edge that allows a candidate to win. This encourages candidates to court a wide variety of such minorities and advocacy groups.
Proponents of the Electoral College see its negative effect on third parties as beneficial. They argue the two party system has provided stability because it encourages a delayed adjustment during times of rapid political and cultural change. They believe it protects the most powerful office in the country from control by what these proponents view as regional minorities until they can moderate their views to win broad, long - term support across the nation. Advocates of a national popular vote for president suggest that this effect would also be true in popular vote elections. Of 918 elections for governor between 1948 and 2009, for example, more than 90 % were won by candidates securing more than 50 % of the vote, and none have been won with less than 35 % of the vote.
According to this argument, the fact the Electoral College is made up of real people instead of mere numbers allows for human judgment and flexibility to make a decision, if it happens that a candidate dies or becomes legally disabled around the time of the election. Advocates of the current system argue that human electors would be in a better position to choose a suitable replacement than the general voting public. According to this view, electors could act decisively during the critical time interval between when ballot choices become fixed in state ballots until mid-December when the electors formally cast their ballots. In the election of 1872, losing Liberal Republican candidate Horace Greeley died during this time interval, which resulted in disarray for the Democratic Party, who also supported Greeley, but the Greeley electors were able to split their votes for different alternate candidates. A situation in which the winning candidate died has never happened. In the election of 1912, Vice President Sherman died shortly before the election when it was too late for states to remove his name from their ballots; accordingly, Sherman was listed posthumously, but the eight electoral votes that Sherman would have received were cast instead for Nicholas Murray Butler.
Some supporters of the Electoral College note that it isolates the impact of any election fraud, or other such problems, to the state where it occurs. It prevents instances where a party dominant in one state may dishonestly inflate the votes for a candidate and thereby affect the election outcome. For instance, recounts occur only on a state - by - state basis, not nationwide. Results in a single state where the popular vote is very close -- such as Florida in 2000 -- can decide the national election.
The closest the United States has come to abolishing the Electoral College occurred during the 91st Congress (1969 -- 1971). The presidential election of 1968 resulted in Richard Nixon receiving 301 electoral votes (56 % of electors), Hubert Humphrey 191 (35.5 %), and George Wallace 46 (8.5 %) with 13.5 % of the popular vote. However, Nixon had received only 511,944 more popular votes than Humphrey, 43.5 % to 42.9 %, less than 1 % of the national total.
Representative Emanuel Celler (D -- New York), chairman of the House Judiciary Committee, responded to public concerns over the disparity between the popular vote and electoral vote by introducing House Joint Resolution 681, a proposed Constitutional amendment that would have replaced the Electoral College with a simpler plurality system based on the national popular vote. With this system, the pair of candidates who had received the highest number of votes would win the presidency and vice presidency provided they won at least 40 % of the national popular vote. If no pair received 40 % of the popular vote, a runoff election would be held in which the choice of president and vice president would be made from the two pairs of persons who had received the highest number of votes in the first election. The word "pair '' was defined as "two persons who shall have consented to the joining of their names as candidates for the offices of President and Vice President. ''
On April 29, 1969, the House Judiciary Committee voted 28 to 6 to approve the proposal. Debate on the proposal before the full House of Representatives ended on September 11, 1969, and the proposal was eventually passed with bipartisan support on September 18, 1969, by a vote of 339 to 70.
On September 30, 1969, President Richard Nixon gave his endorsement for adoption of the proposal, encouraging the Senate to pass its version of the proposal, which had been sponsored as Senate Joint Resolution 1 by Senator Birch Bayh (D -- Indiana).
On October 8, 1969, the New York Times reported that 30 state legislatures were "either certain or likely to approve a constitutional amendment embodying the direct election plan if it passes its final Congressional test in the Senate. '' Ratification by 38 state legislatures would have been needed for adoption. The paper also reported that six other states had yet to state a preference, six were leaning toward opposition and eight were solidly opposed.
On August 14, 1970, the Senate Judiciary Committee sent its report advocating passage of the proposal to the full Senate. The Judiciary Committee had approved the proposal by a vote of 11 to 6. The six members who opposed the plan, Democratic Senators James Eastland of Mississippi, John Little McClellan of Arkansas, and Sam Ervin of North Carolina, along with Republican Senators Roman Hruska of Nebraska, Hiram Fong of Hawaii, and Strom Thurmond of South Carolina, all argued that although the present system had potential loopholes, it had worked well throughout the years. Senator Bayh indicated that supporters of the measure were about a dozen votes shy of the 67 needed for the proposal to pass the full Senate. He called upon President Nixon to attempt to persuade undecided Republican senators to support the proposal. However, Nixon, while not reneging on his previous endorsement, chose not to make any further personal appeals to back the proposal.
On September 8, 1970, the Senate commenced openly debating the proposal and the proposal was quickly filibustered. The lead objectors to the proposal were mostly Southern senators and conservatives from small states, both Democrats and Republicans, who argued abolishing the Electoral College would reduce their states ' political influence. On September 17, 1970, a motion for cloture, which would have ended the filibuster, received 54 votes to 36, failing to receive the then - required two - thirds majority of senators voting. A second motion for cloture on September 29, 1970, also failed, by 53 to 34. Thereafter, the Senate majority leader, Mike Mansfield of Montana, moved to lay the proposal aside so the Senate could attend to other business. However, the proposal was never considered again and died when the 91st Congress ended on January 3, 1971.
On March 22, 1977, President Jimmy Carter wrote a letter of reform to Congress that also included his proposal for essentially abolishing the Electoral College. The letter read in part:
My fourth recommendation is that the Congress adopt a Constitutional amendment to provide for direct popular election of the President. Such an amendment, which would abolish the Electoral College, will ensure that the candidate chosen by the voters actually becomes President. Under the Electoral College, it is always possible that the winner of the popular vote will not be elected. This has already happened in three elections, 1824, 1876, and 1888. In the last election, the result could have been changed by a small shift of votes in Ohio and Hawaii, despite a popular vote difference of 1.7 million. I do not recommend a Constitutional amendment lightly. I think the amendment process must be reserved for an issue of overriding governmental significance. But the method by which we elect our President is such an issue. I will not be proposing a specific direct election amendment. I prefer to allow the Congress to proceed with its work without the interruption of a new proposal.
President Carter 's proposed program for the reform of the Electoral College was very liberal for a modern president during this time, and in some aspects of the package, it went beyond original expectations. Newspapers like The New York Times saw President Carter 's proposal at that time as "a modest surprise '' because of the indication of Carter that he would be interested in only eliminating the electors but retaining the electoral vote system in a modified form.
Newspaper reaction to Carter 's proposal ranged from some editorials praising the proposal to other editorials, like that in the Chicago Tribune, criticizing the president for proposing the end of the Electoral College.
In a letter to The New York Times, Representative Jonathan B. Bingham highlighted the danger of the "flawed, outdated mechanism of the Electoral College '' by underscoring how a shift of fewer than 10,000 votes in two key states would have led to President Gerald Ford being reelected despite Jimmy Carter 's nationwide 1.7 million - vote margin.
On January 5, 2017, Representative Steve Cohen introduced a joint resolution proposing a constitutional amendment that would replace the Electoral College with the popular election of the President and Vice President. Unlike the Bayh -- Celler amendment 40 % threshold for election, Cohen 's proposal only requires a candidate to have the "greatest number of votes '' to be elected.
Several states plus the District of Columbia have joined the National Popular Vote Interstate Compact. The compact is based on the current rule in Article II, Section 1, Clause 2 of the Constitution, which gives each state legislature the plenary power to determine how it chooses its electors. Those jurisdictions joining the compact agree to eventually pledge their electors to the winner of the national popular vote.
The compact will not come into effect until the number of states agreeing to the compact equals a majority (at least 270) of all electors. Some scholars have suggested that Article I, Section 10, Clause 3 of the Constitution requires congressional consent before the compact could be enforceable; thus, any attempted implementation of the compact could face court challenges to its constitutionality.
As of 2017, 10 states and the District of Columbia have joined the compact; collectively, these jurisdictions control 165 electoral votes, which is 61 % of the 270 required for the compact to take effect. Only strongly "blue '' states have joined the compact, each of which returned large victory margins for Barack Obama in the 2012 election.
|
where does the san andreas fault pass through | San Andreas fault - wikipedia
The San Andreas Fault is a continental transform fault that extends roughly 1,200 kilometers (750 mi) through California. It forms the tectonic boundary between the Pacific Plate and the North American Plate, and its motion is right - lateral strike - slip (horizontal). The fault divides into three segments, each with different characteristics and a different degree of earthquake risk, the most significant being the southern segment, which passes within about 35 miles (56 km) of Los Angeles. The slip rate along the fault ranges from 20 to 35 mm (0.79 to 1.38 in) / yr.
The fault was first identified in 1895 by Professor Andrew Lawson of UC Berkeley, who discovered the northern zone. It is often described as having been named after San Andreas Lake, a small body of water that was formed in a valley between the two plates. However, according to some of his reports from 1895 and 1908, Lawson actually named it after the surrounding San Andreas Valley. Following the 1906 San Francisco earthquake, Lawson concluded that the fault extended all the way into southern California.
In 1953, geologist Thomas Dibblee astounded the scientific establishment with his conclusion that hundreds of miles of lateral movement could occur along the fault. A project called the San Andreas Fault Observatory at Depth (SAFOD) near Parkfield, Monterey County, is drilling into the fault to improve prediction and recording of future earthquakes.
The northern segment of the fault runs from Hollister, through the Santa Cruz Mountains, epicenter of the 1989 Loma Prieta earthquake, then up the San Francisco Peninsula, where it was first identified by Professor Lawson in 1895, then offshore at Daly City near Mussel Rock. This is the approximate location of the epicenter of the 1906 San Francisco earthquake. The fault returns onshore at Bolinas Lagoon just north of Stinson Beach in Marin County. It returns underwater through the linear trough of Tomales Bay which separates the Point Reyes Peninsula from the mainland, runs just east of the Bodega Heads through Bodega Bay and back underwater, returning onshore at Fort Ross. (In this region around the San Francisco Bay Area several significant "sister faults '' run more - or-less parallel, and each of these can create significantly destructive earthquakes.) From Fort Ross, the northern segment continues overland, forming in part a linear valley through which the Gualala River flows. It goes back offshore at Point Arena. After that, it runs underwater along the coast until it nears Cape Mendocino, where it begins to bend to the west, terminating at the Mendocino Triple Junction.
The central segment of the San Andreas Fault runs in a northwestern direction from Parkfield to Hollister. While the southern section of the fault and the parts through Parkfield experience earthquakes, the rest of the central section of the fault exhibits a phenomenon called aseismic creep, where the fault slips continuously without causing earthquakes.
The southern segment (also known as the Mojave segment) begins near Bombay Beach, California. Box Canyon, near the Salton Sea, contains upturned strata associated with that section of the fault. The fault then runs along the southern base of the San Bernardino Mountains, crosses through the Cajon Pass and continues northwest along the northern base of the San Gabriel Mountains. These mountains are a result of movement along the San Andreas Fault and are commonly called the Transverse Range. In Palmdale, a portion of the fault is easily examined at a roadcut for the Antelope Valley Freeway. The fault continues northwest alongside the Elizabeth Lake Road to the town of Elizabeth Lake. As it passes the towns of Gorman, Tejon Pass and Frazier Park, the fault begins to bend northward, forming the "Big Bend ''. This restraining bend is thought to be where the fault locks up in Southern California, with an earthquake - recurrence interval of roughly 140 -- 160 years. Northwest of Frazier Park, the fault runs through the Carrizo Plain, a long, treeless plain where much of the fault is plainly visible. The Elkhorn Scarp defines the fault trace along much of its length within the plain.
The southern segment, which stretches from Parkfield in Monterey County all the way to the Salton Sea, is capable of an 8.1 - magnitude earthquake. At its closest, this fault passes about 35 miles (56 km) to the northeast of Los Angeles. Such a large earthquake on this southern segment would kill thousands of people in Los Angeles, San Bernardino, Riverside, and surrounding areas, and cause hundreds of billions of dollars in damage.
The Pacific Plate, to the west of the fault, is moving in a northwest direction while the North American Plate to the east is moving toward the southwest, but relatively southeast under the influence of plate tectonics. The rate of slippage averages about 33 to 37 millimeters (1.3 to 1.5 in) a year across California.
The southwestward motion of the North American Plate towards the Pacific is creating compressional forces along the eastern side of the fault. The effect is expressed as the Coast Ranges. The northwest movement of the Pacific Plate is also creating significant compressional forces which are especially pronounced where the North American Plate has forced the San Andreas to jog westward. This has led to the formation of the Transverse Ranges in Southern California, and to a lesser but still significant extent, the Santa Cruz Mountains (the location of the Loma Prieta earthquake in 1989).
Studies of the relative motions of the Pacific and North American plates have shown that only about 75 percent of the motion can be accounted for in the movements of the San Andreas and its various branch faults. The rest of the motion has been found in an area east of the Sierra Nevada mountains called the Walker Lane or Eastern California Shear Zone. The reason for this is not clear. Several hypotheses have been offered and research is ongoing. One hypothesis -- which gained interest following the Landers earthquake in 1992 -- suggests the plate boundary may be shifting eastward away from the San Andreas towards Walker Lane.
Assuming the plate boundary does not change as hypothesized, projected motion indicates that the landmass west of the San Andreas Fault, including Los Angeles, will eventually slide past San Francisco, then continue northwestward toward the Aleutian Trench, over a period of perhaps twenty million years.
The San Andreas began to form in the mid Cenozoic about 30 Mya (million years ago). At this time, a spreading center between the Pacific Plate and the Farallon Plate (which is now mostly subducted, with remnants including the Juan de Fuca Plate, Rivera Plate, Cocos Plate, and the Nazca Plate) was beginning to reach the subduction zone off the western coast of North America. As the relative motion between the Pacific and North American Plates was different from the relative motion between the Farallon and North American Plates, the spreading ridge began to be "subducted '', creating a new relative motion and a new style of deformation along the plate boundaries. These geological features are what are chiefly seen along the San Andreas Fault. It also includes a possible driver for the deformation of the Basin and Range, separation of the Baja California Peninsula, and rotation of the Transverse Ranges.
The main southern section of the San Andreas Fault proper has only existed for about 5 million years. The first known incarnation of the southern part of the fault was the Clemens Well - Fenner - San Francisquito fault zone around 22 -- 13 Ma. This system added the San Gabriel Fault as a primary focus of movement between 10 -- 5 Ma. Currently, it is believed that the modern San Andreas will eventually transfer its motion toward a fault within the Eastern California Shear Zone. This complicated evolution, especially along the southern segment, is mostly caused by the "Big Bend '' and / or a difference in the motion vector between the plates and the trend of the fault and its surrounding branches.
The fault was first identified in Northern California by UC Berkeley geology professor Andrew Lawson in 1895 and named by him after the Laguna de San Andreas, a small lake which lies in a linear valley formed by the fault just south of San Francisco. Eleven years later, Lawson discovered that the San Andreas Fault stretched southward into southern California after reviewing the effects of the 1906 San Francisco earthquake. Large - scale (hundreds of miles) lateral movement along the fault was first proposed in a 1953 paper by geologists Mason Hill and Thomas Dibblee. This idea, which was considered radical at the time, has since been vindicated by modern plate tectonics.
Seismologists discovered that the San Andreas Fault near Parkfield in central California consistently produces a magnitude 6.0 earthquake approximately once every 22 years. Following recorded seismic events in 1857, 1881, 1901, 1922, 1934, and 1966, scientists predicted that another earthquake should occur in Parkfield in 1993. It eventually occurred in 2004. Due to the frequency of predictable activity, Parkfield has become one of the most important areas in the world for large earthquake research.
In 2004, work began just north of Parkfield on the San Andreas Fault Observatory at Depth (SAFOD). The goal of SAFOD is to drill a hole nearly 3 kilometres (1.9 mi) into the Earth 's crust and into the San Andreas Fault. An array of sensors will be installed to record earthquakes that happen near this area.
The San Andreas Fault System has been the subject of a flood of studies. In particular, scientific research performed during the last 23 years has given rise to about 3,400 publications.
A study published in 2006 in the journal Nature found that the San Andreas fault has reached a sufficient stress level for an earthquake of magnitude greater than 7.0 on the moment magnitude scale to occur. This study also found that the risk of a large earthquake may be increasing more rapidly than scientists had previously believed. Moreover, the risk is currently concentrated on the southern section of the fault, i.e. the region around Los Angeles, because massive earthquakes have occurred relatively recently on the central (1857) and northern (1906) segments of the fault, while the southern section has not seen any similar rupture for at least 300 years. According to this study, a massive earthquake on that southern section of the San Andreas fault would result in major damage to the Palm Springs - Indio metropolitan area and other cities in San Bernardino, Riverside and Imperial counties in California, and Mexicali Municipality in Baja California. It would be strongly felt (and potentially cause significant damage) throughout much of Southern California, including densely populated areas of Los Angeles County, Ventura County, Orange County, San Diego County, Ensenada Municipality and Tijuana Municipality, Baja California, San Luis Rio Colorado in Sonora and Yuma, Arizona. Older buildings would be especially prone to damage or collapse, as would buildings built on unconsolidated gravel or in coastal areas where water tables are high (and thus subject to soil liquefaction). The paper concluded:
The information available suggests that the fault is ready for the next big earthquake but exactly when the triggering will happen and when the earthquake will occur we can not tell (...) It could be tomorrow or it could be 10 years or more from now.
Nevertheless, in the ten years since that publication there has not been a substantial quake in the Los Angeles area, and two major reports issued by the U.S. Geological Survey (USGS) have made variable predictions as to the risk of future seismic events. The ability to predict major earthquakes with sufficient precision to warrant increased precautions has remained elusive.
The U.S. Geological Survey 's most recent forecast, known as UCERF3 (Uniform California Earthquake Rupture Forecast 3), released in November 2013, estimated that an earthquake of magnitude 6.7 M or greater (i.e. equal to or greater than the 1994 Northridge earthquake) occurs about once every 6.7 years statewide. The same report also estimated there is a 7 % probability that an earthquake of magnitude 8.0 or greater will occur in the next 30 years somewhere along the San Andreas Fault. A different USGS study in 2008 tried to assess the physical, social and economic consequences of a major earthquake in southern California. That study predicted that a magnitude 7.8 earthquake along the southern San Andreas Fault could cause about 1,800 deaths and $213 billion in damage.
A 2008 paper, studying past earthquakes along the Pacific coastal zone, found a correlation in time between seismic events on the northern San Andreas Fault and the southern part of the Cascadia subduction zone (which stretches from Vancouver Island to northern California). Scientists believe quakes on the Cascadia subduction zone may have triggered most of the major quakes on the northern San Andreas within the past 3,000 years. The evidence also shows the rupture direction going from north to south in each of these time - correlated events. However, the 1906 San Francisco earthquake seems to have been the exception to this correlation, because its rupture moved mostly from south to north and it was not preceded by a major quake in the Cascadia zone.
The San Andreas Fault has had some notable earthquakes in historic times:
|
where is the round of 32 being played | 2018 NCAA Division I men 's basketball tournament - wikipedia
The 2018 NCAA Division I Men 's Basketball Tournament was a 68 - team single - elimination tournament to determine the men 's National Collegiate Athletic Association (NCAA) Division I college basketball national champion for the 2017 -- 18 season. The 80th edition of the tournament began on March 13, 2018, and concluded with the championship game on April 2 at the Alamodome in San Antonio, Texas.
During the first round, UMBC became the first 16 - seed to defeat a 1 - seed in the men 's tournament by defeating Virginia 74 -- 54. For the first time in tournament history, none of the four top seeded teams in a single region (the South) advanced to the Sweet 16. Also, the tournament featured the first regional final matchup of a 9 - seed (Kansas State) and an 11 - seed (Loyola - Chicago).
Villanova, Michigan, Kansas, and Loyola - Chicago, the "Cinderella team '' of the tournament, reached the Final Four. Villanova defeated Michigan in the championship game, 79 -- 62.
Atlantic Sun Conference champion Lipscomb made its NCAA tournament debut.
A total of 68 teams entered the 2018 tournament. 32 automatic bids were awarded, one to each program that won their conference tournament. The remaining 36 bids were "at - large '', with selections extended by the NCAA Selection Committee.
Eight teams (the four lowest - seeded automatic qualifiers and the four lowest - seeded at - large teams) played in the First Four (the successor to what had been popularly known as "play - in games '' through the 2010 tournament). The winners of these games advanced to the main draw of the tournament.
The Selection Committee seeded the entire field from 1 to 68.
The following sites were selected to host each round of the 2018 tournament:
First Four
First and Second Rounds
Regional Semifinals and Finals (Sweet Sixteen and Elite Eight)
National Semifinals and Championship (Final Four and Championship)
For the fourth time, the Alamodome and city of San Antonio hosted the Final Four. This was the first tournament since 1994 in which no games were played in an NFL stadium, as the Alamodome is a college football stadium, although the Alamodome hosted some home games for the New Orleans Saints during their 2005 season. The 2018 tournament featured three new arenas in previous host cities. Philips Arena, the home of the Atlanta Hawks and replacement for the previously used Omni Coliseum, hosted the South regional games, and the new Little Caesars Arena, home of the Detroit Pistons and Detroit Red Wings, hosted games. And for the first time since 1994, the tournament returned to Wichita and the state of Kansas, where Intrust Bank Arena hosted first round games.
The state of North Carolina was threatened with a 2018 - 2022 championship venue boycott by the NCAA, due to the HB2 law passed in 2016. However, the law was repealed (but with provisos) days before the NCAA met to make decisions on venues in April 2017. At that time, the NCAA board of governors "reluctantly voted to allow consideration of championship bids in North Carolina by our committees that are presently meeting ''. Therefore, Charlotte was eligible and served as a first weekend venue for the 2018 tournament.
Four teams, out of 351 in Division I, were ineligible to participate in the 2018 tournament due to failing to meet APR requirements: Alabama A&M, Grambling State, Savannah State, and Southeast Missouri State. However, the NCAA granted the Savannah State Tigers a waiver which would have allowed the team to participate in the tournament, but the team failed to qualify.
The following 32 teams were automatic qualifiers for the 2018 NCAA field by virtue of winning their conference 's automatic bid.
The tournament seeds were determined through the NCAA basketball tournament selection process. The seeds and regions were determined as follows:
* See First Four
The 2018 tournament was the first time since the 1978 tournament that the six Division I college basketball - playing schools based in the Washington, DC metropolitan area -- American, Georgetown, George Mason, George Washington, Howard, and Maryland -- were collectively shut out of the NCAA Tournament.
All times are listed as Eastern Daylight Time (UTC − 4) * -- Denotes overtime period
During the Final Four round, regardless of the seeds of the participating teams, the champion of the top overall top seed 's region (Virginia 's South Region) plays against the champion of the fourth - ranked top seed 's region (Xavier 's West Region), and the champion of the second overall top seed 's region (Villanova 's East Region) plays against the champion of the third - ranked top seed 's region (Kansas 's Midwest Region).
The Pac - 12 lost all of its teams after the first day of the main tournament draw, marking the first time since the Big 12 began play in 1996 that one of the six major conferences -- defined as the ACC, Big Ten, Big 12, Pac - 12, SEC, and both versions of the Big East -- failed to have a team advance to the tournament 's round of 32.
CBS Sports and Turner Sports had U.S. television rights to the Tournament under the NCAA March Madness brand. As part of a cycle beginning in 2016, TBS held the rights to the Final Four and to the championship game. Additionally, TBS held the rights to the 2018 Selection Show, which returned to a two - hour format, was presented in front of a studio audience, and promoted that the entire field of the tournament would be unveiled within the first ten minutes of the broadcast. The broadcast was heavily criticized for its quality (including technical problems and an embedded product placement segment for Pizza Hut), as well as initially unveiling the 68 - team field in alphabetical order (beginning with automatic qualifiers, followed by the at - large teams) rather than unveiling the matchups region - by - region (which was criticized for having less suspense than the traditional format).
Westwood One had exclusive radio rights to the entire tournament.
Live video of games was available for streaming through the following means:
Live audio of games was available for streaming through the following means:
|
when did drinking age change to 21 in ma | U.S. history of alcohol minimum purchase age by State - wikipedia
The alcohol laws of the United States regarding minimum age for purchase have changed over time. This history is given in the table below. Unless otherwise noted, if different alcohol categories have different minimum purchase ages, the age listed below is set at the lowest age given (e.g. if the purchase age is 18 for beer and 21 for wine or spirits, as was the case in several states, the age in the table will read as "18 '', not "21 ''). In addition, the purchase age is not necessarily the same as the minimum age for consumption of alcoholic beverages, although they have often been the same.
As one can see in the table below, there has been much volatility in the states ' drinking ages since the repeal of Prohibition in 1933. Shortly after the ratification of the 21st amendment in December, most states set their purchase ages at 21 since that was the voting age at the time. Most of these limits remained constant until the early 1970s. From 1969 to 1976, some 30 states lowered their purchase ages, generally to 18. This was primarily because the voting age was lowered from 21 to 18 in 1971 with the 26th amendment. Many states started to lower their minimum drinking age in response, most of this occurring in 1972 or 1973. Twelve states kept their purchase ages at 21 since repeal of Prohibition and never changed them.
From 1976 to 1983, several states voluntarily raised their purchase ages to 19 (or, less commonly, 20 or 21), in part to combat drunk driving fatalities. In 1984, Congress passed the National Minimum Drinking Age Act, which required states to raise their ages for purchase and public possession to 21 by October 1986 or lose 10 % of their federal highway funds. By mid-1988, all 50 states and the District of Columbia had raised their purchase ages to 21 (but not Puerto Rico, Guam, or the Virgin Islands, see Additional Notes below). South Dakota and Wyoming were the final two states to comply with the age 21 mandate. The current drinking age of 21 remains a point of contention among many Americans, because of it being higher than the age of majority (18 in most states) and higher than the drinking ages of most other countries. The National Minimum Drinking Age Act is also seen as a congressional sidestep of the tenth amendment. Although debates have not been highly publicized, a few states have proposed legislation to lower their drinking age, while Guam has raised its drinking age to 21 in July 2010.
Some states provide statutory exceptions to the minimum age, such as: for an established religious purpose; when a person under twenty - one years of age is accompanied by a parent, spouse, or legal guardian twenty - one years of age or older; for medical purposes when purchased as an over the counter medication, or when prescribed or administered by a licensed physician, pharmacist, dentist, nurse, hospital, or medical institution; in a private residence, which shall include a residential dwelling and up to twenty contiguous acres, on which the dwelling is located, owned by the same person who owns the dwelling; and the sale, handling, transport, or service in dispensing of any alcoholic beverage pursuant to lawful ownership of an establishment or to lawful employment of a person under twenty - one years of age by a duly licensed manufacturer, wholesaler, or retailer of beverage alcohol.
|
who were the leaders of russia during ww2 | List of leaders of Russia - wikipedia
Leaders of Russia are political heads of state.
|
the name of the game mamma mia here we go again | Mamma Mia! Here We Go Again - Wikipedia
Mamma Mia! Here We Go Again (also known as Mamma Mia! 2 or Mamma Mia! 2: Here We Go Again) is a 2018 jukebox musical romantic comedy film directed and written by Ol Parker, from a story by Parker, Catherine Johnson, and Richard Curtis. It is a follow - up to the 2008 film Mamma Mia!, which in turn is based on the musical of the same name using the music of ABBA. The film features an ensemble cast, including Lily James, Amanda Seyfried, Christine Baranski, Julie Walters, Pierce Brosnan, Andy García, Dominic Cooper, Colin Firth, Stellan Skarsgård, Jessica Keenan Wynn, Alexa Davies, Jeremy Irvine, Hugh Skinner, Josh Dylan, Cher, and Meryl Streep. Both a prequel and a sequel, the plot is set after the events of the first film, and also features flashbacks to 1979, telling the story of Donna Sheridan 's arrival on the island of Kalokairi and her first meetings with her daughter Sophie 's three possible fathers.
Due to the financial success of the first film, Universal Pictures had long been interested in a sequel. The film was officially announced in May 2017, with Parker hired to write and direct. In June 2017, many of the original cast confirmed their involvement, with James being cast in the role of Young Donna that July. Filming took place from August to December 2017 in Croatia, and at Shepperton Studios in Surrey, England. A British and American joint venture, the film was co-produced by Playtone, Littlestar Productions and Legendary Entertainment.
Mamma Mia! Here We Go Again premiered at the Hammersmith Apollo in London on July 16, 2018 and was released in the United Kingdom and the United States on July 20, 2018, ten years to the week of its predecessor 's release, in both standard and IMAX formats. The film has grossed over $280 million worldwide and received generally positive reviews, with critics praising the performances and musical numbers, though the necessity of the film and underuse of some of the original cast (particularly Streep) received some criticism.
Sophie Sheridan is preparing for the grand reopening of her mother Donna 's hotel, following Donna 's death a year earlier. She is upset because two of her fathers, Harry and Bill, are unable to make it to the reopening and she is having trouble in her relationship with Sky, who is in New York, over her memorializing her mother 's life.
In 1979, a young Donna is getting ready to travel the world. While in Paris, she meets and parties with Harry. She later misses her boat to Kalokairi but is offered a ride by Bill, and along the way, they are able to help a stranded fisherman, Alexio, make it in time to stop the love of his life from marrying another. Unbeknownst to Donna, Harry has followed her to Greece; but he arrived too late, and sadly watches the boat sailing off in the distance.
In the present, Tanya and Rosie arrive to support Sophie with the reopening and it 's revealed that Rosie and Bill have split up. Sophie then visits Sam, who is still grieving over the death of Donna. Back in the past, Donna arrives on the island and while exploring the farmhouse, a sudden storm causes her to discover a spooked horse in the basement. She goes in search of help only to find a young Sam riding his motorcycle and he helps her to save the horse. Back in the present, a storm has caused serious disruption to Sophie 's plans for the grand reopening and prevented media coverage of the event.
Back in the past, Donna and Sam are enjoying a whirlwind romance that ends when Donna discovers a picture of Sam 's fiancée in his drawer. A devastated Donna demands that Sam leave the island. In the present, Sam tells Sophie about her value to her mother. Meanwhile, Harry leaves his business deal in Tokyo to support Sophie, and separately Bill gets the same idea. Bill and Harry meet at the docks but are told there are no boats. However, Bill meets the fisherman Alexio and thereby secures boat passage for himself, Harry as well as the newly arrived Sky.
In the past, a depressed Donna is heartbroken over Sam but is able to channel her anger into singing with Tanya and Rosie. She meets Bill again and they go out on his boat; while they are gone, Sam returns, having recently ended his engagement for Donna, but is saddened to hear that she is with another man and leaves the island again. Donna discovers she is pregnant but has no idea which one of her three recent lovers is the father. Sofia, the mother of the owner of the bar where Donna and the Dynamos performed, overhears Donna 's wish to stay on the island and offers to let Donna live at her farmhouse and Donna happily accepts. It is there that she eventually gives birth to Sophie.
Back in the present, the guests have arrived at the party and Sophie is reunited with her other two fathers and Sky. Sophie reveals to Sky she is pregnant and has never felt closer to her mother, having now understood what her mother went through. Bill and Rosie reunite over their grief for Donna. Sophie 's estranged grandmother and Donna 's mother, Ruby, arrives despite Sophie deciding not to invite her. She reveals that Sky tracked her down in New York and she wants to build a real relationship with Sophie. Sophie then performs a song with Tanya and Rosie in honor of her mother, with her grandmother tearfully telling her afterward how proud she is of her. It is then revealed that the manager of the hotel, Fernando, is Ruby 's ex-lover from 1959 in Mexico, and the two are joyously reunited.
Nine months later, Sophie has given birth to a baby boy and everyone has gathered for his christening where Tanya flirts with Fernando 's brother. The ceremony takes place with Donna 's spirit watching over her daughter with pride. All the characters, including Donna and the younger cast, sing "Super Trouper '' at a huge party at Hotel Bella Donna.
Cameo appearances
A soundtrack album was released on July 13, 2018 by Capitol and Polydor Records in the United States and internationally, respectively. The album was produced by Benny Andersson, who also served as the album 's executive producer alongside Björn Ulvaeus and Judy Craymer. Each song is featured within the film, with the exception of "I Wonder (Departure) '' and "The Day Before You Came ''.
Due to Mamma Mia! 's financial success, Hollywood studio chief David Linde, co-chairman of Universal Pictures, told the Daily Mail that it would take a while, but there could be a sequel. He stated that he would be delighted if Judy Craymer, Catherine Johnson, Phyllida Lloyd, Benny Andersson and Björn Ulvaeus agreed to the project, noting that there are still many ABBA songs to make use of.
Mamma Mia! Here We Go Again was announced on May 19, 2017, with a release date of July 20, 2018. It was written and directed by Ol Parker. On September 27, 2017, Benny Andersson confirmed 3 ABBA songs that would be featured in the film: "When I Kissed the Teacher, '' "I Wonder (Departure), '' and "Angeleyes. '' "I Wonder (Departure) '' was cut from the film, but is included on the soundtrack album.
On June 1, 2017, it was announced that Seyfried would return as Sophie. Later that month, Dominic Cooper confirmed that he would return for the sequel, along with Streep, Firth and Brosnan as Sky, Donna, Harry, and Sam, respectively. In July 2017, Baranski was also confirmed to return as Tanya. On July 12, 2017, Lily James was cast to play the role of young Donna. On August 3, 2017, Jeremy Irvine and Alexa Davies were also cast in the film, with Irvine playing Brosnan 's character Sam in a past era, and Hugh Skinner to play Young Harry, Davies as a young Rosie, played by Julie Walters. On August 16, 2017, it was announced that Jessica Keenan Wynn had been cast as a young Tanya, who is played by Baranski. Julie Walters and Stellan Skarsgård also reprised their roles as Rosie and Bill, respectively. On October 16, 2017, it was announced that singer and actress Cher had joined the cast, in her first on - screen film role since 2010, and her first film with Streep since Silkwood.
Principal photography on the film began on August 12, 2017 in Croatia, including the island of Vis. In October 2017, the cast gathered at Shepperton Studios in Surrey, England, to film song and dance numbers with Cher. Filming wrapped on December 2, 2017.
Mamma Mia! Here We Go Again was released on July 20, 2018 by Universal Pictures, in the UK, US and other selected countries in both standard and IMAX formats. The film premiered on July 16, 2018 at the Hammersmith Apollo in London.
The first trailer for the film was released on December 21, 2017, in front of Pitch Perfect 3, another Universal Pictures film. Cher performed "Fernando '' at the Las Vegas CinemaCon on April 25, 2018, after footage of the film was shown. Universal sponsored YouTube stars the Merrell Twins to perform a cover version of the song Mamma Mia to promote the film.
As of August 12, 2018, Mamma Mia! Here We Go Again has grossed $103.8 million in the United States and Canada, and $177 million in other territories, for a total worldwide gross of $280.8 million, against a production budget of $75 million.
In June 2018, three weeks prior to its release, official industry tracking had the film debuting to $27 -- 33 million, which increased to as much as $36 million by the week of its release. It made $14.3 million on its first day, including $3.4 million from Thursday night previews. It went on to debut to $35 million, finishing second, behind fellow newcomer The Equalizer 2 ($36 million), and besting the opening of the first film ($27.8 million) by over 24 %. It fell 57 % to $15.1 million in its second weekend, finishing second behind newcomer Mission: Impossible -- Fallout. In its third weekend the film grossed $9 million, dropping to fourth place, and $5.8 million in its fourth weekend, finishing seventh.
In the United Kingdom, the film grossed $12.7 million in its opening weekend, topping the box office and achieving the fourth biggest opening for a film in 2018. In its second weekend of international release, the film made $26.6 million (for a running total of $98.6 million). Its largest new markets were France ($1.7 million), Poland ($1.3 million), Switzerland ($223,000) and Croatia ($151,000), while its best holdovers were Australia ($9.5 million), the UK ($8.6 million) and Germany ($8.2 million).
On review aggregation website Rotten Tomatoes, the film holds an approval rating of 80 % based on 194 reviews, with a weighted average of 6.3 / 10. The website 's critical consensus reads, "Mamma Mia! Here We Go Again doubles down on just about everything fans loved about the original -- and my my, how can fans resist it? '' On Metacritic, which assigns a normalized rating to reviews, the film has a weighted average score of 60 out of 100, based on 46 critics, indicating "mixed or average reviews ''. Audiences polled by CinemaScore gave the film an average grade of "A -- '' on an A+ to F scale, the same score as its predecessor, while PostTrak reported filmgoers gave it an 83 % overall positive score.
Peter Bradshaw of The Guardian termed the sequel as "weirdly irresistible '' and gave it three out of five stars. He described his reaction to the first film as "a combination of hives and bubonic plague, '' but concedes that this time, the relentlessness and greater self - aware comedy made him smile. He concludes: "More enjoyable than I thought. But please. Enough now. '' Mark Kermode of The Observer gave the film five stars and commented, "This slick sequel delivers sharp one - liners, joyously contrived plot twists and an emotional punch that left our critic reeling. '' Joy Watson from Exclaim! thoroughly enjoyed the movie, saying, "With multitalented writer and director Ol Parker at the helm, and perhaps more importantly, Richard Curtis (king of the British rom - com) behind the scenes to balance out the cheesiness, Here We Go Again builds giddy energy and tension throughout until Cher arrives in a helicopter to blow the whole thing sky high with an ode to "Fernando. '' Pure bliss. ''
Peter Travers of Rolling Stone awarded the film two and a half stars out of five, noting the absence of Streep for the majority of the film hindered his enjoyment, and saying, "her absence is deeply felt since the three - time Oscar winner sang and danced her heart out as Donna Sheridan ''. Lindsay Bahr of Associated Press awarded the film three out of four stars, calling it "wholly ridiculous '', but complimenting its self - awareness. She also praised James ' performance and singing talent. Richard Roeper of the Chicago Sun - Times gave the sequel a mixed review, awarding it two stars out of four, criticizing the reprises of "Dancing Queen '' and "Super Trouper '' as uninspired, and feeling that some of the musical numbers dragged the pacing. He considered the younger counterparts to the main characters "energetic '' and "likeable. '' Stephanie Zacharek of Time gave the film a mixed review, writing "Mamma Mia! Here We Go Again is atrocious. And wonderful. It 's all the reasons you should never go to the movies. And all the reasons you should race to get a ticket. ''
|
who is washington addressing in the atlanta exposition | Atlanta Exposition speech - wikipedia
The Cotton States and International Exposition Speech was an address on the topic of race relations given by Booker T. Washington on September 18, 1895. The speech laid the foundation for the Atlanta compromise, an agreement between African - American leaders and Southern white leaders in which Southern blacks would work meekly and submit to white political rule, while Southern whites guaranteed that blacks would receive basic education and due process of law.
The speech, presented before a predominantly white audience at the Cotton States and International Exposition (the site of today 's Piedmont Park) in Atlanta, Georgia, has been recognized as one of the most important and influential speeches in American history. The speech was preceded by the reading of a dedicatory ode written by Frank Lebby Stanton.
Washington began with a call to the blacks, who composed one third of the Southern population, to join the world of work. He declared that the South was where blacks were given their chance, as opposed to the North, especially in the worlds of commerce and industry. He told the white audience that rather than relying on the immigrant population arriving at the rate of a million people a year, they should hire some of the nation 's eight million blacks. He praised blacks ' loyalty, fidelity and love in service to the white population, but warned that they could be a great burden on society if oppression continued, stating that the progress of the South was inherently tied to the treatment of blacks and protection of their liberties.
He addressed the inequality between commercial legality and social acceptance, proclaiming that "The opportunity to earn a dollar in a factory just now is worth infinitely more than the opportunity to spend a dollar in an opera house. '' Washington also suggested toleration of segregation by claiming that blacks and whites could exist as separate fingers of a hand.
The title "Atlanta Compromise '' was given to the speech by W.E.B. Du Bois, who believed it was insufficiently committed to the pursuit of social and political equality for blacks.
This phrase surfaced numerous times throughout Washington 's speech. Generally, the phrase had different meanings for whites and blacks. For whites, Washington seemed to be challenging their common misperceptions of black labor. The North had been experiencing labor troubles in the early 1890s (Homestead Strike, Pullman Strike, etc.) and Washington sought to capitalize on these issues by offering Southern black labor as an alternative, especially since his Tuskegee Institute was in the business of training such workers. For blacks, however, the "Bucket motif '' represented a call to personal uplift and diligence, as the South needed them to rebuild following the Civil War.
This phrase appeared at the end of the speech 's fifth paragraph. It is commonly referred to as the "Hand simile. '' Certain historians, like Louis Harlan, saw this simile as Washington 's personal embrace of racial segregation. The entire simile reads as follows:
In all things purely social we can be as separate as the fingers, yet one as the hand in all things essential to mutual progress.
Ultimately, many Southern whites (Porter King, William Yates Atkinson, etc.) praised Washington for including such a simile, because it effectively disarmed any immediate threat posed by blacks toward segregation (accommodationism).
|
what is the acidity of red wine vinegar | Vinegar - wikipedia
Vinegar is a liquid consisting of about 5 -- 20 % acetic acid (CH₃COOH), water, and other trace chemicals, which may include flavorings. The acetic acid is produced by the fermentation of ethanol by acetic acid bacteria. Vinegar is now mainly used as a cooking ingredient, or in pickling. As the most easily manufactured mild acid, it has historically had a great variety of industrial, medical, and domestic uses, some of which (such as its use as a general household cleaner) are still commonly practiced today.
Commercial vinegar is produced either by a fast or a slow fermentation process. In general, slow methods are used in traditional vinegars, where fermentation proceeds slowly over the course of a few months or up to a year. The longer fermentation period allows for the accumulation of a non-toxic slime composed of acetic acid bacteria. Fast methods add mother of vinegar (bacterial culture) to the source liquid before adding air to oxygenate and promote the fastest fermentation. In fast production processes, vinegar may be produced in 20 hours to three days.
The conversion of ethanol (CH₃CH₂OH) and oxygen (O₂) to acetic acid (CH₃COOH) takes place by the following reaction:
CH₃CH₂OH + O₂ → CH₃COOH + H₂O
Vinegar has been made and used by people for thousands of years. Traces of it have been found in Egyptian urns from around 3000 BC.
Apple cider vinegar is made from cider or apple must, and has a brownish - gold color. It is sometimes sold unfiltered and unpasteurized with the mother of vinegar present, as a natural product. It can be diluted with fruit juice or water or sweetened (usually with honey) for consumption.
Balsamic vinegar is an aromatic aged vinegar produced in the Modena and Reggio Emilia provinces of Italy. The original product -- Traditional Balsamic Vinegar -- is made from the concentrated juice, or must, of white Trebbiano grapes. It is very dark brown, rich, sweet, and complex, with the finest grades being aged in successive casks made variously of oak, mulberry, chestnut, cherry, juniper, and ash wood. Originally a costly product available to only the Italian upper classes, traditional balsamic vinegar is marked "tradizionale '' or "DOC '' to denote its Protected Designation of Origin status, and is aged for 12 to 25 years. A cheaper non-DOC commercial form described as "aceto balsamico di Modena '' (balsamic vinegar of Modena) became widely known and available around the world in the late 20th century, typically made with concentrated grape juice mixed with a strong vinegar, then coloured and slightly sweetened with caramel and sugar.
Regardless of how it is produced, balsamic vinegar must be made from a grape product. It contains no balsam fruit. A high acidity level is somewhat hidden by the sweetness of the other ingredients, making it very mellow.
Vinegar made from sugarcane juice is most popular in the Philippines, in particular in the northern Ilocos Region (where it is called Sukang Iloko), although it also is produced in France and the United States. It ranges from dark yellow to golden brown in color, and has a mellow flavor, similar in some respects to rice vinegar, though with a somewhat "fresher '' taste. Because it contains no residual sugar, it is no sweeter than any other vinegar. In the Philippines it often is labeled as sukang maasim (Tagalog for "sour vinegar '').
Cane vinegars from Ilocos are made in two different ways. One way is to simply place sugar cane juice in large jars and it will become sour by the direct action of bacteria on the sugar. The other way is through fermentation to produce a local wine known as basi. Low - quality basi is then allowed to undergo acetic acid fermentation that converts alcohol into acetic acid. Contaminated basi also becomes vinegar.
A white variation has become quite popular in Brazil in recent years, where it is the cheapest type of vinegar sold. It is now common for other types of vinegar (made from wine, rice and apple cider) to be sold mixed with cane vinegar to lower the cost.
Sugarcane sirka is made from sugarcane juice in Punjab, India. During summer people put cane juice in earthenware pots with iron nails. The fermentation takes place due to the action of wild yeast. The cane juice is converted to vinegar having a blackish color. The sirka is used to preserve pickles and for flavoring curries.
Coconut vinegar, made from fermented coconut water or sap, is used extensively in Southeast Asian cuisine (notably the Philippines), as well as in some cuisines of India and Sri Lanka, especially Goan cuisine. A cloudy white liquid, it has a particularly sharp, acidic taste with a slightly yeasty note.
Vinegar made from dates is a traditional product of the Middle East.
The term "distilled vinegar '' as used in the United States (called "spirit vinegar '' in the UK, "white vinegar '' in Canada) is something of a misnomer because it is not produced by distillation but by fermentation of distilled alcohol. The fermentate is diluted to produce a colorless solution of 5 % to 8 % acetic acid in water, with a pH of about 2.6. This is variously known as distilled spirit, "virgin '' vinegar, or white vinegar, and is used in cooking, baking, meat preservation, and pickling, as well as for medicinal, laboratory, and cleaning purposes. The most common starting material in some regions, because of its low cost, is malt, or in the United States, corn. It is sometimes derived from petroleum. Distilled vinegar in the UK is produced by the distillation of malt to give a clear vinegar which maintains some of the malt flavour. Distilled vinegar is used predominantly for cooking, although in Scotland it is used as an alternative to brown or light malt vinegar. White distilled vinegar can also be used for cleaning.
Chinese black vinegar is an aged product made from rice, wheat, millet, sorghum, or a combination thereof. It has an inky black color and a complex, malty flavor. There is no fixed recipe, so some Chinese black vinegars may contain added sugar, spices, or caramel color. The most popular variety, Zhenjiang vinegar, originates in the city of Zhenjiang in Jiangsu Province, eastern China. Shanxi mature vinegar is another popular type of Chinese vinegar that is made exclusively from sorghum and other grains. Nowadays in Shanxi province, some traditional vinegar workshops still produce handmade vinegar that is aged for at least five years and has a high acidity. Only the vinegar made in Taiyuan and some counties in Jinzhong and aged for at least three years is considered authentic Shanxi mature vinegar according to the latest national standard.
A somewhat lighter form of black vinegar, made from rice, is produced in Japan, where it is called kurozu.
Fruit vinegars are made from fruit wines, usually without any additional flavoring. Common flavors of fruit vinegar include apple, blackcurrant, raspberry, quince, and tomato. Typically, the flavors of the original fruits remain in the final product.
Most fruit vinegars are produced in Europe, where there is a growing market for high - price vinegars made solely from specific fruits (as opposed to non-fruit vinegars that are infused with fruits or fruit flavors). Several varieties, however, also are produced in Asia. Persimmon vinegar, called gam sikcho, is popular in South Korea. Jujube vinegar, called zaocu or hongzaocu, and wolfberry vinegar are produced in China.
Vinegar made from honey is rare, although commercially available honey vinegars are produced in Italy, Portugal, France, Romania, and Spain.
A byproduct of commercial kiwifruit growing is a large amount of waste in the form of misshapen or otherwise - rejected fruit (which may constitute up to 30 percent of the crop) and kiwifruit pomace, the presscake residue left after kiwifruit juice manufacture. One of the uses for this waste is the production of kiwifruit vinegar, produced commercially in New Zealand since at least the early 1990s, and in China in 2008.
Kombucha vinegar is made from kombucha, a symbiotic culture of yeast and bacteria. The bacteria produce a complex array of nutrients and populate the vinegar with bacteria that some claim promote a healthy digestive tract, although no scientific studies have confirmed this. Kombucha vinegar primarily is used to make a vinaigrette, and is flavored by adding strawberries, blackberries, mint, or blueberries at the beginning of fermentation.
Malt vinegar, also called alegar, is made by malting barley, causing the starch in the grain to turn to maltose. Then an ale is brewed from the maltose and allowed to turn into vinegar, which is then aged. It is typically light - brown in color. In the United Kingdom and Canada, malt vinegar (along with salt) is a traditional seasoning for fish and chips. Some fish and chip shops replace it with non-brewed condiment.
According to Canadian regulations, malt vinegar is defined as a vinegar that includes undistilled malt that has not yet undergone acetous fermentation. It must be dextrorotatory and cannot contain less than 1.8 grams of solids and 0.2 grams of ash per 100 millilitres at 20 degrees Celsius. It may contain additional cereals or caramel.
Palm vinegar, made from the fermented sap from flower clusters of the nipa palm (also called attap palm), is used most often in the Philippines, where it is produced, and where it is called sukang paombong. It has a citrusy flavor note to it and imparts a distinctly musky aroma. Its pH is between five and six.
Pomegranate vinegar (Hebrew: חומץ רימונים) is used widely in Israel as a salad dressing, as well as in meat stews and dips.
Vinegar made from raisins, called khall ʻinab (Arabic: خل عنب "grape vinegar '') is used in cuisines of the Middle East, and is produced there. It is cloudy and medium brown in color, with a mild flavor.
Rice vinegar is most popular in the cuisines of East and Southeast Asia. It is available in "white '' (light yellow), red, and black varieties. The Japanese prefer a light rice vinegar for the preparation of sushi rice and salad dressings. Red rice vinegar traditionally is colored with red yeast rice. Black rice vinegar (made with black glutinous rice) is most popular in China, and it is also widely used in other East Asian countries.
White rice vinegar has a mild acidity with a somewhat "flat '' and uncomplex flavor. Some varieties of rice vinegar are sweetened or otherwise seasoned with spices or other added flavorings.
Sherry vinegar is linked to the production of the sherry wines of Jerez. Dark - mahogany in color, it is made exclusively from the acetic fermentation of wines. It is concentrated and has generous aromas, including a note of wood, making it ideal for vinaigrettes and flavoring various foods.
The term ' spirit vinegar ' is sometimes reserved for the stronger variety (5 % to 21 % acetic acid) made from sugar cane or from chemically produced acetic acid. To be called "Spirit Vinegar '', the product must come from an agricultural source and must be made by "double fermentation ''. The first fermentation is sugar to alcohol and the second alcohol to acetic acid. Product made from chemically produced acetic acid can not be called "vinegar ''. In the UK the term allowed is "Non-brewed condiment ''.
Wine vinegar is made from red or white wine, and is the most commonly used vinegar in Southern and Central Europe, Cyprus and Israel. As with wine, there is a considerable range in quality. Better - quality wine vinegars are matured in wood for up to two years, and exhibit a complex, mellow flavor. Wine vinegar tends to have a lower acidity than white or cider vinegars. More expensive wine vinegars are made from individual varieties of wine, such as champagne, sherry, or pinot gris.
Vinegar is commonly used in food preparation, in particular in pickling processes, vinaigrettes, and other salad dressings. It is an ingredient in sauces such as hot sauce, mustard, ketchup, and mayonnaise. Vinegar is sometimes used while making chutneys. It is often used as a condiment. Marinades often contain vinegar. In terms of its shelf life, vinegar 's acidic nature allows it to last indefinitely without the use of refrigeration.
Several beverages are made using vinegar, for instance Posca. The ancient Greek oxymel is a drink made from vinegar and honey, and sekanjabin is a traditional Persian drink similar to oxymel. Other preparations, known colloquially as "shrubs '', range from simply mixing sugar water or honey water with small amounts of fruity vinegar, to making syrup by laying fruit or mint in vinegar essence for several days, then sieving off solid parts, and adding considerable amounts of sugar. Some prefer to boil the shrub as a final step. These recipes have lost much of their popularity with the rise of carbonated beverages, such as soft drinks.
Many traditional remedies and treatments have been ascribed to vinegar over millennia and in many different cultures, although no medical uses are verified in controlled clinical trials. Some folk medicine uses have side effects that represent health risks.
Small amounts of vinegar (approximately 25 g of domestic vinegar) added to food, or taken along with a meal, were proposed in preliminary research to reduce the glycemic index of carbohydrate food for people with and without diabetes.
Some preliminary research indicates that taking vinegar with food increases satiety and reduces the amount of food consumed.
The growth of several common foodborne pathogens sensitive to acidity is inhibited by common vinegar (5 % acetic acid).
A 6 % solution of acetic acid, comparable in acidity to vinegar, can effectively kill mycobacteria, as tested against drug - resistant tuberculosis bacteria as well as other mycobacteria in culture.
The phenolic composition analysis of vinegar shows the presence of numerous flavonoids, phenolic acids and aldehydes.
Applying vinegar to common jellyfish stings deactivates the nematocysts, although not as effectively as hot water. This does not apply to the Portuguese man o ' war, which, although generally considered to be a jellyfish, is not; vinegar applied to Portuguese man o ' war stings can cause their nematocysts to discharge venom, making the pain worse.
Vinegar is not effective against lice. Combined with 60 % salicylic acid, it is significantly more effective than placebo for the treatment of warts.
Esophageal injury by apple cider vinegar tablets has been reported, and, because vinegar products sold for medicinal purposes are neither regulated nor standardized, they vary widely in content, pH, and other respects. There is one recorded case in which long - term heavy vinegar ingestion may have caused hypokalemia, high blood levels of renin, and osteoporosis.
White vinegar is often used as a household cleaning agent. Because it is acidic, it can dissolve mineral deposits from glass, coffee makers, and other smooth surfaces. For most uses, dilution with water is recommended for safety and to avoid damaging the surfaces being cleaned.
Vinegar is an excellent solvent for cleaning epoxy resin and hardener, even after the epoxy has begun to harden. Malt vinegar sprinkled onto crumpled newspaper is a traditional, and still - popular, method of cleaning grease - smeared windows and mirrors in the United Kingdom. Vinegar can be used for polishing brass or bronze. Vinegar is widely known as an effective cleaner of stainless steel and glass.
Vinegar has been reputed to have strong antibacterial properties. One test by Good Housekeeping 's microbiologist found that 5 % vinegar is 90 % effective against mold and 99.9 % effective against bacteria, though another study showed that vinegar is less effective than Clorox and Lysol against poliovirus. In modern times, experts have advised against using vinegar as a household disinfectant against human pathogens, as it is less effective than chemical disinfectants.
Vinegar is ideal for washing produce because it breaks down the wax coating and kills bacteria and mold. The editors of Cook 's Illustrated found vinegar to be the most effective and safest way to wash fruits and vegetables, beating antibacterial soap, water and just a scrub brush in removing bacteria.
Vinegar has been marketed as an environmentally - friendly solution for many household cleaning problems. For example, vinegar has recently been cited as an eco-friendly way to clean up pet urine.
Vinegar is effective in removing clogs from drains, polishing silver, copper and brass, and ungluing sticker - type price tags. Vinegar is also one of the best ways to restore colour to fabric furnishings such as upholstery, curtains, and carpet.
Vinegar also can help remove wallpaper. If the paper is coated with a mixture of vinegar and boiling water, it breaks down the glue for easy removal.
20 % acetic acid vinegar can be used as a herbicide. Acetic acid is not absorbed into root systems; the vinegar will kill top growth, but perennial plants may reshoot.
Most commercial vinegar solutions available to consumers for household use do not exceed 5 %. Solutions above 10 % require careful handling, as they are corrosive and damaging to the skin.
When a bottle of vinegar is opened, mother of vinegar may develop. It is considered harmless and can be removed by filtering.
Vinegar eels (Turbatrix aceti), a form of nematode, may occur in some forms of vinegar unless the vinegar is kept covered. These feed on the mother of vinegar and can occur in naturally fermenting vinegar.
Some countries prohibit the sale of vinegar above a certain percentage of acidity. As an example, the government of Canada limits the acetic acid content of vinegar to between 4.1 % and 12.3 %.
According to legend, in France during the Black Plague, four thieves were able to rob houses of plague victims without being infected themselves. When finally caught, the judge offered to grant the men their freedom, on the condition that they revealed how they managed to stay healthy. They claimed that a medicine woman sold them a potion made of garlic soaked in soured red wine (vinegar). Variants of the recipe, called Four Thieves Vinegar, have been passed down for hundreds of years and are a staple of New Orleans hoodoo practices.
A solution of vinegar can be used for water slide decal application as used on scale models and musical instruments, among other things. One part white distilled vinegar (5 % acidity) diluted with two parts of distilled or filtered water creates a suitable solution for the application of water - slide decals to hard surfaces. The solution is very similar to the commercial products, often described as "decal softener '', sold by hobby shops. The slight acidity of the solution softens the decal and enhances its flexibility, permitting the decal to cling to contours more efficiently.
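A minimal sketch of the dilution arithmetic behind this recipe, assuming simple mixing by volume (the function and variable names below are illustrative only):

```python
def diluted_acidity(vinegar_parts: float, water_parts: float, vinegar_acidity_pct: float) -> float:
    """Approximate acetic acid concentration (%) of a vinegar/water mixture,
    assuming the liquids combine by volume with no interaction."""
    total_parts = vinegar_parts + water_parts
    return vinegar_acidity_pct * vinegar_parts / total_parts

# One part 5 % white distilled vinegar diluted with two parts water:
print(round(diluted_acidity(1, 2, 5.0), 2))  # roughly 1.67 % acetic acid
```

Under these assumptions, the 1: 2 mixture works out to roughly 1.7 % acetic acid, in line with the slight acidity described above.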
When baking soda and vinegar are combined, the bicarbonate ion of the baking soda reacts with the acetic acid in the vinegar to form carbonic acid, which decomposes into carbon dioxide and water (overall: NaHCO₃ + CH₃COOH → CH₃COONa + H₂O + CO₂).
|
forget the last lecture here the real story | The Last lecture - wikipedia
The Last Lecture is a New York Times best - selling book co-authored by Randy Pausch -- a professor of computer science, human - computer interaction, and design at Carnegie Mellon University in Pittsburgh, Pennsylvania -- and Jeffrey Zaslow of the Wall Street Journal. The book is based on a lecture Pausch gave in September 2007 entitled "Really Achieving Your Childhood Dreams ''.
Pausch delivered his "Last Lecture '', titled "Really Achieving Your Childhood Dreams '', at Carnegie Mellon on September 18, 2007. This talk was modeled after an ongoing series of lectures where top academics are asked to think deeply about what matters to them, and then give a hypothetical "final talk '', i.e., "what wisdom would you try to impart to the world if you knew it was your last chance? ''
A month before giving the lecture, Pausch had received a prognosis that the pancreatic cancer, with which he had been diagnosed a year earlier, was terminal. Before speaking, Pausch received a long standing ovation from a large crowd of over 400 colleagues and students. When he motioned them to sit down, saying, "Make me earn it '', someone in the audience shouted back, "You did! '' During the lecture Pausch was upbeat and humorous, shrugging off the pity often given to those diagnosed with terminal illness. At one point, to prove his own vitality, Pausch dropped down and did push - ups on stage.
Pausch begins by setting up the various topics being discussed. The first of three subjects, his childhood dreams, is introduced by relaying the overall premise of why he is stating his dreams, saying, "inspiration and permission to dream are huge ''. The second topic in "Really Achieving Your Childhood Dreams '' is titled "Enabling the Dreams of Others ''. In this section, Pausch discusses his creation of the course "Building Virtual Worlds '' that involves the student development of virtual realities. Through this course, Pausch creates a program called "Alice - The Infinitely Scalable Dream Factory '' because he wants tens of millions of people to chase their dreams. This software allows kids to make movies and games, giving them the opportunity to learn something hard while still having fun. He believes that "the best way to teach somebody something is to have them think that they 're learning something else. '' For the third and final topic in his lecture, called "Lessons Learned '', Dr. Pausch reiterates and introduces a few new lessons that he has learned and accumulated over his lifetime. Arguably the most meaningful point Pausch made comes at the very end of his lecture, when he states: "It 's not about how to achieve your dreams, it 's about how to lead your life. If you lead your life the right way, the karma will take care of itself, the dreams will come to you. ''
The Last Lecture fleshes out Pausch 's lecture and discusses everything he wanted his children to know after his pancreatic cancer had taken his life. It includes stories of his childhood, lessons he wants his children to learn, and things he wants his children to know about him. He repeatedly stresses that one should have fun in everything one does, and that one should live life to its fullest because one never knows when it might be taken.
In the book, Pausch remarks that people told him he looked like he was in perfect health, even though he was dying of cancer. He discusses finding a happy medium between denial and being overwhelmed. He also states that he would rather have cancer than be hit by a bus, because if he were hit by a bus, he would not have had the time he spent with his family nor the opportunity to prepare them for his death.
The 2012 edition of the book features a short foreword written by Jai, his widow, reflecting on the time since her husband 's death.
The Last Lecture achieved commercial success. It became a New York Times bestseller in 2008, and remained on the list for 112 weeks, continuing into the summer of 2011. It has been translated into 48 languages and has sold more than 5 million copies in the United States alone. There was also speculation that the book would be turned into a movie, an idea that Pausch personally turned down. He commented that "there 's a reason to do the book, but if it 's telling the story of the lecture in the medium of film, we already have that '', in a reference to the video of the lecture.
|
2006 lexus is engine 3.5 l v6 (350) | Lexus IS - wikipedia
The Lexus IS (Japanese: レクサス ・ IS, Rekusasu IS) is a compact executive car sold by Lexus since 1999. The IS was originally sold under the Toyota Altezza nameplate in Japan from 1998 (the word Altezza is Italian for "highness ''). The IS was introduced as an entry - level sport model positioned below the ES in the Lexus lineup. The Altezza name is still used at times to refer to chromed car taillights like those fitted to the first - generation model, known as "Altezza lights ''.
The first - generation Altezza (codename XE10) was launched in Japan in October 1998, while the Lexus IS 200 (GXE10) made its debut in Europe in 1999 and in North America as the IS 300 (JCE10) in 2000. The first - generation, inline - 6 - powered IS featured sedan and wagon variants. The second - generation IS (codename XE20) was launched globally in 2005 with V6 - powered IS 250 (GSE20) and IS 350 (GSE21) sedan models, followed by a high - performance V8 sedan version, the IS F, in 2007, and hardtop convertible versions, the IS 250 C and IS 350 C, in 2008. The third - generation Lexus IS premiered in January 2013 and includes the V6 - powered IS 350 and IS 250, hybrid IS 300h and performance - tuned F Sport variants. The IS designation stands for Intelligent Sport.
Produced as a direct competitor to the luxury sports sedans of the leading European luxury marques, the XE10 series Toyota Altezza and Lexus IS was designed with a greater performance emphasis than typically seen on prior Japanese luxury vehicles. The engineering work was led by Nobuaki Katayama from 1994 to 1998 under the 038T program code, who was responsible for the AE86 project. Design work by Tomoyasu Nishi was frozen in 1996 and filed under patent number 1030135 on December 5, 1996, at the Japan Patent Office. At its introduction to Japan, it was exclusive to Japanese dealerships called Toyota Netz Store, until Lexus was introduced to Japan in 2006. The Japan - sold AS200 Altezza sedan and AS300 Altezza Gita formed the basis for the Lexus IS 200 and IS 300 models respectively, sold in markets outside Japan, primarily North America, Australia, and Europe. The Altezza Gita was a hatchback - station wagon version sold in Japan and was known in the US and Europe as the Lexus IS SportCross. The AS300 Altezza Gita was the only Altezza with the 2JZ - GE engine, while in export markets, this engine was available in the sedan models as well, as the Lexus IS300 Sedan.
Introduced in 1998 with the AS200 (Chassis code GXE10) and RS200 (chassis code SXE10) sedans, the compact vehicle was produced using a shortened, front - engine, rear - wheel - drive midsize platform, allowing Japanese buyers to take advantage of tax savings imposed by Japanese government regulations concerning vehicle exterior dimensions and engine displacement, and adapted parts from the larger second - generation Aristo / GS. The 2.0 - liter 1G - FE inline - six powered AS200 (GXE10, sedan) featured a five - speed manual transmission as standard, while a four - speed automatic was optional. The 2.0 - liter 3S - GE inline - four - powered RS200 (SXE10, sedan) featured a six - speed manual transmission, while a five - speed automatic was optional. The different size engine choices gave Japanese buyers a choice of which annual road tax obligation they wanted to pay, and the larger engine offered more standard equipment as compensation.
The design received critical acclaim at its 1998 launch and was awarded Japan 's "Car of the Year '' honor for 1998 -- 1999. A few months later, Lexus began marketing the IS 200 equivalent models in Europe. The IS 200 in Europe was listed as producing 153 brake horsepower (114 kW), with a top speed of 216 kilometres per hour (134 mph), and 0 to 100 kilometres per hour (0 -- 62 mph) acceleration in 9.3 seconds. The styling cues of the rear light clusters on the first - generation models were copied by a number of after - market accessory manufacturers for applications on other vehicles. This iconic style of one or more internal lamp units, covered with a clear (or tinted) perspex cover made popular by Lexus, became known in many circles as ' Lexus - style ' or ' Altezza lights '. The taillight style became so popular, that it influenced the development of clear - glass LED taillights. The XE10 's chief engineer was Nobuaki Katayama, while the chief test driver and test engineer was Hiromu Naruse.
In July 2000, a hatchback / station wagon model, the AS300 (Chassis code JCE10), was introduced featuring a 3.0 - liter 2JZ - GE inline - six engine. Equipped with rear - or all - wheel drive (JCE10, RWD Gita wagon; JCE15, 4WD Gita wagon), the AS300 was only available with an automatic gearbox; a five - speed automatic for the RWD Gita wagon and a four - speed automatic for the 4WD Gita wagon. The six - cylinder version (2JZ - GE) was only available in Japan on the Gita models. In the US, the IS 300 sedan debuted in 2000 as 2001 model and the wagon debuted in 2001 as a 2002 model with the same 3.0 - liter six - cylinder engine (the 2.0 - liter six - cylinder was not available), while in Europe, the IS 300 joined the IS 200 in the model lineup. All IS 300 models in the US were initially only available with the five - speed automatic transmission; this was also the case in Europe. However, a five - speed manual was made available in the US in 2001 for the 2002 model year (not available on the SportCross wagon). Visually the exterior of the European IS 200 Sport and 300 were almost identical, the only differences being the boot insignia and the larger - engined model initially having clear front indicators (later generalized to IS 200 range).
The first - generation IS interior featured unique elements not typically found in other Lexus models. These included a chrome metal ball shifter (USDM only, other markets received a leather - trimmed shifter), (optional) pop - up navigation screen, and chronograph - style instrument panel (with mini gauges for temperature, fuel economy, and volts). For the European and Australian markets, the IS 300 gained full leather seats rather than the leather / escaine of the 200, plus Auto - dimming rear view and side mirrors, and HID headlamps. In the US, the Environmental Protection Agency listed the IS 300 as a subcompact car; although it technically had enough overall volume to be called a compact, rear seat room exhibited subcompact dimensions.
The US National Highway Traffic Safety Administration (NHTSA) crash test results in 2001 gave the IS 300 the maximum five stars in the Side Driver and Side Rear Passenger categories, and four stars in the Frontal Driver and Frontal Passenger categories. The Insurance Institute for Highway Safety (IIHS) rated the IS "Good '' overall for frontal collisions and "Good '' in all six measured front impact categories.
For the first - generation IS in the US market, sales hit a high of 22,486 units in 2001; subsequent sales years were less than forecast, and below the 10,000 - unit mark in 2004. The IS 200 fared better relative to sales targets in Europe and Asia, while still well short of the sales volume achieved by the Mercedes - Benz C - Class and other, mostly German - made competitors. This trend was indicative of Lexus ' smaller global status; while Lexus ' range of cars was very successful in North America, the marque 's sales lagged behind its German rivals in Europe. In Europe, the lack of a manual gearbox option for the IS 300 may have limited sales in contrast to its rivals, the BMW 3 Series and the Mercedes C - Class.
In 2000, TTE had released a compressor kit for the IS 200 on the European market. An Eaton supercharger at 0.3 - bar pressure boosted the power to 153 kilowatts (205 hp) without sacrificing fuel consumption (+ 3.3 %). The kit was initially available as an aftermarket fitment, but could also be obtained as OEM Lexus accessory on new cars through the official Lexus dealer network and was fully covered by the standard warranty. This model variant was discontinued when the IS 300 was released on the European market.
In 2003 for the 2004 model year, the IS line received a minor facelift (designed by Hiroyuki Tada). On the exterior, was a new 11 - spoke wheel design, new fog lights, and smoked surrounding trim for the headlights and taillights. On the interior, a new 2 - position memory function was added for the driver seat, a maintenance indicator light, automatic drive - away door locking system, a new storage compartment on the dash (for models without the navigation system) and new trim highlights.
An official concept model, the MillenWorks - built Lexus IS 430 was unveiled at the SEMA Show in Las Vegas, Nevada in 2003. The IS 430 prototype was an IS 300 fitted with a 4.3 - liter V8 from the Lexus GS. Lexus dubbed the IS 430 a one - off with no plans for production. In Europe, Toyota Team Europe (TTE) shoehorned a supercharged 4.3 - liter V8 into an IS 300 bodyshell, the result was a 405 PS (298 kW) ECE sedan.
The second - generation IS was introduced at the Geneva Motor Show in March 2005 as a pre-production model, with the production version debuting at the 2005 New York Auto Show that April. Sales of the sedan began worldwide in September and October 2005 as a 2006 model, with the Toyota Altezza name discontinued upon the introduction of the Lexus division in Japan, and the slow - selling SportCross station wagon version deleted from the lineup altogether.
Production of the sedan commenced in September 2005 at the Miyata plant in Miyawaka, Fukuoka, supplemented in October 2005 with the Tahara plant at Tahara, Aichi. Production of the IS F started in December 2007 at Tahara. The facility at Miyata began manufacture of the IS C in April 2009.
In North America, IS models sold at launch included the IS 250 and IS 350 sedans; in parts of Europe, the IS models sold by Lexus included the IS 250 and IS 220d sedans. The IS 250 was also available in Australia, New Zealand, Thailand, Singapore, Hong Kong, Taiwan, Chile (automatic only), South Africa and South Korea.
All second - generation IS models offered a more typical Lexus interior compared to the previous generation with a focus on luxurious accoutrements. The interior featured memory leather seats, lightsaber - like electroluminescent instrument display lighting and LED interior lighting accents, the choice of faux - metallic or optional Bird 's Eye Maple wood trim (aluminum composite on the IS F), and SmartAccess keyless entry with push - button start. Options ranged from touchscreen navigation with backup camera to a Mark Levinson premium sound system and Dynamic Radar Cruise Control.
On 6 December 2006, Lexus officially confirmed the existence of a high - performance variant of the second - generation IS called the IS F. The Lexus IS F sedan (USE20) premiered at the 2007 North American International Auto Show on 8 January 2007 as the launch product of Lexus ' F marque lineup of performance - focused vehicles. The IS F went on sale several months later in North America and Europe. The IS F was capable of 0 -- 60 mph (0 -- 97 km / h) in 4.6 seconds, and had a top speed of 170 mph (270 km / h) (electronically limited).
The introduction of the second - generation IS model marked a resurgence in sales for the IS line, with a 332 % increase overall in 2006 compared to the previous year. In its first year of sales, the IS sold over 49,000 units, making it one of the ten best - selling luxury cars in the US. The IS line later took a median position in the entry - luxury market; in 2008 it sold behind the variants of the BMW 3 Series, new Mercedes - Benz C - Class, and Cadillac CTS, and ahead of the Acura TL, Audi A4, and Infiniti G35 sedan. Outside the US, the Lexus IS spearheaded Lexus ' growing sales efforts in Europe, Australia, and South Africa, becoming the best - selling model in Lexus ' lineup in many of the aforementioned markets. In the US, as of 2011, the Lexus IS was the third place best - selling vehicle from the marque after the Lexus RX and Lexus ES.
In 2008, the IS line received a styling refresh, and the suspension and steering were retuned for improved stability and control. After three years with only one body style, the IS returned with a second body style, this time as a hardtop convertible, on 2 October 2008 when the IS 250 C debuted at the Paris Motor Show. A more powerful IS 350 C also became available, with engine specifications analogous to those on the sedan models. The IS convertible went on sale in Europe in 2009, in North America in May 2009, and an IS 300 C was also produced for certain regions. The mid-cycle refresh in 2008 saw slight styling revisions to the interior.
In 2010, coinciding with the second IS line refresh, the revised diesel IS 220d was detuned for improved fuel consumption figures but lowered power output by 27 bhp (20 kW; 27 PS). Building on its "F Sport '' line of parts and accessories for the IS 250 / 350, Lexus added factory - produced F Sport IS models in 2010. The second refresh also includes further interior updates for the IS line.
Changes to the IS C include Intelligent Transport Systems and Dedicated Short Range Communication units becoming standard equipment. The US F SPORT Package gained revised silver metallic interior trim, while the Japan F SPORT Package added a new dark rose interior color and a medium silver ornament panel. F SPORT performance accessories include 19 - inch forged wheels (set of four) with hardware, brake upgrades, front and rear axle sets, a carbon fiber engine cover, a carbon fiber leather shift knob, floor mats (four - piece set), a lowering spring set, a performance air intake, a performance dual exhaust, a shock set (set of four), and a sway bar set. Japan models went on sale on 22 August 2013; early models include the IS 250C and IS 350C. US models went on sale as 2014 model year vehicles; early models include the IS 250C and IS 350C.
Changes to the IS F include a carbon rear spoiler, front LED fog lamps, sports seats with an embossed ' F ' logo on the head rests, Alcantara upholstery on the door trim and center console, and a standard Intelligent Transport Systems and Dedicated Short Range Communication unit. The IS F Dynamic Sport Tuning model (available in Japan) includes a 7 PS (5 kW; 7 hp) engine power boost via a low - friction piston and pump, strengthened body contact, an exclusive carbon front spoiler / rear diffuser, 7 kg (15 lb) lower body weight via an exclusive titanium muffler, exclusive orange brake calipers with the LEXUS logo, an exclusive orange accent engine head cover, exclusive carbon interior panels at the centre console and door switch base with nameplate, and a choice of 7 body colours including an exclusive starlight black glass flake. Japan models went on sale on 5 September 2013. US models went on sale as 2014 model year vehicles.
Safety features on the IS models ranged from multiple airbags to stability control systems. A Pre-Collision System (PCS) was the first offered in the entry - luxury performance sedan market segment. NHTSA crash test results rated the second - generation IS the maximum five stars in the Side Driver and Rollover categories, and four stars in the Frontal Driver, Frontal Passenger, and Side Rear Passenger categories; Insurance Institute for Highway Safety scores were "Good '' overall score for all fourteen measured categories in the front and side impact crash tests.
The second - generation IS marked the next introduction of Lexus ' new L - finesse design philosophy on a production vehicle, following the premiere of the 2006 Lexus GS performance sedan. The sedan 's exterior design featured sleeker, coupe - like contours, a fastback profile, and a repeated arrowhead motif in the front fascia and side windows. The IS sedans had a drag coefficient of Cd = 0.28. The forward design was reminiscent of the earlier Lexus LF - C convertible coupe concept.
The IS 250, IS 350 and IS F feature a D - 4 (IS250) or D - 4S (IS350 and IS F) direct injection system with direct fuel injectors (D - 4 and D - 4S) and port fuel injectors (D - 4S only). Certain Asian markets feature the IS 300 (GSE22) without direct injection.
Several concept models preceded the launch of the third - generation IS. The first was the LF - LC (2012), a rear - wheel drive concept coupe with the mesh pattern of the spindle grille rendered in 3D sculptural form, daytime running lights shaped like an "L '', vertical front fog lamps in a fading dot matrix pattern, a glass roof with a cantilevered pillar and a glass - to - glass juncture inspired by modern architecture, rear fog lamps, twin 12.3 - inch LCD screens providing information and navigation displays, leather and suede interior upholstery with brushed metal trim and wood accents, race - inspired front seats formed of multiple layers that repeat the interlacing curves defining the cabin interior, and a racing - style steering wheel upholstered in carbon fibre with integrated controls and a start button. The vehicle was unveiled at the 2012 North American International Auto Show.
That concept was followed by the LF - LC Blue (2012), a rear - wheel drive concept coupe based on the LF - LC, with an Opal Blue body color, an Atkinson cycle combustion engine, a battery pack, and a white and brown interior. The vehicle was unveiled at the 2012 Australian International Motor Show, and later at the 2012 LA Auto Show.
The LF - CC concept (2012) is a rear - wheel drive coupe incorporating designs from the LF - LC concept and the Lexus LFA. It included a 2.5 - litre 4 - cylinder Atkinson cycle petrol engine with D - 4S direct injection technology, a water - cooled permanent magnet electric motor, a 3 LED - projector headlamp design, Daytime Running Lights (DRLs) integrated into the upper bumper surface, a rear spoiler integrated within the boot lid, L - shaped combination lamps with a three - dimensional design, Fluid Titanium body colour, and a 2 - zone dashboard, seats, door panels and instrument binnacle hood upholstered in amber leather. The vehicle was unveiled at the 2012 Paris Motor Show, followed by Auto Shanghai 2013.
Exterior design work was done by Masanari Sakae during 2010 -- 2011 and Yuki Isogai (F - Sport) in 2011.
The IS F Sport models include enhanced handling and performance, Adaptive Variable Suspension, and Variable Gear Ratio Steering (IS 350). The F Sport not only handles differently from the standard production model, but its more aggressive styling also sets it apart. F - Sport styling includes an edition - specific F - Sport pattern front grille, F - Sport logo badges, and five - spoke split graphite wheels. Inside the cabin are bright, carbon fiber - like trim, extra bolstered performance seats, an all - black headliner, and a moving vessel gauge cluster, inspired by the Lexus LFA, that displays navigation and audio information. The F - Sport models have an edition - specific Ultra White exterior and a Rioja Red interior. The 2014 model year also served as the first year to offer all - wheel drive in the IS F Sport line - up.
The new IS sedan was unveiled at the January 2013 North American International Auto Show, followed by Auto Shanghai 2013 and the Octagon Club in South Korea.
International models went on sale in mid-2013. Early models included the IS 250 RWD, IS 250 AWD, IS 300h and IS 350 RWD. The hybrid IS 300h will be sold in Europe, Japan, and select international markets.
US models went on sale as 2014 model year vehicles on June 28, 2013. Early models include IS 250 RWD, IS 250 AWD, IS 350 RWD, IS 350 AWD. In 2015, for the 2016 model year, the IS 250 was discontinued and replaced by the rear wheel drive only IS 200t. The IS 300 is only offered with all - wheel drive, while the top of the line IS 350 can be ordered with either drivetrain.
Chinese models went on sale in 2013. Early models include IS 250, IS 250 F SPORT.
Japanese models went on sale on 16 May 2013. Early models include IS 250, IS 250 AWD, IS 350, IS 300h.
European models arrived at dealerships in 2013 June / July. Early models include IS 250, IS 300h.
South Korean models went on sale on 27 June 2013. Early models include IS 250 Supreme, IS 250 Executive.
Australian models went on sale in July 2013. Only RWD versions were on offer; the models included the IS 250, IS 300h and the IS 350. The IS 250 was dropped from the line - up in September 2015 and replaced with the IS 200t.
In April 2016, Lexus teased an image of a revamped third - generation model which includes new headlights, taillights, front fascia, and hood. It debuted at the April 2016 Beijing Auto Show with new interior technology improvements, including a 10.3 - inch infotainment screen, a new steering wheel, and contrast stitching along the dash, among other changes.
Toyota Racing Development F SPORT parts for the Japanese Lexus IS sedan included a front spoiler, side spoiler, rear spoiler, sport muffler and rear diffuser, diamond - like carbon shock absorbers, a 19 - inch aluminium wheel set (19x8.5J front and 19x9J rear rims, 45 mm front and 50 mm rear insets, 245 / 35ZR19 front and 265 / 30ZR19 rear tires), a member brace, and a performance damper.
A race car based on the Lexus LF - CC entered the 2014 Super GT GT500 class, replacing the SC 430. Vehicle shakedown began at the Suzuka Circuit.
Production at Tahara plant in Japan began on 25 April 2013.
As of June 2013, sales of the Lexus IS had reached 1,919 units.
Between 16 May 2013 and 16 June 2013, orders for IS sedans reached approximately 7,600 units, including 2,100 for the IS 250 and IS 350 and 5,500 for the IS 300h.
As part of the 2014 Lexus IS sports sedan launch in the US, Lexus, and the Tony Hawk Foundation asked their fans and supporters to be part of a fan based decal to be featured on the Lexus IS F CCS - R race car competing in Pikes Peak International Hill Climb. Fans were able to enter their names via a Lexus Facebook post, Lexus Google+ post, comment on a Lexus YouTube IS F CCS - R video, or through Twitter and Instagram using # Lexus14K.
As part of the 2014 Lexus IS sports sedan launch in the US, 2 new television ads (Crowd, Color Shift) were produced by Lexus ' agency of record, Team One, with Original music from Devo 's Mark Mothersbaugh, and directed by Jonas Åkerlund. The ' Crowd ' ad emphasizes that things designed to draw a crowd are good, but leaving the crowd behind is more rewarding. The ' Color Shift ' ad shows it 's more fun and exciting to blend out than blend in. The Two additional ads (This is Your Move, Intense) were created by Lexus ' multicultural agency, Walton Isaacson, as part of the campaign. ' This is Your Move ' was geared to the African - American audience, features Los Angeles Dodgers center fielder Matt Kemp as he searches for something that matches his ambitious and driven personality. ' Intense ' is targeted to the Hispanic audience and follows a young couple as they experience the thrills of driving the redesigned IS 250.
As part of the 2014 Lexus IS sports sedan launch in the US, Lexus outfitted respective editors of Motor Trend and ArrestedMotion.com with the first of Kogeto 's ' Joey ' panoramic cameras to showcase the "performance and stunning design '' of the 2014 Lexus IS.
As part of the 2014 Lexus IS sports sedan launch in the US, Lexus invited more than 200 followers on Instagram, along with their smartphones, to make a commercial of the 2014 Lexus IS using hundreds of their photos of the car strung together into a video.
As part of the 2014 Lexus IS sport sedan launch in the US, Lexus created and hosted a MADE Fashion Week event on 5 September 2013, debuting a first - ever live holographic performance art experience titled ' Lexus Design Disrupted ', which featured supermodel Coco Rocha and a bold retrospective from the archives of designer Giles Deacon in a creative concept inspired by the IS and the brand 's commitment to design and technology.
As part of the 2014 Lexus IS sports sedan launch in the US, Lexus partnered with NBCUniversal for the ' It 's Your Move After Dark ' campaign. The ads took advantage of real - time marketing by allowing viewers to contribute ad concepts via social networks to influence the creative for the Lexus advertisements. The campaign featured a series of live, improvisational short comedy ads that ran in the commercial pods during NBC 's Late Night with Jimmy Fallon. The ads were based on real - time viewer social media submissions each Thursday and performed by New York comedy troupes including Fun Young Guys, Magnet Theater Touring Company, MB 's Dream and Stone Cold Fox. Every Thursday night for four weeks beginning 19 September, as part of an early commercial break on NBC 's Late Night with Jimmy Fallon, improv comedians asked viewers to suggest ad concepts with the # LexusIS hashtag via social media platforms, including Facebook, Instagram, Tumblr and Twitter. Submissions influenced the content of the ad, and a live, on - air improv performance based on the viewer 's ad suggestion followed at the final commercial break. East and west coast live broadcasts of the commercials were completely different each time, based on their respective social media suggestions. Each Thursday 's advertisement was broadcast live from under the Brooklyn Bridge in New York City. In anticipation of the campaign launch, a 15 - second promotional teaser premiered on 18 September in NBC 's late night programming commercial pods. Additionally, the selected comedic concepts and submissions were made available for viewing and sharing on a custom page at NBC.com the day after each live broadcast. Fans could continue to engage with exclusive, behind - the - scenes content from the campaign on NBC.com.
As part of the 2014 Lexus IS sports sedan launch in the US, Lexus partnered with DeviantART to start a campaign to challenge the design community to show their vision for the 2014 IS with custom exterior treatments and modifications. The ultimate IS sports sedan concept would be modified by VIP Auto Salon in 10 weeks to reflect the rendering, and be displayed at the Lexus space at SEMA.
In November 2016, Lexus partnered with Huy Fong Foods to produce a Sriracha edition of the IS for the 2016 LA Auto Show. The car is painted red with chili - like flakes and has green accents that evoke a Sriracha bottle cap. In addition, the trunk is filled with 43 bottles of Sriracha sauce.
The first - generation IS 200 / 300 and RS200 series was used by many racing teams, including TRD, to race in various touring car racing series across Asia. In Europe, the Lexus IS 200 was raced in the British Touring Car Championship (through organizations such as BTC Racing), and the IS 300 was raced in the US via the Motorola Cup North American Street Stock Championship touring car series (with the manufacturer - sanctioned Team Lexus).
In 2001, Team Lexus entered three IS 300s in the third race of the 2001 Grand - Am Cup season at Phoenix, Arizona, and won their first IS 300 victory that year at the Virginia International Raceway. In 2002, Team Lexus raced the IS 300 in the Grand - Am Cup ST1 (Street Tuner) class, winning both the Drivers ' and Team Championships, as well as a sweep of the top three finishes at Circuit Mont - Tremblant in Quebec, Canada.
In 2008, the second - generation IS 350 was entered in the Super GT race series in the GT300 class (cars with approximately 300 horsepower). The No. 19 Team Racing Project Bandoh IS 350 driven by Manabu Orido and Tsubasa Abe achieved its first victory in its fifth race at the Motegi GT300 race. In 2009, The Project Bandoh WedsSport IS 350, driven by Manabu Orido and Tatsuya Kataoka, won both driver and team title in the GT300 class that season.
In April 2009, a Lexus IS F entered by Gazoo Racing finished second to the team 's Lexus LF - A in the SP8 class in the ADAC - Westfalenfahrt VLN 4h endurance race. An IS F was also entered in the 2009 24 Hours Nürburgring race and finished third in the SP8 class. In August 2009, an IS F entered by Gazoo Racing and driven by Peter Lyon, Hideshi Matsuda, and Kazunori Yamauchi won the SP8 class at the DMV Grenzlandrennen VLN race. Kazunori Yamauchi is the developer of the Gran Turismo series, in which the IS line is playable in several versions, and the IS F racer carried test equipment for future game modes. The three drivers, along with Owen Mildenhall, participated in the 2010 24 Hours Nürburgring and finished fourth in the SP8 class, behind the first - place Lexus LFA.
In 2012, Japanese drift racer Daigo Saito entered an IS 250 C in the Formula Drift Asia series. The car, which had been damaged in the 2011 Japan earthquake and tsunami and was due to be scrapped, was purchased by Saito and heavily customized for drift racing use. The most notable modification was the swapping of the stock engine for a 2JZ - GTE from a Mark IV Toyota Supra. With 1,200 horsepower under the hood, Saito dominated that season, winning all the rounds and taking the championship.
Sales data for Lexus IS generations are as follows, with chart numbers sourced from manufacturer yearly data.
|
how many cirque du soleil shows have there been | Cirque du Soleil - wikipedia
Cirque du Soleil (pronounced (sɪʁk dzy sɔ. lɛj), "Circus of the Sun '' or "Sun Circus '') is a Canadian entertainment company. It is the largest theatrical producer in the world. Based in Montreal, Quebec, Canada, and located in the inner - city area of Saint - Michel, it was founded in Baie - Saint - Paul on July 7, 1984, by two former street performers, Guy Laliberté and Gilles Ste - Croix.
Initially named Les Échassiers ((lez ‿ e. ʃa. sje), "The Waders ''), they toured Quebec in 1980 as a performing troupe. Their initial financial hardship was relieved in 1983 by a government grant from the Canada Council for the Arts, as part of the 450th anniversary celebrations of Jacques Cartier 's voyage to Canada. Le Grand Tour du Cirque du Soleil was a success in 1984, and after securing a second year of funding, Laliberté hired Guy Caron from the National Circus School to re-create it as a "proper circus ''. Its theatrical, character - driven approach and the absence of performing animals helped define Cirque du Soleil as the contemporary circus ("nouveau cirque '') that it remains today.
Each show is a synthesis of circus styles from around the world, with its own central theme and storyline. Shows employ continuous live music, with performers rather than stagehands changing the props. After financial successes and failures in the late 1980s, Nouvelle Expérience was created -- with the direction of Franco Dragone -- which not only made Cirque du Soleil profitable by 1990, but allowed it to create new shows.
Cirque du Soleil expanded rapidly through the 1990s and 2000s, going from one show to 19 shows in over 271 cities on every continent except Antarctica. The shows employ approximately 4,000 people from over 40 countries and generate an estimated annual revenue exceeding US $810 million. The multiple permanent Las Vegas shows alone play to more than 9,000 people a night, 5 % of the city 's visitors, adding to the 90 million people who have experienced Cirque du Soleil 's shows worldwide.
In 2000, Laliberté bought out Gauthier and, with 95 % ownership, continued to expand the brand. In 2008, Laliberté split 20 % of his share equally between two investment groups, Istithmar World and Nakheel of Dubai, in order to further finance the company 's goals. In partnership with these two groups, Cirque du Soleil had planned to build a residency show in the United Arab Emirates in 2012, directed by Guy Caron (Dralion) and Michael Curry. But after Dubai 's financial problems in 2010, caused by the 2008 recession, Laliberté stated that the project had been "put on ice '' for the time being and that he might look for another financial partner to bankroll the company 's future plans, even being willing to give up another 10 % of his share. Several more shows are in development around the world, along with a television deal, a women 's clothing line and possible ventures into other mediums such as spas, restaurants and nightclubs. Cirque du Soleil also produces a small number of private and corporate events each year (past clients include the royal family of Dubai and the 2007 Super Bowl).
The company 's creations have received numerous prizes and distinctions, including a Bambi Award in 1997, a Rose d'Or in 1989, three Drama Desk Awards in 1991, 1998 and 2013, three Gemini Awards, four Primetime Emmy Awards, and a star on the Hollywood Walk of Fame. In 2000, Cirque du Soleil was awarded the National Arts Centre Award, a companion award of the Governor General 's Performing Arts Awards. In 2002, Cirque du Soleil was inducted into Canada 's Walk of Fame.
In 2015, TPG Capital, Fosun Industrial Holdings and the Caisse de dépôt et placement du Québec purchased 90 % of Cirque du Soleil. The sale received regulatory approval from the Government of Canada on 30 June 2015.
At age 18, interested in pursuing some kind of performing career, Guy Laliberté quit college and left home. He toured Europe as a folk musician and busker. By the time he returned home to Canada in 1979, he had learned the art of fire breathing. Although he became "employed '' at a hydroelectric power plant in James Bay, his job ended after only three days due to a labour strike. He decided not to look for another job, instead supporting himself on his unemployment insurance. He helped organize a summer fair in Baie - Saint - Paul with the help of a pair of friends named Daniel Gauthier and Gilles Ste - Croix.
Gauthier and Ste - Croix were managing a youth hostel for performing artists named Le Balcon Vert at that time. By the summer of 1979, Ste - Croix had been developing the idea of turning the Balcon Vert and the talented performers who lived there into an organized performing troupe. As part of a publicity stunt to convince the Quebec government to help fund his production, Ste - Croix walked the 56 miles (90 km) from Baie - Saint - Paul to Quebec City on stilts. The ploy worked, giving the three men the money to create Les Échassiers de Baie - Saint - Paul. Employing many of the people who would later make up Cirque du Soleil, Les Échassiers toured Quebec during the summer of 1980.
Although well received by audiences and critics alike, Les Échassiers was a financial failure. Laliberté spent that winter in Hawaii plying his trade while Ste - Croix stayed in Quebec to set up a nonprofit holding company named "The High - Heeled Club '' to mitigate the losses of the previous summer. In 1981, they met with better results. By that fall, Les Échassiers de Baie - Saint - Paul had broken even. The success inspired Laliberté and Ste - Croix to organize a summer fair in their hometown of Baie - Saint - Paul.
This touring festival, called "La Fête Foraine '', first took place in July 1982. La Fête Foraine featured workshops to teach the circus arts to the public, after which those who participated could take part in a performance. Ironically, the festival was barred from its own hosting town after complaints from local citizens. Laliberté managed and produced the fair over the next couple of years, nurturing it into a moderate financial success. But it was in 1983 that the government of Quebec gave him a $1.5 million grant to host a production the following year as part of Quebec 's 450th anniversary celebration of the French explorer Jacques Cartier 's discovery of Canada. Laliberté named his creation "Le Grand Tour du Cirque du Soleil ''.
The duration of each touring show was traditionally split into two acts of an hour each separated by a 30 - minute interval; however, as of 2014, due to cost cutting issues, the shows have now been reduced to a shorter 55 - minute first act followed by a 50 - minute second act, still including a 30 - minute interval. Permanent shows are usually 90 minutes in length without any intermission. This excludes Joyà (the permanent show in Riviera Maya, Mexico), which is only 70 minutes in length. Typically touring shows as well as resident shows perform a standard 10 shows a week. Touring shows usually have one ' dark day ' (with no performances) while resident shows have two.
Originally intended to only be a one - year project, Cirque du Soleil was scheduled to perform in 11 towns in Quebec over the course of 13 weeks running concurrent with the third La Fête Foraine. The first shows were riddled with difficulty, starting with the collapse of the big top after the increased weight of rainwater caused the central mast to snap. Working with a borrowed tent, Laliberté then had to contend with difficulties with the European performers. They were so unhappy with the Quebec circus 's inexperience that they had, at one point, sent a letter to the media complaining about how they were being treated.
The problems were only transient, however, and by the time 1984 had come to a close, Le Grand Tour du Cirque du Soleil was a success. Having only $60,000 left in the bank, Laliberté went back to the Canadian government to secure funding for a second year. While the Canadian federal government was enthusiastic, the Quebec provincial government was resistant to the idea. It was not until Quebec 's premier, René Lévesque, intervened on their behalf that the provincial government relented. The original big top tent that was used during the 1984 Le Grand Tour du Cirque du Soleil tour can now be seen at Carnivàle Lune Bleue, a 1930s - style carnival that is home to the Cirque Maroc acrobats.
After securing funding from the Canadian government for a second year, Laliberté took steps to renovate Cirque du Soleil from a group of street performers into a "proper circus ''. To accomplish this he hired the head of the National Circus School, Guy Caron, as Cirque du Soleil 's artistic director. The influences that Laliberté and Caron had in reshaping their circus were extensive. They wanted strong emotional music that was played from beginning to end by musicians. They wanted to emulate the Moscow Circus ' method of having the acts tell a story. Performers, rather than a technical crew, move equipment and props on and off stage so that it did not disrupt the momentum of the "storyline ''. Most importantly, their vision was to create a circus with neither a ring nor animals. The rationale was that the lack of both of these things draws the audience more into the performance.
To help design the next major show, Laliberté and Caron hired Franco Dragone, another instructor from the National Circus School who had been working in Belgium. When he joined the troupe in 1985, he brought with him his experience in commedia dell'arte techniques, which he imparted to the performers. Although his involvement in the next show would be limited due to budget constraints, he would go on to direct every show up to, but not including, Dralion.
By 1986, the company was once again in serious financial trouble. During 1985 they had taken the show outside Quebec to a lukewarm response. In Toronto they performed in front of a 25 % capacity crowd after not having enough money to properly market the show. Gilles Ste - Croix, dressed in a monkey suit, walked through downtown Toronto as a desperate publicity stunt. A later stop in Niagara Falls turned out to be equally problematic.
Several factors prevented the company from going bankrupt that year. The Desjardins Group, which was Cirque du Soleil 's financial institution at the time, covered about $200,000 in bad cheques. Daniel Lamarre, who worked for one of the largest public relations firms in Quebec, represented the company for free, knowing that it did not have the money to pay his fee. The Quebec government also came through again, granting Laliberté enough money to stay solvent for another year.
In 1987, after Laliberté re-privatized Cirque du Soleil, the company was invited to perform at the Los Angeles Arts Festival. Although it continued to be plagued by financial difficulties, Normand Latourelle took the gamble and went to Los Angeles, despite only having enough money for a one - way trip. Had the show been a failure, the company would not have had enough money to get its performers and equipment back to Montreal.
The festival turned out to be a huge success, both critically and financially. The show attracted the attention of entertainment executives, including those at Columbia Pictures, which met with Laliberté and Gauthier under the pretense of wanting to make a movie about Cirque du Soleil. Laliberté was unhappy with the proposed deal, claiming that it gave too many rights to Columbia, which was attempting to secure all rights to the production. He pulled out of the deal before it could be concluded, and that experience stands out as a key reason why Cirque du Soleil remained independent and privately owned for 28 years, until Laliberté announced in April 2015 that he was selling his majority stake to a group headed by a U.S. private equity firm and its Chinese partners.
In 1988, Guy Caron left the company due to a disagreement over what to do with the money generated by Cirque du Soleil 's first financially successful tour. Laliberté wanted to use it to expand and start a second show, while Caron wanted the money to be saved, with a portion going back to the National Circus School. An agreement was never reached, and Caron, along with a large number of artists loyal to him, departed. This stalled plans to start a new touring show that year.
Laliberté sought out Gilles Ste - Croix as a replacement for the artistic director position, and Ste - Croix, who had been away from the company since 1985, agreed to return. The company went through more internal troubles, including a failed attempt to add Normand Latourelle as a third partner. The triumvirate lasted only six months before internal disagreements prompted Gauthier and Laliberté to buy out Latourelle. By the end of 1989, Cirque du Soleil was once again running a deficit.
With Saltimbanco finished and touring in the United States and Canada, Cirque du Soleil toured Japan in the summer of 1992 at the behest of the Fuji Television Network. Taking acts from Nouvelle Expérience and Le Cirque Réinventé, the company created a show for this tour, titled Fascination. Although Fascination was never seen outside Japan, it represented the first time that Cirque du Soleil had produced a show staged in an arena rather than a big top. It was also the first show that Cirque du Soleil performed outside North America.
Also in 1992, Cirque du Soleil collaborated for the first time with Switzerland 's Circus Knie in a production named Knie Presents Cirque du Soleil, which toured for nine months, from 20 March to 29 November 1992, through 60 cities in Switzerland, opening in Rapperswil and closing in Bellinzona. The production merged Circus Knie 's animal acts with Cirque du Soleil 's acrobatic acts. The stage resembled that of Cirque du Soleil 's previous shows La Magie Continue and Le Cirque Réinventé, but was modified to accommodate Circus Knie 's animals. The show also featured several acts seen previously in Le Cirque Réinventé.
In April 2015, Cirque du Soleil 's Special Events division, which had been responsible for coordinating various public and private events, formed a separate company called 45 Degrees. Led by Yasmine Khalil, the new company has continued to produce special events for Cirque du Soleil while expanding to offer creative content outside Cirque du Soleil as well.
The main characters in this edition will be taken to the future, to a new dimension between reality and fiction. The brother and sister will be faced with their own separation in a totally unknown universe, on a trip in which each must overcome the challenges of their own reality; only their instinct and their strong fraternal bond will be the key to their meeting again. It will be directed by Mukhtar Omar Sharif Mukhtar, an ex-Cirque artist turned choreographer who has worked on projects including the Beijing Olympic Games ' opening and closing ceremonies and Cirque 's One Night for One Drop in 2013 and 2014.
As of October 2015, Cirque du Soleil renounced its intention to be involved in Las Vegas nightclubs and has since dissociated itself from all lounges and clubs listed below. These lounges are no longer affiliated with Cirque du Soleil.
Revolution is a 5,000 - square - foot (approximately 465 m²) lounge concept designed for The Mirage resort in Las Vegas, in which cast members perform to the music of The Beatles. Cirque du Soleil drew inspiration from the Beatles ' lyrics to design some of the lounge 's features. For instance, the ceiling is decorated with 30,000 dichroic crystals, representing "Lucy in the Sky with Diamonds ''. The VIP tables use infrared technology that allows guests to create artwork, which is then projected onto amorphic columns.
Cirque du Soleil 's second lounge was the Gold Lounge, which is located in the Aria Resort and Casino in Las Vegas and measures 3,756 square feet (349 m²). The design is reminiscent of Elvis Presley 's mansion, Graceland, and black and gold are utilized extensively throughout the décor; the bar has the same shape as the bar in the mansion. The music played here changes throughout the night, including upbeat classic rock, commercial house music, upbeat Elvis remixes, minimal hip hop, Top 40, and pop.
In May 2013, The Light Group opened the Light nightclub, built at a cost of $25 million, in collaboration with Cirque du Soleil at the Mandalay Bay Hotel and Casino in Las Vegas. Light marked the first time Cirque du Soleil had been involved in a nightclub. Among other features, the club has a large wall of LED screens, and the room is illuminated with fog, lasers and strobes. DJs at the events include charting artists such as Kaskade and Tiesto, with prices ranging from $30 to $10,000 for certain table placements.
It was announced on 11 October 2014 that, in partnership with Saban Brands, Cirque du Soleil Media would produce an animated pre-school children 's series called Luna Petunia, with children 's TV writer Bradley Zweig as showrunner. The plot revolves around a little girl who plays in a dreamland where she learns how to make the impossible possible. The series began airing on Netflix in September 2016.
In a collaboration with NBC, Cirque du Soleil helped to produce a live television broadcast of The Wiz, which premiered on NBC in December 2015. Tony Award - winning director Kenny Leon directed the show along with Broadway writer and actor Harvey Fierstein, who contributed new material to the original Broadway script. Queen Latifah, Mary J. Blige, Stephanie Mills, Ne - Yo, David Alan Grier, Common, Elijah Kelley, Amber Riley, Uzo Aduba and newcomer Shanice Williams starred. It was speculated that a live version of the show would play on Broadway during the 2016 -- 2017 season; however, this plan fell through.
Cirque du Soleil shows normally tour under a grand chapiteau (i.e. big top) for an extended period of time until they are modified, if necessary, for touring in arenas and other venues. The infrastructure that tours with each show could easily be called a mobile village; it includes the Grand Chapiteau, a large entrance tent, artistic tent, kitchen, school, and other items necessary to support the cast and crew.
The company 's tours have a significant financial impact on the cities they visit: the company rents lots for shows and parking, buys and sells promotions, and contributes to the local economy through hotel stays, food purchases, and the hiring of local help. For example, during its stay in Santa Monica, California, Koozå brought an estimated US $16,700,000 (equivalent to $18,642,764 in 2016) to the city government and local businesses.
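For illustration only, and using just the two figures quoted above, the implied cumulative inflation adjustment between the original Koozå estimate and its stated 2016 equivalent is roughly 12 %:

\[ \frac{\$18{,}642{,}764}{\$16{,}700{,}000} \approx 1.116 \]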
Cirque du Soleil Images creates original products for television, video and DVD and distributes its productions worldwide.
Its creations have been awarded numerous prizes and distinctions, including two Gemini Awards and a Primetime Emmy Award for Cirque du Soleil: Fire Within (in 2003) and three Primetime Emmy Awards for Dralion (in 2001).
In November 2003, gymnast Matthew Cusick (represented by the Lambda Legal Defense and Education Fund) filed a discrimination complaint against Cirque du Soleil in the Equal Employment Opportunity Commission, alleging a violation of the Americans With Disabilities Act. Cusick (a trainee performer who was scheduled to begin working at Mystère) alleged that in April 2002, Cirque du Soleil fired him because he tested HIV - positive, even though company doctors had already cleared him as healthy enough to perform. Cirque du Soleil alleged that due to the nature of Cusick 's disease coupled with his job 's high risk of injury, there was a significant risk of his infecting other performers, crew or audience members. Cirque du Soleil said that it had several HIV - positive employees, but that in Cusick 's case the risk of his spreading the infection while performing was too great. A boycott ensued and Just Out ran a story on it (with the headline "Flipping off the Cirque ''). Cirque du Soleil settled with Cusick in April 2004; under the settlement, the company began a company - wide anti-discrimination training program and changed its employment practices pertaining to HIV - positive applicants; paid Cusick $60,000 in lost wages, $200,000 in front pay, and $300,000 in compensatory damages; and paid $40,000 in attorney fees to Lambda Legal.
An additional complaint was filed on Cusick 's behalf by the San Francisco Human Rights Commission. The complaint stemmed from the City of San Francisco 's ban on city contracting with employers that discriminate based on HIV status; the circus leases property owned by the city - owned Port of San Francisco.
Cirque du Soleil opposed Neil Goldberg and his company Cirque Productions over its use of the word "Cirque '' in the late 1990s. Goldberg 's company was awarded a trademark on its name "Cirque Dreams '' in 2005.
In August 1999, Fremonster Theatrical filed an application for the trademark Cirque de Flambé. This application was opposed by the owners of the Cirque du Soleil trademark in August 2002, on the grounds that it would cause confusion and "(dilute) the distinctive quality '' of Cirque du Soleil 's trademarks. A judge dismissed the opposition and the Cirque de Flambé trademark application was approved in 2005.
In April 2016, Cirque du Soleil filed a copyright infringement lawsuit against Justin Timberlake, Timbaland, and Sony Music Entertainment in federal court in New York, alleging that Timberlake 's song "Don't Hold the Wall '' (co-written with Timbaland) from the 2013 album The 20 / 20 Experience infringed the copyright of Cirque du Soleil 's song "Steel Dream '' from its 1997 album Quidam.
In 2016, Cirque du Soleil announced the cancellation of all its 2016 touring shows to North Carolina, citing the recent signing of the Public Facilities Privacy & Security Act by North Carolina governor Pat McCrory. The cancellations affected OVO in both Greensboro and Charlotte, and Toruk in Raleigh. The company announced in a press release that "Cirque du Soleil strongly believes in diversity and equality for every individual and is opposed to discrimination in any form. The new HB2 legislation passed in North Carolina is an important regression to ensuring human rights for all. '' Cirque has been criticized for this decision and accused of applying a double standard, cancelling the shows in North Carolina while having performed many times in countries such as the United Arab Emirates that violate a number of fundamental human rights.
In 2009, Oleksandr Zhurov, a 24 - year - old from Ukraine, fell off a Russian swing while training at the company 's headquarters in Montreal. He died from head injuries sustained in the accident.
The first death during a performance occurred on June 29, 2013. Acrobat Sarah Guyard - Guillot, from Paris, France, was killed when she fell 90 ft (27 m) into an open pit at the MGM Grand during the Kà show. After the fall, everyone on the stage looked "visually scared and frightened '', and the audience could hear her groans and screams from the floor.
On November 29, 2016, crew worker Olivier Rochette was struck and killed by a telescopic lift while setting up the stage for that evening 's production of Luzia in San Francisco. Rochette was the son of company co-founder Gilles Ste - Croix.
|
where did the black death start and how did it spread | Black Death - wikipedia
The Black Death was one of the most devastating pandemics in human history, resulting in the deaths of an estimated 75 to 200 million people in Eurasia and peaking in Europe from 1346 to 1353. The bacterium Yersinia pestis, resulting in several forms of plague, is believed to have been the cause. The plague created a series of religious, social, and economic upheavals, which had profound effects on the course of European history.
The Black Death is thought to have originated in the dry plains of Central Asia, where it then travelled along the Silk Road, reaching Crimea by 1343. From there, it was most likely carried by Oriental rat fleas living on the black rats that were regular passengers on merchant ships, spreading throughout the Mediterranean and Europe.
The Black Death is estimated to have killed 30 -- 60 % of Europe 's total population. In total, the plague may have reduced the world population from an estimated 450 million down to 350 -- 375 million in the 14th century. The world population as a whole did not recover to pre-plague levels until the 17th century. The plague recurred as outbreaks in Europe until the 19th century.
The plague disease, caused by Yersinia pestis, is enzootic (commonly present) in populations of fleas carried by ground rodents, including marmots, in various areas including Central Asia, Kurdistan, Western Asia, Northern India and Uganda. Due to climate change in Asia, rodents began to flee the dried out grasslands to more populated areas, spreading the disease. Nestorian graves dating to 1338 -- 1339 near Lake Issyk Kul in Kyrgyzstan have inscriptions referring to plague and are thought by many epidemiologists to mark the outbreak of the epidemic, from which it could easily have spread to China and India. In October 2010, medical geneticists suggested that all three of the great waves of the plague originated in China. In China, the 13th - century Mongol conquest caused a decline in farming and trading. However, economic recovery had been observed at the beginning of the 14th century. In the 1330s, a large number of natural disasters and plagues led to widespread famine, starting in 1331, with a deadly plague arriving soon after. Epidemics that may have included plague killed an estimated 25 million Chinese and other Asians during the 15 years before it reached Constantinople in 1347.
The disease may have travelled along the Silk Road with Mongol armies and traders or it could have come via ship. By the end of 1346, reports of plague had reached the seaports of Europe: "India was depopulated, Tartary, Mesopotamia, Syria, Armenia were covered with dead bodies ''.
Plague was reportedly first introduced to Europe via Genoese traders at the port city of Kaffa in the Crimea in 1347. After a protracted siege, during which the Mongol army under Jani Beg was suffering from the disease, the army catapulted the infected corpses over the city walls of Kaffa to infect the inhabitants. The Genoese traders fled, taking the plague by ship into Sicily and the south of Europe, whence it spread north. Whether or not this hypothesis is accurate, it is clear that several existing conditions such as war, famine, and weather contributed to the severity of the Black Death.
There appear to have been several introductions into Europe. The plague reached Sicily in October 1347, carried by twelve Genoese galleys, and rapidly spread all over the island. Galleys from Kaffa reached Genoa and Venice in January 1348, but it was the outbreak in Pisa a few weeks later that was the entry point to northern Italy. Towards the end of January, one of the galleys expelled from Italy arrived in Marseille.
From Italy, the disease spread northwest across Europe, striking France, Spain, Portugal and England by June 1348, then turned and spread east through Germany and Scandinavia from 1348 to 1350. It was introduced in Norway in 1349 when a ship landed at Askøy, then spread to Bjørgvin (modern Bergen) and Iceland. Finally it spread to northwestern Russia in 1351. The plague was somewhat less common in parts of Europe that had smaller trade relations with their neighbours, including the majority of the Basque Country, isolated parts of Belgium and the Netherlands, and isolated alpine villages throughout the continent.
Modern researchers do not think that the plague ever became endemic in Europe or its rat population. The disease repeatedly wiped out the rodent carriers so that the fleas died out until a new outbreak from Central Asia repeated the process. The outbreaks have been shown to occur roughly 15 years after a warmer and wetter period in areas where plague is endemic in other species such as gerbils.
The plague struck various regions in the Middle East during the pandemic, leading to serious depopulation and permanent change in both economic and social structures. As well as spreading to western Europe, the disease also entered the region from southern Russia. By autumn 1347, the plague reached Alexandria in Egypt, probably through the port 's trade with Constantinople, and ports on the Black Sea. During 1347, the disease travelled eastward to Gaza, and north along the eastern coast to cities in Lebanon, Syria and Palestine, including Ashkelon, Acre, Jerusalem, Sidon, Damascus, Homs, and Aleppo. In 1348 -- 1349, the disease reached Antioch. The city 's residents fled to the north, most of them dying during the journey, but the infection had been spread to the people of Asia Minor.
Mecca became infected in 1349. During the same year, records show the city of Mawsil (Mosul) suffered a massive epidemic, and the city of Baghdad experienced a second round of the disease. In 1351 Yemen experienced an outbreak of the plague, coinciding with the return of Sultan al - Mujahid Ali of Yemen from imprisonment in Cairo. His party may have brought the disease with them from Egypt.
Contemporary accounts of the plague are often varied or imprecise. The most commonly noted symptom was the appearance of buboes (or gavocciolos) in the groin, the neck and armpits, which oozed pus and bled when opened. Boccaccio 's description is graphic:
In men and women alike it first betrayed itself by the emergence of certain tumours in the groin or armpits, some of which grew as large as a common apple, others as an egg... From the two said parts of the body this deadly gavocciolo soon began to propagate and spread itself in all directions indifferently; after which the form of the malady began to change, black spots or livid making their appearance in many cases on the arm or the thigh or elsewhere, now few and large, now minute and numerous. As the gavocciolo had been and still was an infallible token of approaching death, such also were these spots on whomsoever they showed themselves.
The only medical detail that is questionable in Boccaccio 's description is that the gavocciolo was an ' infallible token of approaching death ', as, if the bubo discharges, recovery is possible.
This was followed by acute fever and vomiting of blood. Most victims died two to seven days after initial infection. Freckle - like spots and rashes, which could have been caused by flea - bites, were identified as another potential sign of the plague.
Some accounts, like that of Lodewijk Heyligen, whose master the Cardinal Colonna died of the plague in 1348, noted a distinct form of the disease that infected the lungs and led to respiratory problems and is identified with pneumonic plague.
It is said that the plague takes three forms. In the first people suffer an infection of the lungs, which leads to breathing difficulties. Whoever has this corruption or contamination to any extent can not escape but will die within two days. Another form... in which boils erupt under the armpits,... a third form in which people of both sexes are attacked in the groin.
Medical knowledge had stagnated during the Middle Ages. The most authoritative account at the time came from the medical faculty in Paris in a report to the king of France that blamed the heavens, in the form of a conjunction of three planets in 1345 that caused a "great pestilence in the air ''. This report became the first and most widely circulated of a series of plague tracts that sought to give advice to sufferers. That the plague was caused by bad air became the most widely accepted theory. Today, this is known as the miasma theory. The word plague had no special significance at this time, and only the recurrence of outbreaks during the Middle Ages gave it the name that has become the medical term.
The importance of hygiene was recognised only in the nineteenth century; until then it was common that the streets were filthy, with live animals of all sorts around and human parasites abounding. A transmissible disease will spread easily in such conditions. One development as a result of the Black Death was the establishment of the idea of quarantine in Dubrovnik in 1377 after continuing outbreaks.
The dominant explanation for the Black Death is the plague theory, which attributes the outbreak to Yersinia pestis, also responsible for an epidemic that began in southern China in 1865, eventually spreading to India. The investigation of the pathogen that caused the 19th - century plague was begun by teams of scientists who visited Hong Kong in 1894, among whom was the French - Swiss bacteriologist Alexandre Yersin, after whom the pathogen was named Yersinia pestis. The mechanism by which Y. pestis was usually transmitted was established in 1898 by Paul - Louis Simond and was found to involve the bites of fleas whose midguts had become obstructed by replicating Y. pestis several days after feeding on an infected host. This blockage results in starvation and aggressive feeding behaviour by the fleas, which repeatedly attempt to clear their blockage by regurgitation, resulting in thousands of plague bacteria being flushed into the feeding site, infecting the host. The bubonic plague mechanism was also dependent on two populations of rodents: one resistant to the disease, which act as hosts, keeping the disease endemic, and a second that lack resistance. When the second population dies, the fleas move on to other hosts, including people, thus creating a human epidemic.
The historian Francis Aidan Gasquet wrote about the Great Pestilence in 1893 and suggested that "it would appear to be some form of the ordinary Eastern or bubonic plague ''. He was able to adopt the epidemiology of the bubonic plague for the Black Death for the second edition in 1908, implicating rats and fleas in the process, and his interpretation was widely accepted for other ancient and medieval epidemics, such as the Justinian plague that was prevalent in the Eastern Roman Empire from 541 to 700 CE.
An estimate of the mortality rate for the modern bubonic plague, following the introduction of antibiotics, is 11 %, although it may be higher in underdeveloped regions. Symptoms of the disease include fever of 38 -- 41 ° C (100 -- 106 ° F), headaches, painful aching joints, nausea and vomiting, and a general feeling of malaise. Left untreated, of those that contract the bubonic plague, 80 per cent die within eight days. Pneumonic plague has a mortality rate of 90 to 95 per cent. Symptoms include fever, cough, and blood - tinged sputum. As the disease progresses, sputum becomes free - flowing and bright red. Septicemic plague is the least common of the three forms, with a mortality rate near 100 %. Symptoms are high fevers and purple skin patches (purpura due to disseminated intravascular coagulation). In cases of pneumonic and particularly septicemic plague, the progress of the disease is so rapid that there would often be no time for the development of the enlarged lymph nodes that were noted as buboes.
A number of alternative theories -- implicating other diseases in the Black Death pandemic -- have also been proposed by some modern scientists (see below -- "Alternative Explanations '').
In October 2010, the open - access scientific journal PLoS Pathogens published a paper by a multinational team who undertook a new investigation into the role of Yersinia pestis in the Black Death following the disputed identification by Drancourt and Raoult in 1998. They assessed the presence of DNA / RNA with polymerase chain reaction (PCR) techniques for Y. pestis from the tooth sockets in human skeletons from mass graves in northern, central and southern Europe that were associated archaeologically with the Black Death and subsequent resurgences. The authors concluded that this new research, together with prior analyses from the south of France and Germany, "ends the debate about the cause of the Black Death, and unambiguously demonstrates that Y. pestis was the causative agent of the epidemic plague that devastated Europe during the Middle Ages ''.
The study also found that there were two previously unknown but related clades (genetic branches) of the Y. pestis genome associated with medieval mass graves. These clades (which are thought to be extinct) were found to be ancestral to modern isolates of the modern Y. pestis strains Y. p. orientalis and Y. p. medievalis, suggesting the plague may have entered Europe in two waves. Surveys of plague pit remains in France and England indicate the first variant entered Europe through the port of Marseille around November 1347 and spread through France over the next two years, eventually reaching England in the spring of 1349, where it spread through the country in three epidemics. Surveys of plague pit remains from the Dutch town of Bergen op Zoom showed the Y. pestis genotype responsible for the pandemic that spread through the Low Countries from 1350 differed from that found in Britain and France, implying Bergen op Zoom (and possibly other parts of the southern Netherlands) was not directly infected from England or France in 1349 and suggesting a second wave of plague, different from those in Britain and France, may have been carried to the Low Countries from Norway, the Hanseatic cities or another site.
The results of the Haensch study have since been confirmed and amended. Based on genetic evidence derived from Black Death victims in the East Smithfield burial site in England, Schuenemann et al. concluded in 2011 "that the Black Death in medieval Europe was caused by a variant of Y. pestis that may no longer exist. '' A study published in Nature in October 2011 sequenced the genome of Y. pestis from plague victims and indicated that the strain that caused the Black Death is ancestral to most modern strains of the disease.
DNA taken from 25 14th - century skeletons found in London has shown that the plague was caused by a strain of Y. pestis almost identical to the one that hit Madagascar in 2013.
The plague theory was first significantly challenged by the work of British bacteriologist J.F.D. Shrewsbury in 1970, who noted that the reported rates of mortality in rural areas during the 14th - century pandemic were inconsistent with the modern bubonic plague, leading him to conclude that contemporary accounts were exaggerations. In 1984 zoologist Graham Twigg produced the first major work to challenge the bubonic plague theory directly, and his doubts about the identity of the Black Death have been taken up by a number of authors, including Samuel K. Cohn, Jr. (2002 and 2013), David Herlihy (1997), and Susan Scott and Christopher Duncan (2001).
It is recognised that an epidemiological account of the plague is as important as an identification of symptoms, but researchers are hampered by the lack of reliable statistics from this period. Most work has been done on the spread of the plague in England, and even estimates of overall population at the start vary by over 100 % as no census was undertaken between the time of publication of the Domesday Book and the year 1377. Estimates of plague victims are usually extrapolated from figures from the clergy.
In addition to arguing that the rat population was insufficient to account for a bubonic plague pandemic, sceptics of the bubonic plague theory point out that the symptoms of the Black Death are not unique (and arguably in some accounts may differ from bubonic plague); that transference via fleas in goods was likely to be of marginal significance; and that the DNA results may be flawed and might not have been repeated elsewhere, despite extensive samples from other mass graves. Other arguments include the lack of accounts of the death of rats before outbreaks of plague between the 14th and 17th centuries; temperatures that are too cold in northern Europe for the survival of fleas; that, despite primitive transport systems, the spread of the Black Death was much faster than that of modern bubonic plague; that mortality rates of the Black Death appear to be very high; that, while modern bubonic plague is largely endemic as a rural disease, the Black Death indiscriminately struck urban and rural areas; and that the pattern of the Black Death, with major outbreaks in the same areas separated by 5 to 15 years, differs from modern bubonic plague -- which often becomes endemic for decades with annual flare - ups.
McCormick has suggested that earlier archaeologists were simply not interested in the "laborious '' processes needed to discover rat remains. Walløe complains that all of these authors "take it for granted that Simond 's infection model, black rat → rat flea → human, which was developed to explain the spread of plague in India, is the only way an epidemic of Yersinia pestis infection could spread '', whilst pointing to several other possibilities. Similarly, Green has argued that greater attention is needed to the range of (especially non-commensal) animals that might be involved in the transmission of plague.
A variety of alternatives to the Y. pestis have been put forward. Twigg suggested that the cause was a form of anthrax, and Norman Cantor (2001) thought it may have been a combination of anthrax and other pandemics. Scott and Duncan have argued that the pandemic was a form of infectious disease that they characterise as hemorrhagic plague similar to Ebola. Archaeologist Barney Sloane has argued that there is insufficient evidence of the extinction of a large number of rats in the archaeological record of the medieval waterfront in London and that the plague spread too quickly to support the thesis that the Y. pestis was spread from fleas on rats; he argues that transmission must have been person to person. However, no single alternative solution has achieved widespread acceptance. Many scholars arguing for the Y. pestis as the major agent of the pandemic suggest that its extent and symptoms can be explained by a combination of bubonic plague with other diseases, including typhus, smallpox and respiratory infections. In addition to the bubonic infection, others point to additional septicemic (a type of "blood poisoning '') and pneumonic (an airborne plague that attacks the lungs before the rest of the body) forms of the plague, which lengthen the duration of outbreaks throughout the seasons and help account for its high mortality rate and additional recorded symptoms. In 2014, scientists with Public Health England announced the results of an examination of 25 bodies exhumed from the Clerkenwell area of London, as well as of wills registered in London during the period, which supported the pneumonic hypothesis.
There are no exact figures for the death toll; the rate varied widely by locality. In urban centres, the greater the population before the outbreak, the longer the duration of the period of abnormal mortality. It killed some 75 to 200 million people in Eurasia. According to medieval historian Philip Daileader in 2007:
The trend of recent research is pointing to a figure more like 45 -- 50 % of the European population dying during a four - year period. There is a fair amount of geographic variation. In Mediterranean Europe, areas such as Italy, the south of France and Spain, where plague ran for about four years consecutively, it was probably closer to 75 -- 80 % of the population. In Germany and England... it was probably closer to 20 %.
A death rate as high as 60 % in Europe has been suggested by Norwegian historian Ole Benedictow:
Detailed study of the mortality data available points to two conspicuous features in relation to the mortality caused by the Black Death: namely the extreme level of mortality caused by the Black Death, and the remarkable similarity or consistency of the level of mortality, from Spain in southern Europe to England in north - western Europe. The data is sufficiently widespread and numerous to make it likely that the Black Death swept away around 60 per cent of Europe 's population. It is generally assumed that the size of Europe 's population at the time was around 80 million. This implies that around 50 million people died in the Black Death.
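As a purely illustrative check of the figures Benedictow cites (the rounded 60 per cent mortality rate and the assumed pre-plague population of about 80 million), the implied arithmetic is:

\[ 0.60 \times 80{,}000{,}000 = 48{,}000{,}000 \approx 50 \text{ million deaths} \]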
The most widely accepted estimate for the Middle East, including Iraq, Iran and Syria, during this time is a death rate of about a third. The Black Death killed about 40 % of Egypt 's population. Half of Paris 's population of 100,000 people died. In Italy, the population of Florence was reduced from 110,000 -- 120,000 inhabitants in 1338 down to 50,000 in 1351. At least 60 % of the population of Hamburg and Bremen perished, and a similar percentage of Londoners may have died from the disease as well. In London, approximately 62,000 people died between 1346 and 1353. While contemporary reports tell of mass burial pits being created in response to the large numbers of dead, recent scientific investigations of a burial pit in Central London found well - preserved individuals buried in isolated, evenly spaced graves, suggesting at least some pre-planning and Christian burials at this time. Before 1350, there were about 170,000 settlements in Germany, and this was reduced by nearly 40,000 by 1450. In 1348, the plague spread so rapidly that before any physicians or government authorities had time to reflect upon its origins, about a third of the European population had already perished. In crowded cities, it was not uncommon for as much as 50 % of the population to die. The disease bypassed some areas, and the most isolated areas were less vulnerable to contagion. Monks and priests were especially hard - hit since they cared for victims of the Black Death.
Renewed religious fervour and fanaticism bloomed in the wake of the Black Death. Some Europeans targeted "various groups such as Jews, friars, foreigners, beggars, pilgrims '', lepers, and Romani, thinking that they were to blame for the crisis. Lepers, and other individuals with skin diseases such as acne or psoriasis, were singled out and exterminated throughout Europe.
Because 14th - century healers were at a loss to explain the cause, Europeans turned to astrological forces, earthquakes, and the poisoning of wells by Jews as possible reasons for the plague 's emergence. The governments of Europe had no apparent response to the crisis because no one knew its cause or how it spread. The mechanism of infection and transmission of diseases was little understood in the 14th century; many people believed the epidemic was a punishment by God for their sins. This belief led to the idea that the cure for the disease was to win God 's forgiveness.
There were many attacks against Jewish communities. In February 1349, the citizens of Strasbourg murdered 2,000 Jews. In August 1349, the Jewish communities in Mainz and Cologne were annihilated. By 1351, 60 major and 150 smaller Jewish communities had been destroyed.
The plague repeatedly returned to haunt Europe and the Mediterranean throughout the 14th to 17th centuries. According to Biraben, the plague was present somewhere in Europe in every year between 1346 and 1671. The Second Pandemic was particularly widespread in the following years: 1360 -- 1363; 1374; 1400; 1438 -- 1439; 1456 -- 1457; 1464 -- 1466; 1481 -- 1485; 1500 -- 1503; 1518 -- 1531; 1544 -- 1548; 1563 -- 1566; 1573 -- 1588; 1596 -- 1599; 1602 -- 1611; 1623 -- 1640; 1644 -- 1654; and 1664 -- 1667. Subsequent outbreaks, though severe, marked the retreat from most of Europe (18th century) and northern Africa (19th century). According to Geoffrey Parker, "France alone lost almost a million people to the plague in the epidemic of 1628 -- 31. ''
In England, in the absence of census figures, historians propose a range of preincident population figures from as high as 7 million to as low as 4 million in 1300, and a postincident population figure as low as 2 million. By the end of 1350, the Black Death subsided, but it never really died out in England. Over the next few hundred years, further outbreaks occurred in 1361 -- 1362, 1369, 1379 -- 1383, 1389 -- 1393, and throughout the first half of the 15th century. An outbreak in 1471 took as much as 10 -- 15 % of the population, while the death rate of the plague of 1479 -- 1480 could have been as high as 20 %. The most general outbreaks in Tudor and Stuart England seem to have begun in 1498, 1535, 1543, 1563, 1589, 1603, 1625, and 1636, and ended with the Great Plague of London in 1665.
In 1466, perhaps 40,000 people died of the plague in Paris. During the 16th and 17th centuries, the plague was present in Paris around 30 per cent of the time. The Black Death ravaged Europe for three years before it continued on into Russia, where the disease was present somewhere in the country 25 times between 1350 and 1490. Plague epidemics ravaged London in 1563, 1593, 1603, 1625, 1636, and 1665, reducing its population by 10 to 30 % during those years. Over 10 % of Amsterdam 's population died in 1623 -- 1625, and again in 1635 -- 1636, 1655, and 1664. Plague occurred in Venice 22 times between 1361 and 1528. The plague of 1576 -- 1577 killed 50,000 in Venice, almost a third of the population. Late outbreaks in central Europe included the Italian Plague of 1629 -- 1631, which is associated with troop movements during the Thirty Years ' War, and the Great Plague of Vienna in 1679. Over 60 % of Norway 's population died in 1348 -- 1350. The last plague outbreak ravaged Oslo in 1654.
In the first half of the 17th century, a plague claimed some 1.7 million victims in Italy, or about 14 % of the population. In 1656, the plague killed about half of Naples ' 300,000 inhabitants. More than 1.25 million deaths resulted from the extreme incidence of plague in 17th - century Spain. The plague of 1649 probably reduced the population of Seville by half. In 1709 -- 1713, a plague epidemic that followed the Great Northern War (1700 -- 1721, Sweden v. Russia and allies) killed about 100,000 in Sweden, and 300,000 in Prussia. The plague killed two - thirds of the inhabitants of Helsinki, and claimed a third of Stockholm 's population. Europe 's last major epidemic occurred in 1720 in Marseille.
The Black Death ravaged much of the Islamic world. Plague was present in at least one location in the Islamic world virtually every year between 1500 and 1850. Plague repeatedly struck the cities of North Africa. Algiers lost 30,000 -- 50,000 inhabitants to it in 1620 -- 1621, and again in 1654 -- 1657, 1665, 1691, and 1740 -- 1742. Plague remained a major event in Ottoman society until the second quarter of the 19th century. Between 1701 and 1750, thirty - seven larger and smaller epidemics were recorded in Constantinople, and an additional thirty - one between 1751 and 1800. Baghdad suffered severely from visitations of the plague, which at times wiped out two - thirds of its population.
The third plague pandemic started in China in 1855, spreading to all inhabited continents and killing 10 million people in India alone. Twelve plague outbreaks in Australia between 1900 and 1925 resulted in well over 1,000 deaths, chiefly in Sydney. This led to the establishment of a Public Health Department there, which undertook some leading - edge research on plague transmission from rat fleas to humans via the bacillus Yersinia pestis.
The first North American plague epidemic was the San Francisco plague of 1900 -- 1904, followed by another outbreak in 1907 -- 1908. From 1944 through 1993, 362 cases of human plague were reported in the United States; approximately 90 % occurred in four western states: Arizona, California, Colorado, and New Mexico.
Modern treatment methods include insecticides, the use of antibiotics, and a plague vaccine. The plague bacterium could develop drug resistance and again become a major health threat. One case of a drug - resistant form of the bacterium was found in Madagascar in 1995. A further outbreak in Madagascar was reported in November 2014. In October 2017, the deadliest outbreak of the plague in modern times hit Madagascar, killing 170 people and infecting thousands.
The 12th - century French physician Gilles de Corbeil 's On the Signs and Symptoms of Diseases (Latin: De signis et sinthomatibus egritudinum) uses the phrase "black death '' (atra mors) to refer to a pestilential fever (febris pestilentialis).
Writers contemporary with the plague referred to the event as the "Great Mortality '' or the "Great Plague ''.
The phrase "black death '' (mors nigra) was used in 1350 by Simon de Covino or Couvin, a Belgian astronomer, who wrote the poem "On the Judgment of the Sun at a Feast of Saturn '' (De judicio Solis in convivio Saturni), which attributes the plague to a conjunction of Jupiter and Saturn. In 1908, Gasquet claimed that use of the name atra mors for the 14th - century epidemic first appeared in a 1631 book on Danish history by J. I. Pontanus: "Commonly and from its effects, they called it the black death '' (Vulgo & ab effectu atram mortem vocatibant). The name spread through Scandinavia and then Germany, gradually becoming attached to the mid 14th - century epidemic as a proper name. In England, it was not until 1823, that the medieval epidemic was first called the Black Death.
|
what is the name of arsenal's stadium | Emirates stadium - wikipedia
The Emirates Stadium (known as Ashburton Grove prior to sponsorship, and as Arsenal Stadium for UEFA competitions) is a football stadium in Holloway, London, England, and the home of Arsenal Football Club. With a capacity of over 60,000, it is the third - largest football stadium in England after Wembley Stadium and Old Trafford.
In 1997, Arsenal explored the possibility of relocating to a new stadium, having been denied planning permission by Islington Council to expand its home ground of Highbury. After considering various options (including purchasing Wembley Stadium), the club bought an industrial and waste disposal estate in Ashburton Grove in 2000. A year later, they received the council 's approval to build a stadium on the site; manager Arsène Wenger described this as the "biggest decision in Arsenal 's history '' since the board appointed Herbert Chapman. Relocation began in 2002, but financial difficulties delayed work until February 2004. Emirates was later announced as the main sponsor for the stadium. The entire stadium project was completed in 2006 at a cost of £ 390 million. The club 's former stadium was redeveloped as Highbury Square for an additional £ 130 million.
The stadium has undergone a process of "Arsenalisation '' since 2009 with the aim of restoring Arsenal 's heritage and history. The ground has hosted international fixtures and music concerts.
Spectator safety at football grounds was a major concern for clubs and governing bodies during the 1980s. In May 1985, a fire engulfed Valley Parade, claiming the lives of 56 football fans. Rioting between Liverpool and Juventus supporters during the 1985 European Cup Final led to 39 fatalities and 600 injured in what became known as the Heysel Stadium disaster. In April 1989, 96 Liverpool supporters died in Britain 's worst sporting tragedy -- the Hillsborough disaster -- because of South Yorkshire Police 's failure to control crowds. An inquiry into the crush and into crowd safety at sports grounds was immediately launched. Led by Lord Taylor of Gosforth, it produced the Taylor Report, finalised in January 1990, which recommended the removal of terraces (standing areas) in favour of seating.
Under the amended Football Spectators Act 1989, it became compulsory for first and second tier English clubs to have all - seated stadia in time for the 1994 -- 95 season. Arsenal, like many other clubs, experienced difficulty raising the money needed to convert its terraced areas. At the end of the 1990 -- 91 season, the club introduced a bond scheme which offered supporters the right to purchase a season ticket in the renovated North Bank stand at Highbury. The board felt this was the only viable option after considering other proposals; they did not want to compromise on traditions or curb manager George Graham 's transfer dealings. Priced at between £ 1,000 and £ 1,500, the 150 - year bond was criticised by supporters, who argued it risked pricing less well - off fans out of supporting Arsenal. A campaign directed by the Independent Arsenal Supporters ' Association brought relative success, as only a third of all bonds were sold.
The North Bank was the final stand to be refurbished. It opened in August 1993 at a cost of £ 20 million. The rework significantly reduced the stadium 's capacity, from 57,000 at the beginning of the decade to under 40,000. High ticket prices, needed to service the club 's existing debts, and low attendance figures forced Arsenal to explore the possibility of building a larger stadium in 1997. The club wanted to attract an ever - growing fanbase and financially compete with the biggest clubs in England. By comparison, Manchester United enjoyed a rise in gate receipts; the club went from £ 43.9 million in 1994 to £ 87.9 million in 1997 because of Old Trafford 's expansion.
Arsenal 's initial proposal to rebuild Highbury was met with disapproval from local residents, as it required the demolition of 25 neighbouring houses. It later became problematic once the East Stand of the stadium was granted Grade II listing in July 1997. After much consultation, the club abandoned its plan, deciding a capacity of 48,000 was not large enough. Arsenal then investigated the possibility of relocating to Wembley Stadium and in March 1998 made an official bid to purchase the ground. The Football Association (FA) and the English National Stadium Trust opposed Arsenal 's offer, claiming it harmed England 's bid for the 2006 FIFA World Cup, which FIFA itself denied. In April 1998, Arsenal withdrew its bid and Wembley was purchased by the English National Stadium Trust. The club however was given permission to host its UEFA Champions League home ties at Wembley for the 1998 -- 99 and 1999 -- 2000 seasons. Although Arsenal 's time in the competition was brief, twice exiting the group stages, the club set its record home attendance (73,707 against Lens) and earned record gate income in the 1998 -- 99 season, highlighting potential profitability.
In November 1999, Arsenal examined the feasibility of building a new stadium in Ashburton Grove. Anthony Spencer, an estate agent and the club 's property adviser, recommended the area to director Danny Fiszman and vice-chairman David Dein, having scoured North London for potential sites. The land, 500 yards (460 m) from Highbury, comprised a rubbish processing plant and an industrial estate, 80 % owned by Islington Council, Railtrack and Sainsbury 's. After passing the first significant milestone at Islington Council 's planning committee, Arsenal submitted a planning application for a new - build 60,000 - seater stadium in November 2000. This included a redevelopment project at Drayton Park, converting the existing ground at Highbury to flats and building a new waste station in Lough Road. As part of the scheme, Arsenal intended to create 1,800 new jobs for the community and 2,300 new homes. Improvements to three railway stations, Holloway Road, Drayton Park and Finsbury Park, were included to cope with the increased capacity requirements of matchday crowds.
The Islington Stadium Communities Alliance (ISCA), an alliance of 16 groups representing local residents and businesses, was set up in January 2000 as a body opposed to the redevelopment. Alison Carmichael, a spokeswoman for the group, said of the move, "It may look like Arsenal are doing great things for the area, but in its detail the plan is awful. We blame the council; the football club just wants to expand to make more money. '' Tom Lamb, an ISCA member, was concerned about issues such as air pollution and growing traffic, adding "that is a consequence which most Arsenal fans would never see, because they are in Islington only for about thirty days a year. ''
Seven months after the planning application was submitted, a poll showed that 75 % of respondents (2,133 residents) were against the scheme. By October 2001, the club asserted that a poll of Islington residents had found that 70 % were in favour, and it received the backing of the then Mayor of London, Ken Livingstone. In the run - up to Christmas, the club launched a campaign to aid the project, placing the slogan "Let Arsenal support Islington '' on advertising hoardings and in the backdrop of manager Arsène Wenger 's press conferences.
Islington Council approved Arsenal 's planning application on 10 December 2001, voting in favour of the Ashburton Grove development. The council also consented to the transfer of the existing waste recycling plant in Ashburton Grove to Lough Road. Livingstone approved of the plans a month later, and the scheme was then referred to then - Transport Secretary Stephen Byers, who initially delayed making a final decision. He had considered whether to refer the scheme to a public inquiry, but eventually decided not to. Planning permission was granted by Islington Council in May 2002, but local residents and ISCA launched a late challenge in the High Court, claiming the plans were unlawful. The judge, Duncan Ouseley, dismissed the case in July 2002, paving the way for Arsenal to start work.
The club overcame a further legal challenge brought by small firms in January 2005, as the High Court upheld a decision by then - Deputy Prime Minister John Prescott to grant a compulsory purchase order in support of the scheme. The stadium later became an issue in the local elections in May 2006. The Metropolitan Police required supporters ' coaches to be parked at the nearby Sobel Sports Centre rather than in the underground stadium car park, and restricted access to 14 streets on match days. These police restrictions were conditions of the health and safety certificate which the stadium requires in order to open and operate. The road closures were passed at a council meeting in July 2005.
Securing finance for the stadium project proved a challenge as Arsenal received no public subsidy from the government. Whereas Wenger claimed French clubs "pay nothing at all for their stadium, nothing at all for their maintenance, '' and "Bayern Munich paid one euro for their ground, '' Arsenal were required to buy the site outright in one of London 's most expensive areas. The club therefore sought other ways of generating income, such as making a profit on player trading. Arsenal recouped over £ 50 million from transfers involving Nicolas Anelka to Real Madrid, and Marc Overmars and Emmanuel Petit to Barcelona. The transfer of Anelka partly funded the club 's new training ground, in London Colney, which opened in October 1999.
The club also agreed new sponsorship deals; in September 2000, Granada Media Group purchased a 5 % stake in Arsenal for £ 47 million. As part of the acquisition, Granada became the premier media agent for Arsenal, handling advertising, sponsorship, merchandising, publishing and licensing agreements. The club 's managing director Keith Edelman confirmed in a statement that the investment would be used directly to fund the new stadium. The collapse of ITV Digital (part - owned by Granada) in April 2002 coincided with news that the company was contractually obliged to pay £ 30 million once arrangements for the new stadium were finalised.
In September 2002, Arsenal formulated plans to reduce its players ' wage bill after making a pre-tax loss of £ 22.3 million for the 2001 -- 02 financial year. The club appointed NM Rothschild & Sons to examine its financial situation and advise whether it was feasible for construction to press ahead at the end of March 2003. Although Arsenal secured a £ 260 million loan from a group of banks led by the Royal Bank of Scotland, the club suspended work on Ashburton Grove in April 2003, saying, "We have experienced a number of delays in arrangements for our new stadium project in recent months across a range of issues. The impact of these delays is that we will now be unable to deliver a stadium opening for the start of the 2005 -- 06 season. '' The cost of building the stadium, forecasted at £ 400 million, had risen by £ 100 million during that period.
Throughout the summer of 2003, Arsenal gave fans the opportunity to register their interest in a relaunched bond scheme. The club planned to issue 3,000 bonds for between £ 3,500 and £ 5,000 each for a season ticket at Highbury, then at Ashburton Grove. Supporters reacted negatively to the news; AISA chairman Steven Powell said in a statement: "We are disappointed that the club has not consulted supporters before announcing a new bond scheme. '' Though Arsenal never stated how many bonds were sold, they did raise several million pounds through the scheme. The club also extended its contract with sportswear provider Nike, in a deal worth £ 55 million over seven years. Nike paid a minimum of £ 1 million each year as a royalty payment, contingent on sales.
Funding for the stadium was secured in February 2004. Later in the year, Emirates bought the naming rights for the stadium in a 15 - year deal estimated at £ 100 million. The stadium name is colloquially shortened from "Emirates Stadium '' to "The Emirates '', although some supporters continue to use the former name "Ashburton Grove '' or even "The Grove '', particularly those who object to the concept of corporate sponsorship of stadium names. Due to UEFA regulations on stadium sponsors, the ground is referred to for European matches as Arsenal Stadium, which was also the official name of Highbury. Emirates and Arsenal agreed a new deal worth £ 150 million in November 2012, and the naming rights were extended to 2028 as part of the deal.
Actual construction of the stadium began once Arsenal secured funding. The club appointed Sir Robert McAlpine in January 2002 to carry out building work and the stadium was designed by HOK Sport (now Populous), who were the architects for Stadium Australia and the redevelopment of Ascot Racecourse. Construction consultants Arcadis and engineering firm Buro Happold were also involved in the process.
The first phase of demolition was completed in March 2004, and two months later, piling for the West, East and North stands had been completed. Two bridges over the Northern City railway line connecting the stadium to Drayton Park were also built; these were completed in August 2004. The stadium topped out in August 2005 and external glazing, power and water tank installation was completed by December 2005. The first seat in the new stadium was ceremonially installed on 13 March 2006 by Arsenal midfielder Abou Diaby. DD GrassMaster was selected as the pitch installer and Hewitt Sportsturf was contracted to design and construct the playing field. Floodlights were successfully tested for the first time on 25 June 2006, and a day later, the goalposts were erected.
In order to obtain the licences needed to open, the Emirates Stadium hosted three non-full capacity events. The first "ramp - up '' event was a shareholder open day on 18 July 2006, the second an open training session for 20,000 selected club members held two days later. The third event was Dennis Bergkamp 's testimonial match against Ajax on 22 July 2006. The Emirates Stadium was officially opened by Prince Philip, Duke of Edinburgh on 26 October 2006; his wife Queen Elizabeth II had suffered a back injury and was unable to carry out her duty. Prince Philip quipped to the crowd, "Well, you may not have my wife, but you 've got the second-most experienced plaque unveiler in the world. '' The royal visit echoed the attendance of the Queen 's uncle, the Prince of Wales (later King Edward VIII) at the official opening of Highbury 's West Stand in 1932. As a result of the change of plan, the Queen extended to the club the honour of inviting the chairman, manager and first team to join her for afternoon tea at Buckingham Palace. Held on 15 February 2007, the engagement marked the first time a football club had been invited to the palace for such an event.
Interest on the £ 260 million debt was set at a commercial fixed rate over a 14 - year period. To refinance the cost, Arsenal planned to convert the money into a 30 - year bond financed by banks. The proposed bond issue went ahead in July 2006. Arsenal issued £ 210 million worth of 13.5 - year bonds with a spread of 52 basis points over government bonds and £ 50 million of 7.1 - year bonds with a spread of 22 basis points over LIBOR. It was the first publicly marketed, asset - backed bond issue by a European football club. The effective interest rates on these bonds are 5.14 % and 5.97 % respectively, and they are due to be paid back over a 25 - year period; the move to bonds has reduced the club 's debt service cost to approximately £ 20 million a year. In September 2010, Arsenal announced that the Highbury Square development -- one of the main sources of income to reduce the stadium debt -- was now debt free and making revenue.
When Arsenal moved to the Emirates Stadium, the club prioritised repaying the loans over strengthening the playing squad. Arsenal 's self - sustaining model relied heavily on qualifying for the UEFA Champions League; as Wenger recalled in 2016: "We had to be three years in the Champions League out of five and have an average of 54,000 people, and we did n't know we would be capable of that. '' The club sold several experienced players throughout the late 2000s and early 2010s and raised ticket prices, upsetting supporters, who called for change. Wenger took umbrage at the criticism and revealed the bank loans were contingent on his commitment to the club: "The banks wanted the technical consistency to guarantee that we have a chance to pay (them) back. I did commit and I stayed and under very difficult circumstances. So for me to come back and on top of that (critics) reproach me for not having won the championship during that period it is a bit overboard. '' Wenger later described the stadium move as the toughest period of his life because of the restricted finances in place.
In August 2009, Arsenal began a programme of "Arsenalisation '' of the Emirates Stadium after listening to feedback from supporters in a forum. The intention was to turn the stadium into a "visible stronghold of all things Arsenal through a variety of artistic and creative means '', led by Arsenal chief executive Ivan Gazidis.
Among the first changes were white seats installed in the pattern of the club 's trademark cannon, located in the lower level stands opposite the entrance tunnel. "The Spirit of Highbury '', a shrine depicting every player to have played for Arsenal during its 93 - year residence, was erected in late 2009 outside the stadium at the south end. Eight large murals on the exterior of the stadium were installed, each depicting four Arsenal legends linking arms, such that the effect of the completed design is 32 legends in a huddle embracing the stadium.
Around the lower concourse of the stadium are additional murals depicting 12 "greatest moments '' in Arsenal history, voted for in a poll on the club 's website. Prior to the start of the 2010 -- 11 season, Arsenal renamed the coloured seating quadrants of the ground as the East Stand, West Stand, North Bank, and Clock End. As at Highbury, this involved the installation of a clock above the newly renamed Clock End, which was unveiled in a league match against Blackpool. In April 2011, Arsenal renamed two bridges near the stadium in honour of club directors Ken Friar and Danny Fiszman. As part of the club 's 125th anniversary celebrations in December 2011, Arsenal unveiled three statues of former captain Tony Adams, record goalscorer Thierry Henry and manager Herbert Chapman outside the stadium. Before Arsenal 's match against Sunderland in February 2014, the club unveiled a statue of former striker Dennis Bergkamp, outside the west stand of Emirates Stadium.
Banners and flags, often designed by supporters group REDaction, are hung around the ground. A large "49 '' flag, representing the run of 49 unbeaten league games, is passed around the lower tier before kick off.
Described as "beautiful '' and "intimidating '' by architect Christopher Lee of Populous, the Emirates Stadium is a four - tiered bowl with translucent polycarbonate roofing over the stands, but not over the pitch. The underside is clad with metallic panels and the roof is supported by four triangular trusses, made of welded tubular steel. Two trusses span 200 metres (660 ft) in a north -- south direction while a further two span an east -- west direction. The trusses are supported by the stadium 's vertical concrete cores, eight of which are connected to them by steel tripods. They in turn each house four stairways, a passenger lift and service access. Façades are either glazed or woven between the cores, allowing visitors on the podium to see inside the stadium. The glass and steel construction was devised by Populous to give an impression that the stadium sparkles in sunlight and glows in the night.
The upper and lower parts of the stadium feature standard seating. The stadium has two levels below ground that house its support facilities such as commercial kitchens, changing rooms and press and education centres. The main middle tier, known as the "Club Level '', is premium priced and also includes the director 's box. There are 7,139 seats at this level, which are sold on licences lasting from one to four years. Immediately above the club tier there is a small circle consisting of 150 boxes of 10, 12 and 15 seats. The total number of spectators at this level is 2,222. The high demand for tickets, as well as the relative wealth of their London fans, means revenue from premium seating and corporate boxes is nearly as high as the revenue from the entire stadium at Highbury.
The upper tier is contoured to leave open space in the corners of the ground, and the roof is significantly canted inwards. Both of these features are meant to provide as much airflow and sunlight to the pitch as possible. The stadium gives an illusion that supporters in the upper tier on one side of the ground are unable to see supporters in the upper tier opposite. As part of a deal with Sony, the stadium was the first in the world to incorporate HDTV streaming. In the north - west and south - east corners of the stadium are two giant screens suspended from the roof.
The pitch is 105 by 68 metres (115 by 74 yd) in size and the total grass area at Emirates is 113 by 76 metres (124 by 83 yd). Like Highbury, it runs north -- south, with the players ' tunnel and the dugouts on the west side of the pitch underneath the main TV camera. The away fans are found in the south - east corner of the lower tier. The away supporter configuration can be expanded from 1,500 seats to 4,500 seats behind the south goal in the lower tier, and a further 4,500 seats can also be made available in the upper tier, bringing the total to 9,000 supporters (the regulation 15 % required for domestic cup competitions such as the FA Cup and EFL Cup).
The Emirates Stadium pays tribute to Arsenal 's former home, Highbury. The club 's offices are officially called Highbury House, located north - east of Emirates Stadium, and house the bust of Herbert Chapman that used to reside at Highbury. Three other busts that used to reside at Highbury, of Claude Ferrier (architect of Highbury 's East stand), Denis Hill - Wood (former Arsenal chairman) and manager Arsène Wenger, have also been moved to Emirates Stadium and are on display at the entrance to the Diamond Club. Additionally, the two bridges over the railway line to the east of the stadium, connecting the stadium to Drayton Park, are called the Clock End and North Bank bridges, after the stands at Highbury; the clock that gave its name to the old Clock End has been resited at the new Clock End, which features a newer, larger replica of the clock. The Arsenal club museum, which was formerly held in the North Bank Stand, opened in October 2006 and is located to the north of the stadium, within the Northern Triangle building. It houses the marble statues that were once held in the marble halls of Highbury.
As of 2008, Arsenal 's season ticket waiting list stood at 40,000 people. The potential for expanding the stadium is a common topic within fan debate. Potential expansion methods include making the seats smaller, filling in the dips in the corners of the stadium (though this would block airflow), and replacing the roof with a third level and building a new roof. There has also been discussion on the implementation of safe standing.
Aside from sporting uses, the Emirates Stadium operates as a conference centre. On 27 March 2008, it played host to a summit between British Prime Minister Gordon Brown and French President Nicolas Sarkozy, in part because the stadium was regarded as "a shining example of Anglo -- French co-operation ''. The stadium has been used as a location for the audition stage of reality shows The X Factor, Britain 's Got Talent and Big Brother. In 2016, the Emirates was a venue for Celebrity Masterchef, where contestants prepared meals for club staff members.
The Emirates has also been used as a music venue, a configuration which increases the maximum capacity to 72,000. Bruce Springsteen and the E Street Band became the first band to play a concert at the stadium on 30 May 2008. They played a second gig the following night. British band Coldplay played three concerts at the Emirates in June 2012, having sold out the first two dates within 30 minutes of going on sale. They were the first band to sell out the stadium for music purposes. Green Day set a gig attendance record when performing at the Emirates in June 2013.
The stadium has also been used for a number of international friendly matches, all of which have featured the Brazil national football team. The first match was against Argentina on 3 September 2006, which ended in a 3 -- 0 victory for Brazil.
It is difficult to get accurate attendance figures, as Arsenal do not release them and instead report tickets sold. For the 2016 - 17 season, the reported average home league attendance was 59,957. The highest attendance for an Arsenal match at Emirates Stadium as of 2012 is 60,161, for a 2 -- 2 draw with Manchester United on 3 November 2007. The average attendance for competitive first - team fixtures in the stadium 's first season, 2006 -- 07, was 59,837, with a Premier League average attendance of 60,045. The record for the most away fans to have attended the Emirates Stadium is 9,000, set by Plymouth Argyle in the FA Cup third round in 2009, a match Arsenal won 3 -- 1. The lowest attendance for an Arsenal match at Emirates Stadium as of 2012 is 46,539, against Shrewsbury Town in the Football League Cup third round on 20 September 2011, a match Arsenal won 3 -- 1. The first player to score in a league game at the Emirates Stadium was Aston Villa 's Olof Mellberg after 53 minutes. The first Arsenal player to score at the Emirates Stadium was midfielder Gilberto Silva.
The Emirates Stadium is served by a number of London Underground stations and bus routes. Arsenal tube station is the closest for the northern portion of the stadium, with Highbury & Islington tube station servicing the southern end. While Holloway Road tube station is the closest to the southern portion, it is entry - only before matches and exit - only afterwards to prevent overcrowding. Drayton Park station, adjacent to the Clock End Bridge, is shut on matchdays as the rail services to this station do not operate at weekends or after 10 pm. £ 7.6 million was set aside in the planning permission for upgrading Drayton Park and Holloway Road; however, Transport for London decided not to upgrade either station, in favour of improvement works at the interchanges at Highbury & Islington and Finsbury Park, both of which are served by Underground and National Rail services and are approximately a ten - minute walk away. The Emirates Stadium is the only football stadium that stands beside the East Coast Main Line between London and Edinburgh and is just over 2 miles from London King 's Cross.
Driving to the Emirates Stadium is strongly discouraged as there are strict matchday parking restrictions in operation around the stadium. From an hour before kick - off to one hour after the final whistle there is a complete ban on vehicle movement on a number of the surrounding roads, except for Islington residents and businesses with a road closure access permit. The parking restrictions mean that the stadium is highly dependent on the Underground service, particularly when there is no overground service in operation; on one occasion, industrial action forced Arsenal to reschedule a match for the following month.
The stadium opens to ticket holders two hours before kick - off. The main club shop, named ' The Armoury ', and ticket offices are located near the West Stand, with an additional store named ' All Arsenal ' at the base of the North Bank Bridge and the ' Arsenal Store ' next to Finsbury Park station. Arsenal operates an electronic ticketing system where members of ' The Arsenal ' (the club 's fan membership scheme) use their membership cards to enter the stadium, thus removing the need for turnstile operators. Non-members are issued with one - off paper tickets embedded with an RFID tag allowing them to enter the stadium.
|
where does south african airways fly in the us | List of South African Airways destinations - wikipedia
This is a list of South African Airways destinations, as of November 2016. It does not include destinations served by South African Express. As of June 2016, South African Airways served eight destinations outside Africa. By that time, the top five international routes were Johannesburg -- New York City -- JFK, Johannesburg -- London Heathrow, Johannesburg -- Hong Kong, Johannesburg -- Frankfurt and Johannesburg -- Perth.
|
when is god first called father in the bible | God the Father - wikipedia
God the Father is a title given to God in various religions, most prominently in Christianity. In mainstream trinitarian Christianity, God the Father is regarded as the first person of the Trinity, followed by the second person God the Son (Jesus Christ) and the third person God the Holy Spirit. Since the second century, Christian creeds have included affirmation of belief in "God the Father (Almighty) '', primarily in his capacity as "Father and creator of the universe ''. Yet, in Christianity the concept of God as the father of Jesus Christ goes metaphysically further than the concept of God as the Creator and father of all people, as indicated in the Apostles ' Creed, where the expression of belief in the "Father almighty, creator of heaven and earth '' is immediately, but separately, followed by belief in "Jesus Christ, his only Son, our Lord '', thus expressing both senses of fatherhood.
In Christianity, God is addressed as the Father, in part because of his active interest in human affairs: in the way that a father takes an interest in his children who are dependent on him, God, as a father, responds to humanity, his children, acting in their best interests. Many believe they can communicate with God and come closer to him through prayer -- a key element of achieving communion with God.
In general, the title Father (capitalized) signifies God 's role as the life - giver, the authority, and powerful protector, often viewed as immense, omnipotent, omniscient, omnipresent with infinite power and charity that goes beyond human understanding. For instance, after completing his monumental work Summa Theologica, Catholic St. Thomas Aquinas concluded that he had not yet begun to understand ' God the Father '. Although the term "Father '' implies masculine characteristics, God is usually defined as having the form of a spirit without any human biological gender, e.g. the Catechism of the Catholic Church # 239 specifically states that "God is neither man nor woman: he is God ''. Although God is never directly addressed as "Mother '', at times motherly attributes may be interpreted in Old Testament references such as Isa 42: 14, Isa 49: 14 -- 15 or Isa 66: 12 -- 13.
In the New Testament, the Christian concept of God the Father may be seen as a continuation of the Jewish concept, but with specific additions and changes, which over time made the Christian concept even more distinct by the start of the Middle Ages. The conformity to the Old Testament concepts is shown in Matthew 4: 10 and Luke 4: 8 where in response to temptation Jesus quotes Deuteronomy 6: 13 and states: "It is written, you shall worship the Lord your God, and him only shall you serve. '' 1 Corinthians 8: 6 shows the distinct Christian teaching about the agency of Christ by first stating: "there is one God, the Father, of whom are all things, and we unto him '' and immediately continuing with "and one Lord, Jesus Christ, through whom are all things, and we through him. '' This passage clearly acknowledges the Jewish teachings on the uniqueness of God, yet also states the role of Jesus as an agent in creation. Over time, the Christian doctrine began to fully diverge from Judaism through the teachings of the Church Fathers in the second century, and by the fourth century belief in the Trinity was formalized. According to Mary Rose D'Angelo and James Barr, the Aramaic term Abba was, in New Testament times, neither markedly a term of endearment nor a formal word, but the word normally used by sons and daughters, throughout their lives, in the family context.
According to Marianne Thompson, in the Old Testament, God is called "Father '' with a unique sense of familiarity. In addition to the sense in which God is "Father '' to all men because he created the world (and in that sense "fathered '' the world), the same God is also uniquely the law - giver to his chosen people. He maintains a special, covenantal father - child relationship with the people, giving them the Shabbat, stewardship of his prophecies, and a unique heritage in the things of God, calling Israel "my son '' because he delivered the descendants of Jacob out of slavery in Egypt according to his covenants and oaths to their fathers, Avraham, Isaac and Yaacov. In the Hebrew Bible, in Isaiah 63: 16 (JP) it reads: "For You are our father, for Abraham did not know us, neither did Israel recognize us; You, O Lord, are our father; our redeemer of old is your name. '' To God, according to Judaism, is attributed the fatherly role of protector. He is titled the Father of the poor, of the orphan and the widow, their guarantor of justice. He is also titled the Father of the king, as the teacher and helper over the judge of Israel.
According to Alon Goshen - Gottstein, in the Old Testament "Father '' is generally a metaphor; it is not a proper name for God but rather one of many titles by which Jews speak of and to God. In Christianity, fatherhood is taken in a more literal and substantive sense, with explicit emphasis on the need for the Son as a means of accessing the Father, making for a more metaphysical rather than metaphorical interpretation.
There is a deep sense in which Christians believe that they are made participants in the eternal relationship of Father and Son, through Jesus Christ. Christians call themselves adopted children of God:
But when the fullness of time had come, God sent forth his Son, born of woman, born under the law, to redeem those who were under the law, so that we might receive adoption as sons. And because you are sons, God has sent the Spirit of his Son into our hearts, crying, "Abba! Father! '' So you are no longer a slave, but a son, and if a son, then an heir through God.
In Christianity the concept of God as the Father of Jesus is distinct from the concept of God as the Creator and Father of all people, as indicated in the Apostles ' Creed. The profession in the creed begins with expressing belief in the "Father almighty, creator of heaven and earth '' and then immediately, but separately, in "Jesus Christ, his only Son, our Lord '', thus expressing both senses of fatherhood within the creed.
Since the second century, creeds in the Western Church have included affirmation of belief in "God the Father (Almighty) '', the primary reference being to "God in his capacity as Father and creator of the universe ''. This did not exclude either the fact that the "eternal father of the universe was also the Father of Jesus the Christ '' or that he had even "vouchsafed to adopt (the believer) as his son by grace ''.
Creeds in the Eastern Church (known to have come from a later date) began with an affirmation of faith in "one God '' and almost always expanded this by adding "the Father Almighty, Maker of all things visible and invisible '' or words to that effect.
By the end of the first century, Clement of Rome had repeatedly referred to the Father, Son and Holy Spirit, and linked the Father to creation, 1 Clement 19.2 stating: "let us look steadfastly to the Father and Creator of the universe ''. Around AD 213 in Adversus Praxeas (chapter 3) Tertullian is believed to have provided a formal representation of the concept of the Trinity, i.e. that God exists as one "substance '' but three "Persons '': The Father, the Son and the Holy Spirit, and with God the Father being the Head. Tertullian also discussed how the Holy Spirit proceeds from the Father and the Son, while the expression "from the Father through the Son '' is also found among early Christian writers.
The Nicene Creed, which dates to 325, states that the Son (Jesus Christ) is "eternally begotten of the Father '', indicating that their divine Father - Son relationship is seen as not tied to an event within time or human history.
To Trinitarian Christians (which include Roman Catholics, Eastern Orthodox, Oriental Orthodox, and most but not all Protestant denominations), God the Father is not a separate God from God the Son (of whom Jesus is the incarnation) and the Holy Spirit, the other hypostases of the Christian Godhead. In Eastern Orthodox theology, God the Father is the arche or principium ("beginning ''), the "source '' or "origin '' of both the Son and the Holy Spirit, and is considered the eternal source of the Godhead. The Father is the one who eternally begets the Son, and the Father eternally breathes the Holy Spirit.
As a member of the Trinity, God the Father is one with, co-equal to, co-eternal with, and consubstantial with the Son and the Holy Spirit, each Person being the one eternal God and in no way separated: all alike are uncreated and omnipotent. Because of this, the Trinity is beyond reason and can only be known by revelation.
The Trinitarian concept of God the Father is not pantheistic in that he is not viewed as identical to the universe or a vague notion that persists in it, but exists fully outside of creation, as its Creator. He is viewed as a loving and caring God, a Heavenly Father who is active both in the world and in people 's lives. He created all things visible and invisible in love and wisdom, and created man for his own sake.
The emergence of Trinitarian theology of God the Father in early Christianity was based on two key ideas: first the shared identity of the Yahweh of the Old Testament and the God of Jesus in the New Testament, and then the self - distinction and yet the unity between Jesus and his Father. An example of the unity of Son and Father is Matthew 11: 27: "No one knows the Son except the Father and no one knows the Father except the Son '', asserting the mutual knowledge of Father and Son.
The concept of the fatherhood of God does appear in the Old Testament, but is not a major theme. While the view of God as the Father is used in the Old Testament, it only became a focus in the New Testament, as Jesus frequently referred to it. This is manifested in the Lord 's Prayer, which combines the earthly needs of daily bread with the reciprocal concept of forgiveness. Jesus ' emphasis on his special relationship with the Father highlights the importance of the distinct yet unified natures of Jesus and the Father, building to the unity of Father and Son in the Trinity.
The paternal view of God as the Father extends beyond Jesus to his disciples, and the entire Church, as reflected in the petitions Jesus submitted to the Father for his followers at the end of the Farewell Discourse, the night before his crucifixion. Instances of this in the Farewell Discourse are John 14: 20 as Jesus addresses the disciples: "I am in my Father, and you in me, and I in you '' and in John 17: 22 as he prays to the Father: "I have given them the glory that you gave me, that they may be one as we are one. ''
A number of Christian groups reject the doctrine of the Trinity, but differ from one another in their views regarding God the Father.
In Mormon theology, the most prominent conception of God is as a divine council of three distinct beings: Elohim (the Father), Jehovah (the Son, or Jesus), and the Holy Spirit. The Father and Son are considered to have perfected, physical bodies, while the Holy Spirit has a body of spirit. Mormons believe that God the Father presides over both the Son and Holy Spirit, where God the Father is greater than both, but they are one in the sense that they have a unity of purpose. Mormons do not distinguish God as a separate ontological species from humans, a concept they believe was originated by post-apostolic theologians who incorporated elements of Greek philosophy into Christian doctrine. Mormons teach that the title of Father is not figurative, because humans are literally the spirit offspring of God (Acts 17: 28 -- 29, Hebrews 12: 9). In this sense, they consider Jesus Christ their older brother (John 20: 17), as He is the first born, or first begotten of God 's children (Colossians 1: 15, Hebrews 1: 26; 12: 23). Biblical references to Christ as the "only begotten '', in contrast, refer to God being the Father of Christ 's mortal body, born of the virgin Mary. The terms "Father '' and "Son '' imply a lineage of beings in Mormonism and in all non-symbolic usage of these words. In the Mormon hymn, "If You Could Hie to Kolob '', there is no beginning to the lineage of exalted, resurrected personages that are in perfect unity.
In Jehovah 's Witness theology, only God the Father (Jehovah) is the one true almighty God, even over his Son Jesus Christ. They teach that the pre-existent Christ is God 's First - begotten Son, and that the Holy Spirit is God 's active force (projected energy). They believe these three are united in purpose, but are not one being and are not equal in power. While the Witnesses acknowledge Christ 's pre-existence, perfection, and unique "Sonship '' from God the Father, and believe that Christ had an essential role in creation and redemption, and is the Messiah, they believe that only the Father is without beginning. They say that the Son was the Father 's only direct creation, before all ages. God the Father is emphasized in Jehovah 's Witness meetings and services more than Christ the Son, as they teach that the Father is greater than the Son.
Oneness Pentecostalism teaches that God is a singular spirit who is one person, not three divine persons, individuals or minds. God the Father is the title of the Supreme Creator. The titles of the Son and Holy Spirit are merely titles reflecting the different personal manifestations of the One True God the Father in the universe.
Although similarities exist among religions, the common language and the shared concepts about God and his title Father among the Abrahamic religions is quite limited, and each religion has very specific belief structures and religious nomenclature with respect to the subject. While a religious teacher in one faith may be able to explain the concepts to his own audience with ease, significant barriers remain in communicating those concepts across religious boundaries.
In the Bahá'í faith God is also addressed as father. The Bahá'í view of God is essentially monotheistic. God is the imperishable, uncreated being who is the source of all existence.
In Hinduism, Bhagavan Krishna in the Bhagavad Gita, chapter 9, verse 17, stated: "I am the Father of this world, the Mother, the Dispenser and the Grandfather '', one commentator adding: "God being the source of the universe and the beings in it, He is held as the Father, the Mother and the Grandfather ''. A genderless Brahman is also considered the Creator and Life - giver, and the Shakta Goddess is viewed as the divine mother and life - bearer.
The Islamic concept of God differs from the Christian and Jewish views: the term "father '' is not formally applied to God by Muslims, and the Christian notion of the Trinity is rejected in Islam. Muslims also believe God is Wali. Wali means "custodian '', "protector '' and "helper ''. Allah is also called Rahim, meaning "Merciful, Compassionate ''.
In Islamic theology, God is the all - powerful and all - knowing creator, sustainer, ordainer, and judge of the universe.
Even though traditional Islamic teaching does not formally prohibit using the term "Father '' in reference to God, it does not propagate or encourage it. There are some narratives of the Islamic prophet Muhammad in which he compares the mercy of God toward his worshipers to that of a mother to her infant child.
Islamic teaching rejects the Christian Father - Son relationship of God and Jesus, and states that Jesus is a prophet of God, not the Son of God. Islamic theology strictly reiterates the Absolute Oneness of God, and totally separates him from other beings (whether humans, angel or any other holy figure), and rejects any form of dualism or Trinitarianism. Chapter 112 of the Qur'an states:
Say: He is God, the One and Only; God, the Eternal, Absolute; He begetteth not, nor is He begotten; And there is none like unto Him. (Sura 112: 1 -- 4, Yusuf Ali)
In Judaism, God is described as "Father '' as he is seen as the absolute one, indivisible and incomparable, transcendent, immanent, and non-corporeal God of creation and history. The God in Judaism is the giver of the sabbath and the two torahs -- written, oral, mystical tradition -- to his chosen people. In Judaism, the use of the "Father '' title is generally a metaphor, referring to the role as Life - giver and Law - giver, and is one of many titles by which Jews speak of and to God.
The Jewish concept of God is similar to the Christian view, as Christianity has Jewish roots, though there are some differences, such as the role of a Mediator. The concept of "God the Father '' in Biblical Judaism is also generally more metaphorical (Numbers 23: 19).
The Jewish concept of God is that God is non-corporeal, transcendent and immanent, the ultimate source of love, and a metaphorical "Father ''. The Torah declares: "God is not a man (איש: (' iysh)) that He should lie, nor is He a mortal (בן -- אדם: (ben - ' adam)) that He should relent ''. (Book of Numbers 23: 19 Hebrew: לא אישׁ אל ויכזב ובן ־ אדם ויתנחם ההוא אמר ולא יעשׂה ודבר ולא יקימנה )
The Aramaic term for father (Hebrew: אבא , abba) appears in traditional Jewish liturgy and Jewish prayers to God (e.g. in the Kaddish).
According to Ariela Pelaia, in a prayer of Rosh Hashanah, Areshet Sfateinu, an ambivalent attitude toward God is demonstrated, due to His role as a Father and as a King. A free translation of the relevant sentence is: "today every creature is judged, either as sons or as slaves. If as sons, forgive us like a father forgives his son. If as slaves, we wait, hoping for good, until the verdict, your holy majesty. '' Another famous prayer emphasizing this dichotomy is called Avinu Malkeinu, which means "Our Father Our King '' in Hebrew. Usually the entire congregation will sing the last verse of this prayer in unison, which says: "Our Father, our King, answer us as though we have no deed to plead our cause, save us with mercy and loving - kindness. ''
In Sikhism, God is considered uncompromisingly monotheistic, as symbolized by Ik Onkar (one Creator), a central tenet of Sikh philosophy. The Guru Granth consistently refers to the creator as "He '' and "Father ''. This is because the Granth is written in north Indian Indo - Aryan languages (a mixture of Punjabi and dialects of Hindi) which have no neuter gender. Since the Granth says that God is indescribable, God has no gender according to Sikhism.
God in the Sikh scriptures has been referred to by several names, picked from Indian and Semitic traditions. In terms of human relations, he is called father, mother, brother, relation, friend, lover, beloved and husband. Other names, expressive of his supremacy, are thakur, prabhu, svami, sah, patsah, sahib, sain (Lord, Master).
For about a thousand years, no attempt was made to portray God the Father in human form, because early Christians believed that the words of Exodus 33: 20 "Thou canst not see my face: for there shall no man see Me and live '' and of the Gospel of John 1: 18: "No man hath seen God at any time '' were meant to apply not only to the Father, but to all attempts at the depiction of the Father. Typically only a small part of the body of the Father would be represented, usually the hand, or sometimes the face, but rarely the whole person, and in many images, the figure of the Son supplants the Father, so a smaller portion of the person of the Father is depicted.
In the early medieval period God was often represented by Christ as the Logos, which continued to be very common even after the separate figure of God the Father appeared. Western art eventually required some way to illustrate the presence of the Father, so through successive representations a set of artistic styles for the depiction of the Father in human form gradually emerged around the tenth century CE.
By the twelfth century depictions of a figure of God the Father, essentially based on the Ancient of Days in the Book of Daniel, had started to appear in French manuscripts and in stained glass church windows in England. In the 14th century the illustrated Naples Bible had a depiction of God the Father in the Burning Bush. By the 15th century, the Rohan Book of Hours included depictions of God the Father in human form or anthropomorphic imagery. The depiction remains rare and often controversial in Eastern Orthodox art, and by the time of the Renaissance artistic representations of God the Father were freely used in the Western Church.
|
when did the prison population begin to rise dramatically | History of United States prison systems - wikipedia
Imprisonment as a form of criminal punishment only became widespread in the United States just before the American Revolution, though penal incarceration efforts had been ongoing in England since as early as the 1500s, and prisons in the form of dungeons and various detention facilities had existed since long before then. Prison building efforts in the United States came in three major waves. The first began during the Jacksonian Era and led to widespread use of imprisonment and rehabilitative labor as the primary penalty for most crimes in nearly all states by the time of the American Civil War. The second began after the Civil War and gained momentum during the Progressive Era, bringing a number of new mechanisms -- such as parole, probation, and indeterminate sentencing -- into the mainstream of American penal practice. Finally, since the early 1970s, the United States has engaged in a historically unprecedented expansion of its imprisonment systems at both the federal and state level. Since 1973, the number of incarcerated persons in the United States has increased five-fold, and in a given year 7,000,000 people were under the supervision or control of correctional services in the United States. These periods of prison construction and reform produced major changes in the structure of prison systems and their missions, the responsibilities of federal and state agencies for administering and supervising them, as well as the legal and political status of prisoners themselves.
Incarceration as a form of criminal punishment is "a comparatively recent episode in Anglo - American jurisprudence, '' according to historian Adam J. Hirsch. Before the nineteenth century, sentences of penal confinement were rare in the criminal courts of British North America. But penal incarceration had been utilized in England as early as the reign of the Tudors, if not before. When post-revolutionary prisons emerged in the United States, they were, in Hirsch 's words, not a "fundamental departure '' from the former American colonies ' intellectual past. Early American prison systems like Massachusetts ' Castle Island Penitentiary, built in 1780, essentially imitated the model of the 1500s English workhouse.
The English workhouse, an intellectual forerunner of early United States penitentiaries, was first developed as a "cure '' for the idleness of the poor. Over time English officials and reformers came to see the workhouse as a more general system for rehabilitating criminals of all kinds.
Common wisdom in the England of the 1500s attributed property crime to idleness. "Idleness '' had been a status crime since Parliament enacted the Statute of Laborers in the mid-fourteenth century. By 1530, English subjects convicted of leading a "Rogishe or Vagabonds Trade or Lyfe '' were subject to whipping and mutilation, and recidivists could face the death penalty.
In 1557, many in England perceived that vagrancy was on the rise. That same year, the City of London reopened the Bridewell as a warehouse for vagrants arrested within the city limits. By order of any two of the Bridewell 's governors, a person could be committed to the prison for a term of custody ranging from several weeks to several years. In the decades that followed, "houses of correction '' or "workhouses '' like the Bridewell became a fixture of towns across England -- a change made permanent when Parliament began requiring every county in the realm to build a workhouse in 1576.
The workhouse was not just a custodial institution. At least some of its proponents hoped that the experience of incarceration would rehabilitate workhouse residents through hard labor. Supporters expressed the belief that forced abstinence from "idleness '' would make vagrants into productive citizens. Other supporters argued that the threat of the workhouse would deter vagrancy, and that inmate labor could provide a means of support for the workhouse itself. Governance of these institutions was controlled by written regulations promulgated by local authorities, and local justices of the peace monitored compliance.
Although "vagrants '' were the first inhabitants of the workhouse -- not felons or other criminals -- expansion of its use to criminals was discussed. Sir Thomas More described in Utopia (1516) how an ideal government should punish citizens with slavery, not death, and expressly recommended use of penal enslavement in England. Thomas Starkey, chaplain to Henry VIII, suggested that convicted felons "be put in some commyn work... so by theyr life yet the commyn welth schold take some profit. '' Edward Hext, justice of the peace in Somersetshire in the 1500s, recommended that criminals be put to labor in the workhouse after receiving the traditional punishments of the day.
During the seventeenth and eighteenth centuries, several programs experimented with sentencing various petty criminals to the workhouse. Many petty criminals were sentenced to the workhouse by way of the vagrancy laws even before these efforts. A commission appointed by King James I in 1622 to reprieve felons condemned to death with banishment to the American colonies was also given authority to sentence offenders "to toyle in some such heavie and painful manuall workes and labors here at home and be kept in chains in the house of correction or other places, '' until the King or his ministers decided otherwise. Within three years, a growing body of laws authorized incarceration in the workhouse for specifically enumerated petty crimes.
Throughout the 1700s, even as England 's "Bloody Code '' took shape, incarceration at hard labor was held out as an acceptable punishment for criminals of various kinds -- e.g., those who received a suspended death sentence via the benefit of clergy or a pardon, those who were not transported to the colonies, or those convicted of petty larceny. In 1779 -- at a time when the American Revolution had made convict transportation to North America impracticable -- the English Parliament passed the Penitentiary Act, mandating the construction of two London prisons with internal regulations modeled on the Dutch workhouse -- i.e., prisoners would labor more or less constantly during the day, with their diet, clothing, and communication strictly controlled. Although the Penitentiary Act promised to make penal incarceration the focal point of English criminal law, the penitentiaries it prescribed were never constructed.
Despite the ultimate failure of the Penitentiary Act, however, the legislation marked the culmination of a series of legislative efforts that "disclose () the... antiquity, continuity, and durability '' of rehabilitative incarceration ideology in Anglo - American criminal law, according to historian Adam J. Hirsch. The first United States penitentiaries involved elements of the early English workhouses -- hard labor by day and strict supervision of inmates.
A second group that supported penal incarceration in England included clergymen and "lay pietists '' of various religious denominations who made efforts during the 1700s to reduce the severity of the English criminal justice system. Initially, reformers like John Howard focused on the harsh conditions of pre-trial detention in English jails. But many philanthropists did not limit their efforts to jail administration and inmate hygiene; they were also interested in the spiritual health of inmates and curbing the common practice of mixing all prisoners together at random. Their ideas about inmate classification and solitary confinement match another undercurrent of penal innovation in the United States that persisted into the Progressive Era.
Beginning with Samuel Denne 's Letter to Lord Ladbroke (1771) and Jonas Hanway 's Solitude in Imprisonment (1776), philanthropic literature on English penal reform began to concentrate on the post-conviction rehabilitation of criminals in the prison setting. Although they did not speak with a single voice, the philanthropist penologists tended to view crime as an outbreak of the criminal 's estrangement from God. Hanway, for example, believed that the challenge of rehabilitating the criminal law lay in restoring his faith in, and fear of the Christian God, in order to "qualify (him) for happiness in both worlds. ''
Many eighteenth - century English philanthropists proposed solitary confinement as a way to rehabilitate inmates morally. Since at least 1740, philanthropic thinkers touted the use of penal solitude for two primary purposes: (1) to isolate prison inmates from the moral contagion of other prisoners, and (2) to jump - start their spiritual recovery. The philanthropists found solitude far superior to hard labor, which only reached the convict 's worldly self, failing to get at the underlying spiritual causes of crime. In their conception of prison as a "penitentiary, '' or place of repentance for sin, the English philanthropists departed from Continental models and gave birth to a largely novel idea -- according to social historians Michael Meranze and Michael Ignatieff -- which in turn found its way into penal practice in the United States.
A major political obstacle to implementing the philanthropists ' solitary program in England was financial: Building individual cells for each prisoner cost more than the congregate housing arrangements typical of eighteenth - century English jails. But by the 1790s, local solitary confinement facilities for convicted criminals appeared in Gloucestershire and several other English counties.
The philanthropists ' focus on isolation and moral contamination became the foundation for early penitentiaries in the United States. Philadelphians of the period eagerly followed the reports of philanthropist reformer John Howard, and the archetypical penitentiaries that emerged in the 1820s United States -- e.g., Auburn and Eastern State penitentiaries -- both implemented a solitary regime aimed at morally rehabilitating prisoners. The concept of inmate classification -- or dividing prisoners according to their behavior, age, etc. -- remains in use in United States prisons to this day.
A third group involved in English penal reform were the "rationalists '' or "utilitarians ''. According to historian Adam J. Hirsch, eighteenth - century rationalist criminology "rejected scripture in favor of human logic and reason as the only valid guide to constructing social institutions ''.
Eighteenth - century rational philosophers like Cesare Beccaria and Jeremy Bentham developed a "novel theory of crime '' -- specifically, that what made an action subject to criminal punishment was the harm it caused to other members of society. For the rationalists, sins that did not result in social harm were outside the purview of civil courts. With John Locke 's "sensational psychology '' as a guide, which maintained that environment alone defined human behavior, many rationalists sought the roots of a criminal 's behavior in his or her past environment.
Rationalists differed as to what environmental factors gave rise to criminality. Some rationalists, including Cesare Beccaria, blamed criminality on the uncertainty of criminal punishment, whereas earlier criminologists had linked criminal deterrence to the severity of punishment. In essence, Beccaria believed that where arrest, conviction, and sentencing for crime were "rapid and infallible, '' punishments for crime could remain moderate. Beccaria did not take issue with the substance of contemporary penal codes -- e.g., whipping and the pillory; rather, he took issue with their form and implementation.
Other rationalists, like Jeremy Bentham, believed that deterrence alone could not end criminality and looked instead to the social environment as the ultimate source of crime. Bentham 's conception of criminality led him to concur with philanthropist reformers on the need for rehabilitation of offenders. But, unlike the philanthropists, Bentham and like - minded rationalists believed the true goal of rehabilitation was to show convicts the logical "inexpedience '' of crime, not their estrangement from religion. For these rationalists, society was the source of and the solution to crime.
Ultimately, hard labor became the preferred rationalist therapy. Bentham eventually adopted this approach, and his well - known 1791 design for the Panopticon prison called for inmates to labor in solitary cells for the course of their imprisonment. Another rationalist, William Eden, collaborated with John Howard and Justice William Blackstone in drafting the Penitentiary Act of 1779, which called for a penal regime of hard labor.
According to social and legal historian Adam J. Hirsch, the rationalists had only a secondary impact on United States penal practices. But their ideas -- whether consciously adopted by United States prison reformers or not -- resonate in various United States penal initiatives to the present day.
Although convicts played a significant role in British settlement of North America, according to legal historian Adam J. Hirsch "(t) he wholesale incarceration of criminals is in truth a comparatively recent episode in the history of Anglo - American jurisprudence. '' Imprisonment facilities were present from the earliest English settlement of North America, but the fundamental purpose of these facilities changed in the early years of United States legal history as a result of a geographically widespread "penitentiary '' movement. The form and function of prison systems in the United States has continued to change as a result of political and scientific developments, as well as notable reform movements during the Jacksonian Era, Reconstruction Era, Progressive Era, and the 1970s. But the status of penal incarceration as the primary mechanism for criminal punishment has remained the same since its first emergence in the wake of the American Revolution.
Prisoners and prisons appeared in North America simultaneous to the arrival of European settlers. Among the ninety or so men who sailed with the explorer known as Christopher Columbus were a young black man abducted from the Canary Islands and at least four convicts. By 1570, Spanish soldiers in St. Augustine, Florida, had built the first substantial prison in North America. As other European nations began to compete with Spain for land and wealth in the New World, they too turned to convicts to fill out the crews on their ships.
According to social historian Marie Gottschalk, convicts were "indispensable '' to English settlement efforts in what is now the United States. In the late sixteenth century, Richard Hakluyt called for the large - scale conscription of criminals to settle the New World for England. But official action on Hakluyt 's proposal lagged until 1606, when the English crown escalated its colonization efforts.
Sir John Popham 's colonial venture in present - day Maine was stocked, a contemporary critic complained, "out of all the gaols (jails) of England. '' The Virginia Company, the corporate entity responsible for settling Jamestown, authorized its colonists to seize Native American children wherever they could "for conversion... to the knowledge and worship of the true God and their redeemer, Christ Jesus. '' The colonists themselves lived, in effect, as prisoners of the Company 's governor and his agents. Men caught trying to escape were tortured to death; seamstresses who erred in their sewing were subject to whipping. One Richard Barnes, accused of uttering "base and detracting words '' against the governor, was ordered to be "disarmed and have his arms broken and his tongue bored through with an awl '' before being banished from the settlement entirely.
When control of the Virginia Company passed to Sir Edwin Sandys in 1618, efforts to bring large numbers of settlers to the New World against their will gained traction alongside less coercive measures like indentured servitude. Vagrancy statutes had begun to provide for penal transportation to the American colonies as an alternative to capital punishment during the reign of Queen Elizabeth I, and at the same time the legal definition of "vagrancy '' was greatly expanded.
Soon, a royal commission endorsed the notion that any felon -- except those convicted of murder, witchcraft, burglary, or rape -- could legally be transported to Virginia or the West Indies to work as a plantation servant. Sandys also proposed sending maids to Jamestown as "breeders, '' whose costs of passage could be paid for by the planters who took them on as "wives. '' Soon, over sixty such women had made the passage to Virginia, and more followed. King James I 's royal administration also sent "vagrant '' children to the New World as servants. A letter in the Virginia Company 's records suggests that as many as 1,500 children were sent to Virginia between 1617 and 1619. By 1619, African prisoners were brought to Jamestown and sold as slaves as well, marking England 's entry into the Atlantic slave trade.
The infusion of kidnapped children, maids, convicts, and Africans to Virginia during the early part of the seventeenth century inaugurated a pattern that would continue for nearly two centuries. By 1650, the majority of British emigrants to colonial North America went as "prisoners '' of one sort or another -- whether as indentured servants, convict laborers, or slaves.
The prisoner trade became the "moving force '' of English colonial policy after the Restoration -- i.e., from the summer of 1660 onward. By 1680, the Reverend Morgan Godwyn estimated that almost 10,000 persons were spirited away to the Americas annually by the English crown.
Parliament accelerated the prisoner trade in the eighteenth century. Under England 's Bloody Code, a large portion of the realm 's convicted criminal population faced the death penalty. But pardons were common. During the eighteenth century, the majority of those sentenced to die in English courts were pardoned -- often in exchange for voluntary transport to the colonies. In 1717, Parliament empowered the English courts to directly sentence offenders to transportation, and by 1769 transportation was the leading punishment for serious crime in Great Britain. Over two - thirds of those sentenced during sessions of the Old Bailey in 1769 were transported. The list of "serious crimes '' warranting transportation continued to expand throughout the eighteenth century, as it had during the seventeenth. Historian A. Roger Ekirch estimates that as many as one - quarter of all British emigrants to colonial America during the 1700s were convicts. In the 1720s, James Oglethorpe settled the colony of Georgia almost entirely with convict settlers.
The typical transported convict during the 1700s was brought to the North American colonies on board a "prison ship. '' Upon arrival, the convict 's keepers would bathe and clothe him or her (and, in extreme cases, provide a fresh wig) in preparation for a convict auction. Newspapers advertised the arrival of a convict cargo in advance, and buyers would come at an appointed hour to purchase convicts off the auction block.
Prisons played an essential role in the convict trade. Some ancient prisons, like the Fleet and Newgate, still remained in use during the high period of the American prisoner trade in the eighteenth century. But more typically an old house, medieval dungeon space, or private structure would act as a holding pen for those bound for American plantations or the Royal Navy (under impressment). Operating clandestine prisons in major port cities for detainees whose transportation to the New World was not strictly legal became a lucrative trade on both sides of the Atlantic in this period. Unlike contemporary prisons, those associated with the convict trade served a custodial, not a punitive, function.
Many colonists in British North America resented convict transportation. As early as 1683, Pennsylvania 's colonial legislature attempted to bar felons from being introduced within its borders. Benjamin Franklin called convict transportation "an insult and contempt, the cruellest, that ever one people offered to another. '' Franklin suggested that the colonies send some of North America 's rattlesnakes to England, to be set loose in its finest parks, in revenge. But transportation of convicts to England 's North American colonies continued until the American Revolution, and many officials in England saw it as a humane necessity in light of the harshness of the penal code and contemporary conditions in English jails. Dr. Samuel Johnson, upon hearing that British authorities might bow to continuing agitation in the American colonies against transportation, reportedly told James Boswell: "Why they are a race of convicts, and ought to be thankful for anything we allow them short of hanging! ''
When the American Revolution ended the prisoner trade to North America, the abrupt halt threw Britain 's penal system into disarray, as prisons and jails quickly filled with the many convicts who previously would have moved on to the colonies. Conditions steadily worsened. It was during this crisis period in the English criminal justice system that penal reformer John Howard began his work. Howard 's comprehensive study of British penal practice, The State of the Prisons in England and Wales, was first published in 1777 -- one year after the start of the Revolution.
The "Old Gaol" was built in 1690 by order of the Plymouth and Massachusetts Bay Colony courts and was used as a jail from 1690 to 1820; at one time it was moved and attached to the constable's home. It was added to the National Register of Historic Places in 1971.
Although jails were an early fixture of colonial North American communities, they generally did not serve as places of incarceration as a form of criminal punishment. Instead, the main role of the colonial American jail was as a non-punitive detention facility for pre-trial and pre-sentence criminal defendants, as well as imprisoned debtors. The most common penal sanctions of the day were fines, whipping, and community - oriented punishments like the stocks.
Jails were among the earliest public structures built in colonial British North America. The 1629 colonial charter of the Massachusetts Bay Colony, for example, granted the shareholders behind the venture the right to establish laws for their settlement "not contrarie to the lawes of our realm in England '' and to administer "lawfull correction '' to violators, and Massachusetts established a house of correction for punishing criminals by 1635. Colonial Pennsylvania built two houses of correction starting in 1682, and Connecticut established one in 1727. By the eighteenth century, every county in the North American colonies had a jail.
Colonial American jails were not the "ordinary mechanism of correction '' for criminal offenders, according to social historian David Rothman. Criminal incarceration as a penal sanction was "plainly a second choice, '' either a supplement to or a substitute for traditional criminal punishments of the day, in the words of historian Adam J. Hirsch. Eighteenth - century criminal codes provided for a far wider range of criminal punishments than contemporary state and federal criminal laws in the United States. Fines, whippings, the stocks, the pillory, the public cage, banishment, capital punishment at the gallows, penal servitude in private homes -- all of these punishments came before imprisonment in British colonial America.
The most common sentence of the colonial era was a fine or a whipping, but the stocks were another common punishment -- so much so that most colonies, like Virginia in 1662, hastened to build these before either the courthouse or the jail. The theocratic communities of Puritan Massachusetts imposed faith - based punishments like the admonition -- a formal censure, apology, and pronouncement of criminal sentence (generally reduced or suspended), performed in front of the church - going community. Sentences to the colonial American workhouse -- when they were actually imposed on defendants -- rarely exceeded three months, and sometimes spanned just a single day.
Colonial jails served a variety of public functions other than penal imprisonment. Civil imprisonment for debt was one of these, but colonial jails also served as warehouses for prisoners - of - war and political prisoners (especially during the American Revolution). They were also an integral part of the transportation and slavery systems -- not only as warehouses for convicts and slaves being put up for auction, but also as a means of disciplining both kinds of servants.
The colonial jail 's primary criminal law function was as a pre-trial and pre-sentence detention facility. Generally, only the poorest or most despised defendants found their way into the jails of colonial North America, since colonial judges rarely denied requests for bail. The only penal function of significance that colonial jails served was for contempt -- but this was a coercive technique designed to protect the power of the courts, not a penal sanction in its own right.
The colonial jail differed from today 's United States prisons not only in its purpose, but in its structure. Many were no more than a cage or closet. Colonial jailers ran their institutions on a "familial '' model and resided in an apartment attached to the jail, sometimes with a family of their own. The colonial jail 's design resembled an ordinary domestic residence, and inmates essentially rented their bed and paid the jailer for necessities.
Before the close of the American Revolution, few statutes or regulations defined the colonial jailers ' duty of care or other responsibilities. Upkeep was often haphazard, and escapes quite common. Few official efforts were made to maintain inmates ' health or see to their other basic needs.
The first major prison reform movement in the United States came after the American Revolution, at the start of the nineteenth century. According to historians Adam J. Hirsch and David Rothman, the reform of this period was shaped less by intellectual movements in England than by a general clamor for action in a time of population growth and increasing social mobility, which prompted a critical reappraisal and revision of penal corrective techniques. To address these changes, post-colonial legislators and reformers began to stress the need for a system of hard labor to replace ineffectual corporal and traditional punishments. Ultimately, these early efforts yielded the United States ' first penitentiary systems.
The onset of the eighteenth century brought major demographic and social change to colonial and, eventually, post-colonial American life. The century was marked by rapid population growth throughout the colonies -- a result of lower mortality rates and increasing (though small at first) rates of immigration. In the aftermath of the Revolutionary War, this trend persisted. Between 1790 and 1830, the population of the newly independent North American states greatly increased, and the number and density of urban centers did as well. The population of Massachusetts almost doubled in this period, while it tripled in Pennsylvania and increased five-fold in New York. In 1790, no American city had more than fifty thousand residents; by 1830, nearly 500,000 people lived in cities larger than that.
The population of the former British colonies also became increasingly mobile during the eighteenth century, especially after the Revolution. Movement to urban centers, in and out of emerging territories, and up and down a more fluid social ladder throughout the century made it difficult for the localism and hierarchy that had structured American life in the seventeenth century to retain their former significance. The Revolution only accelerated patterns of dislocation and transience, leaving displaced families and former soldiers struggling to adapt to the strictures of a stunted post-war economy. The emergence of cities created a kind of community very different from the pre-revolutionary model. The crowded streets of emerging urban centers like Philadelphia seemed to contemporary observers to dangerously blur class, sex, and racial boundaries.
Demographic change in the eighteenth century coincided with shifts in the configuration of crime. After 1700, literary evidence from a variety of sources -- e.g., ministers, newspapers, and judges -- suggests that property crime rates rose (or, at least, were perceived to). Conviction rates appear to have risen during the last half of the eighteenth century, rapidly so in the 1770s and afterward, especially in urban areas. Contemporary accounts also suggest widespread transiency among former criminals.
Communities began to think about their town as something less than the sum of all its inhabitants during this period, and the notion of a distinct criminal class began to materialize. In the Philadelphia of the 1780s, for example, city authorities worried about the proliferation of taverns on the outskirts of the city, "sites of an alternative, interracial, lower - class culture '' that was, in the words of one observer, "the very root of vice. '' In Boston, a higher urban crime rate led to the creation of a specialized, urban court in 1800.
The efficacy of traditional, community-based punishments waned during the eighteenth century. Penal servitude, a mainstay of British and colonial American criminal justice, became nearly extinct over the course of the century, as Northern states, beginning with Vermont in 1777, began to abolish slavery. Fines and bonds for good behavior -- among the most common criminal sentences of the colonial era -- were nearly impossible to enforce among the transient poor. As the former American colonists expanded their political loyalty beyond the parochial to their new state governments, promoting a broader sense of the public welfare, banishment (or "warning out") also seemed inappropriate, since it merely passed criminals on to a neighboring community. Public shaming punishments like the pillory had always been inherently unstable methods of enforcing the public order, since they depended in large part on the participation of the accused and the public. As the eighteenth century matured, and a social distance between the criminal and the community became more manifest, mutual antipathy (rather than community compassion and offender penitence) became more common at public executions and other punishments. In urban centers like Philadelphia, growing class and racial tensions -- especially in the wake of the Revolution -- led crowds to actively sympathize with the accused at executions and other public punishments.
Colonial governments began making efforts to reform their penal architecture and excise many traditional punishments even before the Revolution. Massachusetts, Pennsylvania, and Connecticut all inaugurated efforts to reconstitute their penal systems in the years leading up to the war to make incarceration at hard labor the sole punishment for most crimes. Although war interrupted these efforts, they were renewed afterward. A "climate change '' in post-Revolutionary politics, in the words of historian Adam J. Hirsch, made colonial legislatures open to legal change of all sorts after the Revolution, as they retooled their constitutions and criminal codes to reflect their separation from England. The Anglophobic politics of the day bolstered efforts to do away with punishments inherited from English legal practice.
Reformers in the United States also began to discuss the effect of criminal punishment itself on criminality in the post-revolutionary period, and at least some concluded that the barbarism of colonial - era punishments, inherited from English penal practice, did more harm than good. "The mild voice of reason and humanity, '' wrote New York penal reformer Thomas Eddy in 1801, "reached not the thrones of princes or the halls of legislators. '' "The mother country had stifled the colonists ' benevolent instincts, '' according to Eddy, "compelling them to emulate the crude customs of the old world. The result was the predominance of archaic and punitive laws that only served to perpetuate crime. '' Attorney William Bradford made an argument similar to Eddy 's in a 1793 treatise.
By the second decade of the nineteenth century, every state except North Carolina, South Carolina, and Florida had amended its criminal code to provide for incarceration (primarily at hard labor) as the primary punishment for all but the most serious offenses. Provincial laws in Massachusetts began to prescribe short terms in the workhouse for deterrence throughout the eighteenth century and, by mid-century, the first statutes mandating long-term hard labor in the workhouse as a penal sanction appeared. In New York, a 1785 bill, restricted in effect to New York City, authorized municipal officials to substitute up to six months' hard labor in the workhouse in all cases where prior law had mandated corporal punishment. In 1796, an additional bill expanded this program to the entire state of New York. Pennsylvania established a hard labor law in 1786. Hard-labor programs expanded to Virginia in 1796, New Jersey in 1797, Kentucky in 1798, and Vermont, New Hampshire, and Maryland in 1800.
This move toward imprisonment did not translate to an immediate break from traditional forms of punishment. Many new criminal provisions merely expanded the discretion of judges to choose from among various punishments, including imprisonment. The 1785 amendments to Massachusetts' arson statute, for instance, expanded the available punishments for setting fire to a non-dwelling from whipping alone to a choice among hard labor, imprisonment in jail, the pillory, whipping, fines, or any combination of these. Massachusetts judges wielded this new-found discretion in various ways for twenty years, before fines, incarceration, or the death penalty became the sole available sanctions under the state's penal code. Other states -- e.g., New York, Pennsylvania, and Connecticut -- also lagged in their shift toward incarceration.
Prison construction kept pace with post-revolutionary legal change. All states that revised their criminal codes to provide for incarceration also constructed new state prisons. But the focus of penal reformers in the post-revolutionary years remained largely external to the institutions they built, according to David Rothman. For reformers of the day, Rothman claims, the fact of imprisonment -- not the institution 's internal routine and its effect on the offender -- was of primary concern. Incarceration seemed more humane than traditional punishments like hanging and whipping, and it theoretically matched punishment more specifically to the crime. But it would take another period of reform, in the Jacksonian Era, for state prison initiatives to take the shape of actual justice institutions.
By 1800, eleven of the then-sixteen United States -- among them Pennsylvania, New York, New Jersey, Massachusetts, Kentucky, Vermont, Maryland, New Hampshire, Georgia, and Virginia -- had in place some form of penal incarceration. But the primary focus of contemporary criminology remained on the legal system, according to historian David Rothman, not on the institutions in which convicts served their sentences. This changed during the Jacksonian Era, as contemporary notions of criminality continued to shift.
Starting in the 1820s, a new institution, the "penitentiary '', gradually became the focal point of criminal justice in the United States. At the same time, other novel institutions -- the asylum and the almshouse -- redefined care for the mentally ill and the poor. For its proponents, the penitentiary was an ambitious program whose external appearance, internal arrangements, and daily routine would counteract the disorder and immorality thought to be breeding crime in American society. Although its adoption was haphazard at first, and marked by political strife -- especially in the South -- the penitentiary became an established institution in the United States by the end of the 1830s.
Jacksonian - Era reformers and prison officials began seeking the origins of crime in the personal histories of criminals and traced the roots of crime to society itself. In the words of historian David Rothman, "They were certain that children lacking discipline quickly fell victim to the influence of vice at loose in the community. '' Jacksonian reformers specifically tied rapid population growth and social mobility to the disorder and immorality of contemporary society.
To combat society 's decay and the risks presented by it, Jacksonian penologists designed an institutional setting to remove "deviants '' from the corruption of their families and communities. In this corruption - free environment, the deviant could learn the vital moral lessons he or she had previously ignored while sheltered from the temptations of vice. This solution ultimately took the shape of the penitentiary.
In the 1820s, New York and Pennsylvania began new prison initiatives that inspired similar efforts in a number of other states. Post-revolutionary carceral regimes had conformed to the English workhouse tradition; inmates labored together by day and shared congregate quarters at night.
Beginning in 1790, Pennsylvania became the first of the United States to institute solitary confinement for incarcerated convicts. After 1790, those sentenced to hard labor in Pennsylvania were moved indoors to an inner block of solitary cells in Philadelphia 's Walnut Street Jail. New York began implementing solitary living quarters at New York City 's Newgate Prison in 1796.
From the efforts at the Walnut Street Jail and Newgate Prison, two competing systems of imprisonment emerged in the United States by the 1820s. The "Auburn '' (or "Congregate System '') emerged from New York 's prison of the same name between 1819 and 1823. And the "Pennsylvania '' (or "Separate System '') emerged in that state between 1826 and 1829. The only material difference between the two systems was whether inmates would ever leave their solitary cells -- under the Pennsylvania System, inmates almost never did, but under the Auburn System most inmates labored in congregate workshops by day and slept alone.
To advocates of both systems, the promise of institutionalization depended upon isolating the prisoner from the moral contamination of society and establishing discipline in him (or, in rarer cases, her). But the debate as to which system was superior continued into the mid-nineteenth century, pitting some of the period 's most prominent reformers against one another. Samuel Gridley Howe promoted the Pennsylvania System in opposition to Matthew Carey, an Auburn proponent; Dorothea Dix took up the Pennsylvania System against Louis Dwight; and Francis Lieber supported Pennsylvania against Francis Wayland. The Auburn system eventually prevailed, however, due largely to its lesser cost.
The Pennsylvania system, first implemented in the early 1830s at that state's Eastern State Penitentiary on the outskirts of Philadelphia and Western State Penitentiary at Pittsburgh, was designed to maintain the complete separation of inmates at all times. Until 1904, prisoners entered the institution with a black hood over their heads, so they would never know who their fellow convicts were, before being led to the cell where they would serve the remainder of their sentence in near-constant solitude. The Cherry Hill complex, as Eastern State was popularly known, entailed a massive expenditure of state funds; its walls alone cost $200,000, and its final price tag reached $750,000, one of the largest state expenditures of its day.
Like its competitor Auburn system, Eastern State 's regimen was premised on the inmate 's potential for individual rehabilitation. Solitude, not labor, was its hallmark; labor was reserved only for those inmates who affirmatively earned the privilege. All contact with the outside world more or less ceased for Eastern State prisoners. Proponents boasted that a Pennsylvania inmate was "perfectly secluded from the world... hopelessly separated from... family, and from all communication with and knowledge of them for the whole term of imprisonment. ''
Through isolation and silence -- complete separation from the moral contaminants of the outside worlds -- Pennsylvania supporters surmised that inmates would begin a reformation. "Each individual, '' a representative tract reads, "will necessarily be made the instrument of his own punishment; his conscience will be the avenger of society. ''
Proponents insisted that the Pennsylvania system would involve only mild disciplinary measures, reasoning that isolated men would have neither the resources nor the occasion to violate rules or to escape. But from the outset Eastern State 's keepers used corporal punishments to enforce order. Officials used the "iron gag, '' a bridle - like metal bit placed in the inmate 's mouth and chained around his neck and head; the "shower bath, '' repeated dumping of cold water onto a restrained convict; or the "mad chair, '' into which inmates were strapped in such a way so as to prevent their bodies from resting.
Ultimately, only three prisons ever enacted the costly Pennsylvania program. But nearly all penal reformers of the antebellum period believed in Pennsylvania 's use of solitary confinement. The system remained largely intact at Eastern State Penitentiary into the early twentieth century.
The Auburn or "Congregate '' System became the archetypical model penitentiary in the 1830s and 1840s, as its use expanded from New York 's Auburn Penitentiary into the Northeast, the Midwest, and the South. The Auburn system 's combination of congregate labor in prison workshops and solitary confinement by night became a near - universal ideal in United States prison systems, if not an actual reality.
Under the Auburn system, prisoners slept alone at night and labored together in a congregate workshop during the day for the entirety of their fixed criminal sentence as set by a judge. Prisoners at Auburn were not to converse at any time, or even to exchange glances. Guards patrolled secret passageways behind the walls of the prison 's workshops in moccasins, so inmates could never be sure whether or not they were under surveillance.
One official described Auburn's discipline as "tak[ing] measures for convincing the felon that he is no longer his own master; no longer in a condition to practice deceptions in idleness; that he must learn and practice diligently some useful trade, whereby, when he is let out of the prison to obtain an honest living." Inmates were permitted no intelligence of events on the outside. In the words of an early warden, Auburn inmates were "to be literally buried from the world." The institution's regime remained largely intact until after the Civil War.
Auburn was the second state prison built in New York State. The first, Newgate, located in present-day Greenwich Village in New York City, contained no solitary cells beyond a few set aside for "worst offenders." Its first keeper, Quaker Thomas Eddy, believed rehabilitation of the criminal was the primary end of punishment (though Eddy also believed that his charges were "wicked and depraved, capable of every atrocity, and ever plotting some means of violence and escape"). Eddy was not inclined to rely on prisoners' fear of his severity: his "chief disciplinary weapon" was solitary confinement on limited rations; he forbade his guards from striking inmates; and he permitted "well-behaved" inmates to have a supervised visit with family once every three months. Eddy made largely unsuccessful efforts to establish profitable prison labor programs, which he had hoped would cover incarceration costs and provide seed money for inmates' re-entry into society in the form of the "overstint," i.e., a small portion of the profits of an inmate's labor while incarcerated, payable at his or her release. Discipline nevertheless remained hard to enforce, and major riots occurred in 1799 and 1800, the latter subdued only by military intervention. Conditions continued to worsen in the wake of the riots, especially during a crime wave that followed the War of 1812.
New York legislators set aside funds for construction of the Auburn prison to address the disappointments of Newgate and alleviate its persistent overcrowding. Almost from the outset, Auburn officials, with the consent of the legislature, eschewed the "humane" style envisioned by Thomas Eddy for Newgate. Floggings of up to thirty-nine lashes as punishment for disciplinary infractions were permitted under an 1819 state law, which also authorized the use of the stocks and the irons. The practice of providing convicts with some of the proceeds of their labor at the time of release, the "overstint," was discontinued. The severity of the new regime likely caused another series of riots in 1820, after which the legislature formed a New York State Prison Guard to put down future disturbances.
Officials also began implementing a classification system at Auburn in the wake of the riots, dividing inmates into three groups: (1) the worst, who were placed on constant solitary lockdown; (2) middling offenders, who were kept in solitary and worked in groups when well - behaved; and (3) the "least guilty and depraved, '' who were permitted to sleep in solitary and work in groups. Construction on a new solitary cell block for category (1) inmates ended in December 1821, after which these "hardened '' offenders moved into their new home. Within a little over a year, however, five of these men had died of consumption, another forty - one were seriously ill, and several had gone insane. After visiting the prison and seeing the residents of the new cell block, Governor Joseph C. Yates was so appalled by their condition that he pardoned several of them outright.
Scandal struck Auburn again when a female inmate became pregnant in solitary confinement and, later, died after repeated beatings and the onset of pneumonia. (Because Auburn relied on female inmates for its washing and cleaning services, women remained part of the population but the first separate women 's institution in New York was not completed until 1893.) A jury convicted the keeper who beat the woman of assault and battery, and fined him $25, but he remained on the job. A grand jury investigation into other aspects of the prison 's management followed but was hampered, among other obstacles, by the fact that convicts could not present evidence in court. Even so, the grand jury eventually concluded that Auburn 's keepers had been permitted to flog inmates without a higher official present, a violation of state law. But neither the warden nor any other officer was ever prosecuted, and the use and intensity of flogging only increased at Auburn, as well as the newer Sing Sing prison, in subsequent years.
Despite its early scandals and regular political power struggles that left it with an unstable administrative structure, Auburn remained a model institution nationwide for decades to come. Massachusetts opened a new prison in 1826 modeled on the Auburn system, and within the first decade of Auburn 's existence, New Hampshire, Vermont, Maryland, Kentucky, Ohio, Tennessee, and the District of Columbia all constructed prisons patterned on its congregate system. By the eve of the American Civil War, Illinois, Indiana, Georgia, Missouri, Mississippi, Texas, and Arkansas, with varying success, had all inaugurated efforts to establish an Auburn - model prison in their jurisdictions.
The widespread move to penitentiaries in the antebellum United States changed the geography of criminal punishment, as well as its central therapy. Offenders were now ferried across water or into walled compounds to centralized institutions of the criminal justice system hidden from public view. The penitentiary thus largely ended community involvement in the penal process -- beyond a limited role in the criminal trial itself -- though many prisons permitted visitors who paid a fee to view the inmates throughout the nineteenth century.
On the eve of the American Civil War, crime did not pose a major concern in the Southern United States. Southerners in the main considered crime to be a Northern problem. A traditional, extra-legal system of remedying slights, rooted in honor culture, made personal violence the hallmark of Southern crime. Southern penitentiary systems brought only the most hardened criminals under centralized state control. Most criminals remained outside formal state control structures -- especially outside of Southern cities.
The historical record suggests that, in contrast to their Northern counterparts, Southern states experienced a unique political anxiety about whether to construct prisons during the antebellum period. Disagreements over republican principles -- i.e., the role of the state in social governance -- became the focus of a persistent debate about the necessity of Southern penitentiaries in the decades between independence and the Civil War.
To many Southerners, writes historian Edward L. Ayers, "republicanism '' translated simply to freedom from the will of anyone else: Centralized power, even in the name of an activist republican government, promised more evil than good. Ayers concludes that this form of Southern republicanism owed its particular shape to slavery. The South 's slave economy perpetuated a rural, localized culture, he argues, in which men distrusted strangers ' claims to power. In this political milieu, the notion of surrendering individual liberties of any kind -- even those of criminals -- for some abstract conception of "social improvement '' was abhorrent to many.
But criminal incarceration appealed to others in the South. These Southerners believed that freedom would best grow under the protection of an enlightened state government that made the criminal law more effective by eradicating its more brutal practices and offering criminals the possibility of rehabilitation and restoration to society. Some also believed that penitentiaries would help to remove the contagion of depravity from republican society by segregating those who threatened the republican ideal (the "disturbing class ''). Notions of living up to the world 's ideas of "progress '' also animated Southern penal reformers. When the Georgia legislature considered abolishing the state 's penitentiary after a devastating fire in 1829, reformers there worried their state would become the first to renounce republican "progress. ''
A sizable portion of the Southern population -- if not the majority -- did not support the establishment of the penitentiary. On the two occasions when voters in the region had an opportunity to express their opinion of the penitentiary system at the ballot box -- in Alabama and North Carolina -- the penitentiary lost overwhelmingly. Some viewed traditional public punishments as the most republican mechanism for criminal justice, due to their inherent transparency. Some worried that, because the quantity of suffering under the penitentiary system was sure to far exceed that of the traditional system, Southern jurors would maintain their historic disposition toward acquittal. Evangelical Southern clergymen also opposed the penitentiary -- especially when its implementation accompanied statutory restriction of the death penalty, which they deemed a biblical requirement for certain crimes.
Opposition to the penitentiary crossed party lines; neither the Whigs nor the Democrats lent consistent support to the institution in the antebellum period. But consistent and enthusiastic support for the penitentiary did come, almost uniformly, from Southern governors. The motives of these governors are not entirely clear, historian Edward L. Ayers concludes: perhaps they hoped that the additional patronage positions offered by a penitentiary would augment the historically weak power of the Southern executive; perhaps they were legitimately concerned with the problem of crime; or perhaps both considerations played a role. Grand juries -- drawn from Southern "elites" -- also issued regular calls for penitentiaries in this period.
Ultimately, the penitentiary 's supporters prevailed in the South, as in the North. Southern legislators enacted prison legislation in state after Southern state before the Civil War, often over public opposition. Their motives in doing so appear mixed. According to Edward L. Ayers, some Southern legislators appear to have believed they knew what was best for their people in any case. Since many Southern legislators came from the elite classes, Ayers also observes, they may also have had a personal "class control '' motive for enacting penitentiary legislation, even while they could point to their participation in penitentiary efforts as evidence of their own benevolence. Historian Michael S. Hindus concludes that Southern hesitation about the penitentiary, at least in South Carolina, stemmed from the slave system, which made the creation of a white criminal underclass undesirable.
Southern states erected penitentiaries alongside their Northern counterparts in the early nineteenth century. Virginia (1796), Maryland (1829), Tennessee (1831), Georgia (1832), Louisiana and Missouri (1834 -- 1837), and Mississippi and Alabama (1837 -- 1842) all erected penitentiary facilities during the antebellum period. Only North Carolina, South Carolina, and the largely uninhabited Florida failed to build a penitentiary before the Civil War.
Virginia was the first state after Pennsylvania, in 1796, to dramatically reduce the number of crimes punishable by death, and its legislators simultaneously called for the construction of a "gaol and penitentiary house '' as the cornerstone of a new criminal justice regime. Designed by Benjamin Henry Latrobe, the state 's first prison at Richmond resembled Jeremy Bentham 's Panopticon design (as well as the not - yet - built Eastern State Penitentiary 's). All inmates served a mandatory period of solitary confinement after initial entry.
Unfortunately for its inhabitants, the site at Richmond where Virginia's first penitentiary was built bordered a stagnant pool, in which sewage collected. The prison's cells had no heating system and water oozed from its walls, causing inmates' extremities to freeze during the winter months. Prisoners could perform no work during the solitary portion of their sentence, which they served completely isolated in near-total darkness, and many went mad during this period. Those prisoners who survived the isolation period joined other inmates in the prison workshop to make goods for the state militia. The workshop never turned a profit. Escapes were common.
Despite Virginia's example, Kentucky, Maryland, and Georgia all constructed prisons before 1820, and the trend continued in the South thereafter. Early Southern prisons were marked by escapes, violence, and arson, and the personal reformation of inmates was left almost solely to underpaid prison chaplains. Bitter opposition from the public and rampant overcrowding both marked Southern penal systems during the antebellum period. Once established, however, Southern penitentiaries took on lives of their own, with each state's system experiencing a complex history of innovation and stagnation, efficient and inefficient wardens, relative prosperity and poverty, fires, escapes, and legislative attacks; even so, they followed a common trajectory.
During the period in which slavery existed, few black Southerners in the lower South were imprisoned, and virtually none of those imprisoned were slaves. Most often, slaves accused of crimes -- especially less serious offenses -- were tried informally in extra-legal plantation "courts, '' although it was not uncommon for slaves to come within the formal jurisdiction of the Southern courts. The majority of Southern inmates during the antebellum period were foreign - born whites. Nevertheless, in the upper South, free blacks made up a significant (and disproportionate) one - third of state prison populations. Governors and legislators in both the upper and lower South became concerned about racial mixing in their prison systems. Virginia experimented for a time with selling free blacks convicted of "serious '' crimes into slavery until public opposition led to the measure 's repeal (but only after forty such persons were sold).
Very few women, black or white, were imprisoned in the antebellum South. But for those women who did come under the control of Southern prisons, conditions were often "horrendous, '' according to Edward L. Ayers. Although they were not made to shave their heads like male convicts, female inmates in the antebellum South did not live in specialized facilities -- as was the case in many antebellum Northern prisons -- and sexual abuse was common.
As in the North, the costs of imprisonment preoccupied Southern authorities, although it appears that Southerners devoted more concern to this problem than their Northern counterparts. Southern governors of the antebellum period tended to have little patience for prisons that did not turn a profit or, at least, break even. Southern prisons adopted many of the same money - making tactics as their Northern counterparts. Prisons earned money by charging fees to visitors. They also earned money by harnessing convict labor to produce simple goods that were in steady demand, like slave shoes, wagons, pails, and bricks. But this fomented unrest among workers and tradesmen in Southern towns and cities. Governor Andrew Johnson of Tennessee, a former tailor, waged political war on his state 's penitentiary and the industries it had introduced among its inmates. To avoid these conflicts, some states -- like Georgia and Mississippi -- experimented with prison industry for state - run enterprises. But in the end few penitentiaries, North or South, turned a profit during the antebellum period.
Presaging Reconstruction-era developments, however, Virginia, Georgia, and Tennessee began considering the idea of leasing their convicts to private businesspersons by the 1850s. Missouri, Alabama, Texas, Kentucky, and Louisiana all leased out their convicts during the antebellum period under a variety of arrangements -- some inside the prison itself (as Northern prisons were also doing), and others outside of the state's own facilities.
Between 1800 and 1860, the vast majority of the Southern population worked in agriculture. Whereas the proportion of the Northern population working on farms dropped in this period from 70 to 40 percent, 80 percent of Southerners were consistently engaged in farm - related work. Reflecting this, only one - tenth of Southerners lived in what the contemporary census criteria described as an urban area (compared to nearly one - quarter of Northerners).
Antebellum Southern cities stood at the juncture of the region's slave economy and the international market economy, and economics appear to have played a crucial role in shaping the face of crime in Southern cities. These urban centers tended to attract young, propertyless white males, not only from the Southern countryside but also from the North and abroad. Urban immigration in the South reached a peak during the 1850s, when an economic boom in cotton produced "flush times." Poor young men and others -- white and black -- settled on the peripheries of Southern cities like Savannah, Georgia. Here they came into contact with the wealthy and more stable elements of modern society, producing demographics similar to those of post-revolutionary Philadelphia and other Northern cities.
The first modern Southern police forces emerged between 1845 and the Civil War in large part due to the class - based tensions that developed in Southern cities. Some Southern cities -- notably New Orleans and Charleston -- experimented with police forces even earlier in the eighteenth century as a means of controlling their large urban slave populations. But most Southern cities relied on volunteer night - watch forces prior to mid-century. The transition to uniformed police forces was not especially smooth: Major political opposition arose as a result of the perceived corruption, inefficiency, and threat to individual liberty posed by the new police.
According to Edward L. Ayers, Southern police forces of the antebellum period tended to enforce uniformity by creating crime out of "disorder '' and "nuisance '' enforcement. The vast majority of theft prosecutions in the antebellum South arose in its cities. And property offenders made up a disproportionate share of the convict population. Although thieves and burglars constituted fewer than 20 percent of the criminals convicted in Southern courts, they made up about half the South 's prison population.
During the period between independence and the Civil War, Southern inmates were disproportionately ethnic. Foreign - born persons made up less than 3 percent of the South 's free population. In fact, only one - eighth of all immigrants to the United States during the antebellum period settled in the South. Yet foreign immigrants represented anywhere from 8 to 37 percent of the prison population of the Southern states during this period.
Crime in Southern cities generally mirrored that of Northern ones during the antebellum years. Both sections experienced a spike in imprisonment rates during a national market depression on the eve of the American Civil War. The North had experienced a similar depression during the 1830s and 1840s -- with a concurrent increase in imprisonment -- that the agrarian South did not. But urban crime in the South differed from that in the North in one key way -- its violence. A significantly higher percentage of violence characterized Southern criminal offenders of all class levels. Young white males made up the bulk of violent offenders in the urban South.
Slavery in the urban South also played a role in the development of its penal institutions. Urban slave - owners often utilized jails to "store '' their human property and to punish slaves for disciplinary infractions. Slaveholding in urban areas tended to be less rigid than in the rural South. Nearly 60 percent of slaves living in Savannah, Georgia, for example, did not reside with their master; many were allowed to hire themselves out for wages (though they had to share the proceeds with their owner). In this environment, where racial control was more difficult to enforce, Southern whites were constantly on guard against black criminality. Charleston, South Carolina, established a specialized workhouse for masters to send their slaves for punishment for a fee. In Savannah, Georgia, owners could send their slaves to the city jail to have punishment administered.
Industrialization proceeded haphazardly across the South during the antebellum period, and large sections of the rural population participated in a subsistence economy like that of the colonial era. Patterns of crime in these regions reflected these economic realities; violence, not thefts, took up most of the docket space in rural Southern courts.
Unlike in antebellum urban spaces, the ups and downs of the market economy had a lesser impact on crime in the South's rural areas. Far fewer theft cases appear on criminal dockets in the rural antebellum South than in its cities (though rural judges and juries, like their urban counterparts, dealt with property offenders more harshly than violent ones). Crime in rural areas consisted almost solely of violent offenses.
Most counties in the antebellum South -- as in the North -- maintained a jail for housing pre-trial and pre-sentence detainees. These varied in size and quality of construction considerably as a result of disparities in wealth between various counties. Unlike Southern cities, however, rural counties rarely used the jail as criminal punishment in the antebellum period, even as states across the Northeast and the Midwest shifted the focus of their criminal justice process to rehabilitative incarceration. Instead, fines were the mainstay of rural Southern justice.
The non-use of imprisonment as a criminal punishment in the rural antebellum South reflected the haphazard administration of criminal justice in these regions. Under the general criminal procedure of the day, victims of theft or violence swore out complaints before their local justice of the peace, who in turn issued arrest warrants for the accused. The county sheriff would execute the warrant and bring the defendant before a magistrate judge, who would conduct a preliminary hearing, after which he could either dismiss the case or bind the accused over to the Superior Court for a grand jury hearing. (Some cases, however, particularly those involving moral offenses like drinking and gambling, were initiated by the grand jury of its own accord.)
Criminal procedure in the antebellum rural South offered many avenues of escape to a criminal defendant, and only the poorest resided in the jail while awaiting trial or sentencing. Those defendants who did spend time in jail before trial had to wait for the prosecutor's biannual visit to their county. Southern prosecutors generally did not live in the local area where they prosecuted cases and were often ill-prepared. Disinterested jurors were also hard to come by, given the intimate nature of rural Southern communities. Relative leniency in sentencing appears to have marked most judicial proceedings for violent offenses -- the most common type. Historical evidence suggests that juries indicted a greater number of potential offenders than the judicial system could handle, in the belief that many troublemakers -- especially the landless -- would leave the country altogether.
Few immigrants or free blacks lived in the rural South in the pre-Civil War years, and slaves remained under the dominant control of a separate criminal justice system administered by planters throughout the period. Thus, most criminal defendants were Southern - born whites (and all socio - economic classes were represented on criminal dockets). Blacks occasionally came within the purview of the conventional criminal justice apparatus from their dealings with whites in the "gray market, '' among other offenses. But the danger to whites and blacks alike from illicit trading, the violence that often erupted at their meetings, and the tendency of whites to take advantage of their legally impotent black counterparties all made these occurrences relatively rare.
The American Civil War and its aftermath witnessed renewed efforts to reform America's system of imprisonment and its underlying rationale. Most state prisons had remained unchanged since the wave of penitentiary building during the Jacksonian Era and, as a result, were in a state of physical and administrative deterioration. Auburn and Eastern State penitentiaries, the paradigmatic prisons of Jacksonian reform, were little different. New reformers confronted the problems of decaying antebellum prisons with a new penal regime that focused on the rehabilitation of the individual -- this time with an emphasis on using institutional inducements as a means of effecting behavioral change. At the same time, Reconstruction-era penology also focused on emerging "scientific" views of criminality related to race and heredity, as the post-war years witnessed the birth of a eugenics movement in the United States.
Social historian David Rothman describes the story of post-Reconstruction prison administration as one of decline from the ambitions of the Jacksonian period. Facing major overcrowding and understaffing issues, prison officials reverted to "amazingly bizarre" methods of controlling their charges, Rothman writes, and a variety of harsh punishments proliferated in this period.
Although wardens tended to believe these measures were necessary for control, contemporary observers generally found them "unquestionably cruel and unusual, '' according to Rothman.
Northern states continued to lease the labor of their convicts to private business interests in the post-war years. The Thirteenth Amendment, adopted in 1865, expressly permitted slavery "as a punishment for a crime whereof the party shall have been duly convicted. '' In Northern prisons, the state generally housed and fed inmate laborers, while contractors brought all necessary machinery to the prison facility and leased the inmates ' time.
Abuses were common, according to investigative reporter Scott Christianson, as employers and guards tried to extract as much time and effort from prisoners as possible. At New Jersey 's prison at Trenton, after an inmate died while being "stretched '' by the prison staff, a committee investigating discipline at the prison determined that officials had poured alcohol on epileptics and set them on fire to see if they were faking convulsions in order to skip work. At an Ohio penitentiary, unproductive convicts were made to sit naked in puddles of water and receive electric shocks from an induction coil. In New York, public investigations of practices in the state 's prisons became increasingly frequent during the 1840s, 1860s, and 1870s -- though with little actual effect on conditions. They revealed that a prisoner had been poisoned to death for not working in one institution; another was found to have been kept chained to the floor for ten months in solitary confinement, until he eventually suffered a mental breakdown.
By and large, Americans of the 1870s, 1880s, and 1890s did little to address the disciplinary and other abuses in United States penitentiaries of the time. One reason for this apathy, according to authors Scott Christianson and David Rothman, was the composition of contemporary prison populations. Following the Civil War, the volume of immigration to the United States increased alongside expanding nativist sentiment, which had been a fixture of national politics since long before the War. During the 1870s, as many as 3 million immigrants arrived on the shores of the United States. By the 1880s, the influx rose to 5.2 million, as immigrants fled persecution and unrest in eastern and southern Europe. This trend continued until immigration reached a zenith between 1904 and 1914 of 1 million persons per year.
Already in the 1850s and 1860s, prisons (along with asylums for the mentally ill) were becoming the special preserve of the foreign-born and the poor. This trend accelerated as the nineteenth century drew to a close. In Illinois, for example, 60 percent of inmates in 1890 were foreign-born or second-generation immigrants -- Irish and German, mostly. Less than one-third of the Illinois inmates had completed grammar school, only 5 percent had a high school or college education, and the great majority held unskilled or semi-skilled jobs. In 1890s California, 45 percent of prisoners were foreign-born -- predominantly of Chinese, Mexican, Irish, and German descent -- and the majority were laborers, waiters, cooks, or farmers. Throughout the post-war years, the rate of imprisonment for foreign-born Americans was twice that of native-born ones; black Americans were incarcerated, North and South, at three times the rate of white Americans.
The Civil War's end also witnessed the emergence of pseudo-scientific theories concerning biological superiority and inherited social inferiority. Commentators grafted the Darwinian concept of "survival of the fittest" onto notions of social class. Charles Loring Brace, author of The Dangerous Classes of New York (1872), warned his readers that attempts to cure poverty through charity would backfire by lessening the poor's chance of survival. Richard L. Dugdale, a civic-minded New York merchant, toured thirteen county jails during the 1870s as a voluntary inspector for the prestigious Prison Association of New York. Reflecting on his observations in later writings, Dugdale traced crime to hereditary criminality and promiscuity.
These views on race and genetics, Christianson and Rothman conclude, affected the various official supervisory bodies established to monitor regulatory compliance in United States prisons. Although these monitoring boards (established either by the state executive or the legislature) would ostensibly ferret out abuses in the prison system, in the end their apathy toward the incarcerated population rendered them largely ill-equipped for the task of ensuring even humane care, Rothman argues. State and federal judges, for their part, refrained from monitoring prison conditions until the 1950s.
Persistent beliefs in inherited criminality and social inferiority also stoked a growing eugenics movement during the Reconstruction Era, which sought to "improve '' the human race through controlled breeding and eliminate "poor '' or "inferior '' tendencies. By the late 1890s, eugenics programs were enjoying a "full - blown renaissance '' in American prisons and institutions for the mentally ill, with leading physicians, psychologists, and wardens as proponents. Italian criminologist Cesare Lombroso published a highly influential tract in 1878 entitled L'uomo delinquente (or, The Criminal Man), which theorized that a primitive criminal type existed who was identifiable by physical symptoms or "stigmata. ''
Phrenology also became a popular "science" among prison officials; at the height of the study's popularity, the influential Sing Sing Prison matron Eliza W. Farnham was one of its adherents, and officials at Eastern State Penitentiary maintained phrenological data on all inmates during the post-war years.
As the field of physical anthropology gained traction in the 1880s, prisons became laboratories for studying eugenics, psychology, human intelligence, medicine, drug treatment, genetics, and birth control. Support for these initiatives sprang from the influential prison reform organizations in the United States at the time -- e.g., the Prison Reform Congress, the National Conference for Charities and Corrections, the National Prison Congress, the Prison Association of New York, and the Philadelphia Society for Alleviating the Miseries of Public Prisons.
New methods of identifying criminal tendencies and classifying offenders by threat level emerged from prison - based research. In 1896, for instance, New York began requiring all persons sentenced to a penal institution for thirty days or more to be measured and photographed for state records. Eugenics studies in the prison setting led to the development of the vasectomy as a replacement for total castration.
Eugenics studies of the day aimed to prevent the extinction or genetic deterioration of mankind through restraints on reproduction, according to author Scott Christianson. In the mid-1890s, the Kansas "Home for the Feeble - Minded '' began performing mass castrations on all of its residents. And Indiana became the first state to enact a compulsory sterilization act for certain mentally ill and criminal persons in 1907. John D. Rockefeller Jr., a eugenics devotee, became involved in social Darwinist experiments in New York. In the 1910s, Rockefeller created the Bureau for Social Hygiene, which conducted experiments on female prisoners, with the state 's consent and financial support, to determine the roots of their criminality and "mental defectiveness. ''
A new group of prison reformers emerged in the Reconstruction Era that maintained some optimism about the institution and initiated efforts to make the prison a center for moral rehabilitation. Their efforts led to some change in contemporary prisons, but it would take another period of reform during the Progressive Era for any significant structural revisions to the prison systems of the United States.
The primary failure of Reconstruction Era penitentiaries, according to historian David Rothman, was administrative. State governors typically appointed political patrons to prison posts, which were usually not full-time or salaried. In the 1870s, for example, the board of the Utica, New York, asylum was composed of two bankers, a grain merchant, two manufacturers, two lawyers, and two general businessmen. Prison oversight boards like the Utica one, composed of local businessmen, tended to defer to prison officials in most matters and focus solely on financial oversight, Rothman writes, and therefore tended to perpetuate the status quo.
Prison reform efforts of the Reconstruction Era came from a variety of sources. Fears about genetic contamination by the "criminal class '' and its effect on the future of mankind led to numerous moral policing efforts aimed at curbing promiscuity, prostitution, and "white slavery '' in this period. Meanwhile, campaigns to criminalize domestic violence, especially toward children, and related temperance movements led to renewed commitment to "law and order '' in many communities from the 1870s onward. When legislators ignored demands for more protection for women and children, feminist activists lobbied for harsher punishments for male criminal offenders -- including the whipping post, castration, and longer prison terms.
Another group of reformers continued to justify penitentiaries for negative reasons -- i.e., for fear that a sustained and successful attack on the prison system and its failings might yield a return to the "barbarism '' of colonial - era punishments. Nevertheless, a degree of optimism continued to dominate the thinking of reformers in the post-Civil War period, according to historian David Rothman.
By October 1870, notable Reconstruction Era prison reformers Enoch Wines, Franklin Sanborn, Theodore Dwight, and Zebulon Brockway -- among others -- convened the National Congress on Penitentiary and Reformatory Discipline in Cincinnati, Ohio. The resolutions that emerged from the conference, called the Declaration of Principles, became the chief planks of the penitentiary reform agenda in the United States for the next several decades. The essence of the National Congress ' agenda in the Declaration was a renewed commitment to the "moral regeneration '' of offenders (especially young ones) through a new model of penitentiary.
The National Congress ' Declaration of Principles characterized crime as "a sort of moral disease. '' The Declaration stated that the "great object (of)... the treatment of criminals... should be (their) moral regeneration... the reformation of criminals, not the infliction of vindictive suffering. '' The Declaration took inspiration from the "Irish mark system '' pioneered by penologist Sir Walter Crofton. The object of Crofton 's system was to teach prisoners how to lead an upright life through use of "good - time '' credits (for early release) and other behavioral incentives. The Declaration 's primary goals were: (1) to cultivate prisoners ' sense of self - respect; and (2) to place the prisoner 's destiny in his or her own hands. But the Declaration more broadly:
The National Congress and those who responded to its agenda also hoped to implement a more open - ended sentencing code. They advocated for the replacement of the peremptory (or mandatory) sentences of the day, set by a judge after trial, with sentences of indeterminate length. True "proof of reformation, '' the Congress Declaration provided, should replace the "mere lapse of time '' in winning an inmate 's release from confinement. These suggestions anticipated the near - comprehensive adoption of indeterminate sentencing during the Progressive Era.
In spite of its many "progressive '' suggestions for penal reform, the National Congress showed little sensitivity to the plight of freed blacks and immigrants in the penal system, in the view of author Scott Christianson. Christianson notes that the National Congress ' membership generally subscribed to the prevailing contemporary notion that blacks and foreigners were disproportionately represented in the prison system due to their inherent depravity and social inferiority.
The rise and decline of the Elmira Reformatory in New York during the latter part of the nineteenth century represents the most ambitious attempt in the Reconstruction Era to fulfill the goals set by the National Congress in the Declaration of Principles. Built in 1876, the Elmira institution was designed to hold first - time felons, between the ages of sixteen and thirty, who were serving an indeterminate term of imprisonment set by their sentencing judge. Elmira inmates had to earn their way out of the institution through good behavior, as assessed through an elaborate grading system. The only limits on inmates ' terms of imprisonment were whatever upper threshold the legislature set for their offense.
Elmira 's administration underscores the fundamental tension of contemporary penal reform, according to authors Scott Christianson and David Rothman. On the one hand, its purpose was to rehabilitate offenders; on the other, its reform principles were tempered by a belief in the heritability of criminal behavior. Elmira 's first warden, National Congress member Zebulon Brockway, wrote in 1884 that at least one - half of his charges were "incorrigible '' due to their genetics. Brockway further characterized modern criminals as "to a considerable extent the product of our civilization and... of emigration to our shore from the degenerated populations of crowded European marts. '' Brockway reserved the harshest disciplinary measures -- e.g., frequent whippings and solitary confinement -- for those he deemed "incorrigible '' (primarily the mentally and physically disabled).
Elmira was regarded by many contemporaries as a well - run, model institution in its early years. Nevertheless, by 1893 the reformatory was seriously overcrowded and Brockway 's ideas about genetic degeneracy, low - intelligence, and criminality came under fire as a result of his brutality toward the mentally and physically disabled. An 1894 executive investigation of Elmira 's disciplinary practices concluded that discipline in the institution was harsh, although it eventually cleared Brockway of charges that he practiced "cruel, brutal, excessive, degrading and unusual punishment of inmates. '' But the continuing stigma led Brockway to resign from his post at Elmira by 1900.
Historian David Rothman characterizes Brockway 's departure from Elmira as marking the institution 's failure as a reformed penitentiary, since its methods were hardly different from those of other Jacksonian Era institutions that had survived into the post-war years. But Rothman also concludes that the Elmira experience suggested to contemporary reformers only that management was to blame, not their proposed system of incarceration generally. The Progressive Era of the early twentieth century thus witnessed renewed efforts to implement the penal agenda espoused by the National Congress and its adherents in 1870 -- albeit with some noteworthy structural additions.
The Civil War brought overwhelming change to Southern society and its criminal justice system. As freed slaves joined the Southern population, they came under the primary control of local governments for the first time. At the same time, the market economy began to affect individuals and regions in the South that were previously untouched. Widespread poverty at the end of the nineteenth century unraveled the South 's prior race - based social fabric. In Reconstruction - era cities like Savannah, Georgia, intricate codes of racial etiquette began to unravel almost immediately after the war with the onset of emancipation. The local police forces that had been available in the antebellum South, depleted during the war, could not enforce the racial order as they had before. Nor was the white population, stricken by poverty and resentment, as united in its racial policing as it was during the antebellum period. By the end of Reconstruction, a new configuration of crime and punishment had emerged in the South: a hybrid, racialized form of incarceration at hard labor, with convicts leased to private businesses, that endured well into the twentieth century.
The economic turmoil of the post-war South reconstituted race relations and the nature of crime in the region, as whites attempted to reassert their supremacy. Earlier, extra-legal efforts toward reestablishing white supremacy, like those of the Ku Klux Klan, gradually gave way to more certain and less volatile forms of race control, according to historian Edward L. Ayers. Racial animosity and hatred grew as the races became ever more separate, Ayers argues, and Southern legal institutions turned much of their attention to preserving the racial status quo for whites.
Patterns of "mono - racial law enforcement, '' as Ayers refers to it, were established in Southern states almost immediately after the American Civil War. Cities that had never had police forces moved quickly to establish them, and whites became far less critical of urban police forces in post-war politics, whereas in the antebellum period they had engendered major political debate. Savannah, Georgia 's post-war police force was made up of Confederate veterans, who patrolled the city in gray uniforms, armed with rifles, revolvers, and sabers. They were led by an ex-Confederate General, Richard H. Anderson. Ayers concludes that white policemen protecting white citizens became the model for law enforcement efforts across the South after the American Civil War.
Depressed economic conditions impacted both white and black farmers in the post-war South, as cotton prices entered a worldwide decline and interest rates on personal debt rose with "astonishing '' speed after the close of hostilities. Property crime convictions in the Southern countryside, rare in the antebellum years, rose precipitously throughout the 1870s (though violent crime by white offenders continued to take up the majority of the rural courts ' business).
Former slaves who migrated to Southern cities, where they often received the lowest - paying jobs, were generally affected more acutely by economic downturns than their rural counterparts. Five years after the Civil War, 90 percent of the black population of Savannah, Georgia, owned no property. Increases in black property crime prosecutions in Savannah correlate to major economic downturns of the post-war period.
Whites made few attempts to disguise the injustice in their courts, according to historian Edward L. Ayers. Blacks were uniformly excluded from juries and denied any opportunity to participate in the criminal justice process aside from being defendants. Thefts by black offenders became a new focus of the Southern justice systems and began to supplant violent crimes by white offenders in court dockets. Whether they were from the city or the countryside, those accused of property crime stood the greatest chance of conviction in post-war southern courts. But black defendants were convicted in the highest numbers. During the last half of the nineteenth century, three out of every five white defendants accused of property crime in Southern courts were convicted, while four out of every five black defendants were. Conviction rates for whites, meanwhile, dropped substantially from antebellum levels throughout the last half of the nineteenth century.
This system of justice led, in the opinion of W.E.B. Du Bois, to a system in which neither blacks nor whites respected the criminal justice system -- whites because they were so rarely held accountable, and blacks because their own accountability felt so disproportionate. Ultimately, thousands of black Southerners served long terms on chain gangs for petty theft and misdemeanors in the 1860s and 1870s, while thousands more went into the convict lease system.
In criminal sentencing, blacks were disproportionately sentenced to incarceration -- whether to the chain gang, convict leasing operation, or penitentiary -- in relation to their white peers. Black incarceration peaked before and after radical Reconstruction, when Southern whites exercised virtually unchecked power and restored "efficiency '' to the criminal courts. For example, 384 of North Carolina 's 455 prisoners in 1874 were black, and in 1878 the proportion had increased slightly to 846 of 952. By 1871, 609 of Virginia 's 828 convicts -- including all but four of its sixty - seven female prisoners -- were black. But this phenomenon was not specific to the South: The proportion of black inmates in Northern prisons was virtually identical to that in Southern prisons throughout the second half of the nineteenth century.
Rural courts met so rarely in the post-war years that prisoners often sat in jails for months while awaiting trial, at the government 's expense. Chain gangs emerged in the post-war years as an initial solution to this economic deficit. Urban and rural counties moved the locus of criminal punishment from municipalities and towns to the county and began to change the economics of punishment from a heavy expense to a source of public "revenue '' -- at least in terms of infrastructure improvements. Even misdemeanors could be turned to economic advantage; defendants were often sentenced to only a short term on the chain gang, with an additional three to eight months tacked onto the sentence to cover "costs. '' As the Southern economy foundered in the wake of the peculiar institution 's destruction, and property crime rose, state governments increasingly explored the economic potential of convict labor throughout the Reconstruction period and into the twentieth century.
"The most far - reaching change in the history of crime and punishment in the nineteenth - century South, '' according to historian Edward L. Ayers, was "the state 's assumption of control over blacks from their ex-masters... '' The process by which this occurred was "halting and tenuous, '' but the transition began the moment a master told his slaves they were free. '' In this landscape, Ayers writes, the Freedmen 's Bureau vied with Southern whites -- through official government apparatuses and informal organizations like the Ku Klux Klan -- over opposing notions of justice in the post-war South.
Southern whites in the main tried to salvage as much of the antebellum order as possible in the wake of the American Civil War, waiting to see what changes might be forced upon them. The "Black Codes '' enacted almost immediately after the war -- Mississippi and South Carolina passed theirs as early as 1865 -- were an initial effort in this direction. Although they did not use racial terms, the Codes defined and punished a new crime, "vagrancy, '' broadly enough to guarantee that most newly free black Americans would remain in a de facto condition of servitude. The Codes vested considerable discretion in local judges and juries to carry out this mission: County courts could choose lengths and types of punishment previously unavailable. The available punishments for vagrancy, arson, rape, and burglary in particular -- thought by whites to be peculiarly black crimes -- widened considerably in the post-war years.
Soon after hostilities officially ceased between the United States and the Confederate States of America, black "vagrants '' in Nashville, Tennessee, and New Orleans, Louisiana, were being fined and sent to the city workhouse. In San Antonio, Texas, and Montgomery, Alabama, free blacks were arrested, imprisoned, and put to work on the streets to pay for their own upkeep. A Northern journalist who passed through Selma, Alabama, immediately after the war, was told that no white man had ever been sentenced to the chain gang, but that blacks were now being condemned to it for such "crimes '' as "using abusive language towards a white man '' or selling farm produce within the town limits.
At the same time that Reconstruction Era Southern governments enacted the "Black Codes '', they also began to change the nature of the state 's penal machinery to make it into an economic development tool. Social historian Marie Gottschalk characterizes the use of penal labor by Southern state governments during the post-war years as an "important bridge between an agricultural economy based on slavery and the industrialization and agricultural modernization of the New South. ''
Many prisons in the South were in a state of disrepair by the end of the American Civil War, and state budgets across the region were exhausted. Mississippi 's penitentiary, for instance, was devastated during the war, and its funds depleted. In 1867 the state 's military government began leasing convicts to rebuild wrecked railroad and levees within the state. By 1872, it began leasing convicts to Nathan Bedford Forrest, a former Confederate general and slave trader, as well as the first Imperial Wizard of the then emerging Ku Klux Klan.
Texas also experienced a major postwar depression, in the midst of which its legislators enacted tough new laws calling for forced inmate labor within prison walls and at other works of public utility outside of the state 's detention facilities. Soon Texas began leasing convicts to railroads, irrigation and navigation projects, and lead, copper, and iron mines.
Virginia 's prison at Richmond collapsed in the wake of the City 's 1865 surrender, but occupying Union forces rounded up as many convicts as they could in order to return them to work. Alabama began leasing out its Wetumpka Prison to private businessmen soon after the Civil War.
During the Reconstruction Era, the North Carolina legislature authorized state judges to sentence offenders to work on chain gangs on county roads, railroads, or other internal improvements for a maximum term of one year -- though escapees who were recaptured would have to serve double their original sentence. North Carolina had failed to erect a penitentiary in the antebellum period, and its legislators planned to build an Auburn - style penitentiary to replace the penal labor system. But graft and shady dealings soon rendered a new prison impracticable, and North Carolina convicts continued to be leased to railroad companies.
Freed blacks became the primary workers in the South 's emerging penal labor system. Those accused of property crime -- white or black -- stood the greatest chance of conviction in post-war Southern courts. But black property offenders were convicted more often than white ones -- at a rate of eight convictions for every ten black defendants, compared to six of every ten white defendants. Overall, conviction rates for whites dropped substantially from antebellum levels during the Reconstruction Era and continued to decline throughout the last half of the nineteenth century.
The Freedmen 's Bureau, charged with implementing congressional reconstruction throughout the former Confederate states, was the primary political body that opposed the increasing racial overtones of Southern criminal justice during the Reconstruction Era. The Bureau 's mission reflected a strong faith in impersonal legalism, according to historian Edward L. Ayers, and its agents were to act as guarantors of blacks ' legal equality. The Bureau maintained courts in the South from 1865 to 1868 to adjudicate minor civil and criminal cases involving freed slaves. Ultimately, Ayers concludes, the Bureau largely failed to protect freed slaves from crime and violence by whites, or from the injustices of the Southern legal system, although the Bureau did provide much needed services to freed slaves in the form of food, clothing, school support, and assistance in contracts. The Greensboro, North Carolina Herald more bluntly stated that the Freedmen 's Bureau was no match for the "Organic Law of the Land '' in the South, white supremacy.
In the rural South, the Freedmen 's Bureau was only as strong as its isolated agents, who were often unable to assert their will over that of the whites in their jurisdiction. Manpower issues and local white resentment led to early compromises under which southern civilians were allowed to serve as magistrates on the Freedmen 's Courts, although the move was opposed by many former slaves.
In cities like Savannah, Georgia, the Freedmen 's Courts appeared even more disposed to enforcing the wishes of local whites, sentencing former slaves (and veterans of the Union Army) to chain gangs, corporal punishments, and public shaming. The Savannah Freedmen 's Courts even approved arrests for such "offenses '' as "shouting at a religious colored meeting, '' or speaking disrespectfully to a white man.
The Bureau 's influence on post-war patterns of crime and punishment was temporary and limited. The United States Congress believed that only its unprecedented federal intrusion into state affairs through the Bureau could bring true republicanism to the South, according to Edward L. Ayers, but Southerners instinctively resented this as a grave violation of their own republican ideals. Southerners had always tended to circumscribe the sphere of written, institutionalized law, Ayers argues, and once they began to associate it with outside oppression from the federal government, they saw little reason to respect it at all. From this resentment, vigilante groups like the Ku Klux Klan arose in opposition to the Bureau and its mission -- though, in the words of Ayers, the Klan was a "relatively brief episode in a long history of post-war group violence in the South, '' where extralegal retribution was and continued to be a tradition.
For their part, former slaves in the Reconstruction - era South made efforts of their own to counteract white supremacist violence and injustice. In March 1866, Abraham Winfield and ten other black men petitioned the head of the Georgia Freedmen 's Bureau for relief from the oppression of the Bureau 's Court in Savannah -- especially for Civil War veterans. In rural areas like Greene County, Georgia, blacks met vigilante violence from whites with violence of their own. But with the withdrawal of the Freedmen 's Bureau in 1868 and continuing political violence from whites, blacks ultimately lost this struggle, according to historian Edward L. Ayers. Southern courts were largely unable -- even when they were willing -- to bring whites to justice for violence against black Southerners. By the early to mid-1870s, white political supremacy had been established anew across most of the South.
In Southern cities, a different form of violence emerged in the post-war years. Race riots erupted in Southern cities almost immediately after the war and continued for years afterward. Edward L. Ayers concludes that antebellum legal restraints on blacks and widespread poverty were the primary cause of many of these clashes. Whites resented labor competition from blacks in the depressed post-war Southern economy, and police forces -- many composed of unreconstructed Southerners -- often resorted to violence. The ultimate goal for both blacks and whites was to obtain political power in the vacuum created by war and emancipation; again, blacks ultimately lost this struggle during the Reconstruction period.
Convict leasing, practiced in the North from the earliest days of the penitentiary movement, was taken up by Southern states in earnest following the American Civil War. The use of convict labor remained popular nationwide throughout the post-war period. An 1885 national survey reported that 138 institutions employed over 53,000 inmates in industries, who produced goods valued at $28.8 million. Although this was a relatively small sum in comparison to the estimated $5.4 billion in goods produced by free labor in 1880, prison labor was big business for those involved in particular industries.
But convict leasing in the post-war South came to play a more central role in crime and punishment than in the North, and it continued to do so with the approbation of the South 's leading men until well into the twentieth century. For over a half - century following the Civil War, convict camps dotted the Southern landscape, and thousands of men and women -- most of them former slaves -- passed years of their lives within the system. Men with capital, from the North and the South, bought years of these convicts ' lives and put them to work in large mining and railroad operations, as well as smaller everyday businesses. On average, the death rate in Southern leasing arrangements exceeded that in Northern prisons three-fold.
The convict lease, as practiced in the South, was not just a bald attempt by state governments to resurrect slavery, according to historians Edward L. Ayers and Marie Gottschalk. It reflected continuities in race relations, both argue, but it also reflected fundamental changes in the post-war Southern economy. For the first time, millions of freed slaves came under the centralized control of state penal apparatuses; at the same time, nascent industrial capitalism in the South faced a shortage of both capital and labor. Former slaves were the easiest Southern demographic to impress into service, and their labor helped adapt Southern industries to these changes.
Ultimately, however, the longest legacy of the system may be as a symbol of the white South 's injustice and inhumanity. In 1912, Dr. E. Stagg of the National Commission on Prison Labor described the status of the Southern convict as "the last surviving vestige of the slave system. '' A Northern writer in the 1920s referred to the Southern chain gang as the South 's new "peculiar institution ''.
Southern penitentiaries from the antebellum period by and large continued to fall into disrepair in the post-war years as they became mere outposts of the much larger convict labor system. One by one, Southern penitentiary systems had disintegrated during the American Civil War. Mississippi sent its prisoners to Alabama for safekeeping in the midst of a Northern invasion. Louisiana concentrated its prisoners into a single urban workhouse. Arkansas dispersed its convicts in 1863 when the Union Army breached its borders. Occupied Tennessee hired its prisoners out to the United States government, while Georgia freed its inmates as General William Tecumseh Sherman headed for Atlanta with his armies in 1864. With the fall of Richmond, most of Virginia 's prisoners escaped.
The convict lease system emerged haltingly from this chaos, Edward L. Ayers and Marie Gottschalk conclude, just as the penitentiary itself had in years past. The penitentiary had become a Southern institution at this point, Ayers points out, and its complete abolition would have required a major renovation of state criminal codes. Some states, like Georgia, tried to revive their penitentiary systems in the post-war years, but had to first deal with crumbling state infrastructure and a growing prison population. The three states that had not established prisons in the antebellum period -- i.e., the Carolinas and Florida -- hastened to establish them during Reconstruction.
But many Southern states -- including North Carolina, Mississippi, Virginia, and Georgia -- soon turned to the lease system as a temporary expedient, as rising costs and convict populations outstripped their meager resources. According to Edward L. Ayers, "(t) he South... more or less stumbled into the lease, seeking a way to avoid large expenditures while hoping a truly satisfactory plan would emerge. '' Social historian Marie Gottschalk characterizes these leasing arrangements as an "important bridge between an agricultural economy based on slavery and the industrialization and agricultural modernization of the New South. '' This may help to explain why support for the convict lease was altogether widespread in Southern society, Ayers concludes. No single group -- black or white, Republican or Democrat -- consistently opposed the lease once it gained power.
The labor that convict lessees performed varied as the Southern economy evolved after the American Civil War. Ex-plantation owners were early beneficiaries, but emerging industrial capitalism ventures -- e.g., phosphate mines and turpentine plants in Florida, railroads in Mississippi (and across the South) -- soon came to demand convict labor. The South experienced an acute labor shortage in the post-war years, Edward L. Ayers explains, and no pool of displaced agricultural laborers was available to feed the needs of factory owners, as they had been in England and on the Continent.
The lease system was useful for capitalists who wanted to make money quickly: Labor costs were fixed and low, and labor uncertainty was reduced to the vanishing point. Convicts could be and were driven to a point free laborers would not tolerate (and could not drink or misbehave). Although labor unrest and economic depression continued to rile the North and its factories, the lease system insulated its beneficiaries in the South from these external costs.
In many cases, Edward L. Ayers writes, the businessmen who utilized the convict - lease system were the same politicians who administered it. The system became, Ayers argues, a sort of "mutual aid society '' for the new breed of capitalists and politicians who controlled the white Democratic regimes of the New South. Thus, Ayers concludes, officials often had something to hide, and contemporary reports on leasing operations often skirted or ignored the appalling conditions and death rates that attended these projects.
In Alabama, 40 percent of convict lessees died during their term of labor in 1870 -- death rates for 1868 and 1869 were 18 and 17 percent, respectively. Lessees on Mississippi 's convict labor projects died at nine times the rate of inmates in Northern prisons throughout the 1880s. One man who had served time in the Mississippi system claimed that reported death rates would have been far higher had the state not pardoned many broken convicts before they died, so that they could do so at home instead.
Compared to contemporary non-leasing prison systems nationwide, which recouped only 32 percent of expenses on average, convict leasing systems earned average profits of 267 percent. Even in comparison to Northern factories, Edward L. Ayers writes, the lease system 's profitability was real and sustained in the post-war years and remained so into the twentieth century.
Exposes on the lease system began appearing with increasing frequency in newspapers, state documents, Northern publications, and the publications of national prison associations during the post-war period -- just as they did for Northern prisons like those in New York. Mass grave sites containing the remains of convict lessees have been discovered in Southern states like Alabama, where the United States Steel Corporation purchased convict labor for its mining operations for several years at the end of the nineteenth and beginning of the twentieth centuries.
The focus of Southern justice on racial control in the post-war years had a profound effect on the demographics of the lease systems ' populations. Before the Civil War, virtually all Southern prisoners were white, but under post-war leasing arrangements almost all (approximately 90 percent) were black. In the antebellum period, white immigrants made up a disproportionate share of the South 's prison population before all but disappearing from prison records in the post-war period. The reasons for this are likely two-fold, Edward L. Ayers suggests. First, white immigrants generally avoided the post-war South due to its generally poor economic climate and the major increase in labor competition posed by emancipated slaves. Second, the preoccupation of post-war Southern police forces with crime committed by blacks decreased their efforts among the white population, including immigrants.
The source of convicts also changed in the post-war South. Before the American Civil War, rural counties sent few defendants to the state penitentiaries, but after the war rural courts became steady suppliers to their states ' leasing systems (though cities remained the largest supplier of convict lessees during this period). Savannah, Georgia, for example, sent convicts to leasing operations at approximately three times the number that its population would suggest, a pattern amplified by the reality that 76 percent of all blacks convicted in its courts received a prison sentence.
Most convicts were in their twenties or younger. The number of women in Southern prison systems increased in the post-war years to about 7 percent, a ratio not incommensurate with other contemporary prisons in the United States, but a major increase for the South, which had previously boasted of the moral rectitude of its (white) female population. Virtually all such women were black.
The officials who ran the South 's leasing operations tried to maintain strict racial separation in the convict camps, refusing to recognize social equality between the races even among felons. As one Southerner reported to the National Prison Congress in 1886: Mixing the races in prison "is akin to the torture anciently practised of tieing (sic) a murderer to the dead body of his victim limb to limb, from head to foot, until the decaying corpse brought death to the living. '' Whites who did end up in Southern prisons, according to Edward L. Ayers, were considered the lowest of their race. At least some legislators referred to white prisoners with the same racial epithets reserved for blacks at the time.
The Southern lease system was something less than a "total system. '' The vast majority of convict - lease camps were dispersed, with little in the way of walls or other security measures -- although some Southern chain gangs were carted around in open - air cages to their work sites and kept in them at night. Order in the camps was generally tenuous at best, Edward L. Ayers argues. Escapes were frequent and the brutal punishments that characterized the camps -- chains, bloodhounds, guns, and corporal punishments -- were dealt with a palpable sense of desperation. (At least some observers, however, questioned whether the high number of reported escapees was not a ploy to cover up foul play.)
Reflecting changing criminal dockets in the Southern courts, about half of prisoners in the lease system served sentences for property crime. Rehabilitation played no real role in the system. Whatever onus for reform there was fell on the shoulders of chaplains, Edward L. Ayers relates. As Warden J.H. Bankhead of the Alabama penitentiary observed in the 1870s: "(O) ur system is a better training school for criminals than any of the dens of iniquity that exist in our large cities... You may as well expect to instill decent habits into a hog as to reform a criminal whose habits and surroundings are as filthy as a pig 's. ''
Some proponents of the lease claimed that the system would teach blacks to work, but many contemporary observers came to recognize -- as historian C. Vann Woodward later would -- that the system dealt a great blow to whatever moral authority white society had retained in its paternalistic approach to the "race problem. '' Time in the penitentiary came to carry little stigma in the black community, as preachers and other community leaders spread word of its cruelty.
Whites presented far from a united front in defense of the lease system during the Reconstruction Era. Reformers and government insiders began condemning the worst abuses of the system from early on. Newspapers began taking up the call by the 1880s, although they had defended it during the more politically charged years that immediately followed the Civil War. But the system also had its defenders -- at times even the reformers themselves, who chafed at Northern criticism even where they agreed with its substance. The "scientific '' racial attitudes of the late nineteenth century also helped some supporters of the lease to assuage their misgivings. One commentator wrote that blacks died in such numbers on the convict lease farms because of the weakness of their inferior, "uneducated '' blood.
Economic, rather than moral, concerns underlay the more successful attacks on leasing. Labor launched effective opposition movements to the lease in the post-war period. Birmingham, Alabama, and its Anti-Convict League, formed in 1885, were the center of this movement, according to Ayers. Coal miner revolts against the lease occurred twenty - two recorded times in the South between 1881 and 1900. By 1895, Tennessee caved in to the demands of its miners and abolished its lease system. These revolts notably crossed racial lines. In Alabama, for instance, white and black free miners marched side - by - side to protest the use of convict labor in local mining operations.
In these confrontations, convict labor surely took on a somewhat exaggerated importance to free workers, argues Edward L. Ayers. Only 27,000 convicts were engaged in some form of labor arrangement in the 1890s South. But the emerging nature of Southern industry and labor groups -- which tended to be smaller and more concentrated -- made for a situation in which a small number of convicts could affect entire industries.
Just as the convict lease emerged gradually in the post-war South, it also made a gradual exit.
Although Virginia, Texas, Tennessee, Kentucky, and Missouri utilized Northern - style manufacturing prisons in addition to their farms, as late as 1890 the majority of Southern convicts still passed their sentences in convict camps run by absentee businessmen. But the 1890s also marked the beginning of a gradual shift toward compromise over the lease system, in the form of state - run prison farms. States began to cull the women, children, and the sick from the old privately run camps during this period, to remove them from the "contamination '' of bad criminals and provide a healthier setting and labor regime. Mississippi enacted a new state constitution in 1890 that called for the end of the lease by 1894.
Despite these changes, and continuing attacks from labor movements, Populists, and Greenbackers, only two Southern states besides Mississippi ended the system prior to the twentieth century. Most Southern states did bring their systems under tighter control and make increasing use of state penal farms by the twentieth century, however, resulting in improved conditions and a decline in death rates. Georgia abolished its system in 1908, after an expose by Charles Edward Russell in Everybody 's Magazine revealed "hideous '' conditions on lease projects. A former warden described how men in the Georgia camps were hung by their thumbs as punishment, to the point that their thumbs became so stretched and deformed, to the length of index fingers, that they resembled the "paws of certain apes. '' Florida 's prison camps -- where even the sick were forced to work under threat of a beating or shooting -- remained in use until 1923.
Replacements for the lease system, such as chain gangs and state prison farms, were not so different from their predecessors. An example of the lingering influence of the lease system can be found in the Arkansas prison farms. By the mid-twentieth century, Arkansas ' male penal system still consisted of two large prison farms, which remained almost totally cut off from the outside world and continued to operate much as they had during the Reconstruction Era. Conditions in these camps were bad enough that, as late as the 1960s, an Oregon judge refused to return escapees from Arkansas, who had been apprehended in his jurisdiction. The judge declared that returning the prisoners to Arkansas would make his state complicit in what he described as "institutes of terror, horror, and despicable evil, '' which he compared to Nazi concentration camps.
In 1966, around the time of the Oregon judge 's ruling, the ratio of staff to inmates at the Arkansas penal farms was one staff member for every sixty - five inmates. By contrast, the national average at the time was around one prison staff member for every seven inmates. The state was not the only entity profiting from the farm; private operators controlled certain of its industries and maintained high profit margins. The physician who ran the farm 's for - profit blood bank, for instance, earned between $130,000 and $150,000 per year off of inmate donations that he sold to hospitals.
Faced with this acute shortage of manpower, authorities at the penal farms relied upon armed inmates, known as "trusties '' or "riders, '' to guard the convicts while they worked. Under the trusties ' control, prisoners worked ten to fourteen hours per day (depending on the time of year), six days per week. Arkansas was, at the time, the only state where prison officials could still whip convicts.
Violent deaths were commonplace on the Arkansas prison farms. An investigation begun by incumbent Governor Orval Faubus during a heated 1966 gubernatorial race revealed ongoing abuses -- e.g., use of wire pliers on inmates ' genitals, stabbings, use of nut crackers to break inmates ' knuckles, trampling of inmates with horses, and charging inmates for hospital time after beatings. When the chairman of the Arkansas legislature 's prison committee was asked about the allegations, however, he replied, "Arkansas has the best prison system in the United States. '' Only later, after a federal court intervened, did reforms begin at the Arkansas prison camps.
History of criminal justice in Colonial America
|
how does a class implement an interface in java | Interface (Java) - wikipedia
An interface in the Java programming language is an abstract type that is used to specify a behavior that classes must implement. They are similar to protocols. Interfaces are declared using the interface keyword, and may only contain method signatures and constant declarations (variable declarations that are declared to be both static and final). In all versions of Java below Java 8, the methods of an interface could not contain an implementation (method bodies). Starting with Java 8, default and static methods may have an implementation in the interface definition.
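A brief sketch of the Java 8 point above; the Greeter name and the method bodies are illustrative assumptions rather than anything taken from this article:

    interface Greeter {
        String name();                  // ordinary abstract method: no body

        default String greet() {        // default method: has a body (Java 8 and later)
            return "Hello, " + name();
        }

        static Greeter anonymous() {    // static method: also has a body (Java 8 and later)
            return () -> "stranger";    // a lambda supplies the single abstract method, name()
        }
    }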
Interfaces can not be instantiated, but rather are implemented. A class that implements an interface must implement all of the non-default methods described in the interface, or be an abstract class. Object references in Java may be specified to be of an interface type; in each case, they must either be null, or be bound to an object that implements the interface.
One benefit of using interfaces is that they simulate multiple inheritance. All classes in Java must have exactly one base class, the only exception being java.lang.Object (the root class of the Java type system); multiple inheritance of classes is not allowed. However, an interface may inherit multiple interfaces and a class may implement multiple interfaces.
Interfaces are used to encode similarities which the classes of various types share, but do not necessarily constitute a class relationship. For instance, a human and a parrot can both whistle; however, it would not make sense to represent Humans and Parrots as subclasses of a Whistler class. Rather they would most likely be subclasses of an Animal class (likely with intermediate classes), but both would implement the Whistler interface.
Another use of interfaces is being able to use an object without knowing its type of class, but rather only that it implements a certain interface. For instance, if one were annoyed by a whistling noise, one may not know whether it is a human or a parrot, because all that could be determined is that a whistler is whistling. The call whistler.whistle() will call the implemented method whistle of object whistler no matter what class it has, provided it implements Whistler. In a more practical example, a sorting algorithm may expect an object of type Comparable. Thus, without knowing the specific type, it knows that objects of that type can somehow be sorted.
For example, a Whistler interface and a class that implements it might be sketched as follows:
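The listing that originally appeared at this point is not preserved in this extract. The sketch below is a reconstruction based on the Whistler discussion above; the Parrot body, the Demo class, and the printed text are assumptions.

    interface Whistler {
        void whistle();
    }

    class Parrot implements Whistler {
        // the implementing class must supply a public body for the interface method
        public void whistle() {
            System.out.println("tweet");
        }
    }

    class Demo {
        public static void main(String[] args) {
            Whistler whistler = new Parrot();   // a reference of interface type bound to an implementing object
            whistler.whistle();                 // invokes Parrot's implementation without knowing the concrete class
        }
    }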
Interfaces are defined with the following syntax (compare to Java 's class definition):
Example: public interface Interface1 extends Interface2;
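The declaration on the preceding line, as rendered here, omits the brace - delimited body that Java requires (a body is mandatory even if it is empty). A minimal, compilable version of that same declaration might look like the following; Interface2 is given an empty body purely so the sketch is self - contained:

    interface Interface2 {
    }

    public interface Interface1 extends Interface2 {
        // constant and method declarations go here; the body may be empty
    }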
The body of the interface contains abstract methods, but since all methods in an interface are, by definition, abstract, the abstract keyword is not required. Since the interface specifies a set of exposed behaviors, all methods are implicitly public.
Thus, a simple interface may be
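The simple interface that originally followed is not preserved here. A sketch consistent with the Predator, Prey, and kill (Prey p) names used later in this article might be (the empty Prey interface is included only so the example is self - contained):

    interface Prey {
    }

    interface Predator {
        // implicitly public and abstract; no body is given in the interface
        void kill(Prey p);
    }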
The member type declarations in an interface are implicitly static and public, but otherwise they can be any type of class or interface.
The syntax for implementing an interface uses this formula:
Classes may implement an interface. For example:
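The class listing that followed in the source is not preserved. A minimal sketch, assuming a hypothetical Lion class and the Predator and Prey interfaces sketched earlier, could be:

    class Lion implements Predator {
        // every non-default method of the interface must be implemented with a public body
        public void kill(Prey p) {
            // hunting logic would go here
        }
    }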
If a class implements an interface and does not implement all its methods, it must be marked as abstract. If a class is abstract, one of its subclasses is expected to implement its unimplemented methods. If any of the abstract class ' subclasses does not implement all the interface methods, that subclass must itself be marked as abstract as well.
Classes can implement multiple interfaces:
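A sketch of a class implementing more than one interface; the Frog name is a hypothetical choice, and Predator and Prey are assumed to be the interfaces sketched earlier:

    class Frog implements Predator, Prey {
        // one class, two interfaces: Frog can be used wherever a Predator or a Prey is expected
        public void kill(Prey p) {
            // Prey (as sketched above) declares no methods, so only Predator's method needs a body
        }
    }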
Interfaces can share common class methods:
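A sketch of the shared - method point: when two interfaces declare the same method signature, a single implementation in the class satisfies both. The Swimmer, Runner, and Duck names here are illustrative assumptions:

    interface Swimmer {
        void move();
    }

    interface Runner {
        void move();
    }

    class Duck implements Swimmer, Runner {
        // one body satisfies both interfaces because the signatures (and return types) match
        public void move() {
            System.out.println("moving");
        }
    }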
However a given class can not implement the same or a similar interface multiple times:
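A sketch of declarations the compiler rejects, continuing the hypothetical names above. Repeating an interface in the implements clause is a compile - time error, as is implementing the same generic interface with two different type arguments:

    // class Duck implements Swimmer, Swimmer { ... }
    //     error: repeated interface

    // class Box implements Comparable<Box>, Comparable<String> { ... }
    //     error: Comparable cannot be inherited with different type arguments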
Interfaces are commonly used in the Java language for callbacks, as Java does not allow multiple inheritance of classes, nor does it allow the passing of methods (procedures) as arguments. Therefore, in order to pass a method as a parameter to a target method, current practice is to define and pass a reference to an interface as a means of supplying the signature and address of the parameter method to the target method rather than defining multiple variants of the target method to accommodate each possible calling class.
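A sketch of the callback pattern described above, under assumed names: the target method accepts a parameter of interface type and invokes it later, so the caller supplies behaviour without passing a method directly.

    interface Callback {
        void onDone(String result);
    }

    class Worker {
        // the target method receives the callback behaviour as an interface reference
        static void runTask(Callback cb) {
            // ... do some work, then notify the caller ...
            cb.onDone("finished");
        }

        public static void main(String[] args) {
            // an anonymous class (or, since Java 8, a lambda) supplies the implementation
            Worker.runTask(new Callback() {
                public void onDone(String result) {
                    System.out.println("Task " + result);
                }
            });
        }
    }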
Interfaces can extend several other interfaces, using the same extends formula described earlier. For example, a declaration such as the VenomousPredator interface in the sketch below is legal and defines a subinterface. Note how it allows multiple inheritance, unlike classes. Note also that Predator and Venomous may possibly define or inherit methods with the same signature, say kill (Prey p). When a class implements VenomousPredator it will implement both methods simultaneously.
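A sketch of that subinterface, assuming the Predator and Prey declarations shown earlier; the Venomous member list is an assumption:

    interface Venomous {
        void kill(Prey p);   // deliberately the same signature as Predator's method
    }

    interface VenomousPredator extends Predator, Venomous {
        // inherits from both parents; a class implementing this interface
        // satisfies both inherited kill(Prey p) declarations with a single method body
    }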
Some common Java interfaces are:
|
in which episode of grey's anatomy does derek die | Death and All His Friends (Grey 's Anatomy) - wikipedia
"Death and All His Friends '' is the season finale of the sixth season of the American television medical drama Grey 's Anatomy, and the show 's 126th episode overall. It was written by Shonda Rhimes and directed by Rob Corn. The episode was originally broadcast on the American Broadcasting Company (ABC) in the United States on May 20, 2010. The episode was the second part of the two - hour season six finale, the first being Sanctuary, and took place at the fictional Seattle Grace Hospital. The original episode broadcast in the United States had an audience of 16.13 million viewers and opened up to universal acclaim. The episode centers a shooting spree at the hospital by a former patient 's husband Gary Clark (Michael O'Neill). The episode marked the last appearances for Nora Zehetner and Robert Baker as Dr. Reed Adamson and Dr. Charles Percy respectively as both the characters were killed in the shooting.
In the episode Cristina Yang (Sandra Oh) and Jackson Avery (Jesse Williams) try to save the life of Chief Derek Shepherd (Patrick Dempsey) who was shot by Gary Clark in front of Meredith Grey (Ellen Pompeo) as she waits outside the OR with April Kepner (Sarah Drew). Miranda Bailey (Chandra Wilson) tries to save Charles along with her patient Mary Portman (Mandy Moore), while Richard Webber (James Pickens, Jr.) tries to get into the hospital. Callie Torres (Sara Ramirez) and Arizona Robbins (Jessica Capshaw) are stuck on a floor with all the younger patients of the hospital, Mark Sloan (Eric Dane) and Lexie Grey (Chyler Leigh) try to save the life of Alex Karev (Justin Chambers) who was shot as well and Owen Hunt (Kevin McKidd) and Teddy Altman (Kim Raver) try to get their patient to safety.
The hospital is hit with an unprecedented crisis: a shooter is in the hospital, and there is a lockdown. Meanwhile, Dr. Meredith Grey (Ellen Pompeo) discovers that she is pregnant, and Dr. Owen Hunt (Kevin McKidd) must choose between Teddy and Cristina (Sandra Oh). The shooter kills Reed Adamson and wounds Dr. Alex Karev (Justin Chambers). Lexie (Chyler Leigh) and Mark (Eric Dane) find the latter and try to save him. Soon, it is revealed that the shooter, Gary Clark (whose wife was a patient at the hospital and later died), is looking for Dr. Derek Shepherd (Patrick Dempsey). At the end, he shoots Derek, which is witnessed by Meredith, Cristina, and April. With the crisis unfolding, each character is put through extreme trials and tribulations. Cristina is put under pressure to save Derek, who is seriously injured.
Meanwhile, Dr. Miranda Bailey (Chandra Wilson) and her patient Mary drag resident Charles Percy, who has been shot, through the hallway, but when they get to the elevators, they find that the elevators have been shut off. Bailey freaks out for a moment, then gathers herself and sits with Charles. When he asks if he is dying, she says yes. She tells him she and Mary are going to stay with him and that he is n't alone, and Charles dies soon thereafter. The shooter walks into the OR where Cristina and Jackson are trying to save Derek. Hunt shows up and Meredith tells him what is going on. He looks through the window and says she is doing fine but that he will go in to see what he can do to help. When Hunt goes inside, the audience is shown that Mr. Clark is in the room with a gun to Cristina 's head, shouting at her to stop fixing Derek. Cristina and Avery refuse to stop working.
Mr. Clark tells Hunt that he 'd shoot him first and then Cristina but he really only wanted to kill Derek. He believes this is retribution for his wife 's death, which he believes is on Derek 's hands. Meredith then comes into the room and tells Mr. Clark to shoot her. She explains that she is Lexie 's sister, she is the closest thing Webber has to a daughter, and she is Derek 's wife. He turns the gun toward Meredith, but Cristina says Meredith is pregnant. Hunt makes a move toward Mr. Clark and is shot, knocking him unconscious. Cristina and Avery then raise their hands and Avery tells Mr. Clark that Derek will die and he can watch it happen on the monitor. Derek flatlines and Meredith cries almost hysterically. Mr. Clark walks out.
Clark later comes across Lexie, who is bringing supplies to help Alex, and when he attempts to shoot her, the recently arrived SWAT team wounds Clark. This gives time for Lexie to escape. Soon after, Webber walks through the halls of the hospital and finds several bodies. In a room, he finds Mr. Clark, who tells him he 'd bought the gun at a superstore and bought extra ammo because it was on sale. Mr. Clark tells Webber he has one bullet left. He reveals that he planned on shooting Webber, then himself. Eventually, after Webber convinces Clark that his wife would n't want the shooting and would rather be reunited with him, Clark commits suicide as the SWAT team is directly out the door.
Meanwhile, in the operating room, Derek 's heartbeat returns to normal, and everyone is relieved. Outside the hospital, Arizona and Callie reinstate their relationship when Callie says she does n't want to have a baby if it means she ca n't be with Arizona. Arizona agrees to have a baby because she says that she does n't want to stop Callie from being an excellent mother. Mark and Lexie show up at the other hospital. Alex is out of surgery. Lexie goes to Alex. Mark watches Lexie take Alex 's hand. Meredith takes another look at her positive pregnancy test, and throws it out.
The episode was written by showrunner Shonda Rhimes and directed by Rob Corn. In an interview, Rhimes said that writing the two - part season six finale had been a struggle for her. She elaborated on this:
TV Guide reported that Mandy Moore will check into Grey 's Anatomy 's two - hour season finale, adding, "She 'll play a patient named Mary as part of an explosive -- and, of course, top secret -- cliffhanger. Bailey -- who 's having one hell of a season as a newly single gal -- will be her doctor, but beyond that, details are being kept under heavily guarded wraps. '' The news was later confirmed by Entertainment Weekly which further reported, "Bailey -- who 's having one hell of a season as a newly single gal -- will be her doctor, but beyond that, details are being kept under heavily guarded wraps. She comes as part of a grand tradition of recognizable names as patients, usually quick, juicy roles like Sara Gilbert 's euthanized patient two weeks ago and Demi Lovato 's upcoming stretch as a schizophrenic. Moore did that other hospital show, Scrubs ''
It was also reported as veteran actor Michael O'Neill would reprise his role as the grief - stricken widower Gary Clark. O'Neill later in an interview said, "That role changed my career. It was probably the hardest job I 've ever done, hardest work I 've ever done. I had a tremendous resistance to it, and I almost turned the role down. I was so afraid of that being sensationalized, and I did n't want that in the world. I had n't worked with Shonda Rhimes, and so I did n't know. I certainly knew she was a brilliant writer, but I did n't know what the intent was. We had a conversation about it. I said, ' You know what? Can I think about this and call you back tomorrow? ' I called her back, and I said, ' Shonda, I just have to say it frightens me. It really frightens me. ' She said, ' Michael, it frightens me too. ' I thought, OK, we 're at least starting at the same place. Her writing was so good... We saw the wound. We got to see the wound that drove him mad, and I think that was terribly important. It was important to show the fracture that brought this man to that desperate action. ''
Death And All His Friends was originally broadcast on the American Broadcasting Company (ABC) in the United States on May 20, 2010. The episode was the second half of the two - hour season six finale, the other half being Sanctuary. It had an audience in the United States of 16.13 million viewers (a 5.9 / 17 Nielsen rating) in its 9: 00 Eastern time - slot.
The episode opened up to universal acclaim with the television critics lauding the writing and performance of the entire cast as well. The performances of guest stars Mandy Moore and Michael O'Neill along with Ellen Pompeo, Sandra Oh, Chandra Wilson, Kevin McKidd, James Pickens, Jr. and Sarah Drew garnered most praise. Marsi from TVFanatic highly praised the episode and gave it five stars, and expressed that it may have been the best episode of the series, adding, "The writing and acting were absolutely stellar, and may lead to many Emmy nominations, but even more impressively, despite a killing spree, it remained distinctly Grey 's. Some of the back - and - forths between the characters were truly memorable, and some of the developments so heartbreaking that we do n't even know where to begin now. Seriously, the Season 6 finale left us laying awake afterward thinking about everything, a feeling we have n't had from Grey 's in years and rarely achieved by any program. ''
John Kubicek of BuddyTV also noted that the finale was the best episode, adding, "(It was) two of the best hours of television all year. It was certainly the best Grey 's Anatomy has ever been, which is saying a lot since I 'd written the show off for the past few years. No show does a big traumatic event like Grey 's, and the shooter gave the show license for heightened drama with five major characters being shot over the two hours. It was emotional, expertly paced and had me in tears for most of the finale. '' PopSugar called it "most intense episodes of Grey 's I have ever seen '' adding, "What a whopper of an episode! Can we talk about how gripping the scene is where Bailey and Percy are hiding from Mr. Clark. Then he turns around and gets Karev right in the chest. Seriously, I have n't been so shocked by a one - two shooting spree since Michael shot Ana Lucia and Libby in rapid succession on Lost. Another review from BuddyTV called the finale "Game changing! '' stating, "The finale that changes everything! You do n't want to miss it! To her credit GA creator and writer - of - this - episode Shonda Rhimes pretty much delivered on her promise. I found myself with my heart in my mouth for the entire evening before finally being able to exhale as the credits rolled. '' The review lauded Ellen Pompeo and Sandra Oh adding, "The Twisted Sisters really were the showcase of the entire finale. Meredith, side - lined for much of the season, took center stage '' even calling Oh 's Cristina Yang a "rockstar ''.
Entertainment Weekly wrote, "You 're still pondering how Grey 's can still be so damn good sometimes, '' further hugely praising the episode, "They did a bang - up job, it felt real in every way, thanks to some stellar performances and writing. And yet it maintained its Grey'sness thanks to some quieter moments, and plenty of witty exchanges to boot. '' Regarding the Bailey - Mary - Charles sequence the site wrote, "the action felt so real '', but added, "But the most intense action was culminating in the OR with Cristina, April and Meredith had a nice moment sitting on the ground together, April sniffling and Meredith stonefaced. Kevin McKidd did some stellar acting as he interacted with them. '' Talking about Meredith 's miscarriage the site added, "She saved Owen, though she did it while having a miscarriage -- again, leave it to Grey 's to make this the least dramatic, most underplayed storyline of the night. (Brilliantly so.) ''
Spoiler Junkie wrote highly of the episode Meredith Grey in particular saying, "In the most heart - wrenching moment of the entire two - hour season finale, Meredith bursts into the room, asking Clark to kill her. Eye for an eye, she says. She 's Lexie 's sister. She 's like a daughter to Dr. Weber. She 's the Chief 's wife. '' and noted that, "This was an amazing season finale, one of the best in the six seasons of Grey 's Anatomy. The second hour of the Grey 's Anatomy Season 6 season finale, Death and All His Friends, is even more intense than the first. '' HitFix also lauded the episode and wrote, "In every sense, these two episodes where Shonda Rhimes, directors Stephen Cragg and Rob Corn, and the entire cast and crew of the show "Going For It ''. There was no opportunity for suspense, or tears, or anger, or horror, or some good old - fashioned monologuing left unexplored. Huge stakes, huge emotions, great performances from everybody Chandra Wilson and Sandra Oh were particularly great, as was Michael O'Neill as shooter Gary Clark. This was about as good as I 've ever seen "Grey 's. '' Episodes like these are what I point to when anyone asks me why I 'm still watching
The site also lauded Ellen Pompeo 's character, stating, "Meredith would suffer a miscarriage in the middle of patching up Owen Hunt 's bullet wound - and be so hard - core, just as her best friend was next door in risking death to save Meredith 's man, that she would just keep working even as blood dripped down her legs. A moment like that works only if everyone involved is 100 % committed to the insanity of it, and here everyone was. They all went for it and came up with a riveting two hours of TV. '' At the completion of the eleventh season, Wired named the episode in its must - watch list, adding, "The show reaches its dramatic apex in a multi-episode arc about a deadly spree shooter who visits the hospital with a vendetta. The repercussions of this incident will echo through most of Season 7, and well into the future. '' The site also called Pompeo 's ' Shoot Me. I 'm Your Eye for an Eye. ' the best scene from the entire series.
|
what is the main source of energy for the herbivore | Herbivore - wikipedia
A herbivore is an animal anatomically and physiologically adapted to eating plant material, for example foliage, for the main component of its diet. As a result of their plant diet, herbivorous animals typically have mouthparts adapted to rasping or grinding. Horses and other herbivores have wide flat teeth that are adapted to grinding grass, tree bark, and other tough plant material.
A large percentage of herbivores have mutualistic gut flora that help them digest plant matter, which is more difficult to digest than animal prey. This flora is made up of cellulose - digesting protozoans or bacteria.
Herbivore is the anglicized form of a modern Latin coinage, herbivora, cited in Charles Lyell 's 1830 Principles of Geology. Richard Owen employed the anglicized term in an 1854 work on fossil teeth and skeletons. Herbivora is derived from the Latin herba meaning a small plant or herb, and vora, from vorare, to eat or devour.
Herbivory is a form of consumption in which an organism principally eats autotrophs such as plants, algae and photosynthesizing bacteria. More generally, organisms that feed on autotrophs in general are known as primary consumers. Herbivory usually refers to animals eating plants; fungi, bacteria and protists that feed on living plants are usually termed plant pathogens (plant diseases), and microbes that feed on dead plants are saprotrophs. Flowering plants that obtain nutrition from other living plants are usually termed parasitic plants. There is, however, no single exclusive and definitive ecological classification of consumption patterns; each textbook has its own variations on the theme.
Our understanding of herbivory in geological time comes from three sources: fossilized plants, which may preserve evidence of defence (such as spines), or herbivory - related damage; the observation of plant debris in fossilised animal faeces; and the construction of herbivore mouthparts.
Although herbivory was long thought to be a Mesozoic phenomenon, fossils have shown that within less than 20 million years after the first land plants evolved, plants were being consumed by arthropods. Insects fed on the spores of early Devonian plants, and the Rhynie chert also provides evidence that organisms fed on plants using a "pierce and suck '' technique.
During the next 75 million years, plants evolved a range of more complex organs, such as roots and seeds. There is no evidence of any organ being fed upon until the middle - late Mississippian, 330.9 million years ago. There was a gap of 50 to 100 million years between the time each organ evolved and the time organisms evolved to feed upon them; this may be due to the low levels of oxygen during this period, which may have suppressed evolution. Beyond their status as arthropods, the identity of these early herbivores is uncertain. Hole feeding and skeletonisation are recorded in the early Permian, with surface fluid feeding evolving by the end of that period.
Herbivory among four - limbed terrestrial vertebrates, the tetrapods, developed in the Late Carboniferous (307 - 299 million years ago). Early tetrapods were large amphibious piscivores. While amphibians continued to feed on fish and insects, some reptiles began exploring two new food types, tetrapods (carnivory) and plants (herbivory). The entire dinosaur order Ornithischia was composed of herbivorous dinosaurs. Carnivory was a natural transition from insectivory for medium and large tetrapods, requiring minimal adaptation. In contrast, a complex set of adaptations was necessary for feeding on highly fibrous plant materials.
Arthropods evolved herbivory in four phases, changing their approach to it in response to changing plant communities. Tetrapod herbivores made their first appearance in the fossil record of their jaws near the Permio - Carboniferous boundary, approximately 300 million years ago. The earliest evidence of their herbivory has been attributed to dental occlusion, the process in which teeth from the upper jaw come into contact with teeth in the lower jaw. The evolution of dental occlusion led to a drastic increase in plant food processing and provides evidence about feeding strategies based on tooth wear patterns. Examination of phylogenetic frameworks of tooth and jaw morphologies has revealed that dental occlusion developed independently in several lineages of tetrapod herbivores. This suggests that evolution and spread occurred simultaneously within various lineages.
Herbivores form an important link in the food chain because they consume plants in order to digest the carbohydrates photosynthetically produced by a plant. Carnivores in turn consume herbivores for the same reason, while omnivores can obtain their nutrients from either plants or animals. Due to a herbivore 's ability to survive solely on tough and fibrous plant matter, they are termed the primary consumers in the food cycle (chain). Herbivory, carnivory, and omnivory can be regarded as special cases of Consumer - Resource Systems.
Two herbivore feeding strategies are grazing (e.g. cows) and browsing (e.g. moose). Although the exact definition of the feeding strategy may depend on the writer, most authors agree that to define a grazer at least 90 % of the forage has to be grass, and for a browser at least 90 % tree leaves and / or twigs. An intermediate feeding strategy is called "mixed - feeding ''. In their daily need to take up energy from forage, herbivores of different body mass may be selective in choosing their food. "Selective '' means that herbivores may choose their forage source depending on, e.g., season or food availability, but also that they may choose high quality (and consequently highly nutritious) forage before lower quality. The latter especially is determined by the body mass of the herbivore, with small herbivores selecting for high quality forage, and with increasing body mass animals are less selective. Several theories attempt to explain and quantify the relationship between animals and their food, such as Kleiber 's law, Holling 's disk equation and the marginal value theorem (see below).
Kleiber 's law describes the relationship between an animal 's size and its feeding strategy, saying that larger animals need to eat less food per unit weight than smaller animals. Kleiber 's law states that the metabolic rate (q) of an animal is the mass of the animal (M) raised to the 3 / 4 power: q = M^(3 / 4). Therefore, the mass of the animal increases at a faster rate than the metabolic rate.
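As a rough numerical illustration of this scaling, the short Python sketch below computes q = M^(3 / 4) and the mass - specific rate q / M for a few hypothetical body masses (the masses and units are arbitrary, chosen only for illustration); the falling q / M column is the point of Kleiber 's law.

def metabolic_rate(mass):
    # Kleiber's law: metabolic rate scales as mass raised to the 3/4 power (arbitrary units).
    return mass ** 0.75

# Hypothetical body masses, purely for illustration.
for mass in (1.0, 10.0, 100.0, 1000.0):
    q = metabolic_rate(mass)
    print(f"mass = {mass:7.1f}   q = {q:8.2f}   q per unit mass = {q / mass:.3f}")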
Herbivores employ numerous types of feeding strategies. Many herbivores do not fall into one specific feeding strategy, but employ several strategies and eat a variety of plant parts.
Optimal Foraging Theory is a model for predicting animal behavior while looking for food or other resource, such as shelter or water. This model assesses both individual movement, such as animal behavior while looking for food, and distribution within a habitat, such as dynamics at the population and community level. For example, the model would be used to look at the browsing behavior of a deer while looking for food, as well as that deer 's specific location and movement within the forested habitat and its interaction with other deer while in that habitat.
This model has been criticized as circular and untestable. Critics have pointed out that its proponents use examples that fit the theory, but do not use the model when it does not fit the reality. Other critics point out that animals do not have the ability to assess and maximize their potential gains, therefore the optimal foraging theory is irrelevant and derived to explain trends that do not exist in nature.
Holling 's disk equation models the efficiency at which predators consume prey. The model predicts that as the number of prey increases, the amount of time predators spend handling prey also increases and therefore the efficiency of the predator decreases. In 1959, C. S. Holling proposed an equation to model the rate of return for an optimal diet: R = Ef / (Ts + Th), where R is the rate of return, Ef is the energy gained in foraging, Ts is the time spent searching and Th is the time spent handling. (Holling 's fuller formulation also involves the cost of search per unit time, the rate of encounter with food items, the handling time and the energy gained per encounter.) In effect, this indicates that a herbivore in a dense forest would spend more time handling (eating) the vegetation, because there is so much vegetation around, than a herbivore in a sparse forest, which could easily browse through the forest vegetation. According to Holling 's disk equation, the herbivore in the sparse forest would therefore be more efficient at eating than the herbivore in the dense forest.
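To make the dense - versus sparse - forest comparison concrete, here is a minimal Python sketch of the rate - of - return form R = Ef / (Ts + Th); the energy and time values are hypothetical, chosen only to reflect the assumption that dense vegetation raises handling time.

def rate_of_return(energy_gained, time_searching, time_handling):
    # R = Ef / (Ts + Th): energy gained per unit of total foraging time.
    return energy_gained / (time_searching + time_handling)

# Hypothetical numbers: same energy gained, but the dense forest demands more handling time.
dense = rate_of_return(energy_gained=100.0, time_searching=1.0, time_handling=9.0)
sparse = rate_of_return(energy_gained=100.0, time_searching=4.0, time_handling=2.0)
print(f"dense forest:  R = {dense:.1f}")   # 10.0
print(f"sparse forest: R = {sparse:.1f}")  # about 16.7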
The marginal value theorem describes the balance between eating all the food in a patch for immediate energy, or moving to a new patch and leaving the plants in the first patch to regenerate for future use. The theory predicts that, absent complicating factors, an animal should leave a resource patch when the rate of payoff (amount of food) falls below the average rate of payoff for the entire area. According to this theory, an animal should move to a new patch of food when the patch it is currently feeding on requires more energy to obtain food than an average patch. Within this theory, two subsequent parameters emerge, the Giving Up Density (GUD) and the Giving Up Time (GUT). The Giving Up Density (GUD) quantifies the amount of food that remains in a patch when a forager moves to a new patch. The Giving Up Time (GUT) is used when an animal continuously assesses the patch quality.
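The leaving rule of the marginal value theorem can be sketched in a few lines of Python; the diminishing - returns gain function and the habitat - wide average rate below are hypothetical stand - ins, not values from any study.

import math

def gain_rate(time_in_patch, initial_rate=10.0, depletion=0.5):
    # Instantaneous rate of food gain, decaying as the patch is depleted (hypothetical form).
    return initial_rate * math.exp(-depletion * time_in_patch)

average_rate = 3.0  # hypothetical average payoff rate for the whole area

# The forager should leave once the patch's instantaneous rate drops below the average rate.
t = 0.0
while gain_rate(t) > average_rate:
    t += 0.1
print(f"leave the patch after about {t:.1f} time units")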
The myriad defenses displayed by plants mean that their herbivores need a variety of skills to overcome these defenses and obtain food. These allow herbivores to increase their feeding and use of a host plant. Herbivores have three primary strategies for dealing with plant defenses: choice, herbivore modification, and plant modification.
Feeding choice involves which plants a herbivore chooses to consume. It has been suggested that many herbivores feed on a variety of plants to balance their nutrient uptake and to avoid consuming too much of any one type of defensive chemical. This involves a tradeoff, however, between foraging on many plant species to avoid toxins or specializing on one type of plant that can be detoxified.
Herbivore modification is when various adaptations to body or digestive systems of the herbivore allow them to overcome plant defenses. This might include detoxifying secondary metabolites, sequestering toxins unaltered, or avoiding toxins, such as through the production of large amounts of saliva to reduce effectiveness of defenses. Herbivores may also utilize symbionts to evade plant defences. For example, some aphids use bacteria in their gut to provide essential amino acids lacking in their sap diet.
Plant modification occurs when herbivores manipulate their plant prey to increase feeding. For example, some caterpillars roll leaves to reduce the effectiveness of plant defenses activated by sunlight.
A plant defense is a trait that increases plant fitness when faced with herbivory. This is measured relative to another plant that lacks the defensive trait. Plant defenses increase survival and / or reproduction (fitness) of plants under pressure of predation from herbivores.
Defense can be divided into two main categories, tolerance and resistance. Tolerance is the ability of a plant to withstand damage without a reduction in fitness. This can occur by diverting herbivory to non-essential plant parts or by rapid regrowth and recovery from herbivory. Resistance refers to the ability of a plant to reduce the amount of damage it receives from a herbivore. This can occur via avoidance in space or time, physical defenses, or chemical defenses. Defenses can either be constitutive, always present in the plant, or induced, produced or translocated by the plant following damage or stress.
Physical, or mechanical, defenses are barriers or structures designed to deter herbivores or reduce intake rates, lowering overall herbivory. Thorns such as those found on roses or acacia trees are one example, as are the spines on a cactus. Smaller hairs known as trichomes may cover leaves or stems and are especially effective against invertebrate herbivores. In addition, some plants have waxes or resins that alter their texture, making them difficult to eat. The incorporation of silica into cell walls plays a role analogous to that of lignin, in that silica is a compression - resistant structural component of cell walls; plants whose cell walls are impregnated with silica are thereby afforded a measure of protection against herbivory.
Chemical defenses are secondary metabolites produced by the plant that deter herbivory. There are a wide variety of these in nature and a single plant can have hundreds of different chemical defenses. Chemical defenses can be divided into two main groups, carbon - based defenses and nitrogen - based defenses.
Plants have also evolved features that enhance the probability of attracting natural enemies of herbivores. Some emit semiochemicals, odors that attract natural enemies, while others provide food and housing to maintain the natural enemies ' presence, e.g. ants that reduce herbivory. A given plant species often has many types of defensive mechanisms, mechanical or chemical, constitutive or induced, which allow it to escape from herbivores.
According to the theory of predator -- prey interactions, the relationship between herbivores and plants is cyclic. When prey (plants) are numerous their predators (herbivores) increase in numbers, reducing the prey population, which in turn causes predator number to decline. The prey population eventually recovers, starting a new cycle. This suggests that the population of the herbivore fluctuates around the carrying capacity of the food source, in this case the plant.
Several factors play into these fluctuating populations and help stabilize predator -- prey dynamics. For example, spatial heterogeneity is maintained, which means there will always be pockets of plants not found by herbivores. This stabilizing dynamic plays an especially important role for specialist herbivores that feed on one species of plant and prevents these specialists from wiping out their food source. Prey defenses also help stabilize predator -- prey dynamics, and for more information on these relationships see the section on Plant Defenses. Eating a second prey type helps herbivores ' populations stabilize. Alternating between two or more plant types provides population stability for the herbivore, while the populations of the plants oscillate. This plays an important role for generalist herbivores that eat a variety of plants. Keystone herbivores keep vegetation populations in check and allow for a greater diversity of both herbivores and plants. When an invasive herbivore or plant enters the system, the balance is thrown off and the diversity can collapse to a monotaxon system.
The back and forth relationship of plant defense and herbivore offense can be seen as a sort of "adaptation dance '' in which one partner makes a move and the other counters it. This reciprocal change drives coevolution between many plants and herbivores, resulting in what has been referred to as a "coevolutionary arms race ''. The escape and radiation mechanism of coevolution presents the idea that adaptations in herbivores and their host plants have been the driving force behind speciation.
While much of the interaction of herbivory and plant defense is negative, with one individual reducing the fitness of the other, some is actually beneficial. This beneficial herbivory takes the form of mutualisms in which both partners benefit in some way from the interaction. Seed dispersal by herbivores and pollination are two forms of mutualistic herbivory in which the herbivore receives a food resource and the plant is aided in reproduction.
Herbivorous fish and marine animals are an indispensable part of the coral reef ecosystem. Since algae and seaweeds grow much faster than corals they can occupy spaces where corals could have settled. They can outgrow and thus outcompete corals on bare surfaces. In the absence of plant - eating fish, seaweeds deprive corals of sunlight. They can also physically damage corals with scrapes.
The impact of herbivory can be seen in areas ranging from the economic to the ecological, and often both at once. For example, environmental degradation from white - tailed deer (Odocoileus virginianus) in the US alone has the potential both to change vegetative communities through over-browsing and to cost forest restoration projects upwards of $750 million annually. Agricultural crop damage by the same species totals approximately $100 million every year. Insect crop damage also contributes largely to annual crop losses in the U.S. Herbivores also affect economics through the revenue generated by hunting and ecotourism. For example, the hunting of herbivorous game species such as white - tailed deer, cottontail rabbits, antelope, and elk in the U.S. contributes greatly to the billion - dollar annual hunting industry. Ecotourism is a major source of revenue, particularly in Africa, where many large mammalian herbivores such as elephants, zebras, and giraffes help to bring in the equivalent of millions of US dollars to various nations annually.
|
when the secondary of a transformer must be connected in delta | Delta - wye transformer - Wikipedia
A delta - wye transformer is a type of three - phase electric power transformer design that employs delta - connected windings on its primary and wye / star connected windings on its secondary. A neutral wire can be provided on the wye output side. It can be a single three - phase transformer, or built from three independent single - phase units. An equivalent term is delta - star transformer.
Delta - wye transformers are common in commercial, industrial, and high - density residential locations, to supply three - phase distribution systems.
An example would be a distribution transformer with a delta primary, running on three 11 kV phases with no neutral or earth required, and a star (or wye) secondary providing a 3 - phase supply at 415 V, with the domestic voltage of 240 available between each phase and the earthed (grounded) neutral point.
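The 415 V and 240 V figures in this example follow from the wye - side relationship between phase - to - neutral and phase - to - phase voltage, which differ by a factor of the square root of three; the short Python check below uses the example 's 240 V figure.

import math

phase_to_neutral = 240.0                       # domestic voltage from the example above
phase_to_phase = math.sqrt(3) * phase_to_neutral
print(f"phase-to-phase voltage is about {phase_to_phase:.0f} V")  # about 416 V, nominally quoted as 415 V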
The delta winding allows third - harmonic currents to circulate within the transformer, and prevents third - harmonic currents from flowing in the supply line. Delta connected windings are not common for higher transmission voltages (138 kV and above) owing to the higher cost of insulation compared with a wye connection.
Delta - wye transformers introduce a 30, 150, 270 or 330 degree phase shift. Thus they can not be paralleled with wye - wye (or delta - delta) transformers. However, they can be paralleled with identical configurations and some different configurations of other delta - wye (or wye - delta with some attention) transformers.
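Where the 30 - degree figure comes from can be sketched with complex phasors: on the wye side, a line - to - line voltage is the difference of two phase - to - neutral phasors 120 degrees apart, and that difference is larger by a factor of the square root of three and displaced by 30 degrees. The Python snippet below is a minimal illustration with unit magnitudes; which of the shifts listed above (30, 150, 270 or 330 degrees) is actually realised depends on the winding connection and terminal labelling.

import cmath, math

Va = cmath.rect(1.0, math.radians(0))      # phase A to neutral
Vb = cmath.rect(1.0, math.radians(-120))   # phase B to neutral, 120 degrees behind
Vab = Va - Vb                              # line-to-line voltage between A and B

print(f"|Vab| = {abs(Vab):.3f}")                                 # 1.732, i.e. sqrt(3)
print(f"angle(Vab) = {math.degrees(cmath.phase(Vab)):.1f} deg")  # +30.0 relative to Va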
|
the chronicles of narnia prince caspian book summary | Prince Caspian - Wikipedia
Prince Caspian (originally published as Prince Caspian: The Return to Narnia) is a high fantasy novel for children by C.S. Lewis, published by Geoffrey Bles in 1951. It was the second published of seven novels in The Chronicles of Narnia (1950 -- 1956), and Lewis had finished writing it in 1949, before the first book was out. It is volume four in recent editions of the series, sequenced according to Narnia history. Like the others, it was illustrated by Pauline Baynes and her work has been retained in many later editions; later editions have also been illustrated by Chris Van Allsburg (1978) and by Leo and Diane Dillon (1994).
Prince Caspian features the return to Narnia of the four Pevensie children of The Lion, the Witch, and the Wardrobe. The novel is set about a year later than The Lion, the Witch, and the Wardrobe in English time, but 1,300 years later in Narnian time. The Pevensie siblings are legendary Kings and Queens of Narnia and are magically recalled once again as children by the refugee Prince Caspian.
Prince Caspian has been adapted and filmed as two episodes of a BBC television series in 1989 and as a feature film in 2008.
While standing on a railway station, Peter, Susan, Edmund, and Lucy Pevensie are magically whisked away to a beach near an old and ruined castle. They determine the ruin is Cair Paravel, where they ruled as the Kings and Queens of Narnia, and discover the treasure vault where Peter 's sword and shield, Susan 's bow and arrows, and Lucy 's bottle of magical cordial and dagger are stored. Susan 's horn for summoning help is missing, as she left it in the woods the day they returned to England after their prior visit to Narnia. Although only a year has passed in England, 1300 years have passed in Narnia.
The children intervene to rescue Trumpkin the dwarf from soldiers who have brought him to the ruins to drown him. Trumpkin tells the children that since their disappearance, a race of men called Telmarines have invaded Narnia, driving the Talking Beasts into the wilderness and pushing even their memory underground. Narnia is ruled by King Miraz and his wife Queen Prunaprismia, but the rightful king is Miraz 's nephew, Prince Caspian, who has gained the support of the Old Narnians.
Miraz usurped the throne by killing his brother, Caspian 's father King Caspian IX. Miraz tolerated Caspian as heir until his own son was born. Prince Caspian, until that point ignorant of his uncle 's deeds, escaped from Miraz 's Castle with the aid of his tutor Doctor Cornelius, who schooled him in the lore of Old Narnia, and gave him Queen Susan 's horn. Caspian fled into the forest but was knocked unconscious when his horse bolted. He awoke in the den of a talking badger, Trufflehunter, and two dwarfs, Nikabrik and Trumpkin, who accepted Caspian as their king.
The badger and dwarves took Caspian to meet many creatures of Old Narnia. During a midnight council on Dancing Lawn, Doctor Cornelius arrived to warn them of the approach of King Miraz and his army; he urged them to flee to Aslan 's How in the great woods near Cair Paravel. The Telmarines followed the Narnians to the How, and after several skirmishes the Narnians appeared close to defeat. At a second war council, they discussed whether to use Queen Susan 's horn, and whether it would bring Aslan or the Kings and Queens of the golden age. Not knowing where help would arrive, they dispatched a squirrel to Lantern Waste and Trumpkin to Cair Paravel; it is then that Trumpkin was captured by the Telmarines and rescued by the Pevensies.
Trumpkin and the Pevensies make their way to Caspian. They try to save time by travelling up Glasswater Creek, but lose their way. Lucy sees Aslan and wants to follow where he leads, but the others do not believe her and follow their original course, which becomes increasingly difficult. In the night, Aslan calls Lucy and tells her she must awaken the others and insist they follow her on Aslan 's path. When the others obey, they begin to see Aslan 's shadow, then Aslan himself. Aslan sends Peter, Edmund, and Trumpkin ahead to Aslan 's How to deal with treachery brewing there, and follows with Susan and Lucy.
Peter, Edmund, and Trumpkin enter Aslan 's How; they overhear Nikabrik and his confederates, a Hag and a Wer - Wolf, trying to persuade Caspian, Cornelius, and Trufflehunter to help them resurrect the White Witch in hopes of using her power to defeat Miraz. A fight ensues, and Nikabrik and his comrades are slain.
Peter challenges Miraz to single combat; the army of the victor in this duel will be considered the victor in the war. Miraz accepts the challenge, goaded by Lords Glozelle and Sopespian. Miraz loses the combat, but Glozelle and Sopespian declare that the Narnians have cheated and stabbed the King in the back while he was down. They command the Telmarine army to attack, and in the commotion that follows, Glozelle stabs Miraz in the back. Aslan, accompanied by Lucy and Susan, summons the gods Bacchus and Silenus, and with their help brings the woods to life. The gods and awakened trees turn the tide of battle and send the Telmarines fleeing. Discovering themselves trapped at the Great River, where their bridge has been destroyed by Bacchus, the Telmarines surrender.
Aslan gives the Telmarines a choice of staying in Narnia under Caspian or returning to Earth, their original home. After one volunteer disappears through the magic door created by Aslan, the Pevensies go through to reassure the other Telmarines, though Peter and Susan reveal to Edmund and Lucy that they are too old to return to Narnia. The Pevensies find themselves back at the railway station.
The two major themes of the story are courage and chivalry and, as Lewis himself said in a letter to an American girl, "the restoration of the true religion after a corruption ''.
The Telmarine conquest of Narnia, as depicted in the book, is in many ways similar to the historical Norman Conquest of England. Though there is no precise parallel in actual English history to the specific events of this book, the end result -- "Old Narnians '' and Telmarines becoming a single people and living together in harmony -- is similar to the historical process of Saxons and Normans eventually fusing into a single English people.
The BBC adapted Prince Caspian in two episodes of the 1989 series of The Chronicles of Narnia.
The second in the series of films from Walt Disney Pictures and Walden Media, titled The Chronicles of Narnia: Prince Caspian, was released in the US on 16 May 2008. The UK release date was 26 June 2008.
The book was the inspiration for a song of the same name on the Phish album Billy Breathes.
The script for a stage adaptation was written by Erina Caradus and first performed in 2007.
|
who is believed to be the founder of the jewish hasidic movement | Hasidic Judaism - wikipedia
Hasidism, sometimes Hasidic Judaism (Hebrew: חסידות , hasidut, Ashkenazi pronunciation: (χaˈsidus); originally, "piety ''), is a Jewish religious group. It arose as a spiritual revival movement in contemporary Western Ukraine during the 18th century, and spread rapidly throughout Eastern Europe. Today, most affiliates reside in the United States, Israel, and the United Kingdom. Israel Ben Eliezer, the "Baal Shem Tov '', is regarded as its founding father, and his disciples developed and disseminated it. Present - day Hasidism is a sub-group within Ultra-Orthodox ("Haredi '') Judaism, and is noted for its religious conservatism and social seclusion. Its members adhere closely both to Orthodox Jewish practice -- with the movement 's own unique emphases -- and the traditions of Eastern European Jews, so much that many of the latter, including various garments or the use of the Yiddish language, are nowadays associated almost exclusively with Hasidism.
Hasidic thought draws heavily on Lurianic Kabbalah, and, to an extent, is a popularization of it. Teachings emphasize God 's immanence in the universe, the need to cleave and be one with Him at all times, the devotional aspect of religious practice, and the spiritual dimension of corporeality and mundane acts. Hasidim, the adherents of Hasidism, are organized in independent sects known as "courts '' or dynasties, each headed by its own hereditary leader, a Rebbe. Reverence and submission to the Rebbe are key tenets, as he is considered a spiritual authority with whom the follower must bond to gain closeness to God. The various "courts '' share basic convictions, but operate apart, and possess unique traits and customs. Affiliation is often retained in families for generations, and being Hasidic is as much a sociological factor, entailing birth into a specific community and allegiance to a dynasty of Rebbes, as it is a purely religious one. There are several "courts '' with many thousands of member households each, and dozens of smaller ones. The total number of Hasidim, both adults and children, is estimated to be above 400,000.
The terms hasid and hasidut, meaning "pietist '' and "piety '', have a long history in Judaism. The Talmud and other old sources refer to the "Pietists of Old '' (Hasidim ha - Rishonim) who would contemplate an entire hour in preparation for prayer. The phrase denoted extremely devoted individuals who not only observed the Law to its letter, but performed good deeds even beyond it. Adam himself is honored with the title in tractate Eruvin 18b by Rabbi Meir: "Adam was a great hasid, having fasted for 130 years. '' The first to adopt the epithet collectively were apparently the hasidim in Second Temple period Judea, known as Hasideans after the Greek rendering of their name, who perhaps served as the model for those mentioned in the Talmud. The title continued to be applied as an honorific for the exceptionally devout. In 12th - century Rhineland, or Ashkenaz in Jewish parlance, another prominent school of ascetics named themselves hasidim; to distinguish them from the rest, later research employed the term Ashkenazi Hasidim. In the 16th century, when Kabbalah spread, the title also became associated with it. Jacob ben Hayyim Zemah wrote in his glossa on Isaac Luria 's version of the Shulchan Aruch that, "One who wishes to tap the hidden wisdom, must conduct himself in the manner of the Pious. ''
The movement founded by Israel Ben Eliezer in the 18th century adopted the term hasidim in the original connotation. But when the sect grew and developed specific attributes, from the 1770s, the names gradually acquired a new meaning. Its common adherents, belonging to groups each headed by a spiritual leader, were henceforth known as Hasidim. The transformation was slow: The movement was at first referred to as "New Hasidism '' by outsiders (as recalled in the autobiography of Salomon Maimon) to separate it from the old one, and its enemies derisively mocked its members as Mithasdim, "(those who) pretend (to be) hasidim ''. Yet, eventually, the young sect gained such a mass following that the old connotation was sidelined. In popular discourse, at least, Hasid came to denote someone who follows a religious teacher from the movement. It also entered Modern Hebrew as such, meaning "adherent '' or "disciple ''. One was not merely a hasid anymore, observed historian David Assaf, but a Hasid of someone or some dynasty in particular. This linguistic transformation paralleled that of the word tzaddik, "righteous '', which the Hasidic leaders adopted for themselves -- though they are known colloquially as Rebbes or by the honorific Admor. Originally denoting an observant, moral person, in Hasidic literature tzaddik became synonymous with the often hereditary master heading a sect of followers.
The lengthy history of Hasidism, the numerous schools of thought therein, and particularly its use of the traditional medium of homiletic literature and sermons -- comprising numerous references to earlier sources in the Pentateuch, Talmud and exegesis as a means to grounding oneself in tradition -- as the almost sole channel to convey its ideas, all made the isolation of a common doctrine highly challenging to researchers. As noted by Joseph Dan, "every attempt to present such a body of ideas has failed. '' Even motifs presented by scholars in the past as unique Hasidic contributions were later revealed to have been common among both their predecessors and opponents, all the more so regarding many other traits that are widely extant -- these play, Dan added, "a prominent role in modern non-Hasidic and anti-Hasidic writings as well ''. The difficulty of separating the movement 's philosophy from that of its main inspiration, Lurianic Kabbalah, and determining what was novel and what merely a recapitulation, also baffled historians. Some, like Louis Jacobs, regarded the early masters as innovators who introduced "much that was new if only by emphasis ''; others, primarily Mendel Piekarz, argued to the contrary that but a little was not found in much earlier tracts, and the movement 's originality lay in the manner it popularized these teachings to become the ideology of a well - organized sect.
Among the traits particularly associated with Hasidism in common understanding which are in fact widespread, is the importance of joy and happiness at worship and religious life -- though the sect undoubtedly stressed this aspect and still possesses a clear populist bent. Another example is the value placed on the simple, ordinary Jew in supposed contradiction with the favouring of elitist scholars beforehand; such ideas are common in ethical works far preceding Hasidism. The movement did for a few decades challenge the rabbinic establishment, which relied on the authority of Torah acumen, but affirmed the centrality of study very soon. Concurrently, the image of its Opponents as dreary intellectuals who lacked spiritual fervour and opposed mysticism is likewise unfounded. Neither did Hasidism, often portrayed as promoting healthy sensuality, unanimously reject the asceticism and self - mortification associated primarily with its rivals. Joseph Dan ascribed all these perceptions to so - called "Neo-Hasidic '' writers and thinkers, like Martin Buber. In their attempt to build new models of spirituality for modern Jews, they propagated a romantic, sentimental image of the movement. The "Neo-Hasidic '' interpretation influenced even scholarly discourse to a great degree, but had a tenuous connection with reality.
A further complication is the divide between what researchers term "early Hasidism '', which ended in the early 1800s, and established Hasidism since then onwards. While the former was a highly dynamic religious revival movement, the latter phase is characterized by consolidation into sects with hereditary leadership. The mystical teachings formulated during the first era were by no means repudiated, and many Hasidic masters remained consummate spiritualists and original thinkers; as noted by Benjamin Brown, Buber 's once commonly accepted view that the routinization constituted "decadence '' was refuted by later studies, demonstrating that the movement remained very much innovative. Yet many aspects of early Hasidism were indeed de-emphasized in favour of more conventional religious expressions, and its radical concepts were largely neutralized. Some Rebbes adopted a relatively rationalist bent, sidelining their explicit mystical, theurgical roles, and many others functioned almost solely as political leaders of large communities. As to their Hasidim, affiliation was less a matter of admiring a charismatic leader as in the early days, but rather birth into a family belonging to a specific "court ''.
The most fundamental theme underlying all Hasidic theory is the immanence of God in the universe, often expressed in a phrase from Tikunei haZohar, Leit Atar panuy mi - néya (Aramaic: "no site is devoid of Him ''). This panentheistic concept was derived from Lurianic discourse, but greatly expanded in the Hasidic one. In the beginning, in order to create the world, God contracted (Tzimtzum) His Omnipresence, the Ein Sof, leaving a Vacant Void (Khalal panui), bereft from obvious presence and therefore able to entertain free will, contradictions and other phenomena seemingly separate from God Himself. These would have been impossible within His original, perfect existence. Yet, the very reality of the world which was created in the Void is entirely dependent on its divine origin. Matter would have been null and void without the true, spiritual essence it possesses. Just the same, the infinite Ein Sof can not manifest in the Vacant Void, and must limit itself in the guise of measurable corporeality that may be perceived.
Thus, there is a dualism between the true aspect of everything and the physical side, false but ineluctable, with each evolving into the other: as God must compress and disguise Himself, so must humans and matter in general ascend and reunite with the Omnipresence. Rachel Elior quoted Shneur Zalman of Liadi, in his commentary Torah Or on Genesis 28: 21, who wrote that "this is the purpose of Creation, from Infinity to Finitude, so it may be reversed from the state of Finite to that of Infinity ''. Kabbalah stressed the importance of this dialectic, but mainly (though not exclusively) evoked it in cosmic terms, referring for example to the manner in which God progressively diminished Himself into the world through the various dimensions, or Sephirot. Hasidism applied it also to the most mundane details of human existence. All Hasidic schools devoted a prominent place in their teaching, with differing accentuation, to the interchanging nature of Ein, both infinite and imperceptible, becoming Yesh, "Existent '' -- and vice versa. They used the concept as a prism to gauge the world, and the needs of the spirit in particular. Elior noted: "reality lost its static nature and permanent value, now measured by a new standard, seeking to expose the Godly, boundless essence, manifest in its tangible, circumscribed opposite. ''
One major derivative of this philosophy is the notion of devekut, "communion ''. As God was everywhere, connection with Him had to be pursued ceaselessly as well, in all times, places and occasions. Such an experience was in the reach of every person, who only had to negate his inferior impulses and grasp the truth of divine immanence, enabling him to unite with it and attain the state of perfect, selfless bliss. Hasidic masters, well versed in the teachings concerning communion, are supposed not only to gain it themselves, but to guide their flock to it. Devekut was not a strictly defined experience; many varieties were described, from the utmost ecstasy of the learned leaders to the common man 's more humble yet no less significant emotion during prayer.
Closely linked with the former is Bitul ha - Yesh, "Negation of the Existent '', or of the "Corporeal ''. Hasidism teaches that while a superficial observance of the universe by the "eyes of the flesh '' (Einei ha - Basar) purportedly reflects the reality of all things profane and worldly, a true devotee must transcend this illusory façade and realize that there is nothing but God. It is not only a matter of perception, but very practical, for it entails also abandoning material concerns and cleaving only to the true, spiritual ones, oblivious to the surrounding false distractions of life. The practitioner 's success in detaching from his sense of person, and conceive himself as Ein (in the double meaning of ' naught ' and ' infinite '), is regarded as the highest state of elation in Hasidism. The true divine essence of man -- the soul -- may then ascend and return to the upper realm, where it does not possess an existence independent from God. This ideal is termed Hitpashtut ha - Gashmi'yut, "the expansion (or removal) of corporeality ''. It is the dialectic opposite of God 's contraction into the world.
To be enlightened and capable of Bitul ha - Yesh, pursuing the pure spiritual aims and defying the primitive impulses of the body, one must overcome his inferior "Bestial Soul '', connected with the Eyes of the Flesh. He may be able to tap into his "Divine Soul '' (Nefesh Elohit), which craves communion, by employing constant contemplation, Hitbonenot, on the hidden Godly dimension of all that exists. Then he could understand his surroundings with the "Eyes of the Intellect ''. The ideal adherent was intended to develop equanimity, or Hishtavut in Hasidic parlance, toward all matters worldly, not ignoring them, but understanding their superficiality.
Hasidic masters exhorted their followers to "negate themselves '', paying as little heed as they could for worldly concerns, and thus, to clear the way for this transformation. The struggle and doubt of being torn between the belief in God 's immanence and the very real sensual experience of the indifferent world is a key theme in the movement 's literature. Many tracts have been devoted to the subject, acknowledging that the "callous and rude '' flesh hinders one from holding fast to the ideal, and these shortcomings are extremely hard to overcome even in the purely intellectual level, a fortiori in actual life.
Another implication of this dualism is the notion of "Worship through Corporeality '', Avodah be-Gashmi'yut. As the Ein Sof metamorphosed into substance, so may it in turn be raised back to its higher state; likewise, since the machinations in the higher Sephirot exert their influence on this world, even the most simple action may, if performed correctly and with understanding, achieve the reverse effect. According to Lurianic doctrine, The netherworld was suffused with divine sparks, concealed within "husks '', Qliphoth. The glints had to be recovered and elevated to their proper place in the cosmos. "Materiality itself could be embraced and consecrated '', noted Glenn Dynner, and Hasidism taught that by common acts like dancing or eating, performed with intention, the sparks could be extricated and set free. Avodah be-Gashmi'yut had a clear, if not implicit, antinomian edge, possibly equating sacred rituals mandated by Judaism with everyday activities, granting them the same status in the believer 's eyes and having him content to commit the latter at the expense of the former. While at some occasions the movement did appear to step at that direction -- for example, in its early days prayer and preparation for it consumed so much time that adherents were blamed of neglecting sufficient Torah study -- Hasidic masters proved highly conservative. Unlike in other, more radical sects influenced by kabbalistic ideas, like the Sabbateans, Worship through Corporeality was largely limited to the elite and carefully restrained. The common adherents were taught they may engage it only mildly, through small deeds like earning money to support their leaders.
The complementary opposite of corporeal worship, or the elation of the finite into infinite, is the concept of Hamshacha, "drawing down '' or "absorbing '', and specifically, Hamschat ha - Shefa, "absorption of effluence ''. During spiritual ascension, one could siphon the power animating the higher dimensions down into the material world, where it would manifest as benevolent influence of all kinds. These included spiritual enlightenment, zest in worship and other high - minded aims, but also the more prosaic health and healing, deliverance from various troubles and simple economic prosperity. Thus, a very tangible and alluring motivation to become followers emerged. Both corporeal worship and absorption allowed the masses to access, with common actions, a religious experience once deemed esoteric.
Yet another reflection of the Ein - Yesh dialectic is pronounced in the transformation of evil to goodness and the relations between these two poles and other contradicting elements -- including various traits and emotions of the human psyche, like pride and humility, purity and profanity, et cetera. Hasidic thinkers argued that in order to redeem the sparks hidden, one had to associate not merely with the corporeal, but with sin and evil. One example is the elevation of impure thoughts during prayer, transforming them to noble ones rather than repressing them, advocated mainly in the early days of the sect; or "breaking '' one 's own character by directly confronting profane inclinations. This aspect, once more, had sharp antinomian implications and was used by the Sabbateans to justify excessive sinning. It was mostly toned down in late Hasidism, and even before that leaders were careful to stress that it was not exercised in the physical sense, but in the contemplative, spiritual one. This kabbalistic notion, too, was not unique to the movement and appeared frequently among other Jewish groups.
While its mystical and ethical teachings are not easily sharply distinguished from those of other Jewish currents, the defining doctrine of Hasidism is that of the saintly leader, serving both as an ideal inspiration and an institutional figure around whom followers are organized. In the movement 's sacral literature, this person is referred to as the Tzaddiq, the Righteous One -- often also known by the general honorific Admor (acronym of Hebrew for "our master, teacher and Rabbi ''), granted to rabbis in general, or colloquially as Rebbe. The idea that, in every generation, there are righteous persons through whom the divine effluence is drawn to the material world is rooted in the kabbalistic thought, which also claims that one of them is supreme, the reincarnation of Moses. Hasidism elaborated the notion of the Tzaddiq into the basis of its entire system -- so much that the very term gained an independent meaning within it, apart from the original which denoted God - fearing, highly observant people.
When the sect began to attract following and expanded from a small circle of learned disciples to a mass movement, it became evident that its complex philosophy could be imparted only partially to the new rank and file. As even intellectuals struggled with the sublime dialectics of infinity and corporeality, there was little hope to have the common folk truly internalize these, not as mere abstractions to pay lip service to. Ideologues exhorted them to have faith, but the true answer, which marked their rise as a distinct sect, was the concept of the Tzaddiq. A Hasidic master was to serve as a living embodiment of the recondite teachings. He was able to transcend matter, gain spiritual communion, Worship through Corporeality and fulfill all the theoretical ideals. As the vast majority of his flock could not do so themselves, they were to cleave to him instead, acquiring at least some semblance of those vicariously. His commanding and often -- especially in the early generations -- charismatic presence was to reassure the faithful and demonstrate the truth in Hasidic philosophy by countering doubts and despair. But more than spiritual welfare was concerned: Since it was believed he could ascend to the higher realms, the leader was able to harvest effluence and bring it down upon his adherents, providing them with very material benefits. "The crystallization of that theurgical phase '', noted Glenn Dynner, "marked Hasidism 's evolution into a full - fledged social movement. ''
In Hasidic discourse, the willingness of the leader to sacrifice the ecstasy and fulfillment of unity in God was deemed a heavy sacrifice undertaken for the benefit of the congregation. His followers were to sustain and especially to obey him, as he possessed superior knowledge and insight gained through communion. The "descent of the Righteous '' (Yeridat ha - Tzaddiq) into the matters of the world was depicted as identical with the need to save the sinners and redeem the sparks concealed in the most lowly places. Such a link between his functions as communal leader and spiritual guide legitimized the political power he wielded. It also prevented a retreat of Hasidic masters into hermitism and passivity, as many mystics before them did. Their worldly authority was perceived as part of their long - term mission to elevate the corporeal world back into divine infinity. To a certain extent, the Saint even fulfilled for his congregation, and for it alone, a limited Messianic capacity in his lifetime. After the Sabbatean debacle, this moderate approach provided a safe outlet for the eschatological urges. At least two leaders radicalized in this sphere and caused severe controversy: Nachman of Breslov, who declared himself the only true Tzaddiq, and Menachem Mendel Schneerson, whom many of his followers believed to be the Messiah. The Rebbes were subject to intense hagiography, even subtly compared with Biblical figures by employing prefiguration. It was argued that since followers could not "negate themselves '' sufficiently to transcend matter, they should instead "negate themselves '' in submission to the Saint (hitbatlut la - Tzaddiq), thus bonding with him and enabling themselves to access what he achieved in terms of spirituality. The Righteous served as a mystical bridge, drawing down effluence and elevating the prayers and petitions of his admirers.
The Saintly forged a well - defined relationship with the masses: they provided the latter with inspiration, were consulted in all matters, and were expected to intercede on behalf of their adherents with God and ensure they gained financial prosperity, health and male offspring. The pattern still characterizes Hasidic sects, though prolonged routinization in many turned the Rebbes into de facto political leaders of strong, institutionalized communities. The role of a Saint was obtained by charisma, erudition and appeal in the early days of Hasidism. But by the dawn of the 19th century, the Righteous began to claim legitimacy by descent from the masters of the past, arguing that since they linked matter with infinity, their abilities had to be associated with their own corporeal body. Therefore, it was accepted "there can be no Tzaddiq but the son of a Tzaddiq ''. Virtually all modern sects maintain this hereditary principle. For example, the Rebbes ' families maintain endogamy and marry almost solely with scions of other dynasties.
Some Hasidic "courts '', and not a few individual prominent masters, developed distinct philosophies with particular accentuation of various themes in the movement 's general teachings. Several of these Hasidic schools had lasting influence over many dynasties, while others died with their proponents. In the doctrinal sphere, the dynasties may be divided along many lines. Some are characterized by Rebbes who are predominantly Torah scholars and decisors, deriving their authority much like ordinary non-Hasidic rabbis do. Such "courts '' place great emphasis on strict observance and study, and are among the most meticulous in the Orthodox world in practice. Prominent examples are the House of Sanz and its scions, such as Satmar, or Belz. Other sects, like Vizhnitz, espouse a charismatic - populist line, centered on the admiration of the masses for the Righteous, his effervescent style of prayer and conduct and his purported miracle - working capabilities. Fewer still retain a high proportion of the mystical - spiritualist themes of early Hasidism, and encourage members to study much kabbalistic literature and (carefully) engage in the field. The various Ziditchover dynasties mostly adhere to this philosophy. Others still focus on contemplation and achieving inner perfection. No dynasty is wholly devoted to a single approach of the above, and all offer some combination with differing emphasis on each of those.
In 1812, a schism occurred between the Seer of Lublin and his prime disciple, the Holy Jew of Przysucha, due to both personal and doctrinal disagreements. The Seer adopted a populist approach, centered on the Righteous ' theurgical functions to draw the masses. He was famous for his lavish, enthusiastic conduct during prayer and worship, and extremely charismatic demeanour. He stressed that as Tzaddiq, his mission was to influence the common folk by absorbing Divine Light and satisfying their material needs, thus converting them to his cause and elating them. The Holy Jew pursued a more introspective course, maintaining that the Rebbe 's duty was to serve as a spiritual mentor for a more elitist group, helping them to achieve a senseless state of contemplation, aiming to restore man to his oneness with God which Adam supposedly lost when he ate the fruit of the Lignum Scientiae. The Holy Jew and his successors did neither repudiate miracle working, nor did they eschew dramatic conduct; but they were much more restrained in general. The Przysucha School became dominant in Central Poland, while populist Hasidism resembling the Lublin ethos often prevailed in Galicia. One extreme and renowned philosopher who emerged from the Przysucha School was Menachem Mendel of Kotzk. Adopting an elitist, hard - line attitude, he openly denounced the folkly nature of other Tzaddiqim, and rejected financial support. Gathering a small group of devout scholars who sought to attain spiritual perfection, whom he often berated and mocked, he always stressed the importance of both somberness and totality, stating it was better to be fully wicked than only somewhat good.
The Chabad school, limited to its namesake dynasty, but prominent, was founded by Shneur Zalman of Liadi and was elaborated by his successors, until the late 20th century. The movement retained many of the attributes of early Hasidism, before a clear divide between Righteous and ordinary followers was cemented. Chabad Rebbes insisted their adherents acquire proficiency in the sect 's lore, and not relegate most responsibility to the leaders. The sect emphasizes the importance of intellectually grasping the dynamics of the hidden divine aspect and how they affect the human psyche; the very acronym Chabad is for the three penultimate Sephirot, associated with the cerebral side of consciousness.
Another famous philosophy is that formulated by Nachman of Breslov and adhered to by Breslov Hasidim. In contrast to most of his peers who believed God must be worshiped through joy, Nachman portrayed the corporeal world in grim colors, as a place devoid of God 's immediate presence from which the soul yearns to liberate itself. He mocked the attempts to perceive the nature of infinite - finite dialectics and the manner in which God still occupies the Vacant Void and yet does not, stating these were paradoxical, beyond human understanding. Only naive faith in their reality would do. Mortals were in constant struggle to overcome their profane instincts, and had to free themselves from their limited intellects to see the world as it truly is.
Tzvi Hirsh of Zidichov, a major Galician Tzaddiq, was a disciple of the Seer of Lublin, but combined his populist inclination with a strict observance even among his most common followers, and great pluralism in matters pertaining to mysticism, as those were eventually emanating from each person 's unique soul.
Mordechai Yosef Leiner of Izbica promulgated a radical understanding of free will, which he considered illusory and also derived directly from God. He argued that when one attained a sufficient spiritual level and could be certain evil thoughts did not derive from his animalistic soul, then sudden urges to transgress revealed Law were God - inspired and may be pursued. This volatile, potentially antinomian doctrine of "Transgression for the Sake of Heaven '' is found also in other Hasidic writings, especially from the early period. His successors de-emphasized it in their commentaries. Leiner 's disciple Zadok HaKohen of Lublin also developed a complex philosophic system which presented a dialectic nature in history, arguing that great progress had to be preceded by crisis and calamity.
The Hasidic community is organized in a sect known as "court '' (Hebrew: חצר, hatzer; Yiddish: האף, Hoif). In the early days of the movement, a particular Rebbe 's following usually resided in the same town, and Hasidim were categorized by their leaders ' settlement: a Hasid of Belz, Vizhnitz and so forth. Later, especially after World War II, the dynasties retained the names of their original Eastern European settlements when moving to the West or Israel. Thus, for example, the "court '' established by Joel Teitelbaum in 1905 in Transylvania remained known after its namesake town, Sathmar, even though its headquarters lay in New York, and almost all other Hasidic sects likewise -- albeit some groups founded overseas were named accordingly, like the Boston dynasty.
Akin to his spiritual status, the Rebbe is also the administrative head of the community. Sects often possess their own synagogues, study halls and internal charity mechanisms, and ones sufficiently large also maintain entire educational systems. The Rebbe is the supreme figure of authority, and not just for the institutions. The rank - and - file Hasidim are also expected to consult with him on important matters, and often seek his blessing and advice. He is personally attended by aides known as Gabbai or Mashbak.
Many particular Hasidic rites surround the leader. On the Sabbath, holidays, and celebratory occasions, Rebbes hold a Tisch (table), a large feast for their male adherents. Together, they sing, dance, and eat, and the head of the sect shakes the hands of his followers to bless them, and often delivers a sermon. A Chozer, "repeater '', selected for his good memory, commits the text of the sermon to writing after the Sabbath, during which writing is forbidden. In many "courts '', the remnants of his meal, supposedly suffused with holiness, are handed over and even fought over. Often, a very large dish is prepared beforehand and the Rebbe only tastes it before passing it to the crowd. Apart from the gathering at noon, the third repast on Sabbath and the "Melaveh Malkah '' meal when it ends are also particularly important and an occasion for song, feasting, tales and sermons. A central custom, which serves as a major factor in the economics of most "courts '', is the Pidyon, "Ransom '', better known by its Yiddish name Kvitel, "little note '': adherents submit a written petition, which the master may assist with by virtue of his sanctity, adding a sum of money for either charity or the leader 's needs. Occasions in the "court '' serve as a pretext for mass gatherings, flaunting the power, wealth and size of each. Weddings of the leader 's family, for example, are often held with large multistoried stands (פארענטשעס, Parentches) filled with Hasidim surrounding the main floor, where the Rebbe and his relatives dine, celebrate and perform the Mitzvah tantz. This is a festive dance with the bride: both parties hold one end of a long sash, a Hasidic gartel, for reasons of modesty.
Allegiance to the dynasty and Rebbe is also a cause for tension and violence. Notable feuds between "courts '' include the 1926 -- 34 strife after Chaim Elazar Spira of Munkatch cursed the deceased Yissachar Dov Rokeach I of Belz; the 1980 -- 2012 Satmar - Belz collision after Yissachar Dov Rokeach II broke with the Orthodox Council of Jerusalem, which culminated when he had to travel in a bulletproof car; and the 2006 -- present Satmar succession dispute between brothers Aaron Teitelbaum and Zalman Teitelbaum, which saw mass riots. As in other Ultra-Orthodox groups, Hasidim who wish to disaffiliate from the community face threats, hostility and various punitive measures. A related phenomenon is the recent rise of Mashpi'im ("influencers ''). Once a title for an instructor in Chabad and Breslov only, it spread as the institutionalized nature of the established "courts '' led many adherents to seek guidance and inspiration from persons who did not declare themselves new leaders, but only Mashpi'im. Technically, they fill the original role of Rebbes in providing for spiritual welfare; yet they do not usurp the title, and can therefore be countenanced.
Most Hasidim use some variation of Nusach Sefard, a blend of Ashkenazi and Sephardi liturgies, based on the innovations of Rabbi Isaac Luria. Many dynasties have their own specific adaptation of Nusach Sefard; some, such as the versions of the Belzer, Bobover, and Dushinsky Hasidim, are closer to Nusach Ashkenaz, while others, such as the Munkacz version, are closer to the old Lurianic. Many sects believe that their version reflects Luria 's mystical devotions best. The Baal Shem Tov added two segments to Friday services on the eve of Sabbath: Psalm 107 before afternoon prayer, and Psalm 23 at the end of evening service.
Hasidim use the Ashkenazi pronunciation of Hebrew and Aramaic for liturgical purposes, reflecting their Eastern European background. Wordless, emotional melodies, nigunim, are particularly common in their services.
Hasidim lend great importance to kavana, devotion or intention, and their services tend to be extremely long and repetitive. Some courts nearly abolished the traditional specified times by which prayers must be conducted (zemanim) in order to prepare and concentrate. This practice, still enacted in Chabad for one, is controversial in many dynasties, which do follow the specifics of Jewish Law on praying earlier and on not eating beforehand. Another custom is daily immersion in a ritual bath by males for spiritual cleansing, at a rate much higher than is customary among other Orthodox Jews.
Within the Hasidic world, it is possible to distinguish different Hasidic groups by subtle differences in dress. Some details of their dress are shared by non-Hasidic Haredim. Much of Hasidic dress was historically the clothing of all Eastern European Jews, influenced by the style of Polish -- Lithuanian nobility. Furthermore, Hasidim have attributed religious origins to specific Hasidic items of clothing.
Hasidic men most commonly wear dark overclothes. On weekdays, they wear a long, black, cloth jacket called a rekel, and on Jewish Holy Days, the bekishe zaydene kapote (Yiddish, lit., satin caftan), a similarly long, black jacket, but of satin, traditionally silk. Indoors, the colorful tish bekishe is still worn. Some Hasidim wear a satin overcoat, known as rezhvolke. A rebbe 's rezhvolke might be trimmed with velvet. Most do not wear neckties.
On the Sabbath, the Hasidic Rebbes traditionally wore a white bekishe. This practice has fallen into disuse among most. Many of them wear a black silk bekishe that is trimmed with velvet (known as stro - kes or samet) and in Hungarian ones, gold - embroidered.
Various symbolic and religious qualities are attributed to Hasidic dress, though they are mainly apocryphal and the clothes ' origin is cultural and historical. For example, the long overcoats are considered modest, the Shtreimel is supposedly related to shaatnez and keeps one warm without using wool, and Sabbath shoes are laceless in order not to have to tie a knot, a prohibited action. A gartel divides the Hasid 's lower parts from his upper parts, implying modesty and chastity, and for kabbalistic reasons, Hasidim button their clothes right over left. Hasidim customarily wear black hats during the weekdays, as do nearly all Haredim today. A variety of hats are worn depending on the group: Chabad often pinch their hats to form a triangle on the top, Satmar wear an open - crown hat with rounded edges, and Samet (velvet) or biber (beaver) hats are worn by many Galician and Hungarian Hasidim.
Married Hasidim don a variety of fur headdresses on the Sabbath, once common among all wedded Eastern European Jewish males and still worn by non-Hasidic Perushim in Jerusalem. The most ubiquitous is the Shtreimel, which is seen especially among Galician and Hungarian sects like Satmar or Belz. A taller Spodik is donned by Polish dynasties such as Ger. A Kolpik is worn by unmarried sons and grandsons of many Rebbes on the Sabbath. Some Rebbes don it on special occasions.
There are many other distinct items of clothing. Such are the Gerrer hoyznzokn -- long black socks into which the trousers are tucked. Some Hasidim from Eastern Galicia wear black socks with their breeches on the Sabbath, as opposed to white ones on weekdays, particularly Belzer Hasidim.
Following a Biblical commandment not to shave the sides of one 's face, male members of most Hasidic groups wear long, uncut sidelocks called payot (or peyes). Some Hasidic men shave off the rest of their hair. Not every Hasidic group requires long peyos, and not all Jewish men with peyos are Hasidic, but all Hasidic groups discourage the shaving of one 's beard. Most Hasidic boys receive their first haircuts ceremonially at the age of three years (only the Skverrer Hasidim do this at their boys ' second birthday). Until then, Hasidic boys have long hair.
Hasidic women wear clothing adhering to the principles of modest dress in Jewish law. This includes long, conservative skirts and sleeves past the elbow, as well as covered necklines. Also, the women wear stockings to cover their legs; in some Hasidic groups, such as Satmar or Toldot Aharon, the stockings must be opaque. In keeping with Jewish law, married women cover their hair, using either a sheitel (wig), a tichel (headscarf), a shpitzel, a snood, a hat, or a beret. In some Hasidic groups, such as Satmar, women may wear two headcoverings -- a wig and a scarf, or a wig and a hat.
Hasidic Jews, like many other Orthodox Jews, typically produce large families; the average Hasidic family in the United States has 8 children. This is followed out of a desire to fulfill the Biblical mandate to "be fruitful and multiply ''.
Most Hasidim speak the language of their countries of residence, but use Yiddish among themselves as a way of remaining distinct and preserving tradition. Thus, children are still learning Yiddish today, and the language, despite predictions to the contrary, is not dead. Yiddish newspapers are still published, and Yiddish fiction is being written, primarily aimed at women. Even films in Yiddish are being produced within the Hasidic community. Some Hasidic groups, such as Satmar or Toldot Aharon, actively oppose the everyday use of Hebrew, which they consider a holy tongue. The use of Hebrew for anything other than prayer and study is, according to them, profane. Hence, Yiddish is the vernacular and common tongue for many Hasidim around the world.
Hasidic Tales are a literary genre, concerning both hagiography of various Rebbes and moralistic themes. Some are anecdotes or recorded conversations dealing with matters of faith, practice, and the like. The most famous tend to be terse and carry a strong and obvious point. They were often transmitted orally, though the earliest compendium is from 1815.
Many revolve around the Righteous. The Baal Shem, in particular, was subject to excessive hagiography. Characterized by vivid metaphors, miracles, and piety, each reflects the surroundings and era in which it was composed. Common themes include debating what is acceptable to pray for, whether or not the commoner may attain communion with the divine, and the meaning of wisdom. The tales were a popular, accessible medium for conveying the movement 's messages.
The various Hasidic groups may be categorized along several parameters, including their geographical origin, their proclivity for certain teachings, and their political stance. These attributes are quite often, but by no means always, correlated, and there are many instances when a "court '' espouses a unique combination. Thus, while most dynasties from the former Greater Hungary and Galicia are inclined to extreme conservatism and anti-Zionism, Rebbe Yekutiel Yehuda Halbertsam led the Sanz - Klausenburg sect in a more open and mild direction; and though Hasidim from Lithuania and Belarus are popularly perceived as prone to intellectualism, David Assaf noted this notion is derived more from their Litvak surroundings than their actual philosophies. Apart from those, each "court '' often possesses its unique customs, including style of prayer, melodies, particular items of clothing and the like.
On the political scale, "courts '' are mainly divided on their relations to Zionism. The right - wing, identified with Satmar, are hostile to the State of Israel, and refuse to participate in the elections there or receive any state funding. They are mainly affiliated with the Orthodox Council of Jerusalem and the Central Rabbinical Congress. The great majority belong to Agudas Israel, represented in Israel by the United Torah Judaism party. Its Council of Torah Sages now includes a dozen Rebbes. In the past, there were Religious Zionist Rebbes, mainly of the Ruzhin line, but there are virtually none today.
In 2005, Prof. Jacques Gutwirth estimated there were some 400,000 men, women, and children adhering to Hasidic sects worldwide, and that figure was expected to grow due to the high birth rates of Hasidic Jews. About 200,000, he assumed, lived in the State of Israel, another 150,000 in the United States, and a further 50,000 were scattered around the world, especially in Britain, but also in Antwerp, Montreal, Vienna, and other centers. In Israel, the largest Hasidic concentrations are in the Ultra-Orthodox neighbourhoods of Jerusalem -- including Ramot Alon, Batei Ungarin et cetera -- in the cities of Bnei Brak and El'ad, and in the West Bank settlements of Modi'in Illit and Beitar Illit. There is a considerable presence in other specifically Orthodox municipalities or enclaves, like Kiryat Sanz, Netanya. In the United States, most Hasidim reside in New York and New Jersey, though there are small communities across the entire country. In Brooklyn, the neighborhoods of Borough Park, Williamsburg, and Crown Heights all house particularly large populations, as does the hamlet of Monsey in upstate New York. In the same region, New Square and Kiryas Joel (in the town of Monroe) are rapidly growing all - Hasidic enclaves, one founded by the Skver dynasty and the other by Satmar. In Britain, Stamford Hill is home to the largest Hasidic community in the country, and there are others elsewhere in London and in Prestwich in Manchester. In Canada, Kiryas Tosh is a settlement populated entirely by Tosh Hasidim, and there are more adherents of other sects in and around Montreal.
There are more than a dozen Hasidic dynasties with a large following, and over a hundred which have small or minuscule adherence, sometimes below twenty people, with the presumptive Rebbe holding the title more as a matter of prestige. Many "courts '' became completely extinct during the Holocaust, like the Aleksander dynasty from Aleksandrów Łódzki, which numbered tens of thousands in 1939 and barely exists today.
The largest sect in the world is Satmar, founded in 1905 in the namesake city in Hungary and based in Williamsburg, Brooklyn and Kiryas Joel. Estimates claim as many as 120,000 adherents of all ages. Satmar is known for its conservatism and opposition to both Agudas Israel and Zionism, inspired by the legacy of Hungarian Ultra-Orthodoxy. The sect underwent a schism in 2006 and two competing factions emerged, led by rival brothers Aaron Teitelbaum and Zalman Teitelbaum. The second - largest "court '' worldwide is Ger, established in 1859 at Góra Kalwaria, near Warsaw. Ger lists some 10,000 households in its Israel registry alone, and there are more abroad. For decades, it was the dominant power in Agudas and espoused a moderate line toward Zionism and modern culture. Its origins lay in the rationalist Przysucha School of Central Poland. The current Rebbe is Yaakov Aryeh Alter. Another major group is Belz, established 1817 in namesake Belz, south of Lviv. An Eastern Galician dynasty drawing both from the Seer of Lublin 's charismatic - populist style and "rabbinic '' Hasidism, it espoused hard - line positions, but broke off from the Orthodox Council of Jerusalem and joined Agudas in 1979. It has between 6,000 and 8,000 affiliated households, and is led by Rebbe Yissachar Dov Rokeach. Yet another large dynasty is Vizhnitz, a charismatic sect founded in 1854 at Vyzhnytsia, Bukovina, to which some 7,000 families belong. A moderate sect involved in Israeli politics, it is split into several branches, which maintain cordial relations. The main partition is between Vizhnitz - Israel and Vizhnitz - Monsey, headed respectively by Rebbes Israel Hager and his uncle Mordecai Hager.
The Bobover dynasty, founded 1881 in Bobowa, West Galicia, claims some 2,000 - 3,000 households in total and has undergone bitter succession strife since 2005, eventually forming the "Bobov '' and "Bobov - 45 '' sects. Sanz - Klausenburg, divided into New York and Israeli branches, also purports to preside over 2,000 households. The Skver sect, established in 1848 in Skvyra near Kiev, likewise claims 2,000 - 3,000. The Shomer Emunim dynasties, originating in Jerusalem during the 1920s and known for their unique style of dressing imitating that of the Old Yishuv, have over 1,500 - 2,000 families, almost all in the larger "courts '' of Toldos Aharon and Toldos Avraham Yitzchak. Karlin Stolin, which rose as early as the 1760s in a quarter of Pinsk, also encompasses a few thousand adherents.
There are two other populous Hasidic sub-groups, which do not function as classical Rebbe - headed "courts '', but as decentralized movements, retaining some of the characteristics of early Hasidism. Breslov rose under its charismatic leader Nachman of Breslov in the early 19th century. Critical of all other Rebbes, he forbade his followers to appoint a successor upon his death in 1810. His acolytes led small groups of adherents, persecuted by other Hasidim, and disseminated his teachings. The original philosophy of the sect elicited great interest among modern scholars, and that led many newcomers to Orthodox Judaism ("repentants '') to join it. Numerous Breslov communities, each led by its own rabbis, now have thousands of full - fledged followers and far more admirers and semi-committed supporters. Chabad - Lubavitch, originating in the 1770s, did have hereditary leadership, but always stressed the importance of self - study rather than reliance on the Righteous. Its seventh and last leader, Menachem Mendel Schneerson, converted it into a vehicle for Jewish outreach. By his death in 1994, it had many more semi-engaged supporters than Hasidim in the strict sense, and they are still hard to distinguish: estimates for the number of Chabad affiliates of all sorts therefore range from 50,000 to 200,000. None succeeded Schneerson, and the sect operates as a large network of communities with independent leaders.
In the late 17th century, several social trends converged among the Jews who inhabited the southern periphery of the Polish -- Lithuanian Commonwealth, especially in contemporary Western Ukraine. These enabled the emergence and flourishing of Hasidism.
The first and most prominent was the popularization of the mystical lore of Kabbalah. For several centuries an esoteric teaching practiced surreptitiously only by a narrow stratum of the highly learned, it was transformed into almost household knowledge by a mass of cheap pamphlets printed by both Jewish and Christian publishers from the beginning of the century. The kabbalistic inundation was a major influence behind the rise of the heretical Sabbatean movement, led by Sabbatai Zevi, who declared himself Messiah in 1665. The propagation of Kabbalah made the Jewish masses susceptible to Hasidic ideas, themselves in essence a popularized version of the teaching -- indeed, Hasidism actually emerged when its founders determined to openly practice it instead of remaining a secret circle of ascetics as was the manner of almost all past kabbalists. The correlation between publicizing the lore and Sabbateanism did not escape the rabbinic elite, and caused vehement opposition to the new movement.
Another factor was the decline of the traditional authority structures. Jewish autonomy remained quite secure; later research debunked Simon Dubnow 's claim that the Council of Four Lands ' demise in 1764 was a culmination of a long process which destroyed judicial independence and paved the way for the Hasidic rebbes to serve as leaders (another long - held explanation for the sect 's rise advocated by Raphael Mahler, that the Khmelnytsky Uprising effected economic impoverishment and despair, was also refuted). However, the magnates and nobles held much sway over the nomination of both rabbis and communal elders, to such a degree that the masses often perceived them as mere lackeys of the land owners. Their ability to serve as legitimate arbiters in disputes -- especially those concerning the regulation of leasehold rights over alcohol distillation and other monopolies in the estates -- was severely diminished. The reduced prestige of the establishment, and the need for an alternative source of authority to pass judgement, left a vacuum which Hasidic charismatics eventually filled. They transcended old communal institutions, to which all the Jews of a locality were subordinate, and had groups of followers in each town across vast territories. Often supported by rising strata outside the traditional elite, whether nouveau riche or various low - level religious functionaries, they created a modern form of leadership.
Historians discerned other influences. The formative age of Hasidism coincided with the rise of numerous religious revival movements across the world, including the First Great Awakening in New England, German Pietism, Wahhabism in Arabia and the Russian Old Believers who opposed the established church. They all rejected the existing order, decrying it as stale and overly hierarchic. They offered what they described as more spiritual, candid and simple substitutes. Gershon David Hundert noted the considerable similarity between the Hasidic conceptions and this general background, rooting both in the growing importance attributed to the individual 's consciousness and choices.
Israel ben Eliezer (ca. 1690 -- 1760), known as the Baal Shem Tov ("Master of the Good Name '', acronym: "Besht ''), is considered the founder of Hasidism. Born apparently south of the Prut, in the northern frontier of Moldavia, he earned a reputation as a Baal Shem, "Master of the Name ''. These were common folk healers who employed mysticism, amulets and incantations at their trade. Little is known for certain about ben Eliezer. Though no scholar, he was sufficiently learned to become notable in the communal hall of study and marry into the rabbinic elite, his wife being the divorced sister of a rabbi; in his later years he was wealthy and famous, as attested by contemporary chronicles. Apart from that, most is derived from Hasidic hagiographic accounts. These claim that as a boy he was recognized by one "Rabbi Adam Baal Shem Tov '' who entrusted him with great secrets of the Torah passed in his illustrious family for centuries. The Besht later spent a decade in the Carpathian Mountains as a hermit, where he was visited by the Biblical prophet Ahijah the Shilonite who taught him more. At the age of thirty - six, he was granted heavenly permission to reveal himself as a great kabbalist and miracle worker.
By the 1740s, it is verified that he relocated to the town of Medzhybizh and became recognized and popular in all Podolia and beyond. It is well attested that he did emphasize several known kabbalistic concepts, formulating a teaching of his own to some degree. The Besht stressed the immanence of God and His presence in the material world, and that therefore, physical acts, such as eating, have actual influence on the spiritual sphere and may serve to hasten the achievement of communion with the divine (devekut). He was known to pray ecstatically and with great intention, again in order to provide channels for the divine light to flow into the earthly realm. The Besht stressed the importance of joy and contentment in the worship of God, rather than the abstinence and self - mortification deemed essential to become a pious mystic, and of fervent and vigorous prayer as a means of spiritual elation instead of severe asceticism -- though many of his immediate disciples reverted in part to the older doctrines, especially in disavowing sexual pleasure even in marital relations. In that, the "Besht '' laid the foundation for a popular movement, offering a far less rigorous course for the masses to gain a significant religious experience. And yet, he remained the guide of a small society of elitists, in the tradition of former kabbalists, and never led a large public as his successors did. While many later figures cited him as the inspiration behind the full - fledged Hasidic doctrine, the Besht himself did not practice it in his lifetime.
Israel ben Eliezer gathered a considerable following, drawing to himself disciples from far away. They were largely of elitist background, yet adopted the populist approach of their master. The most prominent was Rabbi Dov Ber the Maggid (preacher). He succeeded the Besht upon the latter 's death, though other important acolytes, mainly Jacob Joseph of Polonne, did not accept his leadership. Establishing himself in Mezhirichi, the Maggid set about greatly elaborating the Besht 's rudimentary ideas and institutionalizing the nascent circle into an actual movement. Ben Eliezer and his acolytes used the very old and common epithet hasidim, "pious ''; in the latter third of the 18th century, a clear differentiation arose between that sense of the word and what was at first described as "New Hasidism '', propagated to a degree by the Maggid and especially his successors.
Doctrine coalesced as Jacob Joseph, Dov Ber and the latter 's disciple Rabbi Elimelech of Lizhensk composed the three magna opera of early Hasidism, respectively: the 1780 Toldot Ya'akov Yosef, the 1781 Maggid d'varav le - Ya'akov and the 1788 No'am Elimelekh. Other books were also published. Their new teaching had many aspects. The importance of devotion in prayer was stressed to such degree that many waited beyond the prescribed time to properly prepare; the Besht 's recommendation to "elevate and sanctify '' impure thoughts rather than simply repress them during the service was expanded by Dov Ber into an entire precept, depicting prayer as a mechanism to transform thoughts and feelings from a primal to a higher state in a manner parallel to the unfolding of the Sephirot. But the most important was the notion of the Tzaddiq -- later designated by the general rabbinic honorific Admor (our master, teacher and rabbi) or by the colloquial Rebbe -- the Righteous One, the mystic who was able to elate and achieve communion with the divine, but unlike kabbalists past, did not practice it in secret, but as leader of the masses. He was able to bring down prosperity and guidance from the higher Sephirot, and the common people who could not attain such a state themselves would achieve it by "clinging '' to and obeying him. The Tzaddiq served as a bridge between the spiritual realm and the ordinary folk, as well as a simple, understandable embodiment of the esoteric teachings of the sect, which were still beyond the reach of most just as old - style Kabbalah before.
The various Hasidic Tzaddiqim, mainly the Maggid 's disciples, spread across Eastern Europe with each gathering adherents among the people and learned acolytes who could be initiated as leaders. The Righteous ' "courts '' in which they resided, attended by their followers to receive blessing and counsel, became the institutional centers of Hasidism, serving as its branches and organizational core. Slowly, various rites emerged in them, like the Sabbath Tisch or "table '', in which the Righteous would hand out food scraps from their meals, considered blessed by the touch of ones imbued with godly Light during their mystical ascensions. Another potent institution was the Shtibel, the private prayer gatherings opened by adherents in every town, which served as a recruiting mechanism. The Shtibel differed from the established synagogues and study halls, allowing their members greater freedom to worship when they pleased and also serving recreational and welfare purposes. Combined with its simplified message, more appealing to the common man, its honed organizational framework accounted for the exponential growth of Hasidic ranks.
From its original base in Podolia and Volhynia, the movement was rapidly disseminated during the Maggid 's lifetime and after his 1772 death. Twenty or so of Dov Ber 's prime disciples each brought it to a different region, and their own successors followed: Aharon of Karlin (I), Menachem Mendel of Vitebsk and Shneur Zalman of Liadi were the emissaries to the former Lithuania in the far north, while Menachem Nachum Twersky headed to Chernobyl in the east and Levi Yitzchok of Berditchev remained nearby. Elimelech of Lizhensk, his brother Zusha of Hanipol and Yisroel Hopsztajn established the sect in Poland proper. Vitebsk and Abraham Kalisker later led a small ascent (aliyah) of followers to the Land of Israel, establishing a Hasidic presence in the Galilee.
The spread of Hasidism also incurred organized opposition. Rabbi Elijah of Vilnius, one of the greatest authorities of the generation and a hasid and secret kabbalist of the old style, was deeply suspicious of their emphasis on mysticism rather than mundane Torah study, their threat to established communal authority, their resemblance to the Sabbatean movement and other details he considered infractions. In April 1772, he and the Vilnius community wardens launched a systematic campaign against the sect, placing an anathema upon them, banishing their leaders and sending letters denouncing the movement. Further excommunication followed in Brody and other cities. In 1781, during a second round of hostilities, the books of Jacob Joseph were burned in Vilnius. Another cause for strife emerged when the Hasidim adopted the Lurianic prayer rite, which they revised somewhat to Nusach Sefard; the first edition in Eastern Europe was printed in 1781 and received approbation from the anti-Hasidic scholars of Brody, but the sect quickly embraced the Kabbalah - infused tome and popularized it, making it their symbol. Their rivals, named Misnagdim, "opponents '' (a generic term which acquired an independent meaning as Hasidism grew stronger) soon accused them of abandoning the traditional Nusach Ashkenaz.
In 1798, opponents of Hasidism made accusations of espionage against Shneur Zalman of Liadi, and he was imprisoned by the Russian government for two months. Excoriatory polemics were printed and anathemas declared in the entire region. But Elijah 's death in 1797 denied the Misnagdim their powerful leader. In 1804, Alexander I of Russia allowed independent prayer groups to operate, the chief vessel through which the movement spread from town to town. The failure to eradicate Hasidism, which acquired a clear self - identity in the struggle and greatly expanded throughout it, convinced its adversaries to adopt a more passive method of resistance, as exemplified by Chaim of Volozhin. The growing conservatism of the new movement -- which on some occasions drew close to Kabbalah - based antinomian phraseology, as did the Sabbateans, but never crossed the threshold and remained thoroughly observant -- and the rise of common enemies slowly brought a rapprochement, and by the second half of the 19th century both sides basically considered each other legitimate.
The turn of the century saw several prominent new, fourth - generation tzaddiqim. Upon Elimelech 's death in the now - partitioned Poland, his place in Habsburg Galicia was assumed by Menachem Mendel of Rimanov, who was deeply hostile to the modernization the Austrian rulers attempted to force on the traditional Jewish society (though this same process also allowed his sect to flourish, as communal authority was severely weakened). The rabbi of Rimanov heralded the alliance the hasidim would form with the most conservative elements of the Jewish public. In Central Poland, the new leader was Jacob Isaac Horowitz, the "Seer of Lublin '', who was of a particularly populist bent and appealed to the common folk with miracle working and few strenuous spiritual demands. The Seer 's senior acolyte, Jacob Isaac Rabinovitz the "Holy Jew '' of Przysucha, gradually dismissed his mentor 's approach as overly vulgar and adopted a more aesthetic and scholarly approach, virtually without theurgy to the masses. The Holy Jew 's "Przysucha School '' was continued by his successor Simcha Bunim and especially the reclusive, morose Menachem Mendel of Kotzk. The most controversial fourth - generation tzaddiq was the Podolia - based Nachman of Breslov, who denounced his peers for becoming too institutionalized, much like the old establishment their predecessors challenged decades before, and espoused an anti-rationalist, pessimistic spiritual teaching, very different from the prevalent stress on joy.
The opening of the 19th century saw the Hasidic sect transformed. Once a rising force outside the establishment, the tzaddiqim now became an important and often dominant power in most of Eastern Europe. The slow process of encroachment, which mostly began with forming an independent Shtibel and culminated in the Righteous becoming an authority figure (either alongside or above the official rabbinate) for the entire community, overwhelmed many towns even in the Misnagdic stronghold of Lithuania, far more so in Congress Poland and the vast majority in Podolia, Volhynia and Galicia. It began to make inroads into Bukovina, Bessarabia and the westernmost frontier of autochthonic pre-WWII Hasidism, in northeastern Hungary, where the Seer 's disciple Moses Teitelbaum (I) was appointed in Ujhely.
Less than three generations after the Besht 's death, the sect grew to encompass hundreds of thousands by 1830. As a mass movement, a clear stratification emerged between the court 's functionaries and permanent residents (yoshvim, "sitters ''), the devoted followers who would often visit the Righteous on Sabbath, and the large public which prayed at Sefard Rite synagogues and was minimally affiliated.
All this was followed by a more conservative approach and power bickering among the Righteous. Since the Maggid 's death, none could claim the overall leadership. Among the several dozen active, each ruled over his own turf, and local traditions and customs began to emerge in the various courts which developed their own identity. The high mystical tension typical of a new movement subsided, and was soon replaced by a more hierarchical, orderly atmosphere.
The most important aspect of the routinization Hasidism underwent was the adoption of dynasticism. The first to claim legitimacy by right of descent from the Besht was his grandson, Boruch of Medzhybizh, appointed 1782. He held a lavish court with Hershel of Ostropol as jester, and demanded the other Righteous acknowledge his supremacy. Upon the death of Menachem Nachum Twersky of Chernobyl, his son Mordechai Twersky succeeded him. The principle was conclusively affirmed in the great dispute after Liadi 's demise in 1813: his senior acolyte Aharon HaLevi of Strashelye was defeated by his son, Dovber Schneuri, whose offspring retained the title for 181 years.
By the 1860s, virtually all courts were dynastic. Rather than single tzaddiqim with followings of their own, each sect would command a base of rank - and - file hasidim attached not just to the individual leader, but to the bloodline and the court 's unique attributes. Israel Friedman of Ruzhyn insisted on royal splendour, resided in a palace and his six sons all inherited some of his followers. With the constraints of maintaining their gains replacing the dynamism of the past, the Righteous or Rebbes / Admorim also silently retreated from the overt, radical mysticism of their predecessors. While populist miracle working for the masses remained a key theme in many dynasties, a new type of "Rebbe - Rabbi '' emerged, one who was both a completely traditional halakhic authority as well as a spiritualist. The tension with the Misnagdim subsided significantly.
But it was an external threat, more than anything else, that mended relations. While traditional Jewish society remained well entrenched in backward Eastern Europe, reports of the rapid acculturation and religious laxity in the West troubled both camps. When the Haskalah, the Jewish Enlightenment, appeared in Galicia and Congress Poland in the 1810s, it was soon perceived as a dire threat. The maskilim themselves detested Hasidism as an anti-rationalist and barbaric phenomenon, as did Western Jews of all shades, including the most right - wing Orthodox such as Rabbi Azriel Hildesheimer. In Galicia especially, hostility towards it defined the Haskalah to a large extent, from the staunchly observant Rabbi Zvi Hirsch Chajes and Joseph Perl to the radical anti-Talmudists like Osias Schorr. The Enlightened, who revived Hebrew grammar, often mocked their rivals ' lack of eloquence in the language. While a considerable proportion of the Misnagdim were not averse to at least some of the Haskalah 's goals, the Rebbes were unremittingly hostile.
The most distinguished Hasidic leader in Galicia in the era was Chaim Halberstam, who combined talmudic erudition and the status of a major decisor with his function as tzaddiq. He symbolized the new era, brokering peace between the small Hasidic sect in Hungary and its opponents. In that country, where modernization and assimilation were much more imminent than in the East, the local Righteous joined forces with those now termed Orthodox against the rising liberals. Rabbi Moses Sofer of Pressburg, while no friend to Hasidism, tolerated it as he combated the forces which sought modernization of the Jews; a generation later, in the 1860s, the Rebbes and the zealous ultra-Orthodox Hillel Lichtenstein allied closely.
Around the mid-19th century, over a hundred dynastic courts related by marriage were the main religious power in the territory enclosed between Hungary, former Lithuania, Prussia and inner Russia, with considerable presence in the former two. In Central Poland, the pragmatist, rationalist Przysucha school thrived: Yitzchak Meir Alter founded the court of Ger in 1859, and in 1876 Jechiel Danziger established Alexander. In Galicia and Hungary, apart from Halberstam 's House of Sanz, Tzvi Hirsh of Zidichov 's descendants each pursued a mystical approach in the dynasties of Zidichov, Komarno and so forth. In 1817, Sholom Rokeach became the first Rebbe of Belz. At Bukovina, the Hager line of Kosov - Vizhnitz was the largest court.
The Haskalah was always a minor force, but the Jewish national movements which emerged in the 1880s, as well as Socialism, proved much more appealing to the young. Progressive strata condemned Hasidism as a primitive relic, strong, but doomed to disappear, as Eastern European Jewry underwent slow yet steady secularization. The gravity of the situation was attested to by the foundation of Hasidic yeshivas (in the modern, boarding school - equivalent sense) to enculturate the young and preserve their loyalty: The first was established at Nowy Wiśnicz by Rabbi Shlomo Halberstam (I) in 1881. These institutions were originally utilized by the Misnagdim to inoculate their youth against Hasidic influence, but now the latter faced a similar crisis. One of the most contentious issues in this respect was Zionism; the Ruzhin dynasties were quite favourably disposed toward it, while Hungarian and Galician courts reviled it.
Outside pressure was mounting in the early 20th century. In 1912, many Hasidic leaders partook in the creation of the Agudas Israel party, a political instrument intended to safeguard what was now named Orthodox Judaism even in the relatively traditional East; the more hard - line dynasties, mainly Galician and Hungarian, opposed the Aguda as too lenient. Mass immigration to America, urbanization, World War I and the subsequent Russian Civil War uprooted the shtetls in which the local Jews had lived for centuries and which were the bedrock of Hasidism. In the new Soviet Union, civil equality, achieved for the first time, and a harsh repression of religion caused a rapid secularization. The few remaining Hasidim, especially of Chabad, continued to practice underground for decades. In the new states of the Interbellum era, the process was only somewhat slower. On the eve of World War II, strictly observant Jews were estimated to constitute no more than a third of the total Jewish population in Poland, the world 's most Orthodox country. While the Rebbes still had a vast base of support, it was aging and declining.
The Holocaust hit the Hasidim, easily identifiable and almost unable to disguise themselves among the larger populace due to cultural insularity, particularly hard. Hundreds of leaders perished with their flock, while the flight of many notable ones as their followers were being exterminated -- especially Aharon Rokeach of Belz and Joel Teitelbaum of Satmar -- elicited bitter recrimination. In the immediate post-war years, the entire movement seemed to teeter on the precipice of oblivion. In Israel, the United States, and Western Europe, the survivors ' children were at best becoming Modern Orthodox. While a century earlier the Haskalah depicted it as a medieval, malicious power, now it was so weakened that the popular cultural image was sentimental and romantic, what Joseph Dan termed "Frumkinian Hasidism '', as it began with the short stories of Michael Levi Rodkinson (Frumkin). Martin Buber was the major contributor to this trend, portraying the sect as a model of a healthy folk consciousness. The "Frumkinian '' style was very influential, later inspiring the so - called "Neo-Hasidism '', and also utterly ahistorical.
Yet, the movement proved more resilient than expected. Talented and charismatic Hasidic masters emerged, who reinvigorated their following and drew new crowds. In New York, the Satmar Rebbe Joel Teitelbaum formulated a fiercely anti-Zionist Holocaust theology and founded an insular, self - sufficient community which attracted many immigrants from Greater Hungary; already by 1961, 40 % of families were newcomers. Yisrael Alter of Ger created robust institutions, fortified his court 's standing in Agudas Israel and held tisch every week for 29 years. He halted the hemorrhage of his followers and retrieved many Litvaks (the contemporary, less adverse epithet for Misnagdim) and Religious Zionists whose parents were Gerrer Hasidim before the war. Chaim Meir Hager similarly restored Vizhnitz. Moses Isaac Gewirtzman founded the new Pshevorsk dynasty in Antwerp.
The most explosive growth was experienced in Chabad - Lubavitch, whose head Menachem Mendel Schneerson adopted a modern (he and his disciples ceased wearing the customary Shtreimel) and outreach - centered orientation. At a time when most Orthodox and Hasidim in particular rejected proselytization, he turned his sect into a mechanism devoted almost solely to it, blurring the difference between actual Hasidim and loosely affiliated supporters until researchers could scarcely define it as a regular Hasidic group. Another phenomenon was the revival of Breslov, which remained without an acting Tzaddiq since the rebellious Rebbe Nachman 's 1810 death. Its complex, existentialist philosophy drew many to it.
Very high fertility rates, increasing tolerance and multiculturalism on the part of the surrounding society, and the great wave of newcomers to Orthodox Judaism which began in the 1970s all cemented the movement 's status as very much alive and thriving. The clearest indication for that, noted Joseph Dan, was the disappearance of the "Frumkinian '' narrative which inspired much sympathy towards it from non-Orthodox Jews and others, as actual Hasidism returned to the fore. As numbers grew, "courts '' were again torn apart by schisms between Rebbes ' sons vying for power, a common occurrence during the golden age of the 19th century.
|
what was the purpose of austria and prussia's declaration of pillnitz quizlet | Declaration of Pillnitz - wikipedia
The Declaration of Pilnite, more commonly referred to as the Declaration of Pillnitz, was a statement issued on 27 August 1791 at Pillnitz Castle near Dresden (Saxony) by Frederick William II of Prussia and the Habsburg Holy Roman Emperor Leopold II who was Marie Antoinette 's brother. It declared the joint support of the Holy Roman Empire and of Prussia for King Louis XVI of France against the French Revolution.
Since the French Revolution of 1789, Leopold had become increasingly concerned about the safety of his sister, Marie - Antoinette, and her family but felt that any intervention in French affairs would only increase their danger. At the same time, many French aristocrats were fleeing France and taking up residence in neighbouring countries, spreading fear of the Revolution and agitating for foreign support to Louis XVI. After the Flight to Varennes in June 1791, Louis had been arrested and was imprisoned. On 6 July 1791, Leopold issued the Padua Circular, calling on the sovereigns of Europe to join him in demanding Louis ' freedom.
The declaration stated that Austria would go to war if and only if all the other major European powers also went to war with France. Leopold chose this wording so that he would not be forced to go to war; he knew that the British prime minister, William Pitt, did not support war with France. Leopold issued the declaration only to satisfy the French émigrés who had taken refuge in his country and were calling for foreign interference in their homeland.
Calling on European powers to intervene if Louis was threatened, the declaration was intended to serve as a warning to the French revolutionaries to stop infringing on the king 's prerogatives and to permit his resumption of power.
(The Pillnitz Conference itself dealt mainly with the Polish Question and the war of Austria against the Ottoman Empire.)
"His Majesty the Emperor and His Majesty the King of Prussia (...) declare together that they regard the actual situation of His Majesty the King of France as a matter of communal interest for all sovereigns of Europe. They hope that that interest will be recognized by the powers whose assistance is called in, and that they wo n't refuse, together with aforementioned Majesties, the most efficacious means for enabling the French king to strengthen, in utmost liberty, the foundations of a monarchical government suiting to the rights of the sovereigns and favourable to the well - being of the French. In that case, aforementioned Majesties are determined to act promptly and unanimously, with the forces necessary for realizing the proposed and communal goal. In expectation, they will give the suitable orders to their troups so that they will be ready to commence activity. ''
The National Assembly of France interpreted the declaration to mean that Leopold was going to declare war. Radical Frenchmen who called for war, such as Jacques Pierre Brissot, used it as a pretext to gain influence and declare war on 20 April 1792, leading to the campaigns of 1792 in the French Revolutionary Wars.
|
who has recorded how am i supposed to live without you | How Am I Supposed to Live Without You - wikipedia
"How Am I Supposed to Live Without You '' is a song written in 1983 by Doug James and Michael Bolton. The ballad has been recorded by many artists around the world, in several languages, becoming something of a modern pop standard. Instrumental versions of the song have been recorded featuring variously the piano, guitar, saxophone, pan flute, steel drum, and music box.
"How Am I Supposed to Live Without You '' was supposed to be recorded by the duo Air Supply. But when Arista President Clive Davis asked for permission to change the lyrics of the chorus, Bolton refused, and Davis released the song. Subsequently Laura Branigan recorded it as written, and it became the first major hit for the two songwriters. The song was also performed by actress Lisa Hartman on the soap opera Knots Landing. Bolton 's own rendition became a worldwide hit in early 1990.
As the second single from Branigan 's second album Branigan 2, "How Am I Supposed to Live Without You '' spent three weeks at number one on the Billboard Adult Contemporary chart and peaked at number twelve on the Hot 100 in early October 1983. Branigan 's single also hit the number one spot on the Adult Contemporary chart in Canada. This success came without benefit of a music video. Branigan performed the song on the syndicated music countdown show Solid Gold in late 1983 and on the popular holiday special Dick Clark 's New Year 's Rockin ' Eve. Branigan 2 went out of print in 2004, but Branigan 's original version can still be heard on the compilation albums The Best of Branigan (1995), The Essentials (2002) and The Platinum Collection (2006).
The single 's B - side was a newly written song over the music to the Italian song "Mama '', by Giancarlo Bigazzi and Umberto Tozzi. Branigan 's first major hit had been with "Gloria '', another English song written to an Italian hit by the duo.
In 1988, Michael Bolton recorded a version of the song for the album Soul Provider. The single reached number one on both the Billboard Hot 100 and Adult Contemporary charts and also won him a Grammy Award for Best Male Pop Vocal Performance. The release marked a turning point in Bolton 's career. After years of being primarily known as a songwriter, the single got him recognition as a performer and made him a certified superstar.
The single took off in October 1989, slowly climbed the Hot 100 and by mid January became the first new number one single of the 1990s, following "Another Day in Paradise '' by Phil Collins.
The beginning of the music video shows Bolton performing the selection whilst he is sitting in his living room, and small bits of story about his and his girlfriend 's relationship are told through fade - outs. As he is about to leave the apartment, already having packed his suitcases, he thinks of her and the time they spent together and seemingly decides against leaving; he then cuddles with his girlfriend. It is revealed, the next night, that he plans to give her a bracelet, which he quickly hides as he reads a newspaper before she enters the room. She surprises him with breakfast and they cuddle again. Later on, the two have a fight about something and she storms out of the apartment, and Bolton visibly feels guilty.
|
who scoring brazil's first goal in 1990 world cup | 1990 FIFA World Cup - wikipedia
The 1990 FIFA World Cup was the 14th FIFA World Cup, the quadrennial international football tournament. It was held from 8 June to 8 July 1990 in Italy, the second country to host the event twice (the first being Mexico in 1986). Teams representing 116 national football associations entered and qualification began in April 1988. 22 teams qualified from this process, along with host nation Italy and defending champions Argentina.
The tournament was won by West Germany, their third World Cup title. They beat Argentina 1 -- 0 at the Stadio Olimpico in Rome, a rematch of the previous final four years earlier. Italy finished third and England fourth, after both lost their semi-finals in penalty shootouts. This was the last tournament to feature a team from West Germany, with the country being reunified with East Germany a few months later in October, as well as teams from the Eastern Bloc prior to the end of the Cold War in 1991, as the Soviet Union, Romania, Czechoslovakia and Yugoslavia teams made appearances. Costa Rica, Ireland and the UAE made their first appearances in the finals. As of 2018, this was the last time the UAE qualified for a FIFA World Cup finals. The official match ball was the Adidas Etrusco Unico.
The 1990 World Cup is widely regarded as one of the poorest World Cups in terms of the games. It generated an average 2.2 goals per game -- a record low that still stands -- and a then - record 16 red cards, including the first ever dismissal in a final. Regarded as being the World Cup that has had perhaps the most lasting influence on the game as a whole, it saw the introduction of the pre-match Fair Play Flag (then inscribed with "Fair Play Please '') to encourage fair play. Defensive tactics led to the introduction of the back - pass rule in 1992 and three points for a win instead of two at future World Cups. The tournament also produced some of the World Cup 's best remembered moments and stories, including the emergence of African nations, in addition to what has become the World Cup soundtrack: "Nessun dorma ''.
The 1990 World Cup stands as one of the most watched events in television history, garnering an estimated 26.69 billion non-unique viewers over the course of the tournament. This was the first World Cup to be officially recorded and transmitted in HDTV by the Italian broadcaster RAI in association with Japan 's NHK. The huge success of the broadcasting model has also had a lasting impact on the sport. At the time it was the most watched World Cup in history in non-unique viewers, but was bettered by the 1994 and 2002 World Cups.
The vote to choose the hosts of the 1990 tournament was held on 19 May 1984 in Zürich, Switzerland. Here, the FIFA Executive Committee chose Italy ahead of the only rival bid, the USSR, by 11 votes to 5. This awarding made Italy only the second nation to host two World Cup tournaments, after Mexico had also achieved this with their 1986 staging. Italy had previously had the event in 1934, where they had won their first championship.
Austria, England, France, Greece, West Germany and Yugoslavia also submitted initial applications by the 31 July 1983 deadline. A month later, only England, Greece, Italy and the Soviet Union remained in the hunt after the other contenders all withdrew. All four bids were assessed by FIFA in late 1983, with the final decision over-running into 1984 due to the volume of paperwork involved. In early 1984, England and Greece also withdrew, leading to a two - horse race in the final vote. The Soviet boycott of the 1984 Olympic Games, announced on the eve of the World Cup decision, was speculated to have been a major factor behind Italy winning the vote so decisively, although this was denied by the FIFA President João Havelange.
116 teams entered the 1990 World Cup, including Italy as host nation and Argentina as reigning World Cup champions, who were both granted automatic qualification. Thus, the remaining 22 finals places were divided among the continental confederations, with 114 initially entering the qualification competition. Due to rejected entries and withdrawals, 103 teams eventually participated in the qualifying stages.
Thirteen places were contested by UEFA teams (Europe), two by CONMEBOL teams (South America), two by CAF teams (Africa), two by AFC teams (Asia), and two by CONCACAF teams (North and Central America and Caribbean). The remaining place was decided by a play - off between a CONMEBOL team and a team from the OFC (Oceania).
Both Mexico and Chile were disqualified during the qualification process; the former for fielding an overage player in a prior youth tournament, the latter after goalkeeper Roberto Rojas faked injury from a firework thrown from the stands, which caused the match to be abandoned. Chile were also banned from the 1994 qualifiers for this offence.
Three teams made their debuts, as this was the first World Cup to feature Costa Rica and the Republic of Ireland, and the only one to date to feature the United Arab Emirates.
Returning after long absences were Egypt, who appeared for the first time since 1934, the United States, who competed for the first time since 1950, Colombia, who appeared for the first time since 1962, Romania, who last appeared at the Finals in 1970 and Sweden and the Netherlands, both of which last qualified in 1978. Austria, Cameroon, Czechoslovakia and Yugoslavia also returned after missing the 1986 tournament.
Among the teams who failed to qualify were 1986 semi-finalists France (missing out their first World Cup since 1974), Denmark, Poland (for the first time since 1970), Portugal and Hungary.
The following 24 teams qualified for the final tournament.
Twelve stadiums in twelve cities were selected to host matches at the 1990 World Cup. The Stadio San Nicola in Bari and Turin 's Stadio delle Alpi were completely new venues opened for the World Cup.
The remaining ten venues all underwent extensive programmes of improvements in preparation for the tournament, forcing many of the club tenants of the stadia to move to temporary homes. Additional seating and roofs were added to most stadia, with further redevelopments seeing running tracks removed and new pitches laid. Due to structural constraints, several of the existing stadia had to be virtually rebuilt to implement the changes required.
Like España ' 82, the group stage of this tournament was organized in such a way that specific groups only played in two cities in close proximity to each other. Group A only played in Rome and Florence (Hosts Italy played all their competitive matches in Rome, except for their semi-final and third place matches, which were played in Naples and Bari, respectively), Group B played their matches in Naples and Bari (except for Argentina vs. Cameroon, which was the opening match of the tournament, played in Milan), Group C played their matches in Turin and Genoa, Group D played all their matches in Milan and Bologna, Group E played only in Udine and Verona, and Group F played on the island cities of Cagliari and Palermo. The cities that hosted the most World Cup matches were the two biggest cities in Italy: Rome and Milan, each hosting six matches, and Bari, Naples and Turin each hosted five matches. Cagliari, Udine and Palermo were the only cities of the 12 selected that did not host any knockout round matches.
The England national team, at the British government 's request, were made to play all their group matches in Cagliari on the island of Sardinia. Hooliganism, rife in English football in the 1980s, had followed the national team while they played friendlies on the European continent -- distrust of English fans was so high that the English FA 's reputation, and even diplomatic relations between the UK and Italy, were seen to be at risk if England played any group stage matches on the Italian mainland. Owing largely to British Sports Minister Colin Moynihan 's negative remarks about English fans in the weeks before the tournament, security around Cagliari during England 's three matches there was extremely heavy -- in addition to 7,000 local police and Carabinieri, highly trained Italian military special forces also patrolled the premises. The Italian authorities ' heavy presence proved to be justified, as there were several riots during the time England were playing their matches in Cagliari, leading to a number of injuries, arrests and even deportations.
Most of the construction cost in excess of their original estimates and total costs ended up being over £ 550 million (approximately $935 million). Rome 's Stadio Olimpico which would host the final was the most expensive project overall, while Udine 's Stadio Friuli, the newest of the existing stadia (opened 14 years prior), cost the least to redevelop.
Squads for the 1990 World Cup consisted of 22 players, as for the previous tournament in 1986. Replacement of injured players was permitted during the tournament at FIFA 's discretion. Two goalkeepers -- Argentina 's Ángel Comizzo and England 's Dave Beasant -- entered their respective squads during the tournament to replace injured players (Nery Pumpido and David Seaman).
41 match officials from 34 countries were assigned to the tournament to serve as referees and assistant referees. Officials in italics were only used as assistants during the tournament. Referees dressed only in traditional black jerseys for the final time at a World Cup (a red change shirt was used for two Group C games in which Scotland wore their navy blue shirts).
The six seeded teams for the 1990 tournament were announced on 7 December 1989. The seeds were then allocated to the six groups in order of their seeding rank (1st seed to Group A, 2nd seed to Group B, etc.).
The seeds were decided by FIFA based on the nations ' performance in, primarily, the 1986 World Cup with the 1982 World Cup also considered as a secondary influence. Six of the final eight in 1986 had qualified for the 1990 tournament, the missing nations being Mexico (quarter - final in 1986) and France (third place). Italy -- who were seeded first as hosts -- had not reached the final eight in 1986 and this left FIFA needing to exclude one of the three (qualified) nations who were eliminated in the 1986 quarter - finals: Brazil, England or Spain.
Owing to their performance in 1982 but also to their overall World Cup record, Brazil were seeded third and not considered to drop out of the seedings. FIFA opted to seed England ahead of Spain. Spain had only been eliminated in 1986 on penalties, albeit by fourth - placed Belgium, while England had been defeated in 90 minutes by eventual winners Argentina; both countries had also reached the second stage in the 1982 event, playing in the same group in the second group stage with England ending up ahead of Spain, but Spain had also appeared in the 1978 event, while England had failed to qualify. FIFA President João Havelange had reportedly earlier stated that Spain would be seeded.
Spanish officials believed the seeding was contrived to ensure England would be placed in Group F, the group to be held off the Italian mainland, in a bid to contain England 's hooliganism problems. Their coach Luis Suárez said, "We feel we 've been cheated... they wanted to seed England and to send it to Cagliari at all costs. So they invented this formula ''. FIFA countered that "the formula was based on the teams ' respective showings during the previous two World Cups. England merited the sixth position. This is in no way a concession to English hooliganism ''.
Meanwhile, the Netherlands also had an argument that on grounds of recent footballing form, they should be seeded, as the winners of the 1988 European Championship, in which both Spain and England had been eliminated in the group stages, while Belgium (fourth in the 1986 World Cup after beating Spain, and thus seeded in 1990) had failed to even qualify: but this argument was countered by the fact that the Netherlands had themselves failed to qualify for both the 1982 and 1986 World Cups, which was considered the most important factor in the decision not to seed them.
As it happened, the two teams considered the most unlucky not to be seeded, namely Spain and the Netherlands, were both drawn in groups against the two teams considered the weakest of the seeded nations, namely Belgium and England: and the arguments over the seeding positions fizzled out. England could be said to have justified their seeded position by narrowly winning their group ahead of the Netherlands: while Spain seemed to have made their own point about being worth a seeded position, by defeating Belgium to top their own group, in doing so gaining a measure of revenge for the fact that it was Belgium who had eliminated them in 1986.
Italy (1st) Argentina (2nd) Brazil (3rd) West Germany (4th) Belgium (5th) England (6th)
Cameroon Costa Rica Egypt South Korea United Arab Emirates United States
Colombia Czechoslovakia Republic of Ireland Romania Sweden Uruguay
Austria Netherlands Scotland Spain Soviet Union Yugoslavia
On 9 December 1989 the draw was conducted at the Palazzetto dello Sport in Rome, where the teams were drawn out from the three pots to be placed with the seeded teams in their predetermined groups. The only stipulation of the draw was that no group could feature two South American teams. The ceremony was hosted by Italian television presenter Pippo Baudo, with Italian actress Sophia Loren and opera singer Luciano Pavarotti conducting the draw alongside FIFA general secretary Sepp Blatter.
The draw show was FIFA 's most ambitious yet with Pelé, Bobby Moore and Karl - Heinz Rummenigge appearing, as well as a performance of the Italian version of the tournament 's official song "To Be Number One '' by Giorgio Moroder, performed as "Un'estate italiana '' by Edoardo Bennato and Gianna Nannini.
The event also featured the official mascot of this World Cup, Ciao, a stick figure player with a football head and an Italian tricolor body that formed the word "ITALIA '' when deconstructed and reconstructed. Its name is a greeting in Italian.
The finals tournament began in Italy on 8 June and concluded on 8 July. The format of the 1990 competition remained the same as in 1986: 24 qualified teams were divided into six groups of four. The top two teams and four best third - place finishers from the six groups advanced to the knockout stage, which eliminated the teams until a winner emerged. In total, 52 games were played.
The tournament generated a record low goals - per - game average and a then - record of 16 red cards were handed out. In the knockout stage, many teams played defensively for 120 minutes, with the intention of trying their luck in the penalty shoot - out, rather than risk going forward. Two exceptions were the eventual champions West Germany and hosts Italy, the only teams to win three of their four knockout matches in normal time. There were four penalty shoot - outs, a record subsequently equalled in the 2006, 2014 and 2018 tournaments. Eight matches went to extra time, a record equalled in the 2014 tournament.
Ireland and Argentina were prime examples of this trend of cautious defensive play; the Irish team fell behind in two of their three group matches and only equalised late in both games. Losing finalists Argentina, meanwhile, scored only five goals in the entire tournament (a record low for a finalist). Argentina also became the first team to advance twice on penalty shoot - outs and the first team to fail to score and have a player sent off in a World Cup final.
Largely as a result of this trend FIFA introduced the back - pass rule in time for the 1994 tournament to make it harder for teams to time - waste by repeatedly passing the ball back for their goalkeepers to pick up. Three, rather than two points would be awarded for victories at future tournaments to help further encourage attacking play.
Cameroon reached the quarter - finals, where they were narrowly defeated by England. They opened the tournament with a shock victory over reigning champions Argentina, before topping the group ahead of them, Romania and European Championship runners - up the Soviet Union. Their success was fired by the goals of Roger Milla, a 38 - year - old forward who came out of international retirement to join the national squad at the last moment after a personal request from Cameroonian President Paul Biya. Milla 's four goals and flamboyant goal celebrations made him one of the tournament 's biggest stars as well as taking Cameroon to the last eight. Most of Cameroon 's squad was made up of players who played in France 's premier football league, Ligue 1; French is one of Cameroon 's official languages, the country being a former French territory. In reaching this stage, they had gone further than any African nation had ever managed in a World Cup before, a feat only equalled twice since (by Senegal in 2002 and Ghana in 2010). Their success was African football 's biggest yet on the world stage and FIFA subsequently decided to allocate the CAF qualifying zone an additional place for the next World Cup tournament.
Despite the performances of nations such as Cameroon, Colombia, Ireland, Romania and Costa Rica, the semi-finalists consisted of Argentina, England, Italy and West Germany, all previous World Cup winners, with eight previous titles between them. This was only the second time in the history of the World Cup, after the 1970 tournament, that all four semi-finalists were former champions. The teams which finished first, second and third had also contested the two previous World Cup finals between themselves.
All times are Central European Summer Time (UTC + 2)
In the following tables:
The Group stage saw the twenty - four teams divided into six groups of four teams. Each group was a round - robin of six games, where each team played one match against each of the other teams in the same group. Teams were awarded two points for a win, one point for a draw and none for a defeat. The teams coming first and second in each group qualified for the Round of 16. The four best third - placed teams would also advance to the next stage.
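For readers who want to reproduce the standings arithmetic, the two - points - for - a - win system described above can be sketched in a few lines of Python. The fixtures below are hypothetical examples, not results from the tournament.

```python
# Sketch of the 1990 group-stage scoring: 2 points for a win, 1 for a draw, 0 for a loss.
# The fixtures below are hypothetical and only illustrate the arithmetic.
fixtures = [
    ("Team A", "Team B", 1, 0),   # (home, away, home goals, away goals)
    ("Team C", "Team D", 1, 1),
    ("Team A", "Team C", 0, 0),
    ("Team B", "Team D", 2, 1),
]

points = {}
for home, away, hg, ag in fixtures:
    points.setdefault(home, 0)
    points.setdefault(away, 0)
    if hg > ag:
        points[home] += 2         # win (three points for a win was only introduced later)
    elif hg < ag:
        points[away] += 2
    else:
        points[home] += 1         # draw: one point each
        points[away] += 1

for team, pts in sorted(points.items(), key=lambda kv: -kv[1]):
    print(team, pts)
```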
Typical of a World Cup staged in Europe, the matches all started at either 5:00 or 9:00 in the evening; this allowed the games to avoid being played in the heat of an Italian summer afternoon, when temperatures soar past 86 °F (30 °C) all over Italy.
If teams were level on points, they were ranked on the following criteria in order:
Hosts Italy won Group A with a 100 percent record. They beat Austria 1 -- 0 thanks to substitute Salvatore ' Totò ' Schillaci, who had played only one international before but would become a star during the tournament. A second 1 -- 0 victory followed against a United States team already thumped 5 -- 1 by Czechoslovakia. The Czechoslovaks ended runners - up in the group, while the USA 's first appearance in a World Cup Finals since 1950 ended with three consecutive defeats.
Cameroon defeated reigning champions Argentina. Despite ending the match with only nine men, the African team held on for a shock 1 -- 0 win, with contrasting fortunes for the brothers Biyik: François Omam scoring the winning goal, shortly after seeing Andre Kana sent off for a serious foul. In their second game the introduction of Roger Milla was the catalyst for a 2 -- 1 win over Romania, Milla scoring twice from the bench (making him the oldest goalscorer in the tournament). With progression assured, Cameroon slumped to a 4 -- 0 defeat in their final group game to the Soviet Union (in what would be their last World Cup due to the dissolution of the Soviet Union), who were striving to stay in the tournament on goal difference after successive 2 -- 0 defeats. Argentina lost their veteran goalkeeper, Nery Pumpido, to a broken leg during their victory over the USSR: his replacement, Sergio Goycochea, proved to be one of the stars of their tournament. In the final match, a 1 -- 1 draw between Romania and Argentina sent both through, equal on points and on goal difference but Romania having the advantage on goals scored: Romania were thus second, Argentina qualified as one of the best third - placed teams.
Costa Rica beat Scotland 1 -- 0 in their first match, lost 1 -- 0 to Brazil in their second, then saw off Sweden 2 -- 1 to claim a place in the second round. Brazil took maximum points from the group. They began with a 2 -- 1 win over Sweden, then beat both Costa Rica and Scotland 1 -- 0. Scotland 's 2 -- 1 win over Sweden was not enough to save them from an early return home as one of the two lowest - ranked third - placed teams.
Group D featured the most goals of all the groups, mostly due to two big wins by West Germany and the defensive inadequacies of a United Arab Emirates team that lost 2 -- 0 to Colombia, 5 -- 1 to West Germany and 4 -- 1 to Yugoslavia. The West Germans topped the group after a 4 -- 1 opening victory over group runners - up Yugoslavia.
The winners of Group E were Spain, for whom Michel hit a hat - trick as they beat South Korea 3 -- 1 in an unbeaten group campaign. Belgium won their first two games against South Korea and Uruguay to ensure their progress; Uruguay 's advance to the second round came with an injury time winner against South Korea to edge them through as the weakest of the third - placed sides to remain in the tournament.
Group F, featured the Netherlands, England, the Republic of Ireland and Egypt. In the six group games, no team managed to score more than once in a match. England beat Egypt 1 -- 0, the only match with a decisive result, and that was enough to win the group. The group containing England, Ireland and the Netherlands was remarkably similar to the group stage of the 1988 European Championship, which had eventually been won by the Netherlands, with England crashing out with three losses (to Ireland, the Netherlands and the USSR) and Ireland also narrowly failing to progress after losing to the Netherlands and drawing with the USSR. The results of the 1990 group, however, were different: England took the lead with an early goal for Lineker against Ireland, but Sheedy 's late equalizer gave them a share of the spoils. The Netherlands failed to replicate their form of two years earlier, only drawing against Egypt: they had taken a 1 - 0 lead, but without impressing, and Egypt were well worth their equalizer courtesy of a penalty by Abdelghani. England then had much the better of their goalless draw with the Netherlands: indeed they had the ball in the net once, from a free - kick by Pearce, but it was disallowed. For the second World Cup in succession, however, England lost their captain Bryan Robson to an injury which put him out of the tournament, just over halfway through their second match. Ireland and Egypt barely registered a shot on goal between them in the other 0 - 0 draw: after the first four matches all four teams had equal records with 2 draws, 1 goal for and 1 goal against. England 's victory over Egypt, thanks to a 58th - minute goal from Mark Wright, put them top of the group: in the other match, Gullit gave the Netherlands the lead against Ireland, but Niall Quinn scored a second - half equalizer and the two teams finished in second and third, still with identical records. Both teams qualified but they had to draw lots to place the teams in second and third place.
The Republic of Ireland and the Netherlands finished with identical records. With both teams assured of progressing, they were split by the drawing of lots to determine second and third place.
Ireland won the drawing of lots against the Netherlands for second place in Group F: the Netherlands were the only third - placed team not to have won any matches - or lost any: they progressed with three draws (3 points), ahead of Austria and Scotland who each had one win and two losses (2 points).
The knockout stage involved the 16 teams that qualified from the group stage of the tournament. There were four rounds of matches, with each round eliminating half of the teams entering that round. The successive rounds were: round of 16, quarter - finals, semi-finals, final. There was also a play - off to decide third / fourth place. For each game in the knockout stage, any draw at 90 minutes was followed by 30 minutes of extra time; if scores were still level there would be a penalty shoot - out (five penalties each, if neither team already had a decisive advantage, and more if necessary) to determine who progressed to the next round. Scores after extra time are indicated by (aet) and penalty shoot - outs are indicated by (p).
All times listed are local (UTC + 2)
Two of the ties -- Brazil vs Argentina and Italy vs Uruguay -- pitted former champion countries against each other and West Germany met the Netherlands in a rematch of the 1974 World Cup Final.
The all - South American game was won for Argentina by a goal from Claudio Caniggia with 10 minutes remaining after a run through the Brazilian defence by Diego Maradona and an outstanding performance from their goalkeeper Sergio Goycochea. It would later come to light that Branco had been offered water spiked with tranquillisers by Maradona and Ricardo Giusti during half time, to slow him down in the second half. Initially discredited by the press, Branco would be publicly proven right years later, when Maradona confessed the episode on a TV show in Argentina. As for Italy, a strong second half showing saw the hosts beat Uruguay 2 -- 0, thanks to another goal from Schillaci and one from Aldo Serena.
The match between West Germany and the Netherlands was held in Milan, and both sides featured several notable players from the two Milanese clubs (Germans Andreas Brehme, Lothar Matthäus and Jürgen Klinsmann for Internazionale, and Dutchmen Marco van Basten, Ruud Gullit and Frank Rijkaard for Milan). After 22 minutes Rudi Völler and Rijkaard were both dismissed after a number of incidents between the two players (including Rijkaard spitting on Völler) left the Argentine referee with no option but to send them both off. As the players walked off the pitch together, Rijkaard spat on Völler a second time. Early in the second half, Jürgen Klinsmann put the West Germans ahead and Andreas Brehme added a second with eight minutes left. A Ronald Koeman penalty for the Netherlands in the 89th minute narrowed the score to 2 -- 1 but the Germans saw the game out to gain some revenge for their exit to the Dutch in the previous European Championship.
Meanwhile, the heroics of Cameroon and Roger Milla continued in their game with Colombia. Milla was introduced as a second - half substitute with the game goalless, eventually breaking the deadlock midway in extra time. Three minutes later he netted a second after Colombian goalkeeper, René Higuita was dispossessed by Milla while well out of his goal, leaving the striker free to slot the ball into the empty net. Though the deficit was soon reduced to 2 -- 1, Cameroon held on to become the first African team ever to reach the World Cup quarter - finals. Costa Rica were comfortably beaten 4 -- 1 by Czechoslovakia, for whom Tomáš Skuhravý scored the tournament 's second and final hat - trick.
The Republic of Ireland 's match with Romania remained goalless after extra time and the Irish side won 5 -- 4 on penalties. David O'Leary converted the penalty that clinched Ireland 's place in the quarter - finals. Ireland thus became the first team since Sweden in 1938 to reach the last eight in a World Cup finals tournament without winning a match outright. Yugoslavia beat Spain 2 -- 1 after extra time, with Dragan Stojković scoring both the Yugoslavs ' goals. England were the final qualifier against Belgium, as midfielder David Platt 's swivelling volley broke the stalemate with the game moments away from a penalty shoot - out.
The first game of the last 8 saw Argentina and a Yugoslav side, reduced to 10 men after only half an hour, play out a goalless stalemate. The holders reached the semi-finals after winning the penalty shoot - out 3 -- 2, despite Maradona having his penalty saved. A second Argentine miss (by Pedro Troglio) looked to have eliminated them until goalkeeper Sergio Goycochea -- playing because first choice Nery Pumpido broke his leg during the group stage -- rescued his side by stopping the Yugoslavs ' final two spotkicks.
The Republic of Ireland 's World Cup run was brought to an end by a single goal from Schillaci in the first half of their quarter - final with hosts Italy. West Germany beat Czechoslovakia with a 25th minute Lothar Matthäus penalty.
The quarter - final between England and Cameroon was the only quarter - final to produce more than one goal. Despite Cameroon 's heroics earlier in the tournament, David Platt put England ahead in the 25th minute. At half - time, Milla was brought on. In the second half, the game was turned on its head during a five - minute stretch: first Cameroon were awarded a penalty from which Emmanuel Kunde scored the equaliser; then in the 65th minute Eugene Ekeke put Cameroon ahead. Cameroon came within eight minutes of reaching the semi-finals before they conceded a penalty, which Gary Lineker converted. Midway through extra time, England were awarded another penalty and Lineker again scored from the spot. England were through to the semi-finals for the first time since the days of Bobby Moore 24 years prior.
The first semi-final featured the host nation, Italy, and the world champions, Argentina in Naples. ' Toto ' Schillaci scored yet again to put Italy ahead in the 17th minute, but Claudio Caniggia equalised midway through the second half, breaking Walter Zenga 's clean sheet streak throughout the tournament. There were no more goals in the 90 minutes or in extra time despite Maradona (who played for Naples in Serie A at the time) showing glimpses of magic, but there was a sending - off: Ricardo Giusti of Argentina was shown the red card in the 13th minute of extra time. Argentina went through on penalties, winning the shoot - out 4 -- 3 after more heroics from Goycochea.
The semi-final between West Germany and England at Juventus 's home stadium in Turin was goalless at half - time. Then, in the 60th minute, a shot from Andreas Brehme was deflected by Paul Parker into his own net. England equalised with ten minutes left; Gary Lineker was the scorer. The game ended 1 -- 1. Extra time yielded more chances. Klinsmann was guilty of two glaring misses and both sides struck a post. England had another Platt goal disallowed for offside. The match went to penalties, and West Germany went on to win the shoot - out 4 -- 3.
The third - place match saw three goals in a 15 - minute spell. Roberto Baggio opened the scoring after a rare mistake by England 's goalkeeper Peter Shilton, in his final game before international retirement, presented a simple opportunity. A header by David Platt levelled the game 10 minutes later, but Schillaci was fouled in the penalty area five minutes after that, leading to a penalty. Schillaci himself got up to convert the kick, taking his tally to six goals and winning him the tournament 's Golden Boot. Nicola Berti had a goal ruled out minutes later, but the hosts claimed third place. England had the consolation prize of the Fair Play award, having received no red cards and the lowest average number of yellow cards per match.
The final between West Germany and Argentina has been cited as the most cynical and lowest - quality of all World Cup Finals. In the 65th minute, Argentina 's Pedro Monzon - himself only recently on as a substitute - was sent off for a foul on Jürgen Klinsmann. Monzon was the first player ever to be sent off in a World Cup Final.
Argentina, weakened by suspension and injury, offered little attacking threat throughout a contest dominated by the West Germans, who struggled to create many clear goalscoring opportunities. The only goal of the contest arrived in the 85th minute when Mexican referee Edgardo Codesal awarded a penalty to West Germany, after a foul on Rudi Völler by Roberto Sensini leading to Argentinian protests. Andreas Brehme converted the spot kick to settle the contest. In the closing moments, Argentina were reduced to nine after Gustavo Dezotti, who had already been yellow carded earlier in the match, received a red card when he hauled Jürgen Kohler to the ground during a stoppage in play. The 1 -- 0 scoreline provided another first: Argentina were the first team to fail to score in a World Cup Final.
With its third title (and three second - place finishes) West Germany -- in its final tournament before national reunification -- became the most successful World Cup nation at the time. West German manager Franz Beckenbauer became the only man to both captain (in 1974) and manage a World Cup winning team, and only the second man (after Mário Zagallo of Brazil) to win the World Cup as a player and as team manager. It was also the first time a team from UEFA won the final against a non-European team.
Salvatore Schillaci received the Golden Boot award for scoring six goals in the World Cup. This made him the second Italian footballer to have this honour, after Paolo Rossi won the award in 1982. In total, 115 goals were scored by 75 players (none credited as own goals).
After the tournament, FIFA published a ranking of all teams that competed in the 1990 World Cup finals based on progress in the competition, overall results and quality of the opposition.
|
who won the first ever series of britains got talent | Britain 's Got Talent (series 1) - wikipedia
Series One of Britain 's Got Talent, a British talent competition series, began broadcasting in the UK during 2007, from 9 June to 17 June on ITV. The success of America 's Got Talent helped to revive production of a British version of the show, after initial development for the programme was suspended when its originally planned host, Paul O'Grady, became involved in an argument with ITV and later defected to another channel. The judges chosen for the series were Piers Morgan, Amanda Holden and Simon Cowell; both Morgan and Cowell had been original choices during the original plans for the programme before O'Grady's departure. The hosts selected for the series were Ant & Dec, while a spin - off show on ITV2, entitled Britain 's Got More Talent would be hosted by Stephen Mulhern.
The first series was won by opera singer Paul Potts. During its broadcast, the series averaged around 8.4 million viewers.
After Simon Cowell pitched to ITV his plans for a televised talent competition, production was green - lighted for a full series after a pilot episode was created in mid-2005. A dispute between Paul O'Grady, the original choice as host for the programme, and the broadcaster, led to filming being suspended, with production not resuming until after the success of the first series of America 's Got Talent. When it did resume, production staff focused on a schedule of ten episodes to begin with for the first series of Britain 's Got Talent, with major auditions for potential acts held within the cities of Manchester, Birmingham, London and Cardiff. The initial choices for judges changed to begin with following O'Grady's decision to switch broadcasters, with it eventually finalised on Cowell, Piers Morgan, and Amanda Holden.
Of the participants who auditioned to be in the contest for this series, only 24 made it into the three live semi-finals, with eight appearing in each one, and six of these acts moving on to the live final. The following lists the results of each participant 's overall performance in this series:
During the initial broadcast of this series, two of the participants - Richard Bates and the Kit Kat Dolls - were removed from the programme, after the producers discovered that each had either withheld or failed to disclose information that would have made them ineligible to be a part of Britain 's Got Talent. These facts only came to light when, in each case, an outside source contacted the producers with the relevant information. In Bates ' case, his removal was the result of a request made by Lancashire Police. The producers learnt that he had not disclosed that he had been placed on the UK 's Violent and Sex Offender Register (at that time) for an offence he had committed in 2005. Lancashire Police 's request was accepted based upon concerns over the possibility of either the offence 's victim, or the victim 's family, being unsettled by Bates ' appearance on television.
In comparison, the removal of the Kit Kat Dolls was done under the instructions of ITV. Unbeknown to the broadcaster, or the producers, three of the members from the act were secretly working as prostitutes, which was later considered a serious breach of trust. This information was later brought to light through an undercover investigation conducted by the News of the World, who contacted ITV with their findings. The group were promptly disqualified as a direct result, per Section 24 of the programme 's terms and conditions regarding application forms:
"The Producer reserves the right to disqualify you if you have supplied untruthful, inaccurate or misleading personal details and / or information, have failed to abide by the Rules and / or are in breach of the terms hereof. ''
ITV released a statement later in June 2007, regarding the controversy caused by the act 's members:
"We 'd like to thank the News of the World for bringing this gross abuse of our trust to our attention. We have removed the group from the show. As a consequence of this incident the whole band has had to be punished. We feel let down as do their fellow bandmates. ''
Following the live broadcast of the third semi-final, Ofcom received several complaints from viewers in regards to the magical act conducted by magician Doctor Gore. The complaints focused on criticism over his act being unsuitable for the show, due to the gruesome nature of the magic tricks he performed. Although the production team stated in its defence that the performance had been reviewed carefully to ensure it would not be frightening, the regulator ruled that the programme had been in breach of the broadcasting code that concerned the protection of children from unsuitable material.
|
visa requirements for british citizens travelling to mexico | Visa policy of Mexico - Wikipedia
Mexican visas are documents issued by the National Migration Institute, dependent on the Secretariat of the Interior, with the stated goal of regulating and facilitating migratory flows.
A foreign national wishing to enter Mexico must obtain a visa unless they are a citizen of one of the 65 eligible visa - exempt countries or one of the three Electronic Authorization System eligible countries.
All visitors entering by land and traveling farther than 20 kilometres (12 miles) into Mexico or staying longer than 72 hours should obtain the Forma Migratoria Múltiple (FMM) to present at checkpoints within the country. In 2016 Mexico introduced an electronic version of the form (Forma Migratoria Múltiple Electrónica, or FMME), which can be obtained online for a fee of 390 Mexican pesos.
Nationals of the following 65 countries and jurisdictions holding normal passports do not require a visa to enter Mexico as tourists, visitors in transit or business visitors. Tourists and business visitors can stay in Mexico for up to 180 days. Visitors in transit can stay for up to 30 days.
Notes:
Nationals of any countries for which there is a visa requirement are exempt from it if they have any of the following:
The Electronic Authorization System (Sistema de Autorización Electrónica, SAE) is an online system, which allows citizens of the eligible countries travelling by air to obtain an electronic authorization to travel to Mexico for transit, tourism or business purposes without a consular visa. It is valid for 30 days and a single entry. Upon arrival, visitors are authorized to stay in Mexico as tourists for up to 180 days. SAE does not apply to travelers entering Mexico by land or sea, or those who are travelling on a non-participating airline, and they must hold a valid Mexican visa or an applicable visa issued by a third country.
Eligible countries are:
Passengers requiring a visa who are transiting in Mexico City can do so without a visa if their connection time does not exceed 24 hours and if their flight is nonstop, without intermediate stops within Mexican territory. They are escorted to the transit hall of the Mexico City International Airport in the custody of an agent of the National Immigration Service who holds passports and / or travel documents until the passenger boards the connecting flight.
Holders of diplomatic or service category passports issued by Algeria, Antigua and Barbuda, Armenia, Barbados, Bolivia, China, Cuba, Guatemala, Guyana, India, Indonesia, Kazakhstan, Laos, Malaysia, Mongolia, Morocco, Pakistan, Philippines, Russia, Saint Lucia, Saint Vincent and the Grenadines, Serbia, Thailand, Tunisia and United Arab Emirates, and of diplomatic passports issued by Azerbaijan, Benin, Dominican Republic, Ecuador, El Salvador, Ethiopia, Honduras, Kuwait, South Africa, Turkey and Ukraine, do not require a visa.
Holders of diplomatic passports only (not service passports) issued by Andorra, Austria, Czech Republic, Belgium, Denmark, Finland, Hungary, Lithuania, Marshall Islands, Micronesia, Netherlands, Norway, Palau, Portugal and Slovakia do not require a visa.
Holders of diplomatic or service category passports of Australia, Bahamas, Liechtenstein, Malta, Monaco and San Marino require a visa.
Holders of passports issued by the following countries who possess an APEC Business Travel Card (ABTC) containing the "MEX '' code on the reverse that it is valid for travel to Mexico can enter visa - free for business trips for up to 90 days.
ABTCs are issued to nationals of:
|
the human body is made up of matter in which of the following states | Composition of the human body - wikipedia
Body composition may be analyzed in terms of molecular type e.g., water, protein, connective tissue, fats (or lipids), hydroxylapatite (in bones), carbohydrates (such as glycogen and glucose) and DNA. In terms of tissue type, the body may be analyzed into water, fat, muscle, bone, etc. In terms of cell type, the body contains hundreds of different types of cells, but notably, the largest number of cells contained in a human body (though not the largest mass of cells) are not human cells, but bacteria residing in the normal human gastrointestinal tract.
Almost 99 % of the mass of the human body is made up of six elements: oxygen, carbon, hydrogen, nitrogen, calcium, and phosphorus. Only about 0.85 % is composed of another five elements: potassium, sulfur, sodium, chlorine, and magnesium. All 11 are necessary for life. The remaining elements are trace elements, of which more than a dozen are thought on the basis of good evidence to be necessary for life. All of the mass of the trace elements put together (less than 10 grams for a human body) do not add up to the body mass of magnesium, the least common of the 11 non-trace elements.
Not all elements which are found in the human body in trace quantities play a role in life. Some of these elements are thought to be simple bystander contaminants without function (examples: caesium, titanium), while many others are thought to be actively toxic, depending on the amount (cadmium, mercury, radioactive elements). The possible utility and toxicity of a few elements at levels normally found in the body (aluminium, for example) are debated. Functions have been proposed for trace amounts of cadmium and lead, although these are almost certainly toxic in amounts very much larger than normally found in the body. There is evidence that arsenic, an element normally considered toxic in higher amounts, is essential in ultratrace quantities in mammals such as rats, hamsters, and goats.
Some elements (silicon, boron, nickel, vanadium) are probably needed by mammals also, but in far smaller doses. Bromine is used abundantly by some (though not all) lower organisms, and opportunistically in eosinophils in humans. One study has found bromine to be necessary to collagen IV synthesis in humans. Fluorine is used by a number of plants to manufacture toxins (see that element) but in humans only functions as a local (topical) hardening agent in tooth enamel, and not in an essential biological role.
The average 70 kg (150 lb) adult human body contains approximately 7 × 10²⁷ atoms and contains at least detectable traces of 60 chemical elements. About 29 of these elements are thought to play an active positive role in life and health in humans.
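As a rough plausibility check of the atom count quoted above, the figure can be estimated from Avogadro's number and approximate elemental mass fractions; the fractions used in this sketch are rounded assumptions for illustration rather than values taken from this article.

```python
# Order-of-magnitude estimate of the number of atoms in a 70 kg human body.
# The mass fractions below are rounded assumptions, not authoritative values.
AVOGADRO = 6.022e23           # atoms per mole

body_mass_g = 70_000
composition = {               # element: (approx. mass fraction, atomic mass in g/mol)
    "O": (0.65, 16.0),
    "C": (0.18, 12.0),
    "H": (0.10, 1.0),
    "N": (0.03, 14.0),
}

moles_of_atoms = sum(body_mass_g * fraction / atomic_mass
                     for fraction, atomic_mass in composition.values())
print(f"{moles_of_atoms * AVOGADRO:.1e} atoms")   # roughly 7e27
```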
The relative amounts of each element vary by individual, mainly due to differences in the proportion of fat, muscle and bone in their body. Persons with more fat will have a higher proportion of carbon and a lower proportion of most other elements (the proportion of hydrogen will be about the same). The numbers in the table are averages of different numbers reported by different references.
The adult human body averages ~ 53 % water. This varies substantially by age, sex, and adiposity. In a large sample of adults of all ages and both sexes, the figure for water fraction by weight was found to be 48 ± 6 % for females and 58 ± 8 % water for males. Water is ~ 11 % hydrogen by mass but ~ 67 % hydrogen by atomic percent, and these numbers along with the complementary % numbers for oxygen in water, are the largest contributors to overall mass and atomic composition figures. Because of water content, the human body contains more oxygen by mass than any other element, but more hydrogen by atom - fraction than any element.
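The hydrogen fractions of water quoted above (about 11 % by mass but about 67 % by atom count) follow directly from the atomic masses; a minimal check:

```python
# Hydrogen's share of water (H2O) by mass and by number of atoms.
m_H, m_O = 1.008, 15.999               # standard atomic masses in g/mol

mass_fraction_H = 2 * m_H / (2 * m_H + m_O)
atom_fraction_H = 2 / 3                # two of the three atoms in each H2O molecule

print(f"by mass:  {mass_fraction_H:.1%}")   # about 11.2 %
print(f"by atoms: {atom_fraction_H:.1%}")   # about 66.7 %
```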
The elements listed below as "Essential in humans '' are those listed by the (US) Food and Drug Administration as essential nutrients, as well as six additional elements: oxygen, carbon, hydrogen, and nitrogen (the fundamental building blocks of life on Earth), sulfur (essential to all cells) and cobalt (a necessary component of vitamin B12). Elements listed as "Possibly '' or "Probably '' essential are those cited by the National Research Council (United States) as beneficial to human health and possibly or probably essential.
* Iron = ~ 3 g in men, ~ 2.3 g in women
Of the 94 naturally occurring chemical elements, 60 are listed in the table above. Of the remaining 34, it is not known how many occur in the human body.
Most of the elements needed for life are relatively common in the Earth 's crust. Aluminium, the third most common element in the Earth 's crust (after oxygen and silicon), serves no function in living cells, but is harmful in large amounts. Transferrins can bind aluminium.
Periodic table highlighting dietary elements
The composition of the human body is expressed in terms of chemicals:
The composition of the human body can be viewed on an atomic and molecular scale as shown in this article.
The estimated gross molecular contents of a typical 20 - micrometre human cell is as follows:
Body composition can also be expressed in terms of various types of material, such as:
There are many species of bacteria and other microorganisms that live on or inside the healthy human body. In fact, 90 % of the cells in (or on) a human body are microbes, by number (much less by mass or volume). Some of these symbionts are necessary for our health. Those that neither help nor harm humans are called commensal organisms.
|
what name is given to the rate of flow of electric charge | Electric current - wikipedia
An electric current is a flow of electric charge. In electric circuits this charge is often carried by moving electrons in a wire. It can also be carried by ions in an electrolyte, or by both ions and electrons such as in an ionised gas (plasma).
The SI unit for measuring an electric current is the ampere, which is the flow of electric charge across a surface at the rate of one coulomb per second. Electric current is measured using a device called an ammeter.
Electric currents cause Joule heating, which creates light in incandescent light bulbs. They also create magnetic fields, which are used in motors, inductors and generators.
The moving charged particles in an electric current are called charge carriers. In metals, one or more electrons from each atom are loosely bound to the atom, and can move freely about within the metal. These conduction electrons are the charge carriers in metal conductors.
The conventional symbol for current is I, which originates from the French phrase intensité de courant, (current intensity). Current intensity is often referred to simply as current. The I symbol was used by André - Marie Ampère, after whom the unit of electric current is named, in formulating Ampère 's force law (1820). The notation travelled from France to Great Britain, where it became standard, although at least one journal did not change from using C to I until 1896.
In a conductive material, the moving charged particles that constitute the electric current are called charge carriers. In metals, which make up the wires and other conductors in most electrical circuits, the positively charged atomic nuclei are held in a fixed position, and the negatively charged electrons are free to move, carrying their charge from one place to another. In other materials, notably the semiconductors, the charge carriers can be positive or negative, depending on the dopant used. Positive and negative charge carriers may even be present at the same time, as happens in an electrolyte in an electrochemical cell.
A flow of positive charges gives the same electric current, and has the same effect in a circuit, as an equal flow of negative charges in the opposite direction. Since current can be the flow of either positive or negative charges, or both, a convention is needed for the direction of current that is independent of the type of charge carriers. The direction of conventional current is arbitrarily defined as the same direction as positive charges flow.
Since electrons, the charge carriers in metal wires and most other parts of electric circuits, have a negative charge, as a consequence, they flow in the opposite direction of conventional current flow in an electrical circuit.
Since the current in a wire or component can flow in either direction, when a variable I is defined to represent that current, the direction representing positive current must be specified, usually by an arrow on the circuit schematic diagram. This is called the reference direction of current I. If the current flows in the opposite direction, the variable I has a negative value.
When analyzing electrical circuits, the actual direction of current through a specific circuit element is usually unknown. Consequently, the reference directions of currents are often assigned arbitrarily. When the circuit is solved, a negative value for the variable means that the actual direction of current through that circuit element is opposite that of the chosen reference direction. In electronic circuits, the reference current directions are often chosen so that all currents are toward ground. This often corresponds to the actual current direction, because in many circuits the power supply voltage is positive with respect to ground.
Ohm 's law states that the current through a conductor between two points is directly proportional to the potential difference across the two points. Introducing the constant of proportionality, the resistance, one arrives at the usual mathematical equation that describes this relationship: I = V / R,
where I is the current through the conductor in units of amperes, V is the potential difference measured across the conductor in units of volts, and R is the resistance of the conductor in units of ohms. More specifically, Ohm 's law states that the R in this relation is constant, independent of the current.
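A minimal sketch of the relation just stated, with arbitrary example values rather than anything taken from a real circuit:

```python
# Ohm's law: I = V / R
def current_amperes(voltage_volts: float, resistance_ohms: float) -> float:
    """Current through a conductor given the potential difference across it and its resistance."""
    return voltage_volts / resistance_ohms

print(current_amperes(12.0, 4.0))   # 3.0 A for 12 V across a 4-ohm resistance
```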
In alternating current (AC) systems, the movement of electric charge periodically reverses direction. AC is the form of electric power most commonly delivered to businesses and residences. The usual waveform of an AC power circuit is a sine wave. Certain applications use different waveforms, such as triangular or square waves. Audio and radio signals carried on electrical wires are also examples of alternating current. An important goal in these applications is recovery of information encoded (or modulated) onto the AC signal.
In contrast, direct current (DC) is the unidirectional flow of electric charge, or a system in which the movement of electric charge is in one direction only. Direct current is produced by sources such as batteries, thermocouples, solar cells, and commutator - type electric machines of the dynamo type. Direct current may flow in a conductor such as a wire, but can also flow through semiconductors, insulators, or even through a vacuum as in electron or ion beams. An old name for direct current was galvanic current.
Natural observable examples of electrical current include lightning, static electric discharge, and the solar wind, the source of the polar auroras.
Man - made occurrences of electric current include the flow of conduction electrons in metal wires such as the overhead power lines that deliver electrical energy across long distances and the smaller wires within electrical and electronic equipment. Eddy currents are electric currents that occur in conductors exposed to changing magnetic fields. Similarly, electric currents occur, particularly in the surface, of conductors exposed to electromagnetic waves. When oscillating electric currents flow at the correct voltages within radio antennas, radio waves are generated.
In electronics, other forms of electric current include the flow of electrons through resistors or through the vacuum in a vacuum tube, the flow of ions inside a battery or a neuron, and the flow of holes within a semiconductor.
Current can be measured using an ammeter.
At the circuit level, various techniques are used to measure current:
Joule heating, also known as ohmic heating and resistive heating, is the process by which the passage of an electric current through a conductor produces heat. It was first studied by James Prescott Joule in 1841. Joule immersed a length of wire in a fixed mass of water and measured the temperature rise due to a known current through the wire for a 30 minute period. By varying the current and the length of the wire he deduced that the heat produced was proportional to the square of the current multiplied by the electrical resistance of the wire.
This relationship is known as Joule 's First Law. The SI unit of energy was subsequently named the joule and given the symbol J. The commonly known unit of power, the watt, is equivalent to one joule per second.
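Joule's first law as described above can be turned into a short calculation; the current and resistance values below are arbitrary, and the 30 - minute duration simply echoes the length of Joule's measurement runs.

```python
# Joule heating: power P = I^2 * R (watts), energy Q = I^2 * R * t (joules).
def joule_heat(current_a: float, resistance_ohm: float, time_s: float) -> float:
    """Heat (in joules) dissipated by a steady current through a resistance over a time interval."""
    return current_a ** 2 * resistance_ohm * time_s

power_w = 2.0 ** 2 * 5.0                   # 20 W for 2 A through 5 ohms
energy_j = joule_heat(2.0, 5.0, 30 * 60)   # 36 000 J over a 30-minute run
print(power_w, energy_j)
```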
In an electromagnet a coil, of a large number of circular turns of insulated wire, wrapped on a cylindrical core, behaves like a magnet when an electric current flows through it. When the current is switched off, the coil loses its magnetism immediately.
Electric current produces a magnetic field. The magnetic field can be visualized as a pattern of circular field lines surrounding the wire that persists as long as there is current.
Magnetism can also produce electric currents. When a changing magnetic field is applied to a conductor, an Electromotive force (EMF) is produced, and when there is a suitable path, this causes current.
Electric current can be directly measured with a galvanometer, but this method involves breaking the electrical circuit, which is sometimes inconvenient. Current can also be measured without breaking the circuit by detecting the magnetic field associated with the current. Devices used for this include Hall effect sensors, current clamps, current transformers, and Rogowski coils.
When an electric current flows in a suitably shaped conductor at radio frequencies radio waves can be generated. These travel at the speed of light and can cause electric currents in distant conductors.
In metallic solids, electric charge flows by means of electrons, from lower to higher electrical potential. In other media, any stream of charged objects (ions, for example) may constitute an electric current. To provide a definition of current independent of the type of charge carriers, conventional current is defined as moving in the same direction as the positive charge flow. So, in metals where the charge carriers (electrons) are negative, conventional current is in the opposite direction as the electrons. In conductors where the charge carriers are positive, conventional current is in the same direction as the charge carriers.
In a vacuum, a beam of ions or electrons may be formed. In other conductive materials, the electric current is due to the flow of both positively and negatively charged particles at the same time. In still others, the current is entirely due to positive charge flow. For example, the electric currents in electrolytes are flows of positively and negatively charged ions. In a common lead - acid electrochemical cell, electric currents are composed of positive hydrogen ions (protons) flowing in one direction, and negative sulfate ions flowing in the other. Electric currents in sparks or plasma are flows of electrons as well as positive and negative ions. In ice and in certain solid electrolytes, the electric current is entirely composed of flowing ions.
In a metal, some of the outer electrons in each atom are not bound to the individual atom as they are in insulating materials, but are free to move within the metal lattice. These conduction electrons can serve as charge carriers, carrying a current. Metals are particularly conductive because there are a large number of these free electrons, typically one per atom in the lattice. With no external electric field applied, these electrons move about randomly due to thermal energy but, on average, there is zero net current within the metal. At room temperature, the average speed of these random motions is 10⁶ metres per second. Given a surface through which a metal wire passes, electrons move in both directions across the surface at an equal rate. As George Gamow wrote in his popular science book, One, Two, Three... Infinity (1947), "The metallic substances differ from all other materials by the fact that the outer shells of their atoms are bound rather loosely, and often let one of their electrons go free. Thus the interior of a metal is filled up with a large number of unattached electrons that travel aimlessly around like a crowd of displaced persons. When a metal wire is subjected to electric force applied on its opposite ends, these free electrons rush in the direction of the force, thus forming what we call an electric current. ''
When a metal wire is connected across the two terminals of a DC voltage source such as a battery, the source places an electric field across the conductor. The moment contact is made, the free electrons of the conductor are forced to drift toward the positive terminal under the influence of this field. The free electrons are therefore the charge carrier in a typical solid conductor.
For a steady flow of charge through a surface, the current I (in amperes) can be calculated with the following equation: I = Q / t,
where Q is the electric charge transferred through the surface over a time t. If Q and t are measured in coulombs and seconds respectively, I is in amperes.
More generally, electric current can be represented as the rate at which charge flows through a given surface: I = dQ / dt.
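Both forms can be illustrated numerically: the first gives an average current from a total transferred charge, and the second can be approximated by a finite difference on an assumed charge-versus-time profile (the quadratic profile below is purely illustrative).

```python
# Average current I = Q / t, and instantaneous current I = dQ/dt
# estimated here with a central finite difference.
def average_current(charge_c: float, time_s: float) -> float:
    return charge_c / time_s

def instantaneous_current(q_of_t, t: float, dt: float = 1e-6) -> float:
    return (q_of_t(t + dt) - q_of_t(t - dt)) / (2 * dt)

print(average_current(6.0, 2.0))          # 3.0 A: 6 C crossing the surface in 2 s

q = lambda t: 0.5 * t ** 2                # assumed charge profile Q(t) = t^2 / 2, so dQ/dt = t
print(instantaneous_current(q, t=2.0))    # ~2.0 A
```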
Electric currents in electrolytes are flows of electrically charged particles (ions). For example, if an electric field is placed across a solution of Na⁺ and Cl⁻ ions (and conditions are right) the sodium ions move towards the negative electrode (cathode), while the chloride ions move towards the positive electrode (anode). Reactions take place at both electrode surfaces, absorbing each ion.
Water - ice and certain solid electrolytes called proton conductors contain positive hydrogen ions ("protons '') that are mobile. In these materials, electric currents are composed of moving protons, as opposed to the moving electrons in metals.
In certain electrolyte mixtures, brightly coloured ions are the moving electric charges. The slow progress of the colour makes the current visible.
In air and other ordinary gases below the breakdown field, the dominant source of electrical conduction is via relatively few mobile ions produced by radioactive gases, ultraviolet light, or cosmic rays. Since the electrical conductivity is low, gases are dielectrics or insulators. However, once the applied electric field approaches the breakdown value, free electrons become sufficiently accelerated by the electric field to create additional free electrons by colliding, and ionizing, neutral gas atoms or molecules in a process called avalanche breakdown. The breakdown process forms a plasma that contains enough mobile electrons and positive ions to make it an electrical conductor. In the process, it forms a light emitting conductive path, such as a spark, arc or lightning.
Plasma is the state of matter where some of the electrons in a gas are stripped or "ionized '' from their molecules or atoms. A plasma can be formed by high temperature, or by application of a high electric or alternating magnetic field as noted above. Due to their lower mass, the electrons in a plasma accelerate more quickly in response to an electric field than the heavier positive ions, and hence carry the bulk of the current. The free ions recombine to create new chemical compounds (for example, breaking atmospheric oxygen into single oxygen (O₂ → 2O), which then recombines to create ozone (O₃)).
Since a "perfect vacuum '' contains no charged particles, it normally behaves as a perfect insulator. However, metal electrode surfaces can cause a region of the vacuum to become conductive by injecting free electrons or ions through either field electron emission or thermionic emission. Thermionic emission occurs when the thermal energy exceeds the metal 's work function, while field electron emission occurs when the electric field at the surface of the metal is high enough to cause tunneling, which results in the ejection of free electrons from the metal into the vacuum. Externally heated electrodes are often used to generate an electron cloud as in the filament or indirectly heated cathode of vacuum tubes. Cold electrodes can also spontaneously produce electron clouds via thermionic emission when small incandescent regions (called cathode spots or anode spots) are formed. These are incandescent regions of the electrode surface that are created by a localized high current. These regions may be initiated by field electron emission, but are then sustained by localized thermionic emission once a vacuum arc forms. These small electron - emitting regions can form quite rapidly, even explosively, on a metal surface subjected to a high electrical field. Vacuum tubes and sprytrons are some of the electronic switching and amplifying devices based on vacuum conductivity.
Superconductivity is a phenomenon of exactly zero electrical resistance and expulsion of magnetic fields occurring in certain materials when cooled below a characteristic critical temperature. It was discovered by Heike Kamerlingh Onnes on April 8, 1911 in Leiden. Like ferromagnetism and atomic spectral lines, superconductivity is a quantum mechanical phenomenon. It is characterized by the Meissner effect, the complete ejection of magnetic field lines from the interior of the superconductor as it transitions into the superconducting state. The occurrence of the Meissner effect indicates that superconductivity can not be understood simply as the idealization of perfect conductivity in classical physics.
In a semiconductor it is sometimes useful to think of the current as due to the flow of positive "holes '' (the mobile positive charge carriers that are places where the semiconductor crystal is missing a valence electron). This is the case in a p - type semiconductor. A semiconductor has electrical conductivity intermediate in magnitude between that of a conductor and an insulator. This means a conductivity roughly in the range of 10⁻² to 10⁴ siemens per centimeter (S⋅cm⁻¹).
In the classic crystalline semiconductors, electrons can have energies only within certain bands (i.e. ranges of levels of energy). Energetically, these bands are located between the energy of the ground state, the state in which electrons are tightly bound to the atomic nuclei of the material, and the free electron energy, the latter describing the energy required for an electron to escape entirely from the material. The energy bands each correspond to a large number of discrete quantum states of the electrons, and most of the states with low energy (closer to the nucleus) are occupied, up to a particular band called the valence band. Semiconductors and insulators are distinguished from metals because in a semiconductor or insulator the valence band is nearly filled under usual operating conditions, while very few (semiconductor) or virtually none (insulator) of the electrons are available in the conduction band, the band immediately above the valence band.
The ease of exciting electrons in the semiconductor from the valence band to the conduction band depends on the band gap between the bands. The size of this energy band gap serves as an arbitrary dividing line (roughly 4 eV) between semiconductors and insulators.
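As a rough illustration of that dividing line (not part of the original article), the minimal Python sketch below classifies a material from its band gap; the example gaps quoted for silicon, gallium arsenide and diamond are standard textbook figures assumed here, not values taken from this text.

```python
# Toy sketch of the rough classification described above. The ~4 eV cutoff is the
# article's approximate dividing line; the example band gaps are assumed values.
def classify(band_gap_ev: float) -> str:
    """Crudely label a material from its band gap in electronvolts."""
    if band_gap_ev <= 0:
        return "metal (bands overlap, no gap)"
    return "semiconductor" if band_gap_ev < 4.0 else "insulator"

for name, gap in [("silicon", 1.12), ("gallium arsenide", 1.42), ("diamond", 5.5)]:
    print(f"{name}: {gap} eV -> {classify(gap)}")
```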
With covalent bonds, an electron moves by hopping to a neighboring bond. The Pauli exclusion principle requires that the electron be lifted into the higher anti-bonding state of that bond. For delocalized states, for example in one dimension -- that is in a nanowire, for every energy there is a state with electrons flowing in one direction and another state with the electrons flowing in the other. For a net current to flow, more states for one direction than for the other direction must be occupied. For this to occur, energy is required, as in the semiconductor the next higher states lie above the band gap. Often this is stated as: full bands do not contribute to the electrical conductivity. However, as a semiconductor 's temperature rises above absolute zero, there is more energy in the semiconductor to spend on lattice vibration and on exciting electrons into the conduction band. The current - carrying electrons in the conduction band are known as free electrons, though they are often simply called electrons if that is clear in context.
Current density is a measure of the density of an electric current. It is defined as a vector whose magnitude is the electric current per cross-sectional area. In SI units, the current density is measured in amperes per square metre.
The current I flowing through a conductor is the integral of the current density over its cross-section, I = ∫ J · dA (a vector dot product), where I is the current in the conductor, J is the current density, and dA is the differential cross-sectional area vector.
The current density (current per unit area) J in materials with finite resistance is directly proportional to the electric field E in the medium, J = σE. The proportionality constant σ is called the conductivity of the material, whose value depends on the material concerned and, in general, on the temperature of the material.
The reciprocal of the conductivity σ of the material is called the resistivity ρ of the material, and the above equation, written in terms of resistivity, becomes E = ρJ.
Conduction in semiconductor devices may occur by a combination of drift and diffusion, the latter proportional to the diffusion constant D and the charge density. The current density is then J = σE + qD∇n, with q being the elementary charge and n the electron density. The carriers move in the direction of decreasing concentration, so for electrons a positive current results for a positive density gradient. If the carriers are holes, replace the electron density n by the negative of the hole density p.
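To make the drift - plus - diffusion form concrete, here is a minimal Python sketch (not from the article) that evaluates J = σE + qD·dn/dx in one dimension; every numerical value in it is an illustrative assumption rather than a measured figure.

```python
# Hypothetical one-dimensional evaluation of the drift-diffusion current density
# J = sigma*E + q*D*dn/dx. All numbers are assumed, illustrative values.

Q_E = 1.602e-19  # elementary charge, in coulombs

def current_density(sigma, e_field, diff_const, dn_dx):
    """Return J in A/m^2: drift term sigma*E plus diffusion term q*D*dn/dx."""
    return sigma * e_field + Q_E * diff_const * dn_dx

# Assumed sample values (roughly silicon-like, for illustration only)
j = current_density(sigma=1.0e2,        # conductivity, S/m
                    e_field=50.0,       # electric field, V/m
                    diff_const=3.6e-3,  # electron diffusion constant, m^2/s
                    dn_dx=1.0e21)       # carrier density gradient, m^-4
print(f"J = {j:.3e} A/m^2")
```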
In linear anisotropic materials, σ, ρ and D are tensors.
In linear materials such as metals, and under low frequencies, the current density across the conductor surface is uniform. In such conditions, Ohm 's law states that the current is directly proportional to the potential difference between the two ends (across) of that metal (ideal) resistor (or other ohmic device): I = V / R, where I is the current, measured in amperes; V is the potential difference, measured in volts; and R is the resistance, measured in ohms. For alternating currents, especially at higher frequencies, skin effect causes the current to spread unevenly across the conductor cross-section, with higher density near the surface, thus increasing the apparent resistance.
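The macroscopic form I = V / R and the microscopic form J = σE can be checked against each other for a uniform wire. The short Python sketch below does this with assumed dimensions and a copper - like resistivity; none of the numbers come from the article.

```python
# Consistency check (assumed numbers): Ohm's law I = V/R versus J = sigma*E
# for a uniform wire, where R = rho*L/A follows from E = rho*J.

length = 1.0            # wire length, m (assumed)
area = 1.0e-6           # cross-sectional area, m^2 (assumed)
resistivity = 1.68e-8   # ohm*m, roughly copper at room temperature (assumed)
voltage = 0.01          # potential difference across the wire, V (assumed)

conductivity = 1.0 / resistivity
resistance = resistivity * length / area

i_from_ohms_law = voltage / resistance                   # I = V/R
e_field = voltage / length                               # uniform field along the wire
i_from_current_density = conductivity * e_field * area   # I = J*A with J = sigma*E

print(f"I via V/R:         {i_from_ohms_law:.3f} A")
print(f"I via J = sigma*E: {i_from_current_density:.3f} A")
```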
The mobile charged particles within a conductor move constantly in random directions, like the particles of a gas. (More accurately, a Fermi gas.) To create a net flow of charge, the particles must also move together with an average drift rate. Electrons are the charge carriers in most metals and they follow an erratic path, bouncing from atom to atom, but generally drifting in the opposite direction of the electric field. The speed they drift at can be calculated from the equation I = nAvQ, where I is the electric current, n is the number of charged particles per unit volume (the carrier density), A is the cross-sectional area of the conductor, v is the drift velocity, and Q is the charge on each particle.
Typically, electric charges in solids flow slowly. For example, in a copper wire of cross-section 0.5 mm², carrying a current of 5 A, the drift velocity of the electrons is on the order of a millimetre per second. To take a different example, in the near - vacuum inside a cathode ray tube, the electrons travel in near - straight lines at about a tenth of the speed of light.
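A back - of - the - envelope check of the copper - wire figure quoted above can be done with the relation I = nAvQ; the free - electron density of copper used below is a standard textbook value assumed for the purpose of the sketch, not something stated in this article.

```python
# Reproducing the order of magnitude quoted above: electrons in a 0.5 mm^2 copper
# wire carrying 5 A drift at roughly a millimetre per second.
# The carrier density n for copper is an assumed textbook value.

I = 5.0        # current, A (from the example in the text)
A = 0.5e-6     # cross-sectional area, m^2 (0.5 mm^2)
Q = 1.602e-19  # charge per carrier (one electron), C
n = 8.5e28     # free-electron density of copper, m^-3 (assumed)

v_drift = I / (n * A * Q)  # rearranged from I = n*A*v*Q
print(f"drift velocity ~ {v_drift * 1000:.2f} mm/s")  # about 0.7 mm/s
```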
Any accelerating electric charge, and therefore any changing electric current, gives rise to an electromagnetic wave that propagates at very high speed outside the surface of the conductor. This speed is usually a significant fraction of the speed of light, as can be deduced from Maxwell 's Equations, and is therefore many times faster than the drift velocity of the electrons. For example, in AC power lines, the waves of electromagnetic energy propagate through the space between the wires, moving from a source to a distant load, even though the electrons in the wires only move back and forth over a tiny distance.
The ratio of the speed of the electromagnetic wave to the speed of light in free space is called the velocity factor, and depends on the electromagnetic properties of the conductor and the insulating materials surrounding it, and on their shape and size.
The magnitudes (not the natures) of these three velocities can be illustrated by an analogy with the three similar velocities associated with gases. (See also hydraulic analogy.)
|
what is the meaning of muslim name zeeshan | Zeeshan - wikipedia
Zeeshan (Zişan) or Zeshan (Zeşan) is a Turkish or Persian masculine given name, derived from words "Zee '' (possessor of, from Arabic ذو) and "Shan '' (high status or splendor, from Arabic شأن), sometimes simply translated as "princely '' or "Moon ''. This word is also used in Persian, Urdu and sometimes in Turkish poetry as an adjective.
This name is mostly used for males in India, Pakistan, Iran, and other Asian countries of the region, though the name is neutral in gender and can also be used for females (mostly in Turkey) with meanings "Moon '', "magnificent '' and "brilliant ''.
The diminutive or nickname for Zeeshan is Shaan or "Shani ''.
Jishan is a Bihari and Indian cognate for the same word and is written as such because of lack of the ' Z ' sound natively in Sanskrit derived languages.
|
who plays ice bear in we bare bears | We Bare Bears - wikipedia
We Bare Bears is an American animated sitcom created by Daniel Chong for Cartoon Network. The series premiered on July 27, 2015 and follows three bear siblings, Grizzly, Panda and Ice Bear (respectively voiced by Eric Edelstein, Bobby Moynihan, and Demetri Martin), and their awkward attempts at integrating with the human world in the San Francisco Bay Area. Based on Chong 's webcomic The Three Bare Bears, the pilot episode made its world premiere at the KLIK! Amsterdam Animation Festival, where it won in the "Young Amsterdam Audience '' category. Nintendo has also partnered with Cartoon Network to make ads featuring the show 's characters playing the Nintendo Switch.
We Bare Bears follows three adoptive bear brothers: Grizzly, Panda and Ice Bear. The bears attempt to integrate with human society, such as by purchasing food, making human companions or trying to become famous on the Internet, although these attempts see the bears struggle to do so due to the civilized nature of humans and their own animal instincts. However, in the end, they figure out that they have each other for support. One notable aspect of the show 's humor is the bears ' ability to form a "bear stack ''. As its name implies, the bears stack on top of each other, which serves as their unique way of transportation. Occasionally, the bears share adventures with their friends, such as child prodigy Chloe, bigfoot Charlie, internet sensation Nom Nom, Park Ranger Tabes and produce saleswoman Lucy.
Occasionally, there are flashback episodes which chronicle the adventures of the bears as cubs trying to find a home. So far the only cub episodes of the series are "The Road '', "Pet Shop '', "The Island '', "Baby Bears on a Plane '', "$100 '' and "The Fair ''. There have been two episodes centered on one bear, "Burrito '' (Grizz), and "Yuri and the Bear '' (Ice Bear). There is a Panda one in development.
The show was created by cartoonist Daniel Chong, who had previously worked as a story artist for Pixar and Illumination Entertainment. The show is based on his webcomic The Three Bare Bears, which also features the identifying characters. The comic was published from 2010 to 2011. Billed as a comedy, the show is a production of Cartoon Network Studios, which developed the program with Chong as part of their shorts development program. It was announced during the network 's 2014 upfront.
Nom Nom and Charlie were initially voiced by Ken Jeong and Tom Arnold, respectively, before getting recast sometime before airing. The show 's visual simplistic look was inspired by classic hand - drawn animation akin to Charlie Brown and The Many Adventures of Winnie - the - Pooh.
The pilot episode made its world premiere as part of two separate venues of the KLIK! Amsterdam Animation Festival at the EYE Film Institute Netherlands: the "Animated Shorts 5 '', and the "Animated Shorts for Kids '' ages 9 to 12. The short was screened alongside the Dutch premiere of Clarence, the Steven Universe episodes "Mirror Gem '' and "Ocean Gem '', and a live interview with the creator of the latter series, Rebecca Sugar. Described by the institute as "hilarious and endearing '', the pilot won in the "Young Amsterdam Audience '' category.
We Bare Bears has received generally positive reviews from critics. Kevin Johnson from The A.V. Club gave "Everyday Bears '' a B. He thought the show was charming enough to enjoy, but not "must - watch ''. He praised the voice actors, but was disappointed that the show did not thoroughly explore the discrimination humans have towards the bears and ended by calling it a "very good ' just okay ' kind of show '' until the discrimination is explored.
We Bare Bears premiered on Cartoon Network in Canada on July 27, 2015 and on Cartoon Network in the United Kingdom and Ireland on September 7, 2015. The series debuted on Cartoon Network channels in Australia and New Zealand and the Philippines on November 16. It premiered on Cartoon Network in India on November 29, 2015, and in Italy on December 8, 2015. We Bare Bears premiered in Pakistan on Cartoon Network on July 13, 2016 and airs on every Saturday and Sunday at 11: 00 PM, according to Pakistan Standard Time. In Germany, Austria and Switzerland it airs on Cartoon Network (Pay - TV) and Disney Channel (Free - TV). In the Czech Republic it airs on Cartoon Network (Pay - TV) and ČT Déčko (Free - TV).
Penguin Random House announced in 2014 that it will publish books based on various programs for Cartoon Network, including We Bare Bears. The books are produced out of the company 's Cartoon Network Books imprint, a division of the Penguin Young Readers Group, and is based on a partnership with the network that started in 2013.
|
where is the biggest roller coaster in canada | List of roller Coaster rankings - wikipedia
Roller coasters are amusement rides developed for amusement parks and modern theme parks. During the 16th and 17th centuries, rides consisting of wooden sleds that took riders down large slides made from ice were popular in Russia. The first roller coasters, in which the train was attached to a wooden track, appeared in France in the early 1800s. Although wooden roller coasters are still being produced, steel roller coasters are more common and can be found on every continent except Antarctica.
Ranked by height, speed, length, inversions, and steepness, roller coasters are also rated through public opinion polls. Amusement parks often compete to build the tallest, fastest, and longest rides to attract thrill seekers and boost overall park attendance. However, such records usually do not last long. When Magnum XL - 200 opened in 1989, it began a new era of roller coasters and increased competition among parks to set new world records. The Magnum XL - 200 was the first complete - circuit roller coaster over 200 feet (61 m) tall. Other notable roller coasters include Formula Rossa, which reaches a top speed of 149 miles per hour (240 km / h), Kingda Ka, which stands at 456 feet (139 m) tall, and Steel Dragon 2000, which measures 8,133 feet (2,479 m) in length.
This listing contains all types of roller coaster inversions.
|
what is the solar radius of the sun | Solar radius - wikipedia
Solar radius is a unit of distance used to express the size of stars in astronomy. The solar radius is usually defined as the radius to the layer in the Sun 's photosphere where the optical depth equals 2 / 3: 1 R⊙ ≈ 6.957 × 10⁸ m (695,700 km).
The solar radius is approximately 695,700 kilometres (432,300 miles), which is about 10 times the average radius of Jupiter, 110 times the radius of the Earth, and 1 / 215th of an astronomical unit, the distance of the Earth from the Sun. It varies slightly from pole to equator due to its rotation, which induces an oblateness in the order of 10 parts per million. (See 1 gigametre for similar distances.)
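The ratios quoted above are easy to verify; the following Python sketch does so using standard reference values for the Earth radius, Jupiter radius and astronomical unit, which are assumptions on my part rather than figures taken from this article.

```python
# Checking the size comparisons quoted in the text. The Earth/Jupiter radii and
# the astronomical unit are assumed standard values, not taken from the article.

R_SUN_KM = 695_700.0      # solar radius (IAU nominal value quoted in the text)
R_EARTH_KM = 6_371.0      # mean Earth radius (assumed)
R_JUPITER_KM = 69_911.0   # mean Jupiter radius (assumed)
AU_KM = 149_597_870.7     # astronomical unit (assumed)

print(f"Sun / Earth radii:   {R_SUN_KM / R_EARTH_KM:.0f}")    # ~109, i.e. 'about 110 times'
print(f"Sun / Jupiter radii: {R_SUN_KM / R_JUPITER_KM:.1f}")  # ~10
print(f"AU in solar radii:   {AU_KM / R_SUN_KM:.0f}")         # ~215, i.e. 1/215 AU
```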
The unmanned SOHO spacecraft was used to measure the radius of the Sun by timing transits of Mercury across the surface during 2003 and 2006. The result was a measured radius of 696,342 ± 65 kilometres (432,687 ± 40 miles).
Haberreiter, Schmutz & Kosovichev (2008) determined the radius corresponding to the solar photosphere to be 695,660 ± 140 kilometres (432,263 ± 87 miles). This new value is consistent with helioseismic estimates; the same study showed that previous estimates using inflection point methods had been overestimated by approximately 300 km (190 mi).
In 2015, the International Astronomical Union passed Resolution B3, which defined a set of nominal conversion constants for stellar and planetary astronomy. Resolution B3 defined the nominal solar radius (symbol R_⊙^N) to be equal to exactly 695,700 km. The nominal values were adopted to help astronomers avoid confusion when quoting stellar radii in units of the Sun 's radius, even when future observations will likely refine the Sun 's actual photospheric radius (which is currently only known to about ± 100 -- 200 km accuracy).
Solar radii are a popular unit when discussing spacecraft that travel close to the Sun, including two probes of the 2010s.
|
where is the stadium being built in las vegas | Las Vegas stadium - wikipedia
Las Vegas Stadium is the working name for a domed stadium under construction in Paradise, Nevada for the Las Vegas Raiders of the National Football League (NFL) and the UNLV Rebels football team of the University of Nevada, Las Vegas (UNLV). It is located on about 62 acres west of Mandalay Bay at Russell Road and Hacienda Avenue and between Polaris Avenue and Dean Martin Drive, just west of Interstate 15. Construction of the $1.9 billion stadium began in September 2017 and is expected to be completed in time for the 2020 NFL season.
In January 2016, reports emerged that Las Vegas Sands was considering developing a stadium in conjunction with Majestic Realty and UNLV, on a 42 - acre site owned by UNLV. UNLV had been in the market for a new stadium to replace Sam Boyd Stadium since at least 2011. Raiders owner Mark Davis visited Las Vegas on January 29 to tour the site and meet with Sands chairman Sheldon Adelson and other local figures. The Raiders, who had been trying to get a new, long - term stadium built for the team since the 1980s, had just missed out on relocating to Los Angeles that same month and were at an impasse in Oakland. In order for the team to relocate to Las Vegas a new stadium was required, since Sam Boyd Stadium was undersized for the NFL and there were no other professional - caliber stadiums in Nevada.
On March 21, 2016, when asked about Las Vegas, Davis said, "I think the Raiders like the Las Vegas plan, '' and "it 's a very very very intriguing and exciting plan '', referring to the stadium proposal. Davis also met with Nevada Governor Brian Sandoval about the stadium plan. On April 1, 2016, Davis toured Sam Boyd Stadium with UNLV football coach Tony Sanchez, athletic director Tina Kunzer - Murphy, adviser Don Snyder and school president Len Jessup to evaluate whether it could serve as a temporary home for the team and to further explore the possibility of the Raiders moving to Las Vegas.
On April 28, 2016, Davis said he wanted to move the Raiders to Las Vegas and pledged $500 million toward the construction of the proposed $2.4 billion domed stadium. "Together we can turn the Silver State into the silver and black state, '' Davis said.
In the spring of 2016, the board of directors of Las Vegas Sands rejected Adelson 's stadium proposal. Adelson decided to move ahead with the stadium as an individual investment, pledging $650 million of his personal wealth to the project.
The viability of the Tropicana Avenue site was called into serious question in June 2016, when Southwest Airlines objected to the location because its proximity to the northern end of one of McCarran Airport 's runways could have a negative impact on the safety and capacity of air traffic at the airport. The list of potential locations soon expanded to nine candidates, including the sites of the Wild Wild West casino, the Wynn golf course, the Riviera casino, the Las Vegas Festival Grounds, and Cashman Center. By September, the list was narrowed to two possibilities: the Bali Hai Golf Club, south of Mandalay Bay, and a vacant lot on Russell Road, just west of Interstate 15.
On August 25, 2016, the Raiders filed a trademark application for "Las Vegas Raiders '' on the same day renderings of a proposed stadium design were released. On September 15, 2016, the Southern Nevada Tourism Infrastructure Committee unanimously voted to recommend and approve $750 million for the Las Vegas stadium plan.
Majestic Realty revealed in October 2016 that it had withdrawn from the stadium project.
Sandoval called a special session of the Nevada Legislature to consider the stadium and other tourism - related proposals in October 2016. The funding bill for the stadium was approved by a 16 -- 5 vote in the Senate and by 28 -- 13 in the Assembly, and was signed into law by Sandoval on October 17. The bill increased a hotel tax to provide the $750 million in funding.
The Raiders filed relocation papers on January 19 to move from Oakland to Las Vegas. On January 26, 2017, the Raiders submitted a proposed lease agreement for the stadium. It was reported that the Raiders had selected the Russell Road site as the stadium location, the team would pay one dollar in rent, and that they could control the naming rights for both the stadium and plaza and in addition keep signage sponsorship revenue.
Days after the Raiders ' announced proposal, Adelson dropped out of the stadium project, pulling his proposed $650 million contribution, and shortly after this announcement Goldman Sachs (one of the backers of stadium proposal) withdrew as well. ESPN reported on January 30, 2017, that the Raiders were expected to increase their contribution from $500 million to $1.15 billion.
On March 6, the Raiders revealed Bank of America would be replacing the Adelson portion of the funding.
NFL owners voted to approve the move by a near unanimous margin of 31 to 1 on March 27. Only Stephen M. Ross, owner of the Miami Dolphins, voted against the relocation. The next day, the Raiders and the Las Vegas Stadium Authority began accepting deposits for season tickets for the new stadium. The Raiders announced that they planned to remain in Oakland until the stadium was complete.
The Raiders closed the purchase of the land for the stadium at the Russell Road site on May 1. The purchase price was reported at $77.5 million. In a Las Vegas Stadium Authority meeting on May 11, it was announced that Mortenson Construction and McCarthy Construction would jointly develop the stadium. Mortenson Construction previously worked on U.S. Bank Stadium in Minneapolis for the Minnesota Vikings. The stadium authority approved a stadium lease with the Raiders on May 18. The lease was to be for 30 years with four successive extension options of five years each.
On September 18, construction activity began on the stadium site with site preparation. The Raiders officially broke ground for the new stadium on November 13. The ceremony featured NFL Commissioner Roger Goodell, Raiders owner Mark Davis, his mother Carol Davis, various Raiders legends including Howie Long, Jim Plunkett, Tom Flores, and Ray Guy, and Las Vegas and Nevada politicians such as Governor Brian Sandoval, Las Vegas Mayor Carolyn Goodman, Clark County Commissioner Steve Sisolak and stadium authority head Steve Hill. The event was hosted by George Lopez and included other celebrities like Carlos Santana, longtime Vegas icon Wayne Newton, and Howie D and Nick Carter of the Backstreet Boys. There was also a tribute to the victims of the 2017 Las Vegas shooting that happened nearby, with Judith Hill and the Las Vegas House of Blues Gospel Choir performing ' Rise up ' while 58 beams of light, one for each of the people shot and killed, were lit.
In January, construction crews began blasting caliche rock with dynamite to excavate and create the stadium bowl. On February 3, the Raiders opened a 7,500 - square - foot stadium preview center at Town Square, located a few miles from the stadium site, featuring interactive exhibits and team memorabilia, with plans for simulations of views from individual seats and a large - scale stadium model.
The budget for development of the stadium was estimated at $1.9 billion. Of this, an estimated $375 million was to be spent on land and infrastructure costs, $1.35 billion on construction, and $100 million on a Raiders practice facility, with the remaining $100 million as a contingency allowance for unexpected costs.
The financing for the project was expected to come in the form of $750 million in public funding, $500 million from the Raiders, and $650 million lent by Bank of America. The public portion of the funding will come from municipal bonds issued by Clark County, backed by the proceeds of a special tax on hotel rooms in the Las Vegas area, which was initiated in March 2017. The Raiders ' contribution was expected to include a $200 million loan from the NFL 's stadium upgrade program, $250 million from sales of personal seat licenses at the stadium, and $50 million from cash reserves.
Local government can not receive any rent or revenue sharing from the stadium, because such an arrangement would not be compatible with the tax - exempt status of the bonds. Proponents instead argued that the public financing would be justified by increased economic activity and tax revenue related to the stadium. Critics have argued that the economic projections were based on overly optimistic assumptions.
For Las Vegas Stadium, Mark Davis retained the same architecture firm, MANICA Architecture, that had designed the previously proposed Carson Stadium near Los Angeles. The stadium as proposed was a 10 - level domed stadium with a clear ETFE roof, a silver and black exterior, and large retractable curtain - like side windows facing the Las Vegas Strip. The design included a large torch at one end that would house a flame in honor of Al Davis, the late long - time owner of the Raiders.
Updated renderings released after the relocation vote passed show the stadium with a roll - in natural grass field similar to the one at University of Phoenix Stadium in Glendale, Arizona. In an August 17, 2017 Las Vegas Stadium Authority meeting it was revealed that the stadium will have a designated pickup / drop off loop for ride sharing services such as Uber and Lyft, a first for a stadium in the NFL.
The stadium will replace Sam Boyd Stadium and will serve as the home of both the Raiders and the UNLV Rebels football program. In addition, it will host various events now held at Sam Boyd, such as the Las Vegas Bowl.
Stadium backers project 20 to 25 additional events per year, with plausible possibilities including the Super Bowl, the Pro Bowl, the NFL Draft, the NCAA Final Four, the USA Sevens rugby tournament, the Monster Jam World Finals, boxing matches, Ultimate Fighting Championship events, neutral - site college football games, international soccer matches, concerts, and corporate shows.
David Beckham visited Las Vegas in 2016 to advocate for the stadium as a possible home for his Major League Soccer expansion team, although he ultimately announced the launch of the team with a stadium in Miami.
In August 2017, the Raiders, along with the city and the state, submitted a bid to the Canada - Mexico - United States World Cup bid committee to be considered as a site for the 2026 FIFA World Cup. If the three nations are chosen to host the World Cup, the stadium would be in the running to be one of at least 12 venues for the tournament.
|
who plays the master in buffy season 1 | Master (Buffy the vampire Slayer) - wikipedia
The Master is a fictional character on the action - horror / fantasy television series Buffy the Vampire Slayer (1997 -- 2003). He is a centuries - old vampire portrayed by Mark Metcalf, determined to open the portal to hell below Sunnydale High School in the fictional town of Sunnydale where the main character Buffy Summers lives. The premise of the series is that Buffy (Sarah Michelle Gellar) is a Slayer, a teenage girl endowed with superhuman strength and other powers which she uses to kill vampires and other evil beings. Each season of the series Buffy and the small group of family and friends who work with her, nicknamed the Scooby Gang, must defeat an evil force referred to as the Big Bad; the villain is usually trying to bring on an apocalypse. The Master is the first season 's Big Bad.
The Master is the head of an ancient order of vampires, a classic Old World villain devoted to ritual and prophecy. He has been entombed beneath Sunnydale for 60 years as the patriarch of a cult posed opposite Buffy, a character who was created to subvert media tropes about frail women falling victim to evil characters. Her youth and insistence on asserting her free will makes her unique in the Master 's experience, but he is devoted to fulfilling a prophecy that states he will kill the Slayer and initiate the extermination of all humanity.
Buffy the Vampire Slayer was originally conceived for a 1992 feature film that pitched Buffy against a similar villain controlling vampires below Los Angeles. Disappointed by the final film, screenwriter and series creator Joss Whedon reworked his script into a television series more in line with his original vision. He and the staff writers employ horror elements in the series to represent real - life conflicts for the adolescent characters, while frequently undercutting the horror aspect of the show with comedy. Sunnydale High School is situated atop a portal to hell called a Hellmouth, which Whedon uses to symbolize the high - school - as - hell experience. Pragmatically, Whedon admitted that placing the high school on a Hellmouth allows the writers to confront the main characters with an endless array of evil creatures.
Veteran character actor Mark Metcalf appeared in heavy prosthetic make - up for the role of the Master, belying his iconic performance in the film National Lampoon 's Animal House (1978) as Douglas C. Neidermeyer, a strident rule - following ROTC officer (and the associated role in Twisted Sister 's "We 're Not Gonna Take It '' music video). In 2011, Metcalf acknowledged that his Animal House role would probably live much longer than he, but also recognized his roles on Seinfeld -- where he plays a similarly named character called "Maestro '' -- and Buffy the Vampire Slayer as his favorites. Many actors auditioned for the part, but Metcalf, according to Whedon, played it with more complexity, bringing a "sly and kind of urbane '' sensitivity and a charm to the villainy of the character.
At the beginning of the series, Buffy has left behind a destructive past that has labeled her as a trouble - maker at school and instilled in her the fear that the actions she has had to take to be a successful Slayer are responsible for breaking apart her parents ' marriage. She arrives at Sunnydale High School believing that she has made a fresh start. Her mother Joyce (Kristine Sutherland) is unaware of her daughter 's vocation and stresses that Buffy 's time in Sunnydale should be as peaceful as possible.
When Buffy arrives in the school library, however, she finds that the new librarian, Giles (Anthony Head), is expecting her and ready to continue her training. His expectation that she is there to take up her role as Slayer upsets her; she wants nothing to do with him or with Slaying. Giles is, in fact, Buffy 's new Watcher, a mentor who will teach her about the demons she must face, as well as supervise her training in weapons and battle strategy. Although she desperately desires to be a mere high school student, she is unsuccessful in avoiding her destiny to fight vampires. On her first day she finds a vampire 's victim at school, and resumes her work as the Slayer.
Because it debuted midway through the 1996 -- 97 television season, the first season of Buffy has only 12 episodes as opposed to the standard 22 in subsequent seasons. The Master is first seen in the series premiere "Welcome to the Hellmouth '', which was aired immediately before the second episode "The Harvest '', which reveals more of the Master 's character and backstory. Although the Master 's identity is never revealed on screen, Joss Whedon wrote in the pilot 's script that his name was Heinrich Joseph Nest, roughly 600 years old. This contradicts information presented in the first season that indicates The Master predates written history, as is discussed below.
In "Welcome to the Hellmouth '' the Master is presented as one of the "old ones, '' a vampire with extraordinary physical and mental powers, but weakened through long isolation and needing to feed on people; he is raised from a pool of blood by his acolyte Luke (Brian Thompson). The head of a cult called the Order of Aurelius, the Master attempted to open the Hellmouth in 1937, placing himself in a church to do so. An earthquake swallowed the church during the Master 's attempt, and he has been living in the ruins for 60 years. He is imprisoned by a mystical force, unable to leave his underground lair, so he commands his minions to find people for him to feed off.
The Master 's incarceration underground was a device used by the writers to avoid having Buffy meet him and then thwart his attempts to kill her each week. Whedon was concerned that audiences would consider this implausible and that weekly confrontations would leave no tension for the season finale when Buffy and the Master would finally meet and battle each other.
In "The Harvest '', employing a ritualized dark ceremony which can be used only once in a century, the Master makes Luke his "vessel '': every time Luke feeds, power will be transmitted to the Master. Luke goes to The Bronze, the local nightclub frequented by Buffy and her friends and begins to feed on the patrons before Buffy -- following a delay caused by getting grounded by her mother -- can kill him. Although Luke successfully feeds on a couple of victims, Buffy stakes him, thereby leaving the Master contained, robbed of his proxy, and with insufficient power to break the mystic shield that confines him underground.
The majority of vampires on the series have a human face that can turn into what Whedon and the characters call "vamp face ''. When shown immediately before feeding, the vampire characters transform with prosthetic make - up and computer - generated effects, giving them prominent brows and cheekbones, sharpened yellow teeth, and yellow eyes. Whedon intended to use the vamp face to be able to place vampires around Buffy in different locations -- especially at school -- to highlight the element of surprise by illustrating that the characters often face friends and peers who appear normal, but have dark sides. Simultaneously, the vamp face shows that Buffy is killing monsters instead of people.
Whedon made a decision to have the Master in permanent vamp face to indicate that he is so ancient he predates humanity. The Master never shows a human face; the make - up specialist conceived him as bat - like, intentionally making him look more like an animal. His facial make - up, bald head, extremely long fingernails, and black costume all refer directly to the 1922 German Expressionist film Nosferatu, directed by F.W. Murnau. Like the vampire of that film, Count Orlok, the Master lives in a state of furious isolation from which he is desperate to escape. According to author Matthew Pateman, the Master 's presentation underscores both his great age and his European - ness -- he is emphatically Old World. Even so, as a result of his entrapment in the New World, he adapts and shows himself able to incorporate American technology into his plans. He also (perhaps anachronistically) speaks with a modern American accent.
In "Never Kill a Boy on the First Date '' the Master reads from a formally written Bible - like book of prophecy that foretells the arrival of a powerful warrior enigmatically named "The Anointed One '' (Andrew J. Ferchland) who will become the Master 's "greatest weapon against the Slayer ''. The Master sends other acolytes of the Order of Aurelius to bring The Anointed to him, instructing them to give their lives should it become necessary for them to succeed. When Buffy finally encounters him in the season finale, The Anointed One turns out to inhabit the body of a little boy. The Master instructs the boy in the influence of fear ("Nightmares '') and power ("Angel '').
Buffy studies scholars have noted the role religion plays in the series, and have commented on the Master 's sense of religiosity in particular. With the exception of Angel (David Boreanaz), none of the main characters exhibit any prominent religious views although they observe some religious holidays. Several of the villains in the series, however, are nearly fanatical about religious ritual and custom, the first of which is the Master. The rituals the Master performs to make Luke his vessel are, according to Wendy Love Anderson, an "inversion of Christianity ''. The Master attempts to restore the "old ones '' and aligns himself with a child while setting up Buffy to be a Christ - like figure. He foretells that when he is able to leave his mystical prison, "the stars themselves will hide '', an aberration of a line from John Milton 's epic poem Paradise Lost, where Satan is musing on his own power. The Master 's entombment in a house of worship is a convenient vehicle to introduce the character 's religiosity, but it also represents the way evil is at times allowed to thrive in churches. The unChristian symbolism was intentional on Whedon 's part, as he was cautious about including such subversive imagery in "The Harvest ''; Buffy producer David Greenwalt was certain Christian groups would protest the ceremonial aspects of the plot. Gregory Erickson notes that the Master 's denigration of a Christian cross, what he calls the "two pieces of wood '' even while being burned by it, reflects the series ' treatment of Christianity overall and in turn, the American simplification of religion. On Buffy, a cross is a weapon, but beyond that is an empty symbol. Christian symbols and rituals traditionally play an integral role in many vampire stories, as in Bram Stoker 's Dracula. Conversely, Buffy downplays their importance.
The Master sends minions to kill Buffy in "Angel '', an episode featuring the origin story of Buffy 's romantic interest, a vampire with a murderous past who was re-ensouled by a Gypsy tribe as the ultimate punishment; this "curse '' has caused him to feel remorse and live the past century in misery and torment. His desire for redemption, as well as his attraction to Buffy, compels him to assist her. She discovers he is a vampire in "Angel '' and it is revealed that one of the Master 's most powerful followers, Darla (Julie Benz), was the vampire who transformed Angel and was his lover for several generations. After the Master allows Darla to destroy the minions who failed to kill Buffy, Darla tries to lure Angel to the Master 's side, but Angel stakes and kills her, further thwarting the Master 's plans.
Buffy and the Master finally meet in the season finale "Prophecy Girl '', in which Giles translates a prophecy that states that if she fights the Master, she will die. Buffy overhears Giles discussing it with Angel and tells Giles she refuses to be the Slayer if it means she will die, then begs her mother to go away with her for the weekend. After five students are murdered by more of the Master 's followers, however, Buffy decides she must fight the Master and is led to his underground lair by The Anointed One; she is wearing a long white dress, bought for a dance she was supposed to attend instead. He quickly hypnotizes her and tells her that "prophecies are tricky things '' that do n't reveal all: had she not come to fight him, he could not rise, as it is her blood which will free him. He bites and drinks from her, then tosses her to the ground face - down in a shallow pool where she drowns. Angel and Buffy 's friend Xander (Nicholas Brendon), who have disobeyed her wishes and followed her, arrive after the Master has risen. Xander is able to revive Buffy through CPR, thus the prophecy of her death at the Master 's hands is fulfilled, but its intention thwarted. She becomes stronger as a result of their encounter.
An extension of the Master 's religiosity is his preoccupation with prophecies. The themes of the first season are destiny and forming an identity separate from childhood: breaking the illusions that the world is safe and actions have no real consequences. Destiny is repeatedly a theme between Buffy and the Master. The entire first season is underscored with prophecies -- a narrative device used less frequently in later seasons of the series -- that Buffy neglects to fulfill in various ways. Buffy often has prophetic dreams and the Master is nearly obsessed with recounting and confirming written prophecies. Buffy 's superhuman powers are her birthright. Despite her desire to live a normal life she feels compelled to fulfill her destiny as a Slayer, and the need for her to live up to this responsibility is reinforced by Giles. Buffy, however, subverts these elements to assert her own free will, which is illustrated in the season finale. According to Buffy studies scholar Gregory Stevenson, the Master has such confidence in the prophecy that the Slayer will die that he is unable to comprehend her resurrection by Xander.
When the Master rises, the Hellmouth opens in the floor of the school library where Giles, Buffy 's friends Willow (Alyson Hannigan), Cordelia (Charisma Carpenter), and a teacher, Jenny Calendar (Robia LaMorte) are present and fighting off the emerging monsters. Buffy finds the Master on the roof of the library watching through the octagonal windows in the ceiling. Incredulous upon her arrival, he tells her she was destined to die in a written prophecy. She replies "What can I say? I flunked the written. '' She is now able to resist his attempts to hypnotize her and pushes him through the skylight into the library below, impaling him on a broken wooden table and killing him.
Following his death, the Master makes several appearances in the series, and his presence is still palpable in early second season episodes. In the second season premiere Buffy has still not exorcised the trauma she experienced in her confrontation with the Master, and is masking her anxiety by being hostile towards her friends. She has her catharsis by smashing his bones with a sledgehammer. The Anointed One remains alive until killed by a vampire named Spike (James Marsters) in "School Hard ''.
In the third season, "The Wish '' presents audiences with an alternate reality in Sunnydale: after dating Xander and breaking up, Cordelia expresses to Anyanka (Emma Caulfield), a vengeance demon, a wish to live in a Sunnydale where Buffy never arrived. In this reality the town is overrun with vampires loyal to the successfully risen Master who, in a capitalistic turn, has devised an assembly - line machine to bleed humans to feed his followers. In this Sunnydale, Willow and Xander, now very powerful vampires, are his favorites. Near the end of the episode a very different Buffy arrives, friendless and fighting alone, and when she confronts the Master she falls quickly under his hypnotic powers and is killed when he snaps her neck (again fulfilling the prophecy that in their fight, she will die).
In the seventh season premiere of Buffy, "Lessons '', the Master appears once more as a face of the First Evil, a shape - shifting villain and the Big Bad of the final season.
Metcalf also guest - starred on the Buffy spinoff series Angel in the second season episode "Darla '', which goes into more detail about Darla 's human life and her transformation into a vampire at the Master 's hands.
In the canonical comic book series, it is revealed that the Master has been resurrected off - screen by the Seed of Wonder as its guardian at some point after the first season 's finale. He is eventually killed again by a far more powerful rogue higher power, Twilight.
The Master appears in the first Buffy video game, where he is resurrected by a necromancer as a spirit to lead the Old One Lybach 's plot to build a bridge between his Hell dimension and Earth and bring an army of demons to Earth. The Master possesses Angel and uses the remnants of the Order of Aurelius and demons loyal to him to try to build the bridge. He is eventually exorcised from Angel by Buffy and Willow but survives in spirit form to continue on. After Buffy kills the Dreamers, the demons he is using to build the bridge, her friends perform a spell to make him corporeal and she is able to kill him once again.
Joss Whedon created Buffy Summers to subvert the dual ideas of female subordination to patriarchy, and authority steeped in tradition, both dynamics well - established in the Master 's world order. According to Buffy scholars, the Master is a classic villain. Rhonda Wilcox writes, "There could hardly be a nastier incarnation of the patriarchy than the ancient, ugly vampire Master '', and Gregory Stevenson places him in the category of "absolute evil '' with the second season 's Judge (also Brian Thompson), third season 's Mayor (Harry Groener), and fourth season 's Adam (George Hertzberg). In contrast, other Buffy characters are more morally ambiguous.
The Master is a grand patriarch consumed with hierarchy, order, subservience, and is defined by what is old. Buffy 's opposition to the Master addresses media tropes found in many horror films where a young, petite blonde woman, up against a male monster, is killed off partway through the film as a result of her own weakness.
The series also highlights the generational divide between the younger characters and the older ones. In particular, the dialogue, termed "Buffyspeak '' by some media, frequently makes the younger characters indecipherable to the older ones. The Master speaks with a stylistic formality found in Bible verses. According to Wilcox, Buffy can hardly understand Giles ' language, much less the Master 's "pompous, quasi-religious remarks ''. The entire first season confronts the younger characters with the problems of impending adulthood, which they only begin to reconcile in the last episodes of the season.
Each season finale signifies a turning point for the main characters -- usually Buffy -- and her confronting the Master, according to Stevenson, represents "the end of her childhood illusions of immortality ''. The scene is fraught with romantic imagery, with Buffy in a white gown, initially intended to be her party dress. When the Master bites her it is, according to Elisabeth Kirmmer and Shilpa Raval, her sexual initiation: a different take on the young girl dying at the hands of a monster. Kirmmer and Raval write that the "paradigm of Death and the Maiden is replaced by that of the hero who faces death and emerges stronger ''. When he tries to hypnotize her on the roof, she is able to resist him and kills him.
Buffy 's willful behavior and tendency to buck tradition is further underscored by another Slayer who was brought up in the traditional Slayer path by her Watcher. In the mythos of the series, when one Slayer dies, another takes her place somewhere in the world. Buffy 's brief death activated the Slayer Kendra (Bianca Lawson) in the second season. She is a committed, rule - abiding young woman who does everything authority figures tell her to do. Thus, she is fatally vulnerable to being hypnotized by Drusilla (Juliet Landau), an insane vampire with extraordinary mental abilities, who kills Kendra easily.
|
cause of the battles of lexington and concord | Battles of Lexington and Concord - wikipedia
Strategic American victory
The Battles of Lexington and Concord were the first military engagements of the American Revolutionary War. The battles were fought on April 19, 1775 in Middlesex County, Province of Massachusetts Bay, within the towns of Lexington, Concord, Lincoln, Menotomy (present - day Arlington), and Cambridge. They marked the outbreak of armed conflict between the Kingdom of Great Britain and its thirteen colonies in America.
In late 1774, Colonial leaders adopted the Suffolk Resolves in resistance to the alterations made to the Massachusetts colonial government by the British parliament following the Boston Tea Party. The colonial assembly responded by forming a Patriot provisional government known as the Massachusetts Provincial Congress and calling for local militias to train for possible hostilities. The Colonial government exercised effective control of the colony outside of British - controlled Boston. In response, the British government in February 1775 declared Massachusetts to be in a state of rebellion.
About 700 British Army regulars in Boston, under Lieutenant Colonel Francis Smith, were given secret orders to capture and destroy Colonial military supplies reportedly stored by the Massachusetts militia at Concord. Through effective intelligence gathering, Patriot leaders had received word weeks before the expedition that their supplies might be at risk and had moved most of them to other locations. On the night before the battle, warning of the British expedition had been rapidly sent from Boston to militias in the area by several riders, including Paul Revere, with information about British plans. The initial mode of the Army 's arrival by water was signaled from the Old North Church in Boston to Charlestown using lanterns to communicate "one if by land, two if by sea ''.
The first shots were fired just as the sun was rising at Lexington. Eight militiamen were killed, including Ensign Robert Munroe, their ranking officer. The British suffered only one casualty. The militia were outnumbered and fell back, and the regulars proceeded on to Concord, where they broke apart into companies to search for the supplies. At the North Bridge in Concord, approximately 400 militiamen engaged 100 regulars from three companies of the King 's troops at about 11: 00 am, resulting in casualties on both sides. The outnumbered regulars fell back from the bridge and rejoined the main body of British forces in Concord.
The British forces began their return march to Boston after completing their search for military supplies, and more militiamen continued to arrive from neighboring towns. Gunfire erupted again between the two sides and continued throughout the day as the regulars marched back towards Boston. Upon returning to Lexington, Lt. Col. Smith 's expedition was rescued by reinforcements under Brigadier General Hugh Percy, a future duke of Northumberland known as Earl Percy. The combined force of about 1,700 men marched back to Boston under heavy fire in a tactical withdrawal and eventually reached the safety of Charlestown. The accumulated militias then blockaded the narrow land accesses to Charlestown and Boston, starting the Siege of Boston.
Ralph Waldo Emerson describes the first shot fired by the Patriots at the North Bridge in his "Concord Hymn '' as the "shot heard round the world ''.
The British Army 's infantry was nicknamed "redcoats '' and sometimes "devils '' by the colonists. They had occupied Boston since 1768 and had been augmented by naval forces and marines to enforce what the colonists called The Intolerable Acts, which had been passed by the British Parliament to punish the Province of Massachusetts Bay for the Boston Tea Party and other acts of defiance.
General Thomas Gage was the military governor of Massachusetts and commander - in - chief of the roughly 3,000 British military forces garrisoned in Boston. He had no control over Massachusetts outside of Boston, however, where implementation of the Acts had increased tensions between the Patriot Whig majority and the pro-British Tory minority. Gage 's plan was to avoid conflict by removing military supplies from Whig militias using small, secret, and rapid strikes. This struggle for supplies led to one British success and several Patriot successes in a series of nearly bloodless conflicts known as the Powder Alarms. Gage considered himself to be a friend of liberty and attempted to separate his duties as governor of the colony and as general of an occupying force. Edmund Burke described Gage 's conflicted relationship with Massachusetts by saying in Parliament, "An Englishman is the unfittest person on Earth to argue another Englishman into slavery. ''
The colonists had been forming militias since the very beginnings of Colonial settlement for the purpose of defense against Indian attacks. These forces also saw action in the French and Indian War between 1754 and 1763 when they fought alongside British regulars. Under the laws of each New England colony, all towns were obligated to form militia companies composed of all males 16 years of age and older (there were exemptions for some categories), and to ensure that the members were properly armed. The Massachusetts militias were formally under the jurisdiction of the provincial government, but militia companies throughout New England elected their own officers. Gage effectively dissolved the provincial government under the terms of the Massachusetts Government Act, and these existing connections were employed by the colonists under the Massachusetts Provincial Congress for the purpose of resistance to the military threat from Britain.
A February 1775 address to King George III, by both houses of Parliament, declared that a state of rebellion existed:
We... find that a part of your Majesty ' s subjects, in the Province of the Massachusetts Bay, have proceeded so far to resist the authority of the supreme Legislature, that a rebellion at this time actually exists within the said Province; and we see, with the utmost concern, that they have been countenanced and encouraged by unlawful combinations and engagements entered into by your Majesty 's subjects in several of the other Colonies, to the injury and oppression of many of their innocent fellow - subjects, resident within the Kingdom of Great Britain, and the rest of your Majesty ' s Dominions...
We... shall... pay attention and regard to any real grievances... laid before us; and whenever any of the Colonies shall make a proper application to us, we shall be ready to afford them every just and reasonable indulgence. At the same time we... beseech your Majesty that you will... enforce due obedience to the laws and authority of the supreme Legislature; and... it is our fixed resolution, at the hazard of our lives and properties, to stand by your Majesty against all rebellious attempts in the maintenance of the just rights of your Majesty, and the two Houses of Parliament.
On April 14, 1775, Gage received instructions from Secretary of State William Legge, Earl of Dartmouth, to disarm the rebels and to imprison the rebellion 's leaders, but Dartmouth gave Gage considerable discretion in his commands. Gage 's decision to act promptly may have been influenced by information he received on April 15, from a spy within the Provincial Congress, telling him that although the Congress was still divided on the need for armed resistance, delegates were being sent to the other New England colonies to see if they would cooperate in raising a New England army of 18,000 colonial soldiers.
On the morning of April 18, Gage ordered a mounted patrol of about 20 men under the command of Major Mitchell of the 5th Regiment of Foot into the surrounding country to intercept messengers who might be out on horseback. This patrol behaved differently from patrols sent out from Boston in the past, staying out after dark and asking travelers about the location of Samuel Adams and John Hancock. This had the unintended effect of alarming many residents and increasing their preparedness. The Lexington militia in particular began to muster early that evening, hours before receiving any word from Boston. A well - known story alleges that after nightfall one farmer, Josiah Nelson, mistook the British patrol for the colonists and asked them, "Have you heard anything about when the regulars are coming out? '' upon which he was slashed on his scalp with a sword. However, the story of this incident was not published until over a century later, which suggests that it may be little more than a family myth.
Lieutenant Colonel Francis Smith received orders from Gage on the afternoon of April 18 with instructions that he was not to read them until his troops were underway. He was to proceed from Boston "with utmost expedition and secrecy to Concord, where you will seize and destroy... all Military stores... But you will take care that the soldiers do not plunder the inhabitants or hurt private property. '' Gage used his discretion and did not issue written orders for the arrest of rebel leaders, as he feared doing so might spark an uprising.
On March 30, 1775, the Massachusetts Provincial Congress issued the following resolution:
Whenever the army under command of General Gage, or any part thereof to the number of five hundred, shall march out of the town of Boston, with artillery and baggage, it ought to be deemed a design to carry into execution by force the late acts of Parliament, the attempting of which, by the resolve of the late honourable Continental Congress, ought to be opposed; and therefore the military force of the Province ought to be assembled, and an army of observation immediately formed, to act solely on the defensive so long as it can be justified on the principles of reason and self - preservation.
The rebellion 's leaders -- with the exception of Paul Revere and Joseph Warren -- had all left Boston by April 8. They had received word of Dartmouth 's secret instructions to General Gage from sources in London well before they reached Gage himself. Adams and Hancock had fled Boston to the home of one of Hancock 's relatives in Lexington, where they thought they would be safe from the immediate threat of arrest.
The Massachusetts militias had indeed been gathering a stock of weapons, powder, and supplies at Concord and much further west in Worcester. An expedition from Boston to Concord was widely anticipated. After a large contingent of regulars alarmed the countryside by an expedition from Boston to Watertown on March 30, The Pennsylvania Journal, a newspaper in Philadelphia, reported, "It was supposed they were going to Concord, where the Provincial Congress is now sitting. A quantity of provisions and warlike stores are lodged there... It is... said they are intending to go out again soon. ''
On April 8, Paul Revere rode to Concord to warn the inhabitants that the British appeared to be planning an expedition. The townspeople decided to remove the stores and distribute them among other towns nearby.
The colonists were also aware that April 19 would be the date of the expedition, despite Gage 's efforts to keep the details hidden from all the British rank and file and even from the officers who would command the mission. There is reasonable speculation, although not proven, that the confidential source of this intelligence was Margaret Gage, General Gage 's New Jersey - born wife, who had sympathies with the Colonial cause and a friendly relationship with Warren.
Between 9 and 10 pm on the night of April 18, 1775, Joseph Warren told Revere and William Dawes that the British troops were about to embark in boats from Boston bound for Cambridge and the road to Lexington and Concord. Warren 's intelligence suggested that the most likely objectives of the regulars ' movements later that night would be the capture of Adams and Hancock. They did not worry about the possibility of regulars marching to Concord, since the supplies at Concord were safe, but they did think their leaders in Lexington were unaware of the potential danger that night. Revere and Dawes were sent out to warn them and to alert colonial militias in nearby towns.
Dawes covered the southern land route by horseback across Boston Neck and over the Great Bridge to Lexington. Revere first gave instructions to send a signal to Charlestown using lanterns hung in the steeple of Boston 's Old North Church. He then traveled the northern water route, crossing the mouth of the Charles River by rowboat, slipping past the British warship HMS Somerset at anchor. Crossings were banned at that hour, but Revere safely landed in Charlestown and rode west to Lexington, warning almost every house along the route. Additional riders were sent north from Charlestown.
After they arrived in Lexington, Revere, Dawes, Hancock, and Adams discussed the situation with the militia assembling there. They believed that the forces leaving the city were too large for the sole task of arresting two men and that Concord was the main target. The Lexington men dispatched riders to the surrounding towns, and Revere and Dawes continued along the road to Concord accompanied by Samuel Prescott. In Lincoln, they ran into the British patrol led by Major Mitchell. Revere was captured, Dawes was thrown from his horse, and only Prescott escaped to reach Concord. Additional riders were sent out from Concord.
The ride of Revere, Dawes, and Prescott triggered a flexible system of "alarm and muster '' that had been carefully developed months before, in reaction to the colonists ' impotent response to the Powder Alarm. This system was an improved version of an old notification network for use in times of emergency. The colonists had periodically used it during the early years of Indian wars in the colony, before it fell into disuse in the French and Indian War. In addition to other express riders delivering messages, bells, drums, alarm guns, bonfires and a trumpet were used for rapid communication from town to town, notifying the rebels in dozens of eastern Massachusetts villages that they should muster their militias because over 500 regulars were leaving Boston. This system was so effective that people in towns 25 miles (40 km) from Boston were aware of the army 's movements while they were still unloading boats in Cambridge. These early warnings played a crucial role in assembling a sufficient number of colonial militia to inflict heavy damage on the British regulars later in the day. Adams and Hancock were eventually moved to safety, first to what is now Burlington and later to Billerica.
Around dusk, General Gage called a meeting of his senior officers at the Province House. He informed them that instructions from Lord Dartmouth had arrived, ordering him to take action against the colonials. He also told them that the senior colonel of his regiments, Lieutenant Colonel Smith, would command, with Major John Pitcairn as his executive officer. The meeting adjourned around 8: 30 pm, after which Earl Percy mingled with town folk on Boston Common. According to one account, the discussion among people there turned to the unusual movement of the British soldiers in the town. When Percy questioned one man further, the man replied, "Well, the regulars will miss their aim. ''
"What aim? '' asked Percy. "Why, the cannon at Concord '' was the reply. Upon hearing this, Percy quickly returned to Province House and relayed this information to General Gage. Stunned, Gage issued orders to prevent messengers from getting out of Boston, but these were too late to prevent Dawes and Revere from leaving.
The British regulars, around 700 infantry, were drawn from 11 of Gage 's 13 occupying infantry regiments. Major Pitcairn commanded ten elite light infantry companies, and Lieutenant Colonel Benjamin Bernard commanded 11 grenadier companies, under the overall command of Lieutenant Colonel Smith.
Of the troops assigned to the expedition, 350 were from grenadier companies drawn from the 4th (King 's Own), 5th, 10th, 18th (Royal Irish), 23rd, 38th, 43rd, 47th, 52nd and 59th Regiments of Foot, and the 1st Battalion of His Majesty 's Marine Forces. Protecting the grenadier companies were about 320 light infantry from the 4th, 5th, 10th, 23rd, 38th, 43rd, 47th, 52nd, and 59th Regiments, and the 1st Battalion of the Marines. Each company had its own lieutenant, but the majority of the captains commanding them were volunteers attached to them at the last minute, drawn from all the regiments stationed in Boston. This lack of familiarity between commander and company would cause problems during the battle.
The British began to awaken their troops at 9 pm on the night of April 18 and assembled them on the water 's edge on the western end of Boston Common by 10 pm. Colonel Smith was late in arriving, and there was no organized boat - loading operation, resulting in confusion at the staging area. The boats used were naval barges that were packed so tightly that there was no room to sit down. When they disembarked near Phipps Farm in Cambridge, it was into waist - deep water at midnight. After a lengthy halt to unload their gear, the regulars began their 17 miles (27 km) march to Concord at about 2 am. During the wait they were provided with extra ammunition, cold salt pork, and hard sea biscuits. They did not carry knapsacks, since they would not be encamped. They carried their haversacks (food bags), canteens, muskets, and accoutrements, and marched off in wet, muddy shoes and soggy uniforms. As they marched through Menotomy, sounds of the colonial alarms throughout the countryside caused the few officers who were aware of their mission to realize they had lost the element of surprise.
At about 3 am, Colonel Smith sent Major Pitcairn ahead with six companies of light infantry under orders to quick march to Concord. At about 4 am Smith made the wise but belated decision to send a messenger back to Boston asking for reinforcements.
Though often styled a battle, in reality the engagement at Lexington was a minor brush or skirmish. As the regulars ' advance guard under Pitcairn entered Lexington at sunrise on April 19, 1775, about 80 Lexington militiamen emerged from Buckman Tavern and stood in ranks on the village common watching them, and between 40 and 100 spectators watched from along the side of the road. Their leader was Captain John Parker, a veteran of the French and Indian War, who was suffering from tuberculosis and was at times difficult to hear. Of the militiamen who lined up, nine had the surname Harrington, seven Munroe (including the company 's orderly sergeant, William Munroe), four Parker, three Tidd, three Locke, and three Reed; fully one quarter of them were related to Captain Parker in some way. This group of militiamen was part of Lexington 's "training band '', a way of organizing local militias dating back to the Puritans, and not what was styled a minuteman company.
After having waited most of the night with no sign of any British troops (and wondering if Paul Revere 's warning was true), at about 4: 15 a.m., Parker got his confirmation. Thaddeus Bowman, the last scout that Parker had sent out, rode up at a gallop and told him that they were not only coming, but coming in force and they were close. Captain Parker was clearly aware that he was outmatched in the confrontation and was not prepared to sacrifice his men for no purpose. He knew that most of the colonists ' powder and military supplies at Concord had already been hidden. No war had been declared. (The Declaration of Independence was a year in the future.) He also knew the British had gone on such expeditions before in Massachusetts, found nothing, and marched back to Boston.
Parker had every reason to expect that to occur again. The Regulars would march to Concord, find nothing, and return to Boston, tired but empty - handed. He positioned his company carefully. He placed them in parade - ground formation, on Lexington Common. They were in plain sight (not hiding behind walls), but not blocking the road to Concord. They made a show of political and military determination, but no effort to prevent the march of the Regulars. Many years later, one of the participants recalled Parker 's words as being what is now engraved in stone at the site of the battle: "Stand your ground; do n't fire unless fired upon, but if they mean to have a war, let it begin here. '' According to Parker 's sworn deposition taken after the battle:
I... ordered our Militia to meet on the Common in said Lexington to consult what to do, and concluded not to be discovered, nor meddle or make with said Regular Troops (if they should approach) unless they should insult or molest us; and, upon their sudden Approach, I immediately ordered our Militia to disperse, and not to fire: -- Immediately said Troops made their appearance and rushed furiously, fired upon, and killed eight of our Party without receiving any Provocation therefor from us.
Rather than turn left towards Concord, Marine Lieutenant Jesse Adair, at the head of the advance guard, decided on his own to protect the flank of the British column by first turning right and then leading the companies onto the Common itself, in a confused effort to surround and disarm the militia. Major Pitcairn arrived from the rear of the advance force and led his three companies to the left and halted them. The remaining companies under Colonel Smith lay further down the road toward Boston.
A British officer (probably Pitcairn, but accounts are uncertain, as it may also have been Lieutenant William Sutherland) then rode forward, waving his sword, and called out for the assembled militia to disperse, and may also have ordered them to "lay down your arms, you damned rebels! '' Captain Parker told his men instead to disperse and go home, but, because of the confusion, the yelling all around, and due to the raspiness of Parker 's tubercular voice, some did not hear him, some left very slowly, and none laid down their arms. Both Parker and Pitcairn ordered their men to hold fire, but a shot was fired from an unknown source.
One of the regulars later recalled:
(A)t 5 o'clock we arrived (in Lexington), and saw a number of people, I believe between 200 and 300, formed in a common in the middle of town; we still continued advancing, keeping prepared against an attack though without intending to attack them; but on our coming near them they fired on us two shots, upon which our men without any orders, rushed upon them, fired and put them to flight; several of them were killed, we could not tell how many, because they were behind walls and into the woods. We had a man of the 10th light Infantry wounded, nobody else was hurt. We then formed on the Common, but with some difficulty, the men were so wild they could hear no orders; we waited a considerable time there, and at length proceeded our way to Concord.
According to one member of Parker 's militia, none of the Americans had discharged their muskets as they faced the oncoming British troops. The British did suffer one casualty, a slight wound, the particulars of which were corroborated by a deposition made by Corporal John Munroe. Munroe stated that:
After the first fire of the regulars, I thought, and so stated to Ebenezer Munroe... who stood next to me on the left, that they had fired nothing but powder; but on the second firing, Munroe stated they had fired something more than powder, for he had received a wound in his arm; and now, said he, to use his own words, ' I 'll give them the guts of my gun. ' We then both took aim at the main body of British troops the smoke preventing our seeing anything but the heads of some of their horses and discharged our pieces.
Some witnesses among the regulars reported the first shot was fired by a colonial onlooker from behind a hedge or around the corner of a tavern. Some observers reported a mounted British officer firing first. Both sides generally agreed that the initial shot did not come from the men on the ground immediately facing each other. Speculation arose later in Lexington that a man named Solomon Brown fired the first shot from inside the tavern or from behind a wall, but this has been discredited. Some witnesses (on each side) claimed that someone on the other side fired first; however, many more witnesses claimed to not know. Yet another theory is that the first shot was one fired by the British, that killed Asahel Porter, their prisoner who was running away (he had been told to walk away and he would be let go, though he panicked and began to run). Historian David Hackett Fischer has proposed that there may actually have been multiple near - simultaneous shots. Historian Mark Urban claims the British surged forward with bayonets ready in an undisciplined way, provoking a few scattered shots from the militia. In response the British troops, without orders, fired a devastating volley. This lack of discipline among the British troops had a key role in the escalation of violence.
Witnesses at the scene described several intermittent shots fired from both sides before the lines of regulars began to fire volleys without receiving orders to do so. A few of the militiamen believed at first that the regulars were only firing powder with no ball, but when they realized the truth, few if any of the militia managed to load and return fire. The rest ran for their lives.
We Nathaniel Mulliken, Philip Russell, (and 32 other men...) do testify and declare, that on the nineteenth in the morning, being informed that... a body of regulars were marching from Boston towards Concord... About five o'clock in the morning, hearing our drum beat, we proceeded towards the parade, and soon found that a large body of troops were marching towards us, some of our company were coming to the parade, and others had reached it, at which time, the company began to disperse, whilst our backs were turned on the troops, we were fired on by them, and a number of our men were instantly killed and wounded, not a gun was fired by any person in our company on the regulars to our knowledge before they fired on us, and continued firing until we had all made our escape.
The regulars then charged forward with bayonets. Captain Parker 's cousin Jonas was run through. Eight Lexington men were killed, and ten were wounded. The only British casualty was a soldier who was wounded in the thigh. The eight colonists killed were John Brown, Samuel Hadley, Caleb Harrington, Jonathon Harrington, Robert Munroe, Isaac Muzzey, Asahel Porter, and Jonas Parker. Jonathon Harrington, fatally wounded by a British musket ball, managed to crawl back to his home, and died on his own doorstep. One wounded man, Prince Estabrook, was a black slave who was serving in the militia.
The companies under Pitcairn 's command got beyond their officers ' control in part because they were unaware of the actual purpose of the day 's mission. They fired in different directions and prepared to enter private homes. Colonel Smith, who was just arriving with the remainder of the regulars, heard the musket fire and rode forward from the grenadier column to see the action. He quickly found a drummer and ordered him to beat assembly. The grenadiers arrived shortly thereafter, and once order was restored among the soldiers, the light infantry were permitted to fire a victory volley, after which the column was reformed and marched on toward Concord.
In response to the raised alarm, the militiamen of Concord and Lincoln had mustered in Concord. They received reports of firing at Lexington, and were not sure whether to wait until they could be reinforced by troops from towns nearby, or to stay and defend the town, or to move east and greet the British Army from superior terrain. A column of militia marched down the road toward Lexington to meet the British, traveling about 1.5 miles (2 km) until they met the approaching column of regulars. As the regulars numbered about 700 and the militia at this time only numbered about 250, the militia column turned around and marched back into Concord, preceding the regulars by a distance of about 500 yards (457 m). The militia retreated to a ridge overlooking the town, and their officers discussed what to do next. Caution prevailed, and Colonel James Barrett withdrew from the town of Concord and led the men across the North Bridge to a hill about a mile north of town, where they could continue to watch the troop movements of the British and the activities in the center of town. This step proved fortuitous, as the ranks of the militia continued to grow as minuteman companies arriving from the western towns joined them there.
When the British troops arrived in the village of Concord, Lt. Col. Smith divided them to carry out Gage 's orders. The 10th Regiment 's company of grenadiers secured South Bridge under Captain Mundy Pole, while seven companies of light infantry under Captain Parsons, numbering about 100, secured the North Bridge, where they were visible across the cleared fields to the assembling militia companies. Captain Parsons took four companies from the 5th, 23rd, 38th and 52nd Regiments up the road 2 miles (3.2 km) beyond the North Bridge to search Barrett 's Farm, where intelligence indicated supplies would be found. Two companies from the 4th and 10th Regiments were stationed to guard their return route, and one company from the 43rd remained guarding the bridge itself. These companies, which were under the relatively inexperienced command of Captain Walter Laurie, were aware that they were significantly outnumbered by the 400 - plus militiamen. The concerned Captain Laurie sent a messenger to Lt. Col. Smith requesting reinforcements.
Using detailed information provided by Loyalist spies, the grenadier companies searched the small town for military supplies. When they arrived at Ephraim Jones 's tavern, by the jail on the South Bridge road, they found the door barred shut, and Jones refused them entry. According to reports provided by local Loyalists, Pitcairn knew cannon had been buried on the property. Jones was ordered at gunpoint to show where the guns were buried. These turned out to be three massive pieces, firing 24 - pound shot, that were much too heavy to use defensively, but very effective against fortifications, with sufficient range to bombard the city of Boston from other parts of nearby mainland. The grenadiers smashed the trunnions of these three guns so they could not be mounted. They also burned some gun carriages found in the village meetinghouse, and when the fire spread to the meetinghouse itself, local resident Martha Moulton persuaded the soldiers to help in a bucket brigade to save the building. Nearly a hundred barrels of flour and salted food were thrown into the millpond, as were 550 pounds of musket balls. Of the damage done, only that done to the cannon was significant. All of the shot and much of the food was recovered after the British left. During the search, the regulars were generally scrupulous in their treatment of the locals, including paying for food and drink consumed. This excessive politeness was used to advantage by the locals, who were able to misdirect searches from several smaller caches of militia supplies.
Barrett 's Farm had been an arsenal weeks before, but few weapons remained now, and according to family legend, these were quickly buried in furrows to look like a crop had been planted. The troops sent there did not find any supplies of consequence.
Colonel Barrett 's troops, upon seeing smoke rising from the village square as the British burned cannon carriages, and seeing only a few light infantry companies directly below them, decided to march back toward the town from their vantage point on Punkatasset Hill to a lower, closer flat hilltop about 300 yards (274 m) from the North Bridge. As the militia advanced, the two British companies from the 4th and 10th Regiments that held the position near the road retreated to the bridge and yielded the hill to Barrett 's men.
Five full companies of Minutemen and five more of militia from Acton, Concord, Bedford and Lincoln occupied this hill as more groups of men streamed in, totaling at least 400 against Captain Laurie 's light infantry companies, a force totaling 90 -- 95 men. Barrett ordered the Massachusetts men to form one long line two abreast on the highway leading down to the bridge, and then he called for another consultation. While overlooking North Bridge from the top of the hill, Barrett, Lt. Col. John Robinson of Westford and the other Captains discussed possible courses of action. Captain Isaac Davis of Acton, whose troops had arrived late, declared his willingness to defend a town not their own by saying, "I 'm not afraid to go, and I have n't a man that 's afraid to go. ''
Barrett told the men to load their weapons but not to fire unless fired upon, and then ordered them to advance. Laurie ordered the British companies guarding the bridge to retreat across it. One officer then tried to pull up the loose planks of the bridge to impede the colonial advance, but Major Buttrick began to yell at the regulars to stop harming the bridge. The Minutemen and militia from Concord, Acton and a handful of Westford Minutemen, advanced in column formation, two by two, led by Major Buttrick, Lt. Col. Robinson, then Capt. Davis, on the light infantry, keeping to the road, since it was surrounded by the spring floodwaters of the Concord River.
Captain Laurie then made a poor tactical decision. Since his summons for help had not produced any results, he ordered his men to form positions for "street firing '' behind the bridge in a column running perpendicular to the river. This formation was appropriate for sending a large volume of fire into a narrow alley between the buildings of a city, but not for an open path behind a bridge. Confusion reigned as regulars retreating over the bridge tried to form up in the street - firing position of the other troops. Lieutenant Sutherland, who was in the rear of the formation, saw Laurie 's mistake and ordered flankers to be sent out. But as he was from a company different from the men under his command, only three soldiers obeyed him. The remainder tried as best they could in the confusion to follow the orders of the superior officer.
A shot rang out. It was likely a warning shot fired by a panicked, exhausted British soldier from the 43rd, according to Captain Laurie 's report to his commander after the fight. Two other regulars then fired immediately after that, shots splashing in the river, and then the narrow group up front, possibly thinking the order to fire had been given, fired a ragged volley before Laurie could stop them.
Two of the Acton Minutemen, Private Abner Hosmer and Captain Isaac Davis, who were at the head of the line marching to the bridge, were hit and killed instantly. Rev. Dr. Ripley recalled:
The Americans commenced their march in double file... In a minute or two, the Americans being in quick motion and within ten or fifteen rods of the bridge, a single gun was fired by a British soldier, which marked the way, passing under Col. Robinson 's arm and slightly wounding the side of Luther Blanchard, a fifer, in the Acton Company.
Four more men were wounded. Major Buttrick then yelled to the militia, "Fire, for God 's sake, fellow soldiers, fire! '' At this point the lines were separated by the Concord River and the bridge, and were only 50 yards (46 m) apart. The few front rows of colonists, bound by the road and blocked from forming a line of fire, managed to fire over each other 's heads and shoulders at the regulars massed across the bridge. Four of the eight British officers and sergeants, who were leading from the front of their troops, were wounded by the volley of musket fire. At least three privates (Thomas Smith, Patrick Gray, and James Hall, all from the 4th) were killed or mortally wounded, and nine were wounded. In 1824, Reverend and Minuteman Joseph Thaxter wrote:
I was an eyewitness to the following facts. The people of Westford and Acton, some few of Concord, were the first who faced the British at Concord bridge. The British had placed about ninety men as a guard at the North Bridge; we had then no certain information that any had been killed at Lexington, we saw the British making destruction in the town of Concord; it was proposed to advance to the bridge; on this Colonel Robinson, of Westford, together with Major Buttrick, took the lead; strict orders were given not to fire, unless the British fired first; when they advanced about halfway on the causeway the British fired one gun, a second, a third, and then the whole body; they killed Colonel Davis, of Acton, and a Mr. Hosmer. Our people then fired over one another 's heads, being in a long column, two and two; they killed two and wounded eleven. Lieutenant Hawkstone, said to be the greatest beauty of the British army, had his cheeks so badly wounded that it disfigured him much, of which he bitterly complained. On this, the British fled, and assembled on the hill, the north side of Concord, and dressed their wounded, and then began their retreat. As they descended the hill near the road that comes out from Bedford they were pursued; Colonel Bridge, with a few men from Bedford and Chelmsford, came up, and killed several men.
The regulars found themselves trapped in a situation where they were both outnumbered and outmaneuvered. Lacking effective leadership and terrified at the superior numbers of the enemy, with their spirit broken, and likely not having experienced combat before, they abandoned their wounded, and fled to the safety of the approaching grenadier companies coming from the town center, isolating Captain Parsons and the companies searching for arms at Barrett 's Farm.
The colonists were stunned by their success. No one had actually believed either side would shoot to kill the other. Some advanced; many more retreated; and some went home to see to the safety of their homes and families. Colonel Barrett eventually began to recover control. He moved some of the militia back to the hilltop 300 yards (274 m) away and sent Major Buttrick with others across the bridge to a defensive position on a hill behind a stone wall.
Lieutenant Colonel Smith heard the exchange of fire from his position in the town moments after he received the request for reinforcements from Laurie. He quickly assembled two companies of grenadiers to lead toward the North Bridge himself. As these troops marched, they met the shattered remnants of the three light infantry companies running towards them. Smith was concerned about the four companies that had been at Barrett 's, since their route to town was now unprotected. When he saw the Minutemen in the distance behind their wall, he halted his two companies and moved forward with only his officers to take a closer look. One of the Minutemen behind that wall observed, "If we had fired, I believe we could have killed almost every officer there was in the front, but we had no orders to fire and there was n't a gun fired. '' During a tense standoff lasting about 10 minutes, a mentally ill local man named Elias Brown wandered through both sides selling hard cider.
At this point, the detachment of regulars sent to Barrett 's farm marched back from their fruitless search of that area. They passed through the now mostly - deserted battlefield, and saw dead and wounded comrades lying on the bridge. There was one who looked to them as if he had been scalped, which angered and shocked the British soldiers. They crossed the bridge and returned to the town by 11: 30 a.m., under the watchful eyes of the colonists, who continued to maintain defensive positions. The regulars continued to search for and destroy colonial military supplies in the town, ate lunch, reassembled for marching, and left Concord after noon. This delay in departure gave colonial militiamen from outlying towns additional time to reach the road back to Boston.
Lieutenant Colonel Smith, concerned about the safety of his men, sent flankers to follow a ridge and protect his forces from the roughly 1,000 colonials now in the field as the British marched east out of Concord. This ridge ended near Meriam 's Corner, a crossroads about a mile (2 km) outside the village of Concord, where the main road came to a bridge across a small stream. To cross the narrow bridge, the British had to pull the flankers back into the main column and close ranks to a mere three soldiers abreast. Colonial militia companies arriving from the north and east had converged at this point, and presented a clear numerical advantage over the regulars. The British were now witnessing once again what General Gage had hoped to avoid by dispatching the expedition in secrecy and in the dark of night: the ability of the colonial militiamen to rise and converge by the thousands when British forces ventured out of Boston. As the last of the British column marched over the narrow bridge, the British rear guard wheeled and fired a volley at the colonial militiamen, who had been firing irregularly and ineffectively from a distance but now had closed to within musket range. The colonists returned fire, this time with deadly effect. Two regulars were killed and perhaps six wounded, with no colonial casualties. Smith sent out his flanking troops again after crossing the small bridge.
On Brooks Hill (also known as Hardy 's Hill) about 1 mile (1.6 km) past Meriam 's Corner, nearly 500 militiamen had assembled to the south of the road, awaiting opportunity to fire down upon the British column on the road below. Smith 's leading forces charged up the hill to drive them off, but the colonists did not withdraw, inflicting significant casualties on the attackers. Smith withdrew his men from Brooks Hill, and the column continued on to another small bridge into Lincoln, at Brooks Tavern, where more militia companies intensified the attack from the north side of the road.
The regulars soon reached a point in the road now referred to as the "Bloody Angle '' where the road rises and curves sharply to the left through a lightly - wooded area. At this place, the militia company from Woburn had positioned themselves on the southeast side of the bend in the road in a rocky, lightly - wooded field. Additional militia flowing parallel to the road from the engagement at Meriam 's Corner positioned themselves on the northwest side of the road, catching the British in a crossfire, while other militia companies on the road closed from behind to attack. Some 500 yards (460 m) further along, the road took another sharp curve, this time to the right, and again the British column was caught by another large force of militiamen firing from both sides. In passing through these two sharp curves, the British force lost thirty soldiers killed or wounded, and four colonial militia were also killed, including Captain Jonathan Wilson of Bedford, Captain Nathan Wyman of Billerica, Lt. John Bacon of Natick, and Daniel Thompson of Woburn. The British soldiers escaped by breaking into a trot, a pace that the colonials could not maintain through the woods and swampy terrain. Colonial forces on the road itself behind the British were too densely packed and disorganized to mount more than a harassing attack from the rear.
As militia forces from other towns continued to arrive, the colonial forces had risen to about 2,000 men. The road now straightened to the east, with cleared fields and orchards along the sides. Lt. Col. Smith sent out flankers again, who succeeded in trapping some militia from behind and inflicting casualties. British casualties were also mounting from these engagements and from persistent long - range fire from the militiamen, and the exhausted British were running out of ammunition.
When the British column neared the boundary between Lincoln and Lexington, it encountered another ambush from a hill overlooking the road, set by Captain John Parker 's Lexington militiamen, including some of them bandaged up from the encounter in Lexington earlier in the day. At this point, Lt. Col. Smith was wounded in the thigh and knocked from his horse. Major John Pitcairn assumed effective command of the column and sent light infantry companies up the hill to clear the militia forces.
The light infantry cleared two additional hills as the column continued east -- "The Bluff '' and "Fiske Hill '' -- and took still more casualties from ambushes set by fresh militia companies joining the battle. In one of the musket volleys from the colonial soldiers, Major Pitcairn 's horse bolted in fright, throwing Pitcairn to the ground and injuring his arm. Now both principal leaders of the expedition were injured or unhorsed, and their men were tired, thirsty, and exhausting their ammunition. A few surrendered or were captured; some now broke formation and ran forward toward Lexington. In the words of one British officer, "we began to run rather than retreat in order... We attempted to stop the men and form them two deep, but to no purpose, the confusion increased rather than lessened... the officers got to the front and presented their bayonets, and told the men if they advanced they should die. Upon this, they began to form up under heavy fire. ''
Only one British officer remained uninjured among the three companies at the head of the British column as it approached Lexington Center. He understood the column 's perilous situation: "There were very few men had any ammunition left, and so fatigued that we could not keep flanking parties out, so that we must soon have laid down our arms, or been picked off by the Rebels at their pleasure -- nearer to -- and we were not able to keep them off. '' He then heard cheering further ahead. A full brigade, about 1,000 men with artillery under the command of Earl Percy, had arrived to rescue them. It was about 2: 30 p.m., and the British column had now been on the march since 2 o'clock in the morning. Westford Minuteman Rev. Joseph Thaxter wrote in his account:
We pursued them and killed some; when they got to Lexington, they were so close pursued and fatigued, that they must have soon surrendered, had not Lord Percy met them with a large reinforcement and two field - pieces. They fired them, but the balls went high over our heads. But no cannon ever did more execution, such stories of their effects had been spread by the tories through our troops, that from this time more wont back than pursed. We pursued to Charlestown Common, and then retired to Cambridge. When the army collected at Cambridge, Colonel Prescott with his regiment of minute men, and John Robinson, his Lieutenant Colonel, were prompt at being at their post.
In their accounts afterward, British officers and soldiers alike noted their frustration that the colonial militiamen fired at them from behind trees and stone walls, rather than confronting them in large, linear formations in the style of European warfare. This image of the individual colonial farmer, musket in hand and fighting under his own command, has also been fostered in American myth: "Chasing the red - coats down the lane / Then crossing the fields to emerge again / Under the trees at the turn of the road, / And only pausing to fire and load. '' To the contrary, beginning at the North Bridge and throughout the British retreat, the colonial militias repeatedly operated as coordinated companies, even when dispersed to take advantage of cover. Reflecting on the British experience that day, Earl Percy understood the significance of the American tactics:
During the whole affair the Rebels attacked us in a very scattered, irregular manner, but with perseverance & resolution, nor did they ever dare to form into any regular body. Indeed, they knew too well what was proper, to do so. Whoever looks upon them as an irregular mob, will find himself much mistaken. They have men amongst them who know very well what they are about, having been employed as Rangers against the Indians & Canadians, & this country being much covered with wood, and hilly, is very advantageous for their method of fighting.
General Gage had anticipated that Lt. Col. Smith 's expedition might require reinforcement, so Gage drafted orders for reinforcing units to assemble in Boston at 4 a.m. But in his obsession with secrecy, Gage had sent only one copy of the orders to the adjutant of the 1st Brigade, whose servant then left the envelope on a table. Also at about 4 a.m., the British column was within three miles of Lexington, and Lt. Col. Smith now had clear indication that the element of surprise had been lost and that alarm was spreading throughout the countryside. So he sent a rider back to Boston with a request for reinforcements. At about 5 a.m., the rider reached Boston, and the 1st Brigade was ordered to assemble: the line infantry companies of the 4th, 23rd, and 47th Regiments, and a battalion of Royal Marines, under the command of Earl Percy. Unfortunately for the British, once again only one copy of the orders was sent to each commander, and the order for the Royal Marines was delivered to the desk of Major John Pitcairn, who was already on the Lexington Common with Smith 's column at that hour. After these delays, Percy 's brigade, about 1,000 strong, left Boston at about 8: 45 a.m., headed toward Lexington. Along the way, the story is told, they marched to the tune of "Yankee Doodle '' to taunt the inhabitants of the area. By the Battle of Bunker Hill less than two months later, the song would become a popular anthem for the colonial forces.
Percy took the land route across Boston Neck and over the Great Bridge, which some quick - thinking colonists had stripped of its planking to delay the British. His men then came upon an absent - minded tutor at Harvard College and asked him which road would take them to Lexington. The Harvard man, apparently oblivious to the reality of what was happening around him, showed them the proper road without thinking. (He was later compelled to leave the country for inadvertently supporting the enemy.) Percy 's troops arrived in Lexington at about 2: 00 p.m. They could hear gunfire in the distance as they set up their cannon and deployed lines of regulars on high ground with commanding views of the town. Colonel Smith 's men approached like a fleeing mob with the full complement of colonial militia in close formation pursuing them. Percy ordered his artillery to open fire at extreme range, dispersing the colonial militiamen. Smith 's men collapsed with exhaustion once they reached the safety of Percy 's lines.
Against the advice of his Master of Ordnance, Percy had left Boston without spare ammunition for his men or for the two artillery pieces they brought with them, thinking the extra wagons would slow him down. Each man in Percy 's brigade had only 36 rounds, and each artillery piece was supplied with only a few rounds carried in side - boxes. After Percy had left the city, Gage directed two ammunition wagons guarded by one officer and thirteen men to follow. This convoy was intercepted by a small party of older, veteran militiamen still on the "alarm list, '' who could not join their militia companies because they were well over 60 years of age. These men rose up in ambush and demanded the surrender of the wagons, but the regulars ignored them and drove their horses on. The old men opened fire, shot the lead horses, killed two sergeants, and wounded the officer. The British survivors ran, and six of them threw their weapons into a pond before they surrendered.
Percy assumed control of the combined forces of about 1,700 men and let them rest, eat, drink, and have their wounds tended at field headquarters (Munroe Tavern) before resuming the march. They set out from Lexington at about 3: 30 p.m., in a formation that emphasized defense along the sides and rear of the column. Wounded regulars rode on the cannon and were forced to hop off when they were fired at by gatherings of militia. Percy 's men were often surrounded, but they had the tactical advantage of interior lines. Percy could shift his units more easily to where they were needed, while the colonial militia were required to move around the outside of his formation. Percy placed Smith 's men in the middle of the column, while the 23rd Regiment 's line companies made up the column 's rear guard. Because of information provided by Smith and Pitcairn about how the Americans were attacking, Percy ordered the rear guard to be rotated every mile or so, to allow some of his troops to rest briefly. Flanking companies were sent to both sides of the road, and a powerful force of Marines acted as the vanguard to clear the road ahead.
During the respite at Lexington, Brigadier General William Heath arrived and took command of the militia. Earlier in the day, he had traveled first to Watertown to discuss tactics with Joseph Warren, who had left Boston that morning, and other members of the Massachusetts Committee of Safety. Heath and Warren reacted to Percy 's artillery and flankers by ordering the militiamen to avoid close formations that would attract cannon fire. Instead, they surrounded Percy 's marching square with a moving ring of skirmishers at a distance to inflict maximum casualties at minimum risk.
A few mounted militiamen on the road would dismount, fire muskets at the approaching regulars, then remount and gallop ahead to repeat the tactic. Unmounted militia would often fire from long range, in the hope of hitting somebody in the main column of soldiers on the road and surviving, since both British and colonials used muskets with an effective combat range of about 50 yards (46 m). Infantry units would apply pressure to the sides of the British column. When it moved out of range, those units would move around and forward to re-engage the column further down the road. Heath sent messengers out to intercept arriving militia units, directing them to appropriate places along the road to engage the regulars. Some towns sent supply wagons to assist in feeding and rearming the militia. Heath and Warren did lead skirmishers in small actions into battle themselves, but it was the presence of effective leadership that probably had the greatest impact on the success of these tactics. Percy wrote of the colonial tactics, "The rebels attacked us in a very scattered, irregular manner, but with perseverance and resolution, nor did they ever dare to form into any regular body. Indeed, they knew too well what was proper, to do so. Whoever looks upon them as an irregular mob, will find himself very much mistaken. ''
The fighting grew more intense as Percy 's forces crossed from Lexington into Menotomy. Fresh militia poured gunfire into the British ranks from a distance, and individual homeowners began to fight from their own property. Some homes were also used as sniper positions, turning the situation into a soldier 's nightmare: house - to - house fighting. Jason Russell pleaded for his friends to fight alongside him to defend his house by saying, "An Englishman 's home is his castle. '' He stayed and was killed in his doorway. His friends, depending on which account is to be believed, either hid in the cellar, or died in the house from bullets and bayonets after shooting at the soldiers who followed them in. The Jason Russell House still stands and contains bullet holes from this fight. A militia unit that attempted an ambush from Russell 's orchard was caught by flankers, and eleven men were killed, some allegedly after they had surrendered.
Percy lost control of his men, and British soldiers began to commit atrocities in retaliation for the supposed scalping at the North Bridge and for their own casualties at the hands of a distant, often unseen enemy. Based on the word of Pitcairn and other wounded officers from Smith 's command, Percy had learned that the Minutemen were using stone walls, trees and buildings in these more thickly settled towns closer to Boston to hide behind and shoot at the column. He ordered the flank companies to clear the colonial militiamen out of such places.
Many of the junior officers in the flank parties had difficulty stopping their exhausted, enraged men from killing everyone they found inside these buildings. For example, two innocent drunks who refused to hide in the basement of a tavern in Menotomy were killed only because they were suspected of being involved with the day 's events. Although many of the accounts of ransacking and burnings were exaggerated later by the colonists for propaganda value (and to get financial compensation from the colonial government), it is certainly true that taverns along the road were ransacked and the liquor stolen by the troops, who in some cases became drunk themselves. One church 's communion silver was stolen but was later recovered after it was sold in Boston. Aged Menotomy resident Samuel Whittemore killed three regulars before he was attacked by a British contingent and left for dead. (He recovered from his wounds and later died in 1793 at age 98.) All told, far more blood was shed in Menotomy and Cambridge than elsewhere that day. The colonists lost 25 men killed and nine wounded there, and the British lost 40 killed and 80 wounded, with the 47th Foot and the Marines suffering the highest casualties. For each side, these losses amounted to about half of its fatalities for the entire day.
The British troops crossed the Menotomy River (today known as Alewife Brook) into Cambridge, and the fight grew more intense. Fresh militia arrived in close array instead of in a scattered formation, and Percy used his two artillery pieces and flankers at a crossroads called Watson 's Corner to inflict heavy damage on them.
Earlier in the day, Heath had ordered the Great Bridge to be dismantled. Percy 's brigade was about to approach the broken - down bridge and a riverbank filled with militia when Percy directed his troops down a narrow track (now Beech Street, near present - day Porter Square) and onto the road to Charlestown. The militia (now numbering about 4,000) were unprepared for this movement, and the circle of fire was broken. An American force moved to occupy Prospect Hill (in modern - day Somerville), which dominated the road, but Percy moved his cannon to the front and dispersed them with his last rounds of ammunition.
A large militia force arrived from Salem and Marblehead. They might have cut off Percy 's route to Charlestown, but these men halted on nearby Winter Hill and allowed the British to escape. Some accused the commander of this force, Colonel Timothy Pickering, of permitting the troops to pass because he still hoped to avoid war by preventing a total defeat of the regulars. Pickering later claimed that he had stopped on Heath 's orders, but Heath denied this. It was nearly dark when Pitcairn 's Marines repelled a final attack on Percy 's rear as they entered Charlestown. The regulars took up strong positions on the hills of Charlestown. Some of them had been without sleep for two days and had marched 40 miles (64 km) in 21 hours, eight hours of which had been spent under fire. But now they held high ground protected by heavy guns from HMS Somerset. Gage quickly sent over line companies of two fresh regiments -- the 10th and 64th -- to occupy the high ground in Charlestown and build fortifications. Although they were begun, the fortifications were never completed and would later be a starting point for the militia works built two months later in June before the Battle of Bunker Hill. General Heath studied the position of the British Army and decided to withdraw the militia to Cambridge.
In the morning, Boston was surrounded by a huge militia army, numbering over 15,000, which had marched from throughout New England. Unlike the Powder Alarm, the rumors of spilled blood were true, and the Revolutionary War had begun.
Now under the leadership of General Artemas Ward, who arrived on the 20th and replaced Brigadier General William Heath, they formed a siege line extending from Chelsea, around the peninsulas of Boston and Charlestown, to Roxbury, effectively surrounding Boston on three sides. In the days immediately following, the size of the colonial forces grew, as militias from New Hampshire, Rhode Island, and Connecticut arrived on the scene. The Second Continental Congress adopted these men into the beginnings of the Continental Army. Even now, after open warfare had started, Gage still refused to impose martial law in Boston. He persuaded the town 's selectmen to surrender all private weapons in return for promising that any inhabitant could leave town.
The battle was not a major one in terms of tactics or casualties. However, in terms of supporting the British political strategy behind the Intolerable Acts and the military strategy behind the Powder Alarms, the battle was a significant failure because the expedition contributed to the fighting it was intended to prevent, and because few weapons were actually seized.
The battle was followed by a war for British political opinion. Within four days of the battle, the Massachusetts Provincial Congress had collected scores of sworn testimonies from militiamen and from British prisoners. When word leaked out a week after the battle that Gage was sending his official description of events to London, the Provincial Congress sent a packet of these detailed depositions, signed by over 100 participants in the events, on a faster ship. The documents were presented to a sympathetic official and printed by the London newspapers two weeks before Gage 's report arrived. Gage 's official report was too vague on particulars to influence anyone 's opinion. George Germain, no friend of the colonists, wrote, "the Bostonians are in the right to make the King 's troops the aggressors and claim a victory. '' Politicians in London tended to blame Gage for the conflict instead of their own policies and instructions. The British troops in Boston variously blamed General Gage and Colonel Smith for the failures at Lexington and Concord.
The day after the battle, John Adams left his home in Braintree to ride along the battlefields. He became convinced that "the Die was cast, the Rubicon crossed. '' Thomas Paine in Philadelphia had previously thought of the argument between the colonies and the Home Country as "a kind of law - suit '', but after news of the battle reached him, he "rejected the hardened, sullen - tempered Pharaoh of England forever. '' George Washington received the news at Mount Vernon and wrote to a friend, "the once - happy and peaceful plains of America are either to be drenched in blood or inhabited by slaves. Sad alternative! But can a virtuous man hesitate in his choice? '' A group of hunters on the frontier named their campsite Lexington when they heard news of the battle in June. It eventually became the city of Lexington, Kentucky.
It was important to the early American government that an image of British fault and American innocence be maintained for this first battle of the war. The history of Patriot preparations, intelligence, warning signals, and uncertainty about the first shot was rarely discussed in the public sphere for decades. The story of the wounded British soldier at the North Bridge, hors de combat, struck down on the head by a Minuteman using a hatchet, the purported "scalping '', was strongly suppressed. Depositions mentioning some of these activities were not published and were returned to the participants (this notably happened to Paul Revere). Paintings portrayed the Lexington fight as an unjustified slaughter.
The issue of which side was to blame grew during the early nineteenth century. For example, older participants ' testimony in later life about Lexington and Concord differed greatly from their depositions taken under oath in 1775. All now said the British fired first at Lexington, whereas fifty or so years before, they were n't sure. All now said they fired back, but in 1775, they said few were able to. The "Battle '' took on an almost mythical quality in the American consciousness. Legend became more important than truth. A complete shift occurred, and the Patriots were portrayed as actively fighting for their cause, rather than as suffering innocents. Paintings of the Lexington skirmish began to portray the militia standing and fighting back in defiance.
Ralph Waldo Emerson immortalized the events at the North Bridge in his 1837 "Concord Hymn ''. The "Concord Hymn '' became important because it commemorated the beginning of the American Revolution and because, for much of the 19th century, it was a means by which Americans learned about the Revolution, helping to forge the identity of the nation.
After 1860, several generations of schoolchildren memorized Henry Wadsworth Longfellow 's poem "Paul Revere 's Ride ''. Historically it is inaccurate (for example, Paul Revere never made it to Concord), but it captures the idea that an individual can change the course of history.
In the 20th century, popular and historical opinion varied about the events of the historic day, often reflecting the political mood of the time. Isolationist anti-war sentiments before the World Wars bred skepticism about the nature of Paul Revere 's contribution (if any) to the efforts to rouse the militia. Anglophilia in the United States after the turn of the twentieth century led to more balanced approaches to the history of the battle. During World War I, a film about Paul Revere 's ride was seized under the Espionage Act of 1917 for promoting discord between the United States and Britain.
During the Cold War, Revere was used not only as a patriotic symbol, but also as a capitalist one. In 1961, novelist Howard Fast published April Morning, an account of the battle from a fictional 15 - year - old 's perspective, and the book has frequently been assigned reading in American secondary schools. A film version was produced for television in 1987, starring Chad Lowe and Tommy Lee Jones. In the 1990s, parallels were drawn between American tactics in the Vietnam War and those of the British Army at Lexington and Concord.
The site of the battle in Lexington is now known as the Lexington Battle Green, has been listed on the National Register of Historic Places, and is a National Historic Landmark. Several memorials commemorating the battle have been established there.
The lands surrounding the North Bridge in Concord, as well as approximately 5 miles (8.0 km) of the road along with surrounding lands and period buildings between Meriam 's Corner and western Lexington are part of Minuteman National Historical Park. There are walking trails with interpretive displays along routes that the colonists might have used that skirted the road, and the Park Service often has personnel (usually dressed in period dress) offering descriptions of the area and explanations of the events of the day. A bronze bas relief of Major Buttrick, designed by Daniel Chester French and executed by Edmond Thomas Quinn in 1915, is in the park, along with French 's Minute Man statue.
Four current units of the Massachusetts National Guard (181st Infantry, 182nd Infantry, 101st Engineer Battalion, and 125th Quartermaster Company) are derived from American units that participated in the Battles of Lexington and Concord. There are only thirty current units of the U.S. Army with colonial roots.
Several ships of the United States Navy, including two World War II aircraft carriers, were named in honor of the Battle of Lexington.
Patriots ' Day is celebrated annually in honor of the battle in Massachusetts, Maine, and by the Wisconsin public schools, on the third Monday in April. Re-enactments of Paul Revere 's ride and of the battle on the Lexington Green are staged, and ceremonies and firings are held at the North Bridge.
The Town of Concord invited 700 prominent U.S. citizens and leaders from the worlds of government, the military, the diplomatic corps, the arts, sciences, and humanities to commemorate the 200th anniversary of the battles. On April 19, 1975, as a crowd estimated at 110,000 gathered to view a parade and celebrate the Bicentennial in Concord, President Gerald Ford delivered a major speech near the North Bridge, which was televised to the nation.
Freedom was nourished in American soil because the principles of the Declaration of Independence flourished in our land. These principles, when enunciated 200 years ago, were a dream, not a reality. Today, they are real. Equality has matured in America. Our inalienable rights have become even more sacred. There is no government in our land without consent of the governed. Many other lands have freely accepted the principles of liberty and freedom in the Declaration of Independence and fashioned their own independent republics. It is these principles, freely taken and freely shared, that have revolutionized the world. The volley fired here at Concord two centuries ago, ' the shot heard round the world ', still echoes today on this anniversary.
President Ford laid a wreath at the base of The Minute Man statue and then respectfully observed as Sir Peter Ramsbotham, the British Ambassador to the United States, laid a wreath at the grave of British soldiers killed in the battle.
|
where is great sand dunes national park located | Great Sand Dunes National Park and preserve - wikipedia
Great Sand Dunes National Park and Preserve is an American national park that conserves an area of large sand dunes up to 750 feet (229 m) tall on the eastern edge of the San Luis Valley, and an adjacent national preserve located in the Sangre de Cristo Range, in south - central Colorado, United States. The park was originally designated Great Sand Dunes National Monument on March 17, 1932 by President Herbert Hoover. The original boundaries protected an area of 35,528 acres (55.5 sq mi; 143.8 km²). A boundary change and redesignation as a national park and preserve was authorized on November 22, 2000 and then established by an act of Congress on September 24, 2004. The park encompasses 107,342 acres (167.7 sq mi; 434.4 km²) while the preserve protects an additional 41,686 acres (65.1 sq mi; 168.7 km²) for a total of 149,028 acres (232.9 sq mi; 603.1 km²). The recreational visitor total was 486,935 in 2017, about 25 % more than the 388,308 visitors of 2016.
The park contains the tallest sand dunes in North America. The dunes cover an area of about 30 sq mi (78 km²) and are estimated to contain over 5 billion cubic meters of sand. Sediments from the surrounding mountains filled the valley over geologic time periods. After lakes within the valley receded, exposed sand was blown by the predominant southwest winds toward the Sangre de Cristos, eventually forming the dunefield over an estimated period of tens of thousands of years. The four primary components of the Great Sand Dunes system are the mountain watershed, the dunefield, the sand sheet, and the sabkha. Ecosystems within the mountain watershed include alpine tundra, subalpine forests, montane woodlands, and riparian zones.
Evidence of human habitation in the San Luis Valley dates back about 11,000 years. The first historic peoples to inhabit the area were the Southern Ute Tribe, while Apaches and Navajo also have cultural connections in the dunes area. In the late 17th century, Don Diego de Vargas -- a Spanish governor of Santa Fe de Nuevo México -- became the first European on record to enter the San Luis Valley. Juan Bautista de Anza, Zebulon Pike, John C. Frémont, and John Gunnison all travelled through and explored parts of the region in the 18th and 19th centuries. The explorers were soon followed by settlers who ranched, farmed and mined in the valley starting in the late 19th century. The park was first established as a national monument in 1932 to protect it from gold mining and the potential of a concrete manufacturing business.
Visitors must walk across the wide and shallow Medano Creek to reach the dunes in spring and summer months. The creek typically has a peak flow from late May to early June in most years. From July through April, the creek is usually no more than a few inches deep, if there is any water at all. Hiking is permitted throughout the dunes with the warning that the sand surface temperature may reach 150 ° F (66 ° C) in summer. Sandboarding and sandsledding are popular activities, both done on specially designed equipment which can be rented just outside the park entrance or in Alamosa. Visitors with street - legal four - wheel drive vehicles may continue past the end of the park 's main road to Medano Pass on 22 miles (35 km) of unpaved road, crossing the stream bed of Medano Creek nine times and traversing 4 miles (6.4 km) of deep sand. Hunting is permitted in the preserve during the months of autumn, while hunting is prohibited within national park boundaries at all times. The preserve encompasses nearly all of the mountainous areas north and east of the dunefield, up to the ridgeline of the Sangre de Cristos.
The oldest evidence of humans in the area dates back about 11,000 years. Some of the first people to enter the San Luis Valley and the Great Sand Dunes area were nomadic hunter - gatherers whose connection to the area centered around the herds of mammoths and prehistoric bison. They were Stone Age people who hunted with large stone spear or dart points now identified as Clovis and Folsom points. These people only stayed when hunting and plant gathering was good, and avoided the region during times of drought and scarcity.
Modern American Indian tribes were familiar with the area when Spaniards first arrived in the 17th century. The traditional Ute phrase for the Great Sand Dunes is Saa waap maa nache, "sand that moves. '' Jicarilla Apaches settled in northern New Mexico and called the dunes Sei - anyedi, "it goes up and down. '' Blanca Peak, just southeast of the dunes, is one of the four sacred mountains of the Navajo, who call it Sisnaajini. These various tribes collected the inner layers of bark from ponderosa pine trees for use as food and medicine. The people from the Tewa / Tiwa - speaking pueblos along the Rio Grande remember a traditional site of great importance located in the valley near the dunes: the lake through which their people emerged into the present world. They call the lake Sip'ophe, meaning "Sandy Place Lake '', which is thought to be the springs and / or lakes immediately west of the dunefield.
In 1694, Don Diego de Vargas became the first European known to have entered the San Luis Valley, although herders and hunters from the Spanish colonies in present - day northern New Mexico probably entered the valley as early as 1598. De Vargas and his men hunted a herd of 500 bison in the southern part of the valley before returning to Santa Fe. In 1776, Juan Bautista de Anza and an entourage of men and livestock probably passed near the dunes as they returned from a punitive raid against a group of Comanches. At this time, the valley was a travel route between the High Plains and Santa Fe for Comanches, Utes, and Spanish soldiers. The dunes were likely a visible landmark for travelers along the trail.
The first known writings about Great Sand Dunes appear in Zebulon Pike 's journals of 1807. As Lewis and Clark 's expedition was returning east, U.S. Army Lt. Pike was commissioned to explore as far west as the Arkansas and Red Rivers. By the end of November 1806, Pike and his men had reached the site of today 's Pueblo, Colorado. Still pushing southwest, and confused about the location of the Arkansas River, Pike crossed the Sangre de Cristos just above the Great Sand Dunes.
After marching some miles, we discovered... at the foot of the White Mountains (today 's Sangre de Cristos) which we were then descending, sandy hills... When we encamped, I ascended one of the largest hills of sand, and with my glass could discover a large river (the Rio Grande)... The sand - hills extended up and down the foot of the White Mountains about 15 miles, and appeared to be about 5 miles in width. Their appearance was exactly that of the sea in a storm, except as to color, not the least sign of vegetation existing thereon.
In 1848, John C. Frémont was hired to find a railroad route from St. Louis to California. He crossed the Sangre de Cristos into the San Luis Valley in winter, courting disaster but proving that a winter crossing of this range was possible. He was followed in 1853 by Captain John Gunnison of the Corps of Topographical Engineers. Gunnison 's party crossed the dunefield on horseback.
In the years that followed, the Rockies were gradually explored, treaties were signed and broken with resident tribes, and people with widely differing goals entered the San Luis Valley from the United States and Mexico. In 1852, Fort Massachusetts was built and then relocated to Fort Garland, about 20 miles southeast of the Great Sand Dunes, to safeguard travel for settlers following the explorers into the valley. Although many settlers arrived via the trails from Santa Fe or La Veta Pass, several routes over the Sangre de Cristos into the valley were well - known to American Indians and increasingly used by settlers in the late 1800s. Medano Pass, also known as Sand Hill Pass, and Mosca Pass, also called Robidoux 's Pass, offered more direct routes from the growing front range cities and dropped into the valley just east of the Great Sand Dunes. Trails were improved into wagon routes and eventually into rough roads. The Mosca Pass Toll Road was developed in the 1870s, and stages and the mail route used it regularly through about 1911 when the western portion was damaged in a flash flood. Partially rebuilt at times in the 1930s through the 1950s, the road was repeatedly closed due to flood damage and is now a hiking trail.
The Herard family -- after whom Mount Herard is named -- established a ranch and homestead along Medano Creek in 1875, using the old Medano Pass Road to travel to and from their home. The modern unpaved road follows the old route and is open only to four - wheel drive, high - clearance vehicles as it passes through deep sand, rises to Medano Pass, and continues east into the Wet Mountain Valley. The Herards grazed and bred cattle in the mountain meadows, raised horses, and established a trout hatchery in the stream. Other families homesteaded near the dunes as well, including the Teofilo Trujillo family whose sheep and cattle ranch in the valley later became part of the Medano -- Zapata Ranch, owned by the Nature Conservancy since 1999. The Trujillo 's extant homestead and the ruins of a destroyed one were declared a National Historic Landmark in 2004. Frank and Virginia Wellington built a cabin and hand - dug the irrigation ditch that parallels Wellington Ditch Trail located south of the park campground.
Gold and silver rushes occurred around the Rockies after 1853, bringing miners by the thousands into the state and stimulating mining businesses that are still in operation. Numerous small strikes occurred in the mountains around the San Luis Valley. People had frequently speculated that gold might be present in the Great Sand Dunes, and local newspapers ran articles in the 1920s estimating its worth at anywhere from 17 cents / ton to $3 / ton. Active placer mining operations sprang up along Medano Creek, and in 1932 the Volcanic Mining Company established a gold mill designed to recover gold from the sand. Although minute quantities of gold were recovered, the technique was too labor - intensive, the stream too seasonal, and the pay - out too small to support any business for long.
The idea that the dunes could be destroyed by gold mining or concrete manufacturing alarmed residents of Alamosa and Monte Vista. By the 1920s, the dunes had become a source of pride for local people, and a potential source of tourist dollars for local businesses. Members of the P.E.O. Sisterhood sponsored a bill to Congress asking for national monument status for Great Sand Dunes. Widely supported by local people, the bill was signed into law in 1932 by President Herbert Hoover. Similar support in the late 1990s resulted in the monument 's expansion into a national park and preserve in 2000 - 2004.
The park contains the tallest sand dunes in North America, rising to a maximum height of 750 feet (229 m) from the floor of the San Luis Valley on the western base of the Sangre de Cristo Range. The dunes cover an area of about 30 sq mi (78 km²) and are estimated to contain over 5 billion cubic meters of sand.
Creation of the San Luis Valley began when the Sangre de Cristo Range was uplifted in the rotation of a large tectonic plate. The San Juan Mountains to the west of the valley were created through extended and dramatic volcanic activity. The San Luis Valley encompasses the area between the two mountain ranges and is roughly the size of the state of Connecticut. Sediments from both mountain ranges filled the deep chasm of the valley, along with huge amounts of water from melting glaciers and rain. The presence of larger rocks along Medano Creek at the base of the dunes, elsewhere on the valley floor, and in buried deposits indicates that some of the sediment has been washed down in torrential flash floods.
In 2002, geologists discovered lakebed deposits on hills in the southern part of the valley, confirming theories of a huge lake that once covered much of the San Luis Valley floor. The body of water was named Lake Alamosa after the largest town in the valley. Lake Alamosa suddenly receded after its extreme water pressure broke through volcanic deposits in the southern end of the valley. The water then drained through the Rio Grande, likely forming the steep Rio Grande Gorge near Taos, New Mexico. Smaller lakes still covered the valley floor, including two broad lakes in the northeastern side of the valley. Large amounts of sediment from the volcanic San Juan Mountains continued to wash down into these lakes, along with some sand from the Sangre de Cristo Range. Dramatic natural climate change later significantly reduced these lakes, leaving behind the sand sheet. Remnants of these lakes still exist in the form of sabkha wetlands.
Sand that was left behind after the lakes receded blew with the predominant southwest winds toward a low curve in the Sangre de Cristo Range. The wind funnels toward three mountain passes -- Mosca, Medano, and Music Passes -- and the sand accumulates in this natural pocket. The winds blow from the valley floor toward the mountains, but during storms the winds blow back toward the valley. These opposing wind directions cause the dunes to grow vertically. Two mountain streams -- Medano and Sand Creeks -- also capture sand from the mountain side of the dunefield and carry it around the dunes and back to the valley floor. The creeks then disappear into the sand sheet, and the sand blows back into the dunefield. Barchan and transverse dunes form near these creeks. The combination of opposing winds, a huge supply of sand from the valley floor, and the sand recycling action of the creeks, are all part of the reason that these are the tallest dunes in North America.
Sufficient vegetation has grown on the valley floor that there is little sand blowing into the main dunefield from the valley; however, small parabolic dunes continue to originate in the sand sheet and migrate across grasslands, joining the main dunefield. Some of these migrating dunes become covered by grasses and shrubs and stop migrating. The dunes system is fairly stable as the opposing wind directions balance each other out over time. Also, the main dunefield is moist beneath a thin layer of dry surface sand. While the top few inches of sand are blown around during windstorms, the moist sand remains largely in place.
Scientists estimate that Lake Alamosa disappeared about 440,000 years ago, but the dunes themselves apparently originate from sand deposits from later, smaller lakes. A relatively new dating process, optically stimulated luminescence (OSL), is still in development. This method takes core samples of sand from deep within a dune, and attempts to measure how long quartz grains have been buried in the dark. If the deepest sand deposits can be accurately dated, the age of the dunes could be determined. Samples of sand from deep in the dunes have returned OSL dates varying between a few hundred years to tens of thousands of years old. The oldest dated deposits found so far would have accumulated in the late Pleistocene epoch, during the middle years of the current ice age 's third stage.
The dunes contain dark areas which are deposits of magnetite, a mineral that has eroded out of the Sangre de Cristo Range. Magnetite is both attracted to a magnet and can be magnetized to become a magnet itself; it is the most magnetic mineral in nature. Magnetite is an oxide of iron which is heavier than the surrounding particles of sand. When overlying sand is removed by wind, magnetite deposits stay in place and are visible as dark patches in the dunefield.
Great Sand Dunes National Park and Preserve is located in Saguache and Alamosa Counties, Colorado at approximately 37.75 ° north latitude and 105.5 ° west longitude. The national park is located in the San Luis Valley while the national preserve is located to the east in an adjacent section of the Sangre de Cristo Range of the Rocky Mountains. Elevations range from 7,515 ft (2,291 m) in the valley west of the dunes, to 13,604 ft (4,146 m) at the summit of Tijeras Peak in the northern part of the preserve.
The dunes cover an area of about 30 sq mi (78 km²) while the surrounding relatively flat sand sheet which feeds the large dunes is actually the largest component of the entire dunes system, containing about 90 % of all the sand in the park. The forested and often snowcapped mountains exceeding 13,000 ft (4,000 m) to the east are the most prominent feature, towering over the high dunes. Other features include snow - fed creeks originating high in the mountains, and several alpine lakes. Two spring - fed creeks in the sand sheet along with a few small lakes in the valley 's sabkha section southwest of the dunes create a wetland that nurtures wildlife.
The main dunefield measures roughly 4 mi (6.4 km) east - to - west and 6 mi (9.7 km) north - to - south, with an adjacent 6 sq mi (16 km²) area to the northwest called the Star Dune Complex, for a total of about 30 sq mi (78 km²). The park and preserve together are approximately 15 mi (24 km) east - to - west at the widest point, and approximately 15 mi (24 km) north - to - south, also at the widest point. The park encompasses 107,342 acres (167.7 sq mi; 434.4 km²), while the preserve protects an additional 41,686 acres (65.1 sq mi; 168.7 km²) for a total of 149,028 acres (232.9 sq mi; 603.1 km²).
The Rio Grande National Forest is located to the north and southeast while the remaining forested slopes directly to the east of the dunes were redesignated the Great Sand Dunes National Preserve. The San Isabel National Forest is located to the east of the preserve just beyond the ridge of the Sangre de Cristo Range. Private property abuts most of the southern boundary of the park. The San Luis Lakes State Wildlife Area lies adjacent to the southwestern corner of the park, while the Rio Grande flows through the valley farther to the southwest. The Baca National Wildlife Refuge lies adjacent to the west, and the slopes of the San Juan Mountains begin at the western edge of the valley. Private property of the Baca Grande subdivision of Crestone lies adjacent to the northwest.
The nearest city is Alamosa which is about 30 mi (48 km) away by road to the southwest. The nearest towns are Crestone to the north, Mosca and Hooper to the west, Blanca and Fort Garland to the south, and Gardner to the east. Colorado Springs and Denver are located a few hours away by car to the northeast. The major roads through the San Luis Valley are U.S. Route 160 on an east -- west alignment passing south of the park, and U.S. Route 285 on a north -- south alignment passing west of the park and generally parallel to Colorado State Highway 17, which is the closer of the two north -- south roadways.
The Great Sand Dunes are located in the high elevation desert of the San Luis Valley, just west of the Sangre de Cristo Range. The summer high temperatures are atypical for a desert area as the average high temperature is only slightly above 80 ° F (27 ° C) in the warmest month of July; however, the large spread between high and low temperatures is typical of a high desert climate. Low temperatures during winter nights can be extremely cold, with average low temperatures well below 32 ° F (0 ° C) and record low temperatures below 0 ° F (− 18 ° C) from November through April. Precipitation is very low on the dunes, averaging just 11.13 inches (283 mm) of rainfall per year. The high evaporation rates on the dunes qualify the area as a desert, even though precipitation exceeds 10 inches (250 mm).
Spring can sometimes bring high winds, mainly in the afternoon. Spring conditions vary greatly, from mild and sunny to cold and snowy. March is the snowiest month of the year, though some days are above 50 ° F (10 ° C). In late spring, when Medano Creek usually has its peak flow, snow or high winds are still possible. In summer, daytime high temperatures average 75 -- 80 ° F (24 -- 27 ° C); however, sand surface temperatures can soar to 150 ° F (66 ° C) on sunny afternoons. Summer nights are cool -- the park is located 8,200 ft (2,500 m) above sea level -- with nighttime temperatures often dropping below 50 ° F (10 ° C). Afternoon thundershowers are common in July and August with cool winds, heavy rain and lightning. Fall is generally mild, with Indian summer days when temperatures reach 60 ° F (16 ° C) and higher, but chilly nights below freezing. Occasional fall storms bring icy rain or snow. Cold temperatures well below freezing are typical in winter, although the sunshine is generally abundant and the dry air does n't feel as cold as more humid areas. The average winter high temperatures are just above freezing even in January, which is the coldest month of the year.
The four primary components of the Great Sand Dunes system are the mountain watershed, the dunefield, the sand sheet, and the sabkha. The mountain watershed receives heavy snow and rain which feeds creeks that flow down from alpine tundra and lakes, through subalpine and montane woodlands, and finally around the main dunefield. Sand that has blown from the valley floor is captured in streams and carried back toward the valley. As creeks disappear into the valley floor, the sand is picked up and carried to the dunefield once again. The recycling action of water and wind, along with a 7 % moisture content below the dry surface holding the sand together, contributes to the great height of the dunes.
The dunefield is composed of reversing dunes, transverse dunes -- also called crescentic or barchan dunes -- and star dunes. The sand sheet is the largest component of the system, comprising sandy grasslands that extend around three sides of the dunefield. Almost 90 % of the sand deposit is found in the sand sheet, while the remainder is found in the dunefield. Small parabolic dunes form in the sand sheet and then migrate into the main dunefield. Nabkha dunes form around vegetation. The sabkha forms where sand is seasonally saturated by rising ground water. When the water evaporates away in late summer, minerals similar to baking soda cement sand grains together into a hard, white crust. Areas of sabkha can be found throughout western portions of the sand sheet, wherever the water table meets the surface. Some wetlands in the sabkha are deeper with plentiful plants and animals, while others are shallow and salty.
There are hundreds of plant species in the park and preserve, adapted for environments as diverse as alpine tundra and warm water wetlands. Trees include aspen, Douglas fir, pinyon pine, ponderosa pine, Rocky Mountain juniper, three - leaf sumac, bristlecone pine, red osier dogwood, and narrow - leaf cottonwood.
Among the flowering plants are alpine phlox, dwarf clover, alpine forget - me - not, fairy primrose, alpine aven, Indian paintbrush, lousewort, blue - purple penstemon, aspen daisy, western paintbrush, elephantella, snow buttercups, scurfpea, Indian ricegrass, blowout grass, prairie sunflower, Rocky Mountain beeplant, rubber rabbitbrush, speargrass, small - flowered sand verbena, narrowleaf yucca, prickly pear cactus, Rocky Mountain iris, and white water buttercup.
Inland saltgrass is the primary type of grass around sabkha wetlands in the park.
Mammals include -- from alpine tundra to low elevation grasslands -- the pika, yellow - bellied marmot, bighorn sheep, black bear, snowshoe hare, Abert 's squirrel, cougar, mule deer, water shrew, beaver, kangaroo rat, badger, pronghorn, and elk. Over 1500 bison are currently ranched within park boundaries on private land owned by The Nature Conservancy which is only open to the public through tours.
Over 200 species of birds are found throughout the park. From higher to lower elevations and dependent on season, some of the bird species include the brown - capped rosy finch, white - tailed ptarmigan, red - breasted nuthatch, peregrine falcon, mountain bluebird, northern pygmy owl, dusky grouse, hummingbird (four species), western tanager, burrowing owl, bald eagle, golden eagle, sandhill crane, American avocet, and great blue heron.
Various reptiles live in the park, such as the short - horned lizard, fence lizard, many - lined skink, bullsnake, and garter snake.
Fish living in the park 's streams include the Rio Grande cutthroat trout, Rio Grande sucker (Catostomus plebeius), and fathead minnow.
Amphibians include the tiger salamander, chorus frog, northern leopard frog, spadefoot toad, Great Plains toad, and Woodhouse 's toad.
The park harbors several endemic insects including the Great Sand Dunes tiger beetle, a circus beetle (Eleodes hirtipennis), Werner 's (Amblyderus werneri) and Triplehorn 's (Amblyderus triplehorni) ant - like flower beetle, as well as undescribed species of clown beetle, noctuid moth, and robber fly. More than a thousand different kinds of arthropods have been found at the Great Sand Dunes.
Alpine tundra is the highest elevation ecosystem at Great Sand Dunes where the conditions are too harsh for trees to survive, but wildflowers, pikas, yellow - bellied marmots, ptarmigans, and bighorn sheep thrive. The tundra begins about 11,700 ft (3,600 m) and continues upward to the highest peaks in the park. At subalpine elevations near the tree line grow krummholz (meaning "crooked wood '') -- trees which are stunted and twisted due to high winds, snow, ice, short growing seasons, and shallow, poorly developed soils. The transition zone between subalpine forest and alpine tundra is an important refuge during storms for some mammals and birds who primarily live on tundra. Bristlecone and limber pines grow at an extremely slow rate, with small statures that belie their true ages as some are more than a thousand years old.
Subalpine forests and meadows capture heavy snow in winter and soaking rains in summer. The highest diversity of Rocky Mountain species of plants and animals are found in this ecosystem. Subalpine forest extends from 9,500 ft (2,900 m) to tree line at 11,700 ft (3,600 m). Montane forests and woodlands are found along the drier foothills at approximately 8,000 ft (2,400 m) to 9,500 ft (2,900 m). Pinyon - juniper and ponderosa pine woodlands are found on open, drier slopes, while cottonwood and aspen trees are in drainages. Cougars hunt mule deer here at night. Owls, dusky grouse, turkeys, and bullsnakes all find habitat in these drier, open woodlands.
The riparian zone follows creeks through the subalpine and montane ecosystems at Great Sand Dunes. Cottonwood and aspen trees, red osier dogwood, and alder grow well in this wet environment, in turn providing shade and habitat for black bears, water shrews, and western tanagers. Rio Grande cutthroat trout are found in Medano Creek.
While the top few inches of the dunefield are often dry, the dunes are moist year - round due to ongoing precipitation. Moisture content of 7 % beneath the surface sand allows species such as Ord 's kangaroo rat, Great Sand Dunes tiger beetle, scurfpea, and blowout grass to survive here. Many animals visit the dunes from other habitats, including elk, pronghorn, bison, coyotes, bobcats, and raptors.
The sand sheet includes extensive grasslands and shrublands that surround the dunefield on three sides, from 7,500 ft (2,300 m) to 8,200 ft (2,500 m). The sand sheet varies from wet meadows to cool - grass prairie to desert shrubland, depending on proximity to groundwater and soil type. Elk and pronghorn are common, while burrowing owls nest in the ground and other raptors float through the skies searching for mice, kangaroo rats, and short - horned lizards.
The sabkha is a wetland region where groundwater rises and falls seasonally, leaving white alkali deposits on the surface. Inland saltgrass is common in this area. Toads can reproduce in sabkha wetlands when they are seasonally filled with sufficient fresh water. Shore birds such as the American avocet hunt tadpoles and insects in the shallow water. Wetlands speckle the San Luis Valley and are important habitat for sandhill cranes, shore birds, amphibians, dragonflies, and freshwater shrimp. Grassland species such as elk also use these waters for drinking. The sabkha and wetlands are at approximately 7,500 ft (2,300 m) in elevation.
The park preserves the tallest sand dunes in North America, as well as alpine lakes and tundra, mountain peaks over 13,000 feet (3,962 m) in elevation, mixed conifer forests, grasslands, and wetlands.
Medano Creek, which borders the east side of the dunes, never finds a permanent and stable streambed as fresh sand falls in the creek. Small underwater sand ridges that act like dams form and break down, creating surges of water which resemble waves. The surges occur at an average interval of about 20 seconds. In a high - water year, the surges can be as high as 20 in (51 cm). The "surge flow '' mainly occurs during the peak flow period from late May to early June in most years.
Big Spring Creek is a unique spring - fed creek formed by an unconfined aquifer which creates wetlands that support rare species and plant communities in a generally arid area. The creek was designated a National Natural Landmark in 2012.
Accessing the dunes requires walking across the wide and shallow Medano Creek. The creek typically flows past the main dunes parking area from late April through late June, with peak flow occurring from late May to early June in most years. In other months, the creek is usually only a few inches deep, if there is any water at all. Hiking is permitted throughout the dunes, with the warning that the sand can get very hot in the summer, up to 150 ° F (66 ° C). Sand wheelchairs are available at the visitor center. Sandboards and sand sleds can be rented just outside the park entrance or in Alamosa which is the closest city.
Mosca Pass Trail is a 7 mi (11 km) roundtrip hike that follows a small creek through aspen and evergreen forests to Mosca Pass -- elevation 9,737 ft (2,968 m) -- in the Sangre de Cristo Range. American Indians and early settlers used this route for travel between the San Luis Valley and the Wet Mountain Valley to the east. Several trails located in the northeastern section of the park lead to alpine lakes high in the mountains. A trail to Medano Lake and the summit of Mount Herard is located off Medano Pass Road. A trail along Sand Creek leads to the Sand Creek Lakes and Music Pass -- elevation 11,380 ft (3,470 m) -- with a view of the Upper Sand Creek basin. Spur trails along Sand Creek lead to the four alpine lakes which feed the creek, and to several 13,000 ft (4,000 m) peaks above the basin. The Sand Ramp Trail traverses between the dunefield and the mountains, connecting the park 's campground to Medano Pass Road (follow the road up to Medano Lake and Pass), as well as the base of the Sand Creek Trail. Most of the park 's grasslands, shrublands and wetlands have no established trails but are generally open to hiking. The Nature Conservancy 's Medano Ranch property may only be visited on a guided tour. A fence surrounds the property to contain the Conservancy 's bison herd and demarcate the boundaries.
Medano Pass Road is a 22 mi (35 km) four - wheel drive (4WD) road that begins where the main park road ends. The unpaved road crosses Medano Creek nine times and traverses 4 mi (6.4 km) of deep sand. Only street - licensed 4WD motor vehicles or motorcycles, and bicycles are permitted. Fat tire bikes are the only type of bicycle recommended by the park service due to the deep sandy stretches. The road winds around the eastern side of the dunefield, up through a forested mountain canyon inside the National Preserve, and then over Medano Pass -- elevation 9,982 ft (3,043 m) -- at the 11.2 mi (18.0 km) mark. The road then continues down into the Wet Mountain Valley and connects with Colorado State Highway 69. Travellers are advised that hunting is permitted in the National Preserve during the months of autumn.
Most of the national park and the entire national preserve are open to horseback riding and pack animals. Prohibited zones include all developed areas -- such as the Pinyon Flats campground, picnic areas, and the visitor center -- all paved roadways and many of the hiking trails, along with the dunes area from the parking lot to as far as the High Dune, which is for pedestrian use only. All national park horse camping areas are in the wilderness in designated backcountry campsites or other wilderness areas at least 0.25 mi (0.4 km) from roads or trails. Camping is permitted anywhere in the national preserve and at designated sites along the Medano Pass Road, as long as minimum impact guidelines are followed. Permitted pack animals in the national park and the preserve include horses, mules, burros, donkeys, alpacas, and llamas. Overnight guests at the Zapata Ranch may take guided trips into the park; Zapata Partners is the only NPS - licensed provider of horseback riding in the Great Sand Dunes.
Unlike all other national parks in the contiguous United States, Great Sand Dunes National Park is located directly adjacent to a national preserve. The preserve is also managed by the National Park Service and seasonal hunting is permitted there. Sport hunting has regulations which include not pursuing a wounded animal if it leaves the preserve and enters the national park. Mountain lion hunting with dogs is also allowed in the preserve, but unless the dogs have spotted the lion and are pursuing it, they are required to be leashed. Other game species include turkey, bear, bighorn sheep, elk and mule deer.
The dunes and surrounding area were designated a national monument in 1932 after a bill -- sponsored by the P.E.O. Sisterhood and widely supported by local residents -- was signed into law by President Herbert Hoover. The original monument boundaries protected an area of 35,528 acres (55.5 sq mi; 143.8 km²). Similar support in the late 1990s resulted in the monument 's redesignation as Great Sand Dunes National Park and Preserve in 2000 - 2004. The International Union for Conservation of Nature (IUCN) listed the Great Sand Dunes as a protected landscape (management category V) in 2000, including the national park, the preserve, and the adjacent Baca National Wildlife Refuge.
In 1976, the U.S. Congress designated the Great Sand Dunes Wilderness -- a wilderness area encompassing 32,643 acres (51 sq mi; 132 km²) within the monument. This wilderness is the only one in the U.S. that protects a saltbush - greasewood ecosystem and includes the entire dunefield as well as much of the area west of the dunes. Congress also designated the Sangre de Cristo Wilderness in 1993, which contains a total of 219,776 acres (343.4 sq mi; 889.4 km²) of mountainous terrain. Most of the Sangre de Cristo Wilderness is managed by the U.S. Forest Service while the National Park Service manages the area that has since been designated a national preserve. Mechanized transport and motorized equipment or vehicles are not permitted in wilderness areas, while ATVs are not permitted anywhere in the national park and preserve. The restrictions of a wilderness designation protect native wildlife such as the endemic Great Sand Dunes tiger beetle from potential extinction caused by human activities. Both of the wilderness area designations exclude the existing road corridors that pass through them, specifically the paved park road and the unpaved Medano Pass Road. The IUCN has included the same 51 square miles (132 km²) of dunes and surrounding sand sheet on their global list of wilderness areas (management category Ib) since 1976.
In 1999, the Nature Conservancy purchased surrounding state - owned land. The land is within the Medano -- Zapata Ranch, located to the west and south. Some of the ranch land is located within the current national park boundary in its southwestern corner, and includes a fenced area that contains a bison herd within 44,000 acres (68.8 sq mi; 178.1 km²) -- an area that can only be visited on a guided tour. The objective of the Nature Conservancy, as well as the federal and state governments, is to combine conservation and sustainable use of the ecosystem, in a manner similar to the protected area mosaic recommended in the early 1980s for parts of the Yukon in Canada.
The eventual redesignation as Great Sand Dunes National Park and Preserve was authorized on November 22, 2000 when President Bill Clinton signed the Congressionally - approved Great Sand Dunes National Park and Preserve Act. The act directed the Secretary of the Interior to "establish the Great Sand Dunes National Park when sufficient land having sufficient diversity of resources has been acquired to warrant its designation. '' The new designation as a national park and preserve would not be made official until 2004 after sufficient land had been acquired.
In 2002, the Nature Conservancy purchased the Baca Ranch -- an area of 97,000 acres (151.6 sq mi; 392.5 km²) -- for $31.28 million. Financing was provided by the Department of the Interior, the Colorado State Land Board and private donors. The Baca Ranch had property located in the valley and the adjacent mountains, ranging in elevation from 7,500 ft (2,286 m) west of the dunes to the 14,165 ft (4,317 m) summit of Kit Carson Peak. The purchase approximately tripled the size of the monument. The ranch was split into three sections: the Sangre de Cristos section east of Crestone became part of the Rio Grande National Forest; the section west of the dunes was designated the Baca National Wildlife Refuge and managed by the Fish and Wildlife Service; the section east of the dunes was transferred first to the Rio Grande National Forest and later redesignated a national preserve in 2004 managed by the National Park Service. The national preserve remains open to regulated seasonal hunting, as it was when designated national forest land, but is protected from logging and mining activities which are generally permitted in national forests.
The boundaries of the Great Sand Dunes National Park and Preserve were established by an act of Congress on September 24, 2004.
In 2016, the federal government began negotiations toward purchasing 12,518 acres (19.6 sq mi; 50.7 km²) of the Medano -- Zapata Ranch from the Nature Conservancy. The plan is to complete the park, making it fully accessible to the public, by acquiring the final piece of privately - held land located within the current park boundaries. The land includes the area presently occupied by the bison herd, as well as adjacent meadows and wetland areas. The purchase is expected to be completed by 2018.
|
why is there an indian song in inside man | Inside Man - wikipedia
Inside Man is a 2006 American crime thriller film directed by Spike Lee and written by Russell Gewirtz. The film centers on an elaborate bank heist on Wall Street over a 24 - hour period. It stars Denzel Washington as Detective Keith Frazier, the NYPD 's hostage negotiator; Clive Owen as Dalton Russell, the mastermind who orchestrates the heist; and Jodie Foster as Madeleine White, a Manhattan power broker who becomes involved at the request of the bank 's founder, Arthur Case (Christopher Plummer), to keep something in his safe deposit box protected from the robbers. Inside Man marks the fourth film collaboration between Washington and Lee.
Gewirtz spent five years developing the film 's premise before working on what became his first original screenplay. After he completed the script in 2002, Imagine Entertainment purchased it to be made by Universal Studios, with Imagine co-founder Ron Howard attached to direct. After Howard stepped down, his Imagine partner Brian Grazer began looking for a new director for the project and ultimately hired Lee. Principal photography began in June 2005 and concluded in August; filming took place on location in New York City. Inside Man premiered in New York on March 20, 2006 before being released in North America on March 24, 2006. The film received a generally positive critical response and was a commercial success, grossing over $184 million worldwide.
A man named Dalton Russell sits in an unidentified cell and narrates a story of how he has committed the perfect robbery. In New York, masked robbers, dressed as painters and using variants of the name "Steve '' as aliases, seize control of a Manhattan bank and take the patrons and employees hostage. They divide the hostages into groups and hold them in different rooms, forcing them to don painters clothes identical to their own. The robbers rotate the hostages among various rooms and occasionally insert themselves covertly into the groups. They also take turns working on an unspecified project involving demolishing the floor in one of the bank 's storage rooms.
Police surround the bank and Detectives Keith Frazier and Bill Mitchell take charge of the negotiations. Russell, the leader of the robbers, demands food and the police supply them with pizzas whose boxes include listening devices. The bugs pick up a language which the police identify as Albanian. They discover, however, that the conversations are in fact propaganda recordings of deceased Albanian communist dictator Enver Hoxha, implying that the robbers anticipated the attempted surveillance.
When Arthur Case, chairman of the board of directors and founder of the bank, hears of the robbery taking place, he hires "fixer '' Madeleine White to try to protect the contents of his safe deposit box within the bank. White arranges a conversation with Russell, who allows her to enter the bank and inspect the contents of the box, which include documents from Nazi Germany. Russell implies that Case started his bank with money that he received from the Nazis for unspecified services, resulting in the deaths of many Jewish people during World War II. White tells Russell that Case will pay him a substantial sum if he destroys the contents of the box.
Frazier demands to inspect the hostages before allowing the robbers to leave, and Russell takes him on a tour of the bank. As he is being shown out, Frazier attacks Russell, but is restrained by another of the robbers. Afterwards he explains that he deliberately tried to provoke Russell and judges that the man is not a killer. However, this judgment is seemingly contradicted when the robbers appear to execute one of the hostages.
The execution prompts the ESU team into action. They plan to storm the bank and use rubber bullets to knock out those inside. Frazier discovers that the robbers have planted a listening device on the police; aware of the police plans, the robbers detonate smoke grenades and release the hostages. The police detain and question everyone but are unable to distinguish the identically dressed hostages from the robbers. A search of the bank reveals the robbers ' weapons were plastic replicas. They find props for faking the execution, but no money or valuables appear to have been stolen. With no way to identify the suspects and unsure if a crime has even been committed, Frazier 's superior orders him to drop the case.
Frazier, however, searches the bank 's records and finds that safe deposit box number 392 has never appeared on any records since the bank 's founding in 1948. He obtains a search warrant to open it. He is then confronted by White, who informs him of Case 's Nazi dealings. She attempts to persuade Frazier to drop his investigation, but he refuses, playing a recording of an incriminating conversation that she had with him. White confronts Case who admits that the box contained diamonds and a ring that he had taken from a Jewish friend whom he had betrayed to the Nazis.
Russell repeats his opening monologue, but with the revelation that he is in fact hiding behind a fake wall the robbers had constructed inside the bank 's supply room. He emerges a week after the robbery with the contents of Case 's safe deposit box, including incriminating documents and several bags of diamonds. On his way out, he bumps into Frazier, who does not recognize him. When Frazier opens the safe deposit box, he finds the ring and a note from Russell. Frazier confronts Case and urges White to contact the Office of War Crimes Issues at the State Department about Case 's war crimes. At home, Frazier finds a loose diamond, slipped into his pocket by Russell.
Several other cast members appear as Russell 's accomplices.
Appearing as some of the more notable hostages are Ken Leung as Wing, who was distracted in the bank before the heist by the bosomy woman (played by Samantha Ivers) standing behind him and talking loudly on her phone; Gerry Vichi as Howard Kurtz, an elderly hostage suffering chest pains who is quickly released by the robbers; Waris Ahluwalia as Vikram Walia, a Sikh bank clerk whose turban is removed by the cops, which is a religious sacrilege for a Sikh male; Peter Frechette as Peter Hammond, a bank employee whose attempt to hide his cell phone from Russell results in his getting beat; Amir Ali Said as Brian Robinson, an 8 - year - old boy who speaks with both Russell and Frazier and who plays a killing video game; Ed Onipede Blunt as Ray Robinson, Brian 's father; and Marcia Jean Kurtz, who plays an older woman who initially refuses to strip and is forced to do so by Stevie. Kurtz 's character is named Miriam Douglas; Kurtz played a hostage named Miriam in Dog Day Afternoon, a bank robbery film, which unlike Inside Man, contained significant violence. Lionel Pina, who also appeared in Dog Day Afternoon as a pizza delivery man, appears in Inside Man as a policeman who delivers pizzas at the bank 's front doors.
Other roles include Cassandra Freeman as Sylvia, Frazier 's girlfriend; Peter Gerety as Captain Coughlin, Frazier and Mitchell 's superior; Victor Colicchio as Sergeant Collins, the first officer to respond to the bank robbery; Jason Manuel Olazabal as ESU Officer Hernandez; Al Palagonia as Kevin, a sanitation worker who recognizes the language as Albanian, as he was formerly married to an Albanian - born woman; Florina Petcu as Ilina, the Albanian woman in question who explains that they are hearing recordings of Enver Hoxha; Peter Kybart as the Mayor of New York City; Anthony Mangano as an ESU officer; and Daryl Mitchell and Ashlie Atkinson as Mobile Command Officers.
Inside Man was Russell Gewirtz 's debut film as a screenwriter. A former lawyer, Gewirtz conceived the idea while vacationing in several countries. He worked for five years to develop the film 's premise. Inexperienced at screenwriting, Gewirtz studied a number of screenplays before working on his own, which he titled "The Inside Man ''. His friend, Daniel M. Rosenberg, assisted in developing the script. After it was completed in 2002, the screenplay was passed around several times. Rosenberg shopped the script to a number of Los Angeles agencies, until Universal Studios executives Scott Stuber and Donna Langley persuaded Gewirtz to take the script to Universal and Imagine Entertainment. Imagine purchased Gewirtz 's screenplay in 2002, and the project began development at Universal, who retitled the film Inside Man.
Imagine co-founder Ron Howard was attached to direct the film, but turned it down after being asked by Russell Crowe to helm Cinderella Man (2004). Howard 's Imagine partner Brian Grazer began looking for a new director. After Howard stepped down, Menno Meyjes contributed to Gewirtz 's screenplay, and Terry George incorporated the Nazi Germany and diamond ring elements to the script. Meyjes was in negotiations to direct the film, but after he declined, Grazer thought this project was a chance to work with Spike Lee, who had already learned of Gewirtz 's script. Lee said of the screenplay, "I liked the script and really wanted to do it. ' Dog Day Afternoon, ' directed by Sidney Lumet, is one of my favorite films, and this story was a contemporary take on that kind of a movie. ''
After being cast, Denzel Washington and Chiwetel Ejiofor worked together on studying their lines and understanding their characters. Lee helped prepare his actors by screening a number of heist films including Dog Day Afternoon (1975) and Serpico (1973). Washington, Ejiofor, Willem Dafoe and other actors met and worked with members of the New York City Police Department, who shared their experiences and stories involving civilians and hostage situations.
Principal photography for Inside Man took place on location in New York City; filming began in June 2005 and concluded in August after 43 days. Universal Pictures provided a budget of $45 million. By filming in New York, the production was eligible for the city 's "Made in NY '' incentives program. Interior sets were created at the New York - based Steiner Studios, and Inside Man was the second film (after 2005 's The Producers) to be shot inside the 15 - acre facility.
Location scouting revealed a former Wall Street bank that had been closed down and repurposed as a cigar bar. The building stood in for the fictional Manhattan Trust Bank branch, where the bank heist occurs. "Without a bank, we did n't have a movie, '' Lee explained. "But everything ended up going very smoothly. We shot in the heart of Wall Street in a bank that had been closed down. It was like having a back lot in the middle of Wall Street. '' An office at the Alexander Hamilton U.S. Custom House doubled as the office of Arthur Case (Christopher Plummer). Plummer believed that the office 's design was essential to his character: "The space literally presents Case 's power, so I found that part of my character was to simply play very cool about everything. You do n't have to push the power, because it 's all around you. '' The location was also used to film a scene where Frazier confronts Madeleine White (Jodie Foster). The American Tract Society building, located at 150 Nassau Street and Spruce Street, Manhattan, doubled as White 's office. Cafe Bravo, a coffee shop located at 76 Beaver Street and Hanover Street, was also used for filming. Other filming locations included Battery Park and the New York Supreme Court House 's Appellate Division located at East 25th Street and Madison Avenue, Manhattan.
Wynn Thomas supervised the production design, continuing a 20 - year collaboration with Lee. With a former Wall Street bank doubling as the fictional Manhattan Trust branch, Thomas and his team restored the former bank to its 1920s architectural structure. The first floor underwent renovations and was used as the first place where the hostages are held captive by the robbers. The bank 's basement was one of several interior sets created at Steiner Studios. Thomas and his team also designed Frazier 's apartment, which he described as "very masculine and rich and highly monochromatic in its many hues of brown. '' He was also tasked with designing a police interrogation room, as well as the interiors of the New York City Police Department and a light - duty Mobile Command vehicle. An actual Mobile Command vehicle, supplied by LDV Group, was used for exteriors.
Inside Man was director of photography Matthew Libatique 's second film with Lee. Because the filmmakers intended to finish with a digital intermediate (the post-production digital manipulation of color and lighting), Libatique chose to shoot Inside Man in the Super 35 format for a 2.35: 1 aspect ratio. He mainly used Kodak Vision2 500T 5218 and Vision2 Expression 500T 5229 film stocks. The film was shot with Arricam and Arriflex cameras and Cooke S4 lenses.
Several scenes in Inside Man required multiple - camera setups, which meant that Libatique had to instruct and work with multiple camera operators. Lee wanted to create a visual distinction between the characters Russell (Owen) and Frazier (Washington), while incorporating visual metaphors. Russell 's scenes, in which he masterminds the bank heist, were shot with a Steadicam to suggest that the character is in control. Frazier 's scenes, in which he is tasked with handling the hostage situation, were filmed with multiple hand - held cameras to display the character 's confusion. Libatique explained, "I said, ' We want to create a sense of control and largely centered frames with Clive 's character, and we want to have movement with Denzel 's. ' Having three operators on the same character, I 'd watch all three. In a handheld shot, a long lens has a little bit of movement and a wider lens is inherently smoother. I would actually talk to the operator and tell him not to be so steady. It was the first time I 'd worked with so many operators where I was n't one myself. '' Telephone conversations between Russell and Frazier were shot using two cameras simultaneously filming the actors performing on two different sets of a soundstage at Steiner Studios. Steadicam operator Stephen Consentino estimated that 80 % of the film was shot with hand - held cameras or a Steadicam. A total of seven cameras were used to film the scene where the hostages are finally released. A Technocrane was used for a crane shot that would cover the following moment, in which the hostages are placed in buses.
The film features a number of scenes which involve Detectives Frazier and Mitchell (Chiwetel Ejiofor) interrogating several hostages during the aftermath of the heist. Libatique described these scenes as a "flash - forward '' to events, explaining that Lee "wanted a look that would jump out and tell you you 're somewhere else. '' Libatique photographed the scenes with Kodak Ektachrome 100D 5285 reversal film. Technicolor then cross-processed the filmed footage before it was put through a bleach bypass, which neutralized color temperature and created more contrast. Libatique explained, "Basically, it unifies all the color... When you try to apply correction, the film moves in very strange ways. ''
Post-production facility EFILM carried out the digital intermediate (DI), with Libatique overseeing the process and working with colorists Steve Bowen and Steve Scott: "It's difficult to match all of your shots meticulously when you have three cameras and one lighting setup, so I spent the majority of the DI just adhering to the original vision of the disparity in color temperature, which I can accentuate, versus the unified color temperature." The majority of Inside Man was scanned on a Northlight film scanner, while the interrogation scenes had to be scanned on a Spirit DataCine because the negatives proved "too dense for the Northlight to perform the task."
Inside Man features a scene in which Russell (Owen) interacts with Brian Robinson (Amir Ali Said), an 8-year-old boy who is playing a violent video game titled "Gangstas iz Genocide" on his PlayStation Portable. The scene is intercut with a 30-second animated sequence of the fictional game, in which a character performs a drive-by shooting before killing an intended target with a hand grenade. Using the Grand Theft Auto franchise as a reference, Lee wanted the scene to serve as a social commentary on gangsta rap, violent crime among African Americans and the rising level of killings in video games.
Libatique enlisted his cousin, Eric Alba, and a team of graphic artists known as House of Pain to design the 30-second animated sequence. Lee asked for the sequence to show two black characters in a ghetto environment dressed in gangster attire. He also gave the artists mockups of two scenarios that ended in homicide: a robbery at an ATM and a drive-by shooting.
House of Pain spent 10 days working on "Gangstas iz Genocide". Alba digitally photographed buildings near the Marcy Houses in Brooklyn, New York. Portions of the sequence were pre-visualized in 3D Studio Max, while stills were imported as texture maps and added to animated cut scenes created in the 3D modeling package Maya. The artists also improvised the use of a hand grenade. When Lee saw how violent the sequence was, he improvised the line "Kill Dat Nigga!" as a subtitle. The entire sequence was rendered out to play onscreen in full frame. The original running time of the animated sequence was 60 seconds; Lee cut it to 30 seconds, feeling that a shorter length would make more of an impact. Upon Inside Man's theatrical release, he expressed regret over the video game sequence, saying, "The sad thing is somebody is probably gonna make a game out of it and take that as inspiration."
Jazz musician and trumpeter Terence Blanchard composed the film score, marking his eleventh collaboration with Lee. The soundtrack for Inside Man features the song "Chaiyya Chaiyya", composed by A.R. Rahman, which originally appeared in the 1998 Hindi film Dil Se... The song plays over the opening credits, while a remix titled "Chaiyya, Chaiyya Bollywood Joint" plays during the end credits and features Panjabi MC's added rap lyrics about people of different backgrounds coming together in order to survive. The soundtrack, titled Inside Man: Original Motion Picture Soundtrack, was released on CD in North America on March 21, 2006, through the record label Varèse Sarabande.
Inside Man held its premiere in New York at the Ziegfeld Theatre on March 20, 2006, coinciding with Lee's 49th birthday. On March 24, 2006, Universal Studios released the film in 2,818 theatres in North America, the widest release of any Spike Lee film, edging out Summer of Sam (1999) by 1,282 theatres. Inside Man was also released throughout 62 foreign markets. The film was released on DVD on August 8, 2006, on HD DVD on October 23, 2007, and on Blu-ray Disc on May 26, 2009.
On its opening day in North America, Inside Man grossed $9,440,295, an average of $3,350 per theatre. By the end of its opening weekend, it had grossed $28,954,945, securing the number one position at the box office. The film set the record for the highest opening-weekend gross for a Denzel Washington starring vehicle, surpassing Man on Fire (2004), which had debuted with $22.7 million.
Inside Man dropped 46.7% in its second weekend, earning $15,437,760 and falling to second place behind Ice Age: The Meltdown. The film dropped an additional 40.9% in its third weekend, bringing in $9,131,410, though it remained in the top ten, placing fourth overall. It stayed in the top ten for a fourth weekend in a row, grossing approximately $6,427,815 and finishing sixth, then grossed an additional $3,748,955 in its fifth weekend, placing eighth. In its sixth weekend, Inside Man fell out of the box office top ten, finishing eleventh with an estimated $2,081,690. The film ended its theatrical run in North America on July 6, 2006, after 15 weeks (105 days) of release. It grossed $88,513,495 in the United States and Canada, ranking as Lee's highest-grossing film, ahead of Malcolm X (1992), which had ended its North American release with over $48 million.
Inside Man was released overseas on March 23, 2006, grossing approximately $9,600,000 in ten territories on its opening weekend. The film earned $95,862,759 at the overseas box office, for a worldwide total of $184,376,254. In North America, it was the twenty-second highest-grossing film of 2006, while worldwide it ranked twenty-first.
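As an illustrative aside (not part of the source material), the weekend-to-weekend drop percentages and the worldwide total quoted above can be recomputed from the grosses reported in this section; a minimal sketch using those figures:

```python
# Quick arithmetic check of the box office figures reported above.
# The weekend grosses and totals are the values quoted in this section.

weekend_grosses = [28_954_945, 15_437_760, 9_131_410, 6_427_815, 3_748_955, 2_081_690]

for prev, curr in zip(weekend_grosses, weekend_grosses[1:]):
    drop = (prev - curr) / prev * 100
    print(f"${curr:,} is a {drop:.1f}% drop from ${prev:,}")
# -> 46.7% and 40.9% for the second and third weekends, matching the text

domestic, overseas = 88_513_495, 95_862_759
print(f"Worldwide total: ${domestic + overseas:,}")  # -> $184,376,254
```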
Inside Man received mostly positive reviews. On the review aggregator Rotten Tomatoes, the film holds an approval rating of 86% based on 197 sampled reviews, making it "Certified Fresh". The site's critical consensus reads, "Spike Lee's energetic and clever bank-heist thriller is a smart genre film that is not only rewarding on its own terms, but manages to subvert its pulpy trappings with wit and skill -- and Denzel Washington is terrific as a brilliant hostage negotiator." Metacritic, another review aggregator, assigned Inside Man a weighted average score of 76 out of 100 based on 39 reviews from mainstream critics, indicating "generally favorable reviews". Audiences polled by CinemaScore gave the film an average grade of "B+" on an A+ to F scale, with exit polls showing that 54% of the audience was male and 68% was at least 30 years old. The American Film Institute named Inside Man one of the top ten films of 2006.
Empire gave the film 4 out of 5 stars, with the verdict: "It's certainly a Spike Lee film, but no Spike Lee Joint. Still, he's delivered a pacy, vigorous and frequently masterful take on a well-worn genre. Thanks to some slick lens work and a cast on cracking form, Lee proves (perhaps above all to himself?) that playing it straight is not always a bad thing." Wesley Morris of The Boston Globe wrote, "The basic story is elemental, but because Lee and Gewirtz invest it with grit, comedy, and a ton of New York ethnic personality, it's fresh anyway." David Ansen of Newsweek commented, "As unexpected as some of its plot twists is the fact that this unapologetic genre movie was directed by Spike Lee, who has never sold himself as Mr. Entertainment. But here it is, a Spike Lee joint that's downright fun." Giving the film a B+ rating, Lisa Schwarzbaum of Entertainment Weekly wrote, "Inside Man is a hybrid of studio action pic and Spike Lee joint. Or else it's a cross between a 2006 Spike Lee joint and a 1970s-style movie indictment of urban unease."
Not all reviews were positive. Roger Ebert of the Chicago Sun-Times gave the film a mixed review, writing, "Here is a thriller that's curiously reluctant to get to the payoff, and when it does, we see why: We can't accept the motive and method of the bank robbery, we can't believe in one character and can't understand another." Peter Bradshaw of The Guardian awarded the film one star out of five, calling it a "supremely annoying and nonsensical film". Rex Reed of The New York Observer wrote, "Inside Man has two things going for it: better actors than usual and a slicker look. Otherwise, it's no different from nine out of 10 other preposterous, contrived, confusingly written, unevenly directed, pointless and forgettable junk films we've been getting these days."
In November 2006, a sequel to Inside Man was announced to be in development, with Russell Gewirtz returning as screenwriter. Under the working title Inside Man 2, the film would again have Brian Grazer as producer, and Spike Lee was in negotiations to return as director while also serving as an executive producer alongside Daniel M. Rosenberg, another returning member of the first film's team. In 2008, Terry George entered negotiations to write the screenplay, eventually replacing Gewirtz, whose script was abandoned. The plot was to pick up after the events of the first film, with Dalton Russell (Clive Owen) masterminding another robbery and again matching wits with NYPD hostage negotiator Keith Frazier (Denzel Washington). Lee confirmed that Washington, Owen, Jodie Foster and Chiwetel Ejiofor would all reprise their roles, and expressed interest in filming Inside Man 2 in the fall of 2009.
In 2011, it was announced that plans for Inside Man 2 had been cancelled. Lee confirmed this, explaining that he could not secure funding for the project. "Inside Man was my most successful film, but we can't get the sequel made," he said. "And one thing Hollywood does well is sequels. The film's not getting made. We tried many times. It's not going to happen."