question | context
---|---
what is the measurement of a goal post | Football pitch - Wikipedia
A football pitch (also known as a football field or soccer field) is the playing surface for the game of association football. Its dimensions and markings are defined by Law 1 of the Laws of the Game, "The Field of Play". The surface can be either natural or artificial, but FIFA's Laws of the Game specify that all artificial surfaces must be painted green. The pitch is typically made of turf (grass) or artificial turf, although amateur and recreational teams often play on dirt fields.
All line markings on the pitch form part of the area which they define. For example, a ball on or above the touchline is still on the field of play, and a foul committed over the line bounding the penalty area results in a penalty. Therefore, a ball must completely cross the touchline to be out of play, and a ball must wholly cross the goal line (between the goal posts) before a goal is scored; if any part of the ball is still on or above the line, the ball is still in play.
The field descriptions that apply to adult matches are described below. Note that due to the original formulation of the Laws in England and the early supremacy of the four British football associations within IFAB, the standard dimensions of a football pitch were originally expressed in imperial units. The Laws now express dimensions with approximate metric equivalents (followed by traditional units in brackets), but use of the imperial units remains common in some countries, especially in the United Kingdom.
The pitch is rectangular in shape. The longer sides are called touchlines. The other opposing sides are called the goal lines. The two goal lines must be between 45 and 90 m (50 and 100 yd) and be the same length. The two touch lines must also be of the same length, and be between 90 and 120 m (100 and 130 yd) in length. All lines must be equally wide, not to exceed 12 cm (5 in). The corners of the pitch are marked by corner flags.
For international matches the field dimensions are more tightly constrained; the goal lines must be between 64 and 75 m (70 and 80 yd) long and the touch lines must be between 100 and 110 m (110 and 120 yd). In March 2008 the IFAB attempted to standardise the size of the football pitch for international matches and set the official dimensions of a pitch to 105 m long by 68 m wide. However, at a special meeting of the IFAB on 8 May 2008, it was ruled that this change would be put on hold pending a review and the proposed change has not been implemented.
Although the term goal line is often taken to mean only that part of the line between the goalposts, in fact it refers to the complete line at either end of the pitch, from one corner flag to the other. In contrast, the term byline (or by-line) is often used to refer to that portion of the goal line outside the goalposts. This term is commonly used in football commentaries and match descriptions, such as this example from a BBC match report: "Udeze gets to the left byline and his looping cross is cleared..."
Goals are placed at the centre of each goal line. These consist of two upright posts placed equidistant from the corner flagposts, joined at the top by a horizontal crossbar. The inner edges of the posts must be 7.32 metres (8 yd) apart, and the lower edge of the crossbar must be 2.44 metres (8 ft) above the ground. Nets are usually placed behind the goal, though they are not required by the Laws.
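The metric figures quoted in the Laws follow directly from the imperial originals. A quick illustrative conversion check (Law 1 itself is the authoritative source):

```python
# Illustrative sanity check of the goal measurements quoted above
# (Law 1 is authoritative; this simply converts the imperial figures to metres).
YARD_M = 0.9144   # metres per yard
FOOT_M = 0.3048   # metres per foot

goal_dimensions = {
    "width between inner edges of the posts": (8 * YARD_M, 7.32),  # 8 yd
    "height to lower edge of the crossbar":   (8 * FOOT_M, 2.44),  # 8 ft
}

for name, (exact_m, quoted_m) in goal_dimensions.items():
    print(f"{name}: {exact_m:.4f} m, quoted in the Laws as {quoted_m} m")
# 8 yd = 7.3152 m (rounded to 7.32 m); 8 ft = 2.4384 m (rounded to 2.44 m).
```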
Goalposts and crossbars must be white, and made of wood, metal or other approved material. Rules regarding the shape of goalposts and crossbars are somewhat more lenient, but they must conform to a shape that does not pose a threat to players. Goalposts have existed since the game's beginnings, but the crossbar was not introduced until 1875; before then, a string between the goalposts was used.
A goal is scored when the ball crosses the goal line between the goalposts, even if a defending player last touched the ball before it crossed the goal line (see own goal). A goal may, however, be ruled illegal (and voided by the referee) if the player who scored or a member of his team commits an offence under any of the laws between the time the ball was previously out of play and the goal being scored. It is also deemed void if a player on the opposing team commits an offence before the ball has crossed the line, as in the case where a foul is committed and a penalty kick is awarded, but the ball continues on a path that carries it across the goal line.
Goals used in junior matches are approximately half the size of adult match goals.
Football goals were first described in England in the late 16th and early 17th centuries. In 1584 and 1602 respectively, John Norden and Richard Carew referred to "goals" in Cornish hurling. Carew described how goals were made: "they pitch two bushes in the ground, some eight or ten foote asunder; and directly against them, ten or twelue (twelve) score off, other twayne in like distance, which they terme their Goales". The first reference to scoring a goal is in John Day's play The Blind Beggar of Bethnal Green (performed circa 1600; published 1659). Similarly, in a poem in 1613, Michael Drayton refers to "when the Ball to throw, And drive it to the Gole, in squadrons forth they goe". Solid crossbars were first introduced by the Sheffield Rules. Football nets were invented by Liverpool engineer John Brodie in 1891, and they helped settle disputes over whether a goal had been scored.
Two rectangular boxes are marked out on the pitch in front of each goal.
The goal area (colloquially the "six-yard box") consists of the rectangle formed by the goal line, two lines starting on the goal line 5.5 metres (6 yd) from the goalposts and extending 5.5 metres (6 yd) into the pitch from the goal line, and the line joining these. Goal kicks and any free kick by the defending team may be taken from anywhere in this area. Indirect free kicks awarded to the attacking team within the goal area are taken from the point on the line parallel to the goal line (the "six-yard line") nearest where the infringement occurred; they cannot be taken any closer to the goal line. Similarly, drop-balls that would otherwise occur closer to the goal line are taken on this line.
The penalty area (colloquially "the 18-yard box" or just "the box") is similarly formed by the goal line and lines extending from it, but its lines start 16.5 metres (18 yd) from the goalposts and extend 16.5 metres (18 yd) into the field. This area has a number of functions, the most prominent being to denote where the goalkeeper may handle the ball and where a foul by a defender, usually punished by a direct free kick, becomes punishable by a penalty kick. Both the goal and penalty areas were formed as half-circles until 1902.
The penalty mark is 11 metres (12 yd) in front of the very centre of the goal; this is the point from where penalty kicks are taken.
The penalty arc (colloquially "the D") is marked from the outside edge of the penalty area, 9.15 metres (10 yd) from the penalty mark; this, along with the penalty area, marks an exclusion zone for all players other than the penalty kicker and defending goalkeeper during a penalty kick.
The centre circle is marked at 9.15 metres (10 yd) from the centre mark. Similar to the penalty arc, this indicates the minimum distance that opposing players must keep at kick - off; the ball itself is placed on the centre mark. During penalty shootouts all players other than the two goalkeepers and the current kicker are required to remain within this circle.
The half-way line divides the pitch in two. The half which a team defends is commonly referred to as its half. Players must be within their own half at a kick-off and cannot be penalised as being offside in their own half. The intersections between the half-way line and the touchlines can be indicated with flags like those marking the corners; the Laws consider this an optional feature.
The arcs in the corners denote the area (within 1 yard of the corner) in which the ball has to be placed for corner kicks; opposition players have to be 9.15 m (10 yd) away during a corner, and there may be optional off-pitch lines 9.15 metres (10 yards) from the corner arc on the goal lines and touchlines to help gauge these distances.
Grass is the normal surface of play, although artificial turf may sometimes be used, especially in locations where maintenance of grass may be difficult due to inclement weather. This may include areas where it is very wet, causing the grass to deteriorate rapidly; where it is very dry, causing the grass to die; and where the turf is under heavy use. Artificial turf pitches are also increasingly common on the Scandinavian Peninsula, due to the amount of snow during the winter months. The strain put on grass pitches by the cold climate and subsequent snow clearing has necessitated the installation of artificial turf in the stadia of many top-tier clubs in Norway, Sweden and Finland. The latest artificial surfaces use rubber crumbs, as opposed to the previous system of sand infill. Some leagues and football associations have specifically prohibited artificial surfaces due to injury concerns and require teams' home stadia to have grass pitches. All artificial turf must be green and also meet the requirements specified in the FIFA Quality Concept for Football Turf.
Football can also be played on a dirt field. In most parts of the world, dirt is only used for casual recreational play.
|
who won the mvp in the super bowl this year | Super Bowl Most Valuable Player Award - Wikipedia
The Super Bowl Most Valuable Player Award, or Super Bowl MVP, is presented annually to the most valuable player of the Super Bowl, the National Football League's (NFL) championship game. The winner is chosen by a fan vote during the game and by a panel of 16 football writers and broadcasters who vote after the game. The media panel's ballots count for 80 percent of the vote tally, while the viewers' ballots make up the other 20 percent. The game's viewing audience can vote on the Internet or by using cellular phones; Super Bowl XXXV, held in 2001, was the first Super Bowl with fan voting.
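The 80/20 split described above lends itself to a simple weighted tally. The sketch below is only an assumed illustration of how such a weighting could be combined, not the NFL's actual tabulation procedure; the player names and vote counts are hypothetical.

```python
# Hypothetical sketch of combining media-panel ballots (80%) with the fan vote (20%).
# Not the NFL's actual procedure; names and numbers below are made up.
def combined_share(media_votes, fan_votes):
    """Return each player's weighted share of the overall MVP vote."""
    media_total = sum(media_votes.values())
    fan_total = sum(fan_votes.values())
    players = set(media_votes) | set(fan_votes)
    return {
        p: 0.8 * media_votes.get(p, 0) / media_total
           + 0.2 * fan_votes.get(p, 0) / fan_total
        for p in players
    }

# 16 writers/broadcasters on the panel, plus an online fan tally.
shares = combined_share({"Player A": 12, "Player B": 4},
                        {"Player A": 60_000, "Player B": 40_000})
print(shares)  # Player A: 0.8*(12/16) + 0.2*(0.6) = 0.72; Player B: 0.28
```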
The Super Bowl MVP has been awarded annually since the game's inception in 1967. Through 1989, the award was presented by SPORT magazine. Bart Starr was the MVP of the first two Super Bowls. Since 1990, the award has been presented by the NFL. At Super Bowl XXV, the league first awarded the Pete Rozelle Trophy, named after former NFL commissioner Pete Rozelle, to the Super Bowl MVP. Ottis Anderson was the first to win the trophy. Most award winners have received cars from various sponsors. The most recent Super Bowl MVP, from Super Bowl LII held on February 4, 2018, is Philadelphia Eagles quarterback Nick Foles, who passed for 373 yards and three touchdowns and scored a fourth touchdown as a receiver, becoming the first player to both throw and catch a touchdown pass in a Super Bowl.
Tom Brady is the only player to have won four Super Bowl MVP awards; Joe Montana has won three, and three others -- Starr, Terry Bradshaw, and Eli Manning -- have won the award twice. Starr and Bradshaw are the only ones to have won it in back-to-back years. The MVP has come from the winning team every year except 1971, when Dallas Cowboys linebacker Chuck Howley won the award despite the Cowboys' loss in Super Bowl V to the Baltimore Colts. Harvey Martin and Randy White were named co-MVPs of Super Bowl XII, the only time co-MVPs have been chosen. Including the Super Bowl XII co-MVPs, seven Cowboys players have won Super Bowl MVP awards, the most of any NFL team. Quarterbacks have earned the honor 29 times in 52 games.
|
going to grandma's house mary kate and ashley | To Grandmother's House We Go - Wikipedia
To Grandmother's House We Go is a 1992 Christmas television film starring Mary-Kate and Ashley Olsen. The film's title is one of the first lines of Lydia Maria Child's Thanksgiving song "Over the River and Through the Wood". It debuted on ABC with unexpected success, mostly because of the already growing popularity of the "Olsen Twins".
Twin sisters Sarah and Julie (Mary-Kate Olsen and Ashley Olsen) are two naughty but sweet children who drive their work-obsessed divorced mother, Rhonda (Cynthia Geary), up the wall. They overhear her saying that they are a "handful" and she needs a "vacation". Deciding to give Rhonda what she wants by heading off to their great-grandmother's house for Christmas, the girls pack up their bags and hop on their bicycles. But there's a problem: they aren't allowed to cross the street on their own. The city bus pulls up, and they sneak on through the back door. While riding the bus, an elderly lady informs them that the bus only goes back and forth between Uptown and Downtown, and that Edgemont (where Grandma lives) is actually several hours away.
After getting off the bus downtown, they spot Eddie (J. Eddie Peck), a delivery man who has a crush on their mom, and his truck. They sneak into its back, thinking that he will lead them to their great-grandmother's, and only reveal themselves to him because Sarah desperately has to go to the bathroom. He doesn't like kids, but eventually starts to enjoy the girls' company after he figures out that he gets large tips when they deliver packages with him. He even buys them ice cream, as well as a lottery ticket with the numbers of their birth date (6-13-19-8-7). Meanwhile, the babysitter notices that the girls are missing and informs Rhonda, who, alarmed by the news, frantically closes her 24/7 store and rushes home to inspect the place and call the police. Just as she is reporting it to the police, Eddie calls her with the good news that he has the girls and explains how they got there. She offers to come pick them up, but he agrees to keep them and watch over them, promising to bring them back at the end of the day when he has finished, while letting the girls believe that he will take them to Grandma's if they help out.
After the day's deliveries are finished, he brings the girls home against their will, revealing to them that adults will say anything to get kids to go along. He makes it back to their house and tells the girls to go around back for their suitcases, where he will meet them. He steps out of his truck, only to be attacked by two robbers who steal the truck (with the girls still inside). When the robbers, Harvey and Shirley (Jerry Van Dyke and Rhea Perlman), discover the girls and why they are there, they decide they can make some money by kidnapping them for ransom. Shirley makes a phone call to Rhonda, asking for a ransom, which she calls a "reward", of $10,000 in cash. She also forces Rhonda not to inform the police, or else they will permanently disappear with the girls. She tells her that they will make the trade at the ice rink in Edgemont, and that she is to wear a red hat.
Meanwhile, Harvey has begun to like the girls, and when he asks Shirley why they never had kids, she replies that it's because they're too busy being criminals, though he maintains that crime is just their job rather than a mission, which is how Shirley views it. Eddie and Rhonda reluctantly agree to raise the ransom money by opening up and selling the merchandise Eddie is supposed to be delivering, and they manage to raise close to the amount they need. Eddie gives Rhonda a red cowgirl hat from his stack of cowboy wannabes. However, the pawnshops start noticing the stolen merchandise and report it to Detective Gremp (Stuart Margolin) and his officers, who write out a warrant for Eddie and Rhonda's arrest, mistaking them for the bandits.
After making it to the skating rink in Edgemont, and after much back-and-forth between Eddie and Rhonda and the bandits -- the girls run off again, upset about a truth Harvey reveals to them, and there are runaway horses disguised as reindeer, which Santa left after the girls' visit with him -- Sarah and Julie eventually get to their great-grandmother's house thanks to Eddie's intervention and resilience. Eddie and Rhonda (who by this time has learned that the girls ran away because she said she wanted a vacation, and apologizes to them for being upset and making them feel that way) get together after he saves the girls. Just as everything seems okay and Eddie and Rhonda share a hug, Detective Gremp and one of his officers burst in and handcuff them. The two try to tell Gremp their side of the story and explain how Eddie had planned to pay everything back, but he still assumes they are the real bandits because of the ransom deal. Harvey's heart goes out to them, and as Shirley tries to force him to get away while they have the chance, his conscience kicks in: he intervenes and the truth comes out. Harvey and Shirley are handcuffed and taken away, with Harvey telling Shirley that if they ever get out, he promises to make her proud by being the worst convict possible.
With enough persuasion, Gremp agrees to let Eddie, Rhonda and the girls ride all the way back to the city with him, so Eddie can be back in time for a chance at winning the $1.3 million lottery jackpot on a TV show. He promises to split what he wins with Rhonda and the girls, and has the girls spin it for him. Through pure luck, he wins the jackpot. Afterwards, they give all the people their parcels back, and everybody is happy spending Christmas together.
The movie was filmed in Vancouver, British Columbia, Canada. It debuted in the USA on December 6, 1992.
|
which of the following best describes the history of income inequality in the united states | Income inequality in the United States - Wikipedia
Income inequality in the United States has increased significantly since the 1970s after several decades of stability, meaning the share of the nation's income received by higher income households has increased. This trend is evident with income measured both before taxes (market income) as well as after taxes and transfer payments. Income inequality has fluctuated considerably since measurements began around 1915, moving in an arc between peaks in the 1920s and 2000s, with a 30-year period of relatively lower inequality between 1950 and 1980. Recasting the 2012 income using the 1979 income distribution, the bottom 99% of families would have averaged about $7,100 more income.
Measured for all households, U.S. income inequality is comparable to other developed countries before taxes and transfers, but is among the highest after taxes and transfers, meaning the U.S. shifts relatively less income from higher income households to lower income households. Measured for working-age households, market income inequality is comparatively high (rather than moderate) and the level of redistribution is moderate (not low). These comparisons indicate that American households shift from reliance on market income to reliance on income transfers later in life, and to a lesser extent, than households in other developed countries do.
The U.S. ranks around the 30th percentile in income inequality globally, meaning 70 % of countries have a more equal income distribution. U.S. federal tax and transfer policies are progressive and therefore reduce income inequality measured after taxes and transfers. Tax and transfer policies together reduced income inequality slightly more in 2011 than in 1979.
While there is strong evidence that it has increased since the 1970s, there is active debate in the United States regarding the appropriate measurement, causes, effects and solutions to income inequality. The two major political parties have different approaches to the issue, with Democrats historically emphasizing that economic growth should result in shared prosperity (i.e., a pro-labor argument advocating income redistribution), while Republicans tend to avoid government involvement in income and wealth generation (i.e., a pro-capital argument against redistribution).
U.S. income inequality has grown significantly since the early 1970s, after several decades of stability, and has been the subject of study of many scholars and institutions. The U.S. consistently exhibits higher rates of income inequality than most developed nations due to the nation 's enhanced support of free market capitalism and less progressive spending on social services.
The top 1 % of households received approximately 20 % of the pre-tax income in 2013, versus approximately 10 % from 1950 to 1980. The top 1 % is not homogeneous, with the very top income households pulling away from others in the top 1 %. For example, the top 0.1 % of households received approximately 10 % of the pre-tax income in 2013, versus approximately 3 -- 4 % between 1951 -- 1981. According to IRS data, adjusted gross income (AGI) of approximately $430,000 was required to be in the top 1 % in 2013.
Most of the growth in income inequality has been between the middle class and top earners, with the disparity widening the further one goes up in the income distribution. The bottom 50 % earned 20 % of the nation 's pre-tax income in 1979; this fell steadily to 14 % by 2007 and 13 % by 2014. Income for the middle 40 % group, a proxy for the middle class, fell from 45 % in 1979 to 41 % in both 2007 and 2014.
To put this change into perspective, if the US had the same income distribution it had in 1979, each family in the bottom 80% of the income distribution would have had $11,000 more per year in income on average in 2012, or $916 per month. This figure would be $7,100 per year for the bottom 99% of families comparing 1979 and 2012, or about $600 per month.
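The monthly figures quoted above are simply the annual figures divided by twelve; a quick illustrative check:

```python
# Quick arithmetic check of the per-month figures quoted above.
annual_gains = {
    "bottom 80% of families (1979 distribution applied to 2012)": 11_000,
    "bottom 99% of families (1979 distribution applied to 2012)": 7_100,
}
for group, per_year in annual_gains.items():
    print(f"{group}: ${per_year:,}/year ≈ ${per_year / 12:,.0f}/month")
# 11,000 / 12 ≈ 917 (quoted as $916); 7,100 / 12 ≈ 592 (quoted as "about $600").
```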
The trend of rising income inequality is also apparent after taxes and transfers. A 2011 study by the CBO found that the top earning 1 percent of households increased their income by about 275 % after federal taxes and income transfers over a period between 1979 and 2007, compared to a gain of just under 40 % for the 60 percent in the middle of America 's income distribution. U.S. federal tax and transfer policies are progressive and therefore substantially reduce income inequality measured after taxes and transfers. They became moderately less progressive between 1979 and 2007 but slightly more progressive measured between 1979 and 2011. Income transfers had a greater impact on reducing inequality than taxes from 1979 to 2011.
Americans are not generally aware of the extent of inequality or recent trends. There is a direct relationship between actual income inequality and the public 's views about the need to address the issue in most developed countries, but not in the U.S., where income inequality is larger but the concern is lower. The U.S. was ranked the 6th from the last among 173 countries (4th percentile) on income equality measured by the Gini index.
There is significant and ongoing debate as to the causes, economic effects, and solutions regarding income inequality. While before - tax income inequality is subject to market factors (e.g., globalization, trade policy, labor policy, and international competition), after - tax income inequality can be directly affected by tax and transfer policy. U.S. income inequality is comparable to other developed nations before taxes and transfers, but is among the worst after taxes and transfers. Income inequality may contribute to slower economic growth, reduced income mobility, higher levels of household debt, and greater risk of financial crises and deflation.
Labor (workers) and capital (owners) have always battled over the share of the economic pie each obtains. The influence of the labor movement has waned in the U.S. since the 1960s along with union participation and more pro-capital laws. The share of total worker compensation has declined from 58 % of national income (GDP) in 1970 to nearly 53 % in 2013, contributing to income inequality. This has led to concerns that the economy has shifted too far in favor of capital, via a form of corporatism, corpocracy or neoliberalism.
Although some have spoken out in favor of moderate inequality as a form of incentive, others have warned against the current high levels of inequality, including Yale Nobel economics laureate Robert J. Shiller (who called rising economic inequality "the most important problem that we are facing now today"), former Federal Reserve Board chairman Alan Greenspan ("This is not the type of thing which a democratic society -- a capitalist democratic society -- can really accept without addressing"), and President Barack Obama (who referred to the widening income gap as the "defining challenge of our time").
The level of concentration of income in the United States has fluctuated throughout its history. The first era of inequality lasted roughly from the post-civil war era or "the Gilded Age '' to sometime around 1937. In 1915, an era in which the Rockefellers and Carnegies dominated American industry, the richest 1 % of Americans earned roughly 18 % of all income. By 2007, the top 1 percent accounted for 24 % of all income and in between, their share fell below 10 % for three decades.
From about 1937 to 1947 -- a period dubbed the "Great Compression" -- income inequality in the United States fell dramatically. Highly progressive New Deal taxation, the strengthening of unions, and regulation by the National War Labor Board during World War II raised the income of the poor and working class and lowered that of top earners. From the early 20th century, when income statistics started to become available, there has been a "great economic arc" from high inequality "to relative equality and back again", according to Nobel laureate economist Paul Krugman.
For about three decades ending in the early 1970s, this "middle class society '' with a relatively low level of inequality remained fairly steady, the product of relatively high wages for the US working class and political support for income leveling government policies. Wages remained relatively high because American manufacturing lacked foreign competition, and because of strong trade unions. By 1947 more than a third of non-farm workers were union members, and unions both raised average wages for their membership, and indirectly, and to a lesser extent, raised wages for workers in similar occupations not represented by unions. According to Krugman political support for equalizing government policies was provided by high voter turnout from union voting drives, the support of the otherwise conservative South for the New Deal, and prestige that the massive mobilization and victory of World War II had given the government.
On the other hand, a Marxist writer in the 1950s and 1960s argued: "While the American worker enjoys the highest standard of living of any worker in the world, he is also the most heavily exploited. This tremendously productive working class gets back for its own consumption a smaller part of its output and hands over in the form of profit to the capitalist owners of the instruments of production a greater part of its output than does either the English or the French working class."
The return to high inequality, or what Krugman and journalist Timothy Noah have referred to as the "Great Divergence", began in the 1970s. Studies have found income grew more unequal almost continuously except during the economic recessions in 1990-91, 2001 (dot-com bubble), and the 2007 sub-prime bust.
The Great Divergence differs in some ways from the pre-Depression era inequality. Before 1937, a larger share of top earners' income came from capital (interest, dividends, income from rent, capital gains). After 1970, the income of high-income taxpayers comes predominantly from labor: employment compensation.
Until 2011, the Great Divergence had not been a major political issue in America, but stagnation of middle - class income was. In 2009 the Barack Obama administration White House Middle Class Working Families Task Force convened to focus on economic issues specifically affecting middle - income Americans. In 2011, the Occupy movement drew considerable attention to income inequality in the country.
CBO reported that for the 1979-2007 period, after-tax income of households in the top 1 percent of earners grew by 275%, compared to 65% for the next 19%, just under 40% for the next 60%, and 18% for the bottom fifth of households. "As a result of that uneven income growth," the report noted, "the share of total after-tax income received by the 1 percent of the population in households with the highest income more than doubled between 1979 and 2007, whereas the share received by low- and middle-income households declined... The share of income received by the top 1 percent grew from about 8% in 1979 to over 17% in 2007. The share received by the other 19 percent of households in the highest income quintile (one fifth of the population as divided by income) was fairly flat over the same period, edging up from 35% to 36%."
According to the CBO, the major reason for observed rise in unequal distribution of after - tax income was an increase in market income, that is household income before taxes and transfers. Market income for a household is a combination of labor income (such as cash wages, employer - paid benefits, and employer - paid payroll taxes), business income (such as income from businesses and farms operated solely by their owners), capital gains (profits realized from the sale of assets and stock options), capital income (such as interest from deposits, dividends, and rental income), and other income. Of them, capital gains accounted for 80 % of the increase in market income for the households in top 20 %, in the 2000 -- 2007 period. Even over the 1991 -- 2000 period, according to the CBO, capital gains accounted for 45 % of the market income for the top 20 % households.
In a July 2015 op-ed article, Martin Feldstein, Professor of Economics at Harvard University, stated that the CBO found that from 1980 to 2010 real median household income rose by 15%. However, when the definition of income was expanded to include benefits and subtracted taxes, the CBO found that the median household's real income rose by 45%. Adjusting for household size, the gain increased to 53%.
Just as higher - income groups are more likely to enjoy financial gains when economic times are good, they are also likely to suffer more significant income losses during economic downturns and recessions when they are compared to lower income groups. Higher - income groups tend to derive relatively more of their income from more volatile sources related to capital income (business income, capital gains, and dividends), as opposed to labor income (wages and salaries). For example, in 2011 the top 1 % of income earners derived 37 % of their income from labor income, versus 62 % for the middle quintile. On the other hand, the top 1 % derived 58 % of their income from capital as opposed to 4 % for the middle quintile. Government transfers represented only 1 % of the income of the top 1 % but 25 % for the middle quintile; the dollar amounts of these transfers tend to rise in recessions.
This effect occurred during the Great Recession of 2007 -- 2009, when total income going to the bottom 99 percent of Americans declined by 11.6 %, but fell by 36.3 % for the top 1 %. Declines were especially steep for capital gains, which fell by 75 % in real (inflation - adjusted) terms between 2007 and 2009. Other sources of capital income also fell: interest income by 40 % and dividend income by 33 %. Wages, the largest source of income, fell by a more modest 6 %.
The share of pretax income received by the top 1 % fell from 18.7 % in 2007 to 16.0 % in 2008 and 13.4 % in 2009, while the bottom four quintiles all had their share of pretax income increase from 2007 to 2009. The share of aftertax income received by the top 1 % income group fell from 16.7 %, in 2007, to 11.5 %, in 2009.
The distribution of household incomes has become more unequal during the post-2008 economic recovery as the effects of the recession reversed. CBO reported in November 2014 that the share of pre-tax income received by the top 1 % had risen from 13.3 % in 2009 to 14.6 % in 2011. During 2012 alone, incomes of the wealthiest 1 percent rose nearly 20 %, whereas the income of the remaining 99 percent rose 1 % in comparison.
By 2012, the share of pre-tax income received by the top 1 % had returned to its pre-crisis peak, at around 23 % of the pre-tax income according to an article in The New Yorker. This is based on widely cited data from economist Emmanuel Saez, which uses "market income '' and relies primarily on IRS data. The CBO uses both IRS data and census data in its computations and reports a lower pre-tax figure for the top 1 %. The two series were approximately 5 percentage points apart in 2011 (Saez at about 19.7 % versus CBO at 14.6 %), which would imply a CBO figure of about 18 % in 2012 if that relationship holds, a significant increase versus the 14.6 % CBO reported for 2011. The share of after - tax income received by the top 1 % rose from 11.5 % in 2009 to 12.6 % in 2011.
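The "about 18%" inference above simply holds the roughly five-point gap between the two series constant and applies it to Saez's 2012 figure; a small check of that arithmetic (the 23% value is the approximate figure cited in the paragraph):

```python
# Arithmetic behind the "about 18%" inference above.
saez_2011, cbo_2011 = 19.7, 14.6        # top-1% pre-tax income share, percent
gap = saez_2011 - cbo_2011              # ≈ 5.1 percentage points
saez_2012 = 23.0                        # "around 23%" per the cited article
implied_cbo_2012 = saez_2012 - gap      # ≈ 17.9%, i.e. "about 18%"
print(f"Implied CBO top-1% share for 2012: {implied_cbo_2012:.1f}%")
```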
Between 2010 and 2013, inflation - adjusted pre-tax income for the bottom 90 % of American families fell, with the middle income groups dropping the most, about 6 % for the 40th - 60th percentiles and 7 % for the 20th - 40th percentiles. Incomes in the top decile rose 2 %.
During the 2009-2012 recovery period, the top 1% captured 91% of the real income growth per family, with their pre-tax incomes growing 34.7% adjusted for inflation while the pre-tax incomes of the bottom 99% grew 0.8%. Measured from 2009 to 2015, the top 1% captured 52% of the total real income growth per family, indicating the recovery was becoming less "lopsided" in favor of higher income families. By 2015, the top 10% (top decile) had a 50.5% share of the pre-tax income, close to its highest all-time level.
In 2013, tax increases on higher income earners were implemented with the Affordable Care Act and American Taxpayer Relief Act of 2012. CBO estimated that "average federal tax rates under 2013 law would be higher -- relative to tax rates in 2011 -- across the income spectrum. The estimated rates under 2013 law would still be well below the average rates from 1979 through 2011 for the bottom four income quintiles, slightly below the average rate over that period for households in the 81st through 99th percentiles, and well above the average rate over that period for households in the top 1 percent of the income distribution. '' In 2016, the economists Peter H. Lindert and Jeffrey G. Williamson contended that inequality is the highest it has been since the nation 's founding.
French economist Thomas Piketty attributed the victory of Donald Trump in the 2016 presidential election, which he characterizes as an "electoral upset, '' to "the explosion in economic and geographic inequality in the United States over several decades and the inability of successive governments to deal with this. ''
In May 2017, new data sets from the economists Piketty, Saez, and Gabriel Zucman of the University of California, Berkeley showed that inequality runs much deeper than previous data indicated. The share of income for those in the bottom half of the U.S. population stagnated and declined during the years 1980 to 2014, falling from 20% in 1980 to 12% in 2014. By contrast, the top 1% share of income grew from 12% in 1980 to 20% in 2014. The top 1% now makes on average 81 times more than the bottom 50% of adults, whereas in 1981 they made 27 times more. Pretax incomes for the top 0.001% surged 636% during the years 1980 to 2014. The economists also note that the growth of inequality during the 1970s to the 1990s can be attributed to wage growth among top earners, but the ever-widening gap has been "a capital-driven phenomenon since the late 1990s." They posit that "the working rich are either turning into or being replaced by rentiers."
A 2017 report by Philip Alston, the United Nations special rapporteur on extreme poverty and human rights, asserted that Donald Trump and the Republican Congress are pushing policies that would make the United States the "world champion of extreme inequality ''.
According to the CBO and others, "the precise reasons for the (recent) rapid growth in income at the top are not well understood '', but "in all likelihood, '' an "interaction of multiple factors '' was involved. "Researchers have offered several potential rationales. '' Some of these rationales conflict, some overlap. They include:
Paul Krugman put several of these factors into context in January 2015: "Competition from emerging - economy exports has surely been a factor depressing wages in wealthier nations, although probably not the dominant force. More important, soaring incomes at the top were achieved, in large part, by squeezing those below: by cutting wages, slashing benefits, crushing unions, and diverting a rising share of national resources to financial wheeling and dealing... Perhaps more important still, the wealthy exert a vastly disproportionate effect on policy. And elite priorities -- obsessive concern with budget deficits, with the supposed need to slash social programs -- have done a lot to deepen (wage stagnation and income inequality). ''
According to a 2018 report by the OECD, the U.S. has higher income inequality and a larger percentage of low income workers than almost any other advanced nation because the unemployed and at - risk workers get almost no support from the government and are further set back by a very weak collective bargaining system.
There is an ongoing debate as to the economic effects of income inequality. For example, Alan B. Krueger, President Obama 's Chairman of the Council of Economic Advisors, summarized the conclusions of several research studies in a 2012 speech. In general, as income inequality worsens:
Among economists and related experts, many believe that America 's growing income inequality is "deeply worrying '', unjust, a danger to democracy / social stability, or a sign of national decline. Yale professor Robert Shiller, who was among three Americans who won the Nobel prize for economics in 2013, said after receiving the award, "The most important problem that we are facing now today, I think, is rising inequality in the United States and elsewhere in the world. '' Economist Thomas Piketty, who has spent nearly 20 years studying inequality primarily in the US, warns that "The egalitarian pioneer ideal has faded into oblivion, and the New World may be on the verge of becoming the Old Europe of the twenty - first century 's globalized economy. ''
On the other side of the issue are those who have claimed that the increase is not significant, that it doesn't matter because America's economic growth and/or equality of opportunity are what's important, that it is a global phenomenon which would be foolish to try to change through US domestic policy, that it "has many economic benefits and is the result of... a well-functioning economy", and that it has or may become an excuse for "class-warfare rhetoric" and may lead to policies that "reduce the well-being of wealthier individuals".
Economist Alan B. Krueger wrote in 2012: "The rise in inequality in the United States over the last three decades has reached the point that inequality in incomes is causing an unhealthy division in opportunities, and is a threat to our economic growth. Restoring a greater degree of fairness to the U.S. job market would be good for businesses, good for the economy, and good for the country." Krueger wrote that the significant shift in the share of income accruing to the top 1% over the 1979 to 2007 period represented nearly $1.1 trillion in annual income. Since the wealthy tend to save nearly 50% of their marginal income while the remainder of the population saves roughly 10%, other things equal, this would reduce annual consumption (the largest component of GDP) by as much as 5%. Krueger wrote that borrowing likely helped many households make up for this shift, which became more difficult in the wake of the 2007-2009 recession.
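Krueger's consumption figure follows from applying the difference in saving rates to the shifted income. A rough reconstruction is sketched below, with total U.S. personal consumption taken as roughly $10 trillion for that period (an assumed order-of-magnitude figure, not stated in the text):

```python
# Back-of-the-envelope reconstruction of Krueger's "as much as 5%" figure.
shifted_income = 1.1e12                    # ~$1.1 trillion/year shifted to the top 1%
save_rate_top, save_rate_rest = 0.50, 0.10 # marginal saving rates cited above
lost_consumption = shifted_income * (save_rate_top - save_rate_rest)  # ≈ $440 billion

total_consumption = 10e12                  # assumed ~$10 trillion of personal consumption
print(f"Lost consumption ≈ ${lost_consumption / 1e9:.0f}B "
      f"≈ {100 * lost_consumption / total_consumption:.1f}% of consumption")
# ≈ $440B per year, i.e. on the order of 4-5% of consumption.
```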
Inequality in land and income ownership is negatively correlated with subsequent economic growth. A strong demand for redistribution will occur in societies where a large section of the population does not have access to the productive resources of the economy. Rational voters must internalize such issues. High unemployment rates have a significant negative effect when interacting with increases in inequality. Increasing inequality harms growth in countries with high levels of urbanization. High and persistent unemployment also has a negative effect on subsequent long - run economic growth. Unemployment may seriously harm growth because it is a waste of resources, because it generates redistributive pressures and distortions, because it depreciates existing human capital and deters its accumulation, because it drives people to poverty, because it results in liquidity constraints that limit labor mobility, and because it erodes individual self - esteem and promotes social dislocation, unrest and conflict. Policies to control unemployment and reduce its inequality - associated effects can strengthen long - run growth.
Concern extends even to supporters (or former supporters) of laissez-faire economics and to private-sector financiers. Former Federal Reserve Board chairman Alan Greenspan has stated, in reference to growing inequality: "This is not the type of thing which a democratic society -- a capitalist democratic society -- can really accept without addressing." Some economists (David Moss, Paul Krugman, Raghuram Rajan) believe the "Great Divergence" may be connected to the financial crisis of 2008. Money manager William H. Gross, former managing director of PIMCO, criticized the shift in distribution of income from labor to capital that underlies some of the growth in inequality as unsustainable, saying:
Even conservatives must acknowledge that return on capital investment, and the liquid stocks and bonds that mimic it, are ultimately dependent on returns to labor in the form of jobs and real wage gains. If Main Street is unemployed and undercompensated, capital can only travel so far down Prosperity Road.
He concluded: "Investors / policymakers of the world wake up -- you 're killing the proletariat goose that lays your golden eggs. ''
Among the economists and reports that find inequality harming economic growth are a December 2013 Associated Press survey of three dozen economists, a 2014 report by Standard and Poor's, and economists Gar Alperovitz, Robert Reich, Joseph Stiglitz, and Branko Milanovic.
A December 2013 Associated Press survey of three dozen economists found that the majority believe that widening income disparity is harming the US economy. They argue that wealthy Americans are receiving higher pay, but they spend less per dollar earned than middle class consumers, the majority of the population, whose incomes have largely stagnated.
A 2014 report by Standard and Poor 's concluded that diverging income inequality has slowed the economic recovery and could contribute to boom - and - bust cycles in the future as more and more Americans take on debt in order to consume. Higher levels of income inequality increase political pressures, discouraging trade, investment, hiring, and social mobility according to the report.
Economists Gar Alperovitz and Robert Reich argue that too much concentration of wealth prevents there being sufficient purchasing power to make the rest of the economy function effectively.
Joseph Stiglitz argues that concentration of wealth and income leads the politically powerful economic elite to seek to protect themselves from redistributive policies by weakening the state, and this leads to less public investment by the state -- roads, technology, education, etc. -- that is essential for economic growth.
According to economist Branko Milanovic, while traditionally economists thought inequality was good for growth, "The view that income inequality harms growth -- or that improved equality can help sustain growth -- has become more widely held in recent years. The main reason for this shift is the increasing importance of human capital in development. When physical capital mattered most, savings and investments were key. Then it was important to have a large contingent of rich people who could save a greater proportion of their income than the poor and invest it in physical capital. But now that human capital is scarcer than machines, widespread education has become the secret to growth. '' He continued that "Broadly accessible education '' is both difficult to achieve when income distribution is uneven and tends to reduce "income gaps between skilled and unskilled labor. ''
Robert Gordon wrote that such issues as "rising inequality; factor price equalization stemming from the interplay between globalization and the Internet; the twin educational problems of cost inflation in higher education and poor secondary student performance; the consequences of environmental regulations and taxes..." make economic growth harder to achieve than in the past.
In response to the Occupy movement, Richard A. Epstein defended inequality in a free market society, maintaining that "taxing the top one percent even more means less wealth and fewer jobs for the rest of us." According to Epstein, "the inequalities in wealth... pay for themselves by the vast increases in wealth", while "forced transfers of wealth through taxation... will destroy the pools of wealth that are needed to generate new ventures." One report has found a connection between lowering high marginal tax rates on high income earners (high marginal tax rates on high income being a common measure to fight inequality) and higher rates of employment growth.
Economic sociologist Lane Kenworthy has found no correlation between levels of inequality and economic growth among developed countries, among states of the US, or in the US over the years from 1947 to 2005. Jared Bernstein found a nuanced relation he summed up as follows: "In sum, I'd consider the question of the extent to which higher inequality lowers growth to be an open one, worthy of much deeper research". Tim Worstall commented that capitalism would not seem to contribute to an inherited-wealth stagnation and consolidation, but instead appears to promote the opposite: a vigorous, ongoing turnover and creation of new wealth.
Income inequality was cited as one of the causes of the Great Depression by Supreme Court Justice Louis D. Brandeis in 1933. In his dissent in the Louis K. Liggett Co. v. Lee (288 U.S. 517) case, he wrote: "Other writers have shown that, coincident with the growth of these giant corporations, there has occurred a marked concentration of individual wealth; and that the resulting disparity in incomes is a major cause of the existing depression. ''
Central Banking economist Raghuram Rajan argues that "systematic economic inequalities, within the United States and around the world, have created deep financial ' fault lines ' that have made (financial) crises more likely to happen than in the past '' -- the Financial crisis of 2007 -- 08 being the most recent example. To compensate for stagnating and declining purchasing power, political pressure has developed to extend easier credit to the lower and middle income earners -- particularly to buy homes -- and easier credit in general to keep unemployment rates low. This has given the American economy a tendency to go "from bubble to bubble '' fueled by unsustainable monetary stimulation.
Greater income inequality can lead to monopolization of the labor force, resulting in fewer employers requiring fewer workers. Remaining employers can consolidate and take advantage of the relative lack of competition, leading to declining customer service, less consumer choice, market abuses, and relatively higher prices.
Income inequality lowers aggregate demand, leading to increasingly large segments of formerly middle class consumers unable to afford as many luxury and essential goods and services. This pushes production and overall employment down.
Deep debt may lead to bankruptcy, and researchers Elizabeth Warren and Amelia Warren Tyagi found a fivefold increase in the number of families filing for bankruptcy between 1980 and 2005. The bankruptcies came not from increased spending "on luxuries", but from "increased spending on housing, largely driven by competition to get into good school districts." Intensifying inequality may mean a dwindling number of ever more expensive school districts that compel middle-class -- or would-be middle-class -- families to "buy houses they can't really afford, taking on more mortgage debt than they can safely handle".
The ability to move from one income group into another (income mobility) is a means of measuring economic opportunity. A higher probability of upward income mobility theoretically would help mitigate higher income inequality, as each generation has a better chance of achieving higher income groups. Conservatives and libertarians such as economist Thomas Sowell, and Congressman Paul Ryan (R., Wisc.) argue that more important than the level of equality of results is America 's equality of opportunity, especially relative to other developed countries such as western Europe.
Nonetheless, results from various studies indicate that endogenous regulations and other differences in rules yield distinct effects on income inequality. One study examines the effects of institutional change on age-based labor market inequalities in Europe, focusing on how wage-setting institutions affect the adult male population and the unevenness of its income distribution. According to the study, there is evidence that unemployment protection and temporary work regulation affect the dynamics of age-based inequality, with union strength producing positive employment effects for all individuals. Even though the European Union operates in a favorable economic context with prospects for growth and development, it also remains very fragile.
However, several studies have indicated that higher income inequality corresponds with lower income mobility. In other words, income brackets tend to be increasingly "sticky" as income inequality increases. This is described by a concept called the Great Gatsby curve. In the words of journalist Timothy Noah, "you can't really experience ever-growing income inequality without experiencing a decline in Horatio Alger-style upward mobility because (to use a frequently-employed metaphor) it's harder to climb a ladder when the rungs are farther apart."
The centrist Brookings Institution said in March 2013 that income inequality was increasing and becoming permanent, sharply reducing social mobility in the US. A 2007 study by Kopczuk, Saez and Song found the top of the income distribution in the United States "very stable" and that income mobility had "not mitigated the dramatic increase in annual earnings concentration since the 1970s."
Economist Paul Krugman attacks conservatives for resorting to an "extraordinary series of attempts at statistical distortion". He argues that while in any given year some of the people with low incomes will be "workers on temporary layoff, small businessmen taking writeoffs, farmers hit by bad weather", the rise in their income in succeeding years is not the same "mobility" as poor people rising to the middle class or middle income rising to wealth. It is the mobility of "the guy who works in the college bookstore and has a real job by his early thirties."
Studies by the Urban Institute and the US Treasury have both found that about half of the families who start in either the top or the bottom quintile of the income distribution are still there after a decade, and that only 3 to 6 % rise from bottom to top or fall from top to bottom.
On the issue of whether most Americans do not stay put in any one income bracket, Krugman quotes from the CBO's 2011 distribution-of-income study:
Household income measured over a multi-year period is more equally distributed than income measured over one year, although only modestly so. Given the fairly substantial movement of households across income groups over time, it might seem that income measured over a number of years should be significantly more equally distributed than income measured over one year. However, much of the movement of households involves changes in income that are large enough to push households into different income groups but not large enough to greatly affect the overall distribution of income. Multi-year income measures also show the same pattern of increasing inequality over time as is observed in annual measures.
In other words, "many people who have incomes greater than $1 million one year fall out of the category the next year -- but that 's typically because their income fell from, say, $1.05 million to 0.95 million, not because they went back to being middle class. ''
Several studies have found the ability of children from poor or middle - class families to rise to upper income -- known as "upward relative intergenerational mobility '' -- is lower in the US than in other developed countries -- and at least two economists have found lower mobility linked to income inequality.
In their Great Gatsby curve, White House Council of Economic Advisers Chairman Alan B. Krueger and labor economist Miles Corak show a negative correlation between inequality and social mobility. The curve plotted "intergenerational income elasticity" -- i.e. the likelihood that someone will inherit their parents' relative position of income level -- and inequality for a number of countries.
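The "intergenerational income elasticity" on the curve's vertical axis is conventionally estimated as the slope from regressing log child income on log parent income; higher values mean a child's position depends more heavily on the parents'. A minimal sketch with synthetic data follows (illustrative only, not Corak's or Krueger's actual series):

```python
# Minimal sketch: estimating intergenerational income elasticity (IGE) as the
# slope of a regression of log(child income) on log(parent income).
# Synthetic data for illustration only.
import numpy as np

rng = np.random.default_rng(0)
log_parent = rng.normal(loc=10.5, scale=0.6, size=5_000)   # log parent income
true_ige = 0.4                                              # assumed elasticity
log_child = true_ige * log_parent + rng.normal(6.3, 0.5, size=5_000)

slope, intercept = np.polyfit(log_parent, log_child, 1)
print(f"Estimated IGE ≈ {slope:.2f} (higher = less mobility)")
```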
Aside from the proverbial distant rungs, the connection between income inequality and low mobility can be explained by less affluent children's lack of access to better (more expensive) schools and to the preparation crucial for finding high-paying jobs, and by a lack of health care that may lead to obesity and diabetes and limit education and employment.
Krueger estimates that "the persistence in the advantages and disadvantages of income passed from parents to the children '' will "rise by about a quarter for the next generation as a result of the rise in inequality that the U.S. has seen in the last 25 years. ''
Greater income inequality can increase the poverty rate, as more income shifts away from lower income brackets to upper income brackets. Jared Bernstein wrote: "If less of the economy 's market - generated growth -- i.e., before taxes and transfers kick in -- ends up in the lower reaches of the income scale, either there will be more poverty for any given level of GDP growth, or there will have to be a lot more transfers to offset inequality 's poverty - inducing impact. '' The Economic Policy Institute estimated that greater income inequality would have added 5.5 % to the poverty rate between 1979 and 2007, other factors equal. Income inequality was the largest driver of the change in the poverty rate, with economic growth, family structure, education and race other important factors. An estimated 16 % of Americans lived in poverty in 2012, versus 26 % in 1967.
A rise in income disparities weakens skills development among people with a poor educational background, in terms of both the quantity and quality of education attained; those with a low level of expertise tend to consider themselves unworthy of high positions and pay.
Lisa Shalett, chief investment officer at Merrill Lynch Wealth Management, noted that "for the last two decades and especially in the current period,... productivity soared... (but) U.S. real average hourly earnings are essentially flat to down, with today's inflation-adjusted wage equating to about the same level as that attained by workers in 1970... So where have the benefits of the technology-driven productivity cycle gone? Almost exclusively to corporations and their very top executives." Beyond the technological side, the effects also stem from perceived unfairness and people's reduced trust in the state. A study by Kristal and Cohen showed that rising wage inequality has brought about an unhealthy competition between institutions and technology. Technological change, with the computerization of the workplace, appears to give an upper hand to high-skilled workers and is cited as a primary cause of inequality in America; those doing manual work are left more exposed to replacement and to an unequal distribution of resources.
Economist Timothy Smeeding summed up the current trend:
Americans have the highest income inequality in the rich world and over the past 20 -- 30 years Americans have also experienced the greatest increase in income inequality among rich nations. The more detailed the data we can use to observe this change, the more skewed the change appears to be... the majority of large gains are indeed at the top of the distribution.
According to Janet L. Yellen, chair of the Federal Reserve,
... from 1973 to 2005, real hourly wages of those in the 90th percentile -- where most people have college or advanced degrees -- rose by 30 % or more... among this top 10 percent, the growth was heavily concentrated at the very tip of the top, that is, the top 1 percent. This includes the people who earn the very highest salaries in the U.S. economy, like sports and entertainment stars, investment bankers and venture capitalists, corporate attorneys, and CEOs. In contrast, at the 50th percentile and below -- where many people have at most a high school diploma -- real wages rose by only 5 to 10 % --
Economists Jared Bernstein and Paul Krugman have attacked the concentration of income as variously "unsustainable '' and "incompatible '' with real democracy. American political scientists Jacob S. Hacker and Paul Pierson quote a warning by Greek - Roman historian Plutarch: "An imbalance between rich and poor is the oldest and most fatal ailment of all republics. '' Some academic researchers have written that the US political system risks drifting towards a form of oligarchy, through the influence of corporations, the wealthy, and other special interest groups.
Rising income inequality has been linked to the political polarization in Washington DC. According to a 2013 study published in the Political Research Quarterly, elected officials tend to be more responsive to the upper income bracket and ignore lower income groups.
Paul Krugman wrote in November 2014 that: "The basic story of political polarization over the past few decades is that, as a wealthy minority has pulled away economically from the rest of the country, it has pulled one major party along with it... Any policy that benefits lower - and middle - income Americans at the expense of the elite -- like health reform, which guarantees insurance to all and pays for that guarantee in part with taxes on higher incomes -- will face bitter Republican opposition. '' He used environmental protection as another example, which was not a partisan issue in the 1990s but has since become one.
As income inequality has increased, the degree of House of Representatives polarization measured by voting record has also increased. Bonica et al. (2013) argue that political participation and campaign contributions are dominated by the rich and serve the rich, making it harder to achieve a more equal distribution of income and resources for the average population, and that fewer of the affluent turn to government insurance as their wealth and real incomes rise. Professors McCarty, Poole and Rosenthal wrote in 2007 that polarization and income inequality fell in tandem from 1913 to 1957 and rose together dramatically from 1977 on. They show that Republicans have moved politically to the right, away from redistributive policies that would reduce income inequality. Polarization thus creates a feedback loop, worsening inequality.
The IMF warned in 2017 that rising income inequality within Western nations, in particular the United States, could result in further political polarization.
Several economists and political scientists have argued that economic inequality translates into political inequality, particularly in situations where politicians have financial incentives to respond to special interest groups and lobbyists. Researchers such as Larry Bartels of Vanderbilt University have shown that politicians are significantly more responsive to the political opinions of the wealthy, even when controlling for a range of variables including educational attainment and political knowledge.
Historically, discussions of income inequality and capital vs. labor debates have sometimes included the language of class warfare, from President Theodore Roosevelt (referring to the leaders of big corporations as "malefactors of great wealth ''), to President Franklin Roosevelt ("economic royalists... are unanimous in their hate for me -- and I welcome their hatred ''), to the more recent "1 % versus the 99 % '' issue and the question of which political party better represents the interests of the middle class.
Investor Warren Buffett said in 2006 that: "There 's class warfare, all right, but it 's my class, the rich class, that 's making war, and we 're winning. '' He advocated much higher taxes on the wealthiest Americans, who pay lower effective tax rates than many middle - class persons.
Two journalists concerned about social separation in the US are Robert Frank and George Packer. Frank notes that: "Today 's rich had formed their own virtual country... (T) hey had built a self - contained world unto themselves, complete with their own health - care system (concierge doctors), travel network (Net jets, destination clubs), separate economy... The rich were n't just getting richer; they were becoming financial foreigners, creating their own country within a country, their own society within a society, and their economy within an economy. ''
George Packer wrote that "Inequality hardens society into a class system... Inequality divides us from one another in schools, in neighborhoods, at work, on airplanes, in hospitals, in what we eat, in the condition of our bodies, in what we think, in our children 's futures, in how we die. Inequality makes it harder to imagine the lives of others. ''
Even these class divisions can affect politics. The rich have gained increased influence over the country 's regulatory, legislative and electoral processes, an influence that has also benefited the bureaucrats and politicians aligned with them, and their lobbying and contributions give them further opportunities to amass wealth for themselves.
Loss of income by the middle class relative to the top - earning 1 % and 0.1 % is both a cause and effect of political change, according to journalist Hedrick Smith. In the decade starting around 2000, business groups employed 30 times as many Washington lobbyists as trade unions and 16 times as many lobbyists as labor, consumer, and public interest lobbyists combined.
From 1998 through 2010 business interests and trade groups spent $28.6 billion on lobbying compared with $492 million for labor, nearly a 60 - to - 1 business advantage.
The result, according to Smith, is a political landscape dominated in the 1990s and 2000s by business groups, specifically "political insiders '' -- former members of Congress and government officials with an inside track -- working for "Wall Street banks, the oil, defense, and pharmaceutical industries; and business trade associations. '' In the decade or so prior to the Great Divergence, middle - class - dominated reformist grassroots efforts -- such as civil rights movement, environmental movement, consumer movement, labor movement -- had considerable political impact.
Economist Joseph Stiglitz argues that hyper - inequality may explain political questions -- such as why America 's infrastructure (and other public investments) are deteriorating, or the country 's relative lack of reluctance in recent years to engage in military conflicts such as the 2003 invasion of Iraq. Top - earning families, wealthy enough to buy their own education, medical care, personal security, and parks, have little interest in helping pay for such things for the rest of society, and have the political influence to make sure they do n't have to. So too, the lack of personal or family sacrifice involved for top earners in the military intervention of their country -- their children being few and far between in the relatively low - paying all - volunteer military -- may mean more willingness by the influential wealthy to see their government wage war.
Economist Branko Milanovic argued that globalization and the related competition with cheaper labor from Asia and immigrants have caused U.S. middle - class wages to stagnate, fueling the rise of populist political candidates such as Donald Trump.
The relatively high rates of health problems and social problems (obesity, mental illness, homicides, teenage births, incarceration, child conflict, drug use) and lower rates of social goods (life expectancy, educational performance, trust among strangers, women 's status, social mobility, even numbers of patents issued per capita) in the US compared to other developed countries may be related to its high income inequality. Using statistics from 23 developed countries and the 50 states of the US, British researchers Richard G. Wilkinson and Kate Pickett have found such a correlation which remains after accounting for ethnicity, national culture, and occupational classes or education levels. Their findings, based on UN Human Development Reports and other sources, locate the United States at the top of the list with regard to inequality and various social and health problems among developed countries. The authors argue inequality creates psychosocial stress and status anxiety that lead to social ills. A 2009 study conducted by researchers at Harvard University and published in the British Medical Journal attributes one in three deaths in the United States to high levels of inequality. According to The Earth Institute, life satisfaction in the US has been declining over the last several decades, which has been attributed to soaring inequality, lack of social trust and loss of faith in government.
A 2015 study by Princeton University researchers Angus Deaton and Anne Case claims that income inequality could be a driving factor in a marked increase in deaths among white males between the ages of 45 and 54 in the period 1999 to 2013.
Paul Krugman argues that the much lamented long - term funding problems of Social Security and Medicare can be blamed in part on the growth in inequality as well as the usual culprits like longer life expectancies. The traditional source of funding for these social welfare programs -- payroll taxes -- is inadequate because it does not capture income from capital, and income above the payroll tax cap, which make up a larger and larger share of national income as inequality increases.
Upward redistribution of income is responsible for about 43 % of the projected Social Security shortfall over the next 75 years.
Disagreeing with this focus on the top - earning 1 %, and urging attention to the economic and social pathologies of lower - income / lower education Americans, is conservative journalist David Brooks. Whereas in the 1970s, high school and college graduates had "very similar family structures '', today, high school grads are much less likely to get married and be active in their communities, and much more likely to smoke, be obese, get divorced, or have "a child out of wedlock. ''
The zooming wealth of the top one percent is a problem, but it 's not nearly as big a problem as the tens of millions of Americans who have dropped out of high school or college. It 's not nearly as big a problem as the 40 percent of children who are born out of wedlock. It 's not nearly as big a problem as the nation 's stagnant human capital, its stagnant social mobility and the disorganized social fabric for the bottom 50 percent.
Contradicting most of these arguments, classical liberals such as Friedrich Hayek have maintained that because individuals are diverse and different, state intervention to redistribute income is inevitably arbitrary and incompatible with the concept of general rules of law, and that "what is called ' social ' or ' distributive ' justice is indeed meaningless within a spontaneous order ''. Those who would use the state to redistribute, "take freedom for granted and ignore the preconditions necessary for its survival. ''
The growth of inequality has provoked a political protest movement -- the Occupy movement -- starting in Wall Street and spreading to 600 communities across the United States in 2011. Its main political slogan -- "We are the 99 % '' -- references its dissatisfaction with the concentration of income in the top 1 %.
A December 2011 Gallup poll found a decline in the number of Americans who felt reducing the gap in income and wealth between the rich and the poor was extremely or very important (21 percent of Republicans, 43 percent of independents, and 72 percent of Democrats). In 2012, several surveys of voters ' attitudes toward growing income inequality found the issue ranked less important than other economic issues such as growth and equality of opportunity, and relatively low in affecting voters "personally ''. In 1998 a Gallup poll had found 52 % of Americans agreeing that the gap between rich and the poor was a problem that needed to be fixed, while 45 % regarded it as "an acceptable part of the economic system ''. By 2011, those numbers had reversed: only 45 % saw the gap as in need of fixing, while 52 % did not. However, there was a large difference between Democrats and Republicans, with 71 % of Democrats calling for a fix.
In contrast, a January 2014 poll found 61 % of Republicans, 68 % of Democrats and 67 % of independents accept the notion that income inequality in the US has been growing over the last decade. The Pew Center poll also indicated that 69 % of Americans supported the government doing "a lot '' or "some '' to address income inequality and that 73 % of Americans supported raising the minimum wage from $7.25 to $10.10 per hour.
Opinion surveys of what respondents thought was the right level of inequality have found Americans no more accepting of income inequality than citizens of other nations, but more accepting of what they thought the level of inequality was in their country, being under the impression that there was less inequality than there actually was. Dan Ariely and Michael Norton show in a study (2011) that US citizens across the political spectrum significantly underestimate the current US wealth inequality and would prefer a more egalitarian distribution of wealth. Joseph Stiglitz in "The Price of Inequality '' has argued that this sense of unfairness has led to distrust in government and business.
Income inequality (as measured by the Gini coefficient) is not uniform among the states: after - tax income inequality in 2009 was greatest in Texas and lowest in Maine. Income inequality has grown from 2005 to 2012 in more than 2 out of 3 metropolitan areas.
The household income Gini index for the United States was 0.468 in 2009, according to the US Census Bureau, though it varied significantly between states. The states of Utah, Alaska and Wyoming have a pre-tax income inequality Gini coefficient that is 10 % lower than the average, while Washington D.C. and Puerto Rico have one that is 10 % higher. After including the effects of federal and state taxes, the U.S. Federal Reserve estimates 34 states in the USA have a Gini coefficient between 0.30 and 0.35, with the state of Maine the lowest. At the county and municipality levels, the pre-tax Gini index ranged from 0.21 to 0.65 in 2010 across the United States, according to Census Bureau estimates.
Measured for all households, U.S. income inequality is comparable to other developed countries before taxes and transfers, but is among the worst after taxes and transfers, meaning the U.S. shifts relatively less income from higher income households to lower income households. Measured for working - age households, market income inequality is comparatively high (rather than moderate) and the level of redistribution is moderate (not low). These comparisons indicate Americans shift from reliance on market income to reliance on income transfers later in life and less fully than do households in other developed countries.
The U.S. was ranked the 41st worst among 141 countries (30th percentile) on income equality measured by the Gini index. The UN, CIA World Factbook, and OECD have used the Gini index to compare inequality between countries, and as of 2006, the United States had one of the highest levels of income inequality among similar developed or high income countries, as measured by the index. While inequality has increased since 1981 in two - thirds of OECD countries, most developed countries are in the lower, more equal, end of the spectrum, with a Gini coefficient in the high twenties to mid thirties.
The Gini rating of the United States (after taxes and government income transfers) is sufficiently high, however, to put it among less developed countries. According to the CIA, the US ranks as more unequal than, or roughly on par with, Latin American countries such as Guyana, Nicaragua, Venezuela, and Uruguay.
The NYT reported in 2014: "With a big share of recent income gains in this country flowing to a relatively small slice of high - earning households, most Americans are not keeping pace with their counterparts around the world. '' Real median per capita income in many other industrialized countries was rising from 2000 - 2010 while the U.S. measure stagnated. The poor in much of Europe receive more than their U.S. counterparts.
One 2013 study indicated that U.S. income inequality is comparable to other developed countries before taxes and transfers, but rated last (worst) among 22 developed countries after taxes and transfers. This means that public policy choices, rather than market factors, drive U.S. income inequality disparities relative to comparable wealthy nations.
Some have argued that inequality is higher in other countries than official statistics indicate because of unreported income. European countries have higher amounts of wealth in offshore holdings.
The NYT reported in 2014 that there were three key reasons for other industrialized countries improving real median income relative to the United States over the 2000 - 2010 period:
According to The New York Times, Canadian middle class incomes are now higher than those in the United States as of 2014, and some European nations are closing the gap as their citizens have been receiving higher raises than their American counterparts. Bloomberg reported in August 2014 that only the wealthy saw pay increases since the 2008 recession, while average American workers saw no boost in their paychecks.
Economists have proposed a variety of solutions for addressing income inequality. For example, Federal Reserve Chair Janet Yellen described four "building blocks '' that could help address income and wealth inequality in an October 2014 speech. These included expanding resources available to children, affordable higher education, business ownership, and inheritance. While before - tax income inequality is subject to market factors, after - tax income inequality can be directly affected by tax and transfer policy. U.S. income inequality is comparable to other developed nations before taxes and transfers, but is among the worst after taxes and transfers. This suggests that more progressive tax and transfer policies would be required to align the U.S. with other developed nations. The Center for American Progress recommended a series of steps in September 2014, including tax reform, subsidizing and reducing healthcare and higher education costs, and strengthening labor influence.
However, there is debate regarding whether a public policy response is appropriate for income inequality. For example, Federal Reserve Economist Thomas Garrett wrote in 2010: "It is important to understand that income inequality is a byproduct of a well - functioning capitalist economy. Individuals ' earnings are directly related to their productivity... A wary eye should be cast on policies that aim to shrink the income distribution by redistributing income from the more productive to the less productive simply for the sake of ' fairness. ' ''
Public policy responses addressing causes and effects of income inequality include: progressive tax incidence adjustments, strengthening social safety net provisions such as Temporary Assistance for Needy Families, welfare, the food stamp program, Social Security, Medicare, and Medicaid, increasing and reforming higher education subsidies, increasing infrastructure spending, and placing limits on and taxing rent - seeking. Democratic and Republican politicians also provided a series of recommendations for increasing median wages in December 2014. These included raising the minimum wage, infrastructure stimulus, and tax reform.
Research shows that children from lower - income households who get good - quality pre-Kindergarten education are more likely to graduate from high school, attend college, hold a job and have higher earnings. In 2010, the U.S. ranked 28th out of 38 advanced countries in the share of four - year - olds enrolled in public or private early childhood education. Gains in enrollment stalled after 2010, as did growth in funding, due to budget cuts arising from the Great Recession. Per - pupil spending in state - funded programs declined by 12 % after inflation since 2010. The U.S. differs from other countries in that it funds public education primarily through sub-national (state and local) taxes. The quality of funding for public education varies based on the tax base of the school system, with significant variation in local taxes and spending per pupil. Better teachers also raise the educational attainment and future earnings of students, but they tend to migrate to higher income school districts. Among developed countries, 70 % of 3 - year - olds go to preschool, versus 38 % in the United States.
Raising taxes on higher income persons to fund healthcare for lower income persons reduces after - tax inequality. The CBO described how the Affordable Care Act (ACA or "Obamacare '') reduced income inequality for calendar year 2014 in a March 2018 report:
Median annual earnings of full - time workers with a four - year bachelor 's degree are 79 % higher than the median for those with only a high school diploma. The wage premium for a graduate degree is considerably higher than for the undergraduate degree. College costs have risen much faster than income, resulting in an increase in student loan debt from $260 billion in 2004 to $1.1 trillion in 2014. From 1995 to 2013, outstanding education debt grew from 26 % of average yearly income to 58 %, for households with net worth below the 50th percentile. The unemployment rate is also considerably lower for those with higher educational attainment. A college education is nearly free in many European countries, often funded by higher taxes.
The OECD asserts that public spending is vital in reducing the ever - expanding wealth gap. Lane Kenworthy advocates incremental reforms to the U.S. welfare state in the direction of the Nordic social democratic model, thereby increasing economic security and equal opportunity. Currently, the U.S. has the weakest social safety net of all developed nations.
Welfare spending may entice the poor away from finding remunerative work and toward dependency on the state. Eliminating social safety nets can discourage free market entrepreneurs by increasing the risk of business failure from a temporary setback to financial ruin.
CBO reported that less progressive tax and transfer policies contributed to an increase in after - tax income inequality between 1979 and 2007. This indicates that more progressive income tax policies (e.g., higher income taxes on the wealthy and a higher earned - income tax credit) would reduce after - tax income inequality.
Policies enacted under President Obama increased taxes on the wealthy, including the American Taxpayer Relief Act of 2012 and the Affordable Care Act. As reported by The New York Times in January 2014, these laws include several tax increases on individuals earning over $400,000 and couples earning over $450,000:
These changes are estimated to add $600 billion to revenue over 10 years, while leaving the tax burden on everyone else mostly as it was. This reverses a long - term trend of lower tax rates for upper income persons.
The NYT reported in July 2018 that: "The top - earning 1 percent of households -- those earning more than $607,000 a year -- will pay a combined $111 billion less this year in federal taxes than they would have if the laws had remained unchanged since 2000. That 's an enormous windfall. It 's more, in total dollars, than the tax cut received over the same period by the entire bottom 60 percent of earners. '' This represents the tax cuts for the top 1 % from the Bush tax cuts and Trump tax cuts, partially offset by the tax increases on the top 1 % by Obama.
The CBO estimated that the average tax rate for the top 1 % rose from 28.1 % in 2008 to 33.6 % in 2013, reducing after - tax income inequality relative to a baseline without those policies.
The economists Emmanuel Saez and Thomas Piketty recommend much higher top marginal tax rates on the wealthy, up to 50 percent, or 70 percent or even 90 percent. Ralph Nader, Jeffrey Sachs, the United Front Against Austerity, among others, call for a financial transactions tax (also known as the Robin Hood tax) to bolster the social safety net and the public sector.
The Pew Center reported in January 2014 that 54 % of Americans supported raising taxes on the wealthy and corporations to expand aid to the poor. By party, 29 % of Republicans and 75 % of Democrats supported this action.
During 2012, investor Warren Buffett advocated higher minimum effective income tax rates on the wealthy, considering all forms of income: "I would suggest 30 percent of taxable income between $1 million and $10 million, and 35 percent on amounts above that. '' This would eliminate special treatment for capital gains and carried interest, which are taxed at lower rates and comprise a relatively larger share of income for the wealthy. He argued that in 1992, the tax paid by the 400 highest incomes in the United States averaged 26.4 % of adjusted gross income. In 2009, the rate was 19.9 %.
Tax expenditures (i.e., exclusions, deductions, preferential tax rates, and tax credits) cause revenues to be much lower than they would otherwise be for any given tax rate structure. The benefits from tax expenditures, such as income exclusions for healthcare insurance premiums paid for by employers and tax deductions for mortgage interest, are distributed unevenly across the income spectrum. They are often what the Congress offers to special interests in exchange for their support. According to a report from the CBO that analyzed the 2013 data:
Understanding how each tax expenditure is distributed across the income spectrum can inform policy choices.
Economist Dean Baker argues that the existence of tax loopholes, deductions, and credits for the corporate income tax contributes to rising income inequality by permitting large corporations with many accountants to reduce their tax burden and by permitting large accounting firms to receive payments from smaller businesses in exchange for helping these businesses reduce their tax burden. He says that this redistributes large sums of money that would otherwise be taxed to individuals who are already wealthy yet contribute nothing to society in order to obtain this wealth. He further argues that since a large portion of corporate income is reinvested in the business, taxing corporate income amounts to a tax on reinvestment, which he says should be left untaxed. He concludes that eliminating the corporate income tax, while needing to be offset by revenue increases elsewhere, would reduce income inequality.
In his 2013 State of the Union address, Barack Obama proposed raising the federal minimum wage. The progressive economic think tank the Economic Policy Institute agrees with this position, stating: "Raising the minimum wage would help reverse the ongoing erosion of wages that has contributed significantly to growing income inequality. '' In response to the fast - food worker strikes of 2013, Labor Secretary Thomas Perez said that it was another sign of the need to raise the minimum wage for all workers: "It 's important to hear that voice... For all too many people working minimum wage jobs, the rungs on the ladder of opportunity are feeling further and further apart. ''
The Economist wrote in December 2013: "A minimum wage, providing it is not set too high, could thus boost pay with no ill effects on jobs... America 's federal minimum wage, at 38 % of median income, is one of the rich world 's lowest. Some studies find no harm to employment from federal or state minimum wages, others see a small one, but none finds any serious damage. ''
The U.S. minimum wage was last raised to $7.25 per hour in July 2009. As of December 2013, there were 21 states with minimum wages above the Federal minimum, with the State of Washington the highest at $9.32. Ten states index their minimum wage to inflation.
The Pew Center reported in January 2014 that 73 % of Americans supported raising the minimum wage from $7.25 to $10.10 per hour. By party, 53 % of Republicans and 90 % of Democrats favored this action. Also in January 2014, six hundred economists sent the President and Congress a letter urging a minimum wage hike to $10.10 an hour by 2016.
In February 2014, the CBO reported the effects of a minimum wage increase under two scenarios, an increase to $10.10 with indexing for inflation thereafter and an increase to $9.00 with no indexing:
Amalgamated Transit Union international president Lawrence J. Hanley has called for a maximum wage law, which "would limit the amount of compensation an employer could receive to a specified multiple of the wage earned by his or her lowest paid employees. '' CEO pay at the largest 350 U.S. companies was 20 times the average worker pay in 1965; 58 times in 1989 and 273 times in 2012.
Others argue for a basic income guarantee, ranging from civil rights leader Martin Luther King, Jr. to libertarians such as Milton Friedman (in the form of a negative income tax), Robert Anton Wilson, Gary Johnson (in the form of the fair tax "prebate '') and Charles Murray to the Green Party.
General limitations on and taxation of rent - seeking is popular with large segments of both Republicans and Democrats.
The economists Richard D. Wolff and Gar Alperovitz claim that greater economic equality could be achieved by extending democracy into the economic sphere. In an essay for Harper 's Magazine, investigative journalist Erik Reece argues that "With the political right entrenched in its opposition to unions, worker - owned cooperatives represent a less divisive yet more radical model for returning wealth to the workers who earned it. ''
The effect on income inequality of monetary policy pursued by the Federal Reserve is challenging to measure. Monetary policy can be used to stimulate the economy (e.g., by lowering interest rates, which encourages borrowing and spending, additional job creation, and inflationary pressure) or tightened, with the opposite effects. Former Fed Chair Ben Bernanke wrote in June 2015 that there are several effects on income and wealth inequality from monetary stimulus that work in opposing directions:
Various methods are used to determine income inequality, and different sources may give different figures for Gini coefficients, ratios of percentiles, and other measures. The United States Census Bureau studies on inequality of household income and individual income show lower levels of inequality than some other sources (Saez and Piketty, and the CBO), but do not include data for the highest - income households, where most of the change in income distribution has occurred.
Two commonly cited sources of income inequality data are the CBO and economist Emmanuel Saez, which differ somewhat in their sources and methods. According to Saez, for 2011 the share of "market income less transfers '' received by the top 1 % was about 19.5 %. Saez used IRS data in this measure. The CBO uses both IRS data and Census data in its computations and reported a lower "pre-tax '' figure for the top 1 % of 14.6 %. The two data series were approximately 5 percentage points apart in recent years.
Pioneers in the use of IRS income data to analyze income distribution, Emmanuel Saez and Thomas Piketty at the Paris School of Economics showed that the share of income held by the top 1 percent was as large in 2005 as in 1928. Other sources that have noted the increased inequality include economist Janet Yellen, who stated, "the growth (in real income) was heavily concentrated at the very tip of the top, that is, the top 1 percent. '' Follow - up research, published in 2014 by Emmanuel Saez and Gabriel Zucman, revealed that more than half of those in the top 1 percent had not experienced relative gains in wealth between 1960 and 2012. In fact, those between the top 1 percent and the top 0.5 percent had actually lost relative wealth. Only those in the top 0.1 percent and above had made relative wealth gains during that time.
The comparative use of Census Bureau data, as well as most sources of demographic income data, has been questioned by statisticians for being unable to account for ' mobility of incomes '. At any given time, the Census Bureau ranks all households by household income and then divides this distribution of households into quintiles. The highest - ranked household in each quintile provides the upper income limit for each quintile. Comparing changes in these upper income limits for different quintiles is how changes are measured between one moment in time and the next. The problem with inferring income inequality on this basis is that the census statistics provide only a snapshot of income distribution in the U.S., at individual points in time. The statistics do not reflect the reality that income for many households changes over time -- i.e., incomes are mobile. For most people, income increases over time as they move from their first, low - paying job in high school to a better - paying job later in their lives. Also, some people lose income over time because of business - cycle contractions, demotions, career changes, retirement, etc. The implication of changing individual incomes is that individual households do not remain in the same income quintiles over time. Thus, comparing different income quintiles over time is like comparing apples to oranges, because it means comparing incomes of different people at different stages in their earnings profile.
Gary Burtless of the Brookings Institution notes that many economists and analysts who use U.S. census data fail to recognize recent and significant lower - and middle - income gains, primarily because census data does not capture key information: "A commonly used indicator of middle class income is the Census Bureau 's estimate of median household money income. The main problem with this income measure is that it only reflects households ' before - tax cash incomes. It fails to account for changing tax burdens and the impact of income sources that do not take the form of cash. This means, for example, that tax cuts in 2001 - 2003 and 2008 - 2012 are missed in the census statistics. Furthermore, the Census Bureau measure ignores income received as in - kind benefits and health insurance coverage from employers and the government. By ignoring such benefits as well as sizeable tax cuts in the recession, the Census Bureau 's money income measure seriously overstated the income losses that middle - income families suffered in the recession.
New CBO income statistics are beginning to show the growing importance of these items. In 1980, in - kind benefits and employer and government spending on health insurance accounted for just 6 % of the after - tax incomes of households in the middle one - fifth of the distribution. By 2010 these in - kind income sources represented 17 % of middle class households ' after - tax income. The income items missed by the Census Bureau are increasing faster than the income items included in its money income measure. What many observers miss, however, is the success of the nation 's tax and transfer systems in protecting low - and middle - income Americans against the full effects of a depressed economy. As a result of these programs, the spendable incomes of poor and middle - class families have been better insulated against recession - driven losses than the incomes of Americans in the top 1 %. As the CBO statistics demonstrate, incomes in the middle and at the bottom of the distribution have fared better since 2000 than incomes at the very top. ''
Inequality can be measured before and after the effects of taxes and transfer payments such as social security and unemployment insurance.
Comparisons of income over time should adjust for changes in average age, family size, number of breadwinners, and other characteristics of a population. Measuring personal income ignores dependent children, but household income also has problems -- a household of ten has a lower standard of living than one of two people, though the income of the two households may be the same. People 's earnings tend to rise over their working lifetimes, so "snapshot measures of income inequality can be misleading. '' The inequality of a recent college graduate and a 55 - year - old at the peak of his / her career is not an issue if the graduate has the same career path.
Conservative researchers and organizations have focused on the flaws of household income as a measure for standard of living in order to refute claims that income inequality is growing, becoming excessive or posing a problem for society. According to sociologist Dennis Gilbert, growing inequality can be explained in part by growing participation of women in the workforce. High - earning households are more likely to be dual - earner households, and according to a 2004 analysis of income quintile data by the Heritage Foundation, inequality becomes less when household income is adjusted for size of household. Aggregate share of income held by the upper quintile (the top - earning 20 percent) decreases by 20.3 % when figures are adjusted to reflect household size.
However, the Pew Research Center found household income has appeared to decline less than individual income in the twenty - first century, because those who are no longer able to afford their own housing have increasingly been moving in with relatives, creating larger households with more income earners in them. The 2011 CBO study "Trends in the Distribution of Household Income '' mentioned in this article adjusts for household size so that its quintiles contain an equal number of people, not an equal number of households. Looking at the issue of how frequently workers or households move into higher or lower quintiles as their income rises or falls over the years, the CBO found income distribution over a multi-year period "modestly '' more equal than annual income. The CBO study confirms earlier studies.
Overall, according to Timothy Noah, correcting for demographic factors (today 's population is older than it was 33 years ago, and divorce and single parenthood have made households smaller), you find that income inequality, though less extreme than shown by the standard measure, is also growing faster than shown by the standard measure.
The Gini coefficient summarizes income inequality in a single number and is one of the most commonly used measures of income inequality. It uses a scale from 0 to 1 -- the higher the number, the more inequality. Zero represents perfect equality (everyone having exactly the same income), and 1 represents perfect inequality (one person having all income). (Index scores are commonly multiplied by 100 to make them easier to understand.) Gini index ratings can be used to compare inequality within (by race, gender, employment) and between countries, before and after taxes. Different sources will often give different Gini values for the same country or population measured. For example, the U.S. Census Bureau 's official Gini coefficient for the United States was 47.6 in 2013, up from 45.4 in 1993, the earliest year for comparable data. By contrast, the OECD 's Gini coefficient for income inequality in the United States is 37 in 2012 (including wages and other cash transfers), which is still the highest in the developed world, with the lowest being Denmark (24.3), Norway (25.6), and Sweden (25.9).
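To illustrate how the measure behaves, the following is a minimal sketch, not drawn from the sources cited here, of computing a Gini coefficient from a list of incomes; the function name and the sample figures are illustrative assumptions only.

```python
def gini(incomes):
    # Sort incomes in ascending order; the rank-based formula below assumes this ordering.
    x = sorted(incomes)
    n = len(x)
    total = sum(x)
    if n == 0 or total == 0:
        return 0.0
    # G = (2 * sum(i * x_i)) / (n * sum(x)) - (n + 1) / n, using 1-based ranks i.
    weighted = sum(rank * value for rank, value in enumerate(x, start=1))
    return (2.0 * weighted) / (n * total) - (n + 1.0) / n

# Perfect equality gives 0.0; concentrating all income in one household approaches 1.0.
print(round(gini([50_000] * 5), 3))           # 0.0
print(round(gini([0, 0, 0, 0, 100_000]), 3))  # 0.8, the maximum for n = 5, i.e. (n - 1) / n
```

Multiplying the result by 100 gives the 0 - 100 index scale quoted above for the Census Bureau and OECD figures.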
Professor Salvatore Babones of the University of Sydney notes:
A major gap in the measurement of income inequality is the exclusion of capital gains, profits made on increases in the value of investments. Capital gains are excluded for purely practical reasons. The Census does n't ask about them, so they ca n't be included in inequality statistics.
Obviously, the rich earn much more from investments than the poor. As a result, real levels of income inequality in America are much higher than the official Census Bureau figures would suggest.
Conservative researchers have argued that income inequality is not significant because consumption, rather than income should be the measure of inequality, and inequality of consumption is less extreme than inequality of income in the US. Will Wilkinson of the libertarian Cato Institute states that "the weight of the evidence shows that the run - up in consumption inequality has been considerably less dramatic than the rise in income inequality, '' and consumption is more important than income. According to Johnson, Smeeding, and Tory, consumption inequality was actually lower in 2001 than it was in 1986. The debate is summarized in "The Hidden Prosperity of the Poor '' by journalist Thomas B. Edsall. Other studies have not found consumption inequality less dramatic than household income inequality, and the CBO 's study found consumption data not "adequately '' capturing "consumption by high - income households '' as it does their income, though it did agree that household consumption numbers show more equal distribution than household income.
Others dispute the importance of consumption over income, pointing out that if middle and lower income are consuming more than they earn it is because they are saving less or going deeper into debt. A "growing body of work '' suggests that income inequality has been the driving factor in the growing household debt, as high earners bid up the price of real estate and middle income earners go deeper into debt trying to maintain what once was a middle class lifestyle. Between 1983 and 2007, the top 5 percent saw their debt fall from 80 cents for every dollar of income to 65 cents, while the bottom 95 percent saw their debt rise from 60 cents for every dollar of income to $1.40. Economist Krugman has found a strong correlation between inequality and household debt in the United States over the last hundred years.
Related to income inequality is the topic of wealth inequality, which refers to the distribution of net worth (i.e., what is owned minus what is owed) as opposed to annual income. Net worth is affected by movements in the prices of assets, such as stocks, bonds, and real estate, which can fluctuate significantly over the short - term. Income inequality also has a significant effect over long - term shifts in wealth inequality, as income is accumulated. Wealth inequality is also highly concentrated and increasing:
The increase in wealth for the 1 % was not homogeneous, with much of the wealth gains in the top 0.1 %. Those between the top 1 percent and top 0.5 percent have actually lost a significant share of wealth over the past 50 years.
Further, the top 400 Americans had net worth of $2 trillion in 2013, which was more than the combined net worth of the bottom 50 % of U.S. households. The average net worth of these 400 Americans was $5 billion. The lower 50 % of households held 3 % of the wealth in 1989 and 1 % in 2013. The average net worth of the bottom 50 % of households in 2013 was approximately $11,000.
This wealth inequality is apparent in the share of assets held. In 2010, the top 5 % wealthiest households had approximately 72 % of the financial wealth, while the bottom 80 % of households had 5 %. Financial wealth is measured as net worth minus home values, meaning income - generating financial assets like stocks and bonds, plus business equity.
The Center for American Progress reported in September 2014 that: "The trends in rising inequality are also striking when measured by wealth. Among the top 20 percent of families by net worth, average wealth increased by 120 percent between 1983 and 2010, while the middle 20 percent of families only saw their wealth increase by 13 percent, and the bottom fifth of families, on average, saw debt exceed assets -- in other words, negative net worth... Homeowners in the bottom quintile of wealth lost an astounding 94 percent of their wealth between 2007 and 2010. ''
when do you fly a flag half mast | Half - mast - wikipedia
Half - mast or half - staff refers to a flag flying below the summit on a pole. In many countries this is seen as a symbol of respect, mourning, distress, or in some cases, a salute. Strictly speaking, flags are said to be half - mast if flown from ships, and half - staff if on land, although not all regional variations of English use "half - staff ''.
The tradition of flying the flag at half - mast began in the 17th century. According to some sources, the flag is lowered to make room for an "invisible flag of death '' flying above. However, there is disagreement about where on a flagpole a flag should be when it is at half - staff. It is often recommended that a flag at half - staff should be lowered only as much as the hoist, or width, of the flag. British flag protocol is that a flag should be flown no less than two - thirds of the way up the flagpole, with at least the height of the flag between the top of the flag and the top of the pole. It is common for the phrase to be taken literally and for a flag to be flown only halfway up a flagpole, although some authorities deprecate that practice.
When hoisting a flag that is to be displayed at half - mast, it should be raised to the finial of the pole for an instant, then lowered to half - mast. Likewise, when the flag is lowered at the end of the day, it should be hoisted to the finial for an instant, and then lowered.
The flag of Australia is flown half - mast in Australia:
In Australia and other Commonwealth countries, merchant ships "dip '' their ensigns to half - mast when passing an RAN vessel or a ship from the navy of any allied country.
The flag of Cambodia flew at half mast upon the death of King - Father Norodom Sihanouk for 7 days, from 15 -- 22 October 2012.
The term half - mast is the official term used in Canada, according to the Rules For Half - Masting the National Flag of Canada. The decision to fly the flag at half - mast on federal buildings rests with the Department of Canadian Heritage. Federally, the national flag of Canada is flown at half - mast to mark the following occasions:
Certain events are also marked by flying the national flag at half - mast on the Peace Tower at Parliament Hill. These include:
On occasion discretion can dictate the flying of the national flag at half - mast, not only on the Peace Tower, but on all federal facilities. Some examples include 11 September 2001, 11 September 2002, the 2004 Indian Ocean tsunami, the 2005 Mayerthorpe tragedy, the death of Pope John Paul II, the 2005 London bombings, the death of Smokey Smith, the state funerals of former U.S. presidents Ronald Reagan and Gerald Ford, and the death of Jack Layton.
There are, however, exceptions to the rules of half - masting in Canada: if Victoria Day or Canada Day fall during a period of half - masting, the flags are to be returned to full - mast for the duration of the day. The national flag on the Peace Tower is also hoisted to full mast if a foreign head of state or head of government is visiting the parliament. These exemptions, though, do not apply to the period of mourning for the death of a Canadian monarch. The Royal Standard of Canada also never flies at half - mast, as it is considered representative of the sovereign, who ascends to the throne automatically upon the death of his or her predecessor. Each province can make its own determination of when to fly the flag at half - mast when provincial leaders or honoured citizens pass away.
To raise a flag in this position, the flag must be flown to the top of the pole first, then brought down halfway before the flag is secured for flying. When such mourning occurs, all flags should be flown at that position or not be flown at all, with the exception of flags permanently attached to poles.
A controversy surfaced in April 2006, when the newly elected Conservative government discontinued the practice, initiated by the previous Liberal government following the Tarnak Farm incident, of flying the flag at half - mast on all government buildings whenever a Canadian soldier was killed in action in Afghanistan. The issue divided veterans ' groups and military families, some of whom supported the return to the original tradition of using Remembrance Day to honour all soldiers killed in action, while others felt it was an appropriate way to honour the fallen and to remind the population of the costs of war. In spite of the federal government 's policy, local authorities have often decided to fly the flag at half - mast to honour fallen soldiers who were from their jurisdiction, including Toronto and Saskatchewan.
On 2 April 2008, the House of Commons voted in favour of a motion calling on the government to reinstate the former policy regarding the half - masting of the flag on federal buildings. The motion, however, was not binding and the Cabinet refused to recommend any revision in policy to the Governor General. At the same time, a federal advisory committee tabled its report on the protocol of flying the national flag at half - mast, recommending that the Peace Tower flag remain at full height on days such as the Police Officers National Memorial Day and the National Day of Remembrance and Action on Violence Against Women, stating that the flag should only be half - masted on Remembrance Day. At last report, the committee 's findings had been forwarded to the House of Commons all - party heritage committee for further study.
The National Flag Law provides for a number of situations on which the flag should be flown at half - mast, and authorizes the State Council to make such executive orders:
In Finland, the official term for flying a flag at half - mast is suruliputus (mourning flagging). It is performed by raising the flag briefly to the top of the mast and lowering it approximately one - third of the length of the flagpole, placing the lower hoist corner at half - mast. On wall - mounted and roof - top flagpoles the middle of the flag should fly at the middle of the flagpole. When removing the flag from half - mast, it is briefly hoisted to the finial before lowering.
Traditionally, private residences and apartment houses fly the national flag at half - mast on the day of the death of a resident, when the flag is displayed at half - mast until sunset or 21: 00, whichever comes first. Flags are also flown at half - mast on the day of the burial, with the exception that the flag is to be hoisted to the finial after the inhumation takes place.
Flags are also to be flown at half - mast on days of national mourning. Such days include the deaths of former or current Finnish presidents, significant catastrophic events such as the aftermath of the 2004 Indian Ocean earthquake and tsunami and the 2011 Norway attacks, and significant national events such as the 2004 Konginkangas bus disaster and the Jokela and Kauhajoki school shootings.
Historically, flags were flown at half - mast on the Commemoration Day of Fallen Soldiers, which takes place on the third Sunday of May. Originally, the flag was raised to the finial in the morning, displayed at half - mast from 10: 00 to 14: 00, and again raised to the finial for the rest of the day. In 1995, the 50th anniversary of the end of the Second World War, the tradition of flying the flag at half - mast was discontinued, and the flag is now displayed at the finial in the usual manner.
The French flag is flown at half - mast on any Day of Mourning declared by order of the government (for example after the Charlie Hebdo attack on 7 January 2015, the Paris attacks on 13 November 2015 and the Nice attack on 14 July 2016). Other countries have also flown the French flag at half - mast on such occasions; for example, Australia 's Sydney Harbour Bridge flew the French flag at half - mast after the Paris attacks of 13 November 2015.
Some occurrences of the French flag being flown at half - mast have been controversial, notably after the death of Pope John Paul II in 2005 and, to a lesser extent, following the death of Joseph Stalin in March 1953.
The flag of Germany and the flags of its federal states are flown at half - mast:
According to Law 851 / 1978, the only day specified on which the Greek flag is flown at half - mast is Good Friday. It is also flown at half - mast on other declared days of national or public mourning.
Similar rules as in China apply for Hong Kong. See Flag of Hong Kong for details. Prior to the transfer of sovereignty in 1997, the rules for flying the flag at half - mast were the same as the British ones.
The flag of India is flown at half - mast for the death of a President, Vice-President, or Prime Minister, all over India. For the Speaker of the Lok Sabha and the Chief Justice of The Supreme Court of India, it is flown in Delhi and for a Union Cabinet Minister it is flown in Delhi and the state capitals, from where he or she came. For a Minister of State, it is flown only in Delhi. For a Governor, Lt. Governor, or Chief Minister of a state or union territory, it is flown in the concerned state.
If the intimation of the death of any dignitary is received in the afternoon, the flag shall be flown at half - mast on the following day also at the place or places indicated above, provided the funeral has not taken place before sunrise on that day. On the day of the funeral of a dignitary mentioned above, the flag shall be flown at half - mast at the place of the funeral.
In the event of a half - mast day coinciding with the Republic Day, Independence Day, Mahatma Gandhi 's birthday, National Week (6 to 13 April), any other particular day of national rejoicing as may be specified by the Government of India, or, in the case of a state, the anniversary of formation of that state, flags are not permitted to be flown at half - mast except over the building where the body of the deceased is lying until it has been removed and that flag shall be raised to the full - mast position after the body has been removed.
Observances of State mourning on the death of foreign dignitaries are governed by special instructions issued from the Ministry of Home Affairs (Home Ministry) in individual cases. However, in the event of death of either the Head of the State or Head of the Government of a foreign country, the Indian Mission accredited to that country may fly the national flag on the above - mentioned days. India observed a five - day period of National Mourning on the death of Nelson Mandela in 2013. India also declared 29 March 2015 as a day of National Mourning as a mark of respect to the former Prime Minister of Singapore, Lee Kuan Yew.
The flag of Iran is flown at half - mast on the death of a national figure or mourning days.
The flag of Ireland is flown at half - mast on the death of a national or international figure, that is, former and current Presidents or Taoiseach, on all prominent government buildings equipped with a flag pole. The death of a prominent local figure can also be marked locally by the flag being flown at half - mast. When the national flag is flown at half - mast, no other flag should be half - masted. When a balcony in Berkeley, California, collapsed, killing six Irish people, flags were flown at half - mast above all state buildings.
The flag of Israel is flown at half - mast in Israel:
The flag of Italy was flown at half - mast after the 2013 Sardinia floods on 22 November 2013.
The flag of Indonesia is or has been flown half - mast during several occasions:
The flag of Japan is flown at half - staff upon the death of the Emperor of Japan, other members of the Imperial Family, or a current or former Prime Minister, and also following national disasters such as the 2011 Tōhoku earthquake and tsunami. In addition to the tradition of half - staff, the national flag topped by black cloth may be flown to designate mourning. See the flag of Japan for more.
The flag of Malaysia (Jalur Gemilang) is flown at half - mast all over the country:
As a mark of respect to the passengers and crew who were on board Malaysia Airlines Flight 370 and their family members, some states flew their state flags at half - mast. Similarly, as a mark of respect to the passengers and crew who were on board Malaysia Airlines Flight 17 and their family members, the national flag was flown at half - mast for three days and also on the national day of mourning, 22 August 2014. Following the 2015 Sabah earthquake, 8 June 2015 was declared a day of mourning and the flag was flown at half - mast.
The flag of Malta is flown at half - mast on government buildings by instruction of the government through the Office of the Prime Minister, for example after the 2004 Indian Ocean earthquake and tsunami.
The flag of the Netherlands is nationally flown at half - mast:
The royal standard and other flags of the Dutch royal family are never flown at half - mast. Instead, a black pennon may be affixed to the flag in times of mourning.
For both government and public buildings, the flag of New Zealand is flown at half - mast for the following people:
In addition, it can also be flown at half - mast at the request of the Minister for Arts, Culture and Heritage. Examples include the deaths of prominent New Zealanders (e.g. Sir Edmund Hillary and Te Arikinui Dame Te Atairangikaahu, the Māori Queen) and national tragedies (e.g. the Pike River Mine disaster).
According to the Ministry of Culture and Heritage, the position is always referred to as half - mast. The flag should be at least its own height from the top of the flagpole, though the actual position will depend on the size of the flag and the length of the flagpole.
The flag of Pakistan is routinely flown at half - mast on following days:
Any other day notified by the Government. For example, on the death of Saudi King Fahd bin Abdul Aziz, the flag was flown at half - mast for seven days (the flag of Saudi Arabia was not at half - mast because it contains the Shahada). Upon the assassination of Benazir Bhutto, the flag was ordered to be flown at half - mast for three days. On the death of Syedna Mohammed Burhanuddin, spiritual leader of the Dawoodi Bohra community, Sindh Chief Minister Qaim Ali Shah ordered the flag to be flown at half - mast for two days (17 and 18 January) to express solidarity with the bereaved community. In 2014, Prime Minister Nawaz Sharif announced a three - day mourning period from 16 December, including flying the flag at half - mast nationwide and at all Embassies and High Commissions of Pakistan, for the attack on the Army Public School in Peshawar.
The flag of the Philippines may be flown at half - mast as a sign of mourning. Upon the official announcement of the death of the President or a former President, the flag should be flown at half - mast for ten days. The flag should be flown at half - mast for seven days following the death of the Vice President, the Chief Justice, the President of the Senate or the Speaker of the House of Representatives.
The flag may also be required to fly at half - mast upon the death of other persons to be determined by the National Historical Institute, for a period less than seven days. The flag shall be flown at half - mast on all the buildings and places where the decedent was holding office, on the day of death until the day of interment of an incumbent member of the Supreme Court, the Cabinet, the Senate or the House of Representatives, and such other persons as may be determined by the National Historical Institute. Such other people determined by the National Historical Institute have included Pope John Paul II, and former U.S. President Ronald Reagan.
As per Republic Act No. 229, flags nationwide are flown at half - mast every Rizal Day on December 30 to commemorate the death of national hero José Rizal.
When flown at half - mast, the flag should be first hoisted to the peak for a moment then lowered to the half - mast position. It should be raised to the peak again before it is lowered for the day.
The flag may also be used to cover the caskets of the dead of the military, veterans of previous wars, national artists, and outstanding civilians as determined by the local government. In such cases, the flag must be placed such that the white triangle is at the head and the blue portion covers the right side of the casket. The flag should not be lowered to the grave or allowed to touch the ground, but should be solemnly folded and handed to the heirs of the deceased.
Flags must also be raised to half - mast immediately in any area recovering from natural disasters such as a typhoon or an earthquake.
The flag of Russia is flown at half - mast and / or topped by a black ribbon:
On national or regional mourning days, the national flag, all regional flags, and departmental ensigns are flown at half - mast. Firms and non-governmental organizations, embassies, and representatives of international organizations often join the mourning. National or regional mourning usually lasts for one day.
The flag of Saudi Arabia is one of four flags in the world that are never flown at half - mast, because it bears the Shahada. The flag of Somaliland, a self - declared state internationally recognized as part of Somalia, also displays the Shahada. The flag of Iraq bears the Takbir once. The flag of Afghanistan displays the Shahada with the Takbir beneath it. Since all four bear the concept of the oneness of God, the flags are never lowered to half - mast, even as a sign of mourning.
The flag of Singapore is flown at half - mast in Singapore following the death of an "important personage '' (such as a state leader) and during periods of national mourning. Examples include:
The flag of South Africa is flown at half - mast as a sign of mourning when ordered by the President of South Africa. Upon the official announcement of the death of the current or former President, the flag should be flown at half - mast for ten days. The flag should be flown at half - mast for seven days following the death of the Deputy President, the Chairperson of NCOP, the Speaker of the National Assembly or the Chief Justice. For example, the flag was flown at half - mast from 6 -- 15 December 2013 during the national mourning period for Nelson Mandela.
The flag was flown at half - mast during the week of national mourning following the Marikana massacre in August 2012.
The flag of South Korea (Taegeukgi) is flown at half - mast on Hyeonchungil (Korean Memorial Day).
The flag of Sri Lanka is nationally flown at half - mast on a National day of mourning.
The flag of the Republic of China is flown at half - mast on 28 February to mark the anniversary of the 28 February Incident. On 5 August 2014, Taiwan flew its flag at half - mast for three days to commemorate the victims of the Kaohsiung gas explosions and the TransAsia Airways Flight 222 crash.
The flag of Thailand was flown at half - mast for 15 days to mourn the victims of the 2004 Indian Ocean earthquake and tsunami.
The flag of Thailand was flown at half - mast from 2 January to 15 January 2008 on the death of Princess Galyani Vadhana, the Princess of Naradhiwas.
From 14 October to 13 November 2016, the flag of Thailand was flown at half - mast for 30 days following the death of King Bhumibol Adulyadej (Rama IX).
The flag of Turkey is flown at half - mast throughout Turkey every 10 November, between 09:05 and sunset, in memory of Mustafa Kemal Atatürk, who died on 10 November 1938 at five past nine in the morning. At other times, the government may issue an order for the national flag to be flown at half - mast upon the death of principal figures of Turkish political life, as a mark of respect to their memory (such as Turgut Özal). When such an order is issued, all government buildings, offices, public schools and military bases are to fly their flags at half - mast. To show the sympathy of the Turkish people for a foreign leader, flags are also flown at half - mast by governmental order (such as after the deaths of Yasser Arafat or Pope John Paul II). The flag at the Grand National Assembly in Ankara is never lowered to half - mast, regardless of the occasion. The flag at Anıtkabir, the mausoleum of Mustafa Kemal Atatürk, the founder of Turkey, is lowered to half - mast only on 10 November. At those times when the flag is to be flown at half - mast, it must first be raised to full height, then lowered to half - mast.
The flag of the United Arab Emirates is flown at half - mast on 30 November (Martyrs ' Day) every year from 08:00 to 11:30. The flag is also flown at half - mast by decree of the President of the United Arab Emirates, usually for three days. Each of the seven Emirs has the right to order flags to be flown at half - mast in his Emirate.
The Royal Standard, the flag of the British monarch, is never flown at half - mast, because there is always a living monarch: the throne passes immediately to the successor.
There was some controversy in the United Kingdom in 1997 following the death of Diana, Princess of Wales that no flag was flying at half - mast at Buckingham Palace. Until 1997, the only flag to fly from Buckingham Palace was the Royal Standard, the official flag of the reigning British sovereign, which would only fly when the sovereign was in residence at the Palace (or, exceptionally, after the death of the sovereign, the flag of the next senior member of the Royal Family would be raised, if the new sovereign were not present); otherwise, no flag would fly.
In response to public outcry that the palace was not flying a flag at half mast, Queen Elizabeth II ordered a break with protocol, replacing the Royal Standard with the Union Flag at half - mast as soon as the Queen left the Palace to attend the Princess 's funeral at Westminster Abbey. The Royal Standard was again flown (at full hoist) on her return to the Palace. Since then, the Union Flag flies from the Palace when the Queen is not in residence, and has flown at half mast upon the deaths of members of the Royal Family, such as Princess Margaret and the Queen Mother in 2002 and other times of national mourning such as following the terrorist bombings in London on 7 July 2005.
In the UK, according to the Department of Culture, Media and Sport, which decides the flying of flags on command of the Sovereign, the correct way to fly the flag at half - mast is two - thirds of the way between the bottom and top of the flagstaff, with at least the width of the flag between the top of the flag and the top of the pole. The flag may be flown on a government building at half - mast on the following days:
According to the Department of Culture, Media and Sport, the correct term is Half Mast.
If a flag flying day coincides with a half - mast flag flying day (including the death of a member of the royal family), the flag is flown at full - mast unless a specific command is received from the Sovereign.
If more than one flag is flown on a half - mast day, they must all be flown at half - mast, or not at all. The flag of a foreign nation must never be flown at half - mast on UK soil unless that country has declared mourning.
At the United Nations offices in New York and Geneva, the flag of the United Nations flies at half - mast on the day after the death of a Head of State or a Head of Government of a member state, but generally not during the funeral. Other occasions are at the Secretary - General 's discretion. Other offices may follow local practice. To honor the memory of Dag Hammarskjöld the UN issued postage stamps showing its flag at half - mast.
In the United States, the usual government term for non-nautical use is "half - staff. '' While the term "half - mast '' is commonly used in place of half - staff, U.S. law and post - World War I military tradition indicate that "half - mast '' is reserved for usage aboard a ship, where flags are typically flown from masts, and at naval stations ashore. Elsewhere ashore, flags are flown at "half - staff. '' In addition, flags are lowered to half - staff, not raised.
In the United States, the President can issue an executive order for the flag of the United States to be flown at half - staff upon the death of principal figures of the United States government and others, as a mark of respect to their memory. When such an order is issued, all government buildings, offices, public schools, and military bases are to fly their flags at half - staff. Under federal law (4 U.S.C. § 7 (f)), the flags of states, cities, localities, and pennants of societies, shall never be placed above the flag of the United States; thus, all other flags also fly at half - staff when the U.S. flag has been ordered to fly at half - staff. There is no penalty for failure to comply with the above law as to enforce such a penalty would violate the First Amendment.
Governors of U.S. states and territories are authorized by federal law to order all U.S. and state flags in their jurisdiction flown at half - staff as a mark of respect for a former or current state official who has died, or for a member of the armed forces who has died in active duty. The governor 's authority to issue the order is more restricted than the president 's, and does not include discretion to issue the order for state residents who do not meet the criteria stated. Since a governor 's executive order affects only his or her state, not the entire country, these orders are distinguished from presidential proclamations.
Under 4 U.S.C. § 7 (m) and established traditions by Presidential proclamations, the flag of the United States is to be flown at half - staff on rare occasions, in the following circumstances:
Federal law includes a Congressional request that the flag be flown at half - staff on Peace Officers Memorial Day (May 15), unless that day is also Armed Forces Day. Presidential proclamations also call for the flag to be flown at half - staff on Pearl Harbor Remembrance Day (December 7) and Patriot Day (September 11).
On October 16, 2001, President George W. Bush approved legislation requiring the United States flag to be lowered to half - staff on all Federal buildings to memorialize fallen firefighters. Pub. L. 107 -- 51 requires this action to occur annually in conjunction with observance of the National Fallen Firefighters Memorial Service. The date of the National Fallen Firefighters Memorial Service is traditionally the first Sunday in October. It is held at the National Fallen Firefighters Memorial in Emmitsburg, Maryland.
4 U.S.C. § 7 (m) was modified with new legislation signed into effect on June 29, 2007, by President Bush, requiring any federal facility within a region, which proclaims half - staff to honor a member of the U.S. Armed Forces who died on active duty, to follow the half - staff proclamation.
Apart from the lowered position of the flag of Vietnam, state mourning also warrants a black ribbon, one tenth of the width of the flag and equal to its length, to be tied at the summit. Variants have the black ribbon wrapped around the flag itself, preventing it from being unfurled.
The flag of Zimbabwe is flown at half - mast at the conferment of National Hero Status to the deceased. As a first - generation republic, adjudication over such a status is currently done by the politburo of the ZANU -- PF.
|
where do they film the new lost in space | Lost in Space (2018 TV series) - Wikipedia
Lost in Space is an American science fiction television series based on a reimagining of the 1965 series of the same name (itself a reimagining of the 1812 novel The Swiss Family Robinson), following the adventures of a family of space colonists whose spaceship veers off course.
Produced by Legendary Television, Synthesis Entertainment, Clickety - Clack Productions, and Applebox Entertainment, the show is written by Matt Sazama and Burk Sharpless, with Zack Estrin serving as showrunner. Netflix released the series on April 13, 2018, renewing it for a second season the following month.
In the aftermath of an impact event that threatens the survival of humanity, the Robinson family is selected for the 24th mission of the Resolute, an interstellar spacecraft carrying selected families to colonize the Alpha Centauri star system.
Before they reach their destination, an alien robot breaches the Resolute 's hull. Forced to evacuate the mothership in short - range Jupiter spacecraft, scores of colonists, among them the Robinsons, crash on a nearby habitable planet. There they must contend with a strange environment and battle their own personal demons as they search for a way back to the Resolute.
In October 2014, it was announced that Legendary Television and Synthesis Entertainment were developing a new reboot of Lost in Space and had hired screenwriting duo Matt Sazama and Burk Sharpless to pen the pilot episodes. In November 2015, Netflix landed the project. On June 29, 2016, Netflix ordered a full 10 episode season of Lost in Space, with Zack Estrin as executive producer and showrunner. Sazama, Sharpless, Kevin Burns, Jon Jashni, Neil Marshall, and Marc Helwig also serve as executive producers.
Production on the first season began in February 2017, in Vancouver, British Columbia, and concluded in July 2017.
The series was released on April 13, 2018, on Netflix. On March 31, 2018, the series pilot was screened at Awesome Con in Washington, D.C.
The review aggregator website Rotten Tomatoes reported a 68 % approval rating with an average rating of 6.21 / 10, based on 65 reviews. The website 's critical consensus reads: "Lost in Space 's production values are ambitious enough to attract sci - fi adventure fans, while the story 's large heart adds an emotional anchor to all the deep space derring - do. '' Metacritic, which uses a weighted average, assigned a normalized score of 58 out of 100 based on 27 critics, indicating "mixed or average reviews ''.
David Griffin of IGN gave the first season a rating of 8.5 / 10, calling it "an excellent sci - fi adventure with a slight villain problem, '' giving particular praise to the Robinson family, while criticizing Parker Posey 's Dr. Smith as an unsophisticated and one - dimensional character who lacks redeeming qualities. In contrast, Jen Chaney of Vulture characterized Posey 's performance as providing "understated, sly comedic touches ''.
|
what is the far common in a separate peace | The Devon school - wikipedia
The Devon School is a fictional school created by author John Knowles in the novels A Separate Peace and Peace Breaks Out. It is based on Knowles ' alma mater, Phillips Exeter Academy. Like Phillips Exeter during World War II, Devon is a boys ' boarding school in New Hampshire. Knowles places the school in a town that bears its name, specifically at the head of a quaint residential street called Gilman Street. The school "emerged naturally from the town which had produced it. '' A Separate Peace covers the summer of 1942 and the Winter Session of 1942 - 1943. The senior year students are being prepared for the war. The timeframe in Peace Breaks Out is 1946 - 1947. In both of these books, Devon is portrayed as a boys ' preparatory school, just as Phillips Exeter was at the time; although Phillips Exeter is today a co-educational school. The Devon School is one of the most prominent fictional examples of a total institution.
The approach from Gilman Street gradually gives way to the Far Common, a leafy, manicured expanse of ground that precedes the First Academy Building, which Knowles derives almost entirely from the Academy Building at Phillips Exeter, having the same cupola and a similar Latin inscription over the entrance. The First Academy Building, the Georgian red brick dormitories, and the Gothic - style chapel form a quadrangle around the Center Common. There is then a group of Colonial houses for the Dean, the Headmaster, and other faculty members, along an old London - style lane leading from the dormitories to the Naguamsett River and the Crew House. Progressing in another direction, past the Field House (or "The Cage ''), the Center Common opens onto the Playing Fields, with tennis courts on the left, the Devon Woods on the right, and enormous open grounds for playing football, lacrosse and soccer. Directly ahead, far across the Playing Fields, are the stadium (which envelops the swimming pool), the Devon River and the climactic tree that is the basis for a very crucial part of the plot. Beyond all that, Knowles names the excess wilderness the Fields Beyond.
The town of Devon, as described by Knowles, is of course very similar to the real - life Exeter, New Hampshire, and the Devon River and the Naguamsett River are based on the Exeter River and the Squamscott River. Exactly like its real - life basis, the fresh - water Devon River eventually falls into the tidal Naguamsett, a marshy, mud - banked saline body of water that eventually connects to the ocean. The two rivers are separated by a dam and a small waterfall.
|
who published a vindication of the rights of woman | A Vindication of the Rights of Woman - wikipedia
A Vindication of the Rights of Woman: with Strictures on Political and Moral Subjects (1792), written by the 18th - century British proto - feminist Mary Wollstonecraft, is one of the earliest works of feminist philosophy. In it, Wollstonecraft responds to those educational and political theorists of the 18th century who did not believe women should have an education. She argues that women ought to have an education commensurate with their position in society, claiming that women are essential to the nation because they educate its children and because they could be "companions '' to their husbands, rather than mere wives. Instead of viewing women as ornaments to society or property to be traded in marriage, Wollstonecraft maintains that they are human beings deserving of the same fundamental rights as men.
Wollstonecraft was prompted to write the Rights of Woman after reading Charles Maurice de Talleyrand - Périgord 's 1791 report to the French National Assembly, which stated that women should only receive a domestic education; she used her commentary on this specific event to launch a broad attack against sexual double standards and to indict men for encouraging women to indulge in excessive emotion. Wollstonecraft wrote the Rights of Woman hurriedly to respond directly to ongoing events; she intended to write a more thoughtful second volume but died before completing it.
While Wollstonecraft does call for equality between the sexes in particular areas of life, such as morality, she does not explicitly state that men and women are equal. Her ambiguous statements regarding the equality of the sexes have since made it difficult to classify Wollstonecraft as a modern feminist, particularly since the word and the concept were unavailable to her. Although it is commonly assumed now that the Rights of Woman was unfavourably received, this is a modern misconception based on the belief that Wollstonecraft was as reviled during her lifetime as she became after the publication of William Godwin 's Memoirs of the Author of A Vindication of the Rights of Woman (1798). The Rights of Woman was actually well received when it was first published in 1792. One biographer has called it "perhaps the most original book of (Wollstonecraft 's) century ''.
A Vindication of the Rights of Woman was written against the tumultuous background of the French Revolution and the debates that it spawned in Britain. In a lively and sometimes vicious pamphlet war, now referred to as the Revolution controversy, British political commentators addressed topics ranging from representative government to human rights to the separation of church and state, many of these issues having been raised in France first. Wollstonecraft first entered this fray in 1790 with A Vindication of the Rights of Men, a response to Edmund Burke 's Reflections on the Revolution in France (1790). In his Reflections, Burke criticized the view of many British thinkers and writers who had welcomed the early stages of the French revolution. While they saw the revolution as analogous to Britain 's own Glorious Revolution in 1688, which had restricted the powers of the monarchy, Burke argued that the appropriate historical analogy was the English Civil War (1642 -- 1651) in which Charles I had been executed in 1649. He viewed the French revolution as the violent overthrow of a legitimate government. In Reflections he argues that citizens do not have the right to revolt against their government because civilization is the result of social and political consensus; its traditions can not be continually challenged -- the result would be anarchy. One of the key arguments of Wollstonecraft 's Rights of Men, published just six weeks after Burke 's Reflections, is that rights can not be based on tradition; rights, she argues, should be conferred because they are reasonable and just, regardless of their basis in tradition.
When Charles Maurice de Talleyrand - Périgord presented his Rapport sur l'instruction publique (1791) to the National Assembly in France, Wollstonecraft was galvanized to respond. In his recommendations for a national system of education, Talleyrand had written:
Let us bring up women, not to aspire to advantages which the Constitution denies them, but to know and appreciate those which it guarantees them... Men are destined to live on the stage of the world. A public education suits them: it early places before their eyes all the scenes of life: only the proportions are different. The paternal home is better for the education of women; they have less need to learn to deal with the interests of others, than to accustom themselves to a calm and secluded life.
Wollstonecraft dedicated the Rights of Woman to Talleyrand: "Having read with great pleasure a pamphlet which you have lately published, I dedicate this volume to you; to induce you to reconsider the subject, and maturely weigh what I have advanced respecting the rights of woman and national education. '' At the end of 1791, French feminist Olympe de Gouges had published her Declaration of the Rights of Woman and the Female Citizen, and the question of women 's rights became central to political debates in both France and Britain.
The Rights of Woman is an extension of Wollstonecraft 's arguments in the Rights of Men. In the Rights of Men, as the title suggests, she is concerned with the rights of particular men (18th - century British men) while in the Rights of Woman, she is concerned with the rights afforded to "woman '', an abstract category. She does not isolate her argument to 18th - century women or British women. The first chapter of the Rights of Woman addresses the issue of natural rights and asks who has those inalienable rights and on what grounds. She answers that since natural rights are given by God, for one segment of society to deny them to another segment is a sin. The Rights of Woman thus engages not only specific events in France and in Britain but also larger questions being raised by political philosophers such as John Locke and Jean - Jacques Rousseau.
Wollstonecraft did not employ the formal argumentation or logical prose style common to 18th - century philosophical writing when composing her own works. The Rights of Woman is a long essay that introduces all of its major topics in the opening chapters and then repeatedly returns to them, each time from a different point of view. It also adopts a hybrid tone that combines rational argument with the fervent rhetoric of sensibility.
In the 18th century, sensibility was a physical phenomenon that came to be attached to a specific set of moral beliefs. Physicians and anatomists believed that the more sensitive people 's nerves, the more emotionally affected they would be by their surroundings. Since women were thought to have keener nerves than men, it was also believed that women were more emotional than men. The emotional excess associated with sensibility also theoretically produced an ethic of compassion: those with sensibility could easily sympathise with people in pain. Thus historians have credited the discourse of sensibility and those who promoted it with the increased humanitarian efforts, such as the movement to abolish the slave trade. But sensibility also paralysed those who had too much of it; as scholar G.J. Barker - Benfield explains, "an innate refinement of nerves was also identifiable with greater suffering, with weakness, and a susceptibility to disorder ''.
By the time Wollstonecraft was writing the Rights of Woman, sensibility had already been under sustained attack for a number of years. Sensibility, which had initially promised to draw individuals together through sympathy, was now viewed as "profoundly separatist ''; novels, plays, and poems that employed the language of sensibility asserted individual rights, sexual freedom, and unconventional familial relationships based only upon feeling. Furthermore, as Janet Todd, another scholar of sensibility, argues, "to many in Britain the cult of sensibility seemed to have feminized the nation, given women undue prominence, and emasculated men. ''
One of Wollstonecraft 's central arguments in the Rights of Woman is that women should be educated in a rational manner to give them the opportunity to contribute to society. In the 18th century, it was often assumed by both educational philosophers and conduct book writers, who wrote what one might think of as early self - help books, that women were incapable of rational or abstract thought. Women, it was believed, were too susceptible to sensibility and too fragile to be able to think clearly. Wollstonecraft, along with other female reformers such as Catharine Macaulay and Hester Chapone, maintained that women were indeed capable of rational thought and deserved to be educated. She argued this point in her own conduct book, Thoughts on the Education of Daughters (1787), in her children 's book, Original Stories from Real Life (1788), as well as in the Rights of Woman.
Stating in her preface that "my main argument is built on this simple principle, that if (woman) be not prepared by education to become the companion of man, she will stop the progress of knowledge and virtue; for truth must be common to all '', Wollstonecraft contends that society will degenerate without educated women, particularly because mothers are the primary educators of young children. She attributes the problem of uneducated women to men and "a false system of education, gathered from the books written on this subject by men who (consider) females rather as women than human creatures ''. Women are capable of rationality; it only appears that they are not, because men have refused to educate them and encouraged them to be frivolous (Wollstonecraft describes silly women as "spaniels '' and "toys ''). While stressing it is of the same kind, she entertains the notion that women might not be able to attain the same degree of knowledge that men do.
Wollstonecraft attacks conduct book writers such as James Fordyce and John Gregory as well as educational philosophers such as Jean - Jacques Rousseau who argue that a woman does not need a rational education. (Rousseau famously argues in Emile (1762) that women should be educated for the pleasure of men; Wollstonecraft, infuriated by this argument, attacks not only it but also Rousseau himself.) Intent on illustrating the limitations that contemporary educational theory placed upon women, Wollstonecraft writes, "taught from their infancy that beauty is woman 's sceptre, the mind shapes itself to the body, and, roaming round its gilt cage, only seeks to adorn its prison '', implying that without this damaging ideology, which encourages young women to focus their attention on beauty and outward accomplishments, they could achieve much more. Wives could be the rational "companions '' of their husbands and even pursue careers should they so choose: "women might certainly study the art of healing, and be physicians as well as nurses. And midwifery, decency seems to allot to them... they might, also, study politics... Business of various kinds, they might likewise pursue. ''
For Wollstonecraft, "the most perfect education '' is "an exercise of the understanding as is best calculated to strengthen the body and form the heart. Or, in other words, to enable the individual to attach such habits of virtue as will render it independent. '' In addition to her broad philosophical arguments, Wollstonecraft lays out a specific plan for national education to counter Talleyrand 's. In Chapter 12, "On National Education, '' she proposes that children be sent to day schools as well as given some education at home "to inspire a love of home and domestic pleasures, '' and that such schools be free for children "five to nine years of age. '' She also maintains that schooling should be co-educational, contending that men and women, whose marriages are "the cement of society, '' should be "educated after the same model. ''
It is debatable to what extent the Rights of Woman is a feminist text; because the definitions of feminist vary, different scholars have come to different conclusions. Wollstonecraft would never have referred to her text as feminist because the words feminist and feminism were not coined until the 1890s. Moreover, there was no feminist movement to speak of during Wollstonecraft 's lifetime. In the introduction to her seminal work on Wollstonecraft 's thought, Barbara Taylor writes:
Describing (Wollstonecraft 's philosophy) as feminist is problematic, and I do it only after much consideration. The label is of course anachronistic... Treating Wollstonecraft 's thought as an anticipation of nineteenth and twentieth - century feminist argument has meant sacrificing or distorting some of its key elements. Leading examples of this... have been the widespread neglect of her religious beliefs, and the misrepresentation of her as a bourgeois liberal, which together have resulted in the displacement of a religiously inspired utopian radicalism by a secular, class - partisan reformism as alien to Wollstonecraft 's political project as her dream of a divinely promised age of universal happiness is to our own. Even more important however has been the imposition on Wollstonecraft of a heroic - individualist brand of politics utterly at odds with her own ethically driven case for women 's emancipation. Wollstonecraft 's leading ambition for women was that they should attain virtue, and it was to this end that she sought their liberation.
In the Rights of Woman, Wollstonecraft does not make the claim for gender equality using the same arguments or the same language that late 19th - and 20th century feminists later would. For instance, rather than unequivocally stating that men and women are equal, Wollstonecraft contends that men and women are equal in the eyes of God, which means that they are both subject to the same moral law. For Wollstonecraft, men and women are equal in the most important areas of life. While such an idea may not seem revolutionary to 21st - century readers, its implications were revolutionary during the 18th century. For example, it implied that both men and women -- not just women -- should be modest and respect the sanctity of marriage. Wollstonecraft 's argument exposed the sexual double standard of the late 18th century and demanded that men adhere to the same virtues demanded of women.
However, Wollstonecraft 's arguments for equality stand in contrast to her statements respecting the superiority of masculine strength and valour. Wollstonecraft famously and ambiguously states:
Let it not be concluded, that I wish to invert the order of things; I have already granted, that, from the constitution of their bodies, men seem to be designed by Providence to attain a greater degree of virtue. I speak collectively of the whole sex; but I see not the shadow of a reason to conclude that their virtues should differ in respect to their nature. In fact, how can they, if virtue has only one eternal standard? I must therefore, if I reason consequentially, as strenuously maintain that they have the same simple direction, as that there is a God.
Moreover, Wollstonecraft calls on men, rather than women, to initiate the social and political changes she outlines in the Rights of Woman. Because women are uneducated, they can not alter their own situation -- men must come to their aid. Wollstonecraft writes at the end of her chapter "Of the Pernicious Effects Which Arise from the Unnatural Distinctions Established in Society '':
I then would fain convince reasonable men of the importance of some of my remarks; and prevail on them to weigh dispassionately the whole tenor of my observations. -- I appeal to their understandings; and, as a fellow - creature, claim, in the name of my sex, some interest in their hearts. I entreat them to assist to emancipate their companion, to make her a help meet for them! Would men but generously snap our chains, and be content with rational fellowship instead of slavish obedience, they would find us more observant daughters, more affectionate sisters, more faithful wives, more reasonable mothers -- in a word, better citizens.
It is Wollstonecraft 's last novel, Maria: or, The Wrongs of Woman (1798), the fictionalized sequel to the Rights of Woman, that is usually considered her most radical feminist work.
One of Wollstonecraft 's most scathing criticisms in the Rights of Woman is against false and excessive sensibility, particularly in women. She argues that women who succumb to sensibility are "blown about by every momentary gust of feeling ''; because these women are "the prey of their senses '', they can not think rationally. In fact, not only do they do harm to themselves but they also do harm to all of civilization: these are not women who can refine civilization -- these are women who will destroy it. But reason and feeling are not independent for Wollstonecraft; rather, she believes that they should inform each other. For Wollstonecraft, as for the important 18th - century philosopher David Hume, the passions underpin all reason. This was a theme that she would return to throughout her career, but particularly in her novels Mary: A Fiction (1788) and Maria: or, The Wrongs of Woman.
As part of her argument that women should not be overly influenced by their feelings, Wollstonecraft emphasises that they should not be constrained by or made slaves to their bodies or their sexual feelings. This particular argument has led many modern feminists to suggest that Wollstonecraft intentionally avoids granting women any sexual desire. Cora Kaplan argues that the "negative and prescriptive assault on female sexuality '' is a "leitmotif '' of the Rights of Woman. For example, Wollstonecraft advises her readers to "calmly let passion subside into friendship '' in the ideal companionate marriage (that is, in the ideal of a love - based marriage that was developing at the time). It would be better, she writes, when "two virtuous young people marry... if some circumstances checked their passion ''. According to Wollstonecraft, "love and friendship can not subsist in the same bosom ''. As Mary Poovey explains, "Wollstonecraft betrays her fear that female desire might in fact court man 's lascivious and degrading attentions, that the subordinate position women have been given might even be deserved. Until women can transcend their fleshly desires and fleshly forms, they will be hostage to the body. '' If women are not interested in sexuality, they can not be dominated by men. Wollstonecraft worries that women are consumed with "romantic wavering '', that is, they are interested only in satisfying their lusts. Because the Rights of Woman eliminates sexuality from a woman 's life, Kaplan contends, it "expresses a violent antagonism to the sexual '' while at the same time "exaggerat (ing) the importance of the sensual in the everyday life of women ''. Wollstonecraft was so determined to wipe sexuality from her picture of the ideal woman that she ended up foregrounding it by insisting upon its absence. But as Kaplan and others have remarked, Wollstonecraft may have been forced to make this sacrifice: "it is important to remember that the notion of woman as politically enabled and independent (was) fatally linked (during the eighteenth century) to the unrestrained and vicious exercise of her sexuality. ''
Claudia Johnson, a prominent Wollstonecraft scholar, has called the Rights of Woman "a republican manifesto ''. Johnson contends that Wollstonecraft is hearkening back to the Commonwealth tradition of the 17th century and attempting to reestablish a republican ethos. In Wollstonecraft 's version, there would be strong, but separate, masculine and feminine roles for citizens. According to Johnson, Wollstonecraft "denounces the collapse of proper sexual distinction as the leading feature of her age, and as the grievous consequence of sentimentality itself. The problem undermining society in her view is feminized men ''. If men feel free to adopt both the masculine position and the sentimental feminine position, she argues, women have no position open to them in society. Johnson therefore sees Wollstonecraft as a critic, in both the Rights of Men and the Rights of Woman, of the "masculinization of sensitivity '' in such works as Edmund Burke 's Reflections on the Revolution in France.
In the Rights of Woman Wollstonecraft adheres to a version of republicanism that includes a belief in the eventual overthrow of all titles, including the monarchy. She also briefly suggests that all men and women should be represented in government. But the bulk of her "political criticism, '' as Chris Jones, a Wollstonecraft scholar, explains, "is couched predominantly in terms of morality ''. Her definition of virtue focuses on the individual 's happiness rather than, for example, the good of the entire society. This is reflected in her explanation of natural rights. Because rights ultimately proceed from God, Wollstonecraft maintains that there are duties, tied to those rights, incumbent upon each and every person. For Wollstonecraft, the individual is taught republicanism and benevolence within the family; domestic relations and familial ties are crucial to her understanding of social cohesion and patriotism.
In many ways the Rights of Woman is inflected by a bourgeois view of the world, as is its direct predecessor the Rights of Men. Wollstonecraft addresses her text to the middle class, which she calls the "most natural state ''. She also frequently praises modesty and industry, virtues which, at the time, were associated with the middle class. From her position as a middle - class writer arguing for a middle - class ethos, Wollstonecraft also attacks the wealthy, criticizing them using the same arguments she employs against women. She points out the "false - refinement, immorality, and vanity '' of the rich, calling them "weak, artificial beings, raised above the common wants and affections of their race, in a premature unnatural manner (who) undermine the very foundation of virtue, and spread corruption through the whole mass of society ''.
But Wollstonecraft 's criticisms of the wealthy do not necessarily reflect a concomitant sympathy for the poor. For her, the poor are fortunate because they will never be trapped by the snares of wealth: "Happy is it when people have the cares of life to struggle with; for these struggles prevent their becoming a prey to enervating vices, merely from idleness! '' Moreover, she contends that charity has only negative consequences because, as Jones puts it, she "sees it as sustaining an unequal society while giving the appearance of virtue to the rich ''.
In her national plan for education, she retains class distinctions (with an exception for the intelligent), suggesting that: "After the age of nine, girls and boys, intended for domestic employments, or mechanical trades, ought to be removed to other schools, and receive instruction, in some measure appropriated to the destination of each individual... The young people of superior abilities, or fortune, might now be taught, in another school, the dead and living languages, the elements of science, and continue the study of history and politics, on a more extensive scale, which would not exclude polite literature. ''
In attempting to navigate the cultural expectations of female writers and the generic conventions of political and philosophical discourse, Wollstonecraft, as she does throughout her oeuvre, constructs a unique blend of masculine and feminine styles in the Rights of Woman. She uses the language of philosophy, referring to her work as a "treatise '' with "arguments '' and "principles ''. However, Wollstonecraft also uses a personal tone, employing "I '' and "you '', dashes and exclamation marks, and autobiographical references to create a distinctly feminine voice in the text. The Rights of Woman further hybridizes its genre by weaving together elements of the conduct book, the short essay, and the novel, genres often associated with women, while at the same time claiming that these genres could be used to discuss philosophical topics such as rights.
Although Wollstonecraft argues against excessive sensibility, the rhetoric of the Rights of Woman is at times heated and attempts to provoke the reader. Many of the most emotional comments in the book are directed at Rousseau. For example, after excerpting a long passage from Emile (1762), Wollstonecraft pithily states, "I shall make no other comments on this ingenious passage, than just to observe, that it is the philosophy of lasciviousness. '' A mere page later, after indicting Rousseau 's plan for female education, she writes "I must relieve myself by drawing another picture. '' These terse exclamations are meant to draw the reader to her side of the argument (it is assumed that the reader will agree with them). While she claims to write in a plain style so that her ideas will reach the broadest possible audience, she actually combines the plain, rational language of the political treatise with the poetic, passionate language of sensibility to demonstrate that one can combine rationality and sensibility in the same self. Wollstonecraft defends her positions not only with reasoned argument but also with ardent rhetoric.
In her efforts to vividly describe the condition of women within society, Wollstonecraft employs several different analogies. She often compares women to slaves, arguing that their ignorance and powerlessness places them in that position. But at the same time, she also compares them to "capricious tyrants '' who use cunning and deceit to manipulate the men around them. At one point, she reasons that a woman can become either a slave or tyrant, which she describes as two sides of the same coin. Wollstonecraft also compares women to soldiers; like military men, they are valued only for their appearance and obedience. And like the rich, women 's "softness '' has "debased mankind ''.
Wollstonecraft was forced to write the Rights of Woman hurriedly to respond to Talleyrand and ongoing events. Upon completing the work, she wrote to her friend William Roscoe: "I am dissatisfied with myself for not having done justice to the subject. -- Do not suspect me of false modesty -- I mean to say that had I allowed myself more time I could have written a better book, in every sense of the word... I intend to finish the next volume before I begin to print, for it is not pleasant to have the Devil coming for the conclusion of a sheet before it is written. '' When Wollstonecraft revised the Rights of Woman for the second edition, she took the opportunity not only to fix small spelling and grammar mistakes but also to bolster the feminist claims of her argument. She changed some of her statements regarding female and male difference to reflect a greater equality between the sexes.
Wollstonecraft never wrote the second part to the Rights of Woman, although William Godwin published her "Hints '', which were "chiefly designed to have been incorporated in the second part of the Vindication of the Rights of Woman '', in the posthumous collection of her works. However, she did begin writing the novel Maria: or, The Wrongs of Woman, which most scholars consider a fictionalized sequel to the Rights of Woman. It was unfinished at her death and also included in the Posthumous Works published by Godwin.
When it was first published in 1792, the Rights of Woman was reviewed favourably by the Analytical Review, the General Magazine, the Literary Magazine, New York Magazine, and the Monthly Review, although the assumption persists even today that Rights of Woman received hostile reviews. It was almost immediately released in a second edition in 1792, several American editions appeared, and it was translated into French. Taylor writes that "it was an immediate success ''. Moreover, other writers such as Mary Hays and Mary Robinson specifically alluded to Wollstonecraft 's text in their own works. Hays cited the Rights of Woman in her novel Memoirs of Emma Courtney (1796) and modelled her female characters after Wollstonecraft 's ideal woman.
Although female conservatives such as Hannah More excoriated Wollstonecraft personally, they actually shared many of the same values. As the scholar Anne Mellor has shown, both More and Wollstonecraft wanted a society founded on "Christian virtues of rational benevolence, honesty, personal virtue, the fulfillment of social duty, thrift, sobriety, and hard work ''. During the early 1790s, many writers within British society were engaged in an intense debate regarding the position of women in society. For example, the respected poet and essayist Anna Laetitia Barbauld and Wollstonecraft sparred back and forth; Barbauld published several poems responding to Wollstonecraft 's work and Wollstonecraft commented on them in footnotes to the Rights of Woman. The work also provoked outright hostility. The bluestocking Elizabeth Carter was unimpressed with the work. Thomas Taylor, the Neoplatonist translator who had been a landlord to the Wollstonecraft family in the late 1770s, swiftly wrote a satire called A Vindication of the Rights of Brutes: if women have rights, why not animals too?
After Wollstonecraft died in 1797, her husband William Godwin published his Memoirs of the Author of A Vindication of the Rights of Woman (1798). He revealed much about her private life that had previously not been known to the public: her illegitimate child, her love affairs, and her attempts at suicide. While Godwin believed he was portraying his wife with love, sincerity, and compassion, contemporary readers were shocked by Wollstonecraft 's unorthodox lifestyle and she became a reviled figure. Richard Polwhele targeted her in particular in his anonymous long poem The Unsex 'd Females (1798), a defensive reaction to women 's literary self - assertion: Hannah More is Christ to Wollstonecraft 's Satan. His poem was "well known '' among the responses to A Vindication. One reviewer commended this "ingenious poem '' with its "playful sallies of sarcastic wit '' against "our modern ladies, '' though others found it "a tedious, lifeless piece of writing. '' Critical responses largely fell along clear - cut political lines.
Wollstonecraft 's ideas became associated with her life story and women writers felt that it was dangerous to mention her in their texts. Hays, who had previously been a close friend and an outspoken advocate for Wollstonecraft and her Rights of Woman, for example, did not include her in the collection of Illustrious and Celebrated Women she published in 1803. Maria Edgeworth specifically distances herself from Wollstonecraft in her novel Belinda (1802); she caricatures Wollstonecraft as a radical feminist in the character of Harriet Freke. But, like Jane Austen, she does not reject Wollstonecraft 's ideas. Both Edgeworth and Austen argue that women are crucial to the development of the nation; moreover, they portray women as rational beings who should choose companionate marriage.
The negative views towards Wollstonecraft persisted for over a century. The Rights of Woman was not reprinted until the middle of the 19th century and it still retained an aura of ill - repute. George Eliot wrote "there is in some quarters a vague prejudice against the Rights of Woman as in some way or other a reprehensible book, but readers who go to it with this impression will be surprised to find it eminently serious, severely moral, and withal rather heavy ''. The suffragist (i.e. moderate reformer, as opposed to suffragette) Millicent Garrett Fawcett wrote the introduction to the centenary edition of the Rights of Woman, cleansing the memory of Wollstonecraft and claiming her as the foremother of the struggle for the vote. While the Rights of Woman may have paved the way for feminist arguments, 20th century feminists have tended to use Wollstonecraft 's life story, rather than her texts, for inspiration; her unorthodox lifestyle convinced them to try new "experiments in living '', as Virginia Woolf termed it in her famous essay on Wollstonecraft. However, there is some evidence that the Rights of Woman may be influencing current feminists. Ayaan Hirsi Ali, a feminist who is critical of Islam 's dictates regarding women, cites the Rights of Woman in her autobiography Infidel, writing that she was "inspired by Mary Wollstonecraft, the pioneering feminist thinker who told women they had the same ability to reason as men did and deserved the same rights ''.
|
how long did it take goethe to write faust | Goethe 's Faust - Wikipedia
Johann Wolfgang von Goethe 's Faust is a tragic play in two parts usually known in English as Faust, Part One and Faust, Part Two. Although rarely staged in its entirety, it is the play with the largest audience numbers on German - language stages. Faust is considered by many to be Goethe 's magnum opus and the greatest work of German literature.
The earliest forms of the work, known as the Urfaust, were developed between 1772 and 1775; however, the details of that development are not entirely clear. Urfaust has twenty - two scenes, one in prose, two largely prose and the remaining 1,441 lines in rhymed verse. The manuscript is lost, but a copy was discovered in 1886.
The first appearance of the work in print was Faust, a Fragment, published in 1790. Goethe completed a preliminary version of what is now known as Part One in 1806. Its publication in 1808 was followed by the revised 1828 -- 29 edition, the last to be edited by Goethe himself.
Goethe finished writing Faust Part Two in 1831. In contrast to Faust Part One, the focus here is no longer on the soul of Faust, which has been sold to the devil, but rather on social phenomena such as psychology, history and politics, in addition to mystical and philosophical topics. The second part formed the principal occupation of Goethe 's last years. It appeared only posthumously in 1832.
The original 1808 German title page of Goethe 's play read simply: "Faust. / Eine Tragödie '' ("Faust. / A Tragedy ''). The addition of "erster Teil '' ("Part One '', in English) was only retrospectively applied by publishers when the sequel was published in 1832 with a title page which read: "Faust. / Der Tragödie zweiter Teil '' ("Faust. / The Tragedy Part Two ''). The two plays have been published in English under a number of titles, and are most usually referred to as Faust Parts One and Two.
The principal characters of Faust Part One include:
Faust Part One takes place in multiple settings, the first of which is Heaven. Mephistopheles makes a bet with God: he says that he can lure God 's favourite human being (Faust), who is striving to learn everything that can be known, away from righteous pursuits. The next scene takes place in Faust 's study where Faust, despairing at the vanity of scientific, humanitarian and religious learning, turns to magic in the hope of gaining infinite knowledge. He suspects, however, that his attempts are failing. Frustrated, he ponders suicide, but rejects it as he hears the echo of nearby Easter celebrations begin. He goes for a walk with his assistant Wagner and is followed home by a stray poodle (the term then meant a medium - to large - sized dog, similar to a sheepdog).
In Faust 's study, the poodle transforms into the devil (Mephistopheles). Faust makes an arrangement with the devil: the devil will do everything that Faust wants while he is here on Earth, and in exchange Faust will serve the devil in Hell. Faust 's arrangement is that if he is pleased enough with anything the devil gives him that he wants to stay in that moment forever, then he will die in that moment.
When the devil tells Faust to sign the pact with blood, Faust complains that the devil does not trust Faust 's word of honor. In the end, Mephistopheles wins the argument and Faust signs the contract with a drop of his own blood. Faust has a few excursions and then meets Margaret (also known as Gretchen). He is attracted to her and with jewellery and with help from a neighbor, Martha, the devil draws Gretchen into Faust 's arms. With influence from the devil, Faust seduces Gretchen. Gretchen 's mother dies from a sleeping potion, administered by Gretchen to obtain privacy so that Faust could visit her. Gretchen discovers she is pregnant. Gretchen 's brother condemns Faust, challenges him and falls dead at the hands of Faust and Mephistopheles. Gretchen drowns her illegitimate child and is convicted of the murder. Faust tries to save Gretchen from death by attempting to free her from prison. Finding that she refuses to escape, Faust and the devil flee the dungeon, while voices from Heaven announce that Gretchen shall be saved -- "Sie ist gerettet '' -- this differs from the harsher ending of Urfaust -- "Sie ist gerichtet! '' -- "she is condemned. ''
Rich in classical allusion, in Part Two the romantic story of the first Faust is forgotten, and Faust wakes in a field of fairies to initiate a new cycle of adventures and purpose. The piece consists of five acts (relatively isolated episodes) each representing a different theme. Ultimately, Faust goes to Heaven, for he loses only half of the bet. Angels, who arrive as messengers of divine mercy, declare at the end of Act V: "He who strives on and lives to strive / Can earn redemption still '' (V, 11936 -- 7).
Throughout Part One, Faust remains unsatisfied; the ultimate conclusion of the tragedy and the outcome of the wagers are only revealed in Faust Part Two. The first part represents the "small world '' and takes place in Faust 's own local, temporal milieu. In contrast, Part Two takes place in the "wide world '' or macrocosmos.
Goethe 's Faust has inspired a great deal of literature, music, and illustration.
Paul Carus (1852-1919) said that the influence of Goethe's book had been "little less than the Bible. ''
Walter Kaufmann asserts that "Goethe created a character (i.e. Faust) who was accepted by his people as their ideal prototype. ''
Many lines and phrases from Goethe's Faust have become part of the German language.
In 1821, a partial English verse translation of Faust (Part One) was published anonymously by the London publisher Thomas Boosey and Sons, with illustrations by the German engraver Moritz Retzsch. This translation was attributed to the English poet Samuel Taylor Coleridge by Frederick Burwick and James C. McKusick in their 2007 Oxford University Press edition, Faustus: From the German of Goethe, Translated by Samuel Taylor Coleridge. In a letter dated 4 September 1820, Goethe wrote to his son August that Coleridge was translating Faust. However, this attribution is controversial: Roger Paulin, William St. Clair, and Elinor Shaffer provide a lengthy rebuttal to Burwick and McKusick, offering evidence that includes Coleridge's repeated denials that he had ever translated Faustus and arguing that Goethe's letter to his son was based on misinformation from a third party. Coleridge's fellow Romantic Percy Bysshe Shelley produced admired fragments of a translation, first publishing Part One, Scene II in The Liberal magazine in 1822, with "Scene I '' (in the original, the "Prologue in Heaven '') being published in the first edition of his Posthumous Poems by Mary Shelley in 1824.
In 1828, at the age of twenty, Gérard de Nerval published a French translation of Goethe 's Faust, which given his young age and the complexity of the text is regarded as a remarkable feat, all the more so considering the praise it received from the German author himself.
In 1870 -- 71, Bayard Taylor published an English translation in the original metres.
In 1887 the Irish dramatist W.G. Wills loosely adapted the first part of Faust for a production starring Henry Irving as Mephistopheles at the Lyceum Theatre, London.
Calvin Thomas published translations of Part 1 in 1892 and Part 2 in 1897.
Philosopher Walter Kaufmann was also known for an English translation of Faust, presenting Part One in its entirety, with selections from Part Two and extensive summaries of the omitted scenes. Kaufmann's version preserves Goethe's metres and rhyme schemes, but he objected to translating all of Part Two into English, believing that "To let Goethe speak English is one thing; to transpose into English his attempt to imitate Greek poetry in German is another. ''
In August 1950, Boris Pasternak 's Russian language translation of the first part led him to be attacked in the Soviet literary journal Novy Mir. The attack read in part,
... the translator clearly distorts Goethe 's ideas... in order to defend the reactionary theory of ' pure art '... he introduces an aesthetic and individualist flavor into the text... attributes a reactionary idea to Goethe... distorts the social and philosophical meaning...
In response, Pasternak wrote to the exiled daughter of Marina Tsvetaeva,
There has been much concern over an article in Novy Mir denouncing my Faust on the grounds that the gods, angels, witches, spirits, the madness of poor Gretchen, and everything ' irrational ' has been rendered much too well, while Goethe 's ' progressive ' ideas (what are they?) have been glossed over. But I have a contract to do the second part as well! I do n't know how it will all end. Fortunately, it seems that the article wo n't have any practical effect.
In 1976, Farrar, Straus and Giroux published Randall Jarrell 's translation of Faust, Part One posthumously.
Martin Greenberg 's translations have been credited with capturing the poetic feel of the original.
|
sex and the city 2 carrie's assistant | Sex and the City (film) - wikipedia
Sex and the City (advertised as Sex and the City: The Movie) is a 2008 American romantic comedy film written and directed by Michael Patrick King in his feature film directorial debut, and a sequel to the 1998 - 2004 HBO comedy series of the same name (itself based on the book of the same name by Candace Bushnell) about four female friends: Carrie Bradshaw (Sarah Jessica Parker), Samantha Jones (Kim Cattrall), Charlotte York Goldenblatt (Kristin Davis), and Miranda Hobbes (Cynthia Nixon), dealing with their lives as single women in New York City. The series often portrayed frank discussions about romance and sexuality.
The world premiere took place at Leicester Square, London, on May 15, 2008, and premiered on May 28, 2008, in the United Kingdom and May 30, 2008, in the United States. Despite mixed reviews from critics, calling the film an extended episode of the series, it was a commercial success, grossing over $415 million worldwide from a $65 million budget.
A sequel, Sex and the City 2, was released in 2010 to similar commercial success but an even poorer critical reception. In December 2016 it was announced that a script for a third and final film, Sex and the City 3, had been approved; that same day, the main cast signed on. However, no filming start date has been announced.
Carrie walks through the streets of New York City thinking about events that have happened to her and her friends during Sex and the City. Charlotte is now happily married to Harry Goldenblatt, but she had a hard time getting pregnant - so they adopted a Chinese girl named Lily; Miranda has settled down in Brooklyn with Steve (David Eigenberg) to raise their son Brady together; and Samantha has relocated her business to Los Angeles to be close to Smith (Jason Lewis), who is now a superstar, although she misses her old life and takes every opportunity to fly East to be with Carrie, Miranda and Charlotte.
Carrie herself is now in a relationship with Big (Chris Noth), and they are viewing apartments with plans to move in together. Carrie falls in love with a penthouse far from their price range. Big immediately agrees to pay for it. Carrie offers to sell her own apartment, although she also voices her fear that she would have no legal rights to their home in case they separate, as they are not married. To quell her fears, Big suggests that they marry. Carrie announces the news to her friends. Charlotte and Miranda are happy at the news, but Samantha - as Carrie points out dryly - sounds more excited at the thought of Carrie "finally getting Botox ''. Charlotte suggests her longtime gay friend, Anthony Marantino, as the pushy wedding planner.
Miranda confesses to her friends that she has been so busy she has n't had sex with Steve in six months. When Steve confesses he has cheated on her, Miranda is devastated and immediately separates from him. At Carrie and Big 's rehearsal dinner, Steve tries to reconcile with Miranda, but she rebuffs him. Still upset with Steve, she tells Big bluntly that marriage ruins everything.
On the wedding day, (partially due to Miranda 's words at the rehearsal dinner) Big is too fearful to go through with the ceremony. Carrie, devastated, flees the wedding. Samantha stays behind to clear the guests. Big changes his mind and catches up with Carrie in an attempt to reconcile in the middle of a one - way street. Carrie furiously attacks him with her bouquet while he earns scathing looks from Miranda and Charlotte, as well as from the crowds of New Yorkers watching the scene unfold. To console Carrie (who is depressed, and at the beginning of the holiday does n't eat anything for two days), the four women take the honeymoon that Carrie had booked to Mexico, where they de-stress and collect themselves.
Upon returning to New York, Carrie hires an assistant, Louise (Jennifer Hudson), to help her manage her life. Miranda eventually confesses to Carrie about what happened at the rehearsal dinner, and the two briefly fall out as Carrie storms out of a restaurant on Valentine 's Day. After reflecting on the argument she had with Carrie, Miranda agrees to attend couples counseling with Steve, and they are eventually able to reconcile. Samantha finds her five - year - old relationship passionless, and begins over-eating to keep from cheating on Smith with a gorgeous neighbour, Dante. She eventually decides to say farewell to Smith and moves back to New York. Around the same time, Louise quits her job as Carrie 's assistant to get married and move back permanently to her hometown of St Louis.
Charlotte learns she is pregnant, and for most of her pregnancy is fearful that something might happen to her baby, so she stops her regular running around Central Park. Carrie puts her fear to rest by telling her that, since she already soiled herself in Mexico, her bad luck is finished. Later, Charlotte has a surprise encounter with Big that leaves her so outraged that her water breaks. Big takes her to the hospital and waits until baby Rose is born, hoping to see Carrie. Harry passes on the message that Big would like her to call him, and that he has written to her frequently, but never received a reply. Carrie searches her correspondence and finds in Louise 's personal assistant file that he has sent her dozens of letters copied from the book she read him before their wedding, culminating with one of his own where he apologizes for screwing up and promises to love her forever.
Carrie travels to the exquisite penthouse Big had bought for them to collect a pair of brand new Manolo Blahnik shoes (that later become one of the icons of the movie) that she had left there. She finds Big in the walk - in closet he had built for her, and the moment she sees him, her anger at his betrayal dissipates. They share a passionate kiss, and Big proposes to Carrie properly, using one of her diamond - encrusted shoes in place of a ring. They later marry alone, in a simple wedding in New York City Hall, with Carrie wearing the simple suit that she had intended to wear before being enticed away by the Vivienne Westwood dress that she had modelled in the bridal issue of Vogue. Miranda, Samantha, and Charlotte turn up to surprise Carrie, having been called by Big. The film ends with the four women sipping cosmopolitans, celebrating Samantha 's fiftieth birthday, with Carrie making a toast to the next fifty.
At the end of Sex and the City 's run in February 2004, there were indications of a movie being considered following the series. HBO announced that Michael Patrick King was working on a possible script for the movie which he would direct. Later that year, Kim Cattrall declined to work on the project citing reasons that the script and the start date were overly prolonged and she decided to take other offers at hand. As a result, the immediate follow - up ideas for the movie were dropped.
It was in mid-2007 that plans for making the movie were announced again, reportedly after Cattrall's conditions had been accepted along with a future HBO series. In May 2007 the project was halted after HBO decided it was no longer in a position to finance the movie on its own. The project was pitched within the Time Warner family (owners of HBO) and was picked up by sister company New Line Cinema.
In February 2009 it was officially announced that a sequel would be made including all four actresses and writer - director Michael Patrick King.
The film was shot mainly in New York between September and December 2007. The locations included a number of places around Manhattan, and certain portions were shot at Steiner Studios and Silvercup Studios. The shooting was continually interrupted by paparazzi and onlookers, with security and police employed to control the crowds. Efforts were taken to keep the film's plot secret, including the shooting of multiple endings. As a defense strategy, scenes shot in public or in the presence of large numbers of extras were termed "dream sequences '' by Ryan Jonathan Healy and the main cast.
As in the TV series, fashion played a significant role in the plot and production of the movie. Over 300 ensembles were used over the course of the film. Patricia Field, who created the costume designs for the series, also undertook the job for the film. Field has stated that she was initially ambivalent about doing the film, for monetary and creative reasons. Field rose to fame particularly after designing for the series from 1998 to 2004, in which she popularized the concept of mixing designer clothes with day-to-day fashion.
While dressing the characters for the film, Field decided to steer clear of the latest fashion trends when defining the characters and instead focused on the evolution of each individual character, and of the actor portraying her, over the previous four years. While Samantha's dressing was influenced by the American TV soap opera Dynasty (see Nolan Miller), Jackie Kennedy was the inspiration for Charlotte's clothes. Miranda, according to Field, has evolved the most from the series in terms of fashion, influenced significantly by the development of actress Cynthia Nixon in the intervening years.
The soundtrack was released May 27, 2008, by New Line Records. The soundtrack includes new songs by Fergie and Jennifer Hudson (who plays Carrie 's assistant in the film).
The film's soundtrack debuted at # 2 on the Billboard 200, the highest debut for a multi-artist theatrical film soundtrack since 2005's Get Rich or Die Tryin', and at # 6 on the UK Albums Chart, where it has sold more than 55,000 copies to date.
A second soundtrack, Sex and the City: Volume 2, was released on September 23, 2008, coinciding with the film 's DVD release, featuring the British singers Estelle, Craig David, Mutya Buena and Amy Winehouse. It also featured Janet Jackson, Ciara, and Elijah Kelley.
In December 2008, the orchestral score for the film was released, Sex And The City - The Score, containing 18 tracks of original score composed, co-orchestrated, and conducted by Aaron Zigman. Whilst the order of the tracks does not correspond directly to the order that the score is heard in the film, the score soundtrack contains almost every single piece of score that is present in the film.
The film 's international premiere took place on Monday, May 12, 2008, at Odeon West End in London 's Leicester Square to the audience of 1700. It was next premiered at Sony Center at Potsdamer Platz in Berlin on May 15. The film had its New York City premiere at Radio City Music Hall on Tuesday, May 27, 2008.
The film was a commercial success. Opening in 3,285 theaters, the film made $26.93 million in the US and Canada on its first day. The three - day opening weekend total was $57,038,404, aggregating $17,363 per theater. The film recorded the biggest opening ever for an R - rated comedy and for a romantic comedy, and also for a film starring all women. As of March 2010, the film had grossed $152,647,258 at the US and Canadian box office, and $262,605,528 in other markets, bringing the worldwide total gross revenue to $415,252,786, making it the highest - grossing romantic comedy of 2008.
Sex and the City received mixed reviews from critics. On Rotten Tomatoes, the film has a rating of 49 %, based on 176 reviews, with the site 's critical consensus reading, "Sex and the City loses steam in the transition to the big screen, but will still thrill fans of the show. '' Metacritic gave the film a normalized average score of 53 out of 100, based on 38 critics, indicating "mixed or average reviews ''.
Brian Lowry of Variety said the film "... feels a trifle half - hearted '', while Carina Chocano of the Los Angeles Times stated "the film tackles weighty issues with grace but is still very funny ''. She praised Michael Patrick King 's work saying very few movies "are willing to go to such dark places while remaining a comedy in the Shakespearean sense ''. Colin Bertram of the New York Daily News dubbed the film a "great reunion '', and was happy with the return of "The ' Oh, my God, they did not just do that! ' moments, the nudity, the swearing, the unabashed love of human frailty and downright wackiness ''. The Chicago Tribune 's Jessica Reeves described it as "Witty, effervescent and unexpectedly thoughtful. '' Michael Rechtshaffen at The Hollywood Reporter praised the performances of the four leading ladies and said the film kept the essence of the series, but resembled a super-sized episode.
Manohla Dargis of The New York Times found the film "a vulgar, shrill, deeply shallow -- and, at 2 hours and 22 turgid minutes, overlong -- addendum to a show '', while The Daily Telegraph 's Sukhdev Sandhu panned the film saying "the ladies have become frozen, Spice Girls - style types - angsty, neurotic, predatory, princess - rather than individuals who might evolve or surprise us ''. Rick Groen of The Globe and Mail slammed the film commenting on lack of script and adding that the characters "do n't perform so much as parade, fixed in their roles as semi-animated clothes hangers on a cinematic runway ''. He gave the film zero stars out of four. Anthony Lane, a film critic for The New Yorker, called the film a "superannuated fantasy posing as a slice of modern life ''; he noted that "almost sixty years after All About Eve, which also featured four major female roles, there is a deep sadness in the sight of Carrie and friends defining themselves not as Bette Davis, Anne Baxter, Celeste Holm, and Thelma Ritter did -- by their talents, their hats, and the swordplay of their wits -- but purely by their ability to snare and keep a man... All the film lacks is a subtitle: "The Lying, the Bitch, and the Wardrobe. ''
Ramin Setoodeh of Newsweek speculated that some of the criticism for the film is derived possibly from sexism: "when you listen to men talk about it (and this is coming from the perspective of a male writer), a strange thing happens. The talk turns hateful. Angry. Vengeful. Annoyed... Is this just poor sportsmanship? I ca n't help but wonder -- cue the Carrie Bradshaw voiceover here -- if it 's not a case of ' Sexism in the City. ' Men hated the movie before it even opened... Movie critics, an overwhelmingly male demographic, gave it such a nasty tongue lashing you would have thought they were talking about an ex-girlfriend... The movie might not be Citizen Kane -- which, for the record, is a dude flick -- but it 's incredibly sweet and touching. ''
The movie featured on worst of 2008 lists including that of The Times, Mark Kermode, The New York Observer, The Tart and The Daily Telegraph.
New Line Home Entertainment released a DVD and Blu - ray release of Sex and the City: The Movie on September 23, 2008. There are two versions of the film released in the US on home video. There is a standard, single disc theatrical cut (the version seen in theaters) which comes in fullscreen or widescreen (in separate editions). Both discs are the same, except for the movie presentation. The only features are an audio commentary, deleted scenes, and a digital copy of the film. Also released on the same day as the standard edition is the two - disc special edition, which adds six minutes of footage to the film, along with the commentary from the standard edition DVD and a second disc that contains bonus features, as well as a digital copy of the widescreen theatrical version of the film. The only version of the film released on Blu - Ray is the two - disc extended cut, which is identical to the DVD version of the extended cut.
On December 9, 2008, New Line Home Entertainment released a third edition of Sex and the City: The Movie. This edition is a 4 - disc set entitled Sex and the City: The Movie (The Wedding Collection). The 4 - disc set features the previously released extended cut of the film on the first disc, the second disc has the bonus features from the extended cut and three additional featurettes, the third disc holds even more special features, and the fourth is a music CD with songs inspired by the movie, including the alternative mix of Fergie 's "Labels or Love '' from the beginning of the film. The set also comes with an exclusive hardcover book, featuring photos and quotes from the movie, and a numbered certificate of authenticity in a pink padded box.
A fourth edition was also released in Australia. This set contained the two discs from the Sex and the City: The Movie Special Edition and a bonus 'Sex and the City inspired' clutch bag, black in colour and made of a tile or snakeskin material.
The DVD reached # 1 on the UK DVD chart and is the fastest-selling DVD release of 2008 in the UK, selling over 920,000 copies in one week, far ahead of the 700,000 copies sold by Ratatouille, which had been the best-selling DVD of 2008 in the UK prior to Sex and the City's release. The record has since been beaten by Mamma Mia!
Sex and the City 2 was released in cinemas on May 27, 2010, in the United States and May 28, 2010, in the United Kingdom. It was co-written, produced and directed by Michael Patrick King. The DVD was available for purchase in the United Kingdom on November 29, 2010. The film stars Sarah Jessica Parker, Kim Cattrall, Kristin Davis, Cynthia Nixon, and Chris Noth, who reprised their roles from the previous film and television series. It also features cameos from Liza Minnelli, Miley Cyrus, Tim Gunn, Ron White, Omid Djalili, and Penélope Cruz, as well as Broadway actors Norm Lewis, Kelli O'Hara, and Ryan Silverman.
It was announced in December 2016 that a script for the third and final film had been approved. That same day, the main cast signed on; however, no filming start date has been announced.
|
when was the sound of music musical written | The Sound of Music - wikipedia
The Sound of Music is a musical with music by Richard Rodgers, lyrics by Oscar Hammerstein II and a book by Howard Lindsay and Russel Crouse. It is based on the memoir of Maria von Trapp, The Story of the Trapp Family Singers. Set in Austria on the eve of the Anschluss in 1938, the musical tells the story of Maria, who takes a job as governess to a large family while she decides whether to become a nun. She falls in love with the children, and eventually their widowed father, Captain von Trapp. He is ordered to accept a commission in the German navy, but he opposes the Nazis. He and Maria decide on a plan to flee Austria with the children. Many songs from the musical have become standards, such as "Edelweiss '', "My Favorite Things '', "Climb Ev'ry Mountain '', "Do - Re-Mi '', and the title song "The Sound of Music ''.
The original Broadway production, starring Mary Martin and Theodore Bikel, opened in 1959 and won five Tony Awards, including Best Musical, out of nine nominations. The first London production opened at the Palace Theatre in 1961. The show has enjoyed numerous productions and revivals since then. It was adapted as a 1965 film musical starring Julie Andrews and Christopher Plummer, which won five Academy Awards. The Sound of Music was the last musical written by Rodgers and Hammerstein; Oscar Hammerstein died of cancer nine months after the Broadway premiere.
After viewing The Trapp Family, a 1956 West German film about the von Trapp family, and its 1958 sequel (Die Trapp - Familie in Amerika), stage director Vincent J. Donehue thought that the project would be perfect for his friend Mary Martin; Broadway producers Leland Hayward and Richard Halliday (Martin 's husband) agreed. The producers originally envisioned a non-musical play that would be written by Lindsay and Crouse and that would feature songs from the repertoire of the Trapp Family Singers. Then they decided to add an original song or two, perhaps by Rodgers and Hammerstein. But it was soon agreed that the project should feature all new songs and be a musical rather than a play.
Details of the history of the von Trapp family were altered for the musical. The real Georg von Trapp did live with his family in a villa in Aigen, a suburb of Salzburg. He wrote to the Nonnberg Abbey in 1926 asking for a nun to help tutor his sick daughter, and the Mother Abbess sent Maria. His wife had died in 1922. The real Maria and Georg married at the Nonnberg Abbey in 1927. Lindsay and Crouse altered the story so that Maria was governess to all of the children, whose names and ages were changed, as was Maria's original surname (the show used "Rainer '' instead of "Kutschera ''). The von Trapps spent some years in Austria after Maria and the Captain married, before he was offered a commission in Germany's navy. Since von Trapp opposed the Nazis by that time, the family left Austria after the Anschluss, going by train to Italy and then traveling on to London and the United States. To make the story more dramatic, Lindsay and Crouse had the family, soon after Maria's and the Captain's wedding, escape over the mountains to Switzerland on foot.
In Salzburg, Austria, just before World War II, nuns from Nonnberg Abbey sing the Dixit Dominus. One of the postulants, Maria Rainer, is on the nearby mountainside, regretting leaving the beautiful hills ("The Sound of Music '') where she was brought up. She returns late. The Mother Abbess and the other nuns consider what to do about her ("Maria ''). Maria explains her lateness, saying she was raised on that mountain, and apologizes for singing in the garden without permission. The Mother Abbess joins her in song ("My Favorite Things ''). The Mother Abbess tells her that she should spend some time outside the abbey to decide whether she is ready for the monastic life. She will act as the governess to the seven children of a widower, Austro - Hungarian Navy submarine Captain Georg von Trapp.
Maria arrives at the villa of Captain von Trapp. He explains her duties and summons the children with a boatswain 's call. They march in, clad in uniforms. He teaches her their individual signals on the call, but she openly disapproves of this militaristic approach. Alone with them, she breaks through their wariness and teaches them the basics of music ("Do - Re-Mi '').
Rolf, a young messenger, delivers a telegram and then meets with the oldest child, Liesl, outside the villa. He claims he knows what is right for her because he is a year older than she ("Sixteen Going on Seventeen ''). They kiss, and he runs off, leaving her squealing with joy. Meanwhile, the housekeeper, Frau Schmidt, gives Maria material to make new clothes, as Maria had given all her possessions to the poor. Maria sees Liesl slipping in through the window, wet from a sudden thunderstorm, but agrees to keep her secret. The other children are frightened by the storm. Maria sings "The Lonely Goatherd '' to distract them.
Captain von Trapp arrives a month later from Vienna with Baroness Elsa Schräder and Max Detweiler. Elsa tells Max that something is preventing the Captain from marrying her. He opines that only poor people have the time for great romances ("How Can Love Survive ''). Rolf enters, looking for Liesl, and greets them with "Heil ''. The Captain orders him away, saying that he is Austrian, not German. Maria and the children leapfrog in, wearing play - clothes that she made from the old drapes in her room. Infuriated, the Captain sends them off to change. She tells him that they need him to love them, and he angrily orders her back to the abbey. As she apologizes, they hear the children singing "The Sound of Music '', which she had taught them, to welcome Elsa Schräder. He joins in and embraces them. Alone with Maria, he asks her to stay, thanking her for bringing music back into his house. Elsa is suspicious of her until she explains that she will be returning to the abbey in September.
The Captain gives a party to introduce Elsa, and guests argue over the Anschluss. Kurt asks Maria to teach him to dance the Ländler. When he fails to negotiate a complicated figure, the Captain steps in to demonstrate. He and Maria dance until they come face - to - face; and she breaks away, embarrassed and confused. Discussing the expected marriage between Elsa and the Captain, Brigitta tells Maria that she thinks Maria and the Captain are really in love with each other. Elsa asks the Captain to allow the children to say goodnight to the guests with a song, "So Long, Farewell ''. Max is amazed at their talent and wants them for the Kaltzberg Festival, which he is organizing. The guests leave for the dining room, and Maria slips out the front door with her luggage.
At the abbey, Maria says that she is ready to take her monastic vows; but the Mother Abbess realizes that she is running away from her feelings. She tells her to face the Captain and discover if they love each other, and tells her to search for and find the life she was meant to live ("Climb Ev'ry Mountain '').
Max teaches the children how to sing on stage. When the Captain tries to lead them, they complain that he is not doing it as Maria did. He tells them that he has asked Elsa to marry him. They try to cheer themselves up by singing "My Favorite Things '' but are unsuccessful until they hear Maria singing on her way to rejoin them. Learning of the wedding plans, she decides to stay only until the Captain can arrange for another governess. Max and Elsa argue with him about the imminent Anschluss, trying to convince him that it is inevitable ("No Way to Stop It ''). When he refuses to compromise, Elsa breaks off the engagement. Alone, the Captain and Maria finally admit their love, desiring only to be "An Ordinary Couple ''. As they marry, the nuns reprise "Maria '' against the wedding processional.
During the honeymoon, Max prepares the children to perform at the Kaltzberg Festival. Herr Zeller, the Gauleiter, demands to know why they are not flying the flag of the Third Reich now that the Anschluss has occurred. The Captain and Maria return early from their honeymoon before the Festival. In view of developments, he refuses to allow the children to sing. Max argues that they would sing for Austria, but the Captain points out that it no longer exists. Maria and Liesl discuss romantic love; Maria predicts that in a few years Liesl will be married ("Sixteen Going on Seventeen (Reprise) ''). Rolf enters with a telegram that offers the Captain a commission in the German Navy, and Liesl is upset to discover that Rolf is now a committed Nazi. The Captain consults Maria and decides that they must secretly flee Austria. German Admiral von Schreiber arrives to find out why Captain von Trapp has not answered the telegram. He explains that the German Navy holds him in high regard, offers him the commission, and tells him to report immediately to Bremerhaven to assume command. Maria says that he can not leave immediately, as they are all singing in the Festival concert; and the Admiral agrees to wait.
At the concert, after the von Trapps sing an elaborate reprise of "Do - Re-Mi '', Max brings out the Captain 's guitar. Captain von Trapp sings "Edelweiss '', as a goodbye to his homeland, while using Austria 's national flower as a symbol to declare his loyalty to the country. Max asks for an encore and announces that this is the von Trapp family 's last chance to sing together, as the honor guard waits to escort the Captain to his new command. While the judges decide on the prizes, the von Trapps sing "So Long, Farewell '', leaving the stage in small groups. Max then announces the runners - up, stalling as much as possible. When he announces that the first prize goes to the von Trapps and they do not appear, the Nazis start a search. The family hides at the Abbey, and Sister Margaretta tells them that the borders have been closed. Rolf comes upon them and calls his lieutenant, but after seeing Liesl he changes his mind and tells him they are n't there. The Nazis leave, and the von Trapps flee over the Alps as the nuns reprise "Climb Ev'ry Mountain ''.
Sources: IBDB and Guidetomusicaltheatre.com
The Sound of Music opened on Broadway at the Lunt - Fontanne Theatre on November 16, 1959, moved to the Mark Hellinger Theatre on November 6, 1962, and closed on June 15, 1963, after 1,443 performances. The director was Vincent J. Donehue, and the choreographer was Joe Layton. The original cast included Mary Martin (at age 46) as Maria, Theodore Bikel as Captain Georg von Trapp, Patricia Neway as Mother Abbess, Kurt Kasznar as Max Detweiler, Marion Marlowe as Elsa Schräder, Brian Davies as Rolf and Lauri Peters as Liesl. Sopranos Patricia Brooks and June Card were ensemble members in the original production. The show tied for the Tony Award for Best Musical with Fiorello!. Other awards included Martin for Best Actress in a Musical, Neway for Best Featured Actress, Best Scenic Design (Oliver Smith) and Best Conductor And Musical Director (Frederick Dvonch). Bikel and Kasznar were nominated for acting awards, and Donehue was nominated for his direction. The entire children 's cast was nominated for Best Featured Actress category as a single nominee, even though two of the children were boys.
Martha Wright replaced Martin in the role of Maria on Broadway in October 1961, followed by Karen Gantz in July 1962, Jeannie Carson in August 1962 and Nancy Dussault in September 1962. Jon Voight, who eventually married co-star Lauri Peters, was a replacement for Rolf. The national tour starred Florence Henderson as Maria and Beatrice Krebs as Mother Abbess. It opened at the Grand Riviera Theater, Detroit, on February 27, 1961, and closed November 23, 1963, at the O'Keefe Centre, Toronto. Henderson was succeeded by Barbara Meister in June 1962. Theodore Bikel was not satisfied playing the role of the Captain, because of the role 's limited singing, and Bikel did not like to play the same role over and over again. In his autobiography, he writes: "I promised myself then that if I could afford it, I would never do a run as long as that again. '' The original Broadway cast album sold three million copies.
The musical premiered in London 's West End at the Palace Theatre on May 18, 1961, and ran for 2,385 performances. It was directed by Jerome Whyte and used the original New York choreography, supervised by Joe Layton, and the original sets designed by Oliver Smith. The cast included Jean Bayless as Maria, followed by Sonia Rees, Roger Dann as Captain von Trapp, Constance Shacklock as Mother Abbess, Eunice Gayson as Elsa Schrader, Harold Kasket as Max Detweiler, Barbara Brown as Liesl, Nicholas Bennett as Rolf and Olive Gilbert as Sister Margaretta.
In 1981, at producer Ross Taylor 's urging, Petula Clark agreed to star in a revival of the show at the Apollo Victoria Theatre in London 's West End. Michael Jayston played Captain von Trapp, Honor Blackman was the Baroness and June Bronhill the Mother Abbess. Other notable cast members included Helen Anker, John Bennett and Martina Grant. Despite her misgivings that, at age 49, she was too old to play the role convincingly, Clark opened to unanimous rave reviews and the largest advance sale in the history of British theatre at that time. Maria von Trapp, who attended the opening night performance, described Clark as "the best '' Maria ever. Clark extended her initial six - month contract to thirteen months. Playing to 101 percent of seating capacity, the show set the highest attendance figure for a single week (October 26 -- 31, 1981) of any British musical production in history (as recorded in The Guinness Book of Theatre). It was the first stage production to incorporate the two additional songs ("Something Good '' and "I Have Confidence '') that Richard Rodgers composed for the film version. "My Favorite Things '' had a similar context to the film version, while the short verse "A Bell is No Bell '' was extended into a full - length song for Maria and the Mother Abbess. "The Lonely Goatherd '' was set in a new scene at a village fair.
The cast recording of this production was the first to be recorded digitally. It was released on CD for the first time in 2010 by the UK label Pet Sounds and included two bonus tracks from the original single issued by Epic to promote the production.
Director Susan H. Schulman staged the first Broadway revival of The Sound of Music, with Rebecca Luker as Maria and Michael Siberry as Captain von Trapp. It also featured Patti Cohenour as Mother Abbess, Jan Maxwell as Elsa Schrader, Fred Applegate as Max Detweiler, Dashiell Eaves as Rolf, Patricia Conolly as Frau Schmidt and Laura Benanti, in her Broadway debut, as Luker 's understudy. Later, Luker and Siberry were replaced by Richard Chamberlain as the Captain and Benanti as Maria. Lou Taylor Pucci made his Broadway debut as the understudy for Kurt von Trapp. The production opened on March 12, 1998, at the Martin Beck Theatre, and closed on June 20, 1999, after 533 performances. This production was nominated for a Tony Award for Best Revival of a Musical. It then toured in North America.
An Andrew Lloyd Webber production opened on November 15, 2006, at the London Palladium and ran until February 2009, produced by Live Nation 's David Ian and Jeremy Sams. Following failed negotiations with Hollywood star Scarlett Johansson, the role of Maria was cast through a UK talent search reality TV show called How Do You Solve a Problem like Maria? The talent show was produced by (and starred) Andrew Lloyd Webber and featured presenter / comedian Graham Norton and a judging panel of David Ian, John Barrowman and Zoe Tyler.
Connie Fisher was selected by public voting as the winner of the show. In early 2007, Fisher suffered from a heavy cold that prevented her from performing for two weeks. To prevent further disruptions, an alternate Maria, Aoife Mulholland, a fellow contestant on How Do You Solve a Problem like Maria?, played Maria on Monday evenings and Wednesday matinee performances. Simon Shepherd was originally cast as Captain von Trapp, but after two preview performances he was withdrawn from the production, and Alexander Hanson moved into the role in time for the official opening date along with Lesley Garrett as the Mother Abbess. After Garrett left, Margaret Preece took the role. The cast also featured Lauren Ward as the Baroness, Ian Gelder as Max, Sophie Bould as Liesl, and Neil McDermott as Rolf. Other notable replacements have included Simon Burke and Simon MacCorkindale as the Captain and newcomer Amy Lennox as Liesl. Summer Strallen replaced Fisher in February 2008, with Mulholland portraying Maria on Monday evenings and Wednesday matinees.
The revival received enthusiastic reviews, especially for Fisher, Preece, Bould and Garrett. A cast recording of the London Palladium cast was released. The production closed on February 21, 2009, after a run of over two years and was followed by a UK national tour, described below.
The first Australian production opened at Melbourne 's Princess Theatre in 1961 and ran for three years. The production was directed by Charles Hickman, with musical numbers staged by Ernest Parham. The cast included June Bronhill as Maria, Peter Graves as Captain von Trapp and Rosina Raisbeck as Mother Abbess. A touring company then played for years, with Vanessa Lee (Graves ' wife) in the role of Maria. The cast recording made in 1961 was the first time a major overseas production featuring Australian artists was transferred to disc.
A Puerto Rican production, performed in English, opened at the Tapia Theatre in San Juan under the direction of Pablo Cabrera in 1966. It starred Camille Carrión as María and Raúl Dávila as Captain Von Trapp, and it featured a young Johanna Rosaly as Liesl. In 1968, the production transferred to the Teatro de la Zarzuela in Madrid, Spain, where it was performed in Spanish with Carrión reprising the role of María, Alfredo Mayo as Captain Von Trapp and Roberto Rey as Max.
In 1988, the Moon Troupe of Takarazuka Revue performed the musical at the Bow Hall (Takarazuka, Hyōgo). Harukaze Hitomi and Gou Mayuka starred. A 1990 New York City Opera production, directed by Oscar Hammerstein II 's son, James, featured Debby Boone as Maria, Laurence Guittard as Captain von Trapp, and Werner Klemperer as Max. In the 1993 Stockholm production, Carola Häggkvist played Maria and Tommy Körberg played Captain von Trapp.
An Australian revival played in the Lyric Theatre, Sydney, New South Wales, from November 1999 to February 2000. Lisa McCune played Maria, John Waters was Captain von Trapp, Bert Newton was Max, Eilene Hannan was Mother Abbess, and Rachel Marley was Marta. This production was based on the 1998 Broadway revival staging. The production then toured until February 2001, in Melbourne, Brisbane, Perth and Adelaide. Rachael Beck took over as Maria in Perth and Adelaide, and Rob Guest took over as Captain von Trapp in Perth.
An Austrian production premiered in 2005 at the Volksoper Wien in German. It was directed and choreographed by Renaud Doucet. The cast included Sandra Pires as Maria, Kurt Schreibmayer and Michael Kraus as von Trapp, with Heidi Brunner as Mother Abbess. As of 2012, the production was still in the repertoire of the Volksoper with 12 -- 20 performances per season.
The Salzburg Marionette Theatre has toured extensively with their version that features the recorded voices of Broadway singers such as Christiane Noll as Maria. The tour began in Dallas, Texas, in 2007 and continued in Salzburg in 2008. The director is Richard Hamburger. In 2010, the production was given in Paris, France, with dialogue in French and the songs in English. In 2008, a Brazilian production with Kiara Sasso as Maria and Herson Capri as the Captain played in Rio de Janeiro and São Paulo, and a Dutch production was mounted with Wieneke Remmers as Maria, directed by John Yost.
Andrew Lloyd Webber, David Ian and David Mirvish presented The Sound of Music at the Princess of Wales Theatre in Toronto from 2008 to 2010. The role of Maria was chosen by the public through a television show, How Do You Solve a Problem Like Maria?, which was produced by Lloyd Webber and Ian and aired in mid-2008. Elicia MacKenzie won and played the role six times a week, while the runner - up in the TV show, Janna Polzin, played Maria twice a week. Captain von Trapp was played by Burke Moses. The show ran for more than 500 performances. It was Toronto 's longest running revival ever.
A UK tour began in 2009 and visited more than two dozen cities before ending in 2011. The original cast included Connie Fisher as Maria, Michael Praed as Captain von Trapp and Margaret Preece as the Mother Abbess. Kirsty Malpass was the alternate Maria. Jason Donovan assumed the role of Captain Von Trapp, and Verity Rushworth took over as Maria, in early 2011. Lesley Garrett reprised her role as Mother Abbess for the tour 's final engagement in Wimbledon in October 2011.
A production ran at the Ópera - Citi theater in Buenos Aires, Argentina in 2011. The cast included Laura Conforte as Maria and Diego Ramos as Captain Von Trapp. A Spanish national tour began in November 2011 at the Auditorio de Tenerife in Santa Cruz de Tenerife in the Canary Islands. The tour visited 29 Spanish cities, spending one year in Madrid 's Gran Vía at the Teatro Coliseum, and one season at the Tívoli Theatre in Barcelona. It was directed by Jaime Azpilicueta and starred Silvia Luchetti as Maria and Carlos J. Benito as Captain Von Trapp.
A production was mounted at the Open Air Theatre, Regent 's Park from July to September 2013. It starred Charlotte Wakefield as Maria, with Michael Xavier as Captain von Trapp and Caroline Keiff as Elsa. It received enthusiastic reviews and became the highest - grossing production ever at the theatre. In 2014, the show was nominated for Best Musical Revival at the Laurence Olivier Awards and Wakefield was nominated for Best Actress in a Musical.
A brief South Korean production played in 2014, as did a South African production at the Artscape in Cape Town and at the Teatro at Montecasino, based on Lloyd Webber and Ian's London Palladium production. The same year, a Spanish language translation opened at the Teatro de la Universidad in San Juan, under the direction of Edgar García. It starred Lourdes Robles as Maria and Braulio Castillo as Captain Von Trapp, with Dagmar as Elsa. A production (in Thai: มนต์ รัก เพลง สวรรค์) ran at the Muangthai Ratchadalai Theatre, Bangkok, Thailand, in April 2015 in the Thai language. The production replaced the song "An Ordinary Couple '' with "Something Good ''.
A North American tour, directed by Jack O'Brien and choreographed by Danny Mefford, began at the Ahmanson Theatre in Los Angeles in September 2015. The tour is scheduled to run until at least July 2017. Kerstin Anderson plays Maria, with Ben Davis as Capt. von Trapp and Ashley Brown as the Mother Abbess. The production has received warm reviews.
A UK tour produced by Bill Kenwright began in 2015 and toured into 2016. It was directed by Martin Connor and starred Lucy O'Byrne as Maria. A 2016 Australian tour of the Lloyd Webber production, directed by Sams, included stops in Sydney, Brisbane, Melbourne and Adelaide. The cast included Cameron Daddo as Captain Von Trapp, Marina Prior as Baroness Schraeder and Lorraine Bayly as Frau Schmidt. The choreographer was Arlene Phillips.
On March 2, 1965, 20th Century Fox released a film adaption of the musical starring Julie Andrews as Maria Rainer and Christopher Plummer as Captain Georg von Trapp. It was produced and directed by Robert Wise with the screenplay adaption written by Ernest Lehman. Two songs were written by Rodgers specifically for the film, "I Have Confidence '' and "Something Good ''. The film won five Oscars at the 38th Academy Awards, including Best Picture.
A live televised production of the musical aired twice in December 2013 on NBC. It was directed by Beth McCarthy - Miller and Rob Ashford. Carrie Underwood starred as Maria Rainer, with Stephen Moyer as Captain von Trapp, Christian Borle as Max, Laura Benanti as Elsa, and Audra McDonald as the Mother Abbess. The production was released on DVD the same month.
A new version of the musical was broadcast live on ITV in the UK on December 20, 2015. It starred Kara Tointon as Maria, Julian Ovenden as Captain von Trapp, Katherine Kelly as Baroness Schraeder and Alexander Armstrong as Max.
Most reviews of the original Broadway production were favorable. Richard Watts, Jr. of the New York Post stated that the show had "strangely gentle charm that is wonderfully endearing. The Sound of Music strives for nothing in the way of smash effects, substituting instead a kind of gracious and unpretentious simplicity. '' The New York World - Telegram and Sun pronounced The Sound of Music "the loveliest musical imaginable. It places Rodgers and Hammerstein back in top form as melodist and lyricist. The Lindsay - Crouse dialogue is vibrant and amusing in a plot that rises to genuine excitement. '' The New York Journal American 's review opined that The Sound of Music is "the most mature product of the team... it seemed to me to be the full ripening of these two extraordinary talents ''.
Brooks Atkinson of The New York Times gave a mixed assessment. He praised Mary Martin 's performance, saying "she still has the same common touch... same sharp features, goodwill, and glowing personality that makes music sound intimate and familiar '' and stated that "the best of the Sound of Music is Rodgers and Hammerstein in good form ''. However, he said, the libretto "has the hackneyed look of the musical theatre replaced with Oklahoma! in 1943. It is disappointing to see the American musical stage succumbing to the clichés of operetta. '' Walter Kerr 's review in the New York Herald Tribune was unfavorable: "Before The Sound of Music is halfway through its promising chores it becomes not only too sweet for words but almost too sweet for music '', stating that the "evening suffer (s) from little children ''.
Columbia Masterworks recorded the original Broadway cast album a week after the show 's 1959 opening. The album was the label 's first deluxe package in a gatefold jacket, priced $1 higher than previous cast albums. It was # 1 on Billboard 's best - selling albums chart for 16 weeks in 1960. It was released on CD from Sony in the Columbia Broadway Masterworks series. In 1959, singer Patti Page recorded the title song from the show for Mercury Records on the day that the musical opened on Broadway. Since it was recorded a week before the original Broadway cast album, Page was the first artist to record any song from the musical. She featured the song on her TV show, The Patti Page Olds Show, helping to popularize the musical. The 1960 London production was recorded by EMI and was issued on CD on the Broadway Angel Label.
The 1965 film soundtrack was released by RCA Victor and is one of the most successful soundtrack albums in history, having sold over 20 million copies worldwide. Recent CD editions incorporate musical material from the film that would not fit on the original LP. The label has also issued the soundtrack in German, Italian, Spanish and French editions. RCA Victor also released an album of the 1998 Broadway revival produced by Hallmark Entertainment and featuring the full revival cast, including Rebecca Luker, Michael Siberry, Jan Maxwell and Fred Applegate. The Telarc label made a studio cast recording of The Sound of Music, with the Cincinnati Pops Orchestra conducted by Erich Kunzel (1987). The lead roles went to opera stars: Frederica von Stade as Maria, Håkan Hagegård as Captain von Trapp, and Eileen Farrell as the Mother Abbess. The recording "includes both the two new songs written for the film version and the three Broadway songs they replace, as well as a previously unrecorded verse of "An Ordinary Couple '' ". The 2006 London revival was recorded and has been released on the Decca Broadway label. There have been numerous studio cast albums and foreign cast albums issued, though many have only received regional distribution. According to the cast album database, there are 62 recordings of the score that have been issued over the years.
The soundtrack from the 2013 NBC television production starring Carrie Underwood and Stephen Moyer was released on CD and digital download in December 2013 on the Sony Masterworks label. Also featured on the album are Audra McDonald, Laura Benanti and Christian Borle.
|
4. state the various economies of scale in production and their importance | Economies of scale - wikipedia
In microeconomics, economies of scale are the cost advantages that enterprises obtain due to their scale of operation (typically measured by amount of output produced), with cost per unit of output decreasing with increasing scale.
Economies of scale apply to a variety of organizational and business situations and at various levels, such as a business or manufacturing unit, plant or an entire enterprise. When average costs start falling as output increases, then economies of scale are occurring. Some economies of scale, such as capital cost of manufacturing facilities and friction loss of transportation and industrial equipment, have a physical or engineering basis.
Another source of scale economies is the possibility of purchasing inputs at a lower per - unit cost when they are purchased in large quantities.
The economic concept dates back to Adam Smith and the idea of obtaining larger production returns through the use of division of labor. Diseconomies of scale are the opposite.
Economies of scale often have limits, such as passing the optimum design point where costs per additional unit begin to increase. Common limits include exceeding the nearby raw material supply, such as wood in the lumber, pulp and paper industry. A common limit for low cost per unit weight commodities is saturating the regional market, thus having to ship product uneconomical distances. Other limits include using energy less efficiently or having a higher defect rate.
Large producers are usually efficient at long runs of a product grade (a commodity) and find it costly to switch grades frequently. They will therefore avoid specialty grades even though they have higher margins. Often smaller (usually older) manufacturing facilities remain viable by changing from commodity grade production to specialty products.
The simple meaning of economies of scale is doing things more efficiently with increasing size or speed of operation. Economies of scale often turn on the distinction between fixed costs, which do not vary with output, and variable costs, which change with the amount of output. In wholesale and retail distribution, increasing the speed of operations, such as order fulfillment, lowers the cost of both fixed and working capital. Other common sources of economies of scale are purchasing (bulk buying of materials through long-term contracts), managerial (increasing the specialization of managers), financial (obtaining lower-interest charges when borrowing from banks and having access to a greater range of financial instruments), marketing (spreading the cost of advertising over a greater range of output in media markets), and technological (taking advantage of returns to scale in the production function). Each of these factors reduces the long-run average costs (LRAC) of production by shifting the short-run average total cost (SRATC) curve down and to the right.
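To make the fixed-versus-variable distinction concrete, here is a minimal sketch in Python showing how average cost falls as a fixed cost is spread over more units of output; the cost figures are hypothetical values chosen for illustration, not data from the article.

```python
# Minimal illustration: average cost falls as a fixed cost is spread over output.
# FIXED_COST and VARIABLE_COST_PER_UNIT are hypothetical values chosen for clarity.

FIXED_COST = 1_000_000.0        # incurred regardless of output (e.g. plant, equipment)
VARIABLE_COST_PER_UNIT = 50.0   # incurred per unit produced (e.g. materials, labour)

def average_cost(quantity: int) -> float:
    """Total cost per unit at a given level of output."""
    return (FIXED_COST + VARIABLE_COST_PER_UNIT * quantity) / quantity

for q in (1_000, 10_000, 100_000):
    print(f"output {q:>7}: average cost per unit = {average_cost(q):,.2f}")
# Average cost falls from 1,050.00 to 150.00 to 60.00 per unit: the fixed cost is
# spread over ever more units, which is the basic mechanism of scale economies.
```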
Economies of scale is a practical concept that may explain real-world phenomena such as patterns of international trade or the number of firms in a market. The exploitation of economies of scale helps explain why companies grow large in some industries. It is also a justification for free trade policies, since some economies of scale may require a larger market than is possible within a particular country; for example, it would not be efficient for Liechtenstein to have its own car maker if it only sold to the local market. A lone car maker may be profitable, but even more so if it exported cars to global markets in addition to selling locally. Economies of scale also play a role in a "natural monopoly ''. There is a distinction between two types of economies of scale: internal and external. An industry exhibits an internal economy of scale when the costs of production fall as the number of firms in the industry drops but the remaining firms increase their production to match previous levels. Conversely, an industry exhibits an external economy of scale when costs drop due to the introduction of more firms, thus allowing for more efficient use of specialized services and machinery.
The management thinker and translator of the Toyota Production System for service, Professor John Seddon, argues that attempting to create economies by increasing scale is powered by myth in the service sector. Instead, he believes that economies will come from improving the flow of a service, from first receipt of a customer 's demand to the eventual satisfaction of that demand. In trying to manage and reduce unit costs, firms often raise total costs by creating failure demand. Seddon claims that arguments for economy of scale are a mix of a) the plausibly obvious and b) a little hard data, brought together to produce two broad assertions, for which there is little hard factual evidence.
Some of the economies of scale recognized in engineering have a physical basis, such as the square - cube law, by which the surface of a vessel increases by the square of the dimensions while the volume increases by the cube. This law has a direct effect on the capital cost of such things as buildings, factories, pipelines, ships and airplanes.
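As a rough numerical sketch of the square - cube law (an illustration, not drawn from the source): scaling a vessel 's linear dimension by a factor $s$ multiplies its surface area by $s^2$ and its enclosed volume by $s^3$. Since much of the capital cost (shell material, insulation) tracks surface area while capacity tracks volume, cost per unit of capacity falls roughly as

\[ \frac{\text{cost}}{\text{capacity}} \propto \frac{s^2}{s^3} = \frac{1}{s}, \]

so doubling a tank 's linear dimension yields about eight times the capacity for roughly four times the material.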
In structural engineering, the strength of beams increases with the cube of the thickness.
Drag loss of vehicles like aircraft or ships generally increases less than proportionally with increasing cargo volume, although the physical details can be quite complicated. Therefore, making them larger usually results in less fuel consumption per ton of cargo at a given speed.
Heat losses from industrial processes vary per unit of volume for pipes, tanks and other vessels in a relationship somewhat similar to the square - cube law.
Overall costs of capital projects are known to be subject to economies of scale. A crude estimate is that if the capital cost for a given sized piece of equipment is known, changing the size will change the capital cost by the 0.6 power of the capacity ratio (the point six power rule).
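A hedged worked example of the point six power rule (the capacities and costs here are made up for illustration): if a plant of capacity $Q_1$ costs $C_1$, the rule estimates the cost of a similar plant of capacity $Q_2$ as

\[ C_2 \approx C_1 \left( \frac{Q_2}{Q_1} \right)^{0.6}. \]

Doubling capacity therefore multiplies the estimated capital cost by $2^{0.6} \approx 1.52$, so cost per unit of capacity falls to about 76 percent of its original value. The 0.6 exponent is only a crude average; actual exponents vary by equipment type and size range.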
In estimating capital cost, it typically requires an insignificant amount of additional labor, and possibly not much more in materials, to install an electrical wire or pipe of significantly greater capacity.
The cost of a unit of capacity of many types of equipment, such as electric motors, centrifugal pumps, diesel and gasoline engines, decreases as size increases. Also, the efficiency increases with size.
Operating crew size for ships, airplanes, trains, etc., does not increase in direct proportion to capacity. (Operating crew consists of pilots, co-pilots, navigators, etc. and does not include passenger service personnel.) Many aircraft models were significantly lengthened or "stretched '' to increase payload.
Many manufacturing facilities, especially those making bulk materials like chemicals, refined petroleum products, cement and paper, have labor requirements that are not greatly influenced by changes in plant capacity. This is because labor requirements of automated processes tend to be based on the complexity of the operation rather than production rate, and many manufacturing facilities have nearly the same basic number of processing steps and pieces of equipment, regardless of production capacity.
Karl Marx noted that large scale manufacturing allowed economical use of products that would otherwise be waste. Marx cited the chemical industry as an example, which today along with petrochemicals, remains highly dependent on turning various residual reactant streams into salable products. In the pulp and paper industry it is economical to burn bark and fine wood particles to produce process steam and to recover the spent pulping chemicals for conversion back to a usable form.
Economies of scale is related to and can easily be confused with the theoretical economic notion of returns to scale. Where economies of scale refer to a firm 's costs, returns to scale describe the relationship between inputs and outputs in a long - run (all inputs variable) production function. A production function has constant returns to scale if increasing all inputs by some proportion results in output increasing by that same proportion. Returns are decreasing if, say, doubling inputs results in less than double the output, and increasing if more than double the output. If a mathematical function is used to represent the production function, and if that production function is homogeneous, returns to scale are represented by the degree of homogeneity of the function. Homogeneous production functions with constant returns to scale are first degree homogeneous, increasing returns to scale are represented by degrees of homogeneity greater than one, and decreasing returns to scale by degrees of homogeneity less than one.
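As a minimal worked example (an illustration with hypothetical parameters, not taken from the source), consider a Cobb - Douglas production function $f(K, L) = A K^{\alpha} L^{\beta}$. Scaling both inputs by a factor $t$ gives

\[ f(tK, tL) = A (tK)^{\alpha} (tL)^{\beta} = t^{\alpha + \beta} f(K, L), \]

so the function is homogeneous of degree $\alpha + \beta$: returns to scale are constant when $\alpha + \beta = 1$, increasing when $\alpha + \beta > 1$, and decreasing when $\alpha + \beta < 1$.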
If the firm is a perfect competitor in all input markets, and thus the per - unit prices of all its inputs are unaffected by how much of the inputs the firm purchases, then it can be shown that at a particular level of output, the firm has economies of scale if and only if it has increasing returns to scale, has diseconomies of scale if and only if it has decreasing returns to scale, and has neither economies nor diseconomies of scale if it has constant returns to scale. In this case, with perfect competition in the output market the long - run equilibrium will involve all firms operating at the minimum point of their long - run average cost curves (i.e., at the borderline between economies and diseconomies of scale).
If, however, the firm is not a perfect competitor in the input markets, then the above conclusions are modified. For example, if there are increasing returns to scale in some range of output levels, but the firm is so big in one or more input markets that increasing its purchases of an input drives up the input 's per - unit cost, then the firm could have diseconomies of scale in that range of output levels. Conversely, if the firm is able to get bulk discounts of an input, then it could have economies of scale in some range of output levels even if it has decreasing returns in production in that output range.
The literature assumed that due to the competitive nature of reverse auctions, and in order to compensate for lower prices and lower margins, suppliers seek higher volumes to maintain or increase the total revenue. Buyers, in turn, benefit from the lower transaction costs and economies of scale that result from larger volumes. In part as a result, numerous studies have indicated that the procurement volume must be sufficiently high to provide sufficient profits to attract enough suppliers, and provide buyers with enough savings to cover their additional costs.
However, surprisingly enough, Shalev and Asbjornsen found, in their research based on 139 reverse auctions conducted in the public sector by public sector buyers, that the higher auction volume, or economies of scale, did not lead to better success of the auction. They found that auction volume did not correlate with competition, nor with the number of bidders, suggesting that auction volume does not promote additional competition. They noted, however, that their data included a wide range of products, and the degree of competition in each market varied significantly, and offer that further research on this issue should be conducted to determine whether these findings remain the same when purchasing the same product for both small and high volumes. Keeping competitive factors constant, increasing auction volume may further increase competition.
In his 1844 Economic and Philosophic Manuscripts, Karl Marx observes that economies of scale have historically been associated with increasing concentration of private wealth, and have been used to justify such concentration. Marx points out that concentrated private ownership of large - scale economic enterprises is a historically contingent fact, and not essential to the nature of such enterprises. In the case of agriculture, for example, Marx calls attention to the sophistical nature of the arguments used to justify the system of concentrated ownership of land.
Instead of concentrated private ownership of land, Marx recommends that economies of scale be realized by associations.
|
where did buffalo wild wings get its name | Buffalo Wild Wings - wikipedia
Buffalo Wild Wings (originally Buffalo Wild Wings & Weck, and thus the numeronym BW3) is an American casual dining restaurant and sports bar franchise in the United States, Canada, Mexico, the Philippines, the United Arab Emirates, Saudi Arabia, Oman, and India which specializes in Buffalo wings and sauces.
As of June 2017, it had 1,238 locations (625 directly owned by the company, and 612 franchised locations) across all 50 U.S. states and DC. An alternate nickname in recent usage by the company is B - Dubs.
Buffalo Wild Wings & Weck was founded in 1982 by Jim Disbrow and Scott Lowery. Lowery 's parents had become Disbrow 's guardians as they were his ice skating coaches. After Disbrow had finished judging an amateur figure skating competition at Kent State University, the pair met up to get some Buffalo - style chicken wings to eat. Failing to find any restaurant serving them, they decided to open their own restaurant serving wings. For the first location, they selected a site near Ohio State University in Columbus. Weck was an original part of the name because, beyond the wings and its dozen sauces, the restaurant served beef on weck.
The pair brought on an additional partner, Mark Lutz, within six months after opening. None had any restaurant experience and thus ran all aspects of the business, including its finances, haphazardly. The restaurant expanded into a chain over the next decade as it added six additional locations in Ohio, Indiana, and Steamboat Springs, Colorado. The Colorado location was selected as they skied there.
The company began to franchise in 1992 by working with Francorp, a Chicago - based law firm. The original franchise fee was $15,000 to $20,000 plus a percentage of sales. Its bottled wing sauces were then manufactured by Wilsey, Inc. of Atlanta. The company 's headquarters was set up in 1992 in Cincinnati. By 1993, eight more locations were added, primarily in Ohio.
In late 1994, Disbrow hired a part - time chief financial officer, Sally Smith, who was employed at his new father - in - law 's business. In order to get her full time, the company moved its headquarters to the Minneapolis / Saint Paul area, where Smith wished to stay. Smith had to deal with issues with lenders and the Internal Revenue Service, as well as a potential bankruptcy, as part of overhauling the company 's finances. Smith was unable to determine the firm 's net income or loss prior to 1995; in 1995, the company did $12 million in revenue with a loss of $1.6 million.
Expecting more growth in 1995, BW3 designed a new prototype free - standing outlet with clear separation between the bar and dining areas and seating 190 in a 5,000 - to 7,500 - square - foot space. This was a shift in strategy from a college sports - bar to casual dining. The company looked closer at new franchisees ' qualifications. Existing franchisees were encouraged to add more locations while more corporate locations were planned. At that time, there were 48 locations with 12 corporate owned.
Smith was promoted to president and CEO in August 1996, while Disbrow became chairman of the board. At the end of the year, 27 new locations were opened. An initial public stock offering was considered in 1998, but discarded given unfavorable market conditions. After using different name variations, bw - 3 and Buffalo Wild Wings, in different markets, the company decided during its first national ad campaign to standardize on the latter name throughout the system. The company also moved to increase home sales of its sauces by upgrading the packaging.
The 100th location opened in October 1999 in Apple Valley, Minnesota, a short drive from the corporate headquarters. At the time, there were 23 company - owned restaurants. Three venture capital firms and others purchased a majority stake, $8.5 million in shares, in a December 1999 private placement. The funding was intended to finance expansion, with the expectation of reaching 260 sites by late 2003. The company tested several new sauces in 2000 but added only two to the menu, along with its first dessert. In 2000, the chain, now calling its locations Buffalo Wild Wings Grill & Bar, was in 19 states with 140 locations at year end, including one, finally, in the city of its signature menu item: Buffalo, New York.
System - wide revenue was $150 million in 2001, with same - store sales growing an average of 8 percent per year. The company began pushing takeout sales. In late 2001, the company signed on Frito - Lay as part of its plans to bring branded potato chips to the retail market.
Disbrow died in October 2002, while Smith continued as company executive with Lowery as vice-president of franchise construction. There were 211 locations in 27 states by the end of third quarter of 2003.
In 2010, the company announced an expansion into Canada. In 2015, Buffalo Wild Wings expanded in the United Arab Emirates when it opened a restaurant in Dubai.
In March 2013, the company took a minority stake in PizzaRev, an upscale pizza restaurant chain. In August 2014, BW3 purchased a majority stake in the Rusty Taco chain, renaming it R Taco the next year.
On September 11, 2017, customers at the Eastvale, California franchise complained that an employee muted the sound during the United States national anthem at the start of the Monday Night Football game, claiming that the anthem was controversial and that it was not company policy to play it. The franchise owner later apologized and clarified that there is no policy regarding the matter.
In November 2017, Roark Capital Group 's Arby 's announced its plan to purchase the chain for about $2.4 billion plus debt. The deal was completed on February 5, 2018, with Arby 's Group renamed Inspire Brands, a holding company and parent to Arby 's, Buffalo Wild Wings, and R Taco, with each chain keeping its brand, name, and logos and operating autonomously.
The chain is best known for Buffalo - style chicken wings along with over a dozen sauces, as well as a complement of other items such as chicken tenders and legs. The chain 's menu also features appetizers, burgers, tacos, salads, and desserts, along with beer, wine, and other beverages. The chain is also known for its "Blazin Wing Challenge '', which challenges customers to eat 12 of its hottest wings in under six minutes; winners receive a free T - shirt. The restaurants feature an open layout with a bar area and patio seating flanked by over 50 televisions and media screens. Starting in 2016, newly built restaurants follow a layout intended to give guests the feeling of being in a sports stadium. Server uniforms consist of gray shorts and jerseys with the number 82, signifying 1982, the year the chain was established.
Buffalo Wild Wings location in Athens, Ohio
The exterior of a Buffalo Wild Wings location in Carolina Forest, South Carolina
A Buffalo Wild Wings in Gillette, Wyoming
|
what type of literature is the great gatsby | The Great Gatsby - Wikipedia
The Great Gatsby is a 1925 novel written by American author F. Scott Fitzgerald that follows a cast of characters living in the fictional town of West and East Egg on prosperous Long Island in the summer of 1922. The story primarily concerns the young and mysterious millionaire Jay Gatsby and his quixotic passion and obsession for the beautiful former debutante Daisy Buchanan. Considered to be Fitzgerald 's magnum opus, The Great Gatsby explores themes of decadence, idealism, resistance to change, social upheaval, and excess, creating a portrait of the Roaring Twenties that has been described as a cautionary tale regarding the American Dream.
Fitzgerald -- inspired by the parties he had attended while visiting Long Island 's north shore -- began planning the novel in 1923, desiring to produce, in his words, "something new -- something extraordinary and beautiful and simple and intricately patterned. '' Progress was slow, with Fitzgerald completing his first draft following a move to the French Riviera in 1924. His editor, Maxwell Perkins, felt the book was vague and persuaded the author to revise over the following winter. Fitzgerald was repeatedly ambivalent about the book 's title and he considered a variety of alternatives, including titles that referenced the Roman character Trimalchio; the title he was last documented to have desired was Under the Red, White, and Blue.
First published by Scribner 's in April 1925, The Great Gatsby received mixed reviews and sold poorly; in its first year, the book sold only 20,000 copies. Fitzgerald died in 1940, believing himself to be a failure and his work forgotten. However, the novel experienced a revival during World War II, and became a part of American high school curricula and numerous stage and film adaptations in the following decades. Today, The Great Gatsby is widely considered to be a literary classic and a contender for the title "Great American Novel ''. In 1998, the Modern Library editorial board voted it the 20th century 's best American novel and second best English - language novel of the same time period.
Set on the prosperous Long Island of 1922, The Great Gatsby provides a critical social history of America during the Roaring Twenties within its fictional narrative. That era, known for widespread economic prosperity, the development of jazz music, flapper culture, new technologies in communication (motion pictures, broadcast radio, recorded music) forging a genuine mass culture, and bootlegging, along with other criminal activity, is plausibly depicted in Fitzgerald 's novel. Fitzgerald uses many of these societal developments of the 1920s to build Gatsby 's story, from simple details like automobiles to broader themes such as his discreet allusions to the organized crime culture which was the source of Gatsby 's fortune. Fitzgerald depicts the garish society of the Roaring Twenties by placing the book 's plotline within the historical context of the era.
Fitzgerald 's visits to Long Island 's north shore and his experience attending parties at mansions inspired The Great Gatsby 's setting. Today, there are a number of theories as to which mansion was the inspiration for the book. One possibility is Land 's End, a notable Gold Coast Mansion where Fitzgerald may have attended a party. Many of the events in Fitzgerald 's early life are reflected throughout The Great Gatsby. Fitzgerald was a young man from Minnesota, and, like Nick, he was educated at an Ivy League school, Princeton (in Nick 's case, Yale). Fitzgerald is also similar to Jay Gatsby in that he fell in love while stationed far from home in the military and fell into a life of decadence trying to prove himself to the girl he loved. Fitzgerald became a second lieutenant and was stationed at Camp Sheridan in Montgomery, Alabama. There he met and fell in love with a wild seventeen - year - old beauty named Zelda Sayre. Zelda finally agreed to marry him, but her preference for wealth, fun, and leisure led her to delay their wedding until he could prove a success. Like Nick in The Great Gatsby, Fitzgerald found this new lifestyle seductive and exciting, and, like Gatsby, he had always idolized the very rich. In many ways, The Great Gatsby represents Fitzgerald 's attempt to confront his conflicting feelings about the Jazz Age. Like Gatsby, Fitzgerald was driven by his love for a woman who symbolized everything he wanted, even as she led him toward everything he despised.
In her book Careless People: Murder, Mayhem and the Invention of The Great Gatsby (2013), Sarah Churchwell speculates that parts of the ending of The Great Gatsby were based on the Hall - Mills Case. Based on her forensic search for clues, she asserts that the two victims in the Hall - Mills murder case inspired the characters who were murdered in The Great Gatsby.
In the summer of 1922, Nick Carraway, a Yale graduate and veteran of the Great War from the Midwest -- who serves as the novel 's narrator -- takes a job in New York as a bond salesman. He rents a small house on Long Island, in the fictional village of West Egg, next door to the lavish mansion of Jay Gatsby, a mysterious multi-millionaire who holds extravagant parties but does not participate in them. Nick drives around the bay to East Egg for dinner at the home of his cousin, Daisy Fay Buchanan, and her husband, Tom, a college acquaintance of Nick 's. They introduce Nick to Jordan Baker, an attractive, cynical young golfer. She reveals to Nick that Tom has a mistress, Myrtle Wilson, who lives in the "valley of ashes '', an industrial dumping ground between West Egg and New York City. Not long after this revelation, Nick travels to New York City with Tom and Myrtle to an apartment where Tom keeps his affairs with Myrtle and others. At Tom 's New York apartment, a vulgar and bizarre party takes place. It ends with Tom breaking Myrtle 's nose after she annoys him by saying Daisy 's name several times.
Nick eventually receives an invitation to one of Gatsby 's parties. Nick encounters Jordan Baker at the party and they meet Gatsby himself, an aloof and surprisingly young man who recognizes Nick from their same division in the Great War. Through Jordan, Nick later learns that Gatsby knew Daisy through a purely chance meeting in 1917 when Daisy and her friends were doing volunteer services ' work with young officers headed to Europe. From their brief meetings and casual encounters at that time, Gatsby became (and still is) deeply in love with Daisy. Gatsby had hoped that his wild parties would attract an unsuspecting Daisy, who lived across the bay, to appear at his doorstep and allow him to present himself as a man of wealth and position.
Having developed a budding friendship with Nick, Gatsby uses him to arrange a reunion between himself and Daisy. Nick invites Daisy to have tea at his house without telling her that Gatsby will also be there. After an initially awkward reunion, Gatsby and Daisy begin an affair over the summer. At a luncheon at the Buchanans ' house, Daisy speaks to Gatsby with such undisguised intimacy that Tom realizes she is in love with Gatsby. Though Tom is himself involved in an extramarital affair, he is outraged by his wife 's infidelity. He forces the group to drive into New York City and confronts Gatsby in a suite at the Plaza Hotel, asserting that he and Daisy have a history that Gatsby could never understand. In addition to that, he announces to his wife that Gatsby is a criminal whose fortune comes from bootlegging alcohol and other illegal activities. Daisy realizes that her allegiance is to Tom, and Tom contemptuously sends her back to East Egg with Gatsby, attempting to prove that Gatsby can not hurt her.
On the way back, Gatsby 's car strikes and kills Tom 's mistress, Myrtle. Nick later learns from Gatsby that Daisy, not Gatsby himself, was driving the car at the time of the accident. Myrtle 's husband, George Wilson, falsely concludes that the driver of the yellow car is the secret lover he suspects she has. He learns that the yellow car is Gatsby 's, fatally shoots him and then turns the gun on himself. Nick stages an unsettlingly small funeral for Gatsby which none of Gatsby 's associates attend and only one of his partygoers (besides Nick) attends. Later, Nick runs into Tom in New York and finds out that Tom had told George that the yellow car was Gatsby 's and gave him Gatsby 's address. Disillusioned with the East, Nick moves back to the Midwest.
Fitzgerald began planning his third novel in June 1922, but it was interrupted by production of his play, The Vegetable, in the summer and fall. The play failed miserably, and Fitzgerald worked that winter on magazine stories struggling to pay his debt caused by the production. The stories were, in his words, "all trash and it nearly broke my heart, '' although included among those stories was "Winter Dreams '', which Fitzgerald later described as "a sort of first draft of the Gatsby idea ''.
After the birth of their child, the Fitzgeralds moved to Great Neck, New York, on Long Island, in October 1922. The town was used as the scene of The Great Gatsby. Fitzgerald 's neighbors in Great Neck included such prominent and newly wealthy New Yorkers as writer Ring Lardner, actor Lew Fields, and comedian Ed Wynn. These figures were all considered to be "new money '', unlike those who came from Manhasset Neck or Cow Neck Peninsula, places which were home to many of New York 's wealthiest established families, and which sat across the bay from Great Neck. This real - life juxtaposition gave Fitzgerald his idea for "West Egg '' and "East Egg ''. In this novel, Great Neck (King 's Point) became the "new money '' peninsula of West Egg and Port Washington (Sands Point) the old - money East Egg. Several mansions in the area served as inspiration for Gatsby 's home, such as Oheka Castle and Beacon Towers, since demolished.
By mid-1923, Fitzgerald had written 18,000 words for his novel but discarded most of his new story as a false start, some of which resurfaced in the 1924 short story "Absolution ''.
Work on The Great Gatsby began in earnest in April 1924. Fitzgerald wrote in his ledger, "Out of woods at last and starting novel. '' He decided to make a departure from the writing process of his previous novels and told Perkins that the novel was to be a "consciously artistic achievement '' and a "purely creative work -- not trashy imaginings as in my stories but the sustained imagination of a sincere and yet radiant world. '' He added later, during editing, that he felt "an enormous power in me now, more than I 've ever had. '' Soon after this burst of inspiration, work slowed while the Fitzgeralds made a move to the French Riviera, where a serious crisis in their relationship soon developed. By August, however, Fitzgerald was hard at work and completed what he believed to be his final manuscript in October, sending the book to his editor, Maxwell Perkins, and agent, Harold Ober, on October 30. The Fitzgeralds then moved to Rome for the winter. Fitzgerald made revisions through the winter after Perkins informed him in a November letter that the character of Gatsby was "somewhat vague '' and Gatsby 's wealth and business, respectively, needed "the suggestion of an explanation '' and should be "adumbrated ''.
Content after a few rounds of revision, Fitzgerald returned the final batch of revised galleys in the middle of February 1925. Fitzgerald 's revisions included an extensive rewriting of Chapter VI and VIII. Despite this, he refused an offer of $10,000 for the serial rights in order not to delay the book 's publication. He had received a $3,939 advance in 1923 and $1,981.25 upon publication.
The cover of the first printing of The Great Gatsby is among the most celebrated pieces of art in American literature. It depicts disembodied eyes and a mouth over a blue skyline, with images of naked women reflected in the irises. A little - known artist named Francis Cugat was commissioned to illustrate the book while Fitzgerald was in the midst of writing it. The cover was completed before the novel; Fitzgerald was so enamored with it that he told his publisher he had "written it into '' the novel. Fitzgerald 's remarks about incorporating the painting into the novel led to the interpretation that the eyes are reminiscent of those of fictional optometrist Dr. T.J. Eckleburg (depicted on a faded commercial billboard near George Wilson 's auto repair shop) which Fitzgerald described as "blue and gigantic -- their retinas are one yard high. They look out of no face, but instead, from a pair of enormous yellow spectacles which pass over a non-existent nose. '' Although this passage has some resemblance to the painting, a closer explanation can be found in the description of Daisy Buchanan as the "girl whose disembodied face floated along the dark cornices and blinding signs. '' Ernest Hemingway wrote in A Moveable Feast that when Fitzgerald lent him a copy of The Great Gatsby to read, he immediately disliked the cover, but "Scott told me not to be put off by it, that it had to do with a billboard along a highway in Long Island that was important in the story. He said he had liked the jacket and now he did n't like it. ''
Fitzgerald had difficulty choosing a title for his novel and entertained many choices before reluctantly choosing The Great Gatsby, a title inspired by Alain - Fournier 's Le Grand Meaulnes. Prior, Fitzgerald shifted between Gatsby; Among Ash - Heaps and Millionaires; Trimalchio; Trimalchio in West Egg; On the Road to West Egg; Under the Red, White, and Blue; The Gold - Hatted Gatsby; and The High - Bouncing Lover. The titles The Gold - Hatted Gatsby and The High - Bouncing Lover came from Fitzgerald 's epigraph for the novel, one which he wrote himself under the pen name of Thomas Parke D'Invilliers. He initially preferred titles referencing Trimalchio, the crude parvenu in Petronius 's Satyricon, and even refers to Gatsby as Trimalchio once in the novel: "It was when curiosity about Gatsby was at its highest that the lights in his house failed to go on one Saturday night -- and, as obscurely as it had begun, his career as Trimalchio was over. '' Unlike Gatsby 's spectacular parties, Trimalchio participated in the audacious and libidinous orgies he hosted but, according to Tony Tanner 's introduction to the Penguin edition, there are subtle similarities between the two.
In November 1924, Fitzgerald wrote to Perkins that "I have now decided to stick to the title I put on the book... Trimalchio in West Egg '' but was eventually persuaded that the reference was too obscure and that people would not be able to pronounce it. His wife, Zelda, and Perkins both expressed their preference for The Great Gatsby and the next month Fitzgerald agreed. A month before publication, after a final review of the proofs, he asked if it would be possible to re-title it Trimalchio or Gold - Hatted Gatsby but Perkins advised against it. On March 19, 1925, Fitzgerald expressed intense enthusiasm for the title Under the Red, White and Blue, but it was at that stage too late to change. The Great Gatsby was published on April 10, 1925. Fitzgerald remarked that "the title is only fair, rather bad than good. ''
Early drafts of the novel entitled Trimalchio: An Early Version of The Great Gatsby have been published. A notable difference between the Trimalchio draft and The Great Gatsby is a less complete failure of Gatsby 's dream in Trimalchio. Another difference is that the argument between Tom Buchanan and Jay Gatsby is more even, although Daisy still returns to Tom.
Sarah Churchwell sees The Great Gatsby as a "cautionary tale of the decadent downside of the American dream. '' The story deals with the limits and realities of America 's ideals of social and class mobility, and the inevitably hopeless aspirations of the lower classes to rise above the station of their birth. Through the narrator, Nick Carraway, the book observes in stark relief that "... a sense of the fundamental decencies is parcelled out unequally at birth. '' Using elements of irony and a tragic ending, it also delves into themes of the excesses of the rich and the recklessness of youth. Journalist Nick Gillespie sees The Great Gatsby as a story about the "underlying permanence of class differences, even in the face of a modern economy 's attempt to assert that the class structure is based not on status and inherited position but upon the innovation and the ability of literally anyone to succeed by meeting the ever - changing demands and tastes of consumers ' needs. '' This interpretation asserts that The Great Gatsby captures the American experience because it is a story about change and those who resist it; whether the change comes in the form of a new wave of immigrants (Southern Europeans in the early 20th century, Latin Americans today), the nouveau riche, or successful minorities, Americans from the 1920s to the 21st century have plenty of experience with changing economic and social circumstances. As Gillespie states, "While the specific terms of the equation are always changing, it 's easy to see echoes of Gatsby 's basic conflict between established sources of economic and cultural power and upstarts in virtually all aspects of American society. '' Because this concept is particularly American and can be seen throughout American history, readers are able to relate to The Great Gatsby (which has lent the novel an enduring popularity).
Later critical writings on The Great Gatsby, following the novel 's revival, focus in particular on Fitzgerald 's disillusionment with the American Dream -- life, liberty and the pursuit of happiness -- in the context of the hedonistic Jazz Age, a name for the era which Fitzgerald said he had coined. In 1970, Roger Pearson published the article "Gatsby: False prophet of the American Dream '', in which he states that Fitzgerald "has come to be associated with this concept of the AMERICAN Dream more than any other writer of the twentieth century ''. Pearson goes on to suggest that Gatsby 's failure to realize the American dream demonstrates that it no longer exists except in the minds of those as materialistic as Gatsby. He concludes that the American dream pursued by Gatsby "is, in reality, a nightmare '', bringing nothing but discontent and disillusionment to those who chase it as they realize its unsustainability and ultimately its unattainability.
In addition to exploring the trials and tribulations of achieving the great American dream during the Jazz Age, The Great Gatsby explores societal gender expectations as a theme, exemplifying in Daisy Buchanan 's character the marginalization of women in the East Egg social class that Fitzgerald depicts. As an upper - class, white woman living in East Egg during this time period in America, Daisy must adhere to certain societal expectations, including but certainly not limited to actively filling the role of dutiful wife, mother, keeper of the house, and charming socialite. As the reader finds in the novel, many of Daisy 's choices, ultimately culminating in the tragedy of the plot and misery for all those involved, can be at least partly attributed to her prescribed role as a "beautiful little fool '' who is completely reliant on her husband for financial and societal security. For instance, one could argue that Daisy 's ultimate decision to remain with her husband despite her feelings for Gatsby can be attributed to the status, security, and comfort that her marriage to Tom Buchanan provides. Additionally, the theme of the female familial role within The Great Gatsby goes hand in hand with that of the ideal family unit associated with the great American dream -- a dream that goes unrealized for Gatsby and Daisy in Fitzgerald 's prose.
The symbol of the green light serves as a guiding device for understanding Gatsby 's American dream and Nick 's unreliable narration and jealousy. The green light shines from a dock across the bay from Gatsby 's house, is frequently mentioned in the background of the plot, and is a symbol for money, jealousy, and sickness. Additionally, the green light outlines how Gatsby 's desire for wealth and love creates a web of problems. Throughout the novel, the protagonist becomes infatuated with "things '' which distort his understanding of happiness. Gatsby 's fixation with possessions shines the green light of money.
The symbol is subtle, but always present among the many conflicts occurring. The meaning of the light changes throughout the novel, but it first appears as a sign of hope. Nick notes, "he stretched out his arms toward the dark water in a curious way, and far as I was from him I could have sworn he was trembling. Involuntarily I glanced seaward -- and distinguished nothing except a single green light, minute and far away, that might have been the end of a dock ''. After Nick reconnects with Daisy, he returns to his house, notices Gatsby, and then the green light is introduced. This sequence of events serves as a sign of positivity that Gatsby is reuniting with Daisy. The symbol 's meaning early on sets an optimistic tone for the novel.
At first, the green light presents a positive correlation with Gatsby achieving the American dream. However, the green light symbol also sheds light on the competing desires of money and greed. As Gatsby attempts to become closer to his dream, his connection with Daisy fades and is tarnished by his materialism and her perplexing relationship with Tom. The green light is a combination of what is both great and flawed in the novel, especially when considering Gatsby 's dream. Fitzgerald believes the dream is "a mirage that entices us to keep moving forward even as we are ceaselessly borne back into the past ''. His narration suggests that even as the world advances, people will also revert to history and tradition. The American dream will always be both desirable and unreachable because the perception of a "dream '' or "perfection '' is always changing just as characterized with Gatsby who consistently pursues love and money, but fails in the end. Evidently, the American Dream was unattainable from the beginning and the positive green light was not meant to last.
The green light 's positivity in the beginning of the novel fails not only Gatsby, but other characters as well. After Gatsby dies, Nick tries to contact Daisy and Tom, but they have moved away and will not be returning East. Nick acknowledges, "I see now that this has been a story of the West, after all - Tom and Gatsby, Daisy and Jordan and I, were all Westerners and perhaps we possessed some deficiency in common which made us subtly unadaptable to Eastern life ''. Nick points out that no character, himself included, found success in the East. Nick, Gatsby, Tom and Daisy all came from the western part of the United States, and Nick suggests that they may have had more success staying in a location familiar to them. Again, the dream proves to be unattainable and the green light of positivity fades.
The greatest conflict within the green light of money is that Gatsby believes money equals achieving his dream, when in fact it does not. To properly address Gatsby 's internal conflict between wealth and the American dream, one must first define what it means to achieve the American dream. The dream was a set of ideals for prosperity: each person, no matter their class, race, or religion, should have an equal opportunity to achieve success. Gatsby wanted happiness, but his happiness was defined by acquiring money and things, and by pursuing love and Daisy in a deceitful way; he earned money dishonestly and was chasing a married woman. Gatsby takes Daisy on an in - depth tour of his home, pointing out, "I 've got a man in England who buys me clothes. He sends over a selection of things at the beginning of each season ''. Gatsby hopes to impress Daisy with his materials from places outside of America, which she ultimately does not care about. It seems that Gatsby 's international imports are meant to show he is even more sophisticated than American materialism. Gatsby is under the impression that money will result in love with Daisy, but he is sadly mistaken. Gatsby 's obsession with money also highlights his lack of substance or personality throughout the novel; the green light fogs his true self. Gatsby 's aggressive chase for money shows that he only cares about being wealthy.
Nick demonstrates the green light of jealousy when the Buchanans have him over for dinner. Throughout the novel, Nick 's opinion of Tom and Daisy remains consistent -- and judgmental. His immediate reactions from the scene at their dinner party influence his impressions of them during the rest of the novel. Nick was taken by surprise when Daisy initially told him that she and Tom had moved East permanently; he thought Tom would "drift over forever seeking, a little wistfully, for the dramatic turbulence of some irrecoverable football game ''. Before taking an opportunity to get to know Tom, Nick classifies him as the type of jock hero he knew from college who happens to be dating his cousin. Nick did not anticipate the couple staying together long. Already, Nick is making judgments about Tom, something he promised he would not do, and he is demonstrating jealousy. He views Tom as a transparent upper - class member, and assumes he is the same person he was in college, years later. Nick 's ability to assume Tom 's character comes very easily to him. Additionally, Nick 's speculation about Tom is an early indicator that he favors Gatsby. Nick sees no depth to Tom 's character and finds him unimpressive. Unfortunately, because Fitzgerald uses Nick as the sole point of view throughout the novel, readers are given a biased view of Tom, making Nick 's narration unfair. Additionally, Nick summarizes Tom before the dinner party; his mind is made up about him from years ago when the two attended Yale together. Thus, Nick 's claim of "reserving judgment '' was insincere from the very beginning, and his college jealousy is revealed.
Nick 's judgment transitions from Tom to Daisy during their first encounter at dinner. A green light of sickness is revealed when he reunites with Daisy. Nick notes, "her face was sad and lovely with bright things in it, bright eyes and a bright passionate mouth ''. Nick senses that Daisy is unhappy even though he knows her to be a lively person. Nick wants Daisy to be easily understood and is thrown off when she proves to be otherwise. When Tom and Daisy excuse themselves from the table at their dinner gathering, Jordan tells Nick about Tom 's "woman in New York ''. Nick first has sympathy for Daisy 's sadness until learning that she is indeed aware of Tom 's affair. After leaving the dinner party Nick says, "I was confused and a little disgusted as I drove away. It seemed to me that the thing for Daisy to do was to rush out of the house, child in arms - but apparently, there were no such intentions in her head ''. Nick is sick about Daisy 's decision, but does not take into consideration why she is staying with Tom through his infidelity and immediately judges the couple. Nick does not consider the many reasons why Daisy would stay with Tom, but instead assumes the worst of the pair. Also, Nick 's opinion that Daisy should drive away with her baby is a harsh statement for someone who is not in her position, has never been married, and does not have a child. Nick 's first interactions with other characters convey his judgmental nature. Additionally, Nick becomes sick when Daisy does not stand up for herself in front of Tom.
At the end of the novel the green light has reached its final meaning. Nick claims, "Gatsby believed in the green light, the orgastic future that year by year recedes before us. It eluded us then, but that 's no matter -- to - morrow we will run farther, stretch out our arms farther.... ''. Fitzgerald narrates that the money and hope the green light has provided for characters, is ultimately unattainable. Nick analyzes this symbol precisely through the "orgastic future '': Gatsby continuously aims for a climax that will come and go. The green light displays the overarching picture of the novel by symbolizing Gatsby 's American dream, since it is ever changing and unattainable. As the green light fades throughout the novel, so does Gatsby 's dream.
The Great Gatsby was published by Charles Scribner 's Sons on April 10, 1925. Fitzgerald called Perkins on the day of publication to monitor reviews: "Any news? '' "Sales situation doubtful, '' read a wire from Perkins on April 20, "(but) excellent reviews. '' Fitzgerald responded on April 24, saying the cable "depressed '' him, closing the letter with "Yours in great depression. '' Fitzgerald had hoped the novel would be a great commercial success, perhaps selling as many as 75,000 copies. By October, when the original sale had run its course, the book had sold fewer than 20,000 copies. Despite this, Scribner 's continually kept the book in print; they carried the original edition on their trade list until 1946, by which time Gatsby was in print in three other forms and the original edition was no longer needed. Fitzgerald received letters of praise from contemporaries T.S. Eliot, Edith Wharton, and Willa Cather regarding the novel; however, this was private opinion, and Fitzgerald feverishly demanded the public recognition of reviewers and readers.
The Great Gatsby received mixed reviews from literary critics of the day. Generally the most effusive of the positive reviews was Edwin Clark of The New York Times, who felt the novel was "A curious book, a mystical, glamourous (sic) story of today. '' Similarly, Lillian C. Ford of the Los Angeles Times wrote, "(the novel) leaves the reader in a mood of chastened wonder '', calling the book "a revelation of life '' and "a work of art. '' The New York Post called the book "fascinating... His style fairly scintillates, and with a genuine brilliance; he writes surely and soundly. '' The New York Herald Tribune was unimpressed, but referred to The Great Gatsby as "purely ephemeral phenomenon, but it contains some of the nicest little touches of contemporary observation you could imagine - so light, so delicate, so sharp... a literary lemon meringue. '' In The Chicago Daily Tribune, H.L. Mencken called the book "in form no more than a glorified anecdote, and not too probable at that, '' while praising the book 's "careful and brilliant finish. ''
Several writers felt that the novel left much to be desired following Fitzgerald 's previous works and promptly criticized him. Harvey Eagleton of The Dallas Morning News believed the novel signaled the end of Fitzgerald 's success: "One finishes Great Gatsby with a feeling of regret, not for the fate of the people in the book, but for Mr. Fitzgerald. '' John McClure of The Times - Picayune said that the book was unconvincing, writing, "Even in conception and construction, The Great Gatsby seems a little raw. '' Ralph Coghlan of the St. Louis Post-Dispatch felt the book lacked what made Fitzgerald 's earlier novels endearing and called the book "a minor performance... At the moment, its author seems a bit bored and tired and cynical. '' Ruth Snyder of New York Evening World called the book 's style "painfully forced '', noting that the editors of the paper were "quite convinced after reading The Great Gatsby that Mr. Fitzgerald is not one of the great American writers of to - day. '' The reviews struck Fitzgerald as completely missing the point: "All the reviews, even the most enthusiastic, not one had the slightest idea what the book was about. ''
Fitzgerald 's goal was to produce a literary work which would truly prove himself as a writer, and Gatsby did not have the commercial success of his two previous novels, This Side of Paradise and The Beautiful and Damned. Although the novel went through two initial printings, some of these copies remained unsold years later. Fitzgerald himself blamed poor sales on the fact that women tended to be the main audience for novels during this time, and Gatsby did not contain an admirable female character. According to his own ledger, now made available online by University of South Carolina 's Thomas Cooper library, he earned only $2,000 from the book. Although 1926 brought Owen Davis ' stage adaption and the Paramount - issued silent film version, both of which brought in money for the author, Fitzgerald still felt the novel fell short of the recognition he hoped for and, most importantly, would not propel him to becoming a serious novelist in the public eye. For several years afterward, the general public believed The Great Gatsby to be nothing more than a nostalgic period piece.
In 1940, Fitzgerald suffered a third and final heart attack, and died believing his work forgotten. His obituary in The New York Times mentioned Gatsby as Fitzgerald "at his best ''. A strong appreciation for the book had developed in underground circles; future writers Edward Newhouse and Budd Schulberg were deeply affected by it and John O'Hara showed the book 's influence. The republication of Gatsby in Edmund Wilson 's edition of The Last Tycoon in 1941 produced an outburst of comment, with the general consensus expressing the sentiment that the book was an enduring work of fiction.
In 1942, a group of publishing executives created the Council on Books in Wartime. The Council 's purpose was to distribute paperback books to soldiers fighting in the Second World War. The Great Gatsby was one of these books. The books proved to be "as popular as pin - up girls '' among the soldiers, according to the Saturday Evening Post 's contemporary report. 155,000 copies of Gatsby were distributed to soldiers overseas.
By 1944, full - length articles on Fitzgerald 's works were being published, and the following year, "the opinion that Gatsby was merely a period piece had almost entirely disappeared. '' This revival was paved by interest shown by literary critic Edmund Wilson, who was Fitzgerald 's friend. In 1951, Arthur Mizener published The Far Side of Paradise, a biography of Fitzgerald. He emphasized The Great Gatsby 's positive reception by literary critics, which may have influenced public opinion and renewed interest in it.
By 1960, the book was steadily selling 50,000 copies per year, and renewed interest led The New York Times editorialist Arthur Mizener to proclaim the novel "a classic of twentieth - century American fiction ''. The Great Gatsby has sold over 25 million copies worldwide as of 2013, annually sells an additional 500,000 copies, and is Scribner 's most popular title; in 2013, the e-book alone sold 185,000 copies.
Scribner 's copyright is scheduled to expire in 2020, according to Maureen Corrigan 's book about the making of The Great Gatsby, So We Read On.
The Great Gatsby has resulted in a number of film and television adaptations:
The New York Metropolitan Opera commissioned John Harbison to compose an operatic treatment of the novel to commemorate the 25th anniversary of James Levine 's debut. The work, called The Great Gatsby, premiered on December 20, 1999.
|
who is the winner of 2016 fifa world cup football | 2016 FIFA Club World Cup - wikipedia
The 2016 FIFA Club World Cup (officially known as the FIFA Club World Cup Japan 2016 presented by Alibaba YunOS Auto for sponsorship reasons) was the 13th edition of the FIFA Club World Cup, a FIFA - organised international club football tournament between the champion clubs from each of the six continental confederations, as well as the national league champion from the host country. The tournament was hosted by Japan. Real Madrid won their second Club World Cup, defeating hosts Kashima Antlers in the final.
The application process for the 2015 -- 16 and 2017 -- 18 editions (two hosts, each hosting for two years) began in February 2014. Member associations interested in hosting had to submit a declaration of interest by 30 March 2014, and provide the complete set of bidding documents by 25 August 2014. The FIFA Executive Committee was to select the hosts at their meeting in Morocco in December 2014. However, no decision regarding the 2015 -- 2016 host was made until 2015.
The following countries expressed an interest in bidding to host the tournament:
Japan was officially confirmed as hosts of the 2015 and 2016 tournaments on 23 April 2015.
On 9 June 2016, Suita City Football Stadium in Osaka and International Stadium Yokohama in Yokohama were named as the two venues of the tournament.
The appointed match officials were:
Video assistant referees were tested during the tournament. The system was used for the first time when a penalty was awarded by referee Viktor Kassai in the first half of the semi-final between Atlético Nacional and Kashima Antlers after a review of video replay.
Each team had to name a 23 - man squad (three of whom must be goalkeepers). Injury replacements were allowed until 24 hours before the team 's first match. The official squads (excluding the host team, who was yet to be determined) were announced on 1 December 2016.
The schedule of the tournament was announced on 15 July 2016.
A draw was held on 21 September 2016, 11: 00 CEST (UTC + 2), at the FIFA headquarters in Zürich, Switzerland, to determine the positions in the bracket for the three teams which enter the quarter - finals.
If a match was tied after normal playing time:
On 18 March 2016, the FIFA Executive Committee agreed that the competition would be part of the International Football Association Board 's trial to allow a fourth substitute to be made during extra time.
All times are local, JST (UTC + 9).
Kashima Antlers v Auckland City
Jeonbuk Hyundai Motors v América
Mamelodi Sundowns v Kashima Antlers
Jeonbuk Hyundai Motors v Mamelodi Sundowns
Atlético Nacional v Kashima Antlers
América v Real Madrid
América v Atlético Nacional
Real Madrid v Kashima Antlers
Per statistical convention in football, matches decided in extra time are counted as wins and losses, while matches decided by penalty shoot - out are counted as draws.
The following awards were given at the conclusion of the tournament.
|
when does blair and chuck have a baby | Blair Waldorf - wikipedia
Blair Cornelia Waldorf is the lead character of Gossip Girl, introduced in the original series of novels and also appearing in the television and manga adaptations. Described as "a girl of extremes '' by creator Cecily von Ziegesar, she is a New York City socialite and a comical overachiever who possesses both snobbish and sensitive sides. Due to her position as queen bee of Manhattan 's social scene, Blair 's actions and relations are under constant scrutiny from the mysterious Gossip Girl, a popular blogger.
Leighton Meester, who portrayed the character in the television drama, has described Blair as being insecure about her social status. At times, this anxiety creates flaws and complexities which contribute to character development. In Meester 's view, the true Blair is ultimately a good girl at heart.
Blair has been compared to vintage film and literary figures, including Becky Sharp and Lizzie Eustace. Meester 's portrayal has also drawn comparisons to roles played by Joan Collins and Audrey Hepburn. She is the most critically acclaimed character of the franchise, while the television character has drawn real - life attention surrounding fashion and her love life.
Gossip Girl is a series of novels about socially prominent young adults in New York City. The story primarily follows Blair Waldorf and her best friend Serena van der Woodsen during their years in high school and college. Due to her fame on the Upper East Side, Blair is featured on the website of "Gossip Girl, '' an anonymous gossip blogger whose posts appear occasionally throughout the story.
In the first book, Blair is introduced as a privileged, comically vain overachiever. She is described as an alluring brunette, and occasionally models her appearance and demeanor after famous actresses, including Marilyn Monroe and, most often, Audrey Hepburn. In early novels, the character is also written as bulimic.
Blair is largely motivated by matters surrounding family, romance, and ambition. However, her tendency to overachieve can lead to feelings of paranoia, with dramatic or comical results. In a review for The New Yorker, Janet Malcolm remarked that Blair 's issues made her "both a broader caricature and a more real person '' than the other Gossip Girl characters. In a 2009 interview, Gossip Girl creator Cecily von Ziegesar claimed to identify with Blair the most, stating, "She is so unpredictable and dramatic. Such a bitch, but we understand why she is a bitch and we like her anyway ''.
Throughout most of the series ' run, Blair grapples with a number of changes within her family. Two years prior to the opening novel, Blair 's parents divorced after her father ran off with another man. When her mother remarries, and her father leaves the country, Blair has difficulty accepting her stepfather, Cyrus Rose. In addition, her stress over these matters occasionally affects her other relationships. However, Blair ultimately remains close to her father Harold, who she often turns to for comfort. By contrast, she maintains a somewhat tenser relationship with her mother Eleanor. It is also revealed that Blair has struggled with bulimia.
In the opening novel, Blair learns of an affair between her friend Serena and her boyfriend Nate. This marks the beginning of the story 's primary love triangle, which recurs throughout the series. Blair 's romantic life has various effects on her character development. After Nate repeatedly hurts her, she eventually refuses to take him back until she believes in his ability to commit to her. In pursuing Nate, however, Blair herself cheats on her new boyfriend Pete, which results in her losing both of them. She eventually begins to acknowledge her mistakes, with her father 's help. She later grows closer to Chuck Bass (which initially occurred in the series ' television adaptation), who she 'd previously known for years, which leads them to briefly date one another.
Blair is noted for an over-achieving nature, which often appears in humorous scenes. In a review for New York magazine, Emily Nussbaum lauded one of Blair 's fantasies, which involves "joining the Peace Corps, getting a killer tan, winning the Nobel Peace Prize, and having dinner with the president, ' who would then write her a recommendation to Yale, and then Yale would fall all over themselves to accept her. ' '' The New Yorker 's Janet Malcolm remarked that unlike some of her forerunners in film and literature, "Blair already has all the money and position anyone could want. She is pure naked striving, restlessly seeking an object, any object, and never knowing when enough is enough. ''
Blair encounters a setback during her interview at Yale by revealing the recent stress in her life, and then kissing her interviewer on the cheek upon dismissal. Her father then makes a donation to the school, though Blair is still wait - listed. In the twelfth book, I Will Always Love You, it is revealed that she has been admitted to the university.
In addition to her feelings for Nate, Blair is sometimes said to feel competitive with Serena in other areas, including matters of beauty and popularity. This also leads to an occasional envy on Blair 's part. It is unclear how much of Blair 's perception of Serena is in line with reality; the narrative describes both characters as "hands down the two hottest girls on the Upper East Side, and maybe all of Manhattan, or even the whole world. ''
In 2010, Yen Press began publishing a manga adaptation titled Gossip Girl: For Your Eyes Only, written and illustrated by HyeKyung Baek. This series adapts notable scenarios from the novel -- including the triangle with Nate and Serena -- but also features new material.
After losing her position as queen bee, Blair attempts to regain her former status while adjusting to a less privileged lifestyle. In addition to this series, Blair also appears in a manga adaptation of the novel Gossip Girl: Psycho Killer, a parody of horror stories.
In 2007, Gossip Girl was adapted for television. According to Cecily von Ziegesar, the television character is largely faithful to the original. Among the aspects to be maintained are her admiration for Audrey Hepburn and her interest in Yale University. However, the series is also noted for its deviations from the source material, including the exclusion of Blair 's brother Tyler. The show also explores romances between Blair and multiple male leads, resulting in occasional love triangles. In the fifth season, Blair is revealed to be pregnant with the child of Louis Grimaldi, a prince of Monaco; however, the child later dies before birth following a car crash involving Blair and Chuck.
Among fans and the media, Blair 's bond with Chuck Bass was commonly known by the portmanteau "Chair '', while her relationship with Dan Humphrey was referred to as "Dair ''. The nicknames and viewer interest in these relationships were recognized by the show 's producers.
They were like, ' Be bitchy and nice, ugly and pretty, young and old, stupid and smart, innocent and slutty, blond and brunette. Can you be all those things? '
To prepare for the part of Blair, actress Leighton Meester, a natural blonde, dyed her hair brown before auditioning, and also studied the first novel. She has described the character as multi-faceted, labeling her "a little bit of everything which is pretty amazing. '' Like von Ziegesar, Meester has also claimed to relate to Blair on certain levels. Prior to the show 's debut in 2007, the actress stated that, "The only way to play Blair, or any character, and make her human, is to find what she is inside me. And I know I have my insecurities, too. '' She went on to say that, "The way Blair and I are not alike when it comes to insecurities is: She pays so much attention to hers! '' Meester 's casting was described by Yahoo! as a star - making role which moved her "into the pop culture vanguard, '' while Cecily von Ziegesar has called her a perfect choice.
In December 2010, Meester revealed plans to leave the show in 2012. E! Online and other outlets speculated that her departure would possibly mark the end of the series.
In Season 1 of Gossip Girl, Blair is introduced as the Upper East Side 's beautiful and popular queen bee. She is the daughter of Eleanor Waldorf, a famous fashion designer. She is dating Nate Archibald, and is best friends with Serena van der Woodsen. She also finds a close companion in Nate 's best friend and her childhood friend Chuck Bass, who becomes a partner for her schemes. When Serena returns home from boarding school, Blair learns from Nate that he lost his virginity to a drunken Serena over a year ago. Blair retaliates by publicly revealing Serena 's connection to a rehab hospital. She then learns that the actual patient is Serena 's younger brother, Eric, who had been committed after a suicide attempt. Afterward, a remorseful Blair reconciles with Serena.
After learning that Nate no longer loves her, Blair sleeps with Chuck, eventually falling for him. This leads to a heated affair and an eventual love triangle. Her inability to choose creates much of the first season 's story line. Blair 's past struggle with anorexia - bulimia is briefly mentioned, but never revisited after the first season.
She also begins a brief power struggle with freshman Jenny Humphrey. After she unites with Chuck and Nate in order to save Serena from the scheming Georgina Sparks, Chuck realizes that his feelings for Blair are real and suggests that they spend the summer together in Tuscany. However, he is discouraged by his father at the last minute, and stands Blair up.
At the launch of the second season, Blair was described by creators as the queen at the center of the Gossip Girl chess game. A large portion of her story line in Season 2 revolves around her love - hate relationship with Chuck Bass, which was labeled "the heart of GG '' by People magazine. While competing with Serena, Blair forms an unexpected friendship with Jenny, who states that they each work for everything they achieve, while Serena often glides through life. During their interviews at Yale University, Blair and Serena apologize for their ill feelings and resume their friendship.
In the episode "O Brother, Where Bart Thou? '', Chuck is devastated by news of his father 's death, prompting Blair to offer her support while telling Chuck that she loves him. He initially shuns her advances, but later turns to her for comfort. However, the two stop seeing each other due to Chuck 's uncle, Jack Bass, convincing him he has an inability to commit to a relationship. After being rejected by Yale, Blair finds unexpected encouragement from Nate. She is later accepted into New York University, and her competitive relationship with Georgina is eventually renewed.
As the season ends, Blair crowns Jenny the new queen of Constance Billard School. In the season finale, it is discovered that Blair slept with Chuck 's uncle Jack and that Chuck had slept with Vanessa Abrams. Chuck then departs for Europe. He later returns to New York and reconciles with Blair while declaring his love for her, and the two finally begin a committed relationship.
In the third season, Blair joins Vanessa, Georgina, and Jenny 's brother Dan at NYU. Much of her story line concerns her inability to attain her previous status at her new school. She finds emotional support from her mother, as well as Chuck. However, she and Chuck separate once again when Blair feels that he manipulated her while competing with his uncle (Chuck had made a deal allowing Jack to sleep with Blair in exchange for returning the hotel Jack had taken from him in an attempt to destroy Chuck).
She later transfers to Columbia University, and learns that an emotionally reformed Chuck was responsible for her enrollment. They later team up as part of a role - playing scheme to help Serena 's mother and Chuck 's adopted mother, Lily. In the season finale, Chuck attempts to propose to Blair, but is interrupted by Dan, who reveals that Chuck had slept with Jenny. Two weeks later, Blair and Serena depart for Paris intending to spend the summer together.
In Season 4, Blair and Chuck become competitive once again, but eventually resume their sexual relations before recognizing their love for one another. When the relationship interferes with their business interests, she and Chuck break up once more. Chuck promises he 'll wait for her, and both affirm their belief that their love will reunite them in the end.
Blair then teams up with Dan when the two share common goals. They also end up working together at W. magazine, where friction develops between the two. On Valentine 's Day, she discovers that Chuck has romantic feelings for Raina Thorpe, the daughter of his business rival. Later, she and Dan spend the evening talking on their cell phones while watching Rosemary 's Baby. Blair later quits W. and is shown asleep with Dan in his Brooklyn flat. Later, upon growing curious of their feelings for one another, Blair and Dan share a kiss before the mid-season hiatus.
Blair eventually decides that she wants to be with Chuck, but shuns him once again after he tries to humiliate Dan. She is later courted by a prince from Monaco named Louis. During a private confrontation, a drunken Chuck punches a window after he finds out Prince Louis has proposed to Blair, which cuts Blair 's face as it shatters. Afterward, Blair chooses to accept a proposal from Louis. Blair later attempts to warn Chuck about potential trouble in his family. She is then abducted by an enemy of the Basses, Raina 's father Russell Thorpe. Chuck later rescues Blair and apologizes for his violent actions. Following a night out together, the two have sex before Chuck advises Blair to return to Louis, believing that she will be happier with him. However, the season ends with the revelation that Blair may be pregnant.
Amidst the fourth season, the romance between Dan and Blair became a polarizing topic among viewers which also drew significant media interest. Jarett Wieselman of the New York Post applauded the development, feeling that Blair had "more chemistry '' with Dan than with Chuck. New York magazine 's Chris Rovzar called Blair and Dan 's story line "believable '', citing their common ground in education and taste. Rovzar further stated, "Since they live in a world where both only seem to have half a dozen real friends (if that), is it so crazy they 'd end up together? '' An article from The Huffington Post declared that "Dan and Blair together are like Harry Burns and Sally Albright reincarnated -- the couple was obviously inspired by When Harry Met Sally. '' It further commented that the couple paralleled Pacey and Joey 's relationship from Dawson 's Creek and concluded that Dan and Blair were "the most inspired storyline and couple of the show ''. Entertainment Weekly stated that Dan and Blair were like stars in an "updated version of You 've Got Mail ''. Tierney Bricker of E! ranked all 25 Gossip Girl couples placing ' Dair ' as the third best couple after Blair and Serena (# 1) and ' Chair ' (# 2). Bricker stated that "(Dan and Blair), out of all the main characters, were the most well - suited for each other. In real life, they would 've been ' endgame '. ''
Dawn Fallik of The Wall Street Journal was less positive, asserting that "both characters have been so Blandified that there 's no fun left in the show. '' A writer for E! Online 's Team WWK labeled the Dan / Blair relationship "nomance nonsense ''.
With regard to Chuck and Blair, Meester stated, "I can really relate to it -- not necessarily because it 's this dramatic, tumultuous relationship, but because the way they love each other is very real, and not for the sake of being dramatic. It 's actual love. There 's nobody for each other but them. '' Meester also expressed fondness for Dan and Blair, however, stating, "I think they 're good for each other in a lot of ways, in a way that Chuck and Blair are n't. '' Badgley claimed that he thought "Blair (was) Dan 's soul mate '' and further stated that he thought the Blair and Dan storyline was "the most exciting for Dan as a person ''. When Hollywoodlife.com asked Ed Westwick, who played Chuck Bass, which character he thought loved Blair more, Chuck or Dan, Westwick pointed to Badgley, saying "definitely him. ''
Producers initially noticed chemistry between Blair and Dan in the Season 1 episode "Bad News Blair ''. According to producer Joshua Safran, the creators planned to revisit their relationship once the timing was right. Safran also stated that the outcome was n't necessarily decided ahead of time. "One thing we are very conscious of -- and I know some fans get upset about this -- is we really try to treat the characters as living, breathing, well - rounded individuals. And we 're often surprised by where their journeys take them; they open new doors for us all the time, '' he said.
Following the twentieth episode of Season 4, Safran spoke on behalf of the series regarding the scene in which Chuck became violent with Blair.
The way we viewed it, I think it 's very clear that Blair is not afraid in those moments, for herself. They have a volatile relationship, they always have, but I do not believe -- or I should say we do not believe -- that it is abuse when it 's the two of them. Chuck does not try to hurt Blair. He punches the glass because he has rage, but he has never, and will never, hurt Blair. He knows it and she knows it, and I feel it 's very important to know that she is not scared -- if anything, she is scared for Chuck -- and what he might do to himself, but she is never afraid of what he might do to her. Leighton and I were very clear about that.
In response to these comments, Carina MacKenzie of Zap2it stated, "We 're left wondering if Safran missed the part where she went home bleeding because Chuck was using physical intimidation to release his own emotions. '' While reviewing the episode, Tierney Bricker of Zap2it felt that there were "really no excuses for Chuck Bass anymore. '' MacKenzie concluded that Chuck 's behavior throughout Season 4 fit the signs of an abusive relationship, citing examples from HelpGuide.org, a non-profit health resource. She noted Chuck 's public humiliation of Blair, his attempt to pawn her during a business deal, and his use of physical intimidation. MacKenzie also called the show 's explanation "disturbing, particularly given the young, female target demographic of Gossip Girl and The CW. ''
In a review for the Los Angeles Times, Judy Berman addressed Safran 's description of Blair during the scene. "Considering how terrified Blair looked at the end of their encounter, and how quickly she got out of there, the show is sending a mixed message at best. '' She went on to state, "We have no right to expect Gossip Girl to be a paragon of morality, or even realism, but the idea that true love requires taking a shard of glass to the face is disturbing even in this alternate, soap - opera dimension. ''
In the fifth season premiere, Blair continues to plan her wedding, but begins to encounter problems in her relationship with Louis. It is later revealed that she is pregnant. Blair tells Chuck that the child is Louis ', and states that part of her wanted Chuck to be the father. Dan becomes Blair 's confidante and is shown to be in love with her, though she remains oblivious to his feelings and states that there is nothing more than friendship between them.
Though she insists that she is in love with Louis, Blair begins to seek Chuck out as the season progresses. The two eventually declare their love for each other prior to a car accident in her limousine while being chased by paparazzi. Though both recover, Blair suffers a miscarriage from the crash. After the crash Blair decides that she must commit herself to Louis, converting to Catholicism and cutting off any connection with Chuck. At the wedding, Gossip Girl releases a recording of Blair confessing her love for Chuck. Nevertheless, Louis and Blair get married, making Blair a Princess of Monaco, though Louis informs her that they will have a loveless marriage of convenience. She then receives support from Dan, leading them to share a kiss on Valentine 's Day. Amidst these developments, Blair grows conflicted between her feelings for Dan and Chuck. After taking steps to end her marriage, she chooses to begin a romantic relationship with Dan. By the end of the season, however, after a debate about which love is the best -- with Dan she feels safe, with Chuck she feels vulnerable -- Blair declares that she is still in love with Chuck, and chooses to pursue him.
In the final season, Blair resumes her romantic relationship with Chuck, while Chuck and his father Bart -- who is revealed to be alive in the previous season -- become bitter rivals. Blair pursues her career as head of Waldorf Designs, with several mishaps, before staging a successful line. In the penultimate episode, Bart falls to his death while trying to attack Chuck atop a building. Afterward, Blair and Chuck depart together. In the series finale, Blair marries Chuck, which means she can not be compelled to testify against him in his father 's murder case. Five years later, Blair is shown to be running her mother 's successful fashion line, working with Jenny on a line called "J for Waldorf '', and she and Chuck are shown to have a son named Henry.
While covering the book series, Janet Malcolm of The New Yorker labeled the character "an antiheroine of the first rank, '' and asserted that "the series belongs to awful Blair, who inspires von Ziegesar 's highest flights of comic fancy. '' She also compared Blair to vintage film and literary figures such as Becky Sharp of Vanity Fair, and Lizzie Eustace of The Eustace Diamonds. In addition, the character has drawn comparisons to notable contemporaries, including Lila Fowler of the Sweet Valley High series.
In the 2007 book Children 's Literature and Culture, writer Harry Edwin Eiss chastised the depiction of Blair 's bulimia. "If handled properly, the inclusion of her illness could have provided a powerful lesson for young adult readers who worry about their weight and food consumption. Unfortunately, Cecily von Ziegesar, the author of the series, presents a seriously flawed treatment of the problem. In a failed attempt at humor, the writer regards Blair 's sickness as just another source of gossip, '' he said. Emily Nussbaum of New York magazine had similar comments, calling the bulimia "more of an icky weakness than a full - fledged pathology. '' However, she went on to commend Blair as "hilariously self - centered ''.
Julie Opipari of Manga Maniac Cafe gave the initial Gossip Girl manga a negative review, citing displeasure with the characters and plot. However, she acknowledged that this gave her a greater appreciation for Blair in the second volume, noting that Blair "really had to learn how to rough it '' after losing her privileged lifestyle. She went on to state that "Blair is one character that is fun to hate on. So imagine my surprise when I actually started to like her by the end of the book ''.
The show 's breakout character, Blair Waldorf has garnered much media recognition. Yahoo! proclaims Blair a member of "television 's pantheon of razor - witted, solipsistic high school Alpha females. '' While commenting on Meester, New York magazine 's 2008 cover story of the series states, "Her villain - you - want - to - root - for is the most sophisticated performance on the show. '' In another 2008 article, People magazine commented that "Meester has burst out of this ensemble to stardom. '' Variety described her performance as similar to that of "a predatory junior Joan Collins who practically breathes fire out of her pinched, perfectly WASP - ish nostrils. '' FHM Online ranked the actress the "Hottest TV Star '' of autumn 2008, stating that as Blair Waldorf, "Leighton Meester has stolen the spotlight with her mind - blowing good looks and amazing performance. '' OK! magazine likened her character to Audrey Hepburn 's portrayal of Holly Golightly. The physical similarity was also noted by USA Today.
In its 2009 "Hot List '', Rolling Stone cited Blair as "the reason we love the back - stabby soap most. '' Regarding the fictional fame of Blair 's friend Serena, Tim Stack of Entertainment Weekly asserted that "Serena may be the star of the media but Blair is quickly becoming the star of this show. '' While citing Serena 's long - revered allure, Glamour and its readers compared the two characters in 2008, with Blair being recognized as more beautiful than Serena. In a review from The Atlantic, Blair and Serena 's friendship was praised as "it offered a relationship whose depth and complexity approached Rory and Paris ' (from Gilmore Girls). ''
In May 2009, Blair received attention from Forbes, which interviewed her via the series ' writers. Television Without Pity listed Meester in their "Golden Globes 2009: Overlooked TV Shows and Performances '' article, labeling Blair "so multi-faceted, well - dressed and beautifully played that she elevates this teen soap to something we do n't even feel guilty about admitting we love. '' Meester won the Teen Choice Award for "Choice TV Actress Drama '' in 2009, and again in 2010. In 2009, she was voted the "Best Mean Girl '' in Zap2it 's first poll of the best television characters in the 2000s. In February 2012, Zap2it held another poll to determine TV 's Most Crushworthy. Blair was elected TV 's Most Crushworthy 1 % Female over Temperance "Bones '' Brennan. The 1 percenters are people that "have everything going for them with their fantastic good looks and their opulent lifestyle ''. Her relationship with Chuck Bass was included in TV Guide 's list of "The Best TV Couples of All Time '' as well as Entertainment Weekly 's "30 Best ' Will They / Wo n't They? ' TV Couples ''.
The character 's wardrobe -- credited to designers Abigail Lorick and Eric Daman -- was popular, earning mentions from periodicals such as InStyle and New York, along with recognition from websites. TV Guide listed Blair among its "Best Dressed TV Characters of 2007 ''. Entertainment Weekly named Blair Waldorf and Chuck Bass the "Most Stylish '' characters of 2008. Lifetime television ranks Blair first in its listing of "The Top 10 Best - Dressed TV Characters '', while Glamour has named her among its best - dressed TV characters of all time. In a Vanity Fair interview, costume designers Eric Daman and Meredith Markworth - Pollack named Audrey Hepburn and Vogue editor - in - chief Anna Wintour as their inspirations when dressing Meester as Blair. The designers also cited New York socialites Tinsley Mortimer and Arden Wohl as influences. TV Guide named her the sixth most fashionable TV character.
|
who played mason in wizards of waverly place | Gregg Sulkin - wikipedia
Gregg Sulkin is an English - American actor. At age ten he made his screen debut in the 2002 mini-series Doctor Zhivago. He later landed the starring role in the 2006 British release Sixty Six, and subsequently became known for appearing in the Disney Channel comedy series As the Bell Rings and Wizards of Waverly Place. In 2010, he starred in the Disney Channel television movie Avalon High. He also appeared in the television special The Wizards Return: Alex vs. Alex. He starred on MTV 's show Faking It as Liam Booker from 2014 until its cancellation in 2016. He also appeared on Pretty Little Liars as Ezra 's brother, Wesley "Wes '' Fitzgerald. In 2016, he starred in the role of Sam Fuller in the horror - thriller film Do n't Hang Up. He is currently starring as Chase Stein in the TV show Runaways, based on the Marvel Comics series of the same name.
Sulkin was born in Westminster, London. He is Jewish and had his Bar Mitzvah at the Western Wall in Jerusalem. He attended Highgate School in North London.
Sulkin made his acting debut in the 2002 mini-series Doctor Zhivago. He subsequently starred in the comedy Sixty Six, as Bernie Rubens, alongside Helena Bonham Carter, Eddie Marsan and Catherine Tate. Sulkin also played the role of JJ in the Disney Channel comedy, As the Bell Rings, worked on a CBBC children sci - fi show The Sarah Jane Adventures (spin - off of Doctor Who), playing Adam in series 3 two - episode story The Mad Woman in the Attic.
Sulkin was part of Disney Channel 's Pass the Plate as Gregg from the UK. He had a recurring guest role on the Disney Channel series Wizards of Waverly Place, where he played Alex 's love interest Mason Greyback; he reprised the role in four episodes of season 3 and returned to the series in its fourth season, remaining through to its finale. Sulkin also landed a role in the thriller The Heavy.
In 2010, he went to New Zealand to film the Disney Channel Original Movie Avalon High, which premiered on 12 November 2010. In an interview with Kyle Martino, aired on Soccer Talk Live on the Fox Soccer Channel in the US, Sulkin announced that he was a fan of Arsenal. Sulkin participated in the Disney Channel 's Friends for Change Games and was on the Yellow Team. He reprised his role as Mason Greyback in The Wizards Return: Alex vs. Alex, which premiered on Disney Channel in March 2013.
In 2012, Sulkin starred in the American teen drama series Pretty Little Liars as a recurring character, Wesley Fitzgerald, brother of Ezra Fitz. In February 2013, it was announced that Sulkin would play Julian Fineman in FOX 's television adaptation of Lauren Oliver 's young adult novel, Delirium. On 8 May, it was reported that Fox had decided not to pick up Delirium. Since 2014, Sulkin has starred as Liam Booker in the MTV comedy Faking It. Sulkin defeated Victoria Justice on the 30 July 2015 episode of Spike TV 's Lip Sync Battle, where he wore a wig and went shirtless to perform the hit Kelis song "Milkshake ''.
Sulkin starred in the leading role of Sam Fuller in the horror - thriller film, Do n't Hang Up which was released in theatres on 10 February 2017.
Sulkin currently plays Chase Stein on Runaways, a Hulu series set in the Marvel Cinematic Universe.
Sulkin dated actress Bella Thorne from 2015 to 2016.
Sulkin became an American citizen on 23 May 2018, retaining his British citizenship.
|
____ is the percentage of black or white added to a main color | RGB color model - wikipedia
The RGB color model is an additive color model in which red, green and blue light are added together in various ways to reproduce a broad array of colors. The name of the model comes from the initials of the three additive primary colors, red, green and blue.
The main purpose of the RGB color model is for the sensing, representation and display of images in electronic systems, such as televisions and computers, though it has also been used in conventional photography. Before the electronic age, the RGB color model already had a solid theory behind it, based in human perception of colors.
RGB is a device - dependent color model: different devices detect or reproduce a given RGB value differently, since the color elements (such as phosphors or dyes) and their response to the individual R, G and B levels vary from manufacturer to manufacturer, or even in the same device over time. Thus an RGB value does not define the same color across devices without some kind of color management.
Typical RGB input devices are color TV and video cameras, image scanners, and digital cameras. Typical RGB output devices are TV sets of various technologies (CRT, LCD, plasma, OLED, quantum dots, etc.), computer and mobile phone displays, video projectors, multicolor LED displays and large screens such as JumboTron. Color printers, on the other hand, are not RGB devices but subtractive color devices (typically using the CMYK color model).
This article discusses concepts common to all the different color spaces that use the RGB color model, which are used in one implementation or another in color image - producing technology.
To form a color with RGB, three light beams (one red, one green and one blue) must be superimposed (for example by emission from a black screen or by reflection from a white screen). Each of the three beams is called a component of that color, and each of them can have an arbitrary intensity, from fully off to fully on, in the mixture.
The RGB color model is additive in the sense that the three light beams are added together, and their light spectra add, wavelength for wavelength, to make the final color 's spectrum. This is essentially opposite to the subtractive color model that applies to paints, inks, dyes, and other substances whose color depends on reflecting the light under which we see them.
Zero intensity for each component gives the darkest color (no light, considered the black), and full intensity of each gives a white; the quality of this white depends on the nature of the primary light sources, but if they are properly balanced, the result is a neutral white matching the system 's white point. When the intensities for all the components are the same, the result is a shade of gray, darker or lighter depending on the intensity. When the intensities are different, the result is a colorized hue, more or less saturated depending on the difference of the strongest and weakest of the intensities of the primary colors employed.
When one of the components has the strongest intensity, the color is a hue near this primary color (reddish, greenish or bluish), and when two components have the same strongest intensity, then the color is a hue of a secondary color (a shade of cyan, magenta or yellow). A secondary color is formed by the sum of two primary colors of equal intensity: cyan is green + blue, magenta is red + blue, and yellow is red + green. Every secondary color is the complement of one primary color; when a primary and its complementary secondary color are added together, the result is white: cyan complements red, magenta complements green, and yellow complements blue.
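As a rough illustration of additive mixing, the following Python sketch (with a hypothetical helper name, and assuming 8-bit components in the 0–255 range) adds two full-intensity primaries component-wise and clamps the result, reproducing the secondary colors listed above:

```python
# additive mixing: superimposing two light beams adds their components
def add_light(c1, c2):
    # clamp each summed component to the 8-bit range
    return tuple(min(a + b, 255) for a, b in zip(c1, c2))

red, green, blue = (255, 0, 0), (0, 255, 0), (0, 0, 255)
print(add_light(red, green))   # (255, 255, 0)  -> yellow
print(add_light(green, blue))  # (0, 255, 255)  -> cyan
print(add_light(red, blue))    # (255, 0, 255)  -> magenta
```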
The RGB color model itself does not define what is meant by red, green and blue colorimetrically, and so the results of mixing them are not specified as absolute, but relative to the primary colors. When the exact chromaticities of the red, green and blue primaries are defined, the color model then becomes an absolute color space, such as sRGB or Adobe RGB; see RGB color spaces for more details.
The choice of primary colors is related to the physiology of the human eye; good primaries are stimuli that maximize the difference between the responses of the cone cells of the human retina to light of different wavelengths, and that thereby make a large color triangle.
The normal three kinds of light - sensitive photoreceptor cells in the human eye (cone cells) respond most to yellow (long wavelength or L), green (medium or M), and violet (short or S) light (peak wavelengths near 570 nm, 540 nm and 440 nm, respectively). The difference in the signals received from the three kinds allows the brain to differentiate a wide gamut of different colors, while being most sensitive (overall) to yellowish - green light and to differences between hues in the green - to - orange region.
As an example, suppose that light in the orange range of wavelengths (approximately 577 nm to 597 nm) enters the eye and strikes the retina. Light of these wavelengths would activate both the medium and long wavelength cones of the retina, but not equally -- the long - wavelength cells will respond more. The difference in the response can be detected by the brain, and this difference is the basis of our perception of orange. Thus, the orange appearance of an object results from light from the object entering our eye and stimulating the different cones simultaneously but to different degrees.
Use of the three primary colors is not sufficient to reproduce all colors; only colors within the color triangle defined by the chromaticities of the primaries can be reproduced by additive mixing of non-negative amounts of those colors of light.
The RGB color model is based on the Young -- Helmholtz theory of trichromatic color vision, developed by Thomas Young and Hermann Helmholtz in the early to mid nineteenth century, and on James Clerk Maxwell 's color triangle that elaborated that theory (circa 1860).
The first experiments with RGB in early color photography were made in 1861 by Maxwell himself, and involved the process of combining three color - filtered separate takes. To reproduce the color photograph, three matching projections over a screen in a dark room were necessary.
The additive RGB model and variants such as orange -- green -- violet were also used in the Autochrome Lumière color plates and other screen - plate technologies such as the Joly color screen and the Paget process in the early twentieth century. Color photography by taking three separate plates was used by other pioneers, such as the Russian Sergey Prokudin - Gorsky in the period 1909 through 1915. Such methods lasted until about 1960 using the expensive and extremely complex tri-color carbro Autotype process.
When employed, the reproduction of prints from three - plate photos was done by dyes or pigments using the complementary CMY model, by simply using the negative plates of the filtered takes: reverse red gives the cyan plate, and so on.
Before the development of practical electronic TV, there were patents on mechanically scanned color systems as early as 1889 in Russia. The color TV pioneer John Logie Baird demonstrated the world 's first RGB color transmission in 1928, and also the world 's first color broadcast in 1938, in London. In his experiments, scanning and display were done mechanically by spinning colorized wheels.
The Columbia Broadcasting System (CBS) began an experimental RGB field - sequential color system in 1940. Images were scanned electrically, but the system still used a moving part: the transparent RGB color wheel rotating at above 1,200 rpm in synchronism with the vertical scan. The camera and the cathode - ray tube (CRT) were both monochromatic. Color was provided by color wheels in the camera and the receiver. More recently, color wheels have been used in field - sequential projection TV receivers based on the Texas Instruments monochrome DLP imager.
The modern RGB shadow mask technology for color CRT displays was patented by Werner Flechsig in Germany in 1938.
Early personal computers of the late 1970s and early 1980s, such as those from Apple, Atari and Commodore, did not use RGB as their main method to manage colors, but rather composite video. IBM introduced a 16 - color scheme (four bits -- one bit each for red, green, blue, and intensity) with the Color Graphics Adapter (CGA) for its first IBM PC (1981), later improved with the Enhanced Graphics Adapter (EGA) in 1984. The first manufacturer of a truecolor graphics card for PCs (the TARGA) was Truevision in 1987, but it was not until the arrival of the Video Graphics Array (VGA) in 1987 that RGB became popular, mainly due to the analog signals in the connection between the adapter and the monitor which allowed a very wide range of RGB colors. Truecolor itself had to wait a few more years, because the original VGA cards were palette - driven just like EGA (although with more freedom than EGA); however, since the VGA connection was analogue, later variants of VGA (made by various manufacturers under the informal name Super VGA) eventually added truecolor. In 1992, magazines heavily advertised truecolor Super VGA hardware.
One common application of the RGB color model is the display of colors on a cathode ray tube (CRT), liquid crystal display (LCD), plasma display, or organic light emitting diode (OLED) display such as a television, a computer 's monitor, or a large scale screen. Each pixel on the screen is built by driving three small and very close but still separated RGB light sources. At common viewing distance, the separate sources are indistinguishable, which tricks the eye into seeing a given solid color. All the pixels together, arranged across the rectangular screen surface, form the color image.
During digital image processing each pixel can be represented in the computer memory or interface hardware (for example, a graphics card) as binary values for the red, green, and blue color components. When properly managed, these values are converted into intensities or voltages via gamma correction to correct the inherent nonlinearity of some devices, such that the intended intensities are reproduced on the display.
The Quattron released by Sharp uses RGB color and adds yellow as a sub-pixel, supposedly allowing an increase in the number of available colors.
RGB is also the term referring to a type of component video signal used in the video electronics industry. It consists of three signals -- red, green and blue -- carried on three separate cables / pins. RGB signal formats are often based on modified versions of the RS - 170 and RS - 343 standards for monochrome video. This type of video signal is widely used in Europe since it is the best quality signal that can be carried on the standard SCART connector. This signal is known as RGBS (4 BNC / RCA terminated cables exist as well), but it is directly compatible with RGBHV used for computer monitors (usually carried on 15 - pin cables terminated with 15 - pin D - sub or 5 BNC connectors), which carries separate horizontal and vertical sync signals.
Outside Europe, RGB is not very popular as a video signal format; S - Video takes that spot in most non-European regions. However, almost all computer monitors around the world use RGB.
A framebuffer is a digital device for computers which stores data in the so - called video memory (comprising an array of Video RAM or similar chips). This data goes either to three digital - to - analog converters (DACs) (for analog monitors), one per primary color, or directly to digital monitors. Driven by software, the CPU (or other specialized chips) write the appropriate bytes into the video memory to define the image. Modern systems encode pixel color values by devoting eight bits to each of the R, G and B components. RGB information can be either carried directly by the pixel bits themselves or provided by a separate color look - up table (CLUT) if indexed color graphic modes are used.
A CLUT is a specialized RAM that stores R, G and B values that define specific colors. Each color has its own address (index) -- consider it as a descriptive reference number that provides that specific color when the image needs it. The content of the CLUT is much like a palette of colors. Image data that uses indexed color specifies addresses within the CLUT to provide the required R, G and B values for each specific pixel, one pixel at a time. Of course, before displaying, the CLUT has to be loaded with R, G and B values that define the palette of colors required for each image to be rendered. Some video applications store such palettes in PAL files (Microsoft AOE game, for example uses over half - a-dozen) and can combine CLUTs on screen.
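A minimal Python sketch of this indexed-color lookup, assuming a tiny hypothetical four-entry palette, with ordinary lists standing in for the video memory and the CLUT RAM:

```python
# the CLUT holds full (R, G, B) entries; pixels store only palette indices
clut = [(0, 0, 0), (255, 0, 0), (0, 255, 0), (0, 0, 255)]  # hypothetical 4-entry palette
indexed_pixels = [0, 1, 1, 3, 2]                           # one index per pixel

# rendering resolves each index through the CLUT, one pixel at a time
rgb_pixels = [clut[i] for i in indexed_pixels]
print(rgb_pixels)  # [(0, 0, 0), (255, 0, 0), (255, 0, 0), (0, 0, 255), (0, 255, 0)]
```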
This indirect scheme restricts the number of colors available simultaneously in an image to the number of CLUT entries, typically 256. However, each entry in an RGB24 CLUT holds 8 bits for each of the R, G and B primaries (values of 0 -- 255 per channel), so by simple combinatorics any given entry can be set to one of 16,777,216 possible colors. The advantage is that an indexed - color image file can be significantly smaller than it would be with 8 bits per pixel for each primary.
Modern storage, however, is far less costly, greatly reducing the need to minimize image file size. By using an appropriate combination of red, green and blue intensities, many colors can be displayed. Current typical display adapters use up to 24 bits of information for each pixel: 8 bits per component multiplied by three components (see the Digital representations section below), with each primary value ranging from 0 to 255. With this system, 16,777,216 (256 ^ 3, or 2 ^ 24) discrete combinations of R, G and B values are allowed, providing millions of different (though not necessarily distinguishable) hue, saturation and lightness shades. Increased shading has been implemented in various ways, with some formats such as .png and .tga files among others using a fourth greyscale color channel as a masking layer, often called RGB32.
For images with a modest range of brightnesses from the darkest to the lightest, eight bits per primary color provides good - quality images, but extreme images require more bits per primary color as well as advanced display technology. For more information see High Dynamic Range (HDR) imaging.
In classic cathode ray tube (CRT) devices, the brightness of a given point over the fluorescent screen due to the impact of accelerated electrons is not proportional to the voltages applied to the electron gun control grids, but to an expansive function of that voltage. The amount of this deviation is known as its gamma value (γ), the argument for a power law function, which closely describes this behavior. A linear response is given by a gamma value of 1.0, but actual CRT nonlinearities have a gamma value around 2.0 to 2.5.
Similarly, the intensity of the output on TV and computer display devices is not directly proportional to the R, G and B applied electric signals (or file data values which drive them through Digital - to - Analog Converters). On a typical standard 2.2 - gamma CRT display, an input intensity RGB value of (0.5, 0.5, 0.5) only outputs about 22 % of full brightness (1.0, 1.0, 1.0), instead of 50 %. To obtain the correct response, a gamma correction is used in encoding the image data, and possibly further corrections as part of the color calibration process of the device. Gamma affects black - and - white TV as well as color. In standard color TV, broadcast signals are gamma corrected.
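The power-law behaviour described above can be sketched in a few lines of Python (function names hypothetical, assuming a simple gamma of 2.2 rather than a full sRGB transfer curve); it reproduces the figure quoted in the text, where a stored value of 0.5 drives only about 22 % of full brightness:

```python
def gamma_encode(linear, gamma=2.2):
    # encoding applied to image data: linear intensity in [0, 1] -> stored value
    return linear ** (1.0 / gamma)

def gamma_decode(stored, gamma=2.2):
    # display response: stored value -> emitted intensity
    return stored ** gamma

print(round(gamma_decode(0.5), 3))    # ~0.218, i.e. about 22% of full brightness
print(round(gamma_encode(0.218), 2))  # ~0.5, the value that must be stored to reach it
```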
In color television and video cameras manufactured before the 1990s, the incoming light was separated by prisms and filters into the three RGB primary colors feeding each color into a separate video camera tube (or pickup tube). These tubes are a type of cathode ray tube, not to be confused with that of CRT displays.
With the arrival of commercially viable charge - coupled device (CCD) technology in the 1980s, first the pickup tubes were replaced with this kind of sensor. Later, higher scale integration electronics was applied (mainly by Sony), simplifying and even removing the intermediate optics, thereby reducing the size of home video cameras and eventually leading to the development of full camcorders. Current webcams and mobile phones with cameras are the most miniaturized commercial forms of such technology.
Photographic digital cameras that use a CMOS or CCD image sensor often operate with some variation of the RGB model. In a Bayer filter arrangement, green is given twice as many detectors as red and blue (ratio 1: 2: 1) in order to achieve higher luminance resolution than chrominance resolution. The sensor has a grid of red, green and blue detectors arranged so that the first row is RGRGRGRG, the next is GBGBGBGB, and that sequence is repeated in subsequent rows. For every channel, missing pixels are obtained by interpolation in the demosaicing process to build up the complete image. Additional processing is also applied in order to map the camera RGB measurements into a standard RGB color space such as sRGB.
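As a sketch of the mosaic layout just described (assuming the common RGGB variant of the Bayer pattern; the function name is hypothetical), the following Python snippet reproduces the RGRG... / GBGB... row sequence, which is why green ends up with twice as many detectors as red or blue:

```python
def bayer_color(row, col):
    # even rows alternate R and G; odd rows alternate G and B
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

for row in range(2):
    print("".join(bayer_color(row, col) for col in range(8)))
# RGRGRGRG
# GBGBGBGB
```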
In computing, an image scanner is a device that optically scans images (printed text, handwriting, or an object) and converts it to a digital image which is transferred to a computer. Among other formats, flat, drum and film scanners exist, and most of them support RGB color. They can be considered the successors of early telephotography input devices, which were able to send consecutive scan lines as analog amplitude modulation signals through standard telephonic lines to appropriate receivers; such systems were in use in press since the 1920s to the mid-1990s. Color telephotographs were sent as three separated RGB filtered images consecutively.
Currently available scanners typically use charge - coupled device (CCD) or contact image sensor (CIS) as the image sensor, whereas older drum scanners use a photomultiplier tube as the image sensor. Early color film scanners used a halogen lamp and a three - color filter wheel, so three exposures were needed to scan a single color image. Due to heating problems, the worst of them being the potential destruction of the scanned film, this technology was later replaced by non-heating light sources such as color LEDs.
A color in the RGB color model is described by indicating how much of each of the red, green, and blue is included. The color is expressed as an RGB triplet (r, g, b), each component of which can vary from zero to a defined maximum value. If all the components are at zero the result is black; if all are at maximum, the result is the brightest representable white.
These ranges may be quantified in several different ways: as fractions from 0 to 1, with any value in between; as percentages, from 0 % to 100 %; as integers from 0 to 255, the range a single 8 - bit byte can offer, often written in hexadecimal from 00 to FF; or as larger integer ranges such as 0 to 65535 when 16 bits per component are used.
For example, brightest saturated red is written in the different RGB notations as (1.0, 0.0, 0.0), (100 %, 0 %, 0 %), (255, 0, 0) or hexadecimal #FF0000.
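A small Python sketch (helper names hypothetical) converting between the 8-bit-per-channel triplet and the hexadecimal notation mentioned above:

```python
def triplet_to_hex(r, g, b):
    # 8-bit-per-channel triplet -> hexadecimal notation
    return "#{:02X}{:02X}{:02X}".format(r, g, b)

def hex_to_triplet(code):
    # hexadecimal notation -> 8-bit-per-channel triplet
    code = code.lstrip("#")
    return tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))

print(triplet_to_hex(255, 0, 0))   # '#FF0000'  (brightest saturated red)
print(hex_to_triplet("#FF0000"))   # (255, 0, 0)
```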
In many environments, the component values within the ranges are not managed as linear (that is, the numbers are nonlinearly related to the intensities that they represent), as in digital cameras and TV broadcasting and receiving due to gamma correction, for example. Linear and nonlinear transformations are often dealt with via digital image processing. Representations with only 8 bits per component are considered sufficient if gamma encoding is used.
Following is the mathematical relationship between the RGB color space and the HSI color space (hue, saturation, and intensity):
$$I = \frac{R + G + B}{3}$$
$$S = 1 - \frac{3}{R + G + B}\,\min(R, G, B)$$
$$H = \cos^{-1}\left( \frac{\tfrac{1}{2}\left[ (R - G) + (R - B) \right]}{\left[ (R - G)^{2} + (R - B)(G - B) \right]^{1/2}} \right)$$
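A direct Python transcription of the relationship above (function name hypothetical), assuming R, G and B are normalized to the 0–1 range; the clamp guards against floating-point round-off, and the final test extends the hue beyond 180° when blue exceeds green, as is conventional for this formula:

```python
import math

def rgb_to_hsi(r, g, b):
    total = r + g + b
    i = total / 3.0
    s = 0.0 if total == 0 else 1.0 - 3.0 * min(r, g, b) / total
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0  # achromatic: hue is undefined, returned as 0 by convention
    else:
        h = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        if b > g:
            h = 360.0 - h
    return h, s, i

print(rgb_to_hsi(1.0, 0.0, 0.0))  # (0.0, 1.0, 0.333...) for pure red
```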
The RGB color model is one of the most common ways to encode color in computing, and several different binary digital representations are in use. The main characteristic of all of them is the quantization of the possible values per component (technically a sample) by using only integer numbers within some range, usually from 0 to some power of two minus one (2 ^ n − 1), to fit them into some bit groupings. Encodings of 1, 2, 4, 5, 8 and 16 bits per color are commonly found; the total number of bits used for an RGB color is typically called the color depth.
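A brief Python sketch (names hypothetical) of the quantization and bit grouping just described: a 0–1 value is mapped onto an n-bit integer, and three 8-bit components are packed into a single 24-bit color value:

```python
def quantize(value, bits):
    # map a float in [0, 1] onto an integer in [0, 2**bits - 1]
    return round(value * (2 ** bits - 1))

def pack_rgb24(r, g, b):
    # pack three 8-bit components into one 24-bit integer (a common color depth)
    return (r << 16) | (g << 8) | b

print(quantize(1.0, 8))            # 255
print(quantize(0.5, 5))            # 16  (5 bits per channel, as in some 15/16-bit modes)
print(hex(pack_rgb24(255, 0, 0)))  # 0xff0000
```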
Since colors are usually defined by three components, not only in the RGB model but also in other color models such as CIELAB and Y'UV, a three - dimensional volume can be described by treating the component values as ordinary Cartesian coordinates in a Euclidean space. For the RGB model, this is represented by a cube using non-negative values within a 0 -- 1 range, assigning black to the origin at the vertex (0, 0, 0), and with increasing intensity values running along the three axes up to white at the vertex (1, 1, 1), diagonally opposite black.
An RGB triplet (r, g, b) represents the three - dimensional coordinate of the point of the given color within the cube or its faces or along its edges. This approach allows computations of the color similarity of two given RGB colors by simply calculating the distance between them: the shorter the distance, the higher the similarity. Out - of - gamut computations can also be performed this way.
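A minimal Python sketch of the similarity computation described above (function name hypothetical), treating two RGB triplets as points in the unit cube and taking their Euclidean distance:

```python
import math

def rgb_distance(c1, c2):
    # Euclidean distance between two RGB triplets in the 0-1 cube
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

red, orange = (1.0, 0.0, 0.0), (1.0, 0.5, 0.0)
white = (1.0, 1.0, 1.0)
print(rgb_distance(red, orange))  # 0.5          -> fairly similar
print(rgb_distance(red, white))   # about 1.414  -> much less similar
```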
The RGB color model for HTML was formally adopted as an Internet standard in HTML 3.2, though it had been in use for some time before that. Initially, the limited color depth of most video hardware led to a limited color palette of 216 RGB colors, defined by the Netscape Color Cube. With the predominance of 24 - bit displays, the use of the full 16.7 million colors of the HTML RGB color code no longer poses problems for most viewers.
The web - safe color palette consists of the 216 (6 ^ 3) combinations of red, green, and blue where each channel can take one of six values (in hexadecimal): # 00, # 33, # 66, # 99, # CC or # FF (based on the 0 to 255 range for each value discussed above). These hexadecimal values equal 0, 51, 102, 153, 204 and 255 in decimal, which correspond to 0 %, 20 %, 40 %, 60 %, 80 % and 100 % in terms of intensity. This seems fine for splitting up 216 colors into a cube of dimension 6. However, lacking gamma correction, the perceived intensity on a standard 2.5 gamma CRT / LCD is only: 0 %, 2 %, 10 %, 28 %, 57 %, 100 %. See the actual web safe color palette for a visual confirmation that the majority of the colors produced are very dark, or see Xona.com Color List for a side - by - side comparison of proper colors next to their equivalent lacking proper gamma correction.
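The 216-color web-safe palette can be enumerated directly from the six per-channel values listed above; a short Python sketch (variable names hypothetical):

```python
# the six permitted values per channel: 0x00, 0x33, 0x66, 0x99, 0xCC, 0xFF
levels = [0x00, 0x33, 0x66, 0x99, 0xCC, 0xFF]
web_safe = ["#{:02X}{:02X}{:02X}".format(r, g, b)
            for r in levels for g in levels for b in levels]

print(len(web_safe))   # 216
print(web_safe[:3])    # ['#000000', '#000033', '#000066']
```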
The syntax in CSS is rgb(#, #, #), where each # gives the proportion of red, green and blue respectively. This value can be used with such properties as "background-color: '' or (for text) "color: ''.
Proper reproduction of colors, especially in professional environments, requires color management of all the devices involved in the production process, many of them using RGB. Color management results in several transparent conversions between device - independent and device - dependent color spaces (RGB and others, as CMYK for color printing) during a typical production cycle, in order to ensure color consistency throughout the process. Along with the creative processing, such interventions on digital images can damage the color accuracy and image detail, especially where the gamut is reduced. Professional digital devices and software tools allow for 48 bpp (bits per pixel) images to be manipulated (16 bits per channel), to minimize any such damage.
ICC - compliant applications, such as Adobe Photoshop, use either the Lab color space or the CIE 1931 color space as a Profile Connection Space when translating between color spaces.
All luminance -- chrominance formats used in the different TV and video standards such as YIQ for NTSC, YUV for PAL, YDbDr for SECAM, and YPbPr for component video use color difference signals, by which RGB color images can be encoded for broadcasting / recording and later decoded into RGB again to display them. These intermediate formats were needed for compatibility with pre-existent black - and - white TV formats. Also, those color difference signals need lower data bandwidth compared to full RGB signals.
Similarly, current high - efficiency digital color image data compression schemes such as JPEG and MPEG store RGB color internally in YCbCr format, a digital luminance - chrominance format based on YPbPr. The use of YCbCr also allows lossy subsampling of the chroma channels (typically to 4: 2: 2 or 4: 1: 1 ratios), which helps to reduce the resulting file size.
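As a rough sketch of the chroma subsampling mentioned above (assuming a 4:2:2 scheme and a hypothetical row of Cb samples), the following Python snippet keeps every luma sample but only every second chroma sample along a row, halving the chroma data:

```python
def subsample_422(chroma_row):
    # 4:2:2 -> keep every second chroma sample horizontally; luma is untouched
    return chroma_row[::2]

cb_row = [10, 12, 11, 13, 10, 12, 11, 13]   # hypothetical Cb samples for one row
print(subsample_422(cb_row))                # [10, 11, 10, 11]
```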
|
who won the soccer world cup in 1982 | 1982 FIFA World Cup - wikipedia
The 1982 FIFA World Cup, the 12th FIFA World Cup, was held in Spain from 13 June to 11 July 1982.
The tournament was won by Italy, who defeated West Germany 3 -- 1 during the final match, held in the Spanish capital of Madrid. It was Italy 's third World Cup win and first since 1938. The defending champions Argentina were eliminated in the second group round. Algeria, Cameroon, Honduras, Kuwait and New Zealand made their first appearances in the finals.
The tournament featured the first penalty shoot - out in World Cup competition. It was also the third time (after 1934 and 1966) that all four semifinalists were European.
In the first round of Group 3, Hungary defeated El Salvador 10 -- 1, equalling the largest margin of victory recorded in the finals (Hungary over South Korea 9 -- 0 in 1954, and Yugoslavia over Zaire 9 -- 0 in 1974).
Spain was chosen as the host nation by FIFA in London, England on 6 July 1966. Hosting rights for the 1974 and 1978 tournaments were awarded at the same time. West Germany agreed a deal with Spain by which Spain would support West Germany for the 1974 tournament, and in return West Germany would allow Spain to bid for the 1982 World Cup unopposed.
For the first time, the World Cup finals expanded from 16 to 24 teams. This allowed more teams to participate, especially from Africa and Asia.
Teams absent from the finals were 1974 and 1978 runners - up Netherlands (eliminated by Belgium and France), Mexico (eliminated by Honduras and El Salvador), and three - time 1970s participants Sweden (eliminated by Scotland and Northern Ireland). Northern Ireland qualified for the first time since 1958. Belgium, Czechoslovakia, El Salvador, England, and the Soviet Union were back in the Finals after a 12 - year absence. England had its first successful World Cup qualifying campaign in 20 years -- the English team had qualified automatically as hosts in 1966 and as defending champions in 1970, then had missed the 1974 and 1978 tournaments. Yugoslavia and Chile were also back after having missed the 1978 tournament.
Algeria, Cameroon, Honduras, Kuwait, and New Zealand all participated in the World Cup for the first time. As of 2018, this was the last time that El Salvador and Kuwait qualified for a FIFA World Cup finals.
There was some consideration given as to whether England, Northern Ireland, and Scotland should withdraw from the tournament because of the Falklands War between Argentina and the United Kingdom. A directive issued by the British sports minister Neil Macfarlane in April, at the start of the conflict, suggested that there should be no contact between British representative teams and Argentina. This directive was not rescinded until August, following the end of hostilities. Macfarlane reported to Prime Minister Margaret Thatcher that some players and officials were uneasy about participating because of the casualties suffered by British forces. FIFA advised the British Government that there was no prospect that Argentina (the defending champions) would be asked to withdraw. It also became apparent that no other countries would withdraw from the tournament. It was decided to allow the British national teams to participate so that Argentina could not use their absence for propaganda purposes, reversing the intended effect of applying political pressure onto Argentina.
The following 24 teams qualified for the final tournament.
The 1982 competition used a unique format. The first round was a round - robin group stage containing six groups of four teams each. Two points were awarded for a win and one for a draw, with goal difference used to separate teams equal on points. The top two teams in each group advanced. In the second round, the twelve remaining teams were split into four groups of three teams each, with the winner of each group progressing to the knockout semi-final stage.
The composition of the groups in the second round was predetermined before the start of the tournament. In the aggregate, Groups A and B were to include one team from each of Groups 1 through 6, and Groups C and D included the remaining six teams. The winners of Groups 1 and 3 were in Group A whilst the runners - up were in Group C. The winners of Groups 2 and 4 were in Group B whilst the runners - up were in Group D. The winner of Group 5 was in Group D whilst the runner - up was in Group B. The winner of Group 6 was in Group C whilst the runner - up was in Group A. Thus, Group A mirrored Group C, and Group B mirrored Group D with the winners and runners - up from the first round being placed into opposite groups in the second round.
The second - round groups that mirrored each other (based on the first - round groupings) faced off against each other in the semifinals. Thus, the Group A winner played the Group C winner, and the Group B winner played the Group D winner. This meant that if two teams which played in the same first - round group both emerged from the second round, they would meet for the second time in the tournament in a semifinal match. It also guaranteed that the final match would feature two teams that had not previously played each other in the tournament. As it turned out, Italy and Poland, who were both in Group 1 in the first round, each won their second - round groups and played each other in a semifinal match.
In Group 1, newcomers Cameroon held both Poland and Italy to draws, and were denied a place in the next round on the basis of having scored fewer goals than Italy (the sides had an equal goal difference). Poland and Italy qualified over Cameroon and Peru. Italian journalists and tifosi criticised their team for their uninspired performances that managed three draws; the squad was reeling from the recent Serie A scandal, where national players were suspended for match fixing and illegal betting.
Group 2 saw one of the great World Cup upsets on the first day with the 2 -- 1 victory of Algeria over reigning European Champions West Germany. In the final match in the group, West Germany met Austria.
Algeria had already played their final group game the day before, and West Germany and Austria knew that a West German win by 1 or 2 goals would qualify them both, while a larger German victory would qualify Algeria over Austria, and a draw or an Austrian win would eliminate the Germans. After 10 minutes of all - out attack, West Germany scored through a goal by Horst Hrubesch. After the goal was scored, the two teams kicked the ball around aimlessly for the rest of the match. Chants of "Fuera, fuera '' ("Out, out '') were screamed by the Spanish crowd, while angry Algerian supporters waved banknotes at the players. This performance was widely deplored, even by the German and Austrian fans. One German fan was so upset by his team 's display that he burned his German flag in disgust. Algeria protested to FIFA, who ruled that the result be allowed to stand; FIFA introduced a revised qualification system at subsequent World Cups in which the final two games in each group were played simultaneously.
Group 3, where the opening ceremony and first match of the tournament took place, saw Belgium beat defending champions Argentina 1 -- 0. The Camp Nou stadium was the home of Barcelona, and many fans had wanted to see the club 's new signing, Argentinian star Diego Maradona, who did not perform to expectations. Both Belgium and Argentina ultimately advanced at the expense of Hungary and El Salvador despite Hungary 's 10 -- 1 win over the Central American nation -- which, with a total of 11 goals, is the second highest scoreline in a World Cup game (equal with Brazil 's 6 -- 5 victory over Poland in the 1938 tournament and Hungary 's 8 -- 3 victory over West Germany in the 1954 tournament).
Group 4 opened with England midfielder Bryan Robson 's goal against France after only 27 seconds of play. England won 3 -- 1 and qualified along with France over Czechoslovakia and Kuwait, though the tiny Gulf emirate held Czechoslovakia to a 1 -- 1 draw. In the game between Kuwait and France, with France leading 3 -- 1, France midfielder Alain Giresse scored a goal vehemently contested by the Kuwait team, who had stopped play after hearing a piercing whistle from the stands, which they thought had come from Soviet referee Miroslav Stupar. Play had not yet resumed when Sheikh Fahad Al - Ahmed Al - Jaber Al - Sabah, brother of the Kuwaiti Emir and president of the Kuwaiti Football Association, rushed onto the pitch to remonstrate with the referee. Stupar countermanded his initial decision and disallowed the goal to the fury of the French. Maxime Bossis scored another valid goal a few minutes later and France won 4 -- 1.
In Group 5, Honduras held hosts Spain to a 1 -- 1 draw. Northern Ireland won the group outright, eliminating Yugoslavia and beating hosts Spain 1 -- 0; Northern Ireland had to play the majority of the second half with ten men after Mal Donaghy was dismissed. Spain scraped by thanks to a controversial penalty in the 2 -- 1 victory over Yugoslavia. At 17 years and 41 days, Northern Ireland forward Norman Whiteside was the youngest player to appear in a World Cup match.
Brazil were in Group 6. With Zico, Sócrates, Falcão, Éder and others, they boasted an offensive firepower that promised a return to the glory days of 1970. They beat the USSR 2 -- 1 thanks to a 20 - metre Éder goal two minutes from time, then beat Scotland and New Zealand, scoring four goals in each match. The Soviets took the group 's other qualifying berth on goal difference at the expense of the Scots.
Poland opened Group A with a 3 -- 0 defeat of Belgium thanks to a Zbigniew Boniek hat - trick. The Soviet Union prevailed 1 -- 0 over Belgium in the next match. The Poles edged out the USSR for the semi-final spot on the final day on goal difference, thanks to a 0 -- 0 draw in a politically charged match, as Poland 's then - Communist government had imposed martial law a few months earlier to quash internal dissent.
In Group B, a match between England and West Germany ended in a goalless draw. West Germany put the pressure on England in their second match by beating Spain 2 -- 1. The home side then drew 0 -- 0 against England, denying Ron Greenwood 's team a semi-final place and leaving England in the same position as Cameroon: eliminated without losing a game.
In Group C, which brought together Brazil, Argentina and Italy, Italy prevailed 2 -- 1 in the opener over Diego Maradona and Mario Kempes 's side, after a game in which Italian defenders Gaetano Scirea and Claudio Gentile proved themselves equal to the task of stopping the Argentinian attack. Argentina now needed a win over Brazil on the second day, but lost 3 -- 1, only scoring in the last minute. Diego Maradona kicked Brazilian player João Batista in the groin and was sent off in the 85th minute.
The match between Brazil and Italy pitted Brazil 's attack against Italy 's defence, with the majority of the game played around the Italian area, and with the Italian midfielders and defenders returning the repeated volleys of Brazilian shooters such as Zico, Sócrates and Falcão. Italian centre back Gentile was assigned to mark Brazilian striker Zico, earning a yellow card and a suspension for the semi-final. Paolo Rossi opened the scoring when he headed in Antonio Cabrini 's cross with just five minutes played. Sócrates equalised for Brazil seven minutes later. In the twenty - fifth minute Rossi stepped past Júnior, intercepted a pass from Cerezo across the Brazilians ' goal, and drilled the shot home. The Brazilians threw everything forward in search of another equaliser, while Italy defended bravely. On 68 minutes, Falcão collected a pass from Júnior and, as Cerezo 's dummy run distracted three defenders, fired home from 20 yards out. Italy had now taken the lead twice thanks to Rossi 's goals, and Brazil had come back twice. At 2 -- 2, Brazil would have been through on goal difference, but in the 74th minute a poor clearance from an Italian corner kick fell back to the Brazilian six - yard line, where Rossi and Francesco Graziani were waiting. Both went for the same ball; Rossi connected, completing his hat - trick and sending Italy into the lead for good. In the 86th minute Giancarlo Antognoni scored an apparent fourth goal for Italy, but it was wrongly disallowed for offside. In the dying moments Dino Zoff made a miraculous save to deny Oscar a goal, ensuring that Italy advanced to the semi-final.
In the last group, Group D, France dispatched Austria 1 -- 0 with a free kick goal by Bernard Genghini, and then defeated Northern Ireland 4 -- 1 to reach their first semi-final since 1958.
In a re-match of the encounter in the first round, Italy beat Poland in the first semi-final through two goals from Paolo Rossi. In the game between France and West Germany, the Germans opened the scoring through a Pierre Littbarski strike in the 17th minute, and the French equalised nine minutes later with a Michel Platini penalty. In the second half a long through ball sent French defender Patrick Battiston racing clear towards the German goal. With both Battiston and the lone German defender trying to be the first to reach the ball, Battiston flicked it past German keeper Harald Schumacher from the edge of the German penalty area and Schumacher reacted by jumping up to block. Schumacher did n't seem to go for the ball, however, and clattered straight into the oncoming Battiston -- which left the French player unconscious and knocked two of his teeth out. Schumacher 's action has been described as "one of history 's most shocking fouls ''. The ball went just wide of the post and Dutch referee Charles Corver deemed Schumacher 's tackle on Battiston not to be a foul and awarded a goal kick. Play was interrupted for several minutes while Battiston, still unconscious and with a broken jaw, was carried off the field on a stretcher.
After French defender Manuel Amoros had sent a 25 - metre drive crashing onto the West German crossbar in the final minute, the match went into extra time. On 92 minutes, France 's sweeper Marius Trésor fired a swerving volley under Schumacher 's crossbar from ten metres out to make it 2 -- 1. Six minutes later, an unmarked Alain Giresse drove in an 18 - metre shot off the inside of the right post to finish off a counter-attack and put France up 3 -- 1. But West Germany would not give up. In the 102nd minute a counter-attack culminated in a cross that recent substitute Karl - Heinz Rummenigge turned in at the near post from a difficult angle with the outside of his foot, reducing France 's lead to 3 -- 2. Then in the 108th minute Germany took a short corner and after France failed to clear, the ball was played by Germany to Littbarski whose cross to Horst Hrubesch was headed back to the centre towards Klaus Fischer, who was unmarked but with his back to goal. Fischer in turn volleyed the ball past French keeper Jean - Luc Ettori with a bicycle kick, levelling the scores at 3 -- 3.
The match went to penalties, with France and West Germany participating in the first penalty shootout at a World Cup finals. Giresse, Manfred Kaltz, Manuel Amoros, Paul Breitner and Dominique Rocheteau all converted penalties until Uli Stielike was stopped by Ettori, giving France the advantage. But then Schumacher stepped forward, lifted the tearful Stielike from the ground, and saved Didier Six 's shot. With Germany handed the lifeline they needed Littbarski converted his penalty, followed by Platini for France, and then Rummenigge for Germany as the tension mounted. France defender Maxime Bossis then had his kick parried by Schumacher who anticipated it, and Hrubesch stepped up to score and send Germany to the World Cup final yet again with a victory on penalties, 5 -- 4.
In the third - place match, Poland edged the French side 3 -- 2, matching Poland 's best World Cup performance, previously achieved in 1974. France would go on to win the European Championship two years later.
In the final, Antonio Cabrini fired a penalty wide of goal in the first half. In the second half, Paolo Rossi scored first for the third straight game by heading home Gentile 's bouncing cross at close range. Exploiting the situation, Italy scored twice more on quick counter-strikes, all the while capitalising on their defence to hold the Germans. With Gentile and Gaetano Scirea holding the centre, the Italian strikers were free to counter-punch the weakened German defence. Marco Tardelli 's shot from the edge of the area beat Schumacher first, and Alessandro Altobelli, the substitute for injured striker Francesco Graziani, made it 3 -- 0 at the end of a solo sprint down the right side by the stand - out winger Bruno Conti. Italy 's lead appeared secure, encouraging Italian president Sandro Pertini to wag his finger at the cameras in a playful "not going to catch us now '' gesture. In the 83rd minute, Paul Breitner scored for West Germany, but it was only a consolation goal as Italy won 3 -- 1 to claim their first World Cup title in 44 years, and their third in total.
Italy became the first team to advance from the first round without winning a game, drawing all three (Cameroon, who also drew all three of their games, were eliminated by virtue of having scored one goal to Italy 's two), and also the only World Cup winner to draw or lose three matches at the Finals. By winning, Italy equalled Brazil 's record of winning the World Cup three times. Italy 's total of twelve goals scored in seven matches set a new low for average goals scored per game by a World Cup winning side (a record subsequently beaten by Spain in 2010), while Italy 's aggregate goal difference of + 6 for the tournament remains a record low for a champion, equalled by Spain.
Italy 's 40 - year - old captain - goalkeeper Dino Zoff became the oldest player to win the World Cup. This was the first World Cup in which teams from all six continental confederations participated in the finals, something that did not happen again until 2006.
Seventeen stadiums in 14 cities hosted the tournament, a record that stood until the 2002 tournament, which was played in 20 stadiums in two countries. The most used venue was FC Barcelona 's Camp Nou stadium, which hosted five matches, including a semi-final; it was the largest stadium used for this tournament. With the city 's Sarrià Stadium also hosting three matches, Barcelona was the Spanish city with the most matches in Espana ' 82 with eight; Madrid, the nation 's capital, followed with seven.
This particular World Cup was organized in such a way that all of the matches of each of the six groups were assigned stadiums in cities near to each other, in order to reduce the stress of travel on the players and fans. For example, Group 1 matches were played in Vigo and A Coruña, Group 2 in Gijon and Oviedo, Group 3 in Elche and Alicante (except for the first match, which was the opening match of the tournament, which was played at the Camp Nou), Group 4 in Bilbao and Valladolid, Group 5 (which included hosts Spain) in Valencia and Zaragoza, and Group 6 in Seville and Malaga (of the three first - round matches in Seville, the first match between Brazil and the Soviet Union was played in the Pizjuán Stadium, and the other two were played in the Villamarín Stadium).
When the tournament went into the round - robin second round, none of the aforementioned cities except Barcelona, Alicante and Seville hosted any further matches in Espana ' 82. Both the Santiago Bernabéu and Vicente Calderón stadiums in Madrid and the Sarrià Stadium in Barcelona were used for the first time in this tournament for the second round matches. Madrid and Barcelona hosted the four second - round groups; Barcelona hosted Groups A and C (Camp Nou hosted all three of Group A 's matches, and Sarrià did the same with Group C 's matches) and Madrid hosted Groups B and D (Real Madrid 's Bernabéu Stadium hosted all three of Group B 's matches, and Atlético Madrid 's Calderón Stadium did the same with the Group D matches).
The two semi-finals were held at Camp Nou and at the Pizjuán Stadium in Seville, the third largest stadium used for the tournament (the semi-final was one of only two Espana ' 82 matches it hosted); the third - place match was held in Alicante, and the final was held at the Bernabéu, the second largest stadium used for this tournament.
For a list of all squads that appeared in the final tournament, see 1982 FIFA World Cup squads.
The 24 qualifiers were divided into four groupings which formed the basis of the draw for the group stage. FIFA announced the six seeded teams on the day of the draw and allocated them in advance to the six groups; as had become standard, the host nation and the reigning champions were among the seeds. The seeded teams would play all their group matches at the same venue (with the exception of World Cup holders Argentina, who would play in the opening game scheduled for the Camp Nou, the largest of the venues). The remaining 18 teams were split into three pots based on FIFA 's assessment of the teams ' strength, but also taking into account geographic considerations. The seedings and group venues for those teams were tentatively agreed at an informal meeting in December 1981 but not officially confirmed until the day of the draw. FIFA executive Hermann Neuberger told the press that the seeding of England had been challenged by other nations but they were to be seeded as "the Spanish want England to play in Bilbao for security reasons ''.
On 16 January 1982 the draw was conducted at the Palacio de Congresos in Madrid, where the teams were drawn out from the three pots to be placed with the seeded teams in their predetermined groups. Firstly a draw was made to decide the order in which the three drums containing pots A, B and C would be emptied. The teams were then drawn one - by - one and entered in the groups in that order. A number was then drawn to determine the team 's "position '' in the group and hence the fixtures.
The only stipulation of the draw was that no group could feature two South American teams. As a result, Pot B -- which contained two South American teams -- was initially drawn containing only the four European teams, which were to be immediately allocated to Groups 3 and 6, the groups containing the two South American seeds Argentina and Brazil. Once these two groups had been filled with the entrants from Pot B, Chile and Peru would be added to the pot and the draw would continue as normal. In the event, FIFA executives Sepp Blatter and Hermann Neuberger, who were conducting the draw, initially forgot this stipulation and placed the first team drawn from the pot (Belgium) into Group 1 rather than Group 3, before placing the second team drawn (Scotland) into Group 3; they then had to correct this by moving Belgium to Group 3 and Scotland into Group 6. The ceremony suffered further embarrassment when one of the revolving drums containing the teams broke down.
All times are Central European Summer Time (UTC + 2)
The group winners and runners - up advanced to the second round.
Teams were ranked on the following criteria:
The second round of matches consisted of four 3 - way round - robin groups, each confined to one stadium in one of Spain 's two largest cities: 2 in Madrid, and 2 in Barcelona. The winners of each one of these groups would progress to the semi-finals.
Teams were ranked on the following criteria:
Although the fixtures were provisionally determined in advance, the teams competing in each fixture depended on the result of the opening match in each group: Should a team be defeated in the opening game of the group, that team would then have to play in the second fixture against the team not participating in the opening group game; the winner of the opening game would, by contrast, be rewarded by not needing to play again until the final fixture of the group and therefore gained extra recovery time. If the opening game was a draw, the predetermined order of games would proceed as planned. These regulations helped ensure that the final group games were of importance as no team could already have progressed to the semi-finals by the end of the second fixtures.
The 43,000 - capacity Sarria Stadium in Barcelona, used for the Group C round - robin matches between Italy, Argentina and Brazil, was severely overcrowded for all three matches, unlike any of the matches in the other groups (with one exception). The venue was heavily criticised for its lack of space and inability to handle such large crowds, which no one had foreseen; the Group A matches, held at the nearby and much larger 99,500 - capacity Camp Nou and contested entirely by European teams, never drew more than 65,000 spectators, even though larger crowds had been anticipated for those second round matches between Belgium, the Soviet Union and Poland.
Paolo Rossi received the Golden Boot for scoring six goals. In total, 146 goals were scored by 100 players, with only one of them credited as an own goal.
In 1986, FIFA published a report that ranked all teams in each World Cup up to and including 1986, based on progress in the competition, overall results and quality of the opposition. The rankings for the 1982 tournament were as follows:
The official mascot of this World Cup was Naranjito, an anthropomorphised orange, a typical fruit in Spain, wearing the kit of the host 's national team. Its name comes from naranja, the Spanish word for orange, and the diminutive suffix "- ito ''.
Football in Action (fútbol en acción) was the name of an educational animated series first aired in 1982 on the public broadcaster RTVE. Each episode lasted 20 minutes, and the main character was Naranjito. The series ran for 26 episodes, and its themes were football, adventure and the 1982 World Cup. Naranjito was accompanied by other characters, such as his girlfriend Clementina, his friend Citronio and Imarchi the robot.
The match ball for 1982 World Cup, manufactured by Adidas, was the Tango España.
who was in charge of the my lai massacre | My Lai massacre - wikipedia
The Mỹ Lai Massacre (/ ˌmiːˈlaɪ /; Vietnamese: Thảm sát Mỹ Lai (thâːm ʂǎːt mǐˀ lāːj)) was the Vietnam War mass murder of unarmed Vietnamese civilians by U.S. troops in South Vietnam on 16 March 1968. Between 347 and 504 unarmed people were massacred by U.S. Army soldiers from Company C, 1st Battalion, 20th Infantry Regiment, 11th Brigade, 23rd (Americal) Infantry Division. Victims included men, women, children, and infants. Some of the women were gang - raped and their bodies mutilated. Twenty - six soldiers were charged with criminal offenses, but only Lieutenant William Calley Jr., a platoon leader in C Company, was convicted. Found guilty of killing 22 villagers, he was originally given a life sentence, but served only three and a half years under house arrest.
The massacre, which was later called "the most shocking episode of the Vietnam War '', took place in two hamlets of Sơn Mỹ village in Quảng Ngãi Province. These hamlets were marked on the U.S. Army topographic maps as Mỹ Lai and Mỹ Khê.
The U.S. Army slang name for the hamlets and sub-hamlets in that area was Pinkville, and the carnage was initially referred to as the Pinkville Massacre. Later, when the U.S. Army started its investigation, the media changed it to the Massacre at Songmy. Currently, the event is referred to as the My Lai Massacre in the United States and called the Sơn Mỹ Massacre in Vietnam.
The incident prompted global outrage when it became public knowledge in November 1969. The massacre to some extent increased domestic opposition to U.S. involvement in the Vietnam War when the scope of the killings and the cover - up attempts were exposed. Initially, three U.S. servicemen who had tried to halt the massacre and rescue the hiding civilians were shunned, and even denounced as traitors by several U.S. Congressmen, including Mendel Rivers, Chairman of the House Armed Services Committee. Only after thirty years were they recognized and decorated, one posthumously, by the U.S. Army for shielding non-combatants from harm in a war zone. Along with the No Gun Ri massacre in Korea eighteen years earlier, Mỹ Lai was one of the largest single massacres of civilians by U.S. forces in the 20th century.
Charlie Company, 1st Battalion, 20th Infantry Regiment, 11th Brigade, 23rd Infantry Division, arrived in South Vietnam in December 1967. Though their first three months in Vietnam passed without any direct contact with North Vietnamese - backed forces, by mid-March the company had suffered 28 casualties involving mines or booby - traps.
During the Tet Offensive (January 1968), attacks were carried out in Quảng Ngãi by the 48th Local Force Battalion of the National Liberation Front (NLF), commonly referred to by the U.S. Army as the Viet Cong. U.S. military intelligence assumed that the 48th NLF Battalion, having retreated and dispersed, was taking refuge in the village of Sơn Mỹ, in Quảng Ngãi Province. A number of specific hamlets within that village -- designated Mỹ Lai (1) through Mỹ Lai (6) -- were suspected of harboring the 48th.
In February and March 1968, the U.S. Military Assistance Command, Vietnam was aggressively trying to regain the strategic initiative in South Vietnam after the Tet Offensive, and the search - and - destroy operation against the 48th NLF Battalion thought to be located in Sơn Mỹ became a small part of America 's grand strategy. Task Force Barker (TF Barker), a battalion - sized ad hoc unit of 11th Brigade, was to be employed for the job. It was formed in January 1968, composed of three rifle companies of the 11th Brigade, including Company C from the 20th Infantry, led by Lieutenant Colonel (LTC) Frank A. Barker. Sơn Mỹ village was included in the area of operations of TF Barker codenamed Muscatine AO. (Muscatine County, Iowa was the home county of the 23rd Division 's commander, Major General Samuel W. Koster.)
In February 1968, TF Barker had already tried to secure Sơn Mỹ, with limited success. After that, the village area began to be called Pinkville by TF Barker troops.
The men of Charlie Company had suffered 28 casualties since their arrival. Just two days before the massacre the company had lost a popular sergeant to a land mine.
On 16 -- 18 March, TF Barker planned to engage and destroy the remnants of the 48th NLF Battalion, allegedly hiding in the Sơn Mỹ village area. Before engagement, Colonel (COL) Oran K. Henderson, the 11th Brigade commander, urged his officers to "go in there aggressively, close with the enemy and wipe them out for good ''. In turn, LTC Barker reportedly ordered the 1st Battalion commanders to burn the houses, kill the livestock, destroy food supplies, and destroy the wells.
On the eve of the attack, at the Charlie Company briefing, Captain (CPT) Ernest Medina told his men that nearly all the civilian residents of the hamlets in Sơn Mỹ village would have left for the market by 07: 00, and that any who remained would be NLF or NLF sympathizers. He was asked whether the order included the killing of women and children. Those present later gave differing accounts of Medina 's response. Some, including platoon leaders, testified that the orders, as they understood them, were to kill all guerrilla and North Vietnamese combatants and "suspects '' (including women and children, as well as all animals), to burn the village, and pollute the wells. He was quoted as saying, "They 're all VC, now go and get them '', and was heard to reply to the question "Who is my enemy? '', by saying, "Anybody that was running from us, hiding from us, or appeared to be the enemy. If a man was running, shoot him, sometimes even if a woman with a rifle was running, shoot her. ''
At Calley 's trial, one defense witness testified that he remembered Medina instructing the men to destroy everything in the village that was "walking, crawling or growing ''.
Charlie Company was to enter the village of Sơn Mỹ spearheaded by 1st Platoon, engage the enemy, and flush it out. The other two companies from TF Barker were ordered to secure the area and provide support if needed. The area was designated a free fire zone, where American forces were allowed to deploy artillery and air strikes in populated areas.
On the Saturday morning of 16 March at 7: 30 a.m., around 100 soldiers from Charlie Company led by CPT Ernest Medina, following a short artillery and helicopter gunship barrage, landed in helicopters at Sơn Mỹ, a patchwork of settlements, rice paddies, irrigation ditches, dikes, and dirt roads, connecting an assortment of hamlets and sub-hamlets. The largest among them were the hamlets Mỹ Lai, Cổ Lũy, Mỹ Khê, and Tu Cung.
Although the GIs were not fired upon after landing, they still suspected there were Vietcong guerrillas hiding underground or in the huts. Confirming their suspicions, the gunships engaged several armed enemy in the vicinity of Mỹ Lai; later, one weapon was retrieved from the site.
According to the operational plan, 1st Platoon led by Second Lieutenant (2LT) William Calley and 2nd Platoon led by 2LT Stephen Brooks entered the hamlet of Tu Cung in line formation at 08: 00, while the 3rd Platoon commanded by 2LT Jeffrey U. Lacross and Captain Medina 's command post remained outside. On approach, both platoons fired at people they saw in the rice fields and in the brush.
The villagers, who were getting ready for a market day, at first did not panic or run away, and they were herded into the hamlet 's commons. Harry Stanley, a machine gunner from Charlie Company, said during the U.S. Army Criminal Investigation Division inquiry that the killings started without warning. He first observed a member of 1st Platoon strike a Vietnamese man with a bayonet. Then, the same trooper pushed another villager into a well and threw a grenade in the well. Next, he saw fifteen or twenty people, mainly women and children, kneeling around a temple with burning incense. They were praying and crying. They were all killed by shots in the head.
Most of the killings occurred in the southern part of Tu Cung, a sub-hamlet of Xom Lang, which was home to 700 residents. Xom Lang was erroneously marked on the U.S. military operational maps of Quảng Ngãi Province as Mỹ Lai.
A large group of approximately 70 -- 80 villagers was rounded up by 1st Platoon in Xom Lang and led to an irrigation ditch east of the settlement. All detainees were pushed into the ditch and then killed after repeated orders issued by Lieutenant Calley, who was also shooting. PFC Paul Meadlo testified that he expended several M16 magazines. He recollected that women were allegedly saying "No VC '' and were trying to shield their children. He remembered that he was shooting into women with babies in their hands since he was convinced at that time that they were all booby - trapped with grenades and were poised to attack. On another occasion during the security sweep of My Lai, Meadlo again fired into civilians side - by - side with Lieutenant Calley.
PFC Dennis Konti, a witness for the prosecution, told of one especially gruesome episode during the shooting, "A lot of women had thrown themselves on top of the children to protect them, and the children were alive at first. Then, the children who were old enough to walk got up and Calley began to shoot the children ''. Other 1st Platoon members testified that many of the deaths of individual Vietnamese men, women and children occurred inside Mỹ Lai during the security sweep. Livestock was shot as well.
When PFC Michael Bernhardt entered the subhamlet of Xom Lang, the massacre was underway:
I walked up and saw these guys doing strange things... Setting fire to the hootches and huts and waiting for people to come out and then shooting them... going into the hootches and shooting them up... gathering people in groups and shooting them... As I walked in you could see piles of people all through the village... all over. They were gathered up into large groups. I saw them shoot an M79 (grenade launcher) into a group of people who were still alive. But it was mostly done with a machine gun. They were shooting women and children just like anybody else. We met no resistance and I only saw three captured weapons. We had no casualties. It was just like any other Vietnamese village -- old papa - sans, women and kids. As a matter of fact, I do n't remember seeing one military - age male in the entire place, dead or alive.
One group of 20 -- 50 villagers was herded south of Xom Lang and killed on a dirt road. According to Ronald Haeberle 's eyewitness account of the massacre, in one instance,
There were some South Vietnamese people, maybe fifteen of them, women and children included, walking on a dirt road maybe 100 yards (90 m) away. All of a sudden the GIs just opened up with M16s. Beside the M16 fire, they were shooting at the people with M79 grenade launchers... I could n't believe what I was seeing.
Lieutenant Calley testified that he heard the shooting and arrived on the scene. He observed his men firing into a ditch with Vietnamese people inside and he then started shooting, with an M16, from a distance of five feet. Then, a helicopter landed on the other side of the ditch and a pilot asked Calley if he could provide any medical assistance to the wounded civilians in Mỹ Lai; Calley admitted replying that a hand grenade was the only available means that he had for their evacuation. After that, around 11: 00, Captain Medina radioed to cease fire and the 1st Platoon took a lunch break.
Members of 2nd Platoon killed at least 60 -- 70 Vietnamese as they swept through the northern half of Mỹ Lai and through Binh Tay, a small sub-hamlet about 400 metres (1,300 ft) north of Mỹ Lai. The platoon suffered one dead and seven wounded by mines and booby traps. After the initial sweeps by 1st and 2nd Platoons, 3rd Platoon, which had stayed in reserve, was dispatched to deal with any "remaining resistance ''; it also reportedly rounded up and killed a group of seven to twelve women and children.
Since Charlie Company had not met any enemy opposition at Mỹ Lai and did not request back - up, Bravo Company, 4th Battalion, 3rd Infantry Regiment of TF Barker was transported by air between 08: 15 and 08: 30 to a point 3 km (2 mi) away. It attacked the subhamlet My Hoi of the hamlet known as Cổ Lũy, which was mapped by the Army as Mỹ Khê. During this operation, between 60 and 155 people, including women and children, were killed.
Over the next day, both companies were involved in additional burning and destruction of dwellings, as well as mistreatment of Vietnamese detainees. While some soldiers of Charlie Company did not participate in the crimes, they neither openly protested nor complained later to their superiors.
William Thomas Allison, a professor of Military History at Georgia Southern University, wrote, "By midmorning, members of Charlie Company had killed hundreds of civilians and raped or assaulted countless women and young girls. They encountered no enemy fire and found no weapons in My Lai itself ''.
Warrant Officer (WO1) Hugh Thompson, Jr., a helicopter pilot from Company B (Aero - Scouts), 123rd Aviation Battalion, Americal Division, saw dead and wounded civilians as he was flying over the village of Sơn Mỹ, providing close - air support for ground forces. The crew made several attempts to radio for help for the wounded. They landed their helicopter by a ditch, which they noted was full of bodies and in which there was movement. Thompson asked a sergeant he encountered there (David Mitchell of the 1st Platoon) if he could help get the people out of the ditch, and the sergeant replied that he would "help them out of their misery ''. Thompson, shocked and confused, then spoke with Calley, who claimed to be "just following orders ''. As the helicopter took off, Thompson saw Mitchell firing into the ditch.
Thompson and his crew witnessed an unarmed woman being kicked and shot at point - blank range by Captain Medina, who later claimed that he thought she had a hand grenade. Thompson then saw a group of civilians (again consisting of children, women, and old men) at a bunker being approached by ground personnel. Thompson landed and told his crew that if the soldiers shot at the Vietnamese while he was trying to get them out of the bunker that they were to open fire on these soldiers.
Thompson later testified that he spoke with a lieutenant (identified as Stephen Brooks of 2nd Platoon) and told him there were women and children in the bunker, and asked if the lieutenant would help get them out. According to Thompson, "he (the lieutenant) said the only way to get them out was with a hand grenade ''. Thompson testified that he then told Brooks to "just hold your men right where they are, and I 'll get the kids out. '' He found 12 -- 16 people in the bunker, coaxed them out and led them to the helicopter, standing with them while they were flown out in two groups.
Returning to Mỹ Lai, Thompson and other air crew members noticed several large groups of bodies. Spotting some survivors in the ditch, Thompson landed again. A crew member, Glenn Andreotta, entered the ditch and returned with a bloodied but apparently unharmed four - year - old girl, who was flown to safety. Thompson then reported what he had seen to his company commander, Major (MAJ) Frederic W. Watke, using terms such as "murder '' and "needless and unnecessary killings ''. Thompson 's statements were confirmed by other helicopter pilots and air crew members.
For their actions at My Lai, Thompson was awarded the Distinguished Flying Cross (DFC) and his crew members Glenn Andreotta and Lawrence Colburn were awarded Bronze Star medals. Glenn Andreotta received his medal posthumously, as he was killed in Vietnam on 8 April 1968. Because the DFC citation included a fabricated account of rescuing a young girl from My Lai from "intense crossfire '', Thompson threw his medal away. He later received a Purple Heart for other services in Vietnam.
In March 1998, the helicopter crew 's medals were replaced by the Soldier 's Medal, the highest the U.S. Army can award for bravery not involving direct conflict with the enemy. The medal citations state they were "for heroism above and beyond the call of duty while saving the lives of at least 10 Vietnamese civilians during the unlawful massacre of non-combatants by American forces at My Lai ''.
Thompson initially refused the medal when the U.S. Army wanted to award it quietly. He demanded it be done publicly and that his crew also be honored in the same way. The veterans also contacted the survivors of Mỹ Lai.
After returning to base at about 11: 00, Thompson reported the massacre to his superiors. His allegations of civilian killings quickly reached LTC Barker, the operation 's overall commander. Barker radioed his executive officer to find out from Captain Medina what was happening on the ground. Medina then gave the cease - fire order to Charlie Company to "cut (the killing) out - knock it off ''.
Since Thompson made an official report of the civilian killings, he was interviewed by Colonel Oran Henderson, the commander of the 11th Infantry Brigade (the parent organization of the 20th Infantry). Concerned, senior American officers canceled similar planned operations by Task Force Barker against other villages (My Lai 5, My Lai 1, etc.) in Quảng Ngãi Province.
Despite the information Thompson had provided, Colonel Henderson issued a Letter of Commendation to Captain Medina on 27 March 1968. The following day (28 March) the commander of Task Force Barker submitted a combat action report for the 16 March operation in which he stated that the operation in Mỹ Lai was a success with 128 Viet Cong partisans killed. The Americal Division commander, Major General S.W. Koster, sent a congratulatory message to Company C. General William C. Westmoreland, the head of Military Assistance Command, Vietnam (MACV), also congratulated Company C, 1st Battalion, 20th Infantry for "outstanding action '', saying that they had "dealt (the) enemy (a) heavy blow ''.
Later, he changed his stance, writing in his memoir that it was "the conscious massacre of defenseless babies, children, mothers, and old men in a kind of diabolical slow - motion nightmare that went on for the better part of a day, with a cold - blooded break for lunch ''.
Owing to the chaotic circumstances of the war and the U.S. Army 's decision not to undertake a definitive body count of noncombatants in Vietnam, the number of civilians killed at Mỹ Lai can not be stated with certainty. Estimates vary from source to source, with 347 and 504 being the most commonly cited figures. The memorial at the site of the massacre lists 504 names, with ages ranging from one to 82. A later investigation by the U.S. Army arrived at a lower figure of 347 deaths, the official U.S. estimate. The official estimate by the local government remains 504.
Initial reports claimed "128 Viet Cong and 22 civilians '' had been killed in the village during a "fierce fire fight ''. General Westmoreland, the MACV commander, congratulated the unit on the "outstanding job ''. As relayed at the time by Stars and Stripes, "U.S. infantrymen had killed 128 Communists in a bloody day - long battle. ''
On 16 March 1968, in the official press briefing known as the "Five O'Clock Follies '', a mimeographed release included this passage: "In an action today, Americal Division forces killed 128 enemy near Quang Ngai City. Helicopter gunships and artillery missions supported the ground elements throughout the day. ''
Initial investigations of the Mỹ Lai operation were undertaken by the 11th Light Infantry Brigade 's commanding officer, Colonel Henderson, under orders from the Americal Division 's executive officer, Brigadier General George H. Young. Henderson interviewed several soldiers involved in the incident, then issued a written report in late - April claiming that some 20 civilians were inadvertently killed during the operation. The Army at this time was still describing the event as a military victory that had resulted in the deaths of 128 enemy combatants.
Six months later, Tom Glen, a 21 - year - old soldier of the 11th Light Infantry Brigade, wrote a letter to General Creighton Abrams, the new MACV commander. He described an ongoing and routine brutality against Vietnamese civilians on the part of American forces in Vietnam that he personally witnessed and then concluded,
It would indeed be terrible to find it necessary to believe that an American soldier that harbors such racial intolerance and disregard for justice and human feeling is a prototype of all American national character; yet the frequency of such soldiers lends credulity to such beliefs... What has been outlined here I have seen not only in my own unit, but also in others we have worked with, and I fear it is universal. If this is indeed the case, it is a problem which can not be overlooked, but can through a more firm implementation of the codes of MACV (Military Assistance Command Vietnam) and the Geneva Conventions, perhaps be eradicated.
Colin Powell, then a 31 - year - old Army major serving as an assistant chief of staff of operations for the Americal Division, was charged with investigating the letter, which did not specifically refer to Mỹ Lai, as Glen had limited knowledge of the events there. In his report, Powell wrote, "In direct refutation of this portrayal is the fact that relations between Americal Division soldiers and the Vietnamese people are excellent. '' Powell 's handling of the assignment was later characterized by some observers as "whitewashing '' the atrocities of Mỹ Lai.
In May 2004, Powell, then United States Secretary of State, told CNN 's Larry King, "I mean, I was in a unit that was responsible for Mỹ Lai. I got there after My Lai happened. So, in war, these sorts of horrible things happen every now and again, but they are still to be deplored. ''
In 1966, the Bình Hòa massacre purportedly at the hands of South Korean troops occurred in Quảng Ngãi Province; in February 1968, in neighboring Quảng Nam province, during a similar Muscatine counterinsurgency search - and - destroy operation, the Phong Nhị and Phong Nhất massacre and the purported Hà My massacre were committed by South Korean Marines. Seven months prior to the massacre at Mỹ Lai, on Robert McNamara 's orders, the Inspector General of the U.S. Defense Department investigated press coverage of alleged atrocities committed in South Vietnam. In August 1967, the 200 - page report "Alleged Atrocities by U.S. Military Forces in South Vietnam '' was completed. It concluded that many American troops did not fully understand the Geneva Conventions. No further action was taken.
Independently of Glen, Specialist 5 Ronald L. Ridenhour, a former door gunner from the Aviation Section, Headquarters Company, 11th Infantry Brigade, sent a letter in March 1969 to thirty members of Congress imploring them to investigate the circumstances surrounding the "Pinkville '' incident. He and his pilot, Warrant Officer Gilbert Honda, flew over Mỹ Lai several days after the operation and observed a scene of complete destruction. At one point, they hovered over a dead Vietnamese woman with a patch of the 11th Brigade on her body.
Ridenhour had learned about the events at Mỹ Lai secondhand from talking to members of Charlie Company over a period of months beginning in April 1968. He became convinced that something "rather dark and bloody did indeed occur '' at Mỹ Lai, and was so disturbed by the tales he heard that within three months of being discharged from the Army he penned his concerns to Congress. He included the name of Michael Bernhardt, an eyewitness who agreed to testify, in the letter.
Most recipients of Ridenhour 's letter ignored it, with the exception of Congressman Mo Udall and Senators Barry Goldwater and Edward Brooke. Udall urged the House Armed Services Committee to call on Pentagon officials to conduct an investigation.
Independent investigative journalist Seymour Hersh, after extensive interviews with Calley, broke the Mỹ Lai story on 12 November 1969, on the Associated Press wire service; on 20 November, Time, Life and Newsweek all covered the story, and CBS televised an interview with Paul Meadlo, a soldier in Calley 's unit during the massacre. The Plain Dealer (Cleveland, Ohio) published explicit photographs of dead villagers killed at Mỹ Lai.
As members of Congress called for an inquiry and news correspondents abroad expressed their horror at the massacre, the General Counsel of the Army Robert Jordan was tasked with speaking to the press. He refused to confirm allegations against Calley. Noting the significance of the fact that the statement was given at all, Bill Downs of ABC News said it amounted to the first public expression of concern by a "high defense official '' that American troops "might have committed genocide. ''
In November 1969, Lieutenant General William R. Peers was appointed by the Secretary of the Army and the Army Chief of Staff to conduct a thorough review of the My Lai incident of 16 -- 19 March 1968 and its investigation by the Army. Peers 's final report, presented to his superiors on 17 March 1970, was highly critical of top officers at brigade and divisional levels for participating in the cover - up, and of the Charlie Company officers for their actions at Mỹ Lai.
According to Peers 's findings:
(The 1st Battalion) members had killed at least 175 -- 200 Vietnamese men, women, and children. The evidence indicates that only 3 or 4 were confirmed as Viet Cong although there were undoubtedly several unarmed VC (men, women, and children) among them and many more active supporters and sympathizers. One man from the company was reported as wounded from the accidental discharge of his weapon... a tragedy of major proportions had occurred at Son My.
Critics of the Peers Report pointed out that it sought to place the real blame on four officers who were already dead, foremost among them the commander of Task Force Barker, LTC Frank Barker, who was killed in a mid-air collision on 13 June 1968. Also, the Peers Report avoided drawing any conclusions or recommendations regarding the further examination of the treatment of civilians in a war zone. In 1968, the American journalist Jonathan Schell wrote that in the Vietnamese province of Quang Ngai, where the Mỹ Lai massacre occurred, up to 70 % of all villages were destroyed by air strikes and artillery bombardments, including the use of napalm; 40 % of the population were refugees, and overall civilian casualties were close to 50,000 a year. Regarding the massacre at Mỹ Lai, he stated, "There can be no doubt that such an atrocity was possible only because a number of other methods of killing civilians and destroying their villages had come to be the rule, and not the exception, in our conduct of the war ''.
In May 1970, a sergeant who participated in Operation Speedy Express wrote a confidential letter to then Army Chief of Staff Westmoreland describing civilian killings he said were on the scale of the massacre occurring as "a My Lai each month for over a year '' during 1968 -- 69. Two other letters to this effect from enlisted soldiers to military leaders in 1971, all signed "Concerned Sergeant '', were uncovered within declassified National Archive documents. The letters describe common occurrences of civilian killings during population pacification operations. Army policy also stressed very high body counts and this resulted in dead civilians being marked down as combatants. Alluding to indiscriminate killings described as unavoidable, the commander of the 9th Division, then Major General Julian Ewell, in September 1969, submitted a confidential report to Westmoreland and other generals describing the countryside in some areas of Vietnam as resembling the battlefields of Verdun.
In July 1969, the Office of Provost Marshal General of the Army began to examine the evidence collected by the General Peers inquiry regarding possible criminal charges. Eventually, Calley was charged with several counts of premeditated murder in September 1969, and 25 other officers and enlisted men were later charged with related crimes.
On 17 November 1970, a court - martial in the United States charged 14 officers, including Major General Samuel Koster, the Americal Division 's commanding officer, with suppressing information related to the incident. Most of the charges were later dropped. Brigade commander Colonel Henderson was the only high ranking commanding officer who stood trial on charges relating to the cover - up of the Mỹ Lai massacre; he was acquitted on 17 December 1971.
During the four - month - long trial, Lieutenant Calley consistently claimed that he was following orders from his commanding officer, Captain Medina. Despite that, he was convicted and sentenced to life in prison on 29 March 1971, after being found guilty of premeditated murder of not fewer than twenty people. Two days later, President Richard Nixon made the controversial decision to have Calley released from armed custody at Fort Benning, Georgia, and put under house arrest pending appeal of his sentence. Calley 's conviction was upheld by the Army Court of Military Review in 1973 and by the U.S. Court of Military Appeals in 1974.
In August 1971, Calley 's sentence was reduced by the Convening Authority from life to twenty years. Calley would eventually serve three and one - half years under house arrest at Fort Benning including three months in a disciplinary barracks in Fort Leavenworth, Kansas. In September 1974, he was paroled by the Secretary of the Army, Howard Callaway.
In a separate trial, Captain Medina denied giving the orders that led to the massacre, and was acquitted of all charges, effectively negating the prosecution 's theory of "command responsibility '', now referred to as the "Medina standard ''. Several months after his acquittal, however, Medina admitted he had suppressed evidence and had lied to Colonel Henderson about the number of civilian deaths.
Captain Kotouc, an intelligence officer from 11th Brigade, was also court - martialed and found not guilty. Major General Koster was demoted to brigadier general and lost his position as the Superintendent of West Point. His deputy, Brigadier General Young, received a letter of censure. Both were stripped of Distinguished Service Medals which had been awarded for service in Vietnam.
Of the 26 men initially charged, Lieutenant Calley was the only one convicted. Some have argued that the outcome of the Mỹ Lai courts - martial failed to uphold the laws of war established in the Nuremberg and Tokyo War Crimes Tribunals. Telford Taylor, a senior American prosecutor at Nuremberg, wrote that legal principles established at the war crimes trials could have been used to prosecute senior American military commanders for failing to prevent atrocities such as the one at My Lai.
Howard Callaway, Secretary of the Army, was quoted in The New York Times in 1976 as stating that Calley 's sentence was reduced because Calley honestly believed that what he did was a part of his orders -- a rationale that contradicts the standards set at Nuremberg and Tokyo, where following orders was not a defense for committing war crimes. On the whole, aside from the Mỹ Lai courts - martial, there were thirty - six military trials held by the U.S. Army from January 1965 to August 1973 for crimes against civilians in Vietnam.
Some authors have argued that the light punishments of the low - level personnel present at Mỹ Lai and unwillingness to hold higher officials responsible was part of a pattern in which the body - count strategy and the so - called "Mere Gook Rule '' encouraged U.S. soldiers to err on the side of killing too many South Vietnamese civilians. This in turn, Nick Turse argues, made lesser known massacres like Mỹ Lai and a pattern of war crimes common in Vietnam.
In early 1972, the camp at Mỹ Lai (2) where the survivors of the Mỹ Lai massacre had been relocated, was largely destroyed by Army of the Republic of Vietnam artillery and aerial bombardment, and remaining eyewitnesses were dispersed. The destruction was officially attributed to "Viet Cong terrorists ''. The truth was revealed by Quaker service workers in the area through testimony in May 1972 by Martin Teitel at hearings before the Congressional Subcommittee to Investigate Problems Connected with Refugees and Escapees in South Vietnam. In June 1972, Teitel 's account was published in The New York Times.
Many American soldiers who had been in Mỹ Lai during the massacre accepted personal responsibility for the loss of civilian lives. Some of them expressed regrets without acknowledging any personal guilt, as, for example, Ernest Medina, who said, "I have regrets for it, but I have no guilt over it because I did n't cause it. That 's not what the military, particularly the United States Army, is trained for. ''
Lawrence La Croix, a squad leader in Charlie Company in Mỹ Lai, stated in 2010: "A lot of people talk about Mỹ Lai, and they say, ' Well, you know, yeah, but you ca n't follow an illegal order. ' Trust me. There is no such thing. Not in the military. If I go into a combat situation and I tell them, ' No, I 'm not going. I 'm not going to do that. I 'm not going to follow that order ', well, they 'd put me up against the wall and shoot me. ''
On 16 March 1998, a gathering of local people and former American and Vietnamese soldiers stood together at the site of the Mỹ Lai massacre in Vietnam to commemorate its 30th anniversary. American veterans Hugh Thompson and Lawrence Colburn, who had shielded civilians during the massacre, addressed the crowd. Among the listeners was Phan Thi Nhanh, a 14 - year - old girl at the time of the massacre. She was saved by Thompson and vividly remembered that tragic day: "We do n't say we forget. We just try not to think about the past, but in our hearts we keep a place to think about that ''. Colburn challenged Lieutenant Calley "... to face the women we faced today who asked the questions they asked, and look at the tears in their eyes and tell them why it happened ''. No American diplomats or other officials attended the meeting.
More than a thousand people turned out on 16 March 2008, forty years after the massacre. The Son My Memorial drew survivors and families of victims and some returning U.S. veterans. One survivor, an 8 - year - old girl at the time, said, "Everyone in my family was killed in the Mỹ Lai massacre -- my mother, my father, my brother and three sisters. They threw me into a ditch full of dead bodies. I was covered with blood and brains. '' The U.S. was unofficially represented by a volunteer group from Wisconsin called Madison Quakers, which over 10 years had built three schools in Mỹ Lai and planted a peace garden.
On 19 August 2009, Calley made his first public apology for the massacre in a speech to the Kiwanis club of Greater Columbus, Georgia:
"There is not a day that goes by that I do not feel remorse for what happened that day in Mỹ Lai '', he told members of the club. "I feel remorse for the Vietnamese who were killed, for their families, for the American soldiers involved and their families. I am very sorry... If you are asking why I did not stand up to them when I was given the orders, I will have to say that I was a 2nd lieutenant getting orders from my commander and I followed them -- foolishly, I guess. ''
Trần Văn Đức, who was seven years old at the time of Mỹ Lai massacre and now resides in Remscheid, Germany, called the apology "terse ''. He wrote a public letter to Calley describing the plight of his and many other families to remind him that time did not ease the pain, and that grief and sorrow over lost lives will forever stay in Mỹ Lai.
Altogether, 14 officers directly or indirectly involved with the operation, including two generals, were investigated in connection with the Mỹ Lai massacre. LTC Frank A. Barker, CPT Earl Michaels, and 2LT Stephen Brooks were not among them, as all three died before the investigation began.
Before being shipped to South Vietnam, all of Charlie Company 's soldiers went through advanced infantry training and basic unit training at Pohakuloa Training Area in Hawaii. At Schofield Barracks, a Judge Advocate taught them how to treat POWs and how to distinguish Viet Cong guerrillas from civilians.
A photographer and a reporter from the 11th Brigade Information Office were attached to Task Force Barker and landed with Charlie Company in Sơn Mỹ on 16 March 1968. However, the Americal News Sheet published on 17 March 1968, as well as the Trident, the 11th Infantry Brigade newsletter, from 22 March 1968, did not mention the death of noncombatants in Mỹ Lai. Stars and Stripes published a laudatory piece, "U.S. Troops Surround Reds, Kill 128 '', on 18 March.
On 12 April 1968, the Trident wrote that "The most punishing operations undertaken by the brigade in Operation Muscatine's area involved three separate raids into the village and vicinity of My Lai, which cost the VC 276 killed". On 4 April 1968, the information office of the 11th Brigade issued a press release, Recent Operations in Pinkville, without any information about mass casualties among civilians. A subsequent criminal investigation found that "Both individuals failed to report what they had seen, the reporter wrote a false and misleading account of the operation, and the photographer withheld and suppressed from proper authorities the photographic evidence of atrocities he had obtained."
The first mentions of the Mỹ Lai massacre appeared in the American media after Fort Benning's vague press release concerning the charges pressed against Lieutenant Calley, which was distributed on September 5, 1969.
Consequently, on 10 September 1969 NBC aired a segment in the Huntley-Brinkley Report which mentioned the murder of a number of civilians in South Vietnam. Following that, an emboldened Ronald Ridenhour decided to disobey the Army's order to withhold the information from the media. He approached reporter Ben Cole of the Phoenix Republic, who chose not to handle the scoop. Charles Black from the Columbus Enquirer uncovered the story on his own but also decided to put it on hold. Two major national press outlets, The New York Times and The Washington Post, received tips with partial information but did not act on them.
A phone call on 22 October 1969, answered by freelance investigative journalist Seymour Hersh, and his subsequent independent inquiry broke the wall of silence surrounding the Mỹ Lai massacre. Hersh initially tried to sell the story to Life and Look magazines; both turned it down. Hersh then went to the small Washington-based Dispatch News Service, which sent it to fifty major American newspapers; thirty of them accepted it for publication. New York Times reporter Henry Kamm investigated further and found several Mỹ Lai massacre survivors in South Vietnam. He estimated the number of killed civilians at 567.
Next, Ben Cole published an article about Ronald Ridenhour, a helicopter gunner and Army whistleblower, who was among the first to uncover the truth about the Mỹ Lai massacre. Joseph Eszterhas of The Plain Dealer, a friend of Ron Haeberle, knew about the photographic evidence of the massacre and published the grisly images of the dead bodies of old men, women, and children on 20 November 1969. Articles in Time magazine on 28 November 1969 and in Life magazine on 5 December 1969 finally brought Mỹ Lai to the fore of the public debate about the Vietnam War.
Richard L. Strout, the Christian Science Monitor political commentator, emphasized that "American press self-censorship thwarted Mr. Ridenhour's disclosures for a year." "No one wanted to go into it", his agent said of the telegrams sent to Life, Look, and Newsweek magazines outlining the allegations.
Afterwards, interviews and stories connected to the Mỹ Lai massacre started to appear regularly in the American and international press.
The Lieutenant is a 1975 Broadway rock opera that concerns the Mỹ Lai massacre and resulting courts martial. It was nominated for four Tony Awards including Best Musical and Best Book of a Musical.
The Mỹ Lai massacre, like many other events in Vietnam, was captured on camera by U.S. Army personnel. The most published and graphic images were taken by Ronald Haeberle, a U.S. Army Public Information Detachment photographer who accompanied the men of Charlie Company that day.
In 2009, Haeberle admitted that he destroyed a number of photographs he took during the massacre. Unlike the photographs of the dead bodies, the destroyed photographs depicted Americans in the actual process of murdering Vietnamese civilians.
The epithet "baby killers '' was often used by anti-war activists to describe American soldiers, largely as a result of the Mỹ Lai Massacre. The Mỹ Lai massacre and the Haeberle photographs both further solidified the stereotype of drug - addled soldiers who killed babies. According to M. Paul Holsinger, the And babies poster, which used a Haeberle photo, was "easily the most successful poster to vent the outrage that so many felt about the human cost of the conflict in Southeast Asia. Copies are still frequently seen in retrospectives dealing with the popular culture of the Vietnam War era or in collections of art from the period. ''
Another soldier, John Henry Smail of the 3rd Platoon, took at least 16 color photographs depicting U.S. Army personnel, helicopters, and aerial views of Mỹ Lai. These, along with Haeberle's photographs, were included in the "Report of the Department of the Army Review of the Preliminary Investigations into the My Lai Incident". Former First Lieutenant (1LT) Roger L. Alaux Jr., a forward artillery observer assigned to Charlie Company during the combat assault on My Lai 4, also took some photographs from a helicopter that day, including aerial views of Mỹ Lai and of C Company's landing zone.
Photographs from the massacre include: Mrs Nguyễn Thị Tẩu (chín Tẩu), killed by US soldiers; an unidentified dead Vietnamese man; an unidentified body thrown down a well; SP5 Capezza burning a dwelling; PFC Mauro, PFC Carter, and SP4 Widmer (Carter shot himself in the foot with a .45 pistol during the massacre); SP4 Dustin setting fire to a dwelling; an unidentified Vietnamese man; and victims at Mỹ Lai.
Mỹ Lai holds a special place in American and Vietnamese collective memory.
A 2.4-hectare (5.9-acre) Sơn Mỹ Memorial dedicated to the victims of the Sơn Mỹ (My Lai) massacre was created in the village of Tịnh Khê, Sơn Tịnh District, Quảng Ngãi Province, Vietnam. Graves with headstones, markers at the places of killing, and a museum are all located on the memorial site. The War Remnants Museum in Ho Chi Minh City has an exhibition on Mỹ Lai.
Some American veterans chose to go on pilgrimage to the site of the massacre to heal and reconcile.
On the 30th anniversary of the massacre, 16 March 1998, a groundbreaking ceremony for the Mỹ Lai Peace Park was held 2 km (1 mi) away from the site of the massacre. Veterans, including Hugh Thompson Jr. and Lawrence Colburn from the helicopter rescue crew, attended the ceremony. Mike Boehm, a veteran who was instrumental in the peace park effort, said, "We cannot forget the past, but we cannot live with anger and hatred either. With this park of peace, we have created a green, rolling, living monument to peace."
On 16 March 2001, the Mỹ Lai Peace Park was dedicated, a joint venture of the Quảng Ngãi Province Women's Union, the Madison Quakers' charitable organization, and the Vietnamese government.
|
where does the saying open a can of worms come from | Wikipedia: opening up a can of worms - Wikipedia
If you 'open a can of worms', you (often unexpectedly) set in motion or discover something that has wide-reaching consequences. This sometimes happens on Wikipedia, and nearly always leads to massive fluctuations in the community or policy.
There are occasional bad-faith AfD nominations on Wikipedia, and there are some rewrites which may cause havoc with other users. There are also milder cases of these that can actually change the way Wikipedians look at things, sometimes for the better. If you want to change something that is potentially a staple part of the community in some people's minds, you can certainly do it; just keep in mind that it could cause some serious ramifications, hence the term "can of worms".
The biggest example of the can of worms is most likely Wikipedia:Miscellany for deletion/Wikipedia:Esperanza. When User:Dev920 nominated this for deletion, there was no way of knowing whether she would be attacked by many or praised. This is an example of the benefits of opening the can of worms properly. She ended up being mostly praised.
Conversely, attempting to open up the can of worms sometimes does not work, as in the case of Wikipedia:Categories for discussion/Log/2007 January 4#Category:User en. If that category had been deleted or renamed, hundreds of others would have had to follow suit. User:Twp opened it up, yet was met with mostly disagreement. This in itself is not bad, as opening up a can of worms unsuccessfully can help reiterate both policy and general consensus within the Wikipedia community.
|
compatible games for xbox 360 to xbox one | List of Backward compatible games for Xbox One - wikipedia
The Xbox One gaming console has received updates from Microsoft since its launch in 2013 that enable it to play select games from its two predecessor consoles, Xbox and Xbox 360. On June 15, 2015, backward compatibility with supported Xbox 360 games became available to eligible Xbox Preview program users with a beta update to the Xbox One system software. The dashboard update containing backward compatibility was released publicly on November 12, 2015. On October 24, 2017, another such update added games from the original Xbox library. The following is a list of all backward compatible games on Xbox One under this functionality.
At its launch in November 2013, the Xbox One did not have native backward compatibility with original Xbox or Xbox 360 games. Xbox Live director of programming Larry "Major Nelson" Hryb suggested users could use the HDMI-in port on the console to pass an Xbox 360 or any other device with HDMI output through the Xbox One. Senior project management and planning director Albert Penello explained that Microsoft was considering a cloud gaming platform to enable backward compatibility, but he felt it would be "problematic" due to varying internet connection qualities.
During Microsoft's E3 2015 press conference on June 15, 2015, Microsoft announced plans to introduce Xbox 360 backward compatibility on the Xbox One at no additional cost. Supported Xbox 360 games run within an emulator and have access to certain Xbox One features, such as recording and broadcasting gameplay. Games do not run directly from discs. A ported form of the game is downloaded automatically when a supported game is inserted, while digitally purchased games automatically appear for download in the user's library once available. As with Xbox One titles, if the game is installed using physical media, the disc is still required for validation purposes.
Not all Xbox 360 games are supported; 104 Xbox 360 games were available for the feature's public launch on November 12, 2015, with Xbox One preview program members getting early access. Microsoft stated that publishers would only need to provide permission to the company to allow the repackaging, and it expected the list to grow significantly over time. Unlike the emulation of original Xbox games on the Xbox 360, the Xbox One does not require game modification, since it emulates an exact replica of its predecessor's environment, both the hardware and the software operating system. The downloaded game is a repackaged version of the original that identifies itself as an Xbox One title to the console. At Gamescom, Microsoft revealed it had plans to ensure "all future Xbox 360 Games with Gold titles will be playable on Xbox One." On December 17, 2015, Microsoft made another sixteen Xbox 360 games compatible with the Xbox One, including titles such as Halo: Reach, Fable III and Deus Ex: Human Revolution. On January 21, 2016, Microsoft made another ten Xbox 360 games compatible, including The Witcher 2: Assassins of Kings and Counter-Strike: Global Offensive. On May 13, 2016, Microsoft made Xbox 360 titles with multiple discs compatible, starting with Deus Ex: Human Revolution Director's Cut.
In January 2016, Microsoft announced that future titles would be added as they became available, instead of waiting until a specific day each month.
During Microsoft 's E3 2017 press conference on June 11, 2017, Microsoft announced that roughly 50 % of Xbox One users had played an Xbox 360 game on Xbox One through the system 's backward - compatibility feature. Based on popular demand, Phil Spencer, Microsoft 's Head of Xbox, announced that Xbox One consoles would be able to play select games made for the original Xbox console, first released in 2001. The compatibility will work on all consoles in the Xbox One family, including the Xbox One X, and will be available as a free update planned for the fall of 2017.
The functionality is similar to that for backward compatibility with Xbox 360 games. Users insert the Xbox game disc into their Xbox One console to install the compatible version of the game. While players are not able to access old game saves or connect to Xbox Live in these titles, system link functions remain available. Xbox games do not receive achievement support; when asked about this, Spencer responded that they had nothing to announce at the time.
Realizing that game discs for original Xbox consoles could be scarce, Spencer said that plans were in place to make compatible Xbox games available digitally. Spencer also said that such games may also be incorporated into the Xbox Game Pass subscription service. In a later interview, Spencer indicated that the potential library of Xbox titles being playable on Xbox One will be smaller than that currently available from the Xbox 360 library. Spencer noted two reasons for the more limited library were the availability of content rights for the games and the technical difficulties related to the conversion.
Backward compatible original Xbox and Xbox 360 titles also benefit from Xbox One X enhancements.
There are currently 492 games on this list, out of 2,099 games released for the Xbox 360.
There are currently 33 games on this list, out of 1,047 released for the original Xbox. All original Xbox games run at 4 times the original resolution on Xbox One and Xbox One S consoles (up to 960p), and 16 times on Xbox One X (up to 1920p).
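These multipliers refer to total pixel count rather than line count: assuming the original Xbox's standard 480p output (640×480), a 4× pixel count corresponds to doubling each dimension (1280×960, i.e. 960p), while a 16× pixel count corresponds to quadrupling each dimension (2560×1920, i.e. 1920p), matching the stated caps.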
|
how long was the guiding light on tv | Guiding Light - wikipedia
Guiding Light (known as The Guiding Light before 1975) is an American television soap opera. It is listed in Guinness World Records as the longest-running drama on television in American history, broadcast on CBS for 57 years from June 30, 1952, until September 18, 2009, overlapping a 19-year broadcast on radio from 1937 to 1956. With its radio and television runs taken together, Guiding Light is the longest-running soap opera and the fifth-longest-running program in all of broadcast history; only the American country music radio program Grand Ole Opry (first broadcast in 1925), the BBC religious program The Daily Service (1928), the CBS religious program Music and the Spoken Word (1929), and the Norwegian children's radio program Lørdagsbarnetimen (first aired in 1924, cancelled in 2010) have been on the air longer.
Guiding Light was created by Irna Phillips, and began as an NBC Radio serial on January 25, 1937. On June 2, 1947, the series was transferred to CBS Radio, before starting on June 30, 1952, on CBS Television. It continued to be broadcast concurrently on radio until June 29, 1956. The series was expanded from 15 minutes to a half-hour during 1968 (and also switched from broadcasting live to pre-taping around this same time), and then to a full hour on November 7, 1977. The series broadcast its 15,000th CBS episode on September 6, 2006.
On April 1, 2009, CBS announced that it was cancelling Guiding Light after a run of 72 years due to low ratings. The show taped its final scenes on August 11, 2009, and its final episode on the network aired on September 18, 2009. On October 5, 2009, CBS replaced Guiding Light with an hour-long revival of Let's Make a Deal, hosted by Wayne Brady.
On August 22, 2013, Grant Aleksander, who had portrayed Phillip Spaulding on Guiding Light from 1983 through the series finale in 2009, revealed in an interview with Carolyn Hinsey that former Guiding Light executive producer Paul Rauch had been working on a continuation of Guiding Light at the time of his death in December 2012.
Guiding Light had a number of plot sequences during the series' long history, on both radio and television. These sequences involved complex storylines and changes of writers and casting.
The series was created by Irna Phillips, who based it on personal experiences. After giving birth to a stillborn baby at age 19, she found spiritual comfort listening to the radio sermons of Preston Bradley, a famous Chicago preacher and founder of the People's Church, a church which promoted the brotherhood of man. These sermons inspired the creation of The Guiding Light, which began as a radio series. The original radio series was first broadcast in 15-minute episodes on NBC Radio, starting on January 25, 1937. The series was transferred to CBS Radio in 1947.
The Guiding Light was broadcast first by CBS Television on June 30, 1952, replacing the canceled soap opera The First Hundred Years. These episodes were also 15 minutes long. During the period from 1952 to 1956, The Guiding Light existed as both a radio and television serial, with actors recording their performances twice for each day that the shows were broadcast. The radio broadcast of The Guiding Light ceased production during 1956, ending this overlap.
With the transition to television, the main characters became the Bauers, a lower-middle-class German immigrant family who were first introduced in the radio serial in 1948. Many storylines revolved around Bill Bauer (son of patriarch Friedrich "Papa" Bauer) and his new wife Bertha (nicknamed "Bert"). Papa Bauer, who came to the United States during World War I with just a few dollars in his pocket, was a salt-of-the-earth character who succeeded in offering opportunities to his children by working hard, and he instilled that work ethic into his children. Bert had dreams of climbing the social ladder and keeping up appearances, and it was up to Bill (and sometimes Papa Bauer) to bring her down to earth.
The Guiding Light ranked as the number-one-rated soap opera during both 1956 and 1957, before being overtaken in 1958 by As the World Turns. After Irna Phillips moved to As the World Turns in 1958, her protégée Agnes Nixon became head writer of The Guiding Light.
The first television producer of The Guiding Light was Luci Ferri Rittenberg, who produced the show for over 20 years.
Agnes Nixon relinquished her role as chief writer during 1965 to work for the series Another World. On March 13, 1967, The Guiding Light was first broadcast in color. On September 9, 1968, the program was expanded from 15 to 30 minutes.
The 1960s featured the introduction of African American characters, and the main emphasis of the series shifted to Bill and Bert 's children, Mike and Ed; the character of Bill Bauer was written out in July 1969, presumed dead after a plane crash. The show also became a bit more topical during the 1960s, with such storylines as Bert Bauer 's diagnosis of uterine cancer in 1962.
A number of new characters were introduced during the mid - to late 1960s, including Dr. Sara McIntyre, who remained a major character through the early 1980s.
Much of the story during the first half of the 1970s was dominated by Stanley Norris' November 1971 murder and the ensuing trial, as well as the exploits of villainesses Charlotte Waring and Kit Vested. Charlotte (at the time played by Melinda Fee) was murdered by Kit (Nancy Addison) on August 26, 1973, and then Kit herself was shot by Dr. Joe Werner (Anthony Call) in self-defense on April 24, 1974, after she had attempted to poison Dr. Sara McIntyre.
Roger Thorpe, a pivotal character off and on until the spring of 1998, was introduced on April 1, 1971. The role was originally conceived as a blond, fair-skinned preppy type, a man who was dating his boss's daughter Holly. Ultimately, Michael Zaslow, a dark-haired actor with a more ethnic appearance, was hired for the role instead by long-time casting director Betty Rea. Zaslow portrayed Roger as a complicated and multifaceted villain.
Theo Goetz, the actor who had played Papa Bauer since the first episode of The Guiding Light, died in 1972. The decision was made to have Papa Bauer die in the storyline as well. The cast paid tribute to Goetz and Papa Bauer in a special memorial episode which aired on February 27, 1973.
Pressured by newer, more youth-oriented soap operas such as All My Children, Procter & Gamble hired Bridget and Jerome Dobson as head writers in 1975; they started writing in November of that year. The Dobsons introduced a more nuanced, psychologically layered writing style and included timely storylines, among them a complex love/hate relationship between estranged spouses and step-siblings Roger and Holly. They also created a number of well-remembered characters, including Rita Stapleton, whose complex relationships with Roger and Ed propelled much of the story for the remainder of the decade, as well as Alan Spaulding and Ross Marler, both of whom remained major characters into the 2000s.
The decision was made during the fall of 1977 to reintroduce the presumed-dead character of Bill Bauer, in a major retcon. The other characters thought that he had died in an airplane crash in July 1969, but he was said to actually be alive. (Many viewers who had paid attention to the storyline back in September 1968 remembered that Bill had been told he had only nine more years to live.) One problem with the return was that the Dobsons seemed not quite sure what to do with it. Although his reappearance was shocking at first to many of the characters, Bill himself ended up being charged with the murder of a man in Vancouver; Mike quickly got his father off for the crime by proving it was an accident, and by April 1978 Bill had left town again (although he returned briefly in November 1978, April 1980, and again in July 1983, as well as in flashbacks in November 1983). Bill's return introduced the audience and the Bauers to another character who stayed on the show until September 1984: Hillary Kincaid, R.N., Bill's daughter and thus Ed and Mike's half-sister. The man Bill had accidentally killed was the man Hillary had originally thought was her father but who was actually her step-father. Hillary became a nurse at Cedars and a major character.
Surprising many viewers, Jerome and Bridget Dobson killed off the show's young heroine, Leslie Jackson Bauer Norris Bauer, in June 1976. Lynne Adams reportedly wanted to leave the role at the time, and the Dobsons decided against recasting the part. Leslie was killed by a drunk driver. Her father, Dr. Steve Jackson, remained on the show for the remainder of the '70s, serving as a senior physician at Cedars and as a friend and companion to Bert Bauer.
In November 1975, the name was changed in the show's opening and closing visuals from The Guiding Light to Guiding Light. On November 7, 1977, the show expanded to a full hour and was broadcast from 2:30 to 3:30 pm daily.
The series during the 1970s emphasized the Bauers and the Spauldings. Several notable characters were introduced.
Bridget and Jerome Dobson assumed the head writing duties of As the World Turns in late 1979. Former actor Douglas Marland assumed the head writing reins of Guiding Light in 1979. He introduced many new characters, including the Reardon family. During May 1980, Guiding Light won its first Outstanding Drama Series Daytime Emmy. One of Marland's stories featured the character of Carrie Todd Marler, played by Jane Elliot. Carrie was diagnosed with multiple personalities. Marland had barely delved into her psychosis when Elliot's contract was abruptly terminated by executive producer Allen M. Potter in 1982. As a result, Marland resigned in protest.
During the early 1980s, the show began to emphasize younger characters more, in an attempt to compete with the younger-skewing ABC serials. A number of longtime characters were eliminated during this time, including Ben and Eve McFarren, Diane Ballard, Dr. Sara McIntyre, Adam Thorpe, Barbara Norris Thorpe, Justin Marler and Steve Jackson. Actress Lenore Kasdorf quit the show in 1981, and producers decided not to recast the role of Rita Stapleton Bauer, given how popular Kasdorf had been. The Bauer family matriarch, Bertha "Bert" Bauer, died in March 1986, following the real-life death of Charita Bauer in 1985. During Guiding Light's 50th anniversary year in 1987, a commitment was made to showcase the Bauer family in primary roles as much as possible, after audience reaction to the Oklahoma-bred Lewis and Shayne families turned out to be mixed. As a result, the tradition of the Bauer July 4th family barbecue began that year and continued until 2009, the serial's final year on CBS Television.
An ever more complicated storyline emphasized the Bauer, Spaulding, Reardon, and Raines families. Pam Long, an actress and writer for NBC's Texas from 1981 to 1982, became head writer during 1983 and refocused the series on Freddy Bauer, Phillip Spaulding, Mindy Lewis, and Beth Raines. She also introduced the characters Alexandra Spaulding, played by Beverlee McKinsey of Another World and Texas fame, and Reva Shayne, played by Kim Zimmer. The ratings in the mid- to late 1980s were solid and healthy. Pamela K. Long returned for a second head writer stint from 1987 to 1990.
The characters of Roger Thorpe and Holly Norris returned to the series during the late 1980s. Maureen Garrett, the second actress to play Holly, reprised her role in 1988, followed by Michael Zaslow as Roger in 1989.
With the new decade, the series' storytelling transitioned from Long's homespun style to a more realistic style under a new group of chief writers. The Bauer, Spaulding, Lewis, and Cooper families had been established as core families, and most major plot developments concerned them. The show generally held on in the middle of the ratings pack throughout the decade.
The show suffered major character losses mid-decade, including the car accident death of Maureen Bauer and the exit of Alexandra Spaulding from the story. As the decade progressed, the program developed a series of outlandish plot twists seemingly to compete with the serials Passions and Days of Our Lives.
In an attempt to revive the series, the character Reva Shayne was brought back to Springfield in April 1995. She had been presumed dead for the previous five years, after driving her car off a bridge and into the water off the Florida Keys. That July, antiheroine Tangie Hill (played by Marcy Walker, who declined to renew her contract) was eliminated after nearly two years with the show in favor of the full-time return of fan favorite Nola Chamberlain, played by Lisa Brown.
During January 1996, soap opera veteran Mary Stuart joined the cast as Meta Bauer (though referred to many times over the years, the long-running character originally played by Ellen Demming had not been seen onscreen since 1974); the character remained on the show until Stuart's death in 2002.
In January 1998, Bethany Joy Lenz came to the show as a "teenage Reva clone". Producers were so impressed with her acting and attitude during the three-week role that she was re-hired later that year in the contract role of Michelle Bauer Santos, which she played from 1999 to 2000.
The 2000s began with the division of the show into two locales: Springfield and the fictional island nation of San Cristobel. In Springfield, the Santos mob dynasty created much of the drama. Meanwhile, the royal Winslow family had their own series of intrigues with which to deal. During 2002, however, San Cristobel was eliminated from the series, and the mob's influence in the story was subsequently diminished and, with the departure of the character Danny Santos in 2005, eliminated altogether. Guiding Light also celebrated its 50th anniversary as a television show on June 30, 2002.
During 2004, former director and actress Ellen Wheeler (a Daytime Emmy winner as an actress for All My Children and Another World) took over as executive producer of Guiding Light. She and writer David Kreizman made numerous changes to the sets, stories, and cast. Several veteran actors were let go, mainly because of budget cuts. With less veteran presence, Wheeler reemphasized the youth of Springfield, especially the controversial pairing of cousins Jonathan and Tammy.
During 2006, an episode featured the character Harley Cooper gaining heroic abilities. The storyline was continued in an 8-page story in select Marvel Comics publications.
The series marked its 70th broadcast anniversary in 2007. The anniversary was commemorated with the launch of the website FindYourLight.net and a program of outreach representing Irna Phillips's original message. There was also a special episode in January 2007, with current cast members playing Phillips and some of the earlier cast members. The series also introduced special opening credits commemorating the anniversary.
Despite low ratings, the show won 2007 Daytime Emmy Awards for Best Writing and Best Show (sharing Best Show with The Young and the Restless).
On April 1, 2009, CBS announced that it would not renew Guiding Light, and that the last broadcast date would be September 18, 2009. Procter & Gamble initially announced that it would attempt to find another outlet to distribute the series, but later admitted that it had been unsuccessful, and that on September 18, 2009, after 57 years on television (preceded by 15 years on radio, for a total broadcast history of 72 years), Guiding Light would end its run on CBS.
During the final weeks of the series, numerous characters from the series' past passed through Springfield one last time, culminating with Ed and Holly, who impulsively embarked on an unspecified journey together. Alan Spaulding suffered fatal heart failure during the final week, but not before resolving conflicts with many former adversaries, including Jonathan. Alan's death brought the characters together in a way that could not have happened while he was still alive. Alexandra was especially distraught about Alan's death, but was pleased when Fletcher Reade came to the Spaulding Mansion after Alan's service and convinced her to accompany him to Europe. Beth and Phillip grew closer and decided to remarry; Mindy Lewis returned to Springfield for good, and she and Rick also became fonder of each other. Reva and Josh had a discussion and agreed that they each had their own problems to solve. Josh told Reva that he was leaving Springfield for a job for the next year, but proposed that he return one year from that date; if by then she wanted to reunite with him, she should meet him at the lighthouse, and if she was not there, he would assume she was not interested.
The final episode is an upbeat one, featuring many of the characters gathering in the park for a large picnic. Toward the end of the episode, the action jumps forward one year, by which time Phillip and Beth have reunited, as have Rick and Mindy. Olivia and Natalia, happy with their new baby, pick up Raphael as he returns from the army. The episode concludes with Josh arriving at the lighthouse, as promised, and finding Reva there. They declare their undying love. James, Ashlee, and Daisy leave Springfield and relocate to Santa Barbara, California. Josh asks Reva if she is packed to go on an adventure. The two grab the luggage and, with Reva's young son, climb into Josh's pickup truck. Josh says to Reva, "You ready?" She replies, "Always." As the truck drives away with the lighthouse in the background, "The End" appears on the screen before a final fadeout. The song heard playing in the background during the final scene is "Together" by Michelle Branch.
The final episode also included the original tag line, with some revision, printed on the screen with the words "There is a destiny that makes us FAMILY" (replacing the word "brothers"), as well as quick film clips of each of the show's title cards and announcers during the six decades it was on television, leading to the show's former long-time opening announcement: "And now, The Guiding Light".
Guiding Light was broadcast from three locations: Chicago (where creator Irna Phillips resided), from 1937 until 1946; Hollywood, from 1947 until 1949; and New York City, starting in 1949. It was relocated from Chicago to Hollywood (despite the objections of both Phillips and Arthur Peterson) to take advantage of the talent pool. Production was subsequently relocated to New York City, where the majority of soap operas were produced during the 1950s, 1960s and much of the 1970s; it remained based in New York City until the show's conclusion. Its final taping location was the CBS studios in midtown Manhattan; from the 1970s to the 1990s it was filmed at the Chelsea Studios. Beginning shortly before February 29, 2008, outdoor scenes were filmed on location in Peapack, New Jersey. The location filming coincided with another significant production change, as the series became the first American weekday soap opera to be recorded digitally. The production team chose to film with Canon XH-G1 HDV camcorders. Unlike the old production model with pedestal-style cameras and traditional three-sided sets, handheld cameras allowed producers to choose as many locations as they wished.
During the daytime drama's 57th season on television and 72nd season overall, the series changed its look to a more realistic one in an attempt to compete with the growing popularity of reality television. The new look of Guiding Light included free-hand camera work and less action shown on traditional studio sets. Producer Ellen Wheeler introduced a "shaky-cam" style, present in a number of movies, featuring extreme close-ups and frequent cuts, including those that "broke the axis" (which proved disorienting to viewers accustomed to shows with the traditional "soap opera look"). Also new was the filming of outdoor scenes in actual outdoor settings. Even many indoor scenes had more of an "on location" feel, repurposing real locations, such as Guiding Light's production offices, as motel rooms, nail salons, quick-marts and other businesses or locations. The series thereby had numerous sets without the cost of numerous separate locations. CBS and the show's producers had hoped that the new look would increase ratings, but the plan was ultimately unsuccessful.
On April 1, 2009, the series was canceled by CBS after 72 years, with the series finale airing on September 18, 2009, making it the second-to-last Procter & Gamble soap opera to end.
The action was also set in three different locales: the show was based in the fictional towns of Five Points and Selby Flats before its final locale of Springfield.
Unlike most popular radio serials transitioning to television, The Guiding Light had no difficulty holding onto its old listening audience while simultaneously earning a new television fanbase. At the time The Guiding Light made its television debut, neither ABC nor NBC broadcast programs on their respective networks at 2:30 p.m. Eastern / 1:30 Central, where CBS first placed The Guiding Light. However, six months into the run, the network moved the serial to a timeslot that gave it great popularity with its housewife audience: 12:45 p.m. / 11:45 a.m. It kept the new timeslot for the next 19 years and eight months, sharing the half-hour with its sister Procter & Gamble-packaged soap opera, Search for Tomorrow.
The Guiding Light handled the competition breezily, even against otherwise-legendary shows such as Queen for a Day on ABC (briefly in 1960) and NBC's Truth or Consequences. Usually, The Guiding Light ranked second in the Nielsen ratings behind another serial, As the World Turns. In 1968, however, changing viewership trends prompted CBS to expand its last two 15-minute daytime dramas, disrupting long-standing viewing habits. Search for Tomorrow took over the entire 12:30-1:00 / 11:30-noon period, with The Guiding Light returning to its first timeslot, 2:30 / 1:30, albeit in the now-standard half-hour format, on September 9. This twin bill of expansions also caused the dislocation of The Secret Storm and the beloved Art Linkletter's House Party, as well as the cancellation of the daytime To Tell the Truth. The next 12 years brought several similar shifts around CBS's lineup.
The 1970s saw the popularity of The Guiding Light dip somewhat. The competition imposed upon The Guiding Light during this decade came from other serials such as The Doctors on NBC, but it still garnered decent ratings. After four years, CBS bumped its timeslot up by a half-hour to accommodate Procter & Gamble's demand that The Edge of Night move to 2:30 / 1:30, a move that led to the end of that show on CBS three years later. In the meantime, The Guiding Light stayed steadily on course against NBC's Days of Our Lives and ABC's The Newlywed Game. In late 1974, ABC replaced The Newlywed Game with The $10,000 Pyramid, which went on to garner strong ratings, but not greatly at The Guiding Light's expense. Meanwhile, by fall 1975 (at which point the show had officially dropped the word "The" from its title, although it was still referred to as The Guiding Light on air for several years after), the impending departure of The Edge of Night for ABC, to say nothing of CBS's planned expansion of some serials, affected Guiding Light by pushing it back to 2:30 / 1:30 once more in December. To complicate the picture further, ABC opted to make its first show expansions, those of One Life to Live and General Hospital, in July 1976; each of those shows occupied one half of a 90-minute block until November 4, 1977.
With this in mind, ABC and CBS acted to give a contending chance to both General Hospital and Guiding Light by expanding them to an hour in length. CBS did so first, expanding Guiding Light on November 7, 1977. This gained particular importance when ABC finally added 15 minutes to both One Life to Live and General Hospital on January 16, 1978, so that Guiding Light straddled those two programs as well as the first half of sister P&G show Another World on NBC. Despite General Hospital surprising all observers by skyrocketing from near-cancellation to the top place in the ratings with its various storylines, Guiding Light held its own while in direct competition with it and still hit an upswing as the decade ended.
On February 4, 1980, CBS bumped Guiding Light down again, to 3pm / 2c, and its sister P&G soap As the World Turns to 2pm / 1c, in the midst of a major scheduling shuffle intended to give The Young and the Restless (itself now expanding to an hour in length) a shot at beating ABC's All My Children. NBC did the same with its soap operas, with all three networks now going head-to-head in every time slot. Guiding Light remained in this time slot for the rest of its run in some markets, facing General Hospital and NBC entries such as Texas (a spin-off of Another World), The Match Game-Hollywood Squares Hour and Santa Barbara. None of these shows, not even General Hospital, had any significant impact on the ratings of Guiding Light at 3:00 pm during this period.
Overall, the first half of the 1980s saw a revival in Guiding Light's popularity, with a top-five placing achieved in most years and even a brief dethroning of then-powerhouse General Hospital from the #1 ratings spot for three consecutive weeks. However, as the decade progressed, the ratings slipped a bit, although the show was still performing solidly. In 1995, beginning with CBS flagship station WCBS-TV in New York, Guiding Light began airing at 10 a.m. Eastern time in several markets. Its once-solid performance began to crumble by the mid-1990s, when its ratings sank as low as ninth place out of ten. However, during the controversial clone storyline in 1998, the ratings experienced a brief resurgence, moving up to fifth for many weeks that summer. Nielsen reported Guiding Light had 5 million viewers in 1999.
Up until its finale in 2009, stations in a number of markets aired Guiding Light in the morning, either at 9 or 10 a.m. local time: Miami, Chicago, Baltimore, Boston, Detroit, New York City, Los Angeles, San Francisco, Philadelphia, Pittsburgh, Dallas-Fort Worth, Orlando, Atlanta, Columbia, SC, Fort Wayne, IN, South Bend, IN, Portland, OR, Quad Cities, Buffalo, Reno, Portland, ME, Milwaukee, Albany, NY, and Scranton-Wilkes Barre, PA. Guiding Light aired at 12 noon local time in Honolulu, Hawaii. In Savannah, GA, it aired at 4:00 pm local time.
Before 2004, stations that aired Guiding Light in the morning were always one episode behind those that aired the program in its official timeslot of 3:00 pm (ET). This changed in March 2004, during the first day of the NCAA March Madness basketball tournament, when stations airing the show at 10:00 am were able to catch up with stations that televised it at 3:00 pm. Starting in 2006, stations that televised Guiding Light at 9:00 am were also offered a same-day feed to catch up with the rest of the network. As a result, daily episodes for the remaining years of GL were the same on all stations regardless of timeslot.
Guiding Light maintained strong ratings in Pittsburgh, despite being moved to 10:00 am in 2006. According to a 2006 article in the Pittsburgh Post-Gazette, Dr. Phil hadn't been able to pull in the same numbers that Guiding Light did in that time slot a year prior, while Guiding Light was maintaining its audience share.
One CBS affiliate that did not air the show was KOVR-TV in Sacramento, California, which had become a CBS affiliate in 1995. Before CBS affiliated with KOVR, it had been affiliated in Sacramento with KXTV, which had dropped Guiding Light from its schedule in 1992 and did not air it again. As such, the show was preempted in the Sacramento area from 1992 to the show's cancellation. WNEM-TV in Flint / Saginaw / Bay City, Michigan, which also became a CBS affiliate that year, initially ran the soap before dropping it in 1996 because of disappointing ratings. In the fall of 2006, WNEM began running Guiding Light on its digital channel My 5 at 10 am, airing there for the remainder of its run.
Internationally, Guiding Light currently airs in Iceland, Italy, Hungary and Serbia. It also aired from September 3, 2007, to August 26, 2011, in the UK on Zone Romantica / CBS Drama, and was pulled at the point where the outside location filming was due to begin. The last scene of the show screened in the UK was Cassie hiding out with her troubled son Will, just as the rest of the family were discovering that he had actually killed his uncle Alonzo.
Since September 21, 2009, shortly after the series ended its CBS run on September 18, 2009, reruns of Guiding Light have aired on Sky 1.
In Canada, Guiding Light was available to viewers directly through CBS network affiliates from border cities or cable TV feeds until the show's ending in 2009. In addition, Guiding Light also aired on several Canadian television networks from the 1980s up until its last air date.
Atlantic Satellite Network (ASN), a supplementary service to its ATV system of CTV affiliates exclusively for Atlantic Canada, aired the soap simultaneously with the CBS feed from 1983 to 1984; the broadcast was then moved to 12 noon until 1985.
The show also aired in French in Quebec. TVA, a privately owned French-language television network in Quebec, rebroadcast episodes in French translation, twelve months behind, for a short period in 1984.
In the early 1990s, the Canadian Broadcasting Corporation (CBC) briefly aired the P&G serial nationally at 3:00 p.m. in each local Canadian time zone. Guiding Light had also been on CBC Television's schedule during the latter part of the 1960s, during the serial's 15-minute format. On both occasions, the daytime drama aired for only a few seasons.
After a hiatus from Canadian television stations for many years, the series came back on CHCH-TV, exclusively for the Ontario market. In September 2007, Global picked up the show nationwide after CHCH-TV dropped it, claiming Passions' former time slot. Guiding Light returned to CHCH for the rest of its run when Global decided to air the 2008 TV series The Doctors.
In January 2012, SoapClassics released a four-disc DVD collection of 20 selected episodes. The oldest episode in the collection dates from April 1, 1980, while the latest is from September 14, 2009, during the show's final broadcast week.
The company has since released special collections celebrating Reva Shayne and Phillip Spaulding.
In May 2012, SoapClassics released the final ten Guiding Light episodes on a two - disc DVD set.
Beginning in June 2012, the series was also released on DVD in Germany, starting with the 1979 episodes.
|
who plays medusa in percy jackson and the lightning thief | List of Percy Jackson & the Olympians cast members - wikipedia
This is a list of actors who portray characters appearing in the Percy Jackson and the Olympians film series, based on the book series by Rick Riordan.
|
will there ever be a sequel to i am number 4 | I Am Number Four (film) - wikipedia
I Am Number Four is a 2011 American teen science fiction action thriller film directed by D.J. Caruso and starring Alex Pettyfer, Timothy Olyphant, Teresa Palmer, Dianna Agron, and Callan McAuliffe. The screenplay, by Alfred Gough, Miles Millar, and Marti Noxon, is based on the 2010 novel of the same name, one of the Lorien Legacies young adult science fiction novels.
Produced by Michael Bay, I Am Number Four was the first film production from DreamWorks Pictures to be released by Touchstone Pictures, as part of the studio 's 2009 distribution deal with Walt Disney Studios Motion Pictures. The Hollywood Reporter estimated the budget to be between $50 million and $60 million. The film was released in both conventional and IMAX theatres on February 18, 2011, received generally negative reviews, and grossed $150 million.
John Smith is an alien from the planet Lorien. He was sent to Earth as a child with eight others to escape the invading Mogadorians, who destroyed Lorien. On Earth, John is protected by a Guardian, Henri, and has developed powers, including enhanced strength, speed and agility, as well as seven other powers he will develop later in life, known as legacies. He is also protected by a Loric charm that shields him from Mogadorian harm so long as any attempt to kill him comes out of sequence; he can only be hurt after the first three Loriens have been killed in order.
The Mogadorians, led by the Commander, learn about the nine children and come to Earth to find them. The Loriens can only be killed in sequence, Number One through Number Nine. Three of them are already dead, with John being Number Four. Knowing this, he and Henri move from a beachside bungalow in Florida to an old farm in Paradise, Ohio, where John befriends conspiracy theorist Sam Goode and a dog, which he names after Bernie Kosar. He also falls for an amateur photographer, Sarah Hart. Her ex-boyfriend, football player Mark James, is a bully who torments both John and Sam.
During the Halloween festival, Mark and his friends chase Sarah and John into the woods, where they try to beat John up. However, he uses his powers to fend them off and rescue Sarah. Sam witnesses this, which forces John to reveal his true origins. The next day, Mark's father, the local sheriff, interrogates Henri about John's whereabouts at the time his son and his friends were attacked.
Henri tells John that too many people are suspicious of them so they have to leave. John refuses because of Sarah. Meanwhile, the Mogadorians continue searching for John, while being trailed by another Garde, Number Six, who is also trying to locate Number Four. Number Six 's guardian was killed, and she realizes that the remaining six of the Garde will have to team up and fight against the Mogadorians. She knows Number Three is dead and that Number Four is being hunted.
The Mogadorians eventually locate John and manipulate two conspiracy theorists into capturing Henri. When John and Sam go to rescue him, they are attacked but manage to fend the Mogadorians off. However, Henri dies after John and Sam escape with some Lorien artifacts, including a blue rock that acts as a tracking device for other Garde. Sam's father, a conspiracy theorist who disappeared while hunting aliens in Mexico, had another of the rocks. While Sam searches for it, John tries to say goodbye to Sarah at a party, only to discover that the Mogadorians have framed him and Henri for the murders of the conspiracy theorists. Mark sees John and calls his father, who corners John and Sarah. John saves Sarah from a fall, revealing his powers in the process, and they escape to their high school.
Meanwhile, the Commander arrives in Paradise in a convoy of trucks. He is confronted by Mark and his father and, after injuring the sheriff, forces Mark to show him where John is hiding. Mark takes him to the school, which he knows is Sarah's hideout.
There, John, Sarah, and Sam are attacked by the Mogadorians, who brought two Piken to hunt the trio. They are saved by Number Six and Bernie, who is actually a shapeshifting Chimera sent by John 's biological parents to protect him. John and Number Six, who can turn invisible and can block energy attacks, continue to fight the Mogadorians. They eventually defeat them all, including the Commander.
The following day, John and Number Six unite their blue rocks and discover the location of the other four surviving Garde. John decides to let Sam come with them in hopes of finding Sam's father. They set off to find the others so they can all protect Earth from the Mogadorians, leaving behind Sarah and a repentant Mark, who lies to his father about John's whereabouts, telling him to head west since John had said he was heading east, and who returns a magic box left to John by his dad, which Henri had hidden and which had been held in police evidence. John thanks Mark and promises Sarah that he will come back to Paradise to find her. While they share their goodbye kiss, a hurt and slightly agitated Mark is shown; he shakes his head before waving and putting on a fake smile. The film ends with John, Sam, Bernie and Number Six driving off toward the sunset, smiling and vowing to protect their new home, Earth.
The film was produced by DreamWorks and Reliance Entertainment. Film producer and director Michael Bay brought the manuscript of the young adult novel I Am Number Four to Stacey Snider and Steven Spielberg at DreamWorks. A bidding war for the film rights developed between DreamWorks and J.J. Abrams, with DreamWorks winning the rights in June 2009 and intending to have Bay produce and possibly direct the project. The rights were purchased with the hope of attracting teenage fans of The Twilight Saga films and the potential of establishing a film franchise, with at least six more installments planned by the book's publisher.
Al Gough and Miles Millar, the creators of the television series Smallville, were hired to write the screenplay in August 2009. Marti Noxon, writer and producer for the television series Buffy the Vampire Slayer, also contributed to the screenplay. D.J. Caruso was brought on to direct in early 2010, after Bay opted to focus on directing the third film of the Transformers series. Caruso had been selected by Spielberg to direct Disturbia and Eagle Eye for DreamWorks, and had success with both films. Caruso had less than a year to prepare, shoot and edit the film, due to a worldwide release date set for Presidents Day weekend.
Chris Bender, J.C. Spink, and David Valdes executive produced the film. Steven Spielberg contributed to the film 's characters, but did not take a credit on the film. It was the first DreamWorks film to be released by Disney 's Touchstone Pictures label, as part of the 30 - picture distribution deal between DreamWorks and Walt Disney Studios Motion Pictures. The film was also the first release for DreamWorks after the studio 's financial restructuring in 2008.
In March 2010, Alex Pettyfer was in talks to play the title character in the film, Number Four. It was later confirmed that the 21-year-old British actor would play the lead. Sharlto Copley was going to star as Henri, Number Four's guardian and mentor, but had to drop out due to press obligations for his film The A-Team. Copley was replaced by Timothy Olyphant. Kevin Durand plays the villain of the film, the Commander, the Mogadorian who leads the hunt for the Loriens on Earth.
DreamWorks went through multiple rounds of tests to find the right actress for the female romantic lead. Dianna Agron, a star of the Fox television series Glee, won the role. She plays Sarah Hart, a girl who used to date a high school football player but falls for Number Four and keeps his secret. Jake Abel plays the football player, Mark James, an antagonist in the film. Teresa Palmer plays the other Loric, Number Six, and 16-year-old Australian actor Callan McAuliffe plays Sam Goode, Number Four's best friend.
Principal photography began on May 17, 2010, using 20 locations all within the Pittsburgh metropolitan area. DreamWorks selected the area primarily due to tax incentives from the Pennsylvania Film Production Tax Credit. The film studio also had a positive experience shooting She 's Out of My League in Pittsburgh in 2008. The production was scheduled to last 12 to 13 weeks.
Cinematographer Guillermo Navarro shot the film on 35 mm, using a format known as Super 1.85. Beaver, the former Conley Inn in Homewood, and nearby Buttermilk Falls were used as locations in the film; interior and exterior scenes were shot near a boat launch in Monaca. A spring fair scene was filmed in Deer Lakes Park in West Deer; Port Vue, North Park, New Kensington and Hyde Park were also used as locations. The setting for the film's fictional town of Paradise, Ohio, is Vandergrift, Pennsylvania, where filming took place from June to July 2010. Producers chose Vandergrift as the "hero town" of the film because of its unique look and curved streets, laid out by Frederick Law Olmsted, the designer of New York City's Central Park.
Franklin Regional High School in Murrysville was chosen over 50 other schools in the area due to its proximity to nearby woods, a part of the film's plot, and its surrounding hills. The school was also selected for its one-floor layout, wide hallways, and the football stadium in front. Teachers and recent graduates appear in the film, and a set replicating the school was built in a studio in Monroeville for filming explosion scenes. In the last few weeks of production, scenes were filmed at the 200-year-old St. John's Lutheran Stone Church in Lancaster Township. Additional filming for the film's opening took place in the Florida Keys, in Big Pine Key, Florida, and a driving sequence over the bridge showcases the Keys' Seven Mile Bridge.
I Am Number Four was edited by Jim Page, with Industrial Light & Magic developing the visual effects for the alien Pikens. The film was scored by former Yes guitarist Trevor Rabin.
A teaser trailer for the film was issued in late September 2010, and a full-length trailer premiered on December 8. Advertisements ran in Seventeen and Teen Vogue magazines, and Disney released a promotional iPhone app in January 2011. Disney also developed a large amount of Internet content to target a teen audience in its marketing campaign. A cast tour, in association with American retailer Hot Topic, and cast media appearances were scheduled to lead up to the release of the film.
I Am Number Four premiered at the Village Theatre in Los Angeles on February 9, 2011. The film was released in theaters on February 18, 2011, and was also released in the IMAX format.
The film was released by Touchstone Home Entertainment on Blu-ray, DVD, and digital download on May 24, 2011. The release was produced in three different packages: a three-disc Blu-ray, DVD, and "Digital Copy" combo pack, a single-disc Blu-ray, and a single-disc DVD. The "Digital Copy" included with the three-disc version is a stand-alone disc that allows users to play the film from any location via iTunes or Windows Media Player. All releases include bloopers and the "Becoming Number 6" featurette, while the single-disc Blu-ray and the three-disc combo pack additionally include six deleted scenes with an introduction from the director. In its first three weeks of release, 316,081 Blu-ray units were sold in the US, bringing in $7,693,808. As of October 2, 2011, the standard DVD of I Am Number Four had sold 767,692 copies in the United States, generating $12,571,326 and bringing the total gross to $166,247,931.
Review aggregation website Rotten Tomatoes reports a 33 % approval rating and an average rating of 4.8 / 10 based on 158 reviews. The website 's consensus reads, "It 's positioned as the start of a franchise, but I Am Number Four 's familiar plot and unconvincing performances add up to one noisy, derivative, and ultimately forgettable sci - fi thriller. '' Metacritic gives the film an average score of 36 out of 100 based on reviews from 30 critics, indicating "generally unfavorable reviews ''. Empire Magazine gave the film three out of a possible five stars and said, "If you can make it through the bland schmaltz of the first half you 'll be rewarded with a spectacular blast of sustained action and the promise of even better to come. This could be the start of something great. ''
I Am Number Four grossed $55,100,437 at the North American box office and $90,778,000 overseas, for a worldwide total of $149,878,437. It topped the worldwide box office on its second weekend (February 25 -- 27, 2011) with $28,086,805.
The film opened at number two in the United States and Canada with a gross of $19,449,893. In its second weekend, it dropped 43.4 %, earning $11,016,126. The only other market where it has grossed over $10 million is China. It began in third place with $3.4 million, but had an increase of 91 % in its second week, topping the box office with $6.4 million. In its third week, it decreased by 21 % to $5.0 million. As of March 27, 2011, it has grossed $17,328,244.
In 2011, screenwriter Noxon told Collider.com that plans for an imminent sequel were shelved due to the disappointing performance of the first installment at the box office.
In 2013, director Caruso was asked whether there was any possibility that The Power of Six would get a film adaptation. He replied: "There 's been some talk in the past couple of months about trying to do something because there is this audience appetite out there (...). Most of the people on Twitter that contact me from all over the world ask, "Where 's the next movie? '' I think DreamWorks is getting those too so it 'll be interesting. I do n't know if I 'd be involved, but I know they 're talking about it. ''
In a 2015 interview, James Frey, the co-author of the series, said that he hoped more movies would be made.
|
what are the 3 foot pedals on a piano | Piano pedals - wikipedia
Piano pedals are foot - operated levers at the base of a piano that change the instrument 's sound in various ways. Modern pianos usually have three pedals, from left to right, the soft pedal (or una corda), the sostenuto pedal, and the sustaining pedal (or damper pedal). Some pianos omit the sostenuto pedal, or have a middle pedal with a different purpose such as a muting function also known as silent piano.
The development of the piano 's pedals is an evolution that began from the very earliest days of the piano, and continued through the late 19th century. Throughout the years, the piano had as few as one modifying stop, and as many as six or more, before finally arriving at its current configuration of three.
The damper pedal, sustain pedal, or sustaining pedal is to the right of the other pedals, and is used more often than the other pedals. It raises all the dampers off the strings so that they keep vibrating after the player releases the key. In effect, the damper pedal makes every string on the piano a sympathetic string, creating a rich tonal quality. This effect may be behind the saying that the damper pedal is "... the soul of the piano. '' The damper pedal has the secondary function of allowing the player to connect into a legato texture notes that otherwise could not thus be played.
The soft pedal, or una corda pedal, was invented by Bartolomeo Cristofori. It was the first mechanism invented to modify the piano 's sound. This function is typically operated by the left pedal on modern pianos. Neither of its common names -- soft pedal or una corda pedal -- completely describe the pedal 's function. The una corda primarily modifies the timbre, not just the volume of the piano. Soon after its invention, virtually all makers integrated the una corda as a standard fixture. On Cristofori 's pianos, the una corda mechanism was operated by a hand stop, not a pedal. The stop was a knob on the side of the keyboard. When the una corda was activated, the entire action shifted to the right so that the hammers hit one string (una corda) instead of two strings (due corde). Dominic Gill says that when the hammers strike only one string, the piano "... produces a softer, more ethereal tone. ''
By the late 18th century, piano builders had begun triple stringing the notes on the piano. This change, affecting the una corda 's function, is described by Joseph Banowetz:
On the pianos of the late eighteenth to early nineteenth centuries, the pianist could shift from the normal three - string (tre corde) position to one in which either two strings (due corde) or only one (una corda) would be struck, depending on how far the player depressed the pedal. This subtle but important choice does not exist on modern pianos, but was readily available on the earlier instruments.
The sound of the una corda on early pianos created a larger difference in color and timbre than it does on the modern piano. On the modern piano, the una corda pedal makes the hammers of the treble section hit two strings instead of three. In the case of the bass strings, the hammer normally strikes either one or two strings per note. The lowest bass notes on the piano are a single thicker string. For these notes, the action shifts the hammer so that it strikes the string on a different, lesser - used part of the hammer nose.
Edwin Good states,
On the modern piano, the timbre is subtly different, but many people can not hear it. In that respect, at least, the modern piano does not give the player the flexibility of changing tone quality that early ones did.
Beethoven took advantage of the ability of his piano to create a wide range of tone color in two of his piano works. In his Piano Concerto No. 4, Beethoven specifies the use of una corda, due corde, and tre corde. He calls for una corda, then "poco a poco due ed allora tutte le corde '', gradually two and then all strings, in Sonata Op. 106.
On the modern upright piano, the left pedal is not truly an una corda, because it does not shift the action sideways. The strings run at such an oblique angle to the hammers that if the action moved sideways, the hammer might strike one string of the wrong note. A more accurate term for the left pedal on an upright piano is the half - blow pedal. When the pedal is activated, the hammers move closer to the strings, so that there is less distance for the hammer to swing.
The last pedal added to the modern grand was the middle pedal, the sostenuto, which was inspired by the French. Using this pedal, a pianist can sustain selected notes, while other notes remain unaffected. The sostenuto was first shown at the French Industrial Exposition of 1844 in Paris, by Boisselot & Fils, a Marseille company. French piano builders Alexandre François Debain and Claude Montal built sostenuto mechanisms in 1860 and 1862, respectively. These innovative efforts did not immediately catch on with other piano builders. In 1874, Albert Steinway perfected and patented the sostenuto pedal. He began to advertise it publicly in 1876, and soon the Steinway company was including it on all of their grands and their high - end uprights. Other American piano builders quickly adopted the sostenuto pedal into their piano design. The adoption by European manufacturers went far more slowly and was essentially completed only in recent times.
The term "sostenuto '' is perhaps not the best descriptive term for what this pedal actually does. Sostenuto in Italian means sustained. This definition alone would make it sound as if the sostenuto pedal accomplishes the same thing as the damper, or "sustaining '' pedal. The sostenuto pedal was originally called the "tone - sustaining '' pedal. That name would be more accurately descriptive of what the pedal accomplishes, i.e., sustainment of a single tone or group of tones. The pedal holds up only dampers that were already raised at the moment that it was depressed. So if a player: (i) holds down a note or chord, and (ii) while so doing depresses this pedal, and then (iii) lifts the fingers from that note or chord while keeping the pedal depressed, then that note or chord is not damped until the foot is lifted -- despite subsequently played notes being damped normally on their release. Uses for the sostenuto pedal include playing transcriptions of organ music (where the selective sustaining of notes can substitute for the organ 's held notes in its pedals), or in much contemporary music, especially spectral music. Usually, the sostenuto pedal is played with the right foot.
It is common to find uprights and even grand pianos that lack a middle pedal. Even if a piano has a middle pedal, one can not assume it is a true sostenuto, for there are many other functions a middle pedal can have other than that of sostenuto. Often an upright 's middle pedal is another half - blow pedal, like the one on the left, except that the middle pedal slides into a groove to stay engaged. Sometimes, the middle pedal may only operate the bass dampers. The middle pedal may sometimes lower a muffler rail of felt between the hammers and the strings to mute and significantly soften the sound, so that one can practice quietly (also known as a "Practice Rail ''). True sostenuto is rare on uprights, except for more expensive models such as those from Steinway and Bechstein. They are more common on digital pianos as the effect is straightforward to mimic in software.
Among other pedals sometimes found on early pianos are the lute stop, moderator or celeste, bassoon, buff, cembalo, and swell. The lute pedal created a pizzicato - type sound. The moderator, or celeste mechanism used a layer of soft cloth or leather between hammers and strings to provide a sweet, muted quality. According to Good, "(The piece of leather or cloth was) graduated in thickness across its short dimension. The farther down one pushed the pedal, the farther the rail lowered and the thicker the material through which the hammer struck the strings. With the thicker material, the sound was softer and more muffled. Such a stop was sometimes called a pianissimo stop. ''
The moderator stop was popular on Viennese pianos, and a similar mechanism is still sometimes fitted on upright pianos today in the form of the practice rail (see Sostenuto pedal, above). Joseph Banowetz states that for the bassoon pedal, paper or silk was placed over the bass strings to create "... a buzzing noise that listeners of the day felt resembled the sound of the bassoon. '' The buff stop and cembalo stops seem similar to each other in method of manipulation and sound produced. The buff ("leather '') stop used "... a narrow strip of soft leather... pressed against the strings to give a dry, soft tone of little sustaining power. '' The cembalo stop pressed leather weights on the strings and modified the sound to make it resemble that of the harpsichord. Johannes Pohlmann used a swell pedal on his pianos to raise and lower the lid of the piano to control the overall volume. Instead of raising and lowering the lid, the swell was sometimes operated by opening and closing slots in the sides of the piano case.
Often called "the father of the pianoforte '', Muzio Clementi was a composer and musician who founded a piano - building company, and was active in the designing of the pianos that his company built. The Clementi piano firm was later renamed Collard and Collard in 1830, two years before Clementi 's death. Clementi added a feature called a harmonic swell. "(This pedal) introduced a kind of reverberation effect to give the instrument a fuller, richer sound. The effect uses the sympathetic vibrations set up in the untuned non-speaking length of the strings. Here the soundboard is bigger than usual to accommodate a second bridge (the ' bridge of reverberation '). ''
The Dolce Campana pedal pianoforte c. 1850, built by Boardman and Gray, New York, demonstrated yet another creative way of modifying the piano 's sound. A pedal controlled a series of hammers or weights attached to the soundboard that would fall onto an equal number of screws, and created the sound of bells or the harp. The Fazioli concert grand piano model F308 includes a fourth pedal to the left of the traditional three pedals. This pedal acts similarly to the "half - blow '' pedal on an upright piano, in that it collectively moves the hammers somewhat closer to the strings to reduce the volume without changing the tone quality, as the una - corda does. The F308 is the first modern concert grand to offer such a feature.
In the early years of piano development, many novelty pedals and stops were experimented with before finally settling on the three that are now used on the modern piano. Some of these pedals were meant to modify levels of volume, color, or timbre, while others were used for special effects, meant to imitate other instruments. Banowetz speaks of these novelty pedals: "At their worst, these modifications threatened to make the piano into a vulgar musical toy. ''
During the late 18th century, Europeans developed a love for Turkish band music, and the Turkish music style was an outgrowth of this. According to Good, this possibly began "... when King Augustus the Strong of Poland received the gift of a Turkish military band at some time after 1710. '' "Janissary '' or "janizary '' refers to the Turkish military band that used instruments including drums, cymbals, and bells, among other loud, cacophonous instruments. Owing to the desire of composers and players to imitate the sounds of the Turkish military marching bands, piano builders began including pedals on their pianos by which snare and bass drums, bells, cymbals, or the triangle could be played by the touch of a pedal while simultaneously playing the keyboard.
Up to six pedals controlled all these sound effects. Alfred Dolge states, "The Janizary pedal, one of the best known of the early pedal devices, added all kinds of rattling noises to the normal piano performance. It could cause a drumstick to strike the underside of the soundboard, ring bells, shake a rattle, and even create the effect of a cymbal crash by hitting several bass strings with a strip of brass foil. '' Mozart 's Rondo alla Turca, from Sonata K. 331, written in 1778, was sometimes played using these Janissary effects.
The sustaining, or damper stop, was first controlled by the hand, and was included on some of the earliest pianos ever built. Stops operated by hand were inconvenient for the player, who would have to continue playing with one hand while operating the stop with the other. If this was not possible, an assistant would be used to change the stop, just as organists do even today. Johannes Zumpe 's square piano, made in London in 1767, had two hand stops in the case, which acted as sustaining stops for the bass strings and the treble strings.
The knee lever to replace the hand stop for the damper control was developed in Germany sometime around 1765. According to David Crombie, "virtually all the fortepianos of the last three decades of the eighteenth century were equipped with a knee lever to raise and lower the dampers... ''
Sometime around 1777, Mozart had an opportunity to play a piano built by Johann Andreas Stein, who had been an apprentice of Gottfried Silbermann. This piano had knee levers, and Mozart speaks highly of their functionality in a letter: "The machine which you move with the knee is also made better by (Stein) than by others. I scarcely touch it, when off it goes; and as soon as I take my knee the least bit away, you ca n't hear the slightest after - sound. ''
The only piano Mozart ever owned was one by Anton Walter, c. 1782 - 1785. It had two knee levers; the one on the left raised all the dampers, while the one on the right raised only the treble dampers. A moderator stop to produce a softer sound (see Other pedals, above) was centrally above the keyboard.
Although there is some controversy among authorities as to which piano builder was actually the first to employ pedals rather than knee levers, one could say that pedals are a characteristic first developed by manufacturers in England. James Parakilas states that the damper stop was introduced by Gottfried Silbermann, who was the first German piano builder. Parakilas, however, does not specify whether Silbermann 's damper stop was in the form of a hand lever, knee lever, or pedal. However, many successful English piano builders had apprenticed with Silbermann in Germany, and then left for London as a result of the disturbances of the Seven Years ' War in Saxony. Among those who re-located to England were Johannes Zumpe, Americus Backers, and Adam Beyer. Americus Backers, Adam Beyer, and John Broadwood, all piano builders in England, are credited as being among the first to incorporate the new feature. Americus Backers ' 1772 grand, his only surviving instrument, has what are believed to be original pedals, and is most likely the first piano to use pedals rather than knee levers. A square piano built by Adam Beyer of London in 1777 has a damper pedal, as do pianos built by John Broadwood, ca. 1783.
After their invention, pedals did not immediately become the accepted form for piano stops. German and Viennese builders continued to use the knee levers for quite some time after the English were using pedals. Pedals and knee levers were even used together on the same instrument on a Nannette Streicher grand built in Vienna in 1814. This piano had two knee levers that were Janissary stops for bell and drum, and four pedals for una corda, bassoon, dampers, and moderator.
Throughout his lifetime, Ludwig van Beethoven owned several different pianos by different makers, all with different pedal configurations. His pianos are fine examples of some experimental and innovative pedal designs of the time. In 1803, the French piano company Erard gave him a grand, "(thought to be) the most advanced French grand piano of the time... It had... four pedals, including an una - corda, as well as a damper lift, a lute stop, and a moderator for softening the tone ''.
Beethoven 's Broadwood grand, presented as a gift to him from the Broadwood company in 1817, had an una corda pedal and a split damper pedal -- one half was the damper for the treble strings, the other was for the bass strings. In an effort to give Beethoven an instrument loud enough for him to hear when his hearing was failing, Conrad Graf designed an instrument in 1824 especially for Beethoven with quadruple stringing instead of triple. Graf only made three instruments of this nature. David Crombie describes this instrument: "by adding an extra string, Graf attempted to obtain a tone that was richer and more powerful, though it did n't make the instrument any louder than his Broadwood ''. This extra string would have provided a bigger contrast when applying keyboard - shifting stops, because this keyboard shift pedal moved the action from four to two strings. Crombie states, "these provide a much wider control over the character of the sound than is possible on Graf 's usual instruments ''.
This piano had five pedals: a keyboard shift (quad to due corde), bassoon, moderator 1, moderator 2, and dampers. A different four - string system, aliquot stringing, was invented by Julius Blüthner in 1873, and is still a feature of Blüthner pianos. The Blüthner aliquot system uses an additional (hence fourth) string in each note of the top three piano octaves. This string is slightly higher than the other three strings so that it is not struck by the hammer. Whenever the hammer strikes the three conventional strings, the aliquot string vibrates sympathetically.
As a composer and pianist, Beethoven experimented extensively with pedal. His first marking to indicate use of a pedal in a score was in his first two piano concertos, in 1795. Earlier than this, Beethoven had called for the use of the knee lever in a sketch from 1790 -- 92; "with the knee '' is marked for a series of chords. According to Joseph Banowetz, "This is the earliest - known indication for a damper control in a score ''. Haydn did not specify its use in a score until 1794. All in all, there are nearly 800 indications for pedal in authentic sources of Beethoven 's compositions, making him by far the first composer to be highly prolific in pedal usage.
Along with the development of the pedals on the piano came the phenomenon of the pedal piano, a piano with a pedalboard. Some of the early pedal pianos date back to 1815. The pedal piano developed partially for organists to be able to practice pedal keyboard parts away from the pipe organ. In some instances, the pedal piano was actually a special type of piano with a built - in pedal board and a higher keyboard and bench, like an organ. Other times, an independent pedal board and set of strings could be connected to a regular grand piano. Mozart had a pedalboard made for his piano. His father, Leopold, speaks of this pedalboard in a letter: "(the pedal) stands under the instrument and is about two feet longer and extremely heavy ''.
Alfred Dolge writes of the pedal mechanisms that his uncle, Louis Schone, constructed for both Robert Schumann and Felix Mendelssohn in 1843. Schumann preferred the pedal board to be connected to the upright piano, while Mendelssohn had a pedal mechanism connected to his grand piano. Dolge describes Mendelssohn 's pedal mechanism: "The keyboard for pedaling was placed under the keyboard for manual playing, had 29 notes and was connected with an action placed at the back of the piano where a special soundboard, covered with 29 strings, was built into the case ''.
In addition to using his pedal piano for organ practice, Schumann composed several pieces specifically for the pedal piano. Among these compositions are Six Studies Op. 56, Four Sketches Op. 58, and Six Fugues on Bach Op. 60. Other composers who used pedal pianos were Mozart, Liszt, Alkan and Gounod.
The piano, and specifically the pedal mechanism and stops underwent much experimentation during the formative years of the instrument, before finally arriving at the current pedal configuration. Banowetz states, "These and a good number of other novelty pedal mechanisms eventually faded from existence as the piano grew to maturity in the latter part of the nineteenth century, finally leaving as survivors of this tortuous evolution only today 's basic three pedals ''.
The location of pedals on the piano was another aspect of pedal development that fluctuated greatly during the evolution of the instrument. Piano builders were quite creative with their pedal placement on pianos, which sometimes gave the instruments a comical look, compared to what is usually seen today. The oldest surviving English grand, built by Backers in 1772, and many Broadwood grands had two pedals, una corda and damper, which were attached to the legs on the left and right of the keyboard. James Parakilas describes this pedal location as giving the piano a "pigeon - toed look '', for they turned in slightly. A table piano built by Jean - Henri Pape in the mid-19th century had pedals on the two front legs of the piano, but unlike those on the Backers and Broadwood, these pedals faced straight in towards each other rather than out.
A particularly unusual design is demonstrated in the "Dog Kennel '' piano. It was built by Sebastien Mercer in 1831, and was nicknamed the "Dog Kennel '' piano because of its shape. Under the upright piano where the modern pedals would be located is a semi-circular hollow space where the feet of the player could rest. The una corda and damper pedals are at the left and right of this space, and face straight in, like the table piano pedals. Eventually during the 19th century, pedals were attached to a frame located centrally underneath the piano, to strengthen and stabilize the mechanism. According to Parakilas, this framework on the grand piano "often took the symbolic shape and name of a lyre '', and it still carries the name "pedal lyre '' today.
Although the piano and its pedal configuration has been in its current form since the late 19th century, there is a possibility that sometime in the future the pedal configuration may change again. In 1987, the Fazioli piano company in Sacile, Italy, designed the longest piano made until this time (10 feet 2 inches (3.10 m)). This piano has four pedals: damper, sostenuto, una corda, and half - blow.
|
who sings the song all i need is the air that i breathe | The Air That I Breathe - wikipedia
"The Air That I Breathe '' is a ballad written by Albert Hammond and Mike Hazlewood, initially recorded by Albert Hammond on his 1972 album It Never Rains in Southern California.
This song was a major hit for The Hollies in early 1974, reaching number two in the United Kingdom. In the summer of 1974, the song reached number six in the United States on the Billboard Hot 100 chart and number three on the Adult Contemporary chart. In Canada, the song peaked at number five on the RPM Magazine charts. The audio engineering for "The Air That I Breathe '' was done by Alan Parsons. It proved to be the Hollies ' final charting hit in the US.
The 1992 Radiohead song "Creep '' uses a similar chord progression and shares some melodic content with "The Air That I Breathe ''. As a result, Hammond and Hazlewood sued Radiohead for plagiarism and won.
|
when does season 8 of pretty little liars come out | List of Pretty Little Liars episodes - wikipedia
Pretty Little Liars is a TV series which premiered on ABC Family on June 8, 2010. Developed by I. Marlene King, the series is based on the Pretty Little Liars book series by Sara Shepard. The series follows the lives of four girls, Aria Montgomery, Hanna Marin, Emily Fields, and Spencer Hastings, whose clique falls apart after the disappearance of their leader, Alison DiLaurentis. One year later, the estranged friends are reunited as they begin receiving messages from a mysterious figure named "A '' who threatens to expose their deepest secrets, including ones they thought only Alison knew.
After an initial order of 10 episodes, ABC Family ordered an additional 12 episodes for season one on June 28, 2010. The first season 's "summer finale '' aired on August 10, 2010, with the remaining 12 episodes began airing on January 3, 2011. On January 11, 2011, ABC Family picked up Pretty Little Liars for a second season of 24 episodes. It began airing on Tuesday, June 14, 2011. It was announced in June that a special Halloween - themed episode would air as part of ABC Family 's 13 Nights of Halloween line - up. This increased the episode count from 24 to 25. On November 29, 2011, ABC Family renewed the series for a third season, consisting of 24 episodes. On October 4, 2012, ABC Family renewed the series for a fourth season, consisting of 24 episodes. On March 26, 2013, ABC Family renewed the series for a fifth season. On January 7, 2014, showrunner I. Marlene King wrote on Twitter that season 5 will have 25 episodes, including a holiday - themed episode. On June 10, 2014, it was announced that the show was renewed for an additional 2 seasons. Season 6 will air in mid-2015, and season 7 will air in mid-2016. It was announced by I. Marlene King that the sixth and the seventh season will consist of 20 episodes each. It was announced on August 29, 2016, that the show would be ending after the seventh season, and that the second half of the season would begin airing April 18, 2017.
During the course of the series, 160 episodes of Pretty Little Liars aired over seven seasons.
|
who has won the most titles in tennis | Tennis Players with most Titles in the Open Era - wikipedia
This page lists all tennis players who have won at least 30 top - level professional tournament titles since the Open Era began in 1968.
Titles can be any combination of singles and doubles, so the combined total is the default sorting of the lists. The current top - level events are on the ATP World Tour for men and the WTA Tour for women.
|
how many seasons of vampire knight anime are there | List of vampire Knight episodes - wikipedia
The episodes of the Vampire Knight anime adaptation is based on the manga series of the same name written by Matsuri Hino. They are directed by Kiyoko Sayama, and produced by Studio Deen and Nihon Ad Systems. The plot of the episodes follows Yuki Cross, a student at the Cross Academy, where she acts as a guardian of the "Day Class '' along with vampire hunter Zero Kiryu from the secret vampires of the "Night Class '' led by Kaname Kuran.
The first season premiered on TV Tokyo in Japan on April 7, 2008, and ran for thirteen episodes until the season 's conclusion on June 30, 2008. The episodes were aired at later dates on TV Aichi, TV Hokkaido, TV Osaka, TV Setōchi, and TVQ Kyushu Broadcasting Co. The second season, named Vampire Knight Guilty, premiered on the same station on October 6, 2008 and ran until its conclusion on December 29, 2008. As of December 2008, five DVD compilations of the first season have been released by Aniplex and Sony Pictures between July 23, 2008 and November 26, 2008. The first DVD compilation for the second season was released by Aniplex on January 28, 2009, and the second compilation was released on February 25, 2009.
The series is licensed by Viz Media, who is streaming episodes of their Viz Anime website and broadcasting dubbed episodes on their online network, Neon Alley.
Four pieces of theme music are used for the episodes: two opening themes and two closing themes. The opening theme for the first season is "Futatsu no Kodō to Akai Tsumi '' (ふたつ の 鼓動 と 赤い 罪) by On / Off, and the closing theme is "Still Doll '' by Kanon Wakeshima. The opening theme of the second season is "Rinne Rondo '' (輪廻 - ロンド -) by On / Off, and the ending theme is "Suna no Oshiro '' (砂 の お 城) by Kanon Wakeshima. The original soundtrack for the series was released on June 25, 2008.
The Region 2 DVD compilations of the Vampire Knight episodes are released in Japan by Aniplex and Sony Pictures. As of November 2008, five DVD compilations have been released, containing the entire first season. The first DVD compilation for the second season was released on January 28, 2009, and the last was released on May 27, 2009.
|
how many maid of the mist boats are there | Maid of the Mist - wikipedia
The Maid of the Mist is a boat tour of Niagara Falls, starting and ending on the American side, crossing briefly into Ontario during a portion of the trip. (The actual boats used are also named Maid of the Mist, followed by a different Roman numeral in each case.) The boat starts off at a calm part of the Niagara River, near the Rainbow Bridge, and takes its passengers past the American and Bridal Veil Falls, then into the dense mist of spray inside the curve of the Horseshoe Falls, also known as the Canadian Falls. The tour starts and returns on the U.S. side of the river.
The original Maid of the Mist was built at a landing near Niagara Falls on the American side of the border. The boat was christened in 1846 as a border - crossing ferry; its first trip was on September 18, 1846. The two - stage barge - like steamer was designed primarily as a link for a proposed ferry service between New York City and Toronto. It was a 72 - foot - long side - wheeler with an 18 - foot beam which was powered by steam produced from a wood - and coal - fired boiler. It could carry up to 100 passengers.
The ferry did well until 1848, when the opening of a suspension bridge between the United States and Canada cut into the ferry traffic. It was then that the owners decided to make the journey a sightseeing trip, plotting a journey closer to the Falls.
The present day Maid of the Mist Corporation was formed in 1884 by Captain R.F. Carter and Frank LaBlond, who invested in a new Maid that would launch in 1885. Captain Carter and Mr. LaBlond hired Alfred H. White from Port Robinson, Ontario to build the new ship. A letter in the archives of the Buffalo Historical Society from Mr. LaBlond to Alfred White says that they are well pleased with the vessel and asks Alfred to add a wale onto the boat.
Maid of the Mist is known for her role in the 9 July 1960 rescue of Roger Woodward, a seven - year - old boy who became the first person to survive a plunge over the Horseshoe Falls with nothing but a life jacket. The boat involved in the rescue (Maid of the Mist II) was retired from service in 1983 and relocated to the Amazon River, where she served as a missionary ship for some years.
Access to the river - level attraction on the Canadian side was provided by the Maid of the Mist Incline Railway, a funicular railway, between 1894 and 1990, to travel between street level and the boat dock. As this service proved increasingly inadequate in transporting the growing passenger base of the 1990s, four high - speed elevators replaced the railway in 1991. On the American side, the dock is reached by four elevators enclosed in the observation tower.
The service is run by Maid of the Mist Corp. of Niagara Falls, New York. Maid of the Mist has been owned by the Glynn family since 1971.
James V. Glynn is chairman and chief executive officer of Maid of the Mist Corp. Glynn joined Maid of the Mist in 1950 as a ticket seller, and purchased the company in 1971. During his tenure, Maid of the Mist expanded operations, achieving ten-fold growth.
While on his 1860 tour of Canada, Albert Edward, Prince of Wales (later King Edward VII), rode on Maid of the Mist.
In June 1952, Marilyn Monroe rode the Maid of the Mist while in Niagara Falls to film the movie Niagara.
In 1991 The Prince and Princess of Wales, and their two young sons, Princes William and Harry, rode on Maid of the Mist.
Mikhail Gorbachev was a passenger in 1983.
Maid of the Mist I
A second Maid of the Mist I was built in 1854
Maid of the Mist I, this one sailed closer to Horseshoe Falls than any had previously.
Maid of the Mist II
These boats sailed the lower river until April 22, 1955, when they burned in a pre-season accident. Later that year, they were replaced by two new ships. The type and style of the boats is still seen today; they were made of steel and powered by diesel engines.
Maid of the Mist I
Maid of the Mist II
More ships have been added to the fleet.
Maid of the Mist III
Maid of the Mist IV
Maid of the Mist V
Maid of the Mist VI
Maid of the Mist VII
Little Maid
A partial history of Maid of the Mist is featured in the IMAX film Niagara: Miracles, Myths and Magic.
|
how tall is flip or flop atlanta host | Flip or Flop Atlanta - Wikipedia
Flip or Flop Atlanta is a television series airing on HGTV hosted by real estate agents Ken and Anita Corsini. It is a spin - off of the HGTV series Flip or Flop. It premiered on July 20, 2017 and is held in Atlanta, Georgia area. On August 21, 2017, HGTV announced Flip or Flop Atlanta would be renewed for a second season, with 14 episodes, which is expected to debut in 2018.
On March 1, 2017, HGTV announced that "Flip or Flop '' would expand to Atlanta, Georgia. The show features a new couple, Ken and Anita Corsini, flipping houses in the Atlanta area; they fill the same roles that Tarek and Christina do in the original series.
Ken Corsini is a full - time real estate investor who lives in the suburbs of Atlanta with his wife Anita and their three children. Ken founded Georgia Residential Partners in 2005 and since that time has bought and sold over 500 properties. He has a bachelor 's degree in risk management from the University of Georgia as well as a master 's degree in residential development from Georgia Tech.
Ken and Anita run a family business, renovating many houses a year in the Atlanta metro area.
|
is the mandible part of the axial or appendicular skeleton | Axial skeleton - wikipedia
The axial skeleton is the part of the skeleton that consists of the bones of the head and trunk of a vertebrate. In the human skeleton, it consists of 80 bones and is composed of six parts; the skull bones, the ossicles of the middle ear, the hyoid bone, the rib cage, sternum and the vertebral column. The axial skeleton together with the appendicular skeleton form the complete skeleton. Another definition of axial skeleton is the bones including the vertebrae, sacrum, coccyx, ribs, and sternum.
Flat bones house the brain and other vital organs. This article mainly deals with the axial skeletons of humans; however, it is important to understand the evolutionary lineage of the axial skeleton. The human axial skeleton consists of 80 different bones. It is the medial core of the body and connects the pelvis to the body, where the appendicular skeleton attaches. As the skeleton grows older, the bones get weaker, with the exception of the skull. The skull remains strong to protect the brain from injury.
The human skull consists of the cranium and the facial bones. The cranium holds and protects the brain in a large space called the cranial vault. The cranium is formed from eight plate - shaped bones which fit together at meeting points (joints) called sutures. In addition there are 14 facial bones which form the lower front part of the skull. Together the 22 bones that compose the skull form additional, smaller spaces besides the cranial vault, such as the cavities for the eyes, the internal ear, the nose, and the mouth. The most important facial bones include the jaw or mandible, the upper jaw or maxilla, the zygomatic or cheek bone, and the nasal bone.
Humans are born with separate plates which later fuse to allow flexibility as the skull passes through the pelvis and birth canal during birth. During development the eight separate plates of the immature bones fuse together into one single structure known as the Skull. The only bone that remains separate from the rest of the skull is the mandible.
The rib cage is composed of 12 pairs of ribs plus the sternum for a total of 25 separate bones. The rib cage functions as protection for the vital organs such as the heart and lungs. The ribs are shaped like crescents, with one end flattened and the other end rounded. The rounded ends are attached at joints to the thoracic vertebrae at the back and the flattened ends come together at the sternum, in the front.
The upper seven pairs of ribs attach to the sternum with costal cartilage and are known as "true ribs. '' The 8th through 10th ribs have non-costal cartilage which connects them to the ribs above. The last two ribs are called "floating ribs '' because they do not attach to the sternum or to other ribs and simply "hang free. '' The length of each rib increases from number one to seven and then decreases until rib pair number 12. The first rib is the shortest, broadest, flattest, and most curved.
At birth the majority of humans have 33 separate vertebrae. However, during normal development several vertebrae fuse together, leaving a total of 24, in most cases. The confusion about whether or not there are 32 - 34 vertebrae stems from the fact that the two lowest vertebrae, the sacrum and the coccyx, are single bones made up of several smaller bones which have fused together. This is how the vertebrae are counted: 24 separate vertebrae and the sacrum, formed from 5 fused vertebrae and the coccyx, formed from 3 - 5 fused vertebrae. If you count the coccyx and sacrum each as one vertebra, then there are 26 vertebrae. If the fused vertebrae are all counted separately, then the total number of vertebrae comes to between 32 and 34.
The vertebral column consists of 5 parts. The most cranial (uppermost) part is made up by the cervical vertebrae (7), followed by thoracic (12), lumbar (5), sacral (4 -- 5) and coccygeal vertebrae (3 -- 4).
Cervical vertebrae make up the junction between the vertebral column and the cranium. Sacral and coccygeal vertebras are fused and thus often called "sacral bone '' or "coccygeal bone '' as unit. The sacral bone makes up the junction between the vertebral column and the pelvic bones.
The word "Axial '' is taken from the word "axis '' and refers to the fact that the bones are located close to or along the central "axis '' of the body.
The axial skeleton consists of 80 bones.
Axial skeleton (shown in red).
Illustration depicting anterior and posterior view of axial skeleton
|
the chmod command can be used on a file by | Chmod - wikipedia
In Unix - like operating systems, chmod is the command and system call which may change the access permissions to file system objects (files and directories). It may also alter special mode flags. The request is filtered by the umask. The name is an abbreviation of change mode.
A chmod command first appeared in AT&T Unix version 1.
As systems grew in number and types of users, access control lists were added to many file systems in addition to these most basic modes to increase flexibility.
Throughout this document user refers to the owner of the file as a reminder that the symbolic form of the command uses u.
chmod [options] mode[,mode] file1 [file2...]
Commonly implemented options include -R, which applies the change recursively to directories and their contents; many implementations also provide additional options such as -v (report files as they are processed) and -f (suppress most error messages).
If a symbolic link is specified, the target object is affected. File modes directly associated with symbolic links themselves are typically not used.
To view the file mode, the ls or stat commands may be used:
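A minimal illustration of such a listing follows (the script name, size, and date are hypothetical; the owner dgerman and group staff are those described in the next paragraph):
ls -l findPhoneNumbers.sh
-rwxr-xr-- 1 dgerman staff 823 Dec 16 15:03 findPhoneNumbers.sh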
The r, w, and x characters specify read, write, and execute access. The first character of the ls display denotes the object type; a hyphen represents a plain file. In a listing such as the one above, the script can be read, written to, and executed by the user dgerman, read and executed by members of the staff group, and read by others.
The main parts of the chmod permissions:
Example: drwxrwx---
To the right of the "d '':
the left three characters rwx define permissions of the user (i.e. the owner).
the middle three characters rwx define permissions of the Group.
the right three characters --- define the permissions of Others. In this example, users other than the owning user and the members of the Group have no permission to access the file.
The chmod numerical format accepts up to four octal digits. The three rightmost digits refer to permissions for the file user, the group, and others. The optional leading digit, when four digits are given, specifies the special setuid, setgid, and sticky flags. Each of the three rightmost digits represents a binary value whose bits control read, write and execute respectively, where 1 means allow and 0 means deny; the value of each digit can thus be formed by adding 4 for read, 2 for write, and 1 for execute. A complete command using this notation is sketched after the breakdown below.
For example, 754 would allow:
read, write, and execute for the user, as the binary value of 7 is 111, meaning all bits are on.
read and execute for the Group, as the binary value of 5 is 101, meaning read and execute are on but write is off.
read only for Others, as the binary value of 4 is 100, meaning that only read is on.
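Applied as a command (the filename is hypothetical), this mode would be set as follows:
chmod 754 deploy.sh
This corresponds to the symbolic form u=rwx,g=rx,o=r.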
Change permissions to permit members of the programmers group to update a file, for example (the filename here is illustrative, not taken from the original example):
chmod 664 sharedFile
Since the setuid, setgid and sticky bits are not specified, this is equivalent to:
chmod 0664 sharedFile
The chmod command also accepts a finer - grained symbolic notation, which allows modifying specific modes while leaving other modes untouched. The symbolic mode is composed of three components, which are combined to form a single string of text:
Classes of users are used to distinguish to whom the permissions apply. If no classes are specified "all '' is implied. The classes are represented by one or more of the following letters: u (the user who owns the file), g (the file 's group), o (others, i.e. users who are neither the owner nor members of the group), and a (all three of the above).
The chmod program uses an operator to specify how the modes of a file should be adjusted. The following operators are accepted: + (adds the specified modes to the specified classes), - (removes the specified modes), and = (sets the specified modes exactly, clearing any modes not mentioned).
The modes indicate which permissions are to be granted or removed from the specified classes. There are three basic modes which correspond to the basic permissions: r (read), w (write), and x (execute).
Multiple changes can be specified by separating multiple symbolic modes with commas (without spaces). If no class is specified, chmod consults the umask: the effect is as if "a '' had been specified, except that bits which are set in the umask are not affected.
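For example (the filename is hypothetical), several changes can be combined into a single invocation:
chmod u+x,g-w,o= script.sh
This adds execute permission for the user, removes write permission from the group, and clears all permissions for others.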
Add write permission (w) to the group 's (g) access modes of a directory, allowing users in the same group to add files, for example (re-using the referenceLib directory named in the last example below):
chmod g+w referenceLib
Remove write permissions (w) for all classes (a), preventing anyone from writing to the file, for example (the filename is illustrative):
chmod a-w sharedFile
Set the permissions for the user and the group (ug) to read and execute (rx) only (no write permission) on referenceLib, preventing anyone from adding files:
chmod ug=rx referenceLib
The chmod command is also capable of changing the additional permissions or special modes of a file or directory. The symbolic modes use ' s ' to represent the setuid and setgid modes, and ' t ' to represent the sticky mode. The modes are only applied to the appropriate classes, regardless of whether or not other classes are specified.
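As an illustration (the directory name is hypothetical), the special bits can be set symbolically as follows:
chmod g+s sharedDir
chmod +t sharedDir
On most systems the first command causes new files created in sharedDir to inherit the directory 's group, while the second (the sticky bit) restricts deletion of files within the directory to their owners, as is conventionally done for shared directories such as /tmp.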
Most operating systems support the specification of special modes using octal modes, but some do not. On these systems, only the symbolic modes can be used.
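Where octal specification is supported, the special bits occupy an optional leading fourth digit (4 for setuid, 2 for setgid, 1 for sticky). For example (the directory name is hypothetical):
chmod 2775 sharedDir
This is equivalent to the symbolic form u=rwx,g=rwxs,o=rx.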
The POSIX standard defines the following function prototype, declared in the sys/stat.h header:
int chmod(const char *path, mode_t mode);
The mode parameter is a bitfield composed of various flags: S_IRUSR (0400), S_IWUSR (0200) and S_IXUSR (0100) grant read, write and execute permission to the owner; S_IRGRP (040), S_IWGRP (020) and S_IXGRP (010) do the same for the group; S_IROTH (04), S_IWOTH (02) and S_IXOTH (01) for others; and S_ISUID (04000), S_ISGID (02000) and S_ISVTX (01000) set the setuid, setgid and sticky bits respectively.
|
which hemisphere of the cerebral cortex dominates processing of speech | Language processing in the brain - Wikipedia
Language processing refers to the way humans use words to communicate ideas and feelings, and how such communications are processed and understood. Thus it is how the brain creates and understands language. Most recent theories consider that this process is carried out entirely by and inside the brain; however, environmental factors play a role in the development of language processing as well.
Language processing is considered one of the most characteristic abilities of the human species, yet much about it remains unknown and there is wide scope for further research. The brain regions involved also shape the way learners of a language learn and think.
Most of the knowledge acquired to date on the subject has come from patients who have suffered some type of significant head injury, whether external (wounds, bullets) or internal (strokes, tumors, degenerative diseases).
Studies have shown that most of the language processing functions are carried out in the cerebral cortex. The essential function of the cortical language areas is symbolic representation. Even though language exists in different forms, all of them are based on symbolic representation.
Much of the language function is processed in several association areas, and there are two well - identified areas that are considered vital for human communication: Wernicke 's area and Broca 's area. These areas are usually located in the dominant hemisphere (the left hemisphere in 97 % of people) and are considered the most important areas for language processing. This is why language is considered a localized and lateralized function.
However, the non-dominant hemisphere also participates in this cognitive function, and there is ongoing debate on the level of participation of the less - dominant areas. The non-dominant hemisphere may be particularly involved in processing the prosody of spoken language.
Other factors are believed to be relevant to language processing and verbal fluency, such as cortical thickness, participation of prefrontal areas of the cortex, and communication between right and left hemispheres.
Wernicke 's area is classically located in the posterior section of the superior temporal gyrus of the dominant hemisphere (Brodmann area 22), with some branches extending around the posterior section of the lateral sulcus, in the parietal lobe.
Wernicke 's area is located between the auditory cortex and the visual cortex. The former is located in the transverse temporal gyrus (Brodmann areas 41 and 42), in the temporal lobe, while the latter is located in the posterior section of the occipital lobe (Brodmann areas 17, 18 and 19).
While the dominant hemisphere is in charge of most of language comprehension, recent studies have demonstrated that the homologous area in the non-dominant hemisphere (the right hemisphere in 97 % of people) participates in the comprehension of ambiguous words, whether they are written or heard.
Receptive speech has traditionally been associated with Wernicke 's area of the posterior superior temporal gyrus (STG) and surrounding areas. Current models of speech perception include greater Wernicke 's area, but also implicate a "dorsal '' stream that includes regions also involved in speech motor processing.
First identified by Carl Wernicke in 1874, its main function is the comprehension of language and the ability to communicate coherent ideas, whether the language is vocal, written, signed.
Broca 's area is usually formed by the pars triangularis and the pars opercularis of the inferior frontal gyrus (Brodmann areas 44 and 45). It follows Wernicke 's area, and as such they both are usually located in the left hemisphere of the brain.
Broca 's area is involved mostly in the production of speech. Given its proximity to the motor cortex, neurons from Broca 's area send signals to the larynx, tongue and mouth motor areas, which in turn send the signals to the corresponding muscles, thus allowing the creation of sounds.
A recent analysis of the specific roles of these sections of the left inferior frontal gyrus in verbal fluency indicates that Brodmann area 44 (pars opercularis) may subserve phonological fluency, whereas the Brodmann area 45 (pars triangularis) may be more involved in semantic fluency. Further analysis shows that Broca 's area may have less involvement with information for producing individual words, but, instead, Broca 's area is shown to coordinate language processing information for speech production on a greater scale.
The arcuate fasciculus is the area of the brain between Wernicke 's area and Broca 's area that connects the two through bundles of nerve fibers. This portion of the brain serves as a transit center between the two areas dealing most largely with speech and communication.
Recent studies have shown that the rate of increase in raw vocabulary fluency was positively correlated with the rate of cortical thinning. In other words, greater performance improvements were associated with greater thinning. This is more evident in left hemisphere regions, including the left lateral dorsal frontal and left lateral parietal regions: the usual locations of Broca 's area and Wernicke 's area, respectively.
After Sowell 's studies, it was hypothesized that increased performance on the verbal fluency test would correlate with decreased cortical thickness in regions that have been associated with language: the middle and superior temporal cortex, the temporal -- parietal junction, and inferior and middle frontal cortex. Additionally, other areas related to sustained attention for executive tasks were also expected to be affected by cortical thinning.
One theory for the relation between cortical thinning and improved language fluency is the effect that synaptic pruning has in signaling between neurons. If cortical thinning reflects synaptic pruning, then pruning may occur relatively early for language - based abilities. The functional benefit would be a tightly honed neural system that is impervious to "neural interference '', avoiding undesired signals running through the neurons which could possibly worsen verbal fluency.
The strongest correlations between language fluency and cortical thicknesses were found in the temporal lobe and temporal -- parietal junction. Significant correlations were also found in the auditory cortex, the somatosensory cortex related to the organs responsible for speech (lips, mouth), and frontal and parietal regions related to attention and performance monitoring. The frontal and parietal regions are also evident in the right hemisphere.
Environmental effects, such as social and economic factors and quality of input, affect the development language processing.
Differences in family socioeconomic status affects language development, leading to those from high - socioeconomic - status families having greater efficiency with language processing. These differences in language processing are evident in children as young as 18 months. Children from lower socioeconomic statuses who receive less cognitive stimulation from their environment are at a greater disadvantage with language processing.
Research on bilingual speakers shows that information about both languages is activated in the brain even when a speaker is only using one language. Some research shows that, because bilingual speakers access linguistic information in their brain differently from monolingual speakers, they have an advantage in language processing, and they outperform monolingual speakers in reaction times for language processing and then producing relevant language in certain tasks. However, other studies have found that this may not be applicable to all bilinguals.
The use of child - directed speech affects this development in children. Those who are exposed to a greater quantity of child - directed speech develop greater proficiency with language processing.
Acoustic stimuli are received by the auditive organ and are converted to bioelectric signals on the organ of Corti. These electric impulses are then transported through scarpa 's ganglion (vestibulocochlear nerve) to the primary auditory cortex, on both hemispheres. Each hemisphere treats it differently, nevertheless: while the left side recognizes distinctive parts such as phonemes, the right side takes over prosodic characteristics and melodic information.
The signal is then transported to Wernicke 's area on the left hemisphere (the information that was being processed on the right hemisphere is able to cross through inter-hemispheric axons), where the already noted analysis takes part.
During speech comprehension, activations are focused in and around Wernicke 's area. A large body of evidence supports a role for the posterior superior temporal gyrus in acoustic -- phonetic aspects of speech processing, whereas more ventral sites such as the posterior middle temporal gyrus (pMTG) are thought to play a higher linguistic role linking the auditory word form to broadly distributed semantic knowledge.
Also, the pMTG site shows significant activation during the semantic association interval of the verb generation and picture naming tasks, in contrast to the pSTG sites that remain at or below baseline levels during this interval. This is consistent with a greater lexical -- semantic role for pMTG relative to a more acoustic -- phonetic role for pSTG.
From Wernicke 's area, the signal is taken to Broca 's area through the arcuate fasciculus. Speech production activations begin prior to verbal response in the peri-Rolandic cortices (pre - and postcentral gyri). The role of ventral peri-Rolandic cortices in speech motor functions has long been appreciated (Broca 's area). The superior portion of the ventral premotor cortex also exhibited auditory responses preferential to speech stimuli and are part of the dorsal stream.
Involvement of Wernicke 's area in speech production has been suggested and recent studies document the participation of traditional Wernicke 's area (mid-to posterior superior temporal gyrus) only in post-response auditory feedback, while demonstrating a clear pre-response activation from the nearby temporal - parietal junction (TPJ).
It is believed that the common route to speech production is through verbal and phonological working memory using the same dorsal stream areas (temporal - parietal junction, sPMv) implicated in speech perception and phonological working memory. The observed pre-response activations at these dorsal stream sites are suggested to subserve phonological encoding and its translation to the articulatory score for speech. Post-response Wernicke 's activations, on the other hand, are involved strictly in auditory self - monitoring.
Several authors support a model in which the route to speech production runs essentially in reverse of speech perception, as in going from conceptual level to word form to phonological representation.
Early auditory processing and word recognition take place in inferior temporal areas ("what '' pathway), where the signal arrives from the primary and secondary visual cortices. The representation of the object in the "what '' pathway and nearby inferior temporal areas itself constitutes a major aspect of the conceptual -- semantic representation. Additional semantic and syntactic associations are also activated, and during this interval of highly variable duration (depending on the subject, the difficulty of the current object, etc.), the word to be spoken is selected. This involves some of the same sites -- prefrontal cortex (PFC), supramarginal gyrus (SMG), and other association areas -- involved in the semantic selection stage of verb generation.
The acquired language disorders that result from damage to these brain areas are called aphasias. Depending on the location of the damage, aphasias can present in several different ways.
The aphasias listed below are examples of acute aphasias which can result from brain injury or stroke.
|
who won comeback kitchen food network star 2018 | Food Network Star (season 14) - wikipedia
The fourteenth season of the American reality television series titled Food Network Star premiered on June 10, 2018 on Food Network. Food Network chefs Bobby Flay and Giada de Laurentiis returned as judges.
(in order of elimination)
^ 1: Manny was eliminated midway through the finale.
This season of Star Salvation is hosted by Alex Guarnaschelli.
|
where did all the vegas knights players come from | 2017 NHL expansion Draft - wikipedia
The 2017 NHL Expansion Draft was an expansion draft conducted by the National Hockey League on June 18 -- 20, 2017 to fill the roster of the league 's expansion team for the 2017 -- 18 season, the Vegas Golden Knights. The team 's selections were announced on June 21 during the NHL Awards ceremony at T - Mobile Arena.
In the offseason before the 2015 -- 16 NHL season, the league opened a window for ownership groups to bid for expansion teams for the first time since 2000. Two ownership groups submitted bids to the league, one each from Las Vegas and Quebec City. If the Las Vegas bid were chosen, the NHL would become the first "Big Four '' major professional sports league to place a franchise in Las Vegas (not counting the city 's short - lived and ill - fated football teams in the Canadian Football League and XFL, which played in 1994 and 2001 respectively), though the NHL has had a limited presence in the city with annual preseason games, beginning with an outdoor game in 1991 and the Frozen Fury series held each year since 1997. Quebec City was previously home of the Quebec Nordiques, a team that had moved in 1995 and became the Colorado Avalanche; it has hosted occasional preseason games since that time, and has constructed a new ice hockey arena to receive a potential NHL team. Due to political delays, a bid was not submitted from Seattle despite the presence of three different ownership groups publicly campaigning to start an NHL team; a number of other potential expansion sites, such as Kansas City and Saskatchewan, declined to place bids because of cost concerns.
Las Vegas was approved for the 2017 -- 18 NHL season on June 22, 2016; at the same time the Quebec City bid was deferred, largely because of concerns over the Canadian dollar 's value and the geographic balance of the league 's conferences.
The initial proposal of the rules for the draft was decided upon by the NHL in March 2016. The rules allowed each team to protect either seven forwards, three defencemen, and one goaltender, or one goaltender and eight skaters regardless of position. Because the NHL wanted to ensure the competitive viability of any new teams, the number of protected players allowed was lower than in the 2000 NHL Expansion Draft which populated the Minnesota Wild and Columbus Blue Jackets, when each team could protect nine forwards, five defencemen, and one goalie, or two goalies, three defencemen, and seven forwards. Under these rules, each of the 30 teams would lose one top - four defenceman or third - line forward per new team. Only players with more than two years of professional experience -- NHL or AHL as defined in the collective bargaining agreement -- were included in the draft.
Teams had to submit their lists of protected players by June 17, 2017. They had to expose at least two forwards and one defenceman who had played at least 40 games in the 2016 -- 17 season or more than 70 games in the 2015 -- 16 and 2016 -- 17 seasons combined, and who were still under contract for the 2017 -- 18 season. The exposed goaltender had to either be under contract for the 2017 -- 18 season or become a restricted free agent in 2017. At least twenty of the thirty players selected by Vegas had to be under contract for the 2017 -- 18 season, and Vegas was required to select a minimum of fourteen forwards, nine defencemen and three goaltenders. Vegas was granted a 48 - hour window prior to the draft to sign any pending free agent (RFA or UFA, one per team) that was left unprotected. If a team lost a player to Vegas during this signing window, it did not have a player selected from its roster during the draft.
Teams were required to protect any contracted players with no - move clauses (NMCs), using one of the team 's slots for protected players, unless the contract expired on July 1, 2017, in which case the NMC was considered void for the draft. Players whose NMCs included limited no - trade clauses still had to be protected, and any player with an NMC was able to waive the clause and become eligible for the expansion draft.
Any player picked in the expansion draft could not have his contract bought out until after the completion of the 2017 -- 18 season. The expansion team was guaranteed the same odds in the draft lottery as the third - lowest finishing team from the 2016 -- 17 NHL season for the 2017 NHL Entry Draft; after its first season the team will be subject to the same draft lottery rules as the other teams in the league. The NHL 's deputy commissioner, Bill Daly, said that teams that did not follow the expansion draft rules would face penalties, including possibly the "loss of draft picks and / or players. ''
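The protection rules above amount to a small constraint check, illustrated in the sketch below. This is a minimal illustrative sketch only, not from the source: it assumes a simplified, hypothetical Player representation, treats the two allowed protection formats as maxima, and checks that unwaived no - move clauses are honoured.

from dataclasses import dataclass

@dataclass
class Player:
    name: str
    position: str        # "F" = forward, "D" = defenceman, "G" = goaltender
    has_nmc: bool = False  # unwaived no-move clause (hypothetical flag)

def protection_list_is_valid(protected, roster):
    """Check a protection list against the two formats described above:
    seven forwards / three defencemen / one goaltender, or
    eight skaters of any position plus one goaltender."""
    forwards = sum(1 for p in protected if p.position == "F")
    defencemen = sum(1 for p in protected if p.position == "D")
    goalies = sum(1 for p in protected if p.position == "G")

    # Format 1: up to 7 forwards, 3 defencemen and 1 goaltender.
    fits_7_3_1 = forwards <= 7 and defencemen <= 3 and goalies <= 1

    # Format 2: up to 8 skaters regardless of position plus 1 goaltender.
    fits_8_skaters = (forwards + defencemen) <= 8 and goalies <= 1

    # Contracted players with unwaived no-move clauses must be protected.
    nmc_honoured = all(p in protected for p in roster if p.has_nmc)

    return (fits_7_3_1 or fits_8_skaters) and nmc_honoured

For example, a list of seven forwards, three defencemen and one goaltender passes the first check, while a list of nine forwards plus a goaltender fails both.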
The protected players ' list was published on June 18, 2017.
Italics: Players protected for contractual reasons.
In return for agreeing not to select certain unprotected players, the Golden Knights were granted concessions by other franchises.
Not all players selected by the Golden Knights in the Expansion Draft would remain with the team. Some players were traded in the days that followed, some as soon as the day after the draft:
Other players who were no longer on the Golden Knights ' roster at the start of the 2017 -- 18 NHL season include the following:
|
why did garth brooks write the song the dance | The Dance (song) - wikipedia
"The Dance '' is a song written and composed by Tony Arata, and recorded by American country music singer Garth Brooks as the tenth and final track from his self - titled debut album, from which it was also released as the album 's fourth and final single in April 1990. It is considered by many to be Brooks ' signature song. In a 2015 interview with Patrick Kielty of BBC Radio 2, Brooks credits the back to back success of both "The Dance '' and its follow up "Friends in Low Places '' for his phenomenal success.
At the opening of the music video, Brooks explains that the song is written with a double meaning - both as a love song about the end of a passionate relationship, and a story of someone dying because of something he believes in, after a moment of glory.
The song 's music video, directed by John Lloyd Miller, shows several American icons and examples of people who died for a dream. These include archive footage of the following:
It was awarded Video of the Year at the 1990 ACM Music Awards.
On the Billboard Hot Country Songs chart, "The Dance '' reached number one and remained there for three consecutive weeks until it was knocked off by "Good Times '' by Dan Seals.
Released near the beginning of his career, "The Dance '' was a hit single around the world, including the United States, Europe, and Ireland, charting inside the British pop top 40. In 1990, it was named both Song of the Year and Video of the Year by the Academy of Country Music. It was awarded the number 14 position in the CMT 100 Greatest Songs of Country Music broadcast in 2003 and also the number 5 position on the network 's The Greatest: 100 Greatest Music Videos special in 2004.
In a 1994 Playboy interview, Brooks said, "unless I am totally surprised, The Dance will be the greatest success as a song we will ever do. I 'll go to my grave with The Dance. It 'll probably always be my favorite song. ''
In 2001, after the death of Dale Earnhardt, Brooks was invited to the NASCAR awards ceremony that was honoring Earnhardt to play the song as a tribute. The song has been used as several country stations ' last song before changing formats. It was also the second song to be played on UK station Country 1035, the first being another Brooks number.
On 6 February 2014, "The Dance '' was performed by Brooks on the final episode of The Tonight Show with Jay Leno on NBC.
US 7 - inch promotional single Capitol Nashville NR - 44629, 1990
US 7 '' Jukebox single Liberty S7 - 17441 - A, 1990
UK CD single Capitol CDCLS - 735, 1993 Disc 1
Disc 2
"The Dance '' is the fifth single in the overall discography of American freestyle recording artist Rockell, and a cover version of the chart single performed by Garth Brooks. (See above.) It is the first single she released from her second album, Instant Pleasure. There was no video made for this single.
US CD single
|
what episode do chandler and joey win the apartment | The One with the Embryos - wikipedia
"The One with the Embryos '' is the twelfth episode of Friends ' fourth season. It first aired on the NBC network in the United States on January 15, 1998. In the episode, Phoebe (Lisa Kudrow) agrees to be the surrogate mother for her brother Frank Jr. (Giovanni Ribisi) and his older wife Alice Knight (Debra Jo Rupp). Meanwhile, a display by Chandler (Matthew Perry) and Joey (Matt LeBlanc) of how well they know Monica (Courteney Cox) and Rachel (Jennifer Aniston) by guessing the items in their shopping bag leads to a large - scale bet on a quiz, for which Ross (David Schwimmer) acts as the gamemaster.
The episode was directed by Kevin S. Bright and co-written by Jill Condon and Amy Toomin. The idea for Kudrow 's character Phoebe becoming a surrogate mother coincided with the actress ' real - life pregnancy. The producers wanted to find a way to use the pregnancy in a narrative for the fourth season and designated the task to the writers. Ribisi and Rupp reprised their recurring roles of Frank Jr. and Alice respectively, which was initially difficult as both had filming commitments.
In its original broadcast on NBC, "The One with the Embryos '' acquired a 17.3 Nielsen rating, finishing the week ranked fourth. The episode received critical acclaim, is generally considered one of the best of the entire series, and is a favorite amongst the cast members and producers. In 2009, "The One with the Embryos '' was ranked # 21 on TV Guide 's list of "TV 's Top 100 Episodes of All Time. ''
When Joey and Chandler (Matt LeBlanc and Matthew Perry) correctly identify the contents of Rachel 's (Jennifer Aniston) shopping bag, Monica (Courteney Cox) suggests a trivia contest to see who knows more about whom: the men or the women. They place a $100 bet on the outcome and Ross (David Schwimmer) puts together some questions and plays as host. Meanwhile, Phoebe (Lisa Kudrow) has agreed to be a surrogate mother for her brother Frank (Giovanni Ribisi) and his older wife Alice (Debra Jo Rupp), but is concerned that they are paying $16,000 for the IVF procedure which only has a 25 % chance of success, and is helpless to influence the results.
The trivia game begins, with various facts about the characters being revealed such as Joey 's space - cowboy imaginary friend (Maurice) and Rachel 's actual favorite movie (Weekend at Bernie 's). A nine - to - nine score leads to a lightning round. Monica raises the stakes: If the women win, Joey and Chandler must give up their birds, as the maturing Chick is crowing in the mornings and waking them up. Chandler rebuts by suggesting Rachel and Monica give up their apartment to them, which Monica agrees to without consulting Rachel. The girls lose the lightning round because they can not identify Chandler 's job, and the boys win.
As the four pack up their respective apartments -- Rachel, in particular, displeased about having to switch -- Phoebe returns home and takes a pregnancy test, though it is too soon for a result. Later with packing complete, Rachel finally refuses to move as Frank and Alice come by with another pregnancy test. The boys and the girls begin to argue along with Ross, which is cut short when Phoebe emerges from the bathroom and joyfully announces she is pregnant, the mood turning to one of celebration.
"The One with the Embryos '' was co-written by Jill Condon and Amy Toomin and directed by Kevin S. Bright. In October 1997, Lisa Kudrow announced she and her husband Michel Stern were expecting their first child. When Marta Kauffman first learned of Kudrow 's pregnancy, she was overjoyed and wanted to find a solution of incorporating it into the show without choosing to cover up. At the time of filming "The One with the Embryos '' in December, Kudrow was four months pregnant and the writers discussed ways of narrating the pregnancy on the show, settling with Kudrow 's character carrying her brother 's embryos.
According to David Crane, the story arc with Phoebe carrying Frank and Alice 's baby was considered "risky ''. When the plot was first discussed, the main concern was whether it was "too crazy... where 's the line with Phoebe? ''. Crane felt if it were not for the actors, the storyline would not have been believable. The producers found it difficult to get Giovanni Ribisi to reprise his role as Frank Jr. on a longer term basis because the actor had continuous filming commitments. A similar situation occurred with Debra Jo Rupp, who was named as a cast member in the upcoming period sitcom, That ' 70s Show on the Fox network.
The chick and the duck, who first appeared in "The One with a Chick and a Duck '' as Chandler and Joey 's pets were used "as a spark '' for the main plot. The animals were originally intended for one episode but because the producers believed they got "so much mileage out of them '', they made recurring appearances. As many television shows used similar fictional pets, the producers settled on a chicken and a duck as they were different.
In the trivia contest, the answer "Viva Las Gay - gas '' in response to ' What is the name of Chandler 's dad 's show in Vegas? ' changed "about a million times '' in drafts according to Crane. On the night the show was being filmed, writers continued to pitch for different answers in order to receive a better response from the audience. The staff found it difficult coming up with different points of view for each character as all wanted to win the game.
In its original airing, "The One with the Embryos '' finished fourth in ratings for the week of January 12 -- January 18, 1998, with a Nielsen rating of 17.3, equivalent to approximately 16.8 million viewing households. It was the fourth highest - rated show on NBC that week, following ER, Seinfeld and Veronica 's Closet -- all of which aired on the network 's Thursday night Must See TV lineup.
"The One with the Embryos '' was Courteney Cox and Matt LeBlanc 's favorite episode of the series. Cox liked the episode because she enjoys playing Monica at her most competitive, while LeBlanc spoke fondly of the pace of the episode and the information about the characters that came out. He identified scenes that featured just the six core cast as the best, "because you do n't have to introduce a character -- you do n't have to lay any pipeline -- you just get right to the funny ''. On the DVD audio commentary for the episode, Marta Kauffman cited the episode being "so much fun to do '' and enjoyed the writing process. The scene involving Phoebe talking to the embryos was Kevin S. Bright 's favorite in the show 's history because of Kudrow 's ability to "draw you into the scene... even though it 's only her talking to the dish ''. David Crane highlights how the episode explores generosity; doing a selfless act which pays off when Phoebe gives birth in "The One Hundredth ''. Bright moreover felt the trivia contest was the catalyst that rejuvenated the entire fourth season and "put Friends in a different place ''.
In a 2001 review, Entertainment Weekly rated the episode A+, stating that "Thanks to the trivia contest alone, Embryos is quite possibly Friends ' finest moment ''. The article singles out Rachel 's line "He 's a transpon - transpondster! '' (in response to the question "What is Chandler Bing 's job? '') as the best line of the episode. The authors of Friends Like Us: The Unofficial Guide to Friends called it a "sure - fire contender for the best episode of all time... not one to be missed under any circumstances ''. In 2004, Tara Ariano of MSNBC.com wrote that the character trivia is "revealed in a manner completely organic to the plot. Beautifully written and acted, ' The One With The Embryos ' encapsulates the whole series in a single episode ''. The episode was ranked # 21 on TV Guide 's list of "TV 's Top 100 Episodes of All Time ''.
The episode was released as part of Friends: The Complete Fourth Season in Regions 1, 2 and 4. As part of the DVD release, "Who Knows Whom Best? -- Ross 's Ultimate Challenge '' an interactive game was included, based on the quiz in "The One with the Embryos ''. The game uses clips from the show to provide answers, allows viewers to choose a team (boys or girls) and call the coin toss.
|
who was the united states first serial killer | List of Serial Killers in the United States - wikipedia
A serial killer is typically a person who murders three or more people, with the murders taking place over more than a month and including a significant period of time between them. The Federal Bureau of Investigation (FBI) defines serial killing as "a series of two or more murders, committed as separate events, usually, but not always, by one offender acting alone ''.
This is a list of unidentified serial killers who committed crimes within the United States.
|
what are the branches of government in barbados | Government of Barbados - wikipedia
The Government of Barbados (GoB), is headed by the monarch, Queen Elizabeth II as Head of State. Since 1 June 2012, the Queen has been represented by the Governor - General, Sir Elliott Belgrave, G.C.M.G., K.A.
The country has a bicameral legislature and a political party system, based on universal adult suffrage and fair elections. The Senate has 21 members, appointed by the Governor - General on behalf of the monarch, 12 on the advice of the Prime Minister, two on the advice of the Leader of the Opposition, and seven in the Governor - General 's sole discretion. The House of Assembly has 30 members, all elected. Both houses debate all legislation. However, the House of Assembly may override Senate 's rejection of money bills and other bills except bills amending the Constitution.
Officers of each house (President and Deputy President of the Senate; Speaker, Deputy Speaker, and Chairman of Committees of the Assembly) are elected from the members of the respective houses.
In keeping with the Westminster system of governance, Barbados has evolved into an independent parliamentary democracy and Constitutional monarchy, meaning that all political power rests with the Parliament under a non-political monarch as head of state, which allows stability. Executive authority is vested in the monarch, who normally acts only on the advice of the Prime Minister and Cabinet, who are collectively responsible to Parliament. Barbadian law is rooted in English common law, and the Constitution of Barbados implemented in 1966, is the supreme law of the land.
Barbados PM Freundel Stuart announced that Barbados would become a republic before the 50th anniversary of independence in 2016. Barbados will replace the Governor General as head of state with a ceremonial president. However Barbados will still retain association with the crown through membership within the Commonwealth of Nations.
Fundamental rights and freedoms of the individual are set out in the Constitution and are protected by a strict legal code.
The Cabinet is headed by the Prime Minister, who must be an elected member of Parliament, and other ministers are appointed from either chamber by the Governor - General, as advised by the Prime Minister.
The Governor - General appoints as Leader of the Opposition the member of House of Assembly who commands the support of the largest number of members of that House in opposition to the ruling party 's government.
The maximum duration of a Parliament is five years from the first sitting. There is a simultaneous dissolution of both Houses of Parliament by the Governor - General, acting on the advice of the Prime Minister.
There is an established non-political civil service. Also, there are separate constitutional commissions for the Judicial and Legal Service, the Public Service, and the Police Service.
The government has been chosen by elections since the 1961 election, when Barbados achieved full self - governance. Before then, Barbados was a Crown colony governed either solely by a colonial administration (such as the Executive Council) or by a mixture of colonial rule and a partially elected assembly, such as the Legislative Council.
Since independence the Barbados Labour Party (BLP) governed from 1976 to 1986 and from September 1994 to 2008. The Democratic Labour Party (DLP) held office from 1966 to 1976 and from 1986 to 1994, and has formed the government from January 2008 to the present.
The Executive Branch of government conducts the ordinary business of government. These functions are carried out by the Prime Minister and the Cabinet ministers. The Prime Minister chooses the ministers of government they wish to have in the Cabinet, but the ministers are formally appointed by the Governor - General.
The Constitution of Barbados is the supreme law of the nation. The Attorney General heads the independent judiciary. Historically, Barbadian law was based entirely on English common law with a few local adaptations. At the time of independence, the Parliament of the United Kingdom lost its ability to legislate for Barbados, but the existing English and British common law and statutes in force at that time, together with other measures already adopted by the Barbadian Parliament, became the basis of the new country 's legal system.
Legislation may be shaped or influenced by such organisations as the United Nations, the Organization of American States, or other international bodies to which Barbados has obligatory commitments by treaty. Additionally, through international co-operation, other institutions may supply the Barbados Parliament with key sample legislation to be adapted to meet local circumstances before enacting it as local law.
New acts are passed by the Barbadian Parliament and require royal assent by the Governor - General to become law.
The Judiciary is the legal system through which punishments are handed out to individuals who break the law; it also settles disputes. The functions of the judiciary are to enforce laws, to interpret laws, to conduct court hearings, and to hear court appeals.
The local court system of Barbados is made up of:
Transparency International ranked Barbados 17th (of 179) in the world on its Corruption Perceptions Index in 2010, with only one nation in the Americas scoring better.
Office of the Prime Minister
The Cabinet Office in the Government Headquarters complex
Main entrance to the Government Headquarters complex, with a statue of Sir Grantley Adams in the foreground
|
when does the new season of bitten start | Bitten (TV series) - Wikipedia
Bitten is a Canadian television series based on the Women of the Otherworld series of books by author Kelley Armstrong. The name was inspired by the first book in the series. The show was produced as an original series for Space, with most filming in Toronto and Cambridge, Ontario. Its third and final season finished in April 2016.
In the first season, the story centres on Elena Michaels (portrayed by Laura Vandervoort), a female werewolf who is torn between a normal life with her human boyfriend Philip in Toronto and her "family '' obligations as a werewolf in upstate New York. Among her pack is her ex-fiancé Clayton, who is responsible for her becoming a werewolf.
In the second season, Left Hand Path, the story focuses on the aftermath of the battle, as well as introducing the witches Paige and Ruth, who are looking to save their lost coven member, Savannah, from the evil warlock Aleister.
On May 22, 2014, the series was renewed for a second season of 10 episodes, with production beginning in Summer. On May 22, 2015, Space confirmed the series to be renewed for a third season with filming set to begin summer / fall 2015. It was confirmed in December 2015 that the third season of Bitten would be the show 's final season.
Todor Kobakov was hired to compose the score for the series.
Bitten -- Score Soundtrack Vol. 1 was released on March 18, 2014.
The series was acquired by Syfy for airing in the United States, and premiered in January 2014. It premiered in Australia on August 8, 2015 on FOX8. SyfyUK commenced broadcast on 19 May 2016.
Bitten has received mixed reviews. Metacritic gave the first season a score of 59 out of 100 (based on 8 reviews).
In ratings, Bitten averaged 348,000 viewers in its timeslot, making it Space 's highest - rated original series of all time.
A live after - show titled InnerSpace: After Bite premiered on Space on February 7, 2015, following the season two premiere. After Bite features hosts Morgan Hoffman and Teddy Wilson, discussing the latest episode with actors and producers of Bitten.
|
what happened to one of the judges on forged in fire | Forged in Fire (TV series) - wikipedia
Forged in Fire is an American competition series that airs on the History channel, and is produced by Outpost Entertainment, a Leftfield Entertainment company. In each episode, four bladesmiths compete in a three - round elimination contest to forge bladed weapons, with the overall winner receiving $10,000 and the day 's championship title. The series is hosted by Wil Willis, with a three - judge panel consisting of J. Neilson (Jason Knight during portions of season 3 and 4; Ben Abbott during portions of season 4), David Baker, and Doug Marcaida, experts in weapon history and use. History ordered an initial eight episodes of the series with the first program premiering on Monday, June 22, 2015, at 10pm ET. Season two premiered on February 16, 2016. The third season premiered with a "champion of champions '' match on August 23, 2016, and was announced as having 16 episodes. The fourth season premiered on April 11, 2017, with a "Judges ' Pick '' episode in which the four judges (Neilson, Knight, Baker, Marcaida) each selected one smith from past seasons to compete again. The fifth season premiered on March 7, 2018.
On April 17, 2018, a spin - off series titled Forged in Fire: Knife or Death premiered on History. This series is hosted by Bill Goldberg and co-hosted by Tu Lam, a martial arts expert and retired member of the Green Berets.
The series is filmed in Brooklyn, New York. The set, referred to as "The Forge, '' is stocked with a wide range of metalworking equipment, including propane forges, coal forges, grinders, power hammers and hydraulic presses. Medical personnel are present to treat any injuries or other health problems and may, at their discretion, disqualify smiths who are unable to continue safely. At the end of each round, the smith whose weapon is judged to be the least satisfactory must surrender it and leave the competition.
In the first round, the four smiths are presented with a starting material that they must use to forge a blade. In some episodes, they all begin with the same material; in others, they may choose from an assortment of metal objects or must salvage their material from a source such as a junked car or lawnmower. Willis states one set of criteria concerning blade or blade / tang length, and often a second set for a feature that must be incorporated, such as serrations or a fuller groove; not all competitions require a special feature. The smiths are typically given 10 minutes to sketch out their designs, but this time is occasionally extended or omitted altogether. Following the design period, they are given a set length of time to forge their blades. Once time expires, the judges evaluate the blades based on Willis ' criteria and inspect their workmanship, quality, and design, then deliberate privately before announcing their decision. Any smiths who fail to meet the criteria, or who fail to turn in a blade at all, are subject to immediate elimination.
For the second round, the three remaining smiths are given an additional length of time to turn their blades into fully operational weapons. They must attach a handle, choosing from a range of provided materials and incorporating any special feature stated by Willis, and grind, sharpen, and polish the blades. They may also address any flaws or issues pointed out by the judges in the first round, if they choose to do so. After the time expires, the judges put each weapon through a series of tests to gauge properties such as sharpness, durability, and ease of use. For these tests, the weapons are used to chop / slash / stab objects that include ropes, ice blocks, animal carcasses, and steel car doors. If a weapon suffers catastrophic failure, defined as damage that renders it unsafe or ineffective for further testing, its maker is immediately disqualified. The judges may, at their discretion, choose not to subject a weapon to a particular test if it is sufficiently cracked or flawed.
The working time in each of the first two rounds is typically three hours, but may be extended to four hours if an added feature poses a sufficient challenge, such as being required to forge a billet with modern damascus steel methods and use it for the blade.
In the third round, the two remaining smiths are shown a historically significant (and technically difficult) weapon and are given five days to create a version of it. They return to their home forges to do the work and comply with any specifications set by Willis. Afterwards, they return to the Forge and submit their weapons for testing against objects and environments similar to the historical scenarios in which they were typically used. Based on the test results, the judges select one smith to receive the $10,000 prize.
The "Master & Apprentice '' episode in Season 4 featured four master / apprentice pairs of smiths. Only one member of each pair was allowed to work at any time, trading off every 30 minutes in the first two rounds, and every day in the third. The non-working member was allowed to offer advice. For this episode, the forging time in the first round was extended to three and a half hours.
The "Ultimate Champions Edition '' (season 4) and "Rookies Edition '' (season 5) each featured five smiths instead of four. The smiths were required to forge a particular type of blade at their homes and bring those weapons to the studio for a preliminary test. One smith was eliminated based on the results of this test, after which the competition proceeded through the normal three rounds.
Willis is a former Army Ranger and decorated Air Force para-rescue specialist. Willis ' previous television experience includes Special Ops Mission and Triggers, two series that aired on the former Military Channel.
J. Neilson, a knife and sword expert, holds the rank of Master Smith within the American Bladesmith Society. He has over 20 years ' experience in making knives and edged weapons. He examines the weapons ' technical qualities and tests their durability. In Season 3, Neilson took a leave of absence in order to have surgery on his hand; Jason Knight, another ABS Master Smith, filled his seat on the judges ' panel during that time. Neilson appeared alongside Knight for the Season 4 premiere, then resumed his seat in the eighth episode. Starting with the 21st episode of the fourth season, he was replaced by two - time Forged in Fire champion Ben Abbott.
David Baker, a Hollywood prop maker who has appeared on the Spike series Deadliest Warrior, is an authority on weapons history and an expert on replicating period - accurate weapons for both museums and films. He judges the weapons ' historical accuracy and aesthetic beauty.
Doug Marcaida, an edged - weapons specialist, is a U.S. military contractor, martial arts instructor and knife designer for FOX Knives Italy. Specializing in the Southeast Asian fighting style of Kali, he has taught classes in weapon awareness and use for military, law enforcement, and security organizations. Marcaida evaluates the smiths ' weapons to determine their effectiveness in combat. When he can not test the smiths ' weapons himself due to injury, he has a co-worker or family member perform this task in his place.
Tim Healy and Steve Ascher are executive producers for History. Jodi Flynn, Brent Montgomery, David George, Shawn Witt and Simon Thomas are executive producers for Outpost Entertainment. Healy observed the demonstration, and later the filming, from the sidelines. Healy says that the inspiration for Forged in Fire came from his and other developers ' love of food competition shows such as Chopped and Iron Chef. However, in order to appeal to the History channel 's audience, they decided to have the competition focus on historical weaponry.
In the city of Cohoes, New York near Albany, a man, inspired by the series, tried to forge a piece of metal over a fire in a barrel near his home. He caused a fire that destroyed three residential buildings and damaged 28 others.
|
who was the longest serving non-royal world leader who rose to power after 1900 | List of longest - ruling non-royal National leaders since 1900 - wikipedia
This list details national leaders since 1900 who ruled for 30 years or more, and were not self - described royalty. It also combines all national leader level offices held concurrently or consecutively by each individual.
9 July 1926 (1st time); 3 January 1928 (2nd time); 6 March 1932 (3rd time)
14 August 1927 (1st time); 15 December 1931 (2nd time); 5 April 1975 (3rd time)
1 April 1919 (1st time); 9 March 1932 (2nd time); 13 June 1951 (3rd time); 20 March 1957 (4th time); 25 June 1959 (5th time)
9 January 1922 (1st time); 18 February 1948 (2nd time); 2 June 1954 (3rd time); 23 June 1959 (4th time); 24 June 1973 (5th time)
8 February 1979 (1st time); 25 October 1997 (2nd time)
31 August 1992 (1st time); present (2nd time)
17 March 1950 (1st time); 20 October 1954 (2nd time)
17 November 1953 (1st time); 27 January 1982 (2nd time)
28 November 1876 (1st time); 17 February 1877 (2nd time); 1 December 1884 (3rd time)
6 December 1876 (1st time); 1 December 1880 (2nd time); 25 May 1911 (3rd time)
|
when did they first put warning labels on cigarettes | Tobacco packaging warning messages - wikipedia
Tobacco package warning messages are warning messages that appear on the packaging of cigarettes and other tobacco products concerning their health effects. They have been implemented in an effort to enhance the public 's awareness of the harmful effects of smoking. In general, warnings used in different countries try to emphasize the same messages. Warnings for some countries are listed below. Such warnings have been required in tobacco advertising for many years.
The WHO Framework Convention on Tobacco Control, adopted in 2003, requires such package warning messages to promote awareness against smoking.
A 2009 review summarises that "There is clear evidence that tobacco package health warnings increase consumers ' knowledge about the health consequences of tobacco use. '' The warning messages "contribute to changing consumers ' attitudes towards tobacco use as well as changing consumers ' behavior. ''
At the same time, such warning labels have been subject to criticism. 2007 meta - analyses indicated that communications emphasizing the severity of a threat are less effective than communications focusing on susceptibility, and that warning labels may have no effect among smokers who are not confident that they can quit, which lead the authors to recommend exploring different, potentially more effective methods of behavior change.
Text - based warnings on cigarette packets are used in Albania.
General warning:
As of January 30, 2013, all cigarette packages must include graphical warning labels that show the detrimental effects on the health of long - term smokers.
Translation of words in box:
Smoking is very bad for health. It can cause heart disease, gangrene, lung cancer, reduced lung function, chronic bronchitis, stroke, throat and mouth cancer, and emphysema. If you smoke despite all this, do n't say that we did not warn you.
On 1 December 2012, Australia introduced groundbreaking legislation and the world 's toughest tobacco packaging warning messages to date. All marketing and brand devices were removed from the package and replaced with warnings; only the name of the product remains in generic, standard - sized text. All tobacco products sold, offered for sale or otherwise supplied in Australia were plain packaged and labelled with new and expanded health warnings.
In Azerbaijan, cigarette packages carry a small notice: "Ministry of Health warns: Smoking is dangerous for your health '', but this is usually printed in light and small fonts, and the first part of the message is not always visible.
In Bangladesh, a variety of warnings with graphic, disturbing images of tobacco - related harms (including a mouth cancer and lung cancer) are placed prominently on cigarette packages.
In Bolivia, a variety of warnings with graphic, disturbing images of tobacco - related harms (including a laryngeal cancer and heart attack) are placed prominently on cigarette packages.
Front of packaging (covers 30 % of surface):
Back of packaging (covers 50 % of surface):
Before 2011, a small warning with the text Pušenje je štetno za zdravlje (Smoking is harmful to health) was printed on the back of cigarette packets.
Brazil was the second country in the world and the first country in Latin America to adopt mandatory warning images in cigarette packages. Warnings and graphic images illustrating the risks of smoking occupy 100 % of the back of cigarettes boxes since 2001. In 2008, the government elected a third batch of images, aimed at younger smokers.
Since 2003, the sentence
Este produto contém mais de 4, 7 mil substâncias tóxicas, e nicotina que causa dependência física ou psíquica. Não existem níveis seguros para consumo dessas substâncias. (This product contains over 4700 toxic substances and nicotine, which causes physical or psychological addiction. There are no safe levels for the intake of these substances.)
is displayed in all packs.
In Brunei, a variety of warnings with graphic, disturbing images of tobacco - related harms (including a tracheotomy and rotting teeth) are placed prominently on cigarette packages.
In Cambodia, a variety of warnings with graphic, disturbing images of tobacco - related harms (including a premature birth and lung cancer) are placed prominently on cigarette packages.
Canada has had 3 phases of tobacco warning labels. The first set of warnings was introduced in 1989 under the Tobacco Products Control Act, and required warnings to be printed on all tobacco products sold legally in Canada. The set consisted of 4 messages printed in black - and - white on the front and back of the package, and was expanded in 1994 to include 8 messages covering 25 % of the front top of the package. In 2000, the Tobacco Products Information Regulations (TPIR) were passed under the Tobacco Act. The regulations introduced a new set of 16 warnings. Each warning was printed on the front and back of the package, covering 50 % of the surface, with a short explanation and a picture illustrating that particular warning, for example:
WARNING CIGARETTES CAUSE LUNG CANCER 85 % of lung cancers are caused by smoking. 80 % of lung cancer victims die within three years.
accompanied by a picture of a human lung detailing cancerous growths.
Additionally, on the inside of the packaging or, for some packets, on a pull - out card, "health information messages '' provide answers and explanations regarding common questions and concerns about quitting smoking and smoking - related illnesses. The side of the package also featured information on toxic emissions and constituent levels.
In 2011, the TPIR were replaced for cigarettes and little cigars with the Tobacco Products Labelling Regulations (Cigarettes and Little Cigars). These regulations introduced the third and current set of 16 warnings in Canada. Currently, cigarette and little cigar packages in Canada must bear new graphic warning messages that cover 75 % of the front and back of the package. The interior of each package contains 1 of 8 updated health warning messages, all including the number for a national quitline. The side of the package now bears 1 of 4 simplified toxic emission statements. These labels were fully implemented on cigarette and little cigar packages by June 2012 (though the 2000 labels still appear on other tobacco products). Canada also prohibits terms such as "light '' and "mild '' from appearing on tobacco packaging. The current labels were based on extensive research and a long consultation process that sought to evaluate and improve upon the warnings introduced in 2000.
In accordance with Canadian law regarding products sold legally in Canada, the warnings are provided in both English and French. Imported cigarettes to be sold in Canada which do not have the warnings are affixed with sticker versions when they are sold legally in Canada.
Health Canada is also considering laws mandating plain packaging, in which legal tobacco product packaging would still include warning labels, but brand names, fonts, and colors would be replaced with simple unadorned text, thereby reducing the impact of tobacco industry marketing techniques.
There have been complaints from some Canadians due to the graphic nature of the labels, but they generally enjoy wide public support.
Starting in November 2006, all cigarette packages sold in Chile are required to have one of two health warnings, a graphic pictorial warning or a text - only warning. These warnings are replaced with a new set of two warnings each year.
Under laws of the People 's Republic of China, "Law on Tobacco Monopoly '' (中华 人民 共和国 烟草 专卖 法) Chapter 4 Article 18 and "Regulations for the Implementation of the Law on Tobacco Monopoly '' (中华 人民 共和国 烟草 专卖 法 实施 条例) Chapter 5 Article 29, cigarettes and cigars sold within the territory of China should indicate the grade of tar content and "Smoking is hazardous to your health '' (吸烟 有害 健康) in the Chinese language on the packs and cartons.
In 2009, the warnings were changed. The warnings, which must occupy not less than 30 % of the front and back of cigarette boxes, show "吸烟 有害 健康 尽早 戒烟 有益 健康 '' (Smoking is harmful to your health. Quitting smoking early is good for your health) on the front, and "吸烟 有害 健康 戒烟 可 减少 对 健康 的 危害 '' (Smoking is harmful to your health. Quitting smoking can reduce health risks) on the back.
The warnings were revised in October 2016 and must now occupy not less than 35 % of the front and back of cigarette boxes. The following shows what is printed at the current time.
In Colombia, a variety of warnings with graphic, disturbing images of tobacco - related harms (including damaged arteries and bladder cancer) are placed prominently on cigarette packages.
General warning:
In Ecuador, a variety of warnings with graphic, disturbing images of tobacco - related harms (including a tongue cancer and premature birth) are placed prominently on cigarette packages.
In Egypt, a variety of warnings with graphic, disturbing images of tobacco - related harms (including a mouth cancer and gangrene) are placed prominently on cigarette packages.
Cigarette packets and other tobacco packaging must include warnings in the same size and format and using the same approved texts (in the appropriate local languages) in all member states of the European Union.
These warnings are displayed in black Helvetica bold on a white background with a thick black border. Ireland once prefaced its warnings with "Irish Government Warning '', Latvia with "Veselības ministrija brīdina '' (Health Ministry Warning) and Spain with "Las Autoridades Sanitarias Advierten '' (The Health Board Warns). In member states with more than one official language, the warnings are displayed in all official languages, with the sizes adjusted accordingly (for example in Belgium the messages are written in Dutch, French and German, in Luxembourg in French and German and in Ireland, in Irish and English). All cigarette packets sold in the European Union must display the content of nicotine, tar, and carbon monoxide in the same manner on the side of the packet.
In 2003, it was reported that sales of cigarette cases had surged, attributable to the introduction of more prominent warning labels on cigarette packs by an EU directive in January 2003. Alternatively, people choose to hide the warnings using various arguably "funny '' stickers, such as "You could be hit by a bus tomorrow. ''
In Belgium, warnings are written in all three languages: Dutch, French and German.
Front of packaging (covers 30 % of surface):
or
Back of packaging (covers 40 % of surface):
The last warning contains a mistranslation from Directive 2001 / 37 / EC -- "hydrogen '' was translated as ugljik (carbon) instead of vodik. It was nevertheless signed into law and started appearing on cigarette packages in March 2009.
2004 -- 2009
These warnings are also simple text warnings.
Front of packaging:
Back of packaging:
Side of packaging:
1997 -- 2004
Between 1997 and 2004, a simple text label warning Pušenje je štetno za zdravlje (Smoking is harmful to health) was used.
or
Warning texts on tobacco products are health warnings reproduced on the packaging of cigarettes and other tobacco products. They are implemented in an effort to strengthen public knowledge about the dangers of smoking.
The order was introduced in Denmark on 31 December 1991 and was last revised on 2 October 2003, which also imposed a ban on the words "light '' and "mild '' on Danish cigarette packages, as in other European Union countries.
The marking shall appear on one third of the most visible part of the package.
The above markings do not apply to smokeless tobacco; instead, the label "Denne tobaksvare kan være sundhedsskadelig og er afhængighedsskabende '' (This tobacco product can damage your health and is addictive) is always used for such products.
General warning:
or
In Finland, warning signs are written in both Finnish and Swedish languages.
or
Ireland currently follows EU standards (see above), but previously ran its own scheme, where one of 8 messages was placed on the pack, as defined in SI 326 / 1991.
After a High Court settlement in January 2008, it was accepted that the warnings on tobacco products must appear in all official languages of the state. As a result, the European Communities (Manufacture, Presentation and Sale of Tobacco Products) (Amendment) Regulations 2008 were enacted. This states that tobacco products going to market after 30 September 2008 must carry warnings in Irish and English. A year - long transition period applied to products which were on the market prior to 1 October 2008, which may have been sold until 1 October 2009.
Each packet of tobacco products must carry:
Other text is sometimes placed in the packets; for example, some packets contain leaflets which have all the above warnings written on them, with more detailed explanations, reasons to give up, and advice from Philip Morris.
General warning:
or
Front of packaging (covers 30 % of surface):
or
There are also warnings on the back of every packet:
General warning (on the front of cigarette packages, covering at least 40 % of the area):
Additional warnings (on the back of cigarette packages, covering at least 50 % of the area):
or
In Spain, the warnings on both sides of cigarette packages are prefaced with "Las Autoridades Sanitarias advierten '' (Health authorities warn), written in black and white above the black part of the standard warning.
or
General warnings on all Swedish cigarette packagings have been in force since 1977.
In 1971, tobacco companies printed on the left side of cigarette packets "WARNING by H.M. Government, SMOKING CAN DAMAGE YOUR HEALTH ''.
In 1991, the E.U. tightened laws on tobacco warnings. "TOBACCO SERIOUSLY DAMAGES HEALTH '' was printed on the front of all tobacco packs. An additional warning was also printed on the reverse of cigarette packs.
In 2003, new E.U. regulations required one of the following general warnings must be displayed, covering at least 30 % of the surface of the pack:
Additionally, one of the following additional warnings must be displayed, covering at least 40 % of the surface of the pack:
From October 2008, all cigarette products manufactured must carry picture warnings to the reverse. Every pack must have one of these warnings by October 2009.
Plain packaging is compulsory for all cigarettes manufactured after May 2016 and sold after May 2017.
General warning:
Ghanaian warnings closely follow the EU 's legislation, as follows:
Packaging 1 (same as in the newer UK packaging):
Packaging 2 (same as in the older UK packaging):
Packaging 3 (same as in the older UK packaging):
Under Hong Kong Law, Chap 371B Smoking (Public Health) (Notices) Order, packaging must indicate the amount of nicotine and tar that is present in cigarette boxes in addition to graphics depicting different health problems caused by smoking in the size and ratio as prescribed by law. The warnings are to be published in both official languages, Traditional Chinese and English.
The warning begins with the phrase ' 香港 特區 政府 忠告 市民 HONG KONG SAR GOVERNMENT WARNING ', followed by one of the following in all caps.
In addition, any print advertisement must give minimum 20 % coverage of the following warnings: HKSAR GOVERNMENT HEALTH WARNING
All cigarette packets and other tobacco packaging in Iceland must include warnings in the same size and format as in the European Union and using the same approved texts in Icelandic.
Cigarette packets sold in India are required to carry graphical and textual health warnings. The warning must cover at least 85 % of the surface of the pack, of which 60 % must be pictorial and the remaining 25 % must contain textual warnings in English, Hindi or any other Indian language.
Until 2008, cigarette packets sold in India were required to carry a written warning on the front of the packet with the text CIGARETTE SMOKING IS INJURIOUS TO HEALTH in English. Paan, gutkha and tobacco packets carried the warning TOBACCO IS INJURIOUS TO HEALTH in Hindi and English. The law later changed. According to the new law, cigarette packets were required to carry pictorial warnings of a skull or scorpion along with the text SMOKING KILLS and TOBACCO CAUSES MOUTH CANCER in both Hindi and English.
The Cigarette and Other Tobacco Products (Packaging and Labelling) Rules 2008 requiring graphic health warnings came into force on 31 May 2008. Under the law, all tobacco products were required to display graphic pictures, such as pictures of diseased lungs, and the text SMOKING KILLS or TOBACCO KILLS in English, covering at least 40 % of the front of the pack, and retailers must display the cigarette packs in such a way that the pictures on pack are clearly visible. In January 2012 controversy arose when it was discovered an image of English footballer John Terry was used on a warning label.
On 15 October 2014, Union Health Minister Harsh Vardhan announced that only 15 % of the surface of a pack of cigarettes could contain branding, and that the rest must be used for graphic and text health warnings. The Union Ministry of Health amended the Cigarettes and Other Tobacco Products (Packaging and Labelling) Rules, 2008 to enforce the changes effective from 1 April 2015.
However, the government 's decision to increase pictorial warnings on tobacco packets from April 1 was put on hold indefinitely, following the recommendations of a Parliamentary committee, which reportedly did not speak to health experts but only to tobacco lobby representatives. On April 5, 2016, the health ministry ordered government agencies to enforce this new rule.
Following the intervention by the Parliamentary committee, NGO Health of Millions represented by Prashant Bhushan filed a petition in Supreme Court of India, which asks the government to stop selling of loose cigarettes and publish bigger health warnings on tobacco packs.
General warning:
Before 1996
1996 - 1999
1999 -- 2001
2002 -- end of 2013
Since 2014: With the enforcement of the 2012 Indonesian Government Regulation (Peraturan Pemerintah) No. 109, all tobacco product / cigarette packaging and advertisements must include warning images and an age restriction (18 +). Graphic warning messages must make up 40 % of cigarette packages. After the introduction of graphic images on Indonesian cigarette packaging, the branding of cigarettes as "light '', "mild '', "filter '', etc. is forbidden.
Other alternatives:
The warning below appears on the side of the cigarette packaging:
In Iran, a variety of warnings with graphic, disturbing images of tobacco - related harms (including a lung cancer and mouth cancer) are placed prominently on cigarette packages.
Japan was the first country in Asia to enforce a general warning on cigarette packaging, in force since 1972.
Prior to 2005, there was only one warning on all Japanese cigarette packages.
Since 2005, more than one general warning is printed on cigarette packaging.
On the front of cigarette packages:
On the back of cigarette packages:
In Laos, a variety of warnings with graphic, disturbing images of tobacco - related harms (including a mouth cancer and rotting teeth) are placed prominently on cigarette packages.
In Malaysia, a general warning has been mandatory on all cigarette packaging since June 1976.
Starting 1 June 2009, the Malaysian government decided to place graphic images on cigarette packs to show the adverse long - term effects of excessive smoking, replacing the general warning with text describing the graphic images, printed in Malay (front) and English (back), explaining:
Graphic warning messages must make up 40 % of the front of cigarette packages and 60 % of the back. After the introduction of graphic images on Malaysian cigarette packaging, the branding of cigarettes as "light '', "mild '', etc. is forbidden.
In Mexico, cigarette packs have contained health warnings and graphic images since 2010. By law, 30 % of the pack 's front, 100 % of the pack 's rear, and 100 % of one side must consist of images and warnings. The Secretariat of Health issues new warnings and images every six months. Images have included a dead rat, a partial mastectomy, a laryngectomy, a dead human fetus surrounded by cigarette butts, a woman being fed after suffering a stroke, and damaged lungs, among others.
Warnings include smoking - related diseases and statistics, toxins found in cigarettes and others such as:
General warning (on the front of cigarette packages, covering at least 30 % of the area, Helvetica font):
Additional warnings (on the back of cigarette packages, covering at least 40 % of the area, Helvetica font):
Regulated by "LEGE cu privire la tutun şi la articolele din tutun '' (Law on tobacco and tobacco articles) nr. 278 - XVI of 14.12.2007, in force since 07.03.2008.
Cigarette packets in Transnistria carry variable warning labels, depending on where they come from (English, Russian, etc.).
In Mongolia, a variety of warnings with graphic, disturbing images of tobacco - related harms (including heart disease and lung cancer) are placed prominently on cigarette packages.
In Montenegro, a variety of warnings with graphic, disturbing images of tobacco - related harms (including mouth cancer and lung cancer) are placed prominently on cigarette packages.
In Nepal, a variety of warnings with graphic, disturbing images of tobacco - related harms (including lung cancer and mouth cancer) are placed prominently on cigarette packages.
The first health warnings appeared on cigarette packets in New Zealand in 1974. Warning images accompanying text have been required to appear on each packet since 28 February 2008. New Regulations were made on 14 March 2018 which provided for larger warnings and a new schedule of images and messages.
By law, 75 % of the cigarette pack 's front and 100 % of the cigarette pack 's rear must consist of warning messages. Images include gangrenous toes, rotting teeth and gums, diseased lungs and smoking - damaged hearts. Cigarette packets also carry the Quitline logo and phone number and other information about quitting smoking.
In total, there are 15 different warnings. A full list with pictures is available at the New Zealand Ministry of Health 's website. Warning messages are rotated annually. Following is a list of the warnings in English and Māori.
Smoking causes heart attacks, KA PĀ MAI NGĀ MANAWA - HĒ I TE KAI PAIPA
Smoking causes over 80 % of lung cancers, NEKE ATU I TE 80 % O NGĀ MATE PUKUPUKU KI NGĀ PŪKAHUKAHU I AHU MAI I TE KAI PAIPA
Smoking causes over 80 % of lung cancers, NEKE ATU I TE 80 % O NGĀ MATE PUKUPUKU KI NGĀ PŪKAHUKAHU I AHU MAI I TE KAI PAIPA
Smoking harms your baby before it is born, KA TŪKINOHIA TŌ PĒPI I TO KŌPŪ I TE KAI PAIPA
Your smoking harms others, KA TŪKINOHIA ĒTAHI ATU I Ō MAHI KAI PAIPA
Smoking is a major cause of stroke, KA PIKI AKE I TE KAI PAIPA TŌ TŪPONO KI TE IKURA RORO
Smoking damages your blood vessels, KA TŪKINOHIA Ō IA TOTO I TE KAI PAIPA
Smoking is not attractive, KA ANUANU KOE I TE KAI PAIPA
Smoking causes heart attacks, KA PĀ MAI NGĀ MANAWA - HĒ I TE KAI PAIPA
Smoking causes lung cancer, KA PĀ MAI TE MATE PUKUPUKU KI NGĀ PŪKAHUKAHU I TE KAI PAIPA
Smoking when pregnant harms your baby, KA TŪKINOHIA TŌ PĒPI I TE KAI PAIPA IA KOE E HAPŪ ANA
Your smoking harms children, KA TŪKINOHIA NGĀ TAMARIKI I Ō MAHI KAI PAIPA
Smoking is a major cause of stroke, KA PIKI AKE I TE KAI PAIPA TŌ TŪPONO KI TE IKURA RORO
Quit before it is too late, ME WHAKAMUTU KEI RIRO KOE
Smoking causes gum disease and stinking breath, KA PĀ TE MATE PŪNIHO, KA HAUNGA TŌ HĀ I TE KAI PAIPA
There are two versions of general warnings, as follows:
From 2013 onward, there is a warning:
Norway has had general warnings on cigarette packets since 1975. Norway 's current warnings were introduced in 2003 and are in line with EU legislation, as Norway is an EEA member:
On the front of cigarette and cigar packages, covering about 30 % of the area:
On the back of cigarette and cigar packages, covering about 45 % of the area:
Tobacco products like snus and chewing tobacco have the following warning printed on them:
All cigarettes are required by Statutory Order 1219 (I) / 2008, dated 25 September 2008 and published in the Gazette of Pakistan on 24 November 2008, to carry rotational health warnings from 1 July 2009. Under the previous law, health warnings were not required to be rotated.
Each health warning will be printed for a period of 6 months. The health warnings are to be in Urdu and in English. Here are the English versions:
1. WARNING: Protect children. Do not let them breathe your smoke. Ministry of Health.
2. WARNING: Smoking causes mouth and throat cancer. Ministry of Health.
3. WARNING: Quit smoking; live longer life. Ministry of Health.
4. WARNING: Smoking severely harms you and the people around you. Ministry of Health.
The warnings shall cover at least 30 % of both sides of the packet, and be located at the top portions of the front (in Urdu) and back (in English) of the packet.
In Paraguay, a variety of warnings with graphic, disturbing images of tobacco - related harms (including impotence and heart attack) are placed prominently on cigarette packages.
In Peru, a variety of warnings with graphic, disturbing images of tobacco - related harms (including abortion and asthma) are placed prominently on cigarette packages.
All cigarette packaging sold in the Philippines is required to display a government warning label. The warnings include:
In July 2014, Philippine President Benigno Aquino III signed Republic Act 10643, or "An Act to Effectively Instill Health Consciousness through Graphic Health Warnings on Tobacco Products '', better known as the "Graphic Health Warning Act ''. This law requires tobacco product packaging to display pictures of the ill effects of smoking, occupying the bottom half of the display area on both the front and the back of the packaging. On March 3, 2016, Department of Health (DOH) secretary Janette Garin started the implementation of Republic Act 10643, requiring tobacco manufacturers to include graphic health warnings on newer cigarette packaging.
With the Graphic Health Warning Act implemented, graphic health warnings are used on all newer cigarette packaging, and older packages using text - only warnings are required to be replaced by newer packaging incorporating graphic warnings. The 12 new warnings, showing photos of negative effects of smoking such as mouth cancer, impotence, and gangrene, are rotated every month, and since November 3, 2016, all cigarette packaging without graphic health warning messages has been banned from sale. Labeling of cigarettes as "light '' or "mild '' is also forbidden by the Graphic Health Warning Act.
Warning messages on Russian cigarette packets were revised in 2013, bringing them in line with European Union standards.
Note: 12 different variants.
The warning messages on Serbian cigarette packets are visually similar to those in European Union countries, but the texts used in Serbia are not translated directly from EU - approved texts.
Text warnings were first added to cigarette packets. They used blunt, straight - to - the - point messages such as ' Smoking causes lung cancer '. They were later replaced by graphic warnings in August 2004, which featured gory pictures and were printed with the messages:
In 2016, the images and warnings were revised, with images focusing on damaged organs. The following warnings show what is printed nowadays.
From 1 January 2009, people possessing cigarettes without the SDPC (Singapore Duty Paid Cigarettes) label will be committing an offence under the Customs and GST Acts. The law was passed to distinguish non-duty paid, contraband cigarettes from duty - paid ones.
Switzerland has four official languages, but only has warning messages in three languages. The fourth language, Romansh, is only spoken by 0.5 % of the population, and those persons typically also speak either German or Italian. The three warning messages below are posted on cigarette packets, cartons and advertisements such as outdoor billboard posters:
A small warning, in Somali and English, appears on British American Tobacco brands, Royals and Pall Mall.
In South Africa, a variety of warnings with graphic, disturbing images of tobacco - related harms (including cancer), together with messages such as "smoking kills '', are placed prominently on cigarette packages.
In South Korea, general warnings on cigarette packaging have been in force since 1976. The warning messages are:
In Sri Lanka, a variety of warnings with graphic, disturbing images of tobacco - related harms (including cancer and heart attack) are placed prominently on cigarette packages.
The warnings in Taiwan are led by the phrase "行政院衛生署警告 '' (Warning from the Department of Health, Executive Yuan), followed by one of the following warnings:
Because the Department of Health was reorganized into the Ministry of Health and Welfare, the images and warnings were revised in 2014. The following warnings show what is printed (the new warnings took effect on June 1, 2014).
Whether the warning is the old version or the new version, it is marked with "戒煙專線 0800 - 636363 '' (smoking cessation hotline: 0800 - 636363).
In Thailand, a variety of warnings with graphic, disturbing images of tobacco - related harms (including tracheotomy and rotting teeth) are placed prominently on cigarette packages. A recent study showed that the warnings made Thai smokers think more often about the health risks of smoking and about quitting smoking.
The warning messages on Ukrainian cigarette packets are also visually similar to those in European Union countries:
In 1966, the United States became the first nation in the world to require a health warning on cigarette packages.
In 1973, the Assistant Director of Research at R.J. Reynolds Tobacco Company wrote an internal memorandum regarding new brands of cigarettes for the youth market. He observed that, "psychologically, at eighteen, one is immortal '' and theorized that "the desire to be daring is part of the motivation to start smoking. '' He stated, "in this sense the label on the package is a plus. ''
In 1999, Philip Morris U.S.A. purchased three brands of cigarettes from Liggett Group Inc. The brands were: Chesterfield, L&M, and Lark. At the time Philip Morris purchased the brands from Liggett, the packaging for those cigarettes included the statement "Smoking is Addictive ''. After Philip Morris acquired the three Liggett brands, it removed the statement from the packages.
Though the United States started the trend of labeling cigarette packages with warnings, today the country has some of the least restrictive labeling requirements. Warnings are usually printed in a small typeface along one of the sides of the cigarette pack, in colors and fonts that closely resemble the rest of the package, so the warnings essentially blend in and do not stand out from the rest of the cigarette package.
However, this is subject to change as the Family Smoking Prevention and Tobacco Control Act of 2009 requires color graphics with supplemental text that depicts the negative consequences of smoking to cover 50 percent of the front and rear of each pack. The nine new graphic warning labels were announced by the FDA in June 2011 and were required to appear on packaging by September 2012, though this was delayed by legal challenges.
In August 2011, five tobacco companies filed a lawsuit against the FDA in an effort to reverse the new warning mandate. Tobacco companies claimed that being required to promote government anti-smoking campaigns by placing the new warnings on packaging violates the companies ' free speech rights. Additionally, R.J. Reynolds, Lorillard, Commonwealth Brands Inc., Liggett Group LLC and Santa Fe Natural Tobacco Company Inc. claimed that the graphic labels are an unconstitutional way of forcing tobacco companies to engage in anti-smoking advocacy on the government 's behalf. A First Amendment lawyer, Floyd Abrams, represented the tobacco companies in the case, contending that requiring graphic warning labels on a lawful product can not withstand constitutional scrutiny. The Association of National Advertisers and the American Advertising Federation also filed a brief in the suit, arguing that the labels infringe on commercial free speech and could lead to further government intrusion if left unchallenged.
On 29 February 2012, U.S. District Judge Richard Leon ruled that the labels violate the right to free speech in the First Amendment. However, the following month the U.S. Court of Appeals for the 6th Circuit upheld the majority of the Tobacco Control Act of 2009, including the part requiring graphic warning labels. In April 2013 the Supreme Court declined to hear the appeal to this ruling, allowing the new labels to stand. As the original ruling against the FDA images was not actually reversed, the FDA will again need to go through the process of developing the new warning labels, and the timetable and final product remain unknown.
Stronger warning labels started to appear in May 2010.
Effective June 2010, the following labels began to appear on smokeless tobacco products (also known as chewing tobacco) and their advertisements.
The new warnings are required to comprise 30 percent of two principal display panels on the packaging; on advertisements, the health warnings must constitute 20 percent of the total area.
For many years in Venezuela, the only warning on cigarette packs was printed in a very small typeface along one of the sides:
"Se ha determinado que el fumar cigarrillos es nocivo para la salud, Ley de impuesto sobre cigarrillos '' (It has been determined that cigarette smoking is harmful to your health, Cigarette Tax Law) Since 14 September 1978
On 24 March 2005, another warning was introduced on every cigarette pack: "Este producto contiene alquitrán, nicotina y monóxido de carbono, los cuales son cancerígenos y tóxicos. No existen niveles seguros para el consumo de estas sustancias '' (This product contains tar, nicotine and carbon monoxide, which are carcinogenic and toxic. There are no safe levels for consumption of these substances).
The 1978 warning was not removed, so every cigarette pack now contains both warnings (one on each lateral).
In addition, since 24 March 2005, one of the following warnings is randomly printed very prominently, along with a graphical image, occupying 100 % of the back of the pack (40 % for the text warning and 60 % for the image):
Also, in Venezuela, tobacco advertising is strictly forbidden, so much so that the words tobacco, cigarette, cigar, etc. are not permitted in media such as radio and television and no one can smoke on television.
Curiously, under the campaign called "Venezuela 100 % libre de humo '' (Venezuela, 100 % smoke - free), these warnings appear only on cigarette packs and not on other tobacco products (which retain only the 1978 warning).
In Vietnam, a variety of warnings with graphic, disturbing images of tobacco - related harms (including tracheotomy and rotting teeth) are placed prominently on cigarette packages.
|
who played cindy brady in the brady bunch movie | The Brady Bunch Movie - wikipedia
The Brady Bunch Movie is a 1995 American comedy film based on the 1969 -- 1974 television series The Brady Bunch. The film was directed by Betty Thomas, with a screenplay by Laurice Elehwany, Rick Copp, and Bonnie and Terry Turner, and stars Shelley Long, Gary Cole and Michael McKean. It also features cameos from Davy Jones, Micky Dolenz, Peter Tork and RuPaul, as well as members of the original cast of The Brady Bunch (except Eve Plumb and Robert Reed, the latter of whom died in 1992) in new roles. The film places the original sitcom characters, with their 1970s fashion sense and sitcom family morality, in a contemporary 1990s setting, drawing humor from the resulting culture clash.
The Brady Bunch Movie was released in the United States on February 17, 1995, and grossed $46.6 million. A sequel titled A Very Brady Sequel was released on August 23, 1996, and a television film titled The Brady Bunch in the White House was aired on November 29, 2002...
Larry Dittmeyer, an unscrupulous real estate developer, explains to his supervisor that almost all the families in his neighborhood have agreed to sell their property as part of a plan to turn the area into a shopping mall, except for the Brady family.
At the Bradys ' house, Mike and Carol are having breakfast prepared by their housekeeper, Alice, while the six children prepare for school. Jan is jealous of her elder, popular sister Marcia. Cindy is tattling about everything she 's hearing. Greg is dreaming of becoming a singer (but sings folk songs more appropriate to the seventies). Peter is nervous that his voice is breaking. Bobby is excited about his new role as hall monitor at school.
Cindy gives Mike and Carol a tax delinquency notice (which was earlier mistakenly delivered to the Dittmeyers) stating that they face foreclosure on their house if they do not pay $20,000 in back taxes. The two initially ignore the crisis, but when Mike 's architectural design (which is exactly the same as their house) is turned down by two potential clients, he tells Carol that they may have to sell the house.
Cindy overhears this and tells her siblings and they look for work to raise money to save the house, but their earnings are nowhere near enough to reach the required sum. Mike manages to sell a Japanese company on one of his dated designs, thereby securing the money, only for Larry to sabotage it by claiming that Mike 's last building collapsed.
On the night before the Bradys have to move out, Marcia suggests that they enter a "Search for the Stars '' contest, the prize of which is exactly $20,000. Jan, having originally suggested this and been rejected, runs away from home. Cindy sees her leave and tattles, and the whole family goes on a search for her. They use their car 's citizens ' band radio, and their transmission is heard by Schultzy (Ann B. Davis), a long - haul trucker who picks up Jan and convinces her to return home.
The next day, the children join the "Search for the Stars '' contest with a dated performance that receives poor audience response compared to the more modern performances of other bands. However, the judges -- Davy Jones, Micky Dolenz, and Peter Tork of The Monkees -- vote for them, and they win the contest as a result. The tax bill is paid and their neighbors withdraw their homes from the market, foiling Larry 's plan and securing the neighborhood.
Later, Carol 's mother (Florence Henderson) arrives and finally convinces Jan to stop being jealous of Marcia, only for Cindy to start feeling jealous of Jan.
Robert Reed (the original Mike) died in 1992. Eve Plumb (the original Jan) declined to appear in the film.
The film was shot almost entirely in Los Angeles, California, with the Brady house being located in Sherman Oaks. The school scenes were shot at Taft High School in Woodland Hills. Some scenes were filmed at Bowcraft amusement park in Scotch Plains, New Jersey.
The producers had sought to film the original house that had been used for exterior shots during the original Brady Bunch series, but the owner of the Studio City, California home refused to restore the property to its 1969 appearance. The filmmakers instead erected a facade around a house in nearby Encino and filmed scenes in the front yard.
The Brady Bunch Movie was released in theaters on February 17, 1995. The film opened at # 1 at the box office with $14.8 million and grossed $46.6 million in the U.S. and Canada.
The Brady Bunch Movie premiered on NBC November 29, 1997 with additional footage not shown in theaters or on home video releases.
The Brady Bunch Movie was released on DVD June 10, 2003 and April 25, 2017. The film has also been released digitally on Google Play.
The review aggregator website Rotten Tomatoes reported a 64 % approval rating based on 41 reviews, with an average rating of 5.8 / 10. The website 's critical consensus reads, "Though lightweight and silly, The Brady Bunch Movie still charms as homage to the 70s sitcom. ''
Leonard Klady of Variety wrote, "For five years back in the early 1970s, U.S. TV homes were in the thrall of The Brady Bunch. Two decades after their small - screen demise, the clean - cut crew is back in mythic form as The Brady Bunch Movie. Part homage, part spoof, the deft balancing act is a clever adaptation -- albeit culled from less than pedigreed source material. '' Roger Ebert of the Chicago Sun - Times wrote, "The film establishes a bland, reassuring, comforting Brady reality -- a certain muted tone that works just fine but needs, I think, a bleaker contrast from outside to fully exploit the humor. The Brady Bunch Movie is rated PG - 13, which is a compromise: The Bradys themselves live in a PG universe, and the movie would have been funnier if when they ventured outside it was obviously Wayne 's World. '' Common Sense Media said that "for those who grew up watching the TV show, The Brady Bunch Movie is deeply satisfying and the best part is its nostalgia. Sure, it 's fun to see the Bradys treated as freaks. But the heart of the film is a campy, affectionate interpretation of the TV show. ''
A Very Brady Sequel, directed by Arlene Sanford, was released theatrically on August 23, 1996. It sees the family routine thrown into disarray when a man claiming to be Carol 's long - lost first husband arrives on their doorstep. The family must then follow Carol to Hawaii in order to set things straight. All of the main cast members reprised their roles.
The second sequel, The Brady Bunch in the White House, sees a convoluted series of mishaps end with Mike and Carol Brady elected as President and Vice President of the United States. Despite innocent efforts to improve the country, the Brady family is beset on all sides by controversy and imagined scandals which threaten to tear them apart. Although the original actors for Mike and Carol return, the children and Alice are all recast for this film, which was released as a filmed - for - television movie.
|
where do you get your beard genes from | Beard - wikipedia
A beard is the collection of hair that grows on the chin, upper lip, cheeks and neck of humans and some non-human animals. In humans, usually only pubescent or adult males are able to grow beards. However, women with hirsutism, a hormonal condition of excessive hairiness, may develop a beard.
Throughout the course of history, societal attitudes toward male beards have varied widely depending on factors such as prevailing cultural - religious traditions and the current era 's fashion trends. Some religions (such as Islam and Sikhism) have considered a full beard to be absolutely essential for all males able to grow one, and mandate it as part of their official dogma. Other cultures, even while not officially mandating it, view a beard as central to a man 's virility, exemplifying such virtues as wisdom, strength, sexual prowess and high social status. However, in cultures where facial hair is uncommon (or currently out of fashion), beards may be associated with poor hygiene or a "savage, '' uncivilized, or even dangerous demeanor.
The beard develops during puberty. Beard growth is linked to stimulation of hair follicles in the area by dihydrotestosterone, which continues to affect beard growth after puberty. Various hormones stimulate hair follicles from different areas. Dihydrotestosterone, for example, may also promote short - term pogonotrophy (i.e., the grooming of facial hair). For example, a scientist who chose to remain anonymous had to spend periods of several weeks on a remote island in comparative isolation. He noticed that his beard growth diminished, but the day before he was due to leave the island it increased again, to reach unusually high rates during the first day or two on the mainland. He studied the effect and concluded that the stimulus for increased beard growth was related to the resumption of sexual activity. However, at that time professional pogonologists such as R.M. Hardisty reacted vigorously and almost dismissively.
Beard growth rate is also genetic.
Biologists characterize beards as a secondary sexual characteristic because they are unique to one sex, yet do not play a direct role in reproduction. Charles Darwin first suggested possible evolutionary explanation of beards in his work The Descent of Man, which hypothesized that the process of sexual selection may have led to beards. Modern biologists have reaffirmed the role of sexual selection in the evolution of beards, concluding that there is evidence that a majority of females find men with beards more attractive than men without beards.
Evolutionary psychology explanations for the existence of beards include signalling sexual maturity and signalling dominance by increasing the perceived size of the jaw; clean - shaven faces are rated as less dominant than bearded ones. Some scholars assert that it is not yet established whether the sexual selection leading to beards is rooted in attractiveness (inter-sexual selection) or dominance (intra-sexual selection). A beard can be explained as an indicator of a male 's overall condition. The rate of facial hairiness appears to influence male attractiveness. The presence of a beard makes the male vulnerable in fights, which is costly, so biologists have speculated that there must be other evolutionary benefits that outweigh that drawback. Excess testosterone evidenced by the beard may indicate mild immunosuppression, which may support spermatogenesis.
The ancient Semitic civilization situated on the western, coastal part of the Fertile Crescent and centered on the coastline of modern Lebanon gave great attention to the hair and beard. Their beards mostly bear a strong resemblance to those affected by the Assyrians, familiar to us from their sculptures. The beard is arranged in three, four, or five rows of small tight curls, and extends from ear to ear around the cheeks and chin. Sometimes, however, in lieu of the many rows, we find one row only, the beard falling in tresses, which are curled at the extremity. There is no indication of the Phoenicians having cultivated mustachios.
Mesopotamian civilizations (Sumerian, Assyrians, Babylonians, Chaldeans and Medians) devoted great care to oiling and dressing their beards, using tongs and curling irons to create elaborate ringlets and tiered patterns.
The highest ranking Ancient Egyptians grew hair on their chins which was often dyed or hennaed (reddish brown) and sometimes plaited with interwoven gold thread. A metal false beard, or postiche, which was a sign of sovereignty, was worn by queens and kings. This was held in place by a ribbon tied over the head and attached to a gold chin strap, a fashion existing from about 3000 to 1580 BC.
In ancient India, the beard was allowed to grow long, a symbol of dignity and of wisdom (cf. sadhu). The nations in the east generally treated their beards with great care and veneration, and the punishment for licentiousness and adultery was to have the beard of the offending parties publicly cut off. They had such a sacred regard for the preservation of their beards that a man might pledge it for the payment of a debt.
Confucius held that the human body was a gift from one 's parents to which no alterations should be made. Aside from abstaining from body modifications such as tattoos, Confucians were also discouraged from cutting their hair, finger nails or beards. To what extent people could actually comply with this ideal depended on their profession; farmers or soldiers could probably not grow long beards as it would have interfered with their work.
Most of the clay soldiers in the Terracotta Army have moustaches or goatees but shaved cheeks, indicating that this was likely the fashion of the Qin dynasty.
The Iranians were fond of long beards, and almost all the Iranian kings had a beard. In Travels by Adam Olearius, a King of Iran commands his steward 's head to be cut off, and on its being brought to him, remarks, "what a pity it was, that a man possessing such fine mustachios, should have been executed. '' Men in the Achaemenid era wore long beards, with warriors adorning theirs with jewelry. Men also commonly wore beards during the Safavid and Qajar eras.
The ancient Greeks regarded the beard as a badge or sign of virility; in the Homeric epics it had almost sanctified significance, so that a common form of entreaty was to touch the beard of the person addressed. It was only shaven as a sign of mourning, though in this case it was instead often left untrimmed. A smooth face was regarded as a sign of effeminacy. The Spartans punished cowards by shaving off a portion of their beards. From the earliest times, however, the shaving of the upper lip was not uncommon. Greek beards were also frequently curled with tongs.
In the time of Alexander the Great the custom of smooth shaving was introduced. Reportedly, Alexander ordered his soldiers to be clean - shaven, fearing that their beards would serve as handles for their enemies to grab and to hold the soldier as he was killed. The practice of shaving spread from the Macedonians, whose kings are represented on coins, etc. with smooth faces, throughout the whole known world of the Macedonian Empire. Laws were passed against it, without effect, at Rhodes and Byzantium; and even Aristotle conformed to the new custom, unlike the other philosophers, who retained the beard as a badge of their profession. A man with a beard after the Macedonian period implied a philosopher, and there are many allusions to this custom of the later philosophers in such proverbs as: "The beard does not make the sage. ''
Shaving seems to have not been known to the Romans during their early history (under the kings of Rome and the early Republic). Pliny tells us that P. Ticinius was the first who brought a barber to Rome, which was in the 454th year from the founding of the city (that is, around 299 BC). Scipio Africanus was apparently the first among the Romans who shaved his beard. However, after that point, shaving seems to have caught on very quickly, and soon almost all Roman men were clean - shaven; being clean - shaven became a sign of being Roman and not Greek. Only in the later times of the Republic did the Roman youth begin shaving their beards only partially, trimming it into an ornamental form; prepubescent boys oiled their chins in hopes of forcing premature growth of a beard.
Still, beards remained rare among the Romans throughout the Late Republic and the early Principate. In a general way, in Rome at this time, a long beard was considered a mark of slovenliness and squalor. The censors L. Veturius and P. Licinius compelled M. Livius, who had been banished, on his restoration to the city, to be shaved, and to lay aside his dirty appearance, and then, but not until then, to come into the Senate. The first occasion of shaving was regarded as the beginning of manhood, and the day on which this took place was celebrated as a festival. Usually, this was done when the young Roman assumed the toga virilis. Augustus did it in his twenty - fourth year, Caligula in his twentieth. The hair cut off on such occasions was consecrated to a god. Thus Nero put his into a golden box set with pearls, and dedicated it to Jupiter Capitolinus. The Romans, unlike the Greeks, let their beards grow in time of mourning; so did Augustus for the death of Julius Caesar. Other occasions of mourning on which the beard was allowed to grow were, appearance as a reus, condemnation, or some public calamity. On the other hand, men of the country areas around Rome in the time of Varro seem not to have shaved except when they came to market every eighth day, so that their usual appearance was most likely a short stubble.
In the second century AD the Emperor Hadrian, according to Dion Cassius, was the first of all the Caesars to grow a beard; Plutarch says that he did it to hide scars on his face. This was a period in Rome of widespread imitation of Greek culture, and many other men grew beards in imitation of Hadrian and the Greek fashion. Until the time of Constantine the Great the emperors appear in busts and coins with beards; but Constantine and his successors until the reign of Phocas, with the exception of Julian the Apostate, are represented as beardless.
Late Hellenistic sculptures of Celts portray them with long hair and mustaches but beardless.
Among the Gaelic Celts of Scotland and Ireland, men typically let their facial hair grow into a full beard, and it was often seen as dishonourable for a Gaelic man to have no facial hair.
Tacitus states that among the Catti, a Germanic tribe (perhaps the Chatten), a young man was not allowed to shave or cut his hair until he had slain an enemy. The Lombards derived their fame from the great length of their beards (Longobards -- Long Beards). When Otto the Great said anything serious, he swore by his beard, which covered his breast.
In medieval Europe, a beard displayed a knight 's virility and honour. The Castilian knight El Cid is described in The Lay of the Cid as "the one with the flowery beard ''. Holding somebody else 's beard was a serious offence that had to be righted in a duel.
While most noblemen and knights were bearded, the Catholic clergy were generally required to be clean - shaven. This was understood as a symbol of their celibacy.
In pre-Islamic Arabia men would apparently keep mustaches but shave the hair on their chins. The prophet Muhammad encouraged his followers to do the opposite, long chin hair but trimmed mustaches, to signify their break with the old religion. This style of beard subsequently spread along with Islam during the Muslim expansion in the Middle Ages.
Most Chinese emperors of the Ming dynasty (1368 - 1644) appear with beards or mustaches in portraits.
In the 15th century, most European men were clean - shaven. 16th - century beards were allowed to grow to an amazing length (see the portraits of John Knox, Bishop Gardiner, Cardinal Pole and Thomas Cranmer). Some beards of this time were the Spanish spade beard, the English square cut beard, the forked beard, and the stiletto beard. In 1587 Francis Drake claimed, in a figure of speech, to have singed the King of Spain 's beard.
During the Chinese Qing dynasty (1644 - 1911), the ruling Manchu minority were either clean - shaven or at most wore mustaches, in contrast to the Han majority who still wore beards in keeping with the Confucian ideal.
At the beginning of the 17th century, the size of beards decreased in urban circles of Western Europe. In the second half of the century, being clean - shaven gradually became more common again, so much so that in 1698, Peter the Great of Russia ordered men to shave off their beards, and in 1705 levied a tax on beards in order to bring Russian society more in line with contemporary Western Europe.
During the early 19th century most men, particularly amongst the nobility and upper classes, went clean - shaven. There was, however, a dramatic shift in the beard 's popularity during the 1850s, with it becoming markedly more popular. Consequently, beards were adopted by many leaders, such as Alexander III of Russia, Napoleon III of France and Frederick III of Germany, as well as many leading statesmen and cultural figures, such as Benjamin Disraeli, Charles Dickens, Giuseppe Garibaldi, Karl Marx, and Giuseppe Verdi. This trend can be recognised in the United States of America, where the shift can be seen amongst the post-Civil War presidents. Before Abraham Lincoln, no President had a beard; after Lincoln until Woodrow Wilson, every President except Andrew Johnson and William McKinley had either a beard or a moustache.
The beard became linked in this period with notions of masculinity and male courage. The resulting popularity has contributed to the stereotypical Victorian male figure in the popular mind, the stern figure clothed in black whose gravitas is added to by a heavy beard.
In China, the revolution of 1911 and subsequent May Fourth Movement of 1919 led the Chinese to idealize the West as more modern and progressive than themselves. This included the realm of fashion, and Chinese men began shaving their faces and cutting their hair short.
By the early 20th century beards began a slow decline in popularity. Although retained by some prominent figures who were young men in the Victorian period (like Sigmund Freud), most men who retained facial hair during the 1920s and 1930s limited themselves to a moustache or a goatee (such as with Marcel Proust, Albert Einstein, Vladimir Lenin, Leon Trotsky, Adolf Hitler, and Joseph Stalin). In the United States, meanwhile, popular movies portrayed heroes with clean - shaven faces and "crew cuts ''. Concurrently, the psychological mass marketing of Madison Avenue was becoming prevalent. The Gillette Safety Razor Company was one of these marketers ' early clients. These events conspired to popularize short hair and clean shaven faces as the only acceptable style for decades to come. The few men who wore the beard or portions of the beard during this period were frequently either old, Central European, members of a religious sect that required it, or in academia.
The beard was reintroduced to mainstream society by the counterculture, firstly with the "beatniks '' in the 1950s, and then with the hippie movement of the mid-1960s. Following the Vietnam War, beards exploded in popularity. In the mid-late 1960s and throughout the 1970s, beards were worn by hippies and businessmen alike. Popular music artists like The Beatles, Barry White, The Beach Boys, Jim Morrison (The Doors) and the male members of Peter, Paul, and Mary, among many others, wore full beards. The trend of seemingly ubiquitous beards in American culture subsided in the mid-1980s.
By the end of the 20th century, the closely clipped Verdi beard, often with a matching integrated moustache, had become relatively common. From the 1990s onward, the fashion in the United States has generally trended toward either a goatee, Van Dyke, or a closely cropped full beard undercut on the throat. By 2010, the fashionable length approached a "two - day shadow ''. The 2010s decade also saw the full beard become fashionable again amongst young men.
One stratum of American society where facial hair is virtually nonexistent is in government and politics. The last President of the United States to wear any type of facial hair was William Howard Taft, who was in office from 1909 till 1913. The last Vice President of the United States to wear any facial hair was Charles Curtis, who was in office from 1929 till 1933.
Beards also play an important role in some religions.
In Greek mythology and art, Zeus and Poseidon are always portrayed with beards, but Apollo never is. A bearded Hermes was replaced with the more familiar beardless youth in the 5th century BC. Zoroaster, the 11th / 10th century BC era founder of Zoroastrianism is almost always depicted with a beard. In Norse mythology, Thor the god of thunder is portrayed wearing a red beard.
Iconography and art dating from the 4th century onward almost always portray Jesus with a beard. In paintings and statues most of the Old Testament Biblical characters such as Moses and Abraham and Jesus ' New Testament disciples such as St Peter appear with beards, as does John the Baptist. However, Western European art generally depicts John the Apostle as clean - shaven, to emphasize his relative youth. Eight of the figures portrayed in the painting entitled The Last Supper by Leonardo da Vinci are bearded. Mainstream Christianity holds Isaiah Chapter 50: Verse 6 as a prophecy of Christ 's crucifixion, and as such, as a description of Christ having his beard plucked by his tormentors.
In Eastern Christianity, members of the priesthood and monastics often wear beards, and religious authorities at times have recommended or required beards for all male believers.
Amish and Hutterite men shave until they marry, then grow a beard and are never thereafter without one, although it is a particular form of a beard (see Visual markers of marital status). Many Syrian Christians from Kerala in India wore long beards.
In the 1160s Burchardus, abbot of the Cistercian monastery of Bellevaux in the Franche - Comté, wrote a treatise on beards. He regarded beards as appropriate for lay brothers, but not for the priests among the monks.
At various times in its history and depending on various circumstances, the Catholic Church in the West permitted or prohibited facial hair (barbae nutritio -- literally meaning "nourishing a beard '') for clergy. A decree of the beginning of the 6th century in either Carthage or the south of Gaul forbade clerics to let their hair and beards grow freely. The phrase "nourishing a beard '' was interpreted in different ways, either as imposing a clean - shaven face or only excluding a too - lengthy beard. In relatively modern times, the first pope to wear a beard was Pope Julius II, who in 1511 -- 12 did so for a while as a sign of mourning for the loss of the city of Bologna. Pope Clement VII let his beard grow at the time of the sack of Rome (1527) and kept it. All his successors did so until the death in 1700 of Pope Innocent XII. Since then, no pope has worn a beard. Most Latin - rite clergy are now clean - shaven, but Capuchins and some others are bearded. Present canon law is silent on the matter.
Although most Protestant Christians regard the beard as a matter of choice, some have taken the lead in fashion by openly encouraging its growth as "a habit most natural, scriptural, manly, and beneficial '' (C.H. Spurgeon). Some Messianic Jews also wear beards to show their observance of the Old Testament.
Diarmaid MacCulloch writes: "There is no doubt that Cranmer mourned the dead king (Henry VIII) '', and it was said that he showed his grief by growing a beard. But
"it was a break from the past for a clergyman to abandon his clean - shaven appearance which was the norm for late medieval priesthood; with Luther providing a precedent (during his exile period), virtually all the continental reformers had deliberately grown beards as a mark of their rejection of the old church, and the significance of clerical beards as an aggressive anti-Catholic gesture was well recognised in mid-Tudor England. ''
Since the mid-twentieth century, The Church of Jesus Christ of Latter - day Saints (LDS Church) has encouraged men to be clean - shaven, particularly those that serve in ecclesiastical leadership positions. The church 's encouragement of men 's shaving has no theological basis, but stems from the general waning of facial hair 's popularity in Western society during the twentieth century and its association with the hippie and drug culture aspects of the counterculture of the 1960s, and has not been a permanent rule.
After Joseph Smith, many of the early presidents of the LDS Church, such as Brigham Young and Lorenzo Snow, wore impressive beards. Since David O. McKay became church president in 1951, most LDS Church leaders have been clean - shaven. The church maintains no formal policy on facial hair for its general membership. However, formal prohibitions against facial hair are currently enforced for young men providing two - year missionary service. Students and staff of the church - sponsored higher education institutions, such as Brigham Young University (BYU), are required to adhere to the Church Educational System Honor Code, which states in part: "Men are expected to be clean - shaven; beards are not acceptable '', although male BYU students are permitted to wear a neatly groomed moustache. A beard exception is granted for "serious skin conditions '', and for approved theatrical performances, but until 2015 no exception was given for any other reason, including religious convictions. In January 2015, BYU clarified that students who want a beard for religious reasons, like Muslims or Sikhs, may be granted permission after applying for an exception.
BYU students led a campaign to loosen the beard restrictions in 2014, but it had the opposite effect at Church Educational System schools: some who had previously been granted beard exceptions were found no longer to qualify, and for a brief period the LDS Business College required students with a registered exception to wear a "beard badge '', which was likened to a "badge of shame ''. Some students also join in with shaming their fellow beard - wearing students, even those with registered exceptions.
The ancient text followed regarding beards depends on the Deva and other teachings, varying according to whom the devotee worships or follows. Many Sadhus, Yogis, or Yoga practitioners keep beards, and represent all situations of life. Shaivite ascetics generally have beards, as they are not permitted to own anything, which would include a razor. The beard is also a sign of a nomadic and ascetic lifestyle.
Vaishnava men, typically of the ISKCON sect, are often clean - shaven as a sign of cleanliness.
Allowing the beard (Lihyah in Arabic) to grow and trimming the moustache is ruled as mandatory according to the Sunnah in Islam by consensus, and is considered part of the fitra, i.e. the way man was created.
Sahih Bukhari, Book 72, Hadith # 781 Narrated by Ibn ' Umar: Allah 's Apostle, salallahu aleihi wa sallam / peace and blessings of Allah be upon him, said, "Cut the moustaches short and leave the beard (as it is). ''
Ibn Hazm reported that there was scholarly consensus that it is an obligation to trim the moustache and let the beard grow. He quoted a number of ahaadeeth as evidence, including the hadeeth of Ibn ' Umar quoted above, and the hadeeth of Zayd ibn Arqam in which Mohammed said: "Whoever does not remove any of his moustache is not one of us. '' Ibn Hazm said in al - Furoo ': "This is the way of our colleagues (i.e., group of scholars). ''
The extent of the beard is from the cheekbones, level with the channel of the ears, until the bottom of the face. It includes the hair that grows on the cheeks. Hair on the neck is not considered a part of the beard and can be removed.
In the Islamic tradition, God commanded Abraham to keep his beard, shorten his moustache, clip his nails, shave the hair around his genitals, and epilate his armpit hair.
The Bible states in Leviticus 19: 27 that "You shall not round off the corners of your heads nor mar the corners of your beard. '' Talmudic tradition explains this to mean that a man may not shave his beard with a razor with a single blade, since the cutting action of the blade against the skin "mars '' the beard. Because scissors have two blades, some opinions in halakha (Jewish law) permit their use to trim the beard, as the cutting action comes from contact of the two blades and not the blade against the skin. For this reason, some poskim (Jewish legal deciders) rule that Orthodox Jews may use electric razors to remain cleanshaven, as such shavers cut by trapping the hair between the blades and the metal grating, halakhically a scissor - like action. Other poskim like Zokon Yisrael KiHilchso, maintain that electric shavers constitute a razor - like action and consequently prohibit their use.
The Zohar, one of the primary sources of Kabbalah (Jewish mysticism), attributes holiness to the beard, specifying that hairs of the beard symbolize channels of subconscious holy energy that flows from above to the human soul. Therefore, most Hasidic Jews, for whom Kabbalah plays an important role in their religious practice, traditionally do not remove or even trim their beards.
Traditional Jews refrain from shaving, trimming the beard, and haircuts during certain times of the year like Passover, Sukkot, the Counting of the Omer and the Three Weeks. Cutting the hair is also restricted during the 30 - day mourning period after the death of a close relative, known in Hebrew as the Shloshim (thirty).
Guru Gobind Singh, the tenth Sikh Guru, commanded the Sikhs to maintain unshorn hair, recognizing it as a necessary adornment of the body by Almighty God as well as a mandatory Article of Faith. Sikhs consider the beard to be part of the nobility and dignity of their manhood. Sikhs also refrain from cutting their hair and beards out of respect for the God - given form. Kesh, uncut hair, is one of the Five Ks, five compulsory articles of faith for a baptized Sikh. As such, a Sikh man is easily identified by his turban and uncut hair and beard.
Male Rastafarians wear beards in conformity with injunctions given in the Bible, such as Leviticus 21: 5, which reads "They shall not make any baldness on their heads, nor shave off the edges of their beards, nor make any cuts in their flesh. '' The beard is a symbol of the covenant between God (Jah or Jehovah in Rastafari usage) and his people.
In Greco - Roman antiquity the beard was "seen as the defining characteristic of the philosopher; philosophers had to have beards, and anyone with a beard was assumed to be a philosopher. '' While one may be tempted to think that Socrates and Plato sported "philosopher 's beards '', such is not the case. Shaving was not widespread in Athens during the fifth and fourth centuries BCE, so they would not have been distinguished from the general populace for having a beard. The popularity of shaving did not rise in the region until the example of Alexander the Great near the end of the fourth century BCE. The popularity of shaving did not spread to Rome until the end of the third century BCE following its acceptance by Scipio Africanus. In Rome shaving 's popularity grew to the point that for a respectable Roman citizen it was seen almost as compulsory.
The idea of the philosopher 's beard gained traction when in 155 BCE three philosophers arrived in Rome as Greek diplomats: Carneades, head of the Platonic Academy; Critolaus of Aristotle 's Lyceum; and the head of the Stoics Diogenes of Babylon. "In contrast to their beautifully clean - shaven Italian audience, these three intellectuals all sported magnificent beards. '' Thus the connection of beards and philosophy caught hold of the Roman public imagination.
The importance of the beard to Roman philosophers is best seen by the extreme value that the Stoic philosopher Epictetus placed on it. As historian John Sellars puts it, Epictetus "affirmed the philosopher 's beard as something almost sacred... to express the idea that philosophy is no mere intellectual hobby but rather a way of life that, by definition, transforms every aspect of one 's behavior, including one 's shaving habits. If someone continues to shave in order to look the part of a respectable Roman citizen, it is clear that they have not yet embraced philosophy conceived as a way of life and have not yet escaped the social customs of the majority... the true philosopher will only act according to reason or according to nature, rejecting the arbitrary conventions that guide the behavior of everyone else. ''
Epictetus saw his beard as an integral part of his identity and held that he would rather be executed than submit to any force demanding he remove it. In his Discourses 1.2. 29, he puts forward such a hypothetical confrontation: "' Come now, Epictetus, shave your beard '. If I am a philosopher, I answer, I will not shave it off. ' Then I will have you beheaded '. If it will do you any good, behead me. '' The act of shaving "would be to compromise his philosophical ideal of living in accordance with nature and it would be to submit to the unjustified authority of another. ''
This was no mere theoretical concern in the age of Epictetus, for the Emperor Domitian had had the hair and beard of the philosopher Apollonius of Tyana forcibly shaved off "as punishment for anti-State activities. '' This disgraced Apollonius while avoiding making him a martyr like Socrates. Well before his declaration of "death before shaving '', Epictetus had been forced to flee Rome when Domitian banished all philosophers from Italy under threat of execution.
Roman philosophers sported different styles of beards to distinguish which school they belonged to. Cynics with long dirty beards to indicate their "strict indifference to all external goods and social customs ''; Stoics occasionally trimming and washing their beards in accord with their view "that it is acceptable to prefer certain external goods so long as they are never valued above virtue ''; Peripatetics took great care of their beards believing in accord with Aristotle that "external goods and social status were necessary for the good life together with virtue ''. To a Roman philosopher in this era, having a beard and its condition indicated their commitment to live in accord with their philosophy.
Professional airline pilots are required to be clean shaven to facilitate a tight seal with auxiliary oxygen masks. Similarly, firefighters may also be prohibited from full beards to obtain a proper seal with SCBA equipment. This restriction is also fairly common in the oil & gas industry for the same reason in locations where hydrogen sulfide gas is a common danger. Other jobs may prohibit beards as necessary to wear masks or respirators.
Isezaki city in Gunma prefecture, Japan, decided to ban beards for male municipal employees on May 19, 2010.
The U.S. Court of Appeals for the Eighth Circuit has found requiring shaving to be discriminatory.
The International Boxing Association prohibits the wearing of beards by amateur boxers, although the Amateur Boxing Association of England allows exceptions for Sikh men, on condition that the beard be covered with a fine net. As a safety precaution, high school wrestlers must be clean - shaven before each match, though neatly trimmed moustaches are often allowed.
The Cincinnati Reds had a longstanding enforced policy where all players had to be completely clean shaven (no beards, long sideburns or moustaches). However, this policy was abolished following the sale of the team by Marge Schott in 1999.
Under owner George Steinbrenner, the New York Yankees baseball team had a strict dress code that prohibited long hair and facial hair below the lip; the regulation was continued under Hank and Hal Steinbrenner when control of the Yankees was transferred to them after the 2008 season. Willie Randolph and Joe Girardi, both former Yankee assistant coaches, adopted a similar clean - shaven policy for their ballclubs: the New York Mets and New York Yankees, respectively. Fredi Gonzalez, who replaced Girardi as the Marlins ' manager, dropped that policy when he took over after the 2006 season.
The Playoff beard is a tradition common with teams in the National Hockey League and now in other leagues where players allow their beards to grow from the beginning of the playoff season until the playoffs are over for their team.
In 2008, some members of the Tyrone Gaelic football team vowed not to shave until the end of the season. They went on to win the All - Ireland football championship, some of them sporting impressive beards by that stage.
Canadian Rugby Union flanker Adam Kleeberger attracted much media attention before, during, and after the 2011 Rugby World Cup in New Zealand. Kleeberger was known, alongside teammates Jebb Sinclair and Hubert Buydens as one of "the beardoes ''. Fans in the stands could often be seen wearing fake beards and "fear the beard '' became a popular expression during the team 's run in the competition. Kleeberger, who became one of Canada 's star players in the tournament, later used the publicity surrounding his beard to raise awareness for two causes; Christchurch earthquake relief efforts and prostate cancer. As part of this fundraising, his beard was shaved off by television personality Rick Mercer and aired on national television. The "Fear the Beard '' expression was coined by the NBA 's Oklahoma City Thunder fans and is now used by Houston Rockets fans to support James Harden.
Los Angeles Dodgers relief pitcher Brian Wilson, who claims not to have shaved since the 2010 All - Star Game, has grown a big beard that has become popular in MLB and with its fans. MLB Fan Cave presented a "Journey Inside Brian Wilson 's Beard '', which was an interactive screenshot of Wilson 's beard, where one can click on different sections to see various fictional activities performed by small "residents '' of the beard. The hosts on sports shows sometimes wear replica beards, and the Giants gave them away to fans as a promo.
The 2013 Boston Red Sox featured at least 12 players with varying degrees of facial hair, ranging from the closely trimmed beard of slugger David Ortiz to the long shaggy looks of Jonny Gomes and Mike Napoli. The Red Sox used their beards as a marketing tool, offering a Dollar Beard Night, where all fans with beards (real or fake) could buy a ticket for $1.00; and also as means of fostering team camaraderie.
Beards have also become a source of competition between athletes. Examples of athlete "beard - offs '' include NBA players DeShawn Stevenson and Drew Gooden in 2008, and WWE wrestler Daniel Bryan and Oakland Athletics outfielder Josh Reddick in 2013.
Depending on the country and period, facial hair was either prohibited in the army or an integral part of the uniform.
Beard hair is most commonly removed by shaving or by trimming with the use of a beard trimmer. If only the area above the upper lip is left unshaven, the resulting facial hairstyle is known as a moustache; if hair is left only on the chin, the style is a goatee.
The term "beard '' is also used for a collection of stiff, hair - like feathers on the centre of the breast of turkeys. Normally, the turkey 's beard remains flat and may be hidden under other feathers, but when the bird is displaying, the beard become erect and protrudes several centimetres from the breast.
Many goats possess a beard. The male sometimes urinates on his own beard as a marking behaviour during rutting.
Several animals are termed "bearded '' as part of their common name. Sometimes a beard of hair on the chin or face is prominent but for some others, "beard '' may refer to a pattern or colouring of the pelage reminiscent of a beard.
|
who was the programmer of ms-dos operating system | MS - DOS - wikipedia
MS - DOS (/ ˌɛmɛsˈdɒs / EM - es - DOSS; acronym for Microsoft Disk Operating System) is a discontinued operating system for x86 - based personal computers mostly developed by Microsoft. Collectively, MS - DOS, its rebranding as IBM PC DOS, and some operating systems attempting to be compatible with MS - DOS, are sometimes referred to as "DOS '' (which is also the generic acronym for disk operating system). MS - DOS was the main operating system for IBM PC compatible personal computers during the 1980s and the early 1990s, when it was gradually superseded by operating systems offering a graphical user interface (GUI), in various generations of the graphical Microsoft Windows operating system.
MS - DOS resulted from a request in 1981 by IBM for an operating system to use in its IBM PC range of personal computers. Microsoft quickly bought the rights to 86 - DOS from Seattle Computer Products, and began work on modifying it to meet IBM 's specification. IBM licensed and released it in August 1981 as PC DOS 1.0 for use in their PCs. Although MS - DOS and PC DOS were initially developed in parallel by Microsoft and IBM, the two products diverged after twelve years, in 1993, with recognizable differences in compatibility, syntax, and capabilities.
During its lifetime, several competing products were released for the x86 platform, and MS - DOS went through eight versions, until development ceased in 2000. Initially MS - DOS was targeted at Intel 8086 processors running on computer hardware using floppy disks to store and access not only the operating system, but application software and user data as well. Progressive version releases delivered support for other mass storage media in ever greater sizes and formats, along with added feature support for newer processors and rapidly evolving computer architectures. Ultimately it was the key product in Microsoft 's growth from a programming languages company to a diverse software development firm, providing the company with essential revenue and marketing resources. It was also the underlying basic operating system on which early versions of Windows ran as a GUI. It is a flexible operating system, and consumes negligible installation space.
MS - DOS was a renamed form of 86 - DOS -- owned by Seattle Computer Products, written by Tim Paterson. Development of 86 - DOS took only six weeks, as it was basically a clone of Digital Research 's CP / M (for 8080 / Z80 processors), ported to run on 8086 processors and with two notable differences compared to CP / M: an improved disk sector buffering logic and the introduction of FAT12 instead of the CP / M filesystem. This first version was shipped in August 1980. Microsoft, which needed an operating system for the IBM Personal Computer, hired Tim Paterson in May 1981 and bought 86 - DOS 1.10 for $75,000 in July of the same year. Microsoft kept the version number, but renamed it MS - DOS. It also licensed MS - DOS 1.10 / 1.14 to IBM, which, in August 1981, offered it as PC DOS 1.0 as one of three operating systems for the IBM 5150, or the IBM PC.
Within a year Microsoft licensed MS - DOS to over 70 other companies. It was designed to be an OS that could run on any 8086 - family computer. Each computer would have its own distinct hardware and its own version of MS - DOS, similar to the situation that existed for CP / M, and with MS - DOS emulating the same solution as CP / M to adapt for different hardware platforms. To this end, MS - DOS was designed with a modular structure with internal device drivers, minimally for primary disk drives and the console, integrated with the kernel and loaded by the boot loader, and installable device drivers for other devices loaded and integrated at boot time. The OEM would use a development kit provided by Microsoft to build a version of MS - DOS with their basic I / O drivers and a standard Microsoft kernel, which they would typically supply on disk to end users along with the hardware. Thus, there were many different versions of "MS - DOS '' for different hardware, and there is a major distinction between an IBM - compatible (or ISA) machine and an MS - DOS (compatible) machine. Some machines, like the Tandy 2000, were MS - DOS compatible but not IBM - compatible, so they could run software written exclusively for MS - DOS without dependence on the peripheral hardware of the IBM PC architecture.
This design would have worked well for compatibility, if application programs had only used MS - DOS services to perform device I / O, and indeed the same design philosophy is embodied in Windows NT (see Hardware Abstraction Layer). However, in MS - DOS 's early days, the greater speed attainable by programs through direct control of hardware was of particular importance, especially for games, which often pushed the limits of their contemporary hardware. Very soon an IBM - compatible architecture became the goal, and before long all 8086 - family computers closely emulated IBM 's hardware, and only a single version of MS - DOS for a fixed hardware platform was needed for the market. This version is the version of MS - DOS that is discussed here, as the dozens of other OEM versions of "MS - DOS '' were only relevant to the systems they were designed for, and in any case were very similar in function and capability to some standard version for the IBM PC -- often the same - numbered version, but not always, since some OEMs used their own proprietary version numbering schemes (e.g. labeling later releases of MS - DOS 1. x as 2.0 or vice versa) -- with a few notable exceptions.
Microsoft omitted multi-user support from MS - DOS because Microsoft 's Unix - based operating system, Xenix, was fully multi-user. The company planned to over time improve MS - DOS so it would be almost indistinguishable from single - user Xenix, or XEDOS, which would also run on the Motorola 68000, Zilog Z8000, and the LSI - 11; they would be upwardly compatible with Xenix, which Byte in 1983 described as "the multi-user MS - DOS of the future ''. Microsoft advertised MS - DOS and Xenix together, listing the shared features of its "single - user OS '' and "the multi-user, multi-tasking, UNIX - derived operating system '', and promising easy porting between them. After the breakup of the Bell System, however, AT&T Computer Systems started selling UNIX System V. Believing that it could not compete with AT&T in the Unix market, Microsoft abandoned Xenix, and in 1987 transferred ownership of Xenix to the Santa Cruz Operation (SCO).
On 25 March 2014, Microsoft made the code to SCP MS - DOS 1.25 and a mixture of Altos MS - DOS 2.11 and TeleVideo PC DOS 2.11 available to the public under the Microsoft Research License Agreement, which makes the code source - available, but not open source as defined by Open Source Initiative or Free Software Foundation standards.
As an April Fools ' joke in 2015, Microsoft Mobile launched a Windows Phone application called MS - DOS Mobile, which was presented as a new mobile operating system and worked similarly to MS - DOS.
Microsoft licensed or released versions of MS - DOS under different names like Lifeboat Associates "Software Bus 86 '' aka SB - DOS, COMPAQ - DOS, NCR - DOS or Z - DOS before it eventually enforced the MS - DOS name for all versions but the IBM one, which was originally called "IBM Personal Computer DOS '', later shortened to IBM PC DOS. (Competitors released compatible DOS systems such as DR DOS and PTS - DOS that could also run DOS applications.)
The following versions of MS - DOS were released to the public:
Microsoft DOS was released through the OEM channel, until Digital Research released DR DOS 5.0 as a retail upgrade. With PC DOS 5.00.1, the IBM - Microsoft agreement started to end, and IBM entered the retail DOS market with IBM DOS 5.00.1, 5.02, 6.00 and PC DOS 6.1, 6.3, 7, 2000 and 7.1.
Localized versions of MS - DOS existed for different markets. While Western issues of MS - DOS evolved around the same set of tools and drivers just with localized message languages and differing sets of supported codepages and keyboard layouts, some language versions were considerably different from Western issues and were adapted to run on localized PC hardware with additional BIOS services not available in Western PCs, support multiple hardware codepages for displays and printers, support DBCS, alternative input methods and graphics output. Affected issues include Japanese (DOS / V), Korean, Arabic (ADOS 3.3 / 5.0), Hebrew (HDOS 3.3 / 5.0), Russian (RDOS 4.01 / 5.0) as well as some other Eastern European versions of DOS.
On microcomputers based on the Intel 8086 and 8088 processors, including the IBM PC and clones, the initial competition to the PC DOS / MS - DOS line came from Digital Research, whose CP / M operating system had inspired MS - DOS. In fact, there remains controversy as to whether QDOS was more or less plagiarised from early versions of CP / M code. Digital Research released CP / M - 86 a few months after MS - DOS, and it was offered as an alternative to MS - DOS and Microsoft 's licensing requirements, but at a higher price. Executable programs for CP / M - 86 and MS - DOS were not interchangeable with each other; many applications were sold in both MS - DOS and CP / M - 86 versions until MS - DOS became preponderant (later Digital Research operating systems could run both MS - DOS and CP / M - 86 software). MS - DOS originally supported the simple .COM executable format, which was modelled after a similar but binary incompatible format known from CP / M - 80. CP / M - 86 instead supported a relocatable format using the file extension .CMD to avoid name conflicts with CP / M - 80 and MS - DOS .COM files. MS - DOS version 1.0 added a more advanced relocatable .EXE executable file format.
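The practical difference between the two MS - DOS formats is that a .COM file is a headerless flat image loaded at offset 100h, while an .EXE file begins with the "MZ '' signature followed by a relocation header. As a rough illustration only (a hedged sketch, not code from any DOS loader; the command - line handling is hypothetical), a small C program could distinguish the two by looking for that signature:

    #include <stdio.h>

    /* Minimal sketch: report whether a file looks like a relocatable .EXE
       (MZ header present) or a headerless flat .COM image. */
    int main(int argc, char **argv)
    {
        FILE *f;
        unsigned char magic[2];

        if (argc < 2) {
            fprintf(stderr, "usage: %s file\n", argv[0]);
            return 1;
        }
        f = fopen(argv[1], "rb");
        if (f == NULL || fread(magic, 1, 2, f) != 2) {
            fprintf(stderr, "cannot read %s\n", argv[1]);
            return 1;
        }
        if (magic[0] == 'M' && magic[1] == 'Z')
            printf("%s: MZ header found -- relocatable .EXE image\n", argv[1]);
        else
            printf("%s: no MZ header -- treated as a flat .COM image\n", argv[1]);
        fclose(f);
        return 0;
    }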
Most of the machines in the early days of MS - DOS had differing system architectures and there was a certain degree of incompatibility, and subsequently vendor lock - in. Users who began using MS - DOS with their machines were compelled to continue using the version customized for their hardware, or face trying to get all of their proprietary hardware and software to work with the new system.
In the business world the 808x - based machines that MS - DOS was tied to faced competition from the Unix operating system which ran on many different hardware architectures. Microsoft itself sold a version of Unix for the PC called Xenix.
In the emerging world of home users, a variety of other computers based on various other processors were in serious competition with the IBM PC: the Apple II, early Apple Macintosh, the Commodore 64 and others did not use the 808x processor; many 808x machines of different architectures used custom versions of MS - DOS. At first all these machines were in competition. In time the IBM PC hardware configuration became dominant in the 808x market as software written to communicate directly with the PC hardware without using standard operating system calls ran much faster, but on true PC - compatibles only. Non-PC - compatible 808x machines were too small a market to have fast software written for them alone, and the market remained open only for IBM PCs and machines that closely imitated their architecture, all running either a single version of MS - DOS compatible only with PCs, or the equivalent IBM PC DOS. Most clones cost much less than IBM - branded machines of similar performance, and became widely used by home users, while IBM PCs had a large share of the business computer market.
Microsoft and IBM together began what was intended as the follow - on to MS - DOS / PC DOS, called OS / 2. When OS / 2 was released in 1987, Microsoft began an advertising campaign announcing that "DOS is Dead '' and stating that version 4 was the last full release. OS / 2 was designed for efficient multi-tasking (a standard capability of operating systems since 1963) and offered a number of advanced features designed together with a consistent look and feel; it was seen as the legitimate heir to the "kludgy '' DOS platform.
MS - DOS had grown in spurts, with many significant features being taken or duplicated from Microsoft 's other products and operating systems. MS - DOS also grew by incorporating, by direct licensing or feature duplicating, the functionality of tools and utilities developed by independent companies, such as Norton Utilities, PC Tools (Microsoft Anti-Virus), QEMM expanded memory manager, Stacker disk compression, and others.
During the period when Digital Research was competing in the operating system market some computers, like Amstrad PC1512, were sold with floppy disks for two operating systems (only one of which could be used at a time), MS - DOS and CP / M - 86 or a derivative of it. Digital Research produced DOS Plus, which was compatible with MS - DOS 2.11, supported CP / M - 86 programs, had additional features including multi-tasking, and could read and write disks in CP / M and MS - DOS format.
While OS / 2 was under protracted development, Digital Research released the MS - DOS compatible DR DOS 5.0, which included features only available as third - party add - ons for MS - DOS. Unwilling to lose any portion of the market, Microsoft responded by announcing the "pending '' release of MS - DOS 5.0 in May 1990. This effectively killed most DR DOS sales until the actual release of MS - DOS 5.0 in June 1991. Digital Research brought out DR DOS 6.0, which sold well until the "pre-announcement '' of MS - DOS 6.0 again stifled the sales of DR DOS.
Microsoft had been accused of carefully orchestrating leaks about future versions of MS - DOS in an attempt to create what in the industry is called FUD (fear, uncertainty, and doubt) regarding DR DOS. For example, in October 1990, shortly after the release of DR DOS 5.0, and long before the eventual June 1991 release of MS - DOS 5.0, stories on feature enhancements in MS - DOS started to appear in InfoWorld and PC Week. Brad Silverberg, then Vice President of Systems Software at Microsoft and general manager of its Windows and MS - DOS Business Unit, wrote a forceful letter to PC Week (5 November 1990), denying that Microsoft was engaged in FUD tactics ("to serve our customers better, we decided to be more forthcoming about version 5.0 '') and denying that Microsoft copied features from DR DOS:
"The feature enhancements of MS - DOS version 5.0 were decided and development was begun long before we heard about DR DOS 5.0. There will be some similar features. With 50 million MS - DOS users, it should n't be surprising that DRI has heard some of the same requests from customers that we have. '' -- (Schulman et al. 1994).
The pact between Microsoft and IBM to promote OS / 2 began to fall apart in 1990 when Windows 3.0 became a marketplace success. Much of Microsoft 's further contributions to OS / 2 also went into creating a third GUI replacement for DOS, Windows NT.
IBM, which had already been developing the next version of OS / 2, carried on development of the platform without Microsoft and sold it as the alternative to DOS and Windows.
As a response to Digital Research 's DR DOS 6.0, which bundled SuperStor disk compression, Microsoft opened negotiations with Stac Electronics, vendor of the most popular DOS disk compression tool, Stacker. In the due diligence process, Stac engineers had shown Microsoft part of the Stacker source code. Stac was unwilling to meet Microsoft 's terms for licensing Stacker and withdrew from the negotiations. Microsoft chose to license Vertisoft 's DoubleDisk, using it as the core for its DoubleSpace disk compression.
MS - DOS 6.0 and 6.20 were released in 1993, both including the Microsoft DoubleSpace disk compression utility program. Stac successfully sued Microsoft for patent infringement regarding the compression algorithm used in DoubleSpace. This resulted in the 1994 release of MS - DOS 6.21, which had disk - compression removed. Shortly afterwards came version 6.22, with a new version of the disk compression system, DriveSpace, which had a different compression algorithm to avoid the infringing code.
Prior to 1995, Microsoft licensed MS - DOS (and Windows) to computer manufacturers under three types of agreement: per - processor (a fee for each system the company sold), per - system (a fee for each system of a particular model), or per - copy (a fee for each copy of MS - DOS installed). The largest manufacturers used the per - processor arrangement, which had the lowest fee. This arrangement made it expensive for the large manufacturers to migrate to any other operating system, such as DR DOS. In 1991, the U.S. government Federal Trade Commission began investigating Microsoft 's licensing procedures, resulting in a 1994 settlement agreement limiting Microsoft to per - copy licensing. Digital Research did not gain by this settlement, and years later its successor in interest, Caldera, sued Microsoft for damages in the Caldera v. Microsoft lawsuit. It was believed that the settlement ran in the order of $150 million, but was revealed in November 2009 with the release of the Settlement Agreement to be $280 million.
Microsoft also used a variety of tactics in MS - DOS and several of their applications and development tools that, while operating perfectly when running on genuine MS - DOS (and PC DOS), would break when run on another vendor 's implementation of DOS. The best - known example of this practice was the AARD code, an encrypted routine in some Windows 3.1 beta builds that displayed a cryptic, non-fatal error message when Windows was started on DR DOS.
With the release of Windows 95 (and continuing in the Windows 9x product line through to Windows ME), an integrated version of MS - DOS was used for bootstrapping, troubleshooting, and backwards - compatibility with old DOS software, particularly games, and no longer released as a standalone product. In Windows 95, the DOS, called MS - DOS 7, can be booted separately, without the Windows GUI; this capability was retained through Windows 98 Second Edition. Windows ME removed the capability to boot its underlying MS - DOS 8.0 alone from a hard disk, but retained the ability to make a DOS boot floppy disk (called an "Emergency Boot Disk '') and can be hacked to restore full access to the underlying DOS.
In contrast to the Windows 9x series, the Windows NT - derived 32 - bit operating systems developed alongside the 9x series (Windows NT, 2000, XP and newer) do not contain MS - DOS as part of the operating system, but provide a subset of DOS emulation to run DOS applications and provide DOS - like command prompt windows. 64 - bit versions of Windows NT line do not provide DOS emulation and can not run DOS applications natively. Windows XP contains a copy of the Windows ME boot disk, stripped down to bootstrap only. This is accessible only by formatting a floppy as an "MS - DOS startup disk ''. Files like the driver for the CD - ROM support were deleted from the Windows ME bootdisk and the startup files (AUTOEXEC. BAT and CONFIG. SYS) no longer had content. This modified disk was the base for creating the MS - DOS image for Windows XP. Some of the deleted files can be recovered with an undelete tool. With Windows Vista the files on the startup disk are dated 18 April 2005 but are otherwise unchanged, including the string "MS - DOS Version 8 Copyright 1981 -- 1999 Microsoft Corp '' inside COMMAND.COM. Starting with Windows 10, the ability to create a DOS startup disk has been removed.
The only versions of MS - DOS currently recognized as stand - alone OSs and supported as such by Microsoft are MS - DOS 6.0 and 6.22, both of which remain available for download via their MSDN, volume license, and OEM license partner websites, for customers with valid login credentials. MS - DOS is still used in embedded x86 systems due to its simple architecture and minimal memory and processor requirements, though some current products have switched to the still - maintained open - source alternative FreeDOS.
All versions of Microsoft Windows have had an MS - DOS - like command - line interface (CLI) called Command Prompt. This could run many DOS and variously Win32, OS / 2 1. x and POSIX command line utilities in the same command - line session, allowing piping between commands. The user interface, and the icon up to Windows 2000, followed the native MS - DOS interface.
The 16 - bit versions of Windows (up to 3.11) ran as a Graphical User Interface (GUI) on top of MS - DOS. With Windows 95, 98, 98 SE and ME, the MS - DOS part was (superficially) integrated, treating both operating systems as a complete package, though the DOS component could actually stand alone. The command line accessed the DOS command line (usually COMMAND.COM) through a Windows module (WINOLDAP.MOD).
A newer line of Windows (Windows NT) boots through a kernel whose sole purpose is to load Windows; one can not run Win32 applications in the loader environment in the manner that OS / 2, UNIX or consumer Windows can launch character mode sessions.
The command session permits running of various supported command line utilities from Win32, MS - DOS, OS / 2 1. x and POSIX. The emulators for MS - DOS, OS / 2 and POSIX use the host 's window in the same way that Win16 applications use the Win32 explorer. Using the host 's window allows one to pipe output between emulations.
The MS - DOS emulation is done through the NTVDM (NT Virtual DOS Machine). This is a modified SoftPC (a former product similar to VirtualPC), running a modified MS - DOS 5 (NTIO. SYS and NTDOS. SYS). The output is handled by the console DLLs, so that the program at the prompt (CMD. EXE, 4NT. EXE, TCC. EXE) can see the output. 64 - bit Windows has neither the DOS emulation nor the DOS commands (EDIT, DEBUG, EDLIN) that come with 32 - bit Windows.
The reported DOS version is 5.00 or 5.50, depending on which API function is used to determine it. Utilities from MS - DOS 5.00 run in this emulation without modification. The very early beta programs of NT reported MS - DOS 30.00, but programs running under MS - DOS 30.00 would assume that OS / 2 was in control.
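To illustrate the two calls involved (a hedged sketch in the classic Turbo C / Open Watcom dialect, using int86() and union REGS from dos.h; this is not code from NTVDM or MS - DOS itself): INT 21h function 30h reports the "advertised '' version, while INT 21h function 3306h reports the "true '' version, which is where the 5.00 / 5.50 difference under NTVDM comes from. A small 16 - bit program could query both like this:

    #include <stdio.h>
    #include <dos.h>

    int main(void)
    {
        union REGS r;

        /* INT 21h, AH=30h: "Get DOS version" -- NTVDM answers 5.00. */
        r.h.ah = 0x30;
        int86(0x21, &r, &r);
        printf("Reported version: %d.%02d\n", r.h.al, r.h.ah);

        /* INT 21h, AX=3306h: "Get true DOS version" -- NTVDM answers 5.50. */
        r.x.ax = 0x3306;
        int86(0x21, &r, &r);
        printf("True version:     %d.%02d\n", r.h.bl, r.h.bh);

        return 0;
    }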
The OS / 2 emulation is handled through OS2SS. EXE and OS2. EXE, and DOSCALLS. DLL. OS2. EXE is a version of the OS / 2 shell (CMD. EXE), which passes commands down to the OS2SS. EXE, and input - output to the Windows NT shell. Windows 2000 was the last version of NT to support OS / 2. The emulation is OS / 2 1.30.
POSIX is emulated through the POSIX subsystem, but there is no separate emulated shell; the commands are handled directly in CMD. EXE.
The Command Prompt is often called the MS - DOS prompt. In part this is because it was the official name for it in Windows 9x and early versions of Windows NT (NT 3.5 and earlier), and in part because the SoftPC emulation of DOS redirects output into it. In fact, only COMMAND.COM and other 16 - bit commands run in an NTVDM, with AUTOEXEC. NT and CONFIG. NT initialisation determined by _default. pif; a NTCMDPROMPT directive optionally permits the use of Win32 console applications and internal commands.
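For illustration, CONFIG.NT (found under %SystemRoot%\system32) holds DOS - style directives that NTVDM reads at start - up. The lines below are only an indicative sketch of typical defaults plus the NTCMDPROMPT directive mentioned above, not a verbatim copy of any particular Windows release:

    dos=high, umb
    device=%SystemRoot%\system32\himem.sys
    files=40
    REM ntcmdprompt runs CMD.EXE rather than COMMAND.COM after a TSR loads
    REM or while a DOS application shells out to the command prompt.
    ntcmdprompt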
Win32 console applications use CMD. EXE as their command prompt shell. This confusion does not exist under OS / 2 because there are separate DOS and OS / 2 prompts, and running a DOS program under OS / 2 will launch a separate DOS window to run the application.
All versions of Windows for Itanium (no longer sold by Microsoft) and x86 - 64 architectures no longer include the NTVDM and can therefore no longer natively run MS - DOS or 16 - bit Windows applications. There are alternatives in the form of virtual machine emulators such as Microsoft 's own Virtual PC, as well as VMware, DOSBox, and others.
From 1983 onwards, various companies worked on graphical user interfaces (GUIs) capable of running on PC hardware. However, this required duplicated effort and did not provide much consistency in interface design (even between products from the same company).
Later, in 1985, Microsoft Windows was released as Microsoft 's first attempt at providing a consistent user interface (for applications). The early versions of Windows ran on top of MS - DOS. At first Windows met with little success, but this was true of most other companies ' efforts as well, for example GEM. After version 3.0, Windows gained market acceptance.
Windows 9x used the DOS boot process to launch into protected mode. Basic features related to the file system, such as long file names, were only available to DOS when running as a subsystem of Windows. Windows NT runs independently of DOS but includes NTVDM, a component for simulating a DOS environment for legacy applications.
MS - DOS compatible systems include:
Microsoft manufactured IBM PC DOS for IBM. It and MS - DOS were identical products that eventually diverged starting with PC DOS version 6.1.
Digital Research 's DR - DOS is sometimes regarded as a clone of MS - DOS, but it did not follow Microsoft 's version numbering scheme. For example, MS - DOS 4, released in July 1988, was followed by DR DOS 5.0 in May 1990. MS - DOS 5.0 came in April 1991, with DR DOS 6.0 being released the following June.
These products are collectively referred to as "DOS, '' even though "Disk Operating System '' is a generic term used on other systems unrelated to the x86 and IBM PC. "MS - DOS '' can also be a generic reference to DOS on IBM PC compatible computers.
What made the difference in the end was Microsoft 's control of the Windows platform and its programming practices, which intentionally made Windows appear to run poorly on competing versions of DOS. Digital Research had to release interim versions to circumvent artificially inserted Windows limitations, designed specifically to provide Microsoft with an unfair competitive advantage.
|
when does it start getting cold in new york | Climate of New York - wikipedia
The climate of New York state is generally humid continental, while the extreme southeastern portion of the state (New York City area) lies in the warm Humid Subtropical climate zone. Winter temperatures average below freezing during January and February in much of New York state, but several degrees above freezing along the Atlantic coastline, including New York City.
Seasonally, summer - like conditions prevail from June to early September statewide, while areas in far southern New York and New York City have summer conditions from late May through late September. Cold - air damming east of the Appalachians leads to protracted periods of cloud cover and precipitation east of the range, primarily between October and April. Winter - like conditions prevail from November through April in northern New York, and from December through March in southern New York. On average, western New York is much cloudier than points south and east in New York, with much of the cloud cover generated by the Great Lakes. Greenhouse gas emission is low on a per capita basis when compared to most other states due to the extensive use of mass transit, particularly across New York City. The significant urbanization within New York City has led to an urban heat island, which causes temperatures to be warmer overnight in all seasons.
Annual precipitation is fairly even throughout the year across New York state. The Great Lakes region of New York sees the highest annual rain and snow amounts in New York state, and heavy lake - effect snow is common in both western and central New York in winter. In the hotter months, large, long - lived complexes of thunderstorms can invade the state from Canada and the Great Lakes, while tropical cyclones can bring rains and winds from the southwest during the summer and fall. Hurricane impacts on the state occur once every 18 -- 19 years, with major hurricane impacts every 70 -- 74 years. An average of ten tornadoes touch down in New York annually.
The annual average temperature across the state ranges from around 39 ° F (4 ° C) over the Adirondack Mountains to near 53 ° F (12 ° C) across Long Island. Weather in New York is heavily influenced by two air masses: a warm, humid one from the southwest and a cold, dry one from the northwest. A cool, humid northeast airflow from the North Atlantic is much less common, and results in a persistent cloud deck with associated precipitation that lingers across the region for prolonged periods of time. Temperature differences between the warmer coast and far northern inland sections can exceed 36 degrees Fahrenheit (20 degrees Celsius), with rain near the coast and frozen precipitation, such as sleet and freezing rain, falling inland. Two - thirds of such events occur between November and April.
Unlike the vast majority of the state, New York City features a humid subtropical climate (Köppen Cfa). New York City is an urban heat island, with temperatures 5 -- 7 degrees Fahrenheit (3 -- 4 degrees Celsius) warmer overnight than surrounding areas. In an effort to counter this warming, roofs of buildings across the city are being painted white to increase the reflection of solar energy, or albedo.
In both Huntington and Ticonderoga the average July high temperature is about 27 ° C, even though the latter lies much further north; in Huntington, on Long Island, summer air is cooled by the surrounding ocean. The summer climate is cooler in the Adirondacks, Catskills and higher elevations of the Southern Plateau due to the elevated terrain. The New York City area and lower portions of the Hudson Valley, in contrast, have much more sultry and tropical summers, with frequent bouts of hot temperatures and high dew points. The remainder of New York State enjoys modestly warm summers. Summer temperatures usually range from the upper 70s to mid 80s ° F (25 to 30 ° C) over much of the state. The record high for New York state is 108 ° F (42 ° C), set at Troy on July 22, 1926.
Winters are often cold and snowy north of the New York City and Long Island areas of New York. In the majority of winter seasons, a temperature of − 13 ° F (− 25 ° C) or lower can be expected in the northern highlands (Northern Plateau) and 0 ° F (− 18 ° C) or colder in the southwestern and east - central highlands (Southern Plateau). The Adirondack region records from 35 to 45 days with below zero temperatures in normal to severe winters. Winters are also long and cold in both Western and Central New York, though not as cold as the Adirondack region due to the warming influence of the Great Lakes. The New York City metro area, in comparison to the rest of the state, is milder in the winter (especially in the city itself, which averages several degrees above freezing). The record low for New York state is − 52 ° F (− 47 ° C), set at Stillwater Reservoir on February 9, 1934 and at Old Forge on February 18, 1979. In February 2015, Rochester experienced its coldest month ever, with an average temperature of 12.2 ° F (− 11 ° C); later that year brought a near - record warm November and a record - breaking December. The latter month was about 12 degrees F warmer than average, and several degrees over the previous record.
Southeastern sections of the state near New York City have an average annual cloud cover of 59 - 62 %, while areas of western New York around Buffalo average 71 -- 75 % cloud cover annually.
Average precipitation across the region show maxima within the mountains of the Appalachians. Between 28 inches (710 mm) and 62 inches (1,600 mm) of precipitation falls annually across the Northeastern United States, and New York 's averages are similar, with maxima of over 60 inches (1,500 mm) falling across southwestern Lewis County, northern Oneida County, central and southern Hamilton County, as well as northwestern Ulster County. The lowest amounts occur near the northern borders with Vermont and Ontario, as well as much of southwestern sections of the state. Temporally, a maximum in precipitation is seen around three peak times: 3 a.m., 10 a.m., and 6 p.m. During the summer, the 6 p.m. peak is most pronounced.
Coastal extratropical cyclones, known as nor'easters, bring a bulk of the wintry precipitation to the region during the cold season as they track parallel to the coastline, forming along the natural temperature gradient of the Gulf Stream before moving up the coastline. The Appalachian Mountains largely shield New York City from picking up any lake - effect snow, which develops in the wake of extratropical cyclones downwind of the Great Lakes. The Finger Lakes of New York are long enough for lake - effect precipitation. Lake - effect snow from the Finger Lakes (like elsewhere) occurs in upstate New York until those lakes freeze over. Annual average lake - effect snows exceed 150 inches (380 cm) downwind of Lake Erie and 200 inches (510 cm) downwind of Lake Ontario.
During the summer and early fall, mesoscale convective systems can move into the area from Canada and the Great Lakes. Tropical cyclones and their remnants occasionally move into the region from the south and southwest. The region has experienced a couple of heavy rainfall events that exceeded the 50 - year return period, during October 1996 and October 1998, which suggest an increase in heavy rainfall along the coast.
In terms of emissions, New York ranks 46th among the 50 states in the amount of greenhouse gases generated per person. This efficiency is primarily due to the state 's higher rate of mass transit use in and around New York City.
However, New York City (particularly Manhattan) has extremely high rates of air pollution, with high particle pollution and high cancer rates, which can be explained by extreme population density, despite low per capita emissions rates.
New York experiences an average of ten tornadoes per year, with one tornado every five years considered strong or violent (EF2 - EF5). The return period for hurricane impacts on the state is 18 -- 19 years, with major hurricane return periods between 70 -- 74 years. In 2016, much of New York experienced a severe drought, including the Finger Lakes region, where the drought was preceded by a very mild winter with minimal snow pack.
|
what are the two clauses of the 4th amendment | Fourth Amendment to the United States Constitution - wikipedia
The Fourth Amendment (Amendment IV) to the United States Constitution is part of the Bill of Rights that prohibits unreasonable searches and seizures. It requires "reasonable '' governmental searches and seizures to be conducted only upon issuance of a warrant, judicially sanctioned by probable cause, supported by oath or affirmation, particularly describing the place to be searched and the persons or things to be seized. Under the Fourth Amendment, search and seizure (including arrest) should be limited in scope according to specific information supplied to the issuing court, usually by a law enforcement officer who has sworn by it. Fourth Amendment case law deals with three issues: what government activities constitute "search '' and "seizure ''; what constitutes probable cause for these actions; and how violations of Fourth Amendment rights should be addressed. Early court decisions limited the amendment 's scope to a law enforcement officer 's physical intrusion onto private property, but with Katz v. United States (1967), the Supreme Court held that its protections, such as the warrant requirement, extend to the privacy of individuals as well as physical locations. Law enforcement officers need a warrant for most search and seizure activities, but the Court has defined a series of exceptions for consent searches, motor vehicle searches, evidence in plain view, exigent circumstances, border searches, and other situations.
The exclusionary rule is one way the amendment is enforced. Established in Weeks v. United States (1914), this rule holds that evidence obtained through a Fourth Amendment violation is generally inadmissible at criminal trials. Evidence discovered as a later result of an illegal search may also be inadmissible as "fruit of the poisonous tree '', unless it inevitably would have been discovered by legal means.
The Fourth Amendment was adopted in response to the abuse of the writ of assistance, a type of general search warrant issued by the British government, and a major source of tension in pre-Revolutionary America. The Fourth Amendment was introduced in Congress in 1789 by James Madison, along with the other amendments in the Bill of Rights, in response to Anti-Federalist objections to the new Constitution. Congress submitted the amendment to the states on September 28, 1789. By December 15, 1791, the necessary three - fourths of the states had ratified it. On March 1, 1792, Secretary of State Thomas Jefferson announced the adoption of the amendment.
Because the Bill of Rights did not initially apply to the states, and federal criminal investigations were less common in the first century of the nation 's history, there is little significant case law for the Fourth Amendment before the 20th century. The Amendment was held to apply to the states in Mapp v. Ohio (1961).
The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.
Like many other areas of American law, the Fourth Amendment finds its roots in English legal doctrine. Sir Edward Coke, in Semayne 's case (1604), famously stated: "The house of every one is to him as his castle and fortress, as well for his defence against injury and violence as for his repose. '' Semayne 's Case acknowledged that the King did not have unbridled authority to intrude on his subjects ' dwellings but recognized that government agents were permitted to conduct searches and seizures under certain conditions when their purpose was lawful and a warrant had been obtained.
The 1760s saw a growth in the intensity of litigation against state officers, who, using general warrants, conducted raids in search of materials relating to John Wilkes 's publications attacking both government policies and the King himself. The most famous of these cases involved John Entick, whose home was forcibly entered by the King 's Messenger Nathan Carrington, along with others, pursuant to a warrant issued by George Montagu - Dunk, 2nd Earl of Halifax authorizing them "to make strict and diligent search for... the author, or one concerned in the writing of several weekly very seditious papers entitled, ' The Monitor or British Freeholder, No 257, 357, 358, 360, 373, 376, 378, and 380, ' '' and seized printed charts, pamphlets and other materials. Entick filed suit in Entick v Carrington, argued before the Court of King 's Bench in 1765. Charles Pratt, 1st Earl Camden ruled that both the search and the seizure were unlawful, as the warrant authorized the seizure of all of Entick 's papers -- not just the criminal ones -- and as the warrant lacked probable cause to even justify the search. By holding that "(O) ur law holds the property of every man so sacred, that no man can set his foot upon his neighbour 's close without his leave '', Entick established the English precedent that the executive is limited in intruding on private property by common law.
Homes in Colonial America, on the other hand, did not enjoy the same sanctity as their British counterparts, because legislation had been explicitly written so as to enable enforcement of British revenue - gathering policies on customs; until 1750, in fact, the only type of warrant defined in the handbooks for justices of the peace was the general warrant. During what scholar William Cuddihy called the "colonial epidemic of general searches '', the authorities possessed almost unlimited power to search for anything at any time, with very little oversight.
In 1756, the colony of Massachusetts enacted legislation that barred the use of general warrants. This represented the first law in American history curtailing the use of seizure power. Its creation largely stemmed from the great public outcry over the Excise Act of 1754, which gave tax collectors unlimited powers to interrogate colonists concerning their use of goods subject to customs. The act also permitted the use of a general warrant known as a writ of assistance, allowing tax collectors to search the homes of colonists and seize "prohibited and uncustomed '' goods. A crisis erupted over the writs of assistance on December 27, 1760, when the news of King George II 's death on October 23 arrived in Boston. All writs automatically expired six months after the death of the King, and would have had to be re-issued by George III, the new king, to remain valid.
In mid-January 1761, a group of over 50 merchants represented by James Otis petitioned the court to have hearings on the issue. During the five - hour hearing on February 23, 1761, Otis vehemently denounced British colonial policies, including their sanction of general warrants and writs of assistance. However, the court ruled against Otis. Future US President John Adams, who was present in the courtroom when Otis spoke, viewed these events as "the spark in which originated the American Revolution. ''
Because of the name he had made for himself in attacking the writs, Otis was elected to the Massachusetts colonial legislature and helped pass legislation requiring that special writs of assistance be "granted by any judge or justice of the peace upon information under oath by any officer of the customs '' and barring all other writs. The governor overturned the legislation, finding it contrary to English law and parliamentary sovereignty.
Seeing the danger general warrants presented, the Virginia Declaration of Rights (1776) explicitly forbade the use of general warrants. This prohibition became a precedent for the Fourth Amendment:
That general warrants, whereby any officer or messenger may be commanded to search suspected places without evidence of a fact committed, or to seize any person or persons not named, or whose offense is not particularly described and supported by evidence, are grievous and oppressive and ought not to be granted.
Article XIV of the Massachusetts Declaration of Rights, written by John Adams and enacted in 1780 as part of the Massachusetts Constitution, added the requirement that all searches must be "reasonable, '' and served as another basis for the language of the Fourth Amendment:
Every subject has a right to be secure from all unreasonable searches, and seizures of his person, his houses, his papers, and all his possessions. All warrants, therefore, are contrary to this right, if the cause or foundation of them be not previously supported by oath or affirmation; and if the order in the warrant to a civil officer, to make search in suspected places, or to arrest one or more suspected persons, or to seize their property, be not accompanied with a special designation of the persons or objects of search, arrest, or seizure: and no warrant ought to be issued but in cases, and with the formalities, prescribed by the laws.
By 1784, eight state constitutions contained a provision against general warrants.
After several years of comparatively weak government under the Articles of Confederation, a Constitutional Convention in Philadelphia proposed a new constitution on September 17, 1787, featuring a stronger chief executive and other changes. George Mason, a Constitutional Convention delegate and the drafter of Virginia 's Declaration of Rights, proposed that a bill of rights listing and guaranteeing civil liberties be included. Other delegates -- including future Bill of Rights drafter James Madison -- disagreed, arguing that existing state guarantees of civil liberties were sufficient and that any attempt to enumerate individual rights risked the implication that other, unnamed rights were unprotected. After a brief debate, Mason 's proposal was defeated by a unanimous vote of the state delegations.
For the constitution to be ratified, however, nine of the thirteen states were required to approve it in state conventions. Opposition to ratification ("Anti-Federalism '') was partly based on the Constitution 's lack of adequate guarantees for civil liberties. Supporters of the Constitution in states where popular sentiment was against ratification (including Virginia, Massachusetts, and New York) successfully proposed that their state conventions both ratify the Constitution and call for the addition of a bill of rights. Four state conventions proposed some form of restriction on the authority of the new federal government to conduct searches.
In the 1st United States Congress, following the state legislatures ' request, James Madison proposed twenty constitutional amendments based on state bills of rights and English sources such as the Bill of Rights 1689, including an amendment requiring probable cause for government searches. Congress reduced Madison 's proposed twenty amendments to twelve, with modifications to Madison 's language about searches and seizures. The final language was submitted to the states for ratification on September 25, 1789.
By the time the Bill of Rights was submitted to the states for ratification, opinions had shifted in both parties. Many Federalists, who had previously opposed a Bill of Rights, now supported the Bill as a means of silencing the Anti-Federalists ' most effective criticism. Many Anti-Federalists, in contrast, now opposed it, realizing that the Bill 's adoption would greatly lessen the chances of a second constitutional convention, which they desired. Anti-Federalists such as Richard Henry Lee also argued that the Bill left the most objectionable portions of the Constitution, such as the federal judiciary and direct taxation, intact.
On November 20, 1789, New Jersey ratified eleven of the twelve amendments, including the Fourth. On December 19, 1789, December 22, 1789, and January 19, 1790, respectively, Maryland, North Carolina, and South Carolina ratified all twelve amendments. On January 25 and 28, 1790, respectively, New Hampshire and Delaware ratified eleven of the Bill 's twelve amendments, including the Fourth. This brought the total of ratifying states to six of the required ten, but the process stalled in other states: Connecticut and Georgia found a Bill of Rights unnecessary and so refused to ratify, while Massachusetts ratified most of the amendments, but failed to send official notice to the Secretary of State that it had done so. (All three states would later ratify the Bill of Rights for sesquicentennial celebrations in 1939.)
In February through June 1790, New York, Pennsylvania, and Rhode Island each ratified eleven of the amendments, including the Fourth. Virginia initially postponed its debate, but after Vermont was admitted to the Union in 1791, the total number of states needed for ratification rose to eleven. Vermont ratified on November 3, 1791, approving all twelve amendments, and Virginia finally followed on December 15, 1791. Secretary of State Thomas Jefferson announced the adoption of the ten successfully ratified amendments on March 1, 1792.
The Bill of Rights originally only restricted the federal government, and went through a long initial phase of "judicial dormancy ''; in the words of historian Gordon S. Wood, "After ratification, most Americans promptly forgot about the first ten amendments to the Constitution. '' Federal jurisdiction regarding criminal law was narrow until the late 19th century when the Interstate Commerce Act and Sherman Antitrust Act were passed. As federal criminal jurisdiction expanded to include other areas such as narcotics, more questions about the Fourth Amendment came to the Supreme Court. The U.S. Supreme Court responded to these questions by outlining the fundamental purpose of the amendment as guaranteeing "the privacy, dignity and security of persons against certain arbitrary and invasive acts by officers of the Government, without regard to whether the government actor is investigating crime or performing another function ''. In Mapp v. Ohio (1961), the U.S. Supreme Court ruled that the Fourth Amendment applies to the states by way of the Due Process Clause of the Fourteenth Amendment.
Fourth Amendment case law deals with three central issues: what government activities constitute "search '' and "seizure ''; what constitutes probable cause for these actions; and how violations of Fourth Amendment rights should be addressed.
The Fourth Amendment typically requires "a neutral and detached authority interposed between the police and the public '', and it is offended by "general warrants '' and laws that allow searches to be conducted "indiscriminately and without regard to their connection with (a) crime under investigation '', for the "basic purpose of the Fourth Amendment, which is enforceable against the States through the Fourteenth, through its prohibition of ' unreasonable ' searches and seizures is to safeguard the privacy and security of individuals against arbitrary invasions by governmental officials. ''
The Fourth Amendment has been held to mean that a search or an arrest generally requires a judicially sanctioned warrant, because the basic rule under the Fourth Amendment is that arrests and "searches conducted outside the judicial process, without prior approval by judge or magistrate, are per se unreasonable ''. In order for such a warrant to be considered reasonable, it must be supported by probable cause and be limited in scope according to specific information supplied by a person (usually a law enforcement officer) who has sworn by it and is therefore accountable to the issuing court. The Supreme Court further held in Chandler v. Miller (1997): "To be reasonable under the Fourth Amendment, a search ordinarily must be based on individualized suspicion of wrongdoing. But particularized exceptions to the main rule are sometimes warranted based on ' special needs, beyond the normal need for law enforcement '... When such ' special needs ' are alleged, courts must undertake a context - specific inquiry, examining closely the competing private and public interests advanced by the parties. '' The amendment applies to governmental searches and seizures, but not those done by private citizens or organizations who are not acting on behalf of a government. In Ontario v. Quon (2010), the Court held the amendment to also apply to the government when acting as an employer, ruling that a government could search a police officer 's text messages that were sent over that government 's pager.
One threshold question in the Fourth Amendment jurisprudence is whether a "search '' has occurred. Initial Fourth Amendment case law hinged on a citizen 's property rights -- that is, when the government physically intrudes on "persons, houses, papers, or effects '' for the purpose of obtaining information, a "search '' within the original meaning of the Fourth Amendment has occurred. Early 20th - century Court decisions, such as Olmstead v. United States (1928), held that Fourth Amendment rights applied in cases of physical intrusion, but not to other forms of police surveillance (e.g., wiretaps). In Silverman v. United States (1961), the Court stated of the amendment that "at the very core stands the right of a man to retreat into his own home and there be free from unreasonable governmental intrusion ''.
Fourth Amendment protections expanded significantly with Katz v. United States (1967). In Katz, the Supreme Court expanded that focus to embrace an individual 's right to privacy, and ruled that a search had occurred when the government wiretapped a telephone booth using a microphone attached to the outside of the glass. While there was no physical intrusion into the booth, the Court reasoned that: 1) Katz, by entering the booth and shutting the door behind him, had exhibited his expectation that "the words he utters into the mouthpiece will not be broadcast to the world ''; and 2) society believes that his expectation was reasonable. Justice Potter Stewart wrote in the majority opinion that "the Fourth Amendment protects people, not places ''. A "search '' occurs for purposes of the Fourth Amendment when the government violates a person 's "reasonable expectation of privacy ''. Katz 's reasonable expectation of privacy thus provided the basis to rule that the government 's intrusion, though electronic rather than physical, was a search covered by the Fourth Amendment, and thus necessitated a warrant. The Court said that it was not recognizing any general right to privacy in the Fourth Amendment, and that this wiretap could have been authorized if proper procedures had been followed.
This decision in Katz was later developed into the now commonly used two - prong test, adopted in Smith v. Maryland (1979), for determining whether a search has occurred for purposes of the Fourth Amendment: first, the person must have exhibited an actual (subjective) expectation of privacy; and second, that expectation must be one that society is prepared to recognize as reasonable.
The Supreme Court has held that the Fourth Amendment does not apply to information that is voluntarily shared with third parties. In Smith, the Court held individuals have no "legitimate expectation of privacy '' regarding the telephone numbers they dial because they knowingly give that information to telephone companies when they dial a number.
Following Katz, the vast majority of Fourth Amendment search cases have turned on the right to privacy, but in United States v. Jones (2012), the Court ruled that the Katz standard did not replace earlier case law, but rather, has supplemented it. In Jones, law enforcement officers had attached a GPS device on a car 's exterior without Jones ' knowledge or consent. The Court concluded that Jones was a bailee to the car, and so had a property interest in the car. Therefore, since the intrusion on the vehicle -- a common law trespass -- was for the purpose of obtaining information, the Court ruled that it was a search under the Fourth Amendment. The Court used similar "trespass '' reasoning in Florida v. Jardines (2013), to rule that bringing a drug detection dog to sniff at the front door of a home was a search.
In certain situations, law enforcement may perform a search when they have a reasonable suspicion of criminal activity, even if it falls short of probable cause necessary for an arrest. Under Terry v. Ohio (1968), law enforcement officers are permitted to conduct a limited warrantless search on a level of suspicion less than probable cause under certain circumstances. In Terry, the Supreme Court ruled that when a police officer witnesses "unusual conduct '' that leads that officer to reasonably believe "that criminal activity may be afoot '', that the suspicious person has a weapon and that the person is presently dangerous to the officer or others, the officer may conduct a "pat - down search '' (or "frisk '') to determine whether the person is carrying a weapon. This detention and search is known as a Terry stop. To conduct a frisk, officers must be able to point to specific and articulable facts which, taken together with rational inferences from those facts, reasonably warrant their actions. As established in Florida v. Royer (1983), such a search must be temporary, and questioning must be limited to the purpose of the stop (e.g., officers who stop a person because they have reasonable suspicion to believe that the person was driving a stolen car, can not, after confirming that it is not stolen, compel the person to answer questions about anything else, such as the possession of contraband).
The Fourth Amendment proscribes unreasonable seizure of any person, person 's home (including its curtilage) or personal property without a warrant. A seizure of property occurs when there is "some meaningful interference with an individual 's possessory interests in that property '', such as when police officers take personal property away from an owner to use as evidence, or when they participate in an eviction. The amendment also protects against unreasonable seizure of persons, including a brief detention.
A seizure does not occur just because the government questions an individual in a public place. The exclusionary rule would not bar voluntary answers to such questions from being offered into evidence in a subsequent criminal prosecution. The person is not being seized if his freedom of movement is not restrained. The government may not detain an individual even momentarily without reasonable, objective grounds, with few exceptions. His refusal to listen or answer does not by itself furnish such grounds.
In United States v. Mendenhall (1980), the Court held that a person is seized only when, by means of physical force or show of authority, his freedom of movement is restrained and, in the circumstances surrounding the incident, a reasonable person would believe that he was not free to leave. In Florida v. Bostick (1991), the Court ruled that as long as the police do not convey a message that compliance with their requests is required, the police contact is a "citizen encounter '' that falls outside the protections of the Fourth Amendment. If a person remains free to disregard questioning by the government, there has been no seizure and therefore no intrusion upon the person 's privacy under the Fourth Amendment.
When a person is arrested and taken into police custody, he has been seized (i.e., a reasonable person who is handcuffed and placed in the back of a police car would not think he was free to leave). A person subjected to a routine traffic stop, on the other hand, has been seized but not "arrested", because traffic stops are relatively brief encounters more analogous to a Terry stop than to a formal arrest. If a person is not under suspicion of illegal behavior, a law enforcement official is not allowed to place that person under arrest simply because he does not wish to state his identity, unless specific state regulations provide otherwise. A search incidental to an arrest that is not permissible under state law does not violate the Fourth Amendment, so long as the arresting officer has probable cause. In Maryland v. King (2013), the Court upheld the constitutionality of police swabbing for DNA upon arrests for serious crimes, under the same reasoning that allows police to take fingerprints or photographs of those they arrest and detain.
The government may not detain an individual even momentarily without reasonable and articulable suspicion, with a few exceptions. In Delaware v. Prouse (1979), the Court ruled an officer has made an illegal seizure when he stops an automobile and detains the driver in order to check his driver 's license and the registration of the automobile, unless the officer has articulable and reasonable suspicion that a motorist is unlicensed or that an automobile is not registered, or either the vehicle or an occupant is otherwise subject to seizure for violation of law.
Where society 's need is great, no other effective means of meeting the need is available, and intrusion on people 's privacy is minimal, certain discretionless checkpoints toward that end may briefly detain motorists. In United States v. Martinez - Fuerte (1976), the Supreme Court allowed discretionless immigration checkpoints. In Michigan Dept. of State Police v. Sitz (1990), the Supreme Court allowed discretionless sobriety checkpoints. In Illinois v. Lidster (2004), the Supreme Court allowed focused informational checkpoints. However, in City of Indianapolis v. Edmond (2000), the Supreme Court ruled that discretionary checkpoints or general crime - fighting checkpoints are not allowed.
Under the Fourth Amendment, law enforcement must receive written permission from a court of law, or otherwise qualified magistrate, to lawfully search and seize evidence while investigating criminal activity. A court grants permission by issuing a writ known as a warrant. A search or seizure is generally unreasonable and unconstitutional if conducted without a valid warrant and the police must obtain a warrant whenever practicable. Searches and seizures without a warrant are not considered unreasonable if one of the specifically established and well - delineated exceptions to the warrant requirement applies. These exceptions apply "(o) nly in those exceptional circumstances in which special needs, beyond the normal need for law enforcement, make the warrant and probable cause requirement impracticable ''.
In these situations, where the warrant requirement does not apply, a search or seizure nonetheless must be justified by some individualized suspicion of wrongdoing. However, the U.S. Supreme Court has carved out an exception to the requirement of individualized suspicion. It ruled that, "In limited circumstances, where the privacy interests implicated by the search are minimal and where an important governmental interest furthered by the intrusion would be placed in jeopardy by a requirement of individualized suspicion", a search (or seizure) would still be reasonable.
The standards of probable cause differ for an arrest and a search. The government has probable cause to make an arrest when "the facts and circumstances within their knowledge and of which they had reasonably trustworthy information '' would lead a prudent person to believe that the arrested person had committed or was committing a crime. Probable cause to arrest must exist before the arrest is made. Evidence obtained after the arrest may not apply retroactively to justify the arrest.
When police conduct a search, the amendment requires that the warrant establish probable cause to believe that the search will uncover criminal activity or contraband. They must have legally sufficient reasons to believe a search is necessary. In Carroll v. United States (1925), the Supreme Court stated that probable cause to search is a flexible, common - sense standard. To that end, the Court ruled in Dumbra v. United States (1925) that the term probable cause means "less than evidence that would justify condemnation '', reiterating Carroll 's assertion that it merely requires that the facts available to the officer would "warrant a man of reasonable caution '' in the belief that specific items may be contraband or stolen property or useful as evidence of a crime. It does not demand any showing that such a belief be correct or more likely true than false. A "practical, non-technical '' probability that incriminating evidence is involved is all that is required. In Illinois v. Gates (1983), the Court ruled that the reliability of an informant is to be determined based on the "totality of the circumstances ''.
If a party gives consent to a search, a warrant is not required. There are exceptions and complications to the rule, including the scope of the consent given, whether the consent is voluntarily given, and whether an individual has the right to consent to a search of another 's property. In Schneckloth v. Bustamonte (1973), the Court ruled that a consent search is still valid even if the police do not inform a suspect of his right to refuse the search. This contrasts with Fifth Amendment rights, which can not be relinquished without an explicit Miranda warning from police.
The Court stated in United States v. Matlock (1974) that a third party co-occupant could give consent for a search without violating a suspect 's Fourth Amendment rights. However, in Georgia v. Randolph (2006), the Supreme Court ruled that when two co-occupants are both present, one consenting and the other rejecting the search of a shared residence, the police may not make a search of that residence within the consent exception to the warrant requirement. Per the Court 's ruling in Illinois v. Rodriguez (1990), a consent search is still considered valid if police accept in good faith the consent of an "apparent authority '', even if that party is later discovered to not have authority over the property in question. A telling case on this subject is Stoner v. California, in which the Court held that police officers could not rely in good faith upon the apparent authority of a hotel clerk to consent to the search of a guest 's room.
According to the plain view doctrine as defined in Coolidge v. New Hampshire (1971), if an officer is lawfully present, he may seize objects that are in "plain view". However, the officer must have had probable cause to believe that the objects are contraband. Moreover, the criminality of the object in plain view must be obvious by its very nature. In Arizona v. Hicks (1987), the Supreme Court held that an officer stepped beyond the plain view doctrine when he moved a turntable in order to view its serial number to confirm that the turntable was stolen. "A search is a search", proclaimed the Court, "even if it happens to disclose nothing but the bottom of a turntable."
Similarly, "open fields '' such as pastures, open water, and woods may be searched without a warrant, on the ground that conduct occurring therein would have no reasonable expectation of privacy. The doctrine was first articulated by the Court in Hester v. United States (1924), which stated that "the special protection accorded by the Fourth Amendment to the people in their ' persons, houses, papers, and effects, ' is not extended to the open fields. ''
In Oliver v. United States (1984), the police ignored a "no trespassing '' sign and a fence, trespassed onto the suspect 's land without a warrant, followed a path for hundreds of feet, and discovered a field of marijuana. The Supreme Court ruled that no search had taken place, because there was no privacy expectation regarding an open field:
open fields do not provide the setting for those intimate activities that the Amendment is intended to shelter from government interference or surveillance. There is no societal interest in protecting the privacy of those activities, such as the cultivation of crops, that occur in open fields.
While open fields are not protected by the Fourth Amendment, the curtilage, or outdoor area immediately surrounding the home, is protected. Courts have treated this area as an extension of the house and as such subject to all the privacy protections afforded a person 's home (unlike a person 's open fields) under the Fourth Amendment. The curtilage is "intimately linked to the home, both physically and psychologically, '' and is where "privacy expectations are most heightened. '' However, courts have held aerial surveillance of curtilage not to be included in the protections from unwarranted search so long as the airspace above the curtilage is generally accessible by the public. An area is curtilage if it "harbors the intimate activity associated with the sanctity of a man 's home and the privacies of life. '' Courts make this determination by examining "whether the area is included within an enclosure surrounding the home, the nature of the uses to which the area is put, and the steps taken by the resident to protect the area from observation by people passing by. '' The Court has acknowledged that a doorbell or knocker is typically treated as an invitation, or license, to the public to approach the front door of the home to deliver mail, sell goods, solicit for charities, etc. This license extends to the police, who have the right to try engaging a home 's occupant in a "knock and talk '' for the purpose of gathering evidence without a warrant. However, they can not bring a drug detection dog to sniff at the front door of a home without either a warrant or consent of the homeowner or resident.
Law enforcement officers may also conduct warrantless searches in several types of exigent circumstances where obtaining a warrant is dangerous or impractical. One example is the Terry stop, which allows police to frisk suspects for weapons. The Court also allowed a search of arrested persons in Weeks v. United States (1914) to preserve evidence that might otherwise be destroyed and to ensure suspects were disarmed. In Carroll v. United States (1925), the Court ruled that law enforcement officers could search a vehicle that they suspected of carrying contraband without a warrant. The Court allowed blood to be drawn without a warrant from drunk - driving suspects in Schmerber v. California (1966) on the grounds that the time to obtain a warrant would allow a suspect 's blood alcohol content to reduce, although this was later modified by Missouri v. McNeely (2013). Warden v. Hayden (1967) provided an exception to the warrant requirement if officers were in "hot pursuit '' of a suspect.
The Supreme Court has held that individuals in automobiles have a reduced expectation of privacy, because (1) vehicles generally do not serve as residences or repositories of personal effects, and (2) vehicles "can be quickly moved out of the locality or jurisdiction in which the warrant must be sought. '' Vehicles may not be randomly stopped and searched; there must be probable cause or reasonable suspicion of criminal activity. Items in plain view may be seized; areas that could potentially hide weapons may also be searched. With probable cause to believe evidence is present, police officers may search any area in the vehicle. However, they may not extend the search to the vehicle 's passengers without probable cause to search those passengers or consent from the passengers.
In Arizona v. Gant (2009), the Court ruled that a law enforcement officer needs a warrant before searching a motor vehicle after an arrest of an occupant of that vehicle, unless 1) at the time of the search the person being arrested is unsecured and within reaching distance of the passenger compartment of the vehicle or 2) police officers have reason to believe that evidence for the crime for which the person is being arrested will be found in the vehicle.
A common law rule from Great Britain permits searches incident to an arrest without a warrant. This rule has been applied in American law and has a lengthy common law history. The justification for such a search is twofold: to disarm the suspect so that a weapon cannot be used against the arresting officer, and to prevent the arrested individual from destroying evidence. The U.S. Supreme Court ruled that "both justifications for the search-incident-to-arrest exception are absent and the rule does not apply" when "there is no possibility" that the suspect could gain access to a weapon or destroy evidence. In Trupiano v. United States (1948), the Supreme Court held that "a search or seizure without a warrant as an incident to a lawful arrest has always been considered to be a strictly limited right. It grows out of the inherent necessities of the situation at the time of the arrest. But there must be something more in the way of necessity than merely a lawful arrest." In United States v. Rabinowitz (1950), the Court reversed Trupiano, holding instead that the officers' opportunity to obtain a warrant was not germane to the reasonableness of a search incident to an arrest. Rabinowitz suggested that any area within the "immediate control" of the arrestee could be searched, but it did not define the term. In deciding Chimel v. California (1969), the Supreme Court elucidated its previous decisions. It held that when an arrest is made, it is reasonable for the officer to search the arrestee for weapons and evidence. However, in Riley v. California (2014), the Supreme Court ruled unanimously that police must obtain a warrant to search an arrestee's cellular phone. The Court said that earlier Supreme Court decisions permitting searches incident to an arrest without a warrant do not apply to "modern cellphones, which are now such a pervasive and insistent part of daily life that the proverbial visitor from Mars might conclude they were an important feature of human anatomy," and noted that US citizens' cellphones today typically contain "a digital record of nearly every aspect of their lives -- from the mundane to the intimate."
Searches conducted at the United States border or the equivalent of the border (such as an international airport) may be conducted without a warrant or probable cause subject to the border search exception. Most border searches may be conducted entirely at random, without any level of suspicion, pursuant to U.S. Customs and Border Protection plenary search authority. However, searches that intrude upon a traveler 's personal dignity and privacy interests, such as strip and body cavity searches, must be supported by "reasonable suspicion. '' The U.S. Courts of Appeals for the Fourth and Ninth circuits have ruled that information on a traveler 's electronic materials, including personal files on a laptop computer, may be searched at random, without suspicion.
The Supreme Court decision in United States v. U.S. District Court (1972) left open the possibility for a foreign intelligence surveillance exception to the warrant clause. Three United States Courts of Appeals have recognized a foreign intelligence surveillance exception to the warrant clause, but tied it to certain requirements. The exception to the Fourth Amendment was formally recognized by the United States Foreign Intelligence Surveillance Court of Review in its 2008 In re Directives decision. The lower court held that, "a foreign intelligence exception to the Fourth Amendment 's warrant requirement exists when surveillance is conducted to obtain foreign intelligence for national security purposes and is directed against foreign powers or agents of foreign powers reasonably believed to be located outside the United States. '' Despite the foregoing citation the Fourth Amendment prohibitions against unreasonable searches and seizures nonetheless apply to the contents of all communications, whatever the means, because, "a person 's private communications are akin to personal papers. '' To protect the telecommunication carriers cooperating with the US government from legal action, the Congress passed a bill updating the Foreign Intelligence Surveillance Act of 1978 to permit this type of surveillance.
In New Jersey v. T.L.O. (1985), the Supreme Court ruled that searches in public schools do not require warrants, as long as the searching officers have reasonable grounds for believing that the search will result in the finding of evidence of illegal activity. However, in Safford Unified School District v. Redding (2009), the Court ruled that school officials violated the Fourth Amendment when they strip searched a 13-year-old girl based only on another student's claim to have received drugs from her. Similarly, in O'Connor v. Ortega (1987), the Court ruled that government offices may be searched for evidence of work-related misconduct by government employees on similar grounds. Searches of prison cells are subject to no restraints relating to reasonableness or probable cause.
One way courts enforce the Fourth Amendment is through the use of the exclusionary rule. The rule provides that evidence obtained through a violation of the Fourth Amendment is generally not admissible by the prosecution during the defendant 's criminal trial. The Court stated in Elkins v. United States (1960) that the rule 's function "is to deter -- to compel respect for the constitutional guaranty in the only effectively available way -- by removing the incentive to disregard it. ''
The Court adopted the exclusionary rule in Weeks v. United States (1914), prior to which all evidence, no matter how seized, could be admitted in court. In Silverthorne Lumber Co. v. United States (1920) and Nardone v. United States (1939), the Court ruled that leads or other evidence resulting from illegally obtained evidence are also inadmissible in trials. Justice Felix Frankfurter described this secondary evidence in the Nardone decision as the "fruit of the poisonous tree ''. The Supreme Court rejected incorporating the exclusionary rule by way of the Fourteenth Amendment in Wolf v. Colorado (1949), but Wolf was explicitly overruled in Mapp v. Ohio (1961), making the Fourth Amendment (including the exclusionary rule) applicable in state proceedings.
The exclusionary rule and its effectiveness have often been controversial, particularly since its 1961 application to state proceedings. Critics charge that the rule hampers police investigation and can result in freeing guilty parties who would otherwise be convicted on reliable evidence; other critics state that the rule has not been successful in deterring illegal police searches. Proponents argue that the number of criminal convictions overturned under the rule has been minimal and that no other effective mechanism exists to enforce the Fourth Amendment. In 1982, California passed a "Victim's Bill of Rights" containing a provision to repeal the exclusionary rule; though the bill could not affect federally mandated rights under the Fourth Amendment, it blocked the state courts from expanding these protections further.
Starting with United States v. Calandra (1974), the Supreme Court has repeatedly limited the exclusionary rule. The Court in Calandra ruled that grand juries may use illegally obtained evidence when questioning witnesses, because "the damage to that institution from the unprecedented extension of the exclusionary rule outweighs the benefit of any possible incremental deterrent effect. '' Explaining the purpose of the rule, the Court said that the rule "is a judicially created remedy designed to safeguard Fourth Amendment rights generally through its deterrent effect, rather than a personal constitutional right of the party aggrieved. ''
Several cases in 1984 further restricted the exclusionary rule. In United States v. Leon (1984), the Court established the "good faith" exception, under which evidence seized by officers reasonably relying on a warrant later found to be defective remains admissible. In Nix v. Williams (1984), the Court held that illegally obtained evidence is admissible if it would inevitably have been discovered by lawful means. In Segura v. United States (1984), the Court ruled that evidence found without a warrant is admissible if it is later obtained on the basis of information independent of the illegal search.
In Arizona v. Evans (1995) and Herring v. United States (2009), the Court ruled that the exclusionary rule does not apply to evidence found due to negligence regarding a government database, as long as the arresting police officer relied on that database in "good faith '' and the negligence was not pervasive. In Davis v. United States (2011), the Court ruled that the exclusionary rule does not apply to a Fourth Amendment violation resulting from a reasonable reliance on binding appellate precedent. In Utah v. Strieff (2016), the Court ruled that evidence obtained from an unlawful police stop would not be excluded from court when the link between the stop and the evidence 's discovery was "attenuated '' by the discovery of an outstanding warrant during the stop.
The Supreme Court has also held the exclusionary rule to not apply in certain other circumstances, such as parole revocation hearings and deportation proceedings.
On December 16, 2013, in Klayman v. Obama, a United States district court ruled that the mass collection of metadata of Americans ' telephone records by the National Security Agency probably violates the Fourth Amendment. The court granted a preliminary injunction, blocking the collection of phone data for two private plaintiffs and ordered the government to destroy any of their records that have been gathered. The court stayed the ruling pending a government appeal, recognizing the "significant national security interests at stake in this case and the novelty of the constitutional issues ''.
However, in ACLU v. Clapper, a United States district court ruled that the U.S. government 's global telephone data - gathering system is needed to thwart potential terrorist attacks, and that it can only work if everyone 's calls are included. The court also ruled that Congress legally set up the program and that it does not violate anyone 's constitutional rights. The court concluded that the telephone data being swept up by NSA did not belong to telephone users, but to the telephone companies. Also, the court held that when NSA obtains such data from the telephone companies, and then probes into it to find links between callers and potential terrorists, this further use of the data was not even a search under the Fourth Amendment, concluding that the controlling precedent is Smith v. Maryland, saying "Smith 's bedrock holding is that an individual has no legitimate expectation of privacy in information provided to third parties. '' The American Civil Liberties Union declared on January 2, 2014, that it will appeal the ruling that NSA bulk phone record collection is legal. "The government has a legitimate interest in tracking the associations of suspected terrorists, but tracking those associations does not require the government to subject every citizen to permanent surveillance, '' deputy ACLU legal director Jameel Jaffer said in a statement.
how many versions of iphone 6 are there | IPhone 6 - Wikipedia
Release date: September 19, 2014 (16, 64 and 128 GB models)
The iPhone 6 and iPhone 6 Plus are smartphones designed and marketed by Apple Inc. The devices are part of the iPhone series and were announced on September 9, 2014, and released on September 19, 2014. The iPhone 6 and iPhone 6 Plus jointly serve as successors to the iPhone 5S and were themselves replaced as flagship devices of the iPhone series by the iPhone 6S and iPhone 6S Plus on September 9, 2015. The iPhone 6 and 6 Plus include larger 4.7 and 5.5 inches (120 and 140 mm) displays, a faster processor, upgraded cameras, improved LTE and Wi - Fi connectivity and support for a near field communications - based mobile payments offering.
The iPhone 6 and 6 Plus received positive reviews, with critics regarding their design, specifications, camera, and battery life as improvements over previous iPhone models. However, aspects of the design of the iPhone 6 were also panned, including plastic strips on the rear of the device for its antenna that disrupted the otherwise metal exterior, and the screen resolution of the standard-sized iPhone 6 being lower than other devices in its class. Pre-orders of the iPhone 6 and iPhone 6 Plus exceeded four million within the first 24 hours of availability -- an Apple record. More than ten million iPhone 6 and iPhone 6 Plus devices were sold in the first three days, another Apple record. During their lifespan, the iPhone 6 and 6 Plus sold 220 million units in total, making them the best-selling iPhone models so far and among the most successful phones to date.
Despite their positive reception, the iPhone 6 and 6 Plus have been the subject of several hardware issues, most prominently a susceptibility to bending under pressure, such as in a pocket (a design flaw nicknamed "Bendgate"; the later iPhone 6S and 6S Plus were made with stronger 7000 series aluminum), and, as a byproduct of this lack of rigidity, the touchscreen's internal hardware being susceptible to losing its connection to the phone's logic board (nicknamed "Touch Disease"). The iPhone 6 Plus was also the subject of camera issues, including some devices with malfunctioning optical image stabilization or otherwise defective rear cameras.
The iPhone 6 and 6 Plus were moved to the midrange spot in Apple 's iPhone lineup when the iPhone 6S and 6S Plus were released in September 2015. The iPhone 6 and 6 Plus were discontinued in most countries on September 7, 2016 when Apple announced the iPhone 7 and iPhone 7 Plus. Their spot as the entry - level iPhone was replaced by the iPhone SE, which was released earlier on March 31, 2016. The iPhone 6 was relaunched with 32 GB of storage in Asian markets in February 2017 as a midrange / budget iPhone. It was later expanded to Europe, before hitting the US markets in May 2017, and Canada in July 2017.
From the launch of the original iPhone to the iPhone 4S, iPhones used 3.5 - inch displays -- which are smaller than screens used by flagship phones from competitors. The iPhone 5 and its immediate successors featured a display that was taller, but the same width as prior models, measuring at 4 inches diagonally. Following Apple 's loss in smartphone market share to companies producing phones with larger displays, reports as early as January 2014 suggested that Apple was preparing to launch new iPhone models with larger, 4.7 - inch and 5.5 - inch displays. Reports prior to its unveiling also speculated that Apple might use a new iPhone model to introduce a mobile payments platform using near - field communications -- a technology that has been incorporated into many Android phones, but has experienced a low adoption rate among users.
The iPhone 6 and iPhone 6 Plus were officially unveiled during a press event at the Flint Center for Performing Arts in Cupertino, California on September 9, 2014 and released on September 19, 2014; pre-orders began on September 12, 2014, with the iPhone 6 starting at US $649 and the iPhone 6 Plus starting at US $749. In China, where the iPhone 5c and 5s were the first models in the iPhone series to be released in the country on the same day as their international launch, Apple notified local wireless carriers that it would be unable to release the iPhone 6 and iPhone 6 Plus on the 19th because there were "details which are not ready ''; local media reported that the devices had not yet been approved by the Ministry of Industry and Information Technology, and earlier in the year, a news report by state broadcaster China Central Television alleged that iPhone devices were a threat to national security because iOS 7 's "frequent locations '' function could expose "state secrets. ''
In August 2015, Apple admitted that some iPhone 6 Plus may have faulty cameras that could be causing photos to look blurry and initiated a replacement program.
On September 9, 2015, with the release of the iPhone 6S and iPhone 6S Plus, the iPhone 6 and 6 Plus were moved to the mid-range of the iPhone lineup. The 128 GB versions of the iPhone 6 and iPhone 6 Plus were discontinued along with the gold version of both phones, but the 16 GB and 64 GB versions of the iPhone 6 and iPhone 6 Plus in silver and space gray remained available for sale at a reduced price.
In June 2016, Apple faced a potential sales ban in China, as Shenzhen Baili, a Chinese device maker, alleged that the iPhone 6 and iPhone 6 Plus infringed on its design patent.
The iPhone 6 and 6 Plus were discontinued on September 7, 2016, when Apple announced the iPhone 7 and iPhone 7 Plus, and the iPhone 6 and 6 Plus 's spot as the entry - level iPhone has since been taken by the iPhone SE. As the iPhone SE has more powerful internal hardware than the midrange iPhone 6 (largely the same as the 6S) and had been released earlier on March 31, 2016, this created an unusual situation when it was sold alongside the iPhone 6 and 6 Plus until September 7 despite being marketed as a lower - tier iPhone.
In February 2017, the iPhone 6 was quietly relaunched in carrier stores and online, this time with its storage changed to 32 GB. In India, it was sold on Amazon's website in Space Grey. In Taiwan, it was sold through Taiwan Mobile from March 10 in the gold colour. In mid-March, it reached Europe, released in Belarus via the i-Store web shop. It also appeared in North America with the Sprint-based US prepaid carriers Boost Mobile and Virgin Mobile, along with AT&T GoPhone. These models were not distributed by Apple on its website or in its retail stores.
The design of the iPhone 6 and iPhone 6 Plus is influenced by that of the iPad Air with a glass front that is curved around the edges of the display, and an aluminum rear that contains two plastic strips for the antenna. Both models come in gold, silver, and "space gray '' finishes. The iPhone 6 has a thickness of 6.9 millimetres (0.27 in), while the iPhone 6 Plus is 7.1 mm (0.28 in) in thickness; both are thinner than the iPhone 5c and iPhone 5s, with the iPhone 6 being Apple 's thinnest phone to date. The most significant changes to the iPhone 6 and iPhone 6 Plus are its displays; both branded as "Retina HD Display '' and "ion - strengthened '', the iPhone 6 display is 4.7 inches in size with a 16: 9 resolution of 1334x750 (326 ppi, minus one row of pixels), while the iPhone 6 Plus includes a 5.5 - inch 1920x1080 (1080p) display (401 PPI). The displays use a multiple - domain LCD panel, dubbed "dual - domain pixels ''; the RGB pixels themselves are skewed in pattern, so that every pixel is seen from a different angle. This technique helps improve the viewing angles of the display.
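The pixel density figures quoted above follow directly from the stated resolutions and diagonal sizes. As a minimal sketch (in Python, using only the numbers given in this section), the density is the diagonal pixel count divided by the diagonal length in inches:

    import math

    def ppi(width_px, height_px, diagonal_inches):
        # Pixel density = diagonal length in pixels / diagonal length in inches.
        return math.hypot(width_px, height_px) / diagonal_inches

    print(round(ppi(1334, 750, 4.7)))   # 326 -- matches the iPhone 6 figure
    print(round(ppi(1920, 1080, 5.5)))  # 401 -- matches the iPhone 6 Plus figure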
To accommodate the larger physical size of the iPhone 6 and iPhone 6 Plus, the power button was moved to the side of the phone instead of the top to improve its accessibility. The iPhone 6 features a 6.91 Wh (1810 mAh) battery, while the iPhone 6 Plus features an 11.1 Wh (2915 mAh) battery. Unlike the previous model, the rear - facing camera is not flush with the rear of the device, but instead protrudes slightly.
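The watt-hour and milliampere-hour battery ratings quoted above are mutually consistent if one assumes a nominal lithium-ion cell voltage of roughly 3.8 V; the voltage is an assumption here, as it is not stated in this section. A short check:

    # Energy (Wh) = charge (Ah) x nominal voltage (V); the voltage is inferred, not quoted in the text.
    for model, watt_hours, milliamp_hours in (("iPhone 6", 6.91, 1810), ("iPhone 6 Plus", 11.1, 2915)):
        print(model, round(watt_hours / (milliamp_hours / 1000), 2), "V nominal")
    # Both batteries work out to about 3.8 V, a typical nominal voltage for lithium-ion cells.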
Both models include an Apple A8 system-on-chip and an M8 motion co-processor -- an update of the M7 chip from the iPhone 5s. The primary difference between the M8 and the original M7 coprocessor is that the M8 also includes a barometer to measure altitude changes. Phil Schiller touted that the A8 chip would provide, in comparison to the 5s, a 25% increase in CPU performance, a 50% increase in graphics performance, and less heat output. Early hands-on reports suggested that the A8's GPU performance might indeed break away from previous generations' pattern of roughly doubling performance at each yearly release, scoring 21204.26 in Basemark X compared to 20253.80, 10973.36 and 5034.75 on the 5s, 5 and 4s respectively.
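Taking the Basemark X scores above at face value, the generation-to-generation ratios make the point concrete: the earlier steps roughly doubled the score, while the A8's step over the 5s is only a few percent. A brief sketch using only the numbers quoted in this section:

    # Basemark X GPU scores for the iPhone 6, 5s, 5 and 4s, as quoted above.
    scores = [21204.26, 20253.80, 10973.36, 5034.75]
    for newer, older in zip(scores, scores[1:]):
        print(round(newer / older, 2))  # prints 1.05, 1.85, 2.18 -- only the newest step falls well short of doubling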
The expanded LTE connectivity on the iPhone 6 and iPhone 6 Plus is improved to LTE Advanced, with support for over 20 LTE bands (7 more than the iPhone 5s), for up to 150 Mbit / s download speed, and VoLTE support. Wi - Fi performance has been improved with support for 802.11 ac specifications, providing speeds up to 433.0581 Mbit / s -- which is up to 3 times faster than 802.11 n, along with Wi - Fi Calling support where available. The iPhone 6 and iPhone 6 Plus add support for near - field communications (NFC). It is initially used exclusively for Apple Pay -- a new mobile payments system which allows users to store their credit cards in Passbook for use with online payments and retail purchases over NFC. iOS 11 will provide support for uses of near - field communications besides Apple Pay.
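The "up to 3 times faster" Wi-Fi comparison above is consistent with measuring the quoted 802.11ac rate against a single-stream 802.11n link rate of 150 Mbit/s, which is an assumed baseline rather than one stated in the text:

    # Quoted 802.11ac peak versus an assumed single-stream 802.11n baseline of 150 Mbit/s.
    print(round(433.0581 / 150, 1))  # 2.9 -- roughly "up to 3 times faster"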
The iPhone 6's rear-facing camera now has the ability to shoot 1080p video at either 30 or 60 frames per second and slow-motion video at either 120 or 240 frames per second. The camera also includes phase detection autofocus and can record time-lapse video. The iPhone 6 Plus camera is nearly identical, but also includes optical image stabilization. The front-facing camera was also updated with a new sensor and f/2.2 aperture, along with support for burst and HDR modes.
When first released, the iPhone 6 and iPhone 6 Plus were supplied pre-loaded with iOS 8, while the iPhone 5S was supplied pre-loaded with iOS 7. Apps are able to take advantage of the increased screen size in the iPhone 6 and 6 Plus to display more information on - screen; for example, the Mail app uses a dual - pane layout similar to its iPad version when the device is in landscape mode on the iPhone 6 Plus. As it uses an identical aspect ratio, apps designed for the iPhone 5, iPhone 5C, and 5S can be upscaled for use on the iPhone 6 and 6 Plus. To improve the usability of the devices ' larger screens, an additional "Reachability '' gesture was added; double - tapping the Home button will slide the top half of the screen 's contents down to the bottom half of the screen. This function allows users to reach buttons located near the top of the screen, such as a "Back '' button in the top - left corner.
Both iPhone 6 models received generally positive reviews. Re / code called it "the best smartphone you can buy ''. TechRadar praised the iPhone 6 's "brilliant '' design, improved battery life over the 5s, iOS 8 for being "smarter and more intuitive than ever '', along with the quality of its camera. However, the plastic antenna strips on the rear of the phone were criticized for resulting in poor aesthetics, the display for having lower resolution and pixel density in comparison to other recent smartphones -- including those with the same physical screen size as the iPhone 6, such as the HTC One, and for not having a sufficient justification for its significantly higher price in comparison to similar devices running Android or Windows Phone. The Verge considered the iPhone 6 to be "simply and cleanly designed '' in comparison to the 5s, noting that the phone still felt usable despite its larger size, but criticized the antenna plastic, the protruding camera lens (which prevents the device from sitting flat without a case), and the lack of additional optimization in the operating system for the bigger screen. Improvements such as performance, battery life, VoLTE support, and other tweaks were also noted. In conclusion, the iPhone 6 was considered "good, even great, but there 's little about it that 's truly ambitious or truly moving the needle. It 's just a refinement of a lot of existing ideas into a much more pleasant package ''.
Regarding the 6 Plus, Engadget panned its design for being uncomfortable to hold and harder to grip in comparison to other devices such as the Galaxy Note 3 and LG G3, but praised its inclusion of optical image stabilization and slightly better battery life than the 6.
The iPhone 6 and 6 Plus were affected by a number of notable hardware - related issues, including but not limited to concerns surrounding their rigidity (which led to incidents surrounding chassis bending, as well as degradation or outright loss of touchscreen functionality), performance issues on models with larger storage capacity, camera problems on the 6 Plus model, as well as an initially undocumented "error 53 '' that appeared under certain circumstances.
Shortly after its public release, it was reported that the iPhone 6 and iPhone 6 Plus chassis was susceptible to bending under pressure, such as when carried tightly in a user 's pocket. While such issues are not exclusive to the iPhone 6 and 6 Plus, the design flaw came to be known among users and the media as "Bendgate ''.
Apple responded to the bending allegations, stating that they had only received nine complaints of bent devices, and that the damage occurring due to regular use is "extremely rare. '' The company maintained that the iPhone 6 and 6 Plus went through durability testing to ensure they would stand up to daily use. The company offered to replace phones that were bent, if it is determined that the bending was unintentional.
On October 1, 2014, it was reported by Axel Telzerow, editor - in - chief of the German technology magazine Computer Bild, that following the posting of a video where a presenter was able to bend an iPhone 6 Plus, an Apple Germany representative informed the publication that it had been banned from future Apple events and that it would no longer receive devices directly from Apple for testing. Telzerow responded by saying that "we congratulate you to your fine new generation of iPhones, even if one of them has a minor weakness with its casing. But we are deeply disappointed about the lack of respect of your company. ''
On October 3, 2014, 9to5Mac published a post claiming that certain iPhone 6 and iPhone 6 Plus users had complained on social networking sites that the phone pulled out their hair when they held it close to their ears while making phone calls. Twitter users claimed that the seam between the glass screen and the aluminum back of the iPhone 6 was to blame, with hair becoming caught within it.
Some users reported that 64 and 128 GB iPhone 6 models had experienced performance issues, and that some 128 GB iPhone 6 Plus models would, in rare cases, randomly crash and reboot. Business Korea reported that the issues were connected to the triple-level cell (TLC) NAND storage of the affected models. Triple-level cells can store three bits of data per cell of flash and are cheaper than multi-level cell (MLC) solutions, but at the cost of performance. It was reported that Apple had planned to switch the affected model lines back to multi-level cell flash, and to address the performance issues on existing devices in a future iOS update.
It was reported that the optical image stabilization systems on some iPhone 6 Plus models were faulty, failing to properly stabilize when the phone is being held perfectly still, leading to blurry photos and "wavy '' - looking videos. The optical image stabilization system was also found to have been affected by accessories that use magnets, such as third - party lens attachments; Apple issued advisories to users and its licensed accessory makers, warning that magnetic or metallic accessories can cause the OIS to malfunction.
On August 21, 2015, Apple instituted a repair program for iPhone 6 Plus models released between September 2014 and January 2015, citing that faulty rear cameras on affected models may produce blurry pictures.
Some iPhone 6 and iPhone 6 Plus models have an issue where the front-facing camera appears "shifted", or out of place. Apple stated that it would replace most affected iPhone 6 models with this issue free of charge. Despite numerous complaints, the issue does not seem to affect the camera itself: reportedly it is not the camera that has shifted, but a piece of protective foam around the camera module that has gone out of place.
If the iPhone 6 home button is repaired or modified by a third - party, the device will fail security checks related to Touch ID as the components had not been "re-validated '' for security reasons -- a process which can only be performed by an authorized Apple outlet. Failing these checks disables all features related to Touch ID. Such effects have sometimes happened as a result of damage as well.
It was reported these same hardware integrity checks would trigger an unrecoverable loop into "Recovery Mode '' if iOS is updated or restored, with attempts to restore the device via iTunes software resulting in an "error 53 '' message. Beyond the explanation that this is related to hardware integrity errors regarding Touch ID components, Apple provided no official explanation of what specifically triggers error 53 or how it can be fixed without replacing the entire device.
On February 18, 2016, Apple released an iOS 9.2.1 patch through iTunes which addresses this issue, and admitted that error 53 was actually related to a diagnostic check for inspecting the Touch ID hardware before an iPhone is shipped from its factories.
Touchscreen control components on iPhone 6 logic boards have insufficient support, including a lack of underfill -- which strengthens and stabilizes integrated circuits, and a lack of rigid metal shielding on the logic board unlike previous iPhone models; the touchscreen controller is instead shielded by a flexible "sticker ''. Normal use of the device can cause the logic board to flex internally, which strains the touchscreen IC connectors and leads to a degradation or outright loss of touchscreen functionality. A symptom that has been associated with this type of failure is a flickering grey bar near the top of the display. iFixit reported that this issue, nicknamed "touch disease '', was a byproduct of the previous "Bendgate '' design flaw because of the device 's demonstrated lack of rigidity. As such, the larger iPhone 6 Plus is more susceptible to the flaw, but it has also been reported on a small percentage of iPhone 6 models. The devices ' successor, the iPhone 6S, is not afflicted by this flaw due to changes to their internal design, which included the strengthening of "key points '' in the rear casing, and the re-location of the touchscreen controllers to the display assembly from the logic board.
Initially, Apple did not officially acknowledge this issue. The issue was widely discussed on Apple 's support forum -- where posts discussing the issue have been subject to censorship. The touchscreen can be repaired via microsoldering: Apple Stores are not equipped with the tools needed to perform the logic board repair, which had led to affected users sending their devices to unofficial, third - party repair services. An Apple Store employee interviewed by Apple Insider reported that six months after they first started noticing the problem, Apple had issued guidance instructing them to tell affected users that this was a hardware issue which could not be repaired, and that their iPhone had to be replaced. However, some in - stock units have also been afflicted with this issue out of the box, leading to an employee stating that they were "tired of pulling service stock out of the box, and seeing the exact same problem that the customer has on the replacement ''. The issue received mainstream attention in August 2016 when it was reported upon by iFixit. On August 26, 2016, Apple Insider reported that based on data from four "high - traffic '' Apple Store locations, there was a spike in the number of iPhone 6 devices brought into them for repairs following mainstream reports of the "touch disease '' problem.
On August 30, 2016, a group of three iPhone 6 owners sued Apple Inc. in the United States District Court for the Northern District of California and filed for a proposed class action lawsuit, alleging that Apple was engaging in unfair business practices by having "long been aware '' of the defective design, yet actively refusing to acknowledge or repair it.
On November 17, 2016, Apple officially acknowledged the issue and announced a paid repair program for affected iPhone 6 Plus models, stating that "some iPhone 6 Plus devices may exhibit display flickering or Multi-Touch issues after being dropped multiple times on a hard surface and then incurring further stress on the device ''.
Apple Inc. announced that, within 24 hours of availability, over 4 million pre-orders of the iPhone 6 and 6 Plus were made, exceeding the supply available. More than 10 million iPhone 6 and iPhone 6 Plus devices were sold in the first three days.
the broken windows theory views crime as highly unpredictable | Broken windows theory - wikipedia
The broken windows theory is a criminological theory of the norm - setting and signaling effect of urban disorder and vandalism on additional crime and anti-social behavior. The theory states that maintaining and monitoring urban environments to prevent small crimes such as vandalism, public drinking, and turnstile - jumping helps to create an atmosphere of order and lawfulness, thereby preventing more serious crimes from happening.
The theory was introduced in a 1982 article by social scientists James Q. Wilson and George L. Kelling. Since then it has been subject to great debate both within the social sciences and the public sphere. The theory has been used as a motivation for several reforms in criminal policy, including the controversial mass use of "stop, question, and frisk '' by the New York City Police Department.
James Q. Wilson and George L. Kelling first introduced the broken windows theory in an article titled "Broken Windows", in the March 1982 issue of The Atlantic Monthly. The title comes from the following example:
Consider a building with a few broken windows. If the windows are not repaired, the tendency is for vandals to break a few more windows. Eventually, they may even break into the building, and if it 's unoccupied, perhaps become squatters or light fires inside.
Or consider a pavement. Some litter accumulates. Soon, more litter accumulates. Eventually, people even start leaving bags of refuse from take - out restaurants there or even break into cars.
Before the introduction of this theory by Wilson and Kelling, Philip Zimbardo, a Stanford psychologist, arranged an experiment testing the broken - window theory in 1969. Zimbardo arranged for an automobile with no license plates and the hood up to be parked idle in a Bronx neighbourhood and a second automobile in the same condition to be set up in Palo Alto, California. The car in the Bronx was attacked within minutes of its abandonment. Zimbardo noted that the first "vandals '' to arrive were a family -- a father, mother and a young son -- who removed the radiator and battery. Within twenty - four hours of its abandonment, everything of value had been stripped from the vehicle. After that, the car 's windows were smashed in, parts torn, upholstery ripped, and children were using the car as a playground. At the same time, the vehicle sitting idle in Palo Alto sat untouched for more than a week until Zimbardo himself went up to the vehicle and deliberately smashed it with a sledgehammer. Soon after, people joined in for the destruction. Zimbardo observed that a majority of the adult "vandals '' in both cases were primarily well dressed, Caucasian, clean - cut and seemingly respectable individuals. It is believed that, in a neighborhood such as the Bronx where the history of abandoned property and theft are more prevalent, vandalism occurs much more quickly as the community generally seems apathetic. Similar events can occur in any civilized community when communal barriers -- the sense of mutual regard and obligations of civility -- are lowered by actions that suggest apathy.
The article received a great deal of attention and was very widely cited. A 1996 criminology and urban sociology book, Fixing Broken Windows: Restoring Order and Reducing Crime in Our Communities by George L. Kelling and Catharine Coles, is based on the article but develops the argument in greater detail. It discusses the theory in relation to crime and strategies to contain or eliminate crime from urban neighborhoods.
A successful strategy for preventing vandalism, according to the book 's authors, is to address the problems when they are small. Repair the broken windows within a short time, say, a day or a week, and the tendency is that vandals are much less likely to break more windows or do further damage. Clean up the sidewalk every day, and the tendency is for litter not to accumulate (or for the rate of littering to be much less). Problems are less likely to escalate and thus "respectable '' residents do not flee the neighborhood.
Though police work is crucial to crime prevention, Oscar Newman, in his 1972 book, Defensible Space, wrote that the presence of police authority is not enough to maintain a safe and crime - free city. People in the community help with crime prevention. Newman proposes that people care for and protect spaces they feel invested in, arguing that an area is eventually safer if the people feel a sense of ownership and responsibility towards the area. Broken windows and vandalism are still prevalent because communities simply do not care about the damage. Regardless of how many times the windows are repaired, the community still must invest some of their time to keep it safe. Residents ' negligence of broken window - type decay signifies a lack of concern for the community. Newman says this is a clear sign that the society has accepted this disorder -- allowing the unrepaired windows to display vulnerability and lack of defense. Malcolm Gladwell also relates this theory to the reality of NYC in his book The Tipping Point.
The theory thus makes two major claims: that further petty crime and low - level anti-social behavior is deterred, and that major crime is prevented as a result. Criticism of the theory has tended to focus disproportionately on the latter claim.
Many claim that informal social controls can be an effective strategy to reduce unruly behavior. Garland (2001) writes that "community policing measures in the realization that informal social control exercised through everyday relationships and institutions is more effective than legal sanctions". Informal social control methods have demonstrated a "get tough" attitude by proactive citizens and express a sense that disorderly conduct is not tolerated. According to Wilson and Kelling, there are two types of groups involved in maintaining order: 'community watchmen' and 'vigilantes'. The United States has in many ways adopted the policing strategies of earlier European times, when informal social control was the norm, which later gave rise to contemporary formal policing. Though in earlier times there were no legal sanctions to follow, informal policing was primarily 'objective' driven, as stated by Wilson and Kelling (1982).
Wilcox et al. (2004) argue that improper land use can cause disorder, and that the larger a piece of public land is, the more susceptible it is to criminal deviance. Therefore, nonresidential spaces such as businesses may assume the responsibility of informal social control "in the form of surveillance, communication, supervision, and intervention." It is expected that more strangers occupying the public land creates a higher chance for disorder. Jane Jacobs can be considered one of the original pioneers of this perspective of broken windows. Much of her book The Death and Life of Great American Cities focuses on residents' and nonresidents' contributions to maintaining order on the street, and explains how local businesses, institutions, and convenience stores provide a sense of having "eyes on the street."
On the contrary, many residents feel that regulating disorder is not their responsibility. Wilson and Kelling found that studies done by psychologists suggest people often refuse to go to the aid of someone seeking help, not due to a lack of concern or selfishness, "but the absence of some plausible grounds for feeling that one must personally accept responsibility". On the other hand, others plainly refuse to put themselves in harm's way, depending on how grave they perceive the nuisance to be; a 2004 study observed that "most research on disorder is based on individual level perceptions decoupled from a systematic concern with the disorder-generating environment." Essentially, everyone perceives disorder differently and gauges the seriousness of a crime based on those perceptions. However, Wilson and Kelling feel that although community involvement can make a difference, "the police are plainly the key to order maintenance."
Ranasinghe argues that the concept of fear is a crucial element of broken windows theory, because it is the foundation of the theory. She also adds that public disorder is "... unequivocally constructed as problematic because it is a source of fear." Fear is elevated as perception of disorder rises, creating a social pattern that tears the social fabric of a community and leaves the residents feeling hopeless and disconnected. Wilson and Kelling hint at the idea but do not focus on its central importance. They indicate that fear was a product of incivility, not crime, and that people avoid one another in response to fear, weakening controls. Hinkle and Weisburd found that police interventions to combat minor offenses, as per the broken windows model, "significantly increased the probability of feeling unsafe," suggesting that such interventions might offset any benefits of broken windows policing in terms of fear reduction.
In the article published in The Atlantic in March 1982, Wilson indicated that police efforts had gradually shifted from maintaining order to fighting crime. This suggested that order maintenance had become something of the past and would soon seem to have been put on the back burner. The shift was attributed to the rise of the urban social riots of the 1960s, when "social scientists began to explore carefully the order maintenance function of the police, and to suggest ways of improving it -- not to make streets safer (its original function) but to reduce the incidence of mass violence". Other criminologists note similar disconnections; for example, Garland argues that throughout the early and mid 20th century, police in American cities strove to keep away from the neighborhoods under their jurisdiction. This is a possible indicator of the out-of-control social riots that were prevalent at that time. Still, many would agree that reducing crime and violence begins with maintaining social control and order.
Ranasinghe discusses Jane Jacobs' The Death and Life of Great American Cities in detail, along with its importance to the early workings of broken windows, and claims that Kelling's original interest in "minor offences and disorderly behaviour and conditions" was inspired by Jacobs' work. Ranasinghe notes that Jacobs' approach toward social disorganization was centred on the "streets and their sidewalks, the main public places of a city", which "are its most vital organs, because they provide the principal visual scenes." Wilson and Kelling, as well as Jacobs, focus on the concept of civility (or the lack thereof) and how it creates lasting distortions between crime and disorder. Ranasinghe explains that the common framework of both sets of authors is to narrate the problems facing urban public places. Jacobs, according to Ranasinghe, maintains that "Civility functions as a means of informal social control, subject little to institutionalized norms and processes, such as the law", but is rather maintained through an "intricate, almost unconscious, network of voluntary controls and standards among people... and enforced by the people themselves".
The state of the urban environment may affect crime through three factors: social norms and conformity, the presence or lack of routine monitoring, and social signalling.
In an anonymous, urban environment, with few or no other people around, social norms and monitoring are not clearly known. Individuals thus look for signals within the environment as to the social norms in the setting and the risk of getting caught violating those norms; one of the signals is the area 's general appearance.
Under the broken windows theory, an ordered and clean environment, one that is maintained, sends the signal that the area is monitored and that criminal behavior is not tolerated. Conversely, a disordered environment, one that is not maintained (broken windows, graffiti, excessive litter), sends the signal that the area is not monitored and that criminal behavior has little risk of detection.
The theory assumes that the landscape "communicates '' to people. A broken window transmits to criminals the message that a community displays a lack of informal social control and so is unable or unwilling to defend itself against a criminal invasion. It is not so much the actual broken window that is important but the message the broken window sends to people. It symbolizes the community 's defenselessness and vulnerability and represents the lack of cohesiveness of the people within. Neighborhoods with a strong sense of cohesion fix broken windows and assert social responsibility on themselves, effectively giving themselves control over their space.
The theory emphasizes the built environment, but must also consider human behavior.
Under the impression that a broken window left unfixed leads to more serious problems, residents begin to change the way they see their community. In an attempt to stay safe, a cohesive community starts to fall apart, as individuals start to spend less time in communal space to avoid potential violent attacks by strangers. The slow deterioration of a community as a result of broken windows modifies the way people behave when it comes to their communal space, which, in turn, breaks down community control. As rowdy teenagers, drunks, panhandlers, addicts, and prostitutes slowly make their way into a community, it signifies that the community can not assert informal social control, and citizens become afraid that worse things will happen. As a result, they spend less time in the streets to avoid these subjects and feel less and less connected to their community if the problems persist.
At times, residents tolerate "broken windows '' because they feel they belong in the community and "know their place. '' Problems, however, arise when outsiders begin to disrupt the community 's cultural fabric. That is the difference between "regulars '' and "strangers '' in a community. The way that "regulars '' act represents the culture within, but strangers are "outsiders '' who do not belong.
Consequently, what were considered "normal '' daily activities for residents now become uncomfortable, as the culture of the community comes to carry a different feel from what it once had.
With regard to social geography, the broken windows theory is a way of explaining people and their interactions with space. The culture of a community can deteriorate and change over time with the influence of unwanted people and behaviors changing the landscape. The theory can be seen as people shaping space as the civility and attitude of the community create spaces, used for specific purposes by residents. On the other hand, it can also be seen as space shaping people with elements of the environment influencing and restricting day - to - day decision making.
However, with policing efforts to remove unwanted disorderly people that put fear in the public 's eyes, the argument would seem to be in favor of "people shaping space '' as public policies are enacted and help to determine how one is supposed to behave. All spaces have their own codes of conduct, and what is considered to be right and normal will vary from place to place.
The concept also takes into consideration spatial exclusion and social division as certain people behaving in a given way are considered disruptive and therefore unwanted. It excludes people from certain spaces because their behavior does not fit the class level of the community and its surroundings. A community has its own standards and communicates a strong message to criminals, by social control, that their neighborhood does not tolerate their behavior. If, however, a community is unable to ward off would - be criminals on its own, policing efforts help.
By removing unwanted people from the streets, the residents feel safer and have a higher regard for those that protect them. People of less civility who try to make a mark in the community are removed, according to the theory. Excluding the unruly and people of certain social statuses is an attempt to keep the balance and cohesiveness of a community.
In 1985, the New York City Transit Authority hired George L. Kelling, a co-author of the "Broken Windows '' article, as a consultant. Kelling was later hired as a consultant to the Boston and the Los Angeles police departments.
One of Kelling 's adherents, David L. Gunn, implemented policies and procedures based on the Broken Windows Theory during his tenure as President of the New York City Transit Authority. One of his major efforts was to lead a campaign from 1984 to 1990 to rid New York 's subway system of graffiti.
In 1990, William J. Bratton became head of the New York City Transit Police. Bratton described Kelling as his "intellectual mentor '' and implemented zero tolerance of fare - dodging, faster arrestee processing methods, and background checks on all those arrested.
After being elected Mayor of New York City in 1993, Republican Rudy Giuliani hired Bratton as his police commissioner to implement similar policies and practices throughout the city. Giuliani heavily subscribed to Kelling and Wilson 's theories and was eager to implement policies based on them. Such policies emphasized addressing crimes that negatively impact quality of life. In particular, Bratton directed the police to more strictly enforce laws against subway fare evasion, public drinking, public urination, and graffiti. He increased enforcement against squeegee men, those who aggressively demand payment at traffic stops for unsolicited car window cleanings. Bratton also revived the New York City Cabaret Law, a previously dormant Prohibition - era ban on dancing in unlicensed establishments. Throughout the late 1990s, the NYPD shut down many of the city 's acclaimed night spots for illegal dancing.
Although Broken Windows Theory does not necessarily prescribe zero - tolerance policies, this was another prominent feature of Bratton 's reforms, which was part of a broader series of related policies adopted by the Giuliani administration.
Bratton was criticized for his emphasis on petty crimes, which would presumably draw resources away from addressing more serious crimes.
According to a 2001 study of crime trends in New York City by Kelling and William Sousa, rates of both petty and serious crime fell significantly after the aforementioned policies were implemented. Furthermore, crime continued to decline for the following ten years. Such declines suggested that policies based on the Broken Windows Theory were effective.
However, other studies do not find a cause and effect relationship between the adoption of such policies and decreases in crime. The decrease may have been part of a broader trend across the United States. Other cities also experienced less crime, even though they had different police policies. Other factors, such as the 39 % drop in New York City 's unemployment rate, could also explain the decrease reported by Kelling and Sousa.
A 2017 study found that when the New York Police Department (NYPD) stopped aggressively enforcing minor legal statutes in late 2014 and early 2015 that "civilian complaints of major crimes (such as burglary, felony assault and grand larceny) decreased during and shortly after sharp reductions in proactive policing. The results challenge prevailing scholarship as well as conventional wisdom on authority and legal compliance, as they imply that aggressively enforcing minor legal statutes incites more severe criminal acts. ''
Albuquerque, New Mexico, instituted the Safe Streets Program in the late 1990s based on the Broken Windows Theory. Operating under the theory that American Westerners use roadways much in the same way that American Easterners use subways, the developers of the program reasoned that lawlessness on the roadways had much the same effect as it did on the New York City Subway. Effects of the program were reviewed by the US National Highway Traffic Safety Administration (NHTSA) and were published in a case study.
In 2005, Harvard University and Suffolk University researchers worked with local police to identify 34 "crime hot spots '' in Lowell, Massachusetts. In half of the spots, authorities cleared trash, fixed streetlights, enforced building codes, discouraged loiterers, made more misdemeanor arrests, and expanded mental health services and aid for the homeless. In the other half of the identified locations, there was no change to routine police service.
The areas that received additional attention experienced a 20 % reduction in calls to the police. The study concluded that cleaning up the physical environment was more effective than misdemeanor arrests and that increasing social services had no effect.
In 2007 and 2008, Kees Keizer and colleagues from the University of Groningen conducted a series of controlled experiments to determine if the effect of existing visible disorder (such as litter or graffiti) increased other crime such as theft, littering, or other antisocial behavior. They selected several urban locations, which they arranged in two different ways, at different times. In each experiment, there was a "disorder '' condition in which violations of social norms as prescribed by signage or national custom, such as graffiti and littering, were clearly visible as well as a control condition where no violations of norms had taken place. The researchers then secretly monitored the locations to observe if people behaved differently when the environment was "disordered ''. Their observations supported the theory. The conclusion was published in the journal Science: "One example of disorder, like graffiti or littering, can indeed encourage another, like stealing. ''
Other side effects of better monitoring and cleaned - up streets may well be desired by governments or housing agencies and the population of a neighborhood: broken windows can count as an indicator of low real estate value and may deter investors. Fixing windows is therefore also a step of real estate development, which may lead, whether it is desired or not, to gentrification. By reducing the number of broken windows in the community, the inner cities would appear more attractive to consumers with more capital. Ridding spaces like downtown New York and Chicago, notorious for criminal activity, of danger would draw in investment from consumers, increasing the city 's economic status and providing a safe and pleasant image for present and future inhabitants.
In education, the broken windows theory is used to promote order in classrooms and school cultures. The belief is that students are signaled by disorder or rule - breaking and that they in turn imitate the disorder. Several school movements encourage strict paternalistic practices to enforce student discipline. Such practices include language codes (governing slang, curse words, or speaking out of turn), classroom etiquette (sitting up straight, tracking the speaker), personal dress (uniforms, little or no jewelry), and behavioral codes (walking in lines, specified bathroom times). Several schools have made significant strides in educational gains with this philosophy such as the Knowledge Is Power Program and American Indian Public Charter School.
From 2004 to 2006, Stephen B. Plank and colleagues from Johns Hopkins University conducted a correlational study to determine the degree to which the physical appearance of the school and classroom setting influence student behavior, particularly in respect to the variables concerned in their study: fear, social disorder, and collective efficacy. They collected survey data administered to 6th - 8th grade students in 33 public schools in a large mid-Atlantic city. From analyses of the survey data, the researchers determined that the variables in their study were significantly related to the physical conditions of the school and classroom setting. The conclusion, published in the American Journal of Education, was
... the findings of the current study suggest that educators and researchers should be vigilant about factors that influence student perceptions of climate and safety. Fixing broken windows and attending to the physical appearance of a school can not alone guarantee productive teaching and learning, but ignoring them likely greatly increases the chances of a troubling downward spiral.
Broken windows policing is a form of policing based on broken windows theory. It is also sometimes called quality - of - life policing. William J. Bratton popularized this policing strategy as New York City 's police commissioner during the mid-1990s.
Many critics state that factors other than physical disorder more significantly influence crime rate. They argue that efforts to more effectively reduce crime rate should target or pay more attention to such factors instead.
According to a study by Robert J. Sampson and Stephen Raudenbush, the premise on which the theory operates, that social disorder and crime are connected as part of a causal chain, is faulty. They argue that a third factor, collective efficacy, "defined as cohesion among residents combined with shared expectations for the social control of public space, '' is the actual cause of varying crime rates that are observed in an altered neighborhood environment. They also argue that the relationship between public disorder and crime rate is weak.
C.R. Sridhar, in his article in the Economic and Political Weekly, also challenges broken windows policing and the notion advanced by Kelling and Bratton that aggressive policing, such as the zero tolerance strategy adopted by William Bratton, the appointed commissioner of the New York Police Department, was the sole cause of the decrease in crime rates in New York City. The policy targeted people in areas with a significant amount of physical disorder, and there appeared to be a causal relationship between the adoption of the aggressive policy and the decrease in crime rate. Sridhar, however, discusses other trends (such as New York City 's economic boom in the late 1990s) that created a "perfect storm '' that contributed to the decrease of crime rate much more significantly than the application of the zero tolerance policy. Sridhar also compares this decrease in crime rate with other major cities that adopted various other policies and determines that the zero tolerance policy is not as effective.
Baltimore criminologist Ralph B. Taylor argues in his book that fixing windows is only a partial and short - term solution. His data supports a materialist view: changes in levels of physical decay, superficial social disorder, and racial composition do not lead to higher crime, but economic decline does. He contends that the example shows that real, long - term reductions in crime require that urban politicians, businesses, and community leaders work together to improve the economic fortunes of residents in high - crime areas.
Another tack was taken by a 2010 study questioning the legitimacy of the theory concerning the subjectivity of disorder as perceived by persons living in neighborhoods. It concentrated on whether citizens view disorder as a separate issue from crime or as identical to it. The study noted that crime can not be the result of disorder if the two are identical, agreed that disorder provided evidence of "convergent validity '' and concluded that broken windows theory misinterprets the relationship between disorder and crime.
In recent years, there has been increasing attention on the correlation between environmental lead levels and crime. Specifically, there appears to be a correlation, with a 25 - year lag, between the addition and removal of lead from paint and gasoline and the corresponding rises and falls in murder arrests.
Robert J. Sampson argues that common misconceptions among the public clearly imply that those who commit disorder and crime are tied to groups suffering from financial instability and may be of minority status: "The use of racial context to encode disorder does not necessarily mean that people are racially prejudiced in the sense of personal hostility. '' He notes that residents make a clear implication of who they believe is causing the disruption, which has been termed implicit bias. He further states that research conducted on implicit bias and stereotyping of cultures suggests that community members hold unrelenting beliefs about African - Americans and other disadvantaged minority groups, associating them with crime, violence, disorder, welfare, and undesirability as neighbors. A later study indicated that this contradicted Wilson and Kelling 's proposition that disorder is an exogenous construct that has independent effects on how people feel about their neighborhoods.
According to some criminologists who speak of a broader "backlash, '' the broken windows theory is not theoretically sound. They claim that the "broken windows theory '' conflates correlation with causality, a reasoning prone to fallacy. David Thacher, assistant professor of public policy and urban planning at the University of Michigan, stated in a 2004 paper:
(S) ocial science has not been kind to the broken windows theory. A number of scholars reanalyzed the initial studies that appeared to support it... Others pressed forward with new, more sophisticated studies of the relationship between disorder and crime. The most prominent among them concluded that the relationship between disorder and serious crime is modest, and even that relationship is largely an artifact of more fundamental social forces.
It has also been argued that rates of major crimes also dropped in many other US cities during the 1990s, both those that had adopted zero - tolerance and those that had not. In the winter 2006 edition of the University of Chicago Law Review, Bernard Harcourt and Jens Ludwig looked at the later Department of Housing and Urban Development program that rehoused inner - city project tenants in New York into more - orderly neighborhoods. The broken windows theory would suggest that these tenants would commit less crime once moved because of the more stable conditions on the streets. However, Harcourt and Ludwig found that the tenants continued to commit crime at the same rate.
In a 2007 study called "Reefer Madness '' in the journal Criminology and Public Policy, Harcourt and Ludwig found further evidence confirming that mean reversion fully explained the changes in crime rates in the different precincts in New York in the 1990s. Further alternative explanations that have been put forward include the waning of the crack epidemic, unrelated growth in the prison population under the Rockefeller drug laws, and that the number of males aged 16 to 24 was dropping as the shape of the US population pyramid changed.
A low - level intervention of police in neighborhoods has been considered problematic. Accordingly, Gary Stewart wrote, "The central drawback of the approaches advanced by Wilson, Kelling, and Kennedy rests in their shared blindness to the potentially harmful impact of broad police discretion on minority communities. '' He worried that people would be arrested "for the ' crime ' of being undesirable. '' According to Stewart, arguments for low - level police intervention, including the broken windows hypothesis, often act "as cover for racist behavior. ''
The application of the broken windows theory in aggressive policing policies, such as William Bratton 's zero - tolerance policy, has been shown to criminalize the poor and homeless. That is because the physical signs that characterize a neighborhood with the "disorder '' that broken windows policing targets correlate with the socio - economic conditions of its inhabitants. Many of the acts that are considered legal but "disorderly '' are often targeted in public settings and are not targeted when they are conducted in private. Therefore, those without access to a private space are often criminalized. Critics, such as Robert J. Sampson and Stephen Raudenbush of Harvard University, see the application of the broken windows theory in policing as a war against the poor, as opposed to a war against more serious crimes.
In Dorothy Roberts 's article, "Foreword: Race, Vagueness, and the Social Meaning of Order Maintenance and Policing, '' she focuses on problems in the application of the broken windows theory, which leads to the criminalization of communities of color, who are typically disenfranchised. She underscores the dangers of vaguely written ordinances that allow law enforcers to determine who engages in disorderly acts, which, in turn, produces a racially skewed outcome in crime statistics.
According to Bruce D. Johnson, Andrew Golub, and James McCabe, the application of the broken windows theory in policing and policymaking can result in development projects that decrease physical disorder but promote undesired gentrification. Often, when a city is so "improved '' in this way, the development of an area can cause the cost of living to rise higher than residents can afford, which forces low - income people, often minorities, out of the area. As the space changes, the middle and upper classes, often white, begin to move into the area, resulting in the gentrification of urban, poor areas. The local residents are affected negatively by such an application of the broken windows theory and end up evicted from their homes as if their presence indirectly contributed to the area 's problem of "physical disorder. ''
In More Guns, Less Crime (University of Chicago Press, 2000), economist John Lott, Jr. examined the use of the broken windows approach as well as community - and problem - oriented policing programs in cities over 10,000 in population, over two decades. He found that the impacts of these policing policies were not very consistent across different types of crime. Lott 's book has been subject to criticism, but other groups support Lott 's conclusions.
In the 2005 book Freakonomics, coauthors Steven D. Levitt and Stephen J. Dubner confirm and question the notion that the broken windows theory was responsible for New York 's drop in crime: "the reality that the pool of potential criminals had dramatically shrunk. Levitt had attributed that possibility in the Quarterly Journal of Economics to the legalization of abortion with Roe v. Wade, a decrease in the number of delinquents in the population at large, one generation later.
|
when can a baby horse leave its mother | Foal - wikipedia
A foal is an equine up to one year old; this term is used mainly for horses. More specific terms are colt for a male foal and filly for a female foal, and are used until the horse is three or four. When the foal is nursing from its dam (mother), it may also be called a "suckling ''. After it has been weaned from its dam, it may be called a "weanling ''. When a mare is pregnant, she is said to be "in foal ''. When the mare gives birth, she is "foaling '', and the impending birth is usually stated as "to foal ''. A newborn horse is "foaled ''.
After a horse is one year old, it is no longer a foal, and is a "yearling ''. There are no special age - related terms for young horses older than yearlings. When young horses reach breeding maturity, the terms change: a filly over three (four in horse racing) is called a mare, and a colt over three is called a stallion. A castrated male horse is called a gelding regardless of age; however, colloquially, the term "gelding colt '' is sometimes used until a young gelding is three or four. (There is no specific term for a spayed mare other than a "spayed mare ''.)
Horses that mature at a small stature are called ponies and are occasionally confused with foals. However, body proportions are very different. An adult pony can be ridden and put to work, but a foal, regardless of stature, is too young to be ridden or used as a working animal. Foals, whether they grow up to be horse - or pony - sized, can be distinguished from adult horses by their extremely long legs and small, slim bodies. Their heads and eyes also exhibit juvenile characteristics. Although ponies exhibit some neoteny, with their wide foreheads and small stature, their body proportions are similar to those of an adult horse. Pony foals are proportionally smaller than adults, but like horse foals, they are slimmer and have proportionally longer legs than their adult parents.
Foals are born after a gestation period of approximately 11 months. Birth takes place quickly, consistent with the status of a horse as a prey animal, and more often at night than during the day. Labor lasting over twenty - four hours may be a sign of medical complications. Unlike most predators which are altricial (born helpless), horses are precocial, meaning they come into the world relatively mature and mobile. Healthy foals can typically keep up with the rest of the herd only a few hours after birth. If a foal has not eaten within twelve hours, it may require assistance.
Healthy foals grow quickly and can put on up to three pounds or over a kilo a day. A sound diet improves growth and leads to a healthier adult animal, although genetics also plays a part. In the first weeks of life the foal gets everything it needs from the mare 's milk. Like a human infant, it receives nourishment and antibodies from the colostrum in milk that is produced within the first few hours or days following parturition. The mare needs additional water to help her produce milk for the foal and may benefit from supplementary nutrition.
A foal may start to eat solids from ten days of age; after eight to ten weeks it will need more nutrition than the mare 's milk can supply, and supplementary feeding is required by then. It is important when adding solid food to the foal 's diet not to feed the foal excessively or to feed an improperly balanced diet. This can trigger one of several possible growth disorders that can cause lifelong soundness problems. On the other hand, insufficient nutrition for the mare or foal can cause stunted growth and other health problems for the foal as it gets older.
It is typical for foals under human management to be weaned between four and six months of age, though under natural conditions, they may nurse for longer, occasionally until the following year when the mare foals again. Some foals can nurse for up to three years in domesticity because the mare is less likely to conceive another foetus. A foal that has been weaned but is less than one year old is called a weanling.
Mare 's milk is not a significant source of nutrients for the foal after about four months, though it does no harm to a healthy mare for a foal to nurse longer and may be of some psychological benefit to the foal. A mare that is both nursing and pregnant will have increased nutritional demands made upon her in the last months of pregnancy, and therefore most domesticated foals are weaned sometime in the autumn in the Northern Hemisphere if the mare is to be bred again the next season.
Weanlings are not capable of reproduction. Puberty occurs in most horses during their yearling year. Therefore, some young horses are capable of reproduction prior to full physical maturity, though it is not common. Two - year - olds sometimes are deliberately bred, though doing so, particularly with fillies, puts undesirable stress on their still - growing bodies. As a general rule, breeding young horses prior to the age of three is considered undesirable.
In spite of rapid growth, a foal is too young to be ridden or driven. However, foals usually receive very basic horse training in the form of being taught to accept being led by humans, called halter - breaking. They may also learn to accept horse grooming, hoof trimming by a farrier, having hair trimmed with electric clippers, and to become familiar with things it will have to do throughout life, such as loading into a horse trailer or wearing a horse blanket. Horses in general have excellent memories, so a foal must not be taught anything as a young horse that would be undesirable for it to do as a full - grown animal.
There is tremendous debate over the proper age to begin training a foal. Some advocate beginning to accustom a foal to human handling from the moment of birth, using a process termed imprinting or "imprint training ''. Others feel that imprint training of a foal interferes with the mare and foal bond and prefer to wait until the foal is a few days old, but do begin training within the first week to month of life. Yet other horse breeding operations wait until weaning, theorizing that a foal is more willing to bond to a human as a companion at the time it is separated from its mother. Regardless of theory, most modern horse breeding operations consider it wise to give a foal basic training while it is still young, and consider it far safer than trying to tame a semi-feral adult - sized horse.
In either case, foals that have not bonded to their mothers will have difficulty in pasture. The mare will find it more difficult to teach the foal to follow her. Other horses can have difficulty communicating with the foal and may ostracise it due to speaking a different "language ''. It can be difficult to lead a foal that has never even been led by its dam.
Horses are not fully mature until the age of four or five, but most are started as working animals much younger, though care must be taken not to over-stress the "soft '' bones of younger animals. Yearlings are generally too young to be ridden at all, though many race horses are put under saddle as "long '' yearlings, in autumn. Physiologically young horses are still not truly mature as two - year olds, though some breeders and most race horse trainers do start young horses in a cart or under saddle at that age. The most common age for young horses to begin training under saddle is the age of three. A few breeds and disciplines wait until the animal is four.
|
where will the 2018 nba draft be held | 2018 NBA draft - wikipedia
The 2018 NBA draft will be held on June 21, 2018 at the Barclays Center in Brooklyn, New York. National Basketball Association (NBA) teams will take turns selecting amateur U.S. college basketball players and other eligible players, including international players. It will be televised nationally by ESPN. This draft will be the last to use the original weighted lottery system, which gives teams near the bottom of the league standings better odds at the top three picks while teams higher up have worse odds; the change was agreed upon by the NBA on September 28, 2017, but will not be implemented until the 2019 draft. In the final year of that lottery system (and with the NBA draft lottery held in Chicago instead of New York), the Phoenix Suns won the first overall pick on May 15, 2018, with the Sacramento Kings getting the second overall pick and the Atlanta Hawks the third. The Suns ' selection is their first No. 1 overall selection in franchise history.
The invitation - only NBA Draft Combine was held in Chicago from May 16 to 20. The on - court element of the combine took place on May 18 and 19. A total of 69 players were invited to this year 's NBA Draft Combine; two top talents, Deandre Ayton and Luka Dončić, declined their invitations, the latter because he was involved with the 2018 EuroLeague Final Four at the time.
The NBA draft lottery took place during the playoffs on May 15, 2018. This was the last time the league used the older weighted system for the draft lottery, which boosts draft odds for teams at the bottom of the standings. Starting in 2019, the updated lottery gives the bottom three teams equal odds for the No. 1 pick, while some teams higher up receive an increased chance at a top - four pick rather than a top - three pick, the aim being to discourage teams from losing games on purpose for higher draft picks (and potentially better talent). There were also two tiebreakers for lottery odds this season: in the first, the Dallas Mavericks received one more lottery result favoring them for the No. 1 pick than Atlanta after the two teams split the odds; in the second, the Chicago Bulls split odds with the Sacramento Kings, with the Bulls ending up with slightly better odds.
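The lottery described above is, at bottom, a weighted random draw: each team holds a number of ping - pong ball combinations proportional to its odds, and tied teams split their combinations, with the odd one going to the tiebreaker winner. The Python sketch below is a minimal illustration of that idea only; the team names are taken from the text, but the combination counts are illustrative assumptions loosely modeled on the pre-2019 odds table, not the official figures.

```python
import random

# Illustrative combination counts out of 1,000 (assumptions for demonstration,
# not the official NBA odds table, which covers all 14 lottery teams).
chances = {
    "Suns": 250,        # worst record: most combinations
    "Grizzlies": 199,
    "Mavericks": 138,   # tied teams split combinations; the odd one out
    "Hawks": 137,       # goes to the tiebreaker winner (here, Dallas)
    "Other lottery teams": 276,
}

def draw_top_pick(chances):
    """Draw one team, weighted by its number of lottery combinations."""
    pool = [team for team, n in chances.items() for _ in range(n)]
    return random.choice(pool)

# Estimate each team's chance at the No. 1 pick by simulation.
trials = 100_000
counts = {team: 0 for team in chances}
for _ in range(trials):
    counts[draw_top_pick(chances)] += 1

for team, wins in counts.items():
    print(f"{team}: ~{wins / trials:.1%} chance at the top pick")
```

Running the sketch prints estimates near 25 %, 19.9 %, 13.8 % and 13.7 % for the four named teams, which mirrors how a tiebreaker leaves the coin - flip winner with a marginally better chance than the team it tied with.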
^ 1: The Brooklyn Nets ' pick was automatically conveyed to the Cleveland Cavaliers this year.
^ 2: The Los Angeles Lakers ' pick was conveyed to the Philadelphia 76ers since the pick turned unprotected for them this year and was n't in the Nos. 2 - 5 range.
^ 3: The Detroit Pistons ' pick was conveyed to the Los Angeles Clippers since it was outside the top 4.
The draft is conducted under the eligibility rules established in the league 's 2017 collective bargaining agreement (CBA) with its player 's union. The previous CBA that ended the 2011 lockout instituted no immediate changes to the draft, but called for a committee of owners and players to discuss future changes.
The NBA has since expanded the draft combine to include players with remaining college eligibility (who, like players without college eligibility, can only attend by invitation).
Players who are not automatically eligible have to declare their eligibility for the draft by notifying the NBA offices in writing no later than 60 days before the draft. For the 2018 draft, the date fell on April 22. After that date, "early entry '' players are able to attend NBA pre-draft camps and individual team workouts to show off their skills and obtain feedback regarding their draft positions. Under the CBA a player may withdraw his name from consideration from the draft at any time before the final declaration date, which is 10 days before the draft. Under current NCAA rules, players have until May 30 (10 days after the draft combine) to withdraw from the draft and retain college eligibility.
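The two deadlines in this paragraph follow directly from the draft date; a short Python check of the 60 - day declaration rule and the 10 - day withdrawal rule, using only dates stated in the text, is below.

```python
from datetime import date, timedelta

draft_day = date(2018, 6, 21)                       # 2018 NBA draft
declaration_deadline = draft_day - timedelta(days=60)
withdrawal_deadline = draft_day - timedelta(days=10)

print(declaration_deadline)  # 2018-04-22, matching the April 22 date above
print(withdrawal_deadline)   # 2018-06-11, the league's final withdrawal date
```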
A player who has hired an agent forfeits his remaining college eligibility regardless of whether he is drafted.
A record - high 236 underclassmen draft prospects (i.e., players with remaining college eligibility) had declared by the April 22 deadline, with 181 of these players coming from college. Names in bold mean the player has hired an agent or has announced plans to do so. Currently, 68 players have declared their intention to enter the draft with an agent.
International players who declared this year and had not previously declared in a prior year can withdraw from the draft until June 11, about 10 days before the draft begins. Initially, 55 players expressed interest in entering the 2018 draft, one of whom was a player who came directly out of high school in Canada.
Players who do not meet the criteria for "international '' players are automatically eligible if they meet any of the following criteria:
Players who meet the criteria for "international '' players are automatically eligible if they meet any of the following criteria:
Prior to the day of the draft, the following trades were made and resulted in exchanges of draft picks between the teams below.
|
who sang the original winnie the pooh theme song | Winnie the Pooh (song) - wikipedia
"Winnie the Pooh '' is the title song for the franchise of the same name. It is musically emblematic of the most successful branding Disney currently owns and has been used in most merchandising models of the brand since the song 's first publication in 1966 in the musical film featurette Winnie the Pooh and the Honey Tree. In film, the song is generally utilized in the title sequence. The lyric gives an overview of the characters and the roles each plays in relation to Pooh himself. The song has been used in every theatrically released Pooh film as well as most of the television series. The songwriters are the Sherman Brothers, who have written the grand majority of Winnie the Pooh songs and musical numbers since 1966. It is unknown who performed the song. The song was also performed by Carly Simon. A music video was released for this version and it was included in The Many Adventures of Winnie the Pooh DVD.
Notably, Tigger is the only original character from the books (a grouping that thus does not include Gopher, Kessie or Lumpy) not to be named in this song, which is probably why he gets his own introduction song, "The Wonderful Thing About Tiggers '', when he first appears in Winnie the Pooh and the Blustery Day. In the 2011 film Winnie the Pooh, Tigger is finally named in the song, after Kanga and Roo, with the line: "And Tigger, too. ''
In 1987, an eighteen - month - old Texas infant named Jessica McClure fell into a well, and it took forty - five hours to rescue her. The incident became national news over the four - day ordeal. A mining engineer was eventually brought in to help supervise and coordinate the rescue effort. TV viewers watched as paramedics and rescuers, drilling experts and contractors worked tirelessly to save the baby 's life. Meanwhile, they were reassured when they heard Jessica 's voice singing "Winnie the Pooh '' from deep in the well. As long as she was still singing, they knew she was still alive. Forty - five hours after McClure fell into the well, the shaft and tunnel were finally completed.
|
a key factor in the slowness of china to industrialize was a | Chinese industrialization - wikipedia
In the 1960s, about 60 % of the Chinese labor force was employed in agriculture. The figure remained more or less constant throughout the early phase of industrialization between the 1960s and 1990s, but in view of the rapid population growth this amounted to a rapid growth of the industrial sector in absolute terms, of up to 8 % per year during the 1970s. By 1990, the fraction of the labor force employed in agriculture had fallen to about 30 %, and by 2000 it had fallen still further.
Steel was first made in the State of Wu in China, preceding the Europeans by over 1,000 years. The Song dynasty saw intensive industry in steel production and coal mining. No other premodern state came as close to starting an industrial revolution as the Southern Song. The lack of potential customers for products manufactured by machines instead of artisans, owing to the absence of a "middle class '' in Song China, is cited as a reason for the failure to industrialize.
Western historians debate whether bloomery - based ironworking ever spread to China from the Middle East. Around 500 BC, however, metalworkers in the southern state of Wu developed an iron smelting technology that would not be practiced in Europe until late medieval times. In Wu, iron smelters achieved a temperature of 1130 ° C, hot enough to be considered a blast furnace which could create cast iron. At this temperature, iron combines with 4.3 % carbon and melts. As a liquid, iron can be cast into molds, a method far less laborious than individually forging each piece of iron from a bloom.
Cast iron is rather brittle and unsuitable for striking implements. It can, however, be decarburized to steel or wrought iron by heating it in air for several days. In China, these ironworking methods spread northward, and by 300 BC, iron was the material of choice throughout China for most tools and weapons. A mass grave in Hebei province, dated to the early 3rd century BC, contains several soldiers buried with their weapons and other equipment. The artifacts recovered from this grave are variously made of wrought iron, cast iron, malleabilized cast iron, and quench - hardened steel, with only a few, probably ornamental, bronze weapons.
During the Han Dynasty (202 BC -- 220 AD), the government established ironworking as a state monopoly (yet repealed during the latter half of the dynasty, returned to private entrepreneurship) and built a series of large blast furnaces in Henan province, each capable of producing several tons of iron per day. By this time, Chinese metallurgists had discovered how to puddle molten pig iron, stirring it in the open air until it lost its carbon and became wrought iron. (In Chinese, the process was called chao, literally, stir frying.) By the 1st century BC, Chinese metallurgists had found that wrought iron and cast iron could be melted together to yield an alloy of intermediate carbon content, that is, steel. According to legend, the sword of Liu Bang, the first Han emperor, was made in this fashion. Some texts of the era mention "harmonizing the hard and the soft '' in the context of ironworking; the phrase may refer to this process. Also, the ancient city of Wan (Nanyang) from the Han period forward was a major center of the iron and steel industry. Along with their original methods of forging steel, the Chinese had also adopted the production methods of creating Wootz steel, an idea imported from India to China by the 5th century.
The Chinese during the ancient Han Dynasty were also the first to apply hydraulic power (i.e. a waterwheel) in working the inflatable bellows of the blast furnace. This was recorded in the year 31 AD, an innovation of the engineer Du Shi, prefect of Nanyang. Although Du Shi was the first to apply water power to bellows in metallurgy, the first drawn and printed illustration of its operation with water power came in 1313, in the Yuan Dynasty era text called the Nong Shu. In the 11th century, there is evidence of the production of steel in Song China using two techniques: a "berganesque '' method that produced inferior, heterogeneous steel and a precursor to the modern Bessemer process that utilized partial decarbonization via repeated forging under a cold blast. By the 11th century, there was also a large amount of deforestation in China due to the iron industry 's demands for charcoal. However, by this time the Chinese had figured out how to use bituminous coke to replace the use of charcoal, and with this switch in resources many acres of prime timberland in China were spared. This switch in resources from charcoal to coal was later used in Europe by the 17th century.
The economy of the Song Dynasty was one of the most prosperous and advanced economies in the medieval world. Song Chinese invested their funds in joint stock companies and in multiple sailing vessels at a time when monetary gain was assured from the vigorous overseas trade and indigenous trade along the Grand Canal and Yangzi River. Prominent merchant families and private businesses were allowed to occupy industries that were not already government - operated monopolies. Both private and government - controlled industries met the needs of a growing Chinese population in the Song. Both artisans and merchants formed guilds which the state had to deal with when assessing taxes, requisitioning goods, and setting standard worker 's wages and prices on goods.
The iron industry was pursued by both private entrepreneurs who owned their own smelters as well as government - supervised smelting facilities. The Song economy was stable enough to produce over a hundred million kg (over two hundred million lb) of iron product a year. Large scale deforestation in China would have continued if not for the 11th century innovation of the use of coal instead of charcoal in blast furnaces for smelting cast iron. Much of this iron was reserved for military use in crafting weapons and armoring troops, but some was used to fashion the many iron products needed to fill the demands of the growing indigenous market. The iron trade within China was furthered by the building of new canals which aided the flow of iron products from production centers to the large market found in the capital city.
The annual output of minted copper currency in 1085 alone reached roughly six billion coins. The most notable advancement in the Song economy was the establishment of the world 's first government issued paper - printed money, known as Jiaozi (see also Huizi). For the printing of paper money alone, the Song court established several government - run factories in the cities of Huizhou, Chengdu, Hangzhou, and Anqi. The size of the workforce employed in paper money factories was large; it was recorded in 1175 that the factory at Hangzhou employed more than a thousand workers a day.
The economic power of Song China heavily influenced foreign economies abroad. The Moroccan geographer al - Idrisi wrote in 1154 of the prowess of Chinese merchant ships in the Indian Ocean and of their annual voyages that brought iron, swords, silk, velvet, porcelain, and various textiles to places such as Aden (Yemen), the Indus River, and the Euphrates in modern - day Iraq. Foreigners, in turn, affected the Chinese economy. For example, many West Asian and Central Asian Muslims went to China to trade, becoming a preeminent force in the import and export industry, while some were even appointed as officers supervising economic affairs. Sea trade with the Southeast Pacific, the Hindu world, the Islamic world, and the East African world brought merchants great fortune and spurred an enormous growth in the shipbuilding industry of Song - era Fujian province. However, there was risk involved in such long overseas ventures. To reduce the risk of losing money on maritime trade missions abroad, the historians Ebrey, Walthall, and Palais write:
(Song era) investors usually divided their investment among many ships, and each ship had many investors behind it. One observer thought eagerness to invest in overseas trade was leading to an outflow of copper cash. He wrote, ' People along the coast are on intimate terms with the merchants who engage in overseas trade, either because they are fellow - countrymen or personal acquaintances... (They give the merchants) money to take with them on their ships for purchase and return conveyance of foreign goods. They invest from ten to a hundred strings of cash, and regularly make profits of several hundred percent '.
Some historians such as David Landes and Max Weber credit the different belief systems in China and Europe with dictating where the revolution occurred. The religion and beliefs of Europe were largely products of Judaeo - Christianity, Socrates, Plato, and Aristotle. Conversely, Chinese society was founded on men like Confucius, Mencius, Han Feizi (Legalism), Lao Tzu (Taoism), and Buddha (Buddhism). The key difference between these belief systems was that those from Europe focused on the individual, while Chinese beliefs centered around relationships between people. The family unit was more important than the individual for the large majority of Chinese history, and this may have played a role in why the Industrial Revolution took much longer to occur in China. There was the additional difference as to whether people looked backwards to a reputedly glorious past for answers to their questions or looked hopefully to the future. However, applications of this view have been dismissed as racist.
By contrast, there is a historical school which Jack Goldstone has dubbed the "English school '' which argues that China was not essentially different from Europe, and that many of the assertions that it was are based on bad historical evidence.
Mark Elvin argues that China was in a high - level equilibrium trap, in which non-industrial methods were efficient enough to prevent the use of industrial methods requiring high initial capital. Kenneth Pomeranz, in The Great Divergence, argues that Europe and China were remarkably similar in 1700, and that the crucial differences which created the Industrial Revolution in Europe were sources of coal near manufacturing centers, and raw materials such as food and wood from the New World, which allowed Europe to expand economically in a way that China could not.
Some have compared England directly to China, but the comparison between England and China has been viewed as a faulty one, since China is so much larger than England. A more relevant comparison would be between China 's Yangtze Delta region, China 's most advanced region, the location of Hangzhou, Nanjing and contemporary Shanghai, and England. This region of China is said to have had similar labor costs to England. According to Andre Gunder Frank, "Particularly significant is the comparison of Asia 's 66 percent share of world population, confirmed above all by estimates for 1750, with its 80 percent share of production in the world at the same time. So, two thirds of the world 's people in Asia produced four - fifths of total world output, while one - fifth of world population in Europe produced only a part of the remaining one - fifth share of world production, to which Europeans and Americans also contributed. '' China was one of Asia 's most advanced economies at the time and was in the middle of its 18th century boom brought on by a long period of stability under the Qing Dynasty.
Industrialization of China did occur on a significant scale only from the 1950s, in the Maoist Great Leap Forward (simplified Chinese: 大 跃进; traditional Chinese: 大 躍進; pinyin: Dàyuèjìn). This was the plan used from 1958 to 1961 to transform the People 's Republic of China from a primarily agrarian economy by peasant farmers into a modern communist society through the process of agriculturalization and industrialization. Mao Zedong based this program on the Theory of Productive Forces. It ended in catastrophe due to widespread drought towards the end of the period that led to widespread famine.
As political stability was gradually restored following the Cultural Revolution of the late 1960s, a renewed drive for coordinated, balanced development was set in motion under the leadership of Premier Zhou Enlai. To revive efficiency in industry, Communist Party of China committees were returned to positions of leadership over the revolutionary committees, and a campaign was carried out to return skilled and highly educated personnel to the jobs from which they had been displaced during the Cultural Revolution. Universities began to reopen, and foreign contacts were expanded. Once again the economy suffered from imbalances in the capacities of different industrial sectors and an urgent need for increased supplies of modern inputs for agriculture. In response to these problems, there was a significant increase in investment, including the signing of contracts with foreign firms for the construction of major facilities for chemical fertilizer production, steel finishing, and oil extraction and refining. The most notable of these contracts was for thirteen of the world 's largest and most modern chemical fertilizer plants. During this period, industrial output grew at an average rate of 8 percent a year.
At the milestone Third Plenum of the National Party Congress 's 11th Central Committee which opened on December 22, 1978, the party leaders decided to undertake a program of gradual but fundamental reform of the economic system. They concluded that the Maoist version of the centrally planned economy had failed to produce efficient economic growth and had caused China to fall far behind not only the industrialized nations of the West but also the new industrial powers of Asia: Japan, the Republic of Korea, Singapore, Taiwan, and Hong Kong. In the late 1970s, while Japan and Hong Kong rivaled European countries in modern technology, China 's citizens had to make do with barely sufficient food supplies, rationed clothing, inadequate housing, and a service sector that was inadequate and inefficient. All of these shortcomings embarrassed China internationally.
The purpose of the reform program was not to abandon communism but to make it work better by substantially increasing the role of market mechanisms in the system and by reducing -- not eliminating -- government planning and direct control. The process of reform was incremental. New measures were first introduced experimentally in a few localities and then were popularized and disseminated nationally if they proved successful. By 1987 the program had achieved remarkable results in increasing supplies of food and other consumer goods and had created a new climate of dynamism and opportunity in the economy. At the same time, however, the reforms also had created new problems and tensions, leading to intense questioning and political struggles over the program 's future.
The first few years of the reform program were designated the "period of readjustment, '' during which key imbalances in the economy were to be corrected and a foundation was to be laid for a well - planned modernization drive. The schedule of Hua Guofeng 's ten - year plan was discarded, although many of its elements were retained. The major goals of the readjustment process were to expand exports rapidly; overcome key deficiencies in transportation, communications, coal, iron, steel, building materials, and electric power; and redress the imbalance between light and heavy industry by increasing the growth rate of light industry and reducing investment in heavy industry.
In 1984, the fourteen largest coastal cities were designated as economic development zones, including Dalian, Tianjin, Shanghai, and Guangzhou, all of which were major commercial and industrial centers. These zones were to create productive exchanges between foreign firms with advanced technology and major Chinese economic networks.
|
which event led to the american involvement of the spanish-american war | Spanish -- American war - wikipedia
Result: American victory
Belligerents: United States; Spain
The higher naval losses are attributed to the disastrous naval defeats inflicted on the Spanish at Manila Bay and Santiago de Cuba.
The Spanish -- American War (Spanish: Guerra hispano - americana or Guerra hispano - estadounidense; Filipino: Digmaang Espanyol - Amerikano) was fought between the United States and Spain in 1898. Hostilities began in the aftermath of the internal explosion of the USS Maine in Havana Harbor in Cuba leading to United States intervention in the Cuban War of Independence. American acquisition of Spain 's Pacific possessions led to its involvement in the Philippine Revolution and ultimately in the Philippine -- American War.
The main issue was Cuban independence. Revolts had been occurring for some years in Cuba against Spanish rule. The U.S. later backed these revolts upon entering the Spanish -- American War. There had been war scares before, as in the Virginius Affair in 1873, but in the late 1890s, U.S. public opinion was agitated by anti-Spanish propaganda led by newspaper publishers such as Joseph Pulitzer and William Randolph Hearst, who used yellow journalism to call for war. The business community across the United States had just recovered from a deep depression and feared that a war would reverse the gains. It lobbied vigorously against going to war.
The United States Navy armoured cruiser Maine had mysteriously sunk in Havana Harbor; political pressures from the Democratic Party pushed the administration of Republican President William McKinley into a war that he had wished to avoid.
President McKinley signed a joint Congressional resolution demanding Spanish withdrawal and authorizing the President to use military force to help Cuba gain independence on April 20, 1898. In response, Spain severed diplomatic relations with the United States on April 21. On the same day, the U.S. Navy began a blockade of Cuba. On April 23, Spain stated that it would declare war if U.S. forces invaded its territory. On April 25, the U.S. Congress declared that a state of war between the U.S. and Spain had de facto existed since April 21, the day the blockade of Cuba had begun. The United States had sent an ultimatum to Spain demanding that it surrender control of Cuba; because Spain did not reply soon enough, the United States assumed that Spain had ignored the ultimatum and intended to continue occupying Cuba.
The ten - week war was fought in both the Caribbean and the Pacific. As the American agitators for war well knew, U.S. naval power proved decisive, allowing expeditionary forces to disembark in Cuba against a Spanish garrison already facing nationwide Cuban insurgent attacks and further wasted by yellow fever. American, Cuban, and Philippine forces obtained the surrender of Santiago de Cuba and Manila despite the good performance of some Spanish infantry units and fierce fighting for positions such as San Juan Hill. Madrid sued for peace after two obsolete Spanish squadrons were sunk at Santiago de Cuba and Manila Bay and a third, more modern fleet was recalled home to protect the Spanish coasts.
The result was the 1898 Treaty of Paris, negotiated on terms favourable to the U.S. which allowed it temporary control of Cuba and ceded ownership of Puerto Rico, Guam and the Philippine islands. The cession of the Philippines involved payment of $20 million ($575,760,000 today) to Spain by the U.S. to cover infrastructure owned by Spain.
The defeat and loss of the last remnants of the Spanish Empire was a profound shock to Spain 's national psyche and provoked a thorough philosophical and artistic revaluation of Spanish society known as the Generation of ' 98. The United States gained several island possessions spanning the globe and a rancorous new debate over the wisdom of expansionism. It was one of only five US wars (against a total of eleven sovereign states) to have been formally declared by the U.S. Congress.
The combined problems arising from the Peninsular War (1807 -- 1814), the loss of most of its colonies in the Americas in the early 19th - century Spanish American wars of independence, and three Carlist Wars (1832 -- 1876) effected a new interpretation of Spain 's remaining empire. Liberal Spanish elites like Antonio Cánovas del Castillo and Emilio Castelar offered new interpretations of the concept of "empire '' to dovetail with Spain 's emerging nationalism. Cánovas made clear in an address to the University of Madrid in 1882 his view of the Spanish nation as based on shared cultural and linguistic elements -- on both sides of the Atlantic -- that tied Spain 's territories together.
Cánovas saw Spanish imperialism as markedly different in its methods and purposes of colonization from those of rival empires like the British or French. Spaniards regarded the spreading of civilization and Christianity as Spain 's major objective and contribution to the New World. The concept of cultural unity bestowed special significance on Cuba, which had been Spanish for almost four hundred years, and was viewed as an integral part of the Spanish nation. The focus on preserving the empire would have negative consequences for Spain 's national pride in the aftermath of the Spanish -- American War.
In 1823, the fifth American President, James Monroe (1758 - 1831, served 1817 - 1825), enunciated the Monroe Doctrine, which stated that the United States would not tolerate further efforts by European governments to retake or expand their colonial holdings in the Americas or to interfere with the newly independent states in the hemisphere; at the same time, the doctrine stated that the U.S. would respect the status of the existing European colonies. Before the American Civil War (1861 - 1865), Southern interests attempted to have the United States purchase Cuba and convert it into a new slave territory. The Ostend Manifesto proposal of 1854 failed, and national attention shifted to the growing sectional conflict and the threat of civil war.
After the American Civil War and Cuba 's Ten Years ' War, U.S. businessmen began monopolizing the devalued sugar markets in Cuba. In 1894, 90 % of Cuba 's total exports went to the United States, which also provided 40 % of Cuba 's imports. Cuba 's total exports to the U.S. were almost twelve times larger than its exports to its mother country, Spain. U.S. business interests indicated that while Spain still held political authority over Cuba, economic authority over Cuba was, in practice, shifting to the United States.
The U.S. became interested in a trans - isthmus canal across Central America, either in Nicaragua or in Panama, where the Panama Canal would later be built (1903 - 1914), and realized the need for naval protection. Captain Alfred Thayer Mahan was an especially influential theorist; his ideas were much admired by Theodore Roosevelt, the future 26th President, as the U.S. rapidly built a powerful naval fleet of steel warships in the 1880s and 1890s. Roosevelt served as Assistant Secretary of the Navy in 1897 -- 1898 and was an aggressive supporter of a war with Spain over Cuba.
Meanwhile, the "Cuba Libre '' movement, led by Cuban intellectual José Martí, had established offices in Florida and New York to buy and smuggle weapons. It mounted a large propaganda campaign to generate sympathy that would lead to official pressure on Spain. Protestant churches and Democratic farmers were supportive, but business interests called on Washington to ignore them.
Although Cuba attracted American attention, little note was made of the Philippines, Guam, or Puerto Rico. Historians note that there was little popular demand in the United States for an overseas colonial empire, even though at this time the long - established colonial empires of the United Kingdom, with its British Empire "on which the sun never set '', and of France maintained theirs with further growth and additions, now joined by the German Empire, the Italian Empire and the Empire of Japan. These new and growing empires were dramatically expanding their overseas holdings during the late 19th century in unclaimed areas and among native and indigenous peoples in the less developed regions of Africa, Asia and the Pacific.
The first serious bid for Cuban independence, the Ten Years ' War, erupted in 1868 and was subdued by the authorities a decade later. Neither the fighting nor the reforms in the Pact of Zanjón (February 1878) quelled the desire of some revolutionaries for wider autonomy and ultimately independence. One such revolutionary, José Martí, continued to promote Cuban financial and political autonomy in exile. In early 1895, after years of organizing, Martí launched a three - pronged invasion of the island.
The plan called for one group from Santo Domingo led by Máximo Gómez, one group from Costa Rica led by Antonio Maceo Grajales, and another from the United States (preemptively thwarted by U.S. officials in Florida) to land in different places on the island and provoke an uprising. While their call for revolution, the grito de Baíre, was successful, the result was not the grand show of force Martí had expected. With a quick victory effectively lost, the revolutionaries settled in to fight a protracted guerrilla campaign.
Antonio Cánovas del Castillo, the architect of Spain 's Restoration constitution and the prime minister at the time, ordered General Arsenio Martínez - Campos, a distinguished veteran of the war against the previous uprising in Cuba, to quell the revolt. Campos 's reluctance to accept his new assignment and his method of containing the revolt to the province of Oriente earned him criticism in the Spanish press.
The mounting pressure forced Cánovas to replace General Campos with General Valeriano Weyler, a soldier who had experience in quelling rebellions in overseas provinces and the Spanish metropole. Weyler deprived the insurgency of weaponry, supplies, and assistance by ordering the residents of some Cuban districts to move to reconcentration areas near the military headquarters. This strategy was effective in slowing the spread of rebellion. In the United States, it fueled the fire of anti-Spanish propaganda. In a political speech, President William McKinley used this to condemn Spanish actions against the armed rebels, saying that this "was not civilized warfare '' but "extermination ''.
The Spanish Government regarded Cuba as a province of Spain rather than a colony, and depended on it for prestige and trade, and as a training ground for the army. Spanish Prime Minister Antonio Cánovas del Castillo announced that "the Spanish nation is disposed to sacrifice to the last peseta of its treasure and to the last drop of blood of the last Spaniard before consenting that anyone snatch from it even one piece of its territory. '' He had long dominated and stabilized Spanish politics. He was assassinated in 1897 by Italian anarchist Michele Angiolillo, leaving a Spanish political system that was not stable and could not risk a blow to its prestige.
The eruption of the Cuban revolt, Weyler 's measures, and the popular fury these events whipped up proved to be a boon to the newspaper industry in New York City, where Joseph Pulitzer of the New York World and William Randolph Hearst of the New York Journal recognized the potential for great headlines and stories that would sell copies. Both papers denounced Spain, but had little influence outside New York. American opinion generally saw Spain as a hopelessly backward power that was unable to deal fairly with Cuba. American Catholics were divided before the war began, but supported it enthusiastically once it started.
The U.S. had important economic interests that were being harmed by the prolonged conflict and deepening uncertainty about the future of Cuba. Shipping firms that had relied heavily on trade with Cuba now suffered losses as the conflict continued unresolved. These firms pressed Congress and McKinley to seek an end to the revolt. Other American business concerns, specifically those who had invested in Cuban sugar, looked to the Spanish to restore order. Stability, not war, was the goal of both interests. How stability would be achieved would depend largely on the ability of Spain and the U.S. to work out their issues diplomatically.
While tension increased between the Cubans and the Spanish Government, popular support for intervention began to spring up in the United States, due to the emergence of the "Cuba Libre '' movement and the fact that many Americans had drawn parallels between the American Revolution and the Cuban revolt, seeing the Spanish Government as a tyrannical colonial oppressor. Historian Louis Pérez notes that "The proposition of war in behalf of Cuban independence took hold immediately and held on thereafter. Such was the sense of the public mood. '' At the time many poems and songs were written in the United States to express support for the "Cuba Libre '' movement. At the same time, many African Americans, facing growing racial discrimination and the increasing curtailment of their civil rights, wanted to take part in the war because they saw it as a way to advance the cause of equality, hoping that service to the country would help them gain political and public respect among the wider population.
President McKinley, well aware of the political complexity surrounding the conflict, wanted to end the revolt peacefully. In accordance with this policy, McKinley began to negotiate with the Spanish government, hoping that the negotiations would be able to end the yellow journalism in the United States, and therefore end the loudest calls to go to war with Spain. An attempt had been made to negotiate a peace before McKinley took office; however, the Spanish refused to take part in the negotiations. In 1897 McKinley appointed Stewart L. Woodford as the new minister to Spain, who again offered to negotiate a peace. In October 1897, the Spanish government still refused the United States ' offer to negotiate between the Spanish and the Cubans, but promised the U.S. it would give the Cubans more autonomy. However, with the election of a more liberal Spanish government in November, Spain began to change its policies in Cuba. First, the new Spanish government told the United States that it was willing to offer a change in the Reconcentration policies (the main set of policies feeding yellow journalism in the United States) if the Cuban rebels agreed to a cessation of hostilities. This time the rebels refused the terms in hopes that continued conflict would lead to U.S. intervention and the creation of an independent Cuba. The liberal Spanish government also recalled the Spanish Governor General Valeriano Weyler from Cuba. This action alarmed many Cubans loyal to Spain.
The Cubans loyal to Weyler began planning large demonstrations to take place when the next Governor General, Ramón Blanco, arrived in Cuba. U.S. consul Fitzhugh Lee learned of these plans and sent a request to the U.S. State Department to send a U.S. warship to Cuba. This request led to the USS Maine being sent to Cuba. While the Maine was docked in Havana, an explosion sank the ship. The sinking of the Maine was blamed on the Spanish and made the possibility of a negotiated peace very slim. Throughout the negotiation process, the major European powers, especially Britain, France, and Russia, generally supported the American position and urged Spain to give in. Spain repeatedly promised specific reforms that would pacify Cuba but failed to deliver; American patience ran out.
McKinley sent the USS Maine to Havana to ensure the safety of American citizens and interests, and to underscore the urgent need for reform. Naval forces were moved into position to attack simultaneously on several fronts if war could not be avoided. As Maine left Florida, a large part of the North Atlantic Squadron was moved to Key West and the Gulf of Mexico. Other ships were moved just off the shore of Lisbon, and still others to Hong Kong.
At 9: 40 on the evening of February 15, 1898, Maine sank in Havana Harbor after suffering a massive explosion. While McKinley urged patience and did not declare that Spain had caused the explosion, the deaths of 250 out of 355 sailors on board focused American attention. McKinley asked Congress to appropriate $50 million for defense, and Congress unanimously obliged. Most American leaders took the position that the cause of the explosion was unknown, but public attention was now riveted on the situation and Spain could not find a diplomatic solution to avoid war. Spain appealed to the European powers, most of whom advised it to accept U.S. conditions for Cuba in order to avoid war. Germany urged a united European stand against the United States but took no action.
The U.S. Navy 's investigation, made public on March 28, concluded that the ship 's powder magazines were ignited when an external explosion was set off under the ship 's hull. This report poured fuel on popular indignation in the U.S., making the war inevitable. Spain 's investigation came to the opposite conclusion: the explosion originated within the ship. Other investigations in later years came to various contradictory conclusions, but had no bearing on the coming of the war. In 1974, Admiral Hyman George Rickover had his staff look at the documents and decided there was an internal explosion. A study commissioned by National Geographic magazine in 1999, using AME computer modelling, stated that the explosion could have been caused by a mine, but no definitive evidence was found.
After the Maine was destroyed, New York City newspaper publishers Hearst and Pulitzer decided that the Spanish were to blame, and they publicized this theory as fact in their papers. They both used sensationalistic and astonishing accounts of "atrocities '' committed by the Spanish in Cuba by using headlines in their newspapers, such as "Spanish Murderers '' and "Remember The Maine ''. Their press exaggerated what was happening and how the Spanish were treating the Cuban prisoners. The stories were based on factual accounts, but most of the time, the articles that were published were embellished and written with incendiary language causing emotional and often heated responses among readers. A common myth falsely states that when illustrator Frederic Remington said there was no war brewing in Cuba, Hearst responded: "You furnish the pictures and I 'll furnish the war. ''
This new "yellow journalism '' was, however, uncommon outside New York City, and historians no longer consider it the major force shaping the national mood. Public opinion nationwide did demand immediate action, overwhelming the efforts of President McKinley, Speaker of the House Thomas Brackett Reed, and the business community to find a negotiated solution. Wall Street, big business, high finance and Main Street businesses across the country were vocally opposed to war and demanded peace. After years of severe depression, the economic outlook for the domestic economy was suddenly bright again in 1897. However, the uncertainties of warfare posed a serious threat to full economic recovery. "War would impede the march of prosperity and put the country back many years, '' warned the New Jersey Trade Review. The leading railroad magazine editorialized, "From a commercial and mercenary standpoint it seems peculiarly bitter that this war should come when the country had already suffered so much and so needed rest and peace. '' McKinley paid close attention to the strong anti-war consensus of the business community, and strengthened his resolve to use diplomacy and negotiation rather than brute force to end the Spanish tyranny in Cuba.
A speech delivered by Republican Senator Redfield Proctor of Vermont on March 17, 1898, thoroughly analyzed the situation and greatly strengthened the pro-war cause. Proctor concluded that war was the only answer. Many in the business and religious communities, which had until then opposed war, switched sides, leaving McKinley and Speaker Reed almost alone in their resistance to a war. On April 11, McKinley ended his resistance and asked Congress for authority to send American troops to Cuba to end the civil war there, knowing that Congress would force a war.
On April 19, while Congress was considering joint resolutions supporting Cuban independence, Republican Senator Henry M. Teller of Colorado proposed the Teller Amendment to ensure that the U.S. would not establish permanent control over Cuba after the war. The amendment, disclaiming any intention to annex Cuba, passed the Senate 42 to 35; the House concurred the same day, 311 to 6. The amended resolution demanded Spanish withdrawal and authorized the President to use as much military force as he thought necessary to help Cuba gain independence from Spain. President McKinley signed the joint resolution on April 20, 1898, and the ultimatum was sent to Spain. In response, Spain severed diplomatic relations with the United States on April 21. On the same day, the U.S. Navy began a blockade of Cuba. On April 23, Spain stated that it would declare war if US forces invaded its territory. On April 25, the U.S. Congress declared that a state of war between the U.S. and Spain had de facto existed since April 21, the day the blockade of Cuba had begun.
The Navy was ready, but the Army was not well - prepared for the war and made radical changes in plans and quickly purchased supplies. In the spring of 1898, the strength of the Regular U.S. Army was just 25,000 men. The Army wanted 50,000 new men but received over 220,000 through volunteers and the mobilization of state National Guard units, even gaining nearly 100,000 men on the first night after the explosion of the USS Maine.
The Department of State of the United States of America summarizes the aftermath of the war for the Filipino people:
After its defeat in the Spanish - American War of 1898, Spain ceded its longstanding colony of the Philippines to the United States in the Treaty of Paris. On February 4, 1899, just two days before the U.S. Senate ratified the treaty, fighting broke out between American forces and Filipino nationalists led by Emilio Aguinaldo, who sought independence rather than a change in colonial rulers. The ensuing Philippine - American War lasted three years and resulted in the death of over 4,200 American and over 20,000 Filipino combatants. As many as 200,000 Filipino civilians died from violence, famine, and disease.
In 1901, novelist Mark Twain wrote about the aftermath of the war for the Philippines:
We have robbed a trusting friend of his land and his liberty; we have invited clean young men to shoulder a discredited musket and do bandit 's work under a flag which bandits have been accustomed to fear, not to follow; we have debauched America 's honor and blackened her face before the world.
In his War and Empire, Prof. Paul Atwood of the University of Massachusetts (Boston) writes:
The Spanish - American War was fomented on outright lies and trumped up accusations against the intended enemy... War fever in the general population never reached a critical temperature until the accidental sinking of the USS Maine was deliberately, and falsely, attributed to Spanish villainy... In a cryptic message... Senator Lodge wrote that ' There may be an explosion any day in Cuba which would settle a great many things. We have got a battleship in the harbor of Havana, and our fleet, which overmatches anything the Spanish have, is masked at the Dry Tortugas. '
In his autobiography, Theodore Roosevelt gave his views of the origins of the war:
Our own direct interests were great, because of the Cuban tobacco and sugar, and especially because of Cuba 's relation to the projected Isthmian (Panama) Canal. But even greater were our interests from the standpoint of humanity... It was our duty, even more from the standpoint of National honor than from the standpoint of National interest, to stop the devastation and destruction. Because of these considerations I favored war.
In the 333 years of Spanish rule, the Philippines developed from a small overseas colony governed from the Viceroyalty of New Spain to a land with modern elements in the cities. The Spanish - speaking middle classes of the 19th century were mostly educated in the liberal ideas coming from Europe. Among these Ilustrados was the Filipino national hero José Rizal, who demanded larger reforms from the Spanish authorities. This movement eventually led to the Philippine Revolution against Spanish colonial rule. The revolution had been in a state of truce since the signing of the Pact of Biak - na - Bato in 1897, with revolutionary leaders having accepted exile outside of the country.
The first battle between American and Spanish forces was at Manila Bay where, on May 1, Commodore George Dewey, commanding the U.S. Navy 's Asiatic Squadron aboard USS Olympia, in a matter of hours defeated a Spanish squadron under Admiral Patricio Montojo. Dewey managed this with only nine wounded. With the German seizure of Tsingtao in 1897, Dewey 's squadron had become the only naval force in the Far East without a local base of its own, and was beset with coal and ammunition problems. Despite these problems, the Asiatic Squadron not only destroyed the Spanish fleet but also captured the harbor of Manila.
Following Dewey 's victory, Manila Bay was filled with the warships of Britain, Germany, France, and Japan. The German fleet of eight ships, ostensibly in Philippine waters to protect German interests, acted provocatively -- cutting in front of American ships, refusing to salute the United States flag (according to customs of naval courtesy), taking soundings of the harbor, and landing supplies for the besieged Spanish.
The Germans, with interests of their own, were eager to take advantage of whatever opportunities the conflict in the islands might afford. There was a fear at the time that the islands would become a German possession. The Americans called the bluff of the Germans, threatening conflict if the aggression continued, and the Germans backed down. At the time, the Germans expected the confrontation in the Philippines to end in an American defeat, with the revolutionaries capturing Manila and leaving the Philippines ripe for German picking.
Commodore Dewey transported Emilio Aguinaldo, a Filipino leader who had led rebellion against Spanish rule in the Philippines in 1896, from exile in Hong Kong to the Philippines to rally more Filipinos against the Spanish colonial government. By June, U.S. and Filipino forces had taken control of most of the islands, except for the walled city of Intramuros. On June 12, Aguinaldo proclaimed the independence of the Philippines.
On August 13, with American commanders unaware that a cease - fire had been signed between Spain and the U.S. on the previous day, American forces captured the city of Manila from the Spanish in the Battle of Manila. This battle marked the end of Filipino -- American collaboration, as the American action of preventing Filipino forces from entering the captured city of Manila was deeply resented by the Filipinos. This later led to the Philippine -- American War, which would prove to be more deadly and costly than the Spanish -- American War.
The U.S. had sent a force of some 11,000 ground troops to the Philippines. Armed conflict broke out between U.S. forces and the Filipinos when U.S. troops began to take the place of the Spanish in control of the country after the end of the war, resulting in the Philippine -- American War. On August 14, 1899, the Schurman Commission recommended that the U.S. retain control of the Philippines, possibly granting independence in the future.
On June 20, a U.S. fleet commanded by Captain Henry Glass, consisting of the protected cruiser USS Charleston and three transports carrying troops to the Philippines, entered Guam 's Apra Harbor, Captain Glass having opened sealed orders instructing him to proceed to Guam and capture it. Charleston fired a few cannon rounds at Fort Santa Cruz without receiving return fire. Two local officials, not knowing that war had been declared and believing the firing had been a salute, came out to Charleston to apologize for their inability to return the salute as they were out of gunpowder. Glass informed them that the U.S. and Spain were at war.
The following day, Glass sent Lt. William Braunersruehter to meet the Spanish Governor to arrange the surrender of the island and the Spanish garrison there. Some 54 Spanish infantry were captured and transported to the Philippines as prisoners of war. No U.S. forces were left on Guam, but the only U.S. citizen on the island, Frank Portusach, told Captain Glass that he would look after things until U.S. forces returned.
Theodore Roosevelt advocated intervention in Cuba, both for the Cuban people and to promote the Monroe Doctrine. While Assistant Secretary of the Navy, he placed the Navy on a war - time footing and prepared Dewey 's Asiatic Squadron for battle. He also worked with Leonard Wood in convincing the Army to raise an all - volunteer regiment, the 1st U.S. Volunteer Cavalry. Wood was given command of the regiment that quickly became known as the "Rough Riders ''.
The Americans planned to capture the city of Santiago de Cuba to destroy Linares ' army and Cervera 's fleet. To reach Santiago they had to pass through concentrated Spanish defenses in the San Juan Hills and at the small town of El Caney. The American forces were aided in Cuba by the pro-independence rebels led by General Calixto García.
For quite some time the Cuban public believed the United States government to possibly hold the key to its independence, and even annexation was considered for a time, which historian Louis Pérez explored in his book Cuba and the United States: Ties of Singular Intimacy. The Cubans harbored a great deal of discontent towards the Spanish Government, due to years of manipulation on the part of the Spanish. The prospect of getting the United States involved in the fight was considered by many Cubans as a step in the right direction. While the Cubans were wary of the United States ' intentions, the overwhelming support from the American public provided the Cubans with some peace of mind, because they believed that the United States was committed to helping them achieve their independence. However, with the imposition of the Platt Amendment of 1903 after the war, as well as economic and military manipulation on the part of the United States, Cuban sentiment towards the United States became polarized, with many Cubans disappointed with continuing American interference.
From June 22 to 24, the Fifth Army Corps under General William R. Shafter landed at Daiquirí and Siboney, east of Santiago, and established an American base of operations. A contingent of Spanish troops, having fought a skirmish with the Americans near Siboney on June 23, had retired to their lightly entrenched positions at Las Guasimas. An advance guard of U.S. forces under former Confederate General Joseph Wheeler ignored Cuban scouting parties and orders to proceed with caution. They caught up with and engaged the Spanish rearguard of about 2,000 soldiers led by General Antero Rubín who effectively ambushed them, in the Battle of Las Guasimas on June 24. The battle ended indecisively in favor of Spain and the Spanish left Las Guasimas on their planned retreat to Santiago.
The U.S. Army employed Civil War - era skirmishers at the head of the advancing columns. Three of the four U.S. soldiers who had volunteered to act as skirmishers walking point at the head of the American column were killed, including Hamilton Fish II (grandson of Hamilton Fish, the Secretary of State under Ulysses S. Grant) and Captain Allyn K. Capron, Jr., whom Theodore Roosevelt would describe as one of the finest natural leaders and soldiers he ever met. Only Tom Isbell, a Pawnee Indian from Oklahoma Territory who was wounded seven times, survived.
The Battle of Las Guasimas showed the U.S. that quick - thinking American soldiers would not stick to the linear tactics, which did not work effectively against Spanish troops who had learned the art of cover and concealment from their own struggle with Cuban insurgents and never made the error of revealing their positions while on the defense. The Americans advanced by rushes and stayed in the weeds so that they, too, were largely invisible to the Spaniards, who used untargeted volley fire to try to mass fire against the advancing Americans. While some troops were hit, this technique was mostly a waste of bullets, as the Americans learned to duck as soon as they heard the Spanish officers yell the word for fire, "Fuego ''. Spanish troops were equipped with smokeless powder arms, which also helped them to hide their positions while firing.
Regular Spanish troops were mostly armed with modern charger - loaded 1893 7mm Spanish Mauser rifles using smokeless powder. The high - speed 7 × 57mm Mauser round was termed the "Spanish Hornet '' by the Americans because of the supersonic crack as it passed overhead. Other irregular troops were armed with Remington Rolling Block rifles in. 43 Spanish using smokeless powder and brass - jacketed bullets. US regular infantry were armed with the. 30 -- 40 Krag -- Jørgensen, a bolt - action rifle with a complex rotating magazine. Both the US regular cavalry and the volunteer cavalry used smokeless ammunition. In later battles, state volunteers used the. 45 -- 70 Springfield, a single - shot black powder rifle.
On July 1, a combined force of about 15,000 American troops in regular infantry and cavalry regiments, including all four of the army 's "Colored '' regiments, and volunteer regiments, among them Roosevelt and his "Rough Riders '', the 71st New York, the 2nd Massachusetts Infantry, and 1st North Carolina, and rebel Cuban forces attacked 1,270 entrenched Spaniards in dangerous Civil War - style frontal assaults at the Battle of El Caney and Battle of San Juan Hill outside of Santiago. More than 200 U.S. soldiers were killed and close to 1,200 wounded in the fighting, thanks to the high rate of fire the Spanish put down range at the Americans. Supporting fire by Gatling guns was critical to the success of the assault. Cervera decided to escape Santiago two days later. First Lieutenant John J. Pershing, nicknamed "Black Jack '', oversaw the 10th Cavalry Unit during the war. Pershing and his unit fought in the Battle of San Juan Hill. Pershing was cited for his gallantry during the battle.
The Spanish forces at Guantánamo were so isolated by Marines and Cuban forces that they did not know that Santiago was under siege, and their forces in the northern part of the province could not break through Cuban lines. This was not true of the Escario relief column from Manzanillo, which fought its way past determined Cuban resistance but arrived too late to participate in the siege.
After the battles of San Juan Hill and El Caney, the American advance halted. Spanish troops successfully defended Fort Canosa, allowing them to stabilize their line and bar the entry to Santiago. The Americans and Cubans forcibly began a bloody, strangling siege of the city. During the nights, Cuban troops dug successive series of "trenches '' (raised parapets), toward the Spanish positions. Once completed, these parapets were occupied by U.S. soldiers and a new set of excavations went forward. American troops, while suffering daily losses from Spanish fire, suffered far more casualties from heat exhaustion and mosquito - borne disease. At the western approaches to the city, Cuban general Calixto Garcia began to encroach on the city, causing much panic and fear of reprisals among the Spanish forces.
The major port of Santiago de Cuba was the main target of naval operations during the war. The U.S. fleet attacking Santiago needed shelter from the summer hurricane season; Guantánamo Bay, with its excellent harbor, was chosen. The 1898 invasion of Guantánamo Bay happened between June 6 and 10, with the first U.S. naval attack and subsequent successful landing of U.S. Marines with naval support.
The Battle of Santiago de Cuba on July 3, was the largest naval engagement of the Spanish -- American War and resulted in the destruction of the Spanish Caribbean Squadron (also known as the Flota de Ultramar). In May, the fleet of Spanish Admiral Pascual Cervera y Topete had been spotted by American forces in Santiago harbor, where they had taken shelter for protection from sea attack. A two - month stand - off between Spanish and American naval forces followed.
When the Spanish squadron finally attempted to leave the harbor on July 3, the American forces destroyed or grounded five of the six ships. Only one Spanish vessel, the new armored cruiser Cristóbal Colón, survived, but her captain hauled down her flag and scuttled her when the Americans finally caught up with her. The 1,612 Spanish sailors who were captured, including Admiral Cervera, were sent to Seavey 's Island at the Portsmouth Naval Shipyard in Kittery, Maine, where they were confined at Camp Long as prisoners of war from July 11 until mid-September.
During the stand - off, U.S. Assistant Naval Constructor, Lieutenant Richmond Pearson Hobson had been ordered by Rear Admiral William T. Sampson to sink the collier USS Merrimac in the harbor to bottle up the Spanish fleet. The mission was a failure, and Hobson and his crew were captured. They were exchanged on July 6, and Hobson became a national hero; he received the Medal of Honor in 1933, retired as a Rear Admiral and became a Congressman.
Yellow fever had quickly spread amongst the American occupation force, crippling it. A group of concerned officers of the American army chose Theodore Roosevelt to draft a request to Washington that it withdraw the Army, a request that paralleled a similar one from General Shafter, who described his force as an "army of convalescents ''. By the time of his letter, 75 % of the force in Cuba was unfit for service.
On August 7, the American invasion force started to leave Cuba. The evacuation was not total. The U.S. Army kept the black Ninth US Cavalry Regiment in Cuba to support the occupation. The logic was that their race and the fact that many black volunteers came from southern states would protect them from disease; this logic led to these soldiers being nicknamed "Immunes ''. Still, when the Ninth left, 73 of its 984 soldiers had contracted the disease.
In May 1898, Lt. Henry H. Whitney of the United States Fourth Artillery was sent to Puerto Rico on a reconnaissance mission, sponsored by the Army 's Bureau of Military Intelligence. He provided maps and information on the Spanish military forces to the U.S. government before the invasion.
The American offensive began on May 12, 1898, when a squadron of 12 U.S. ships commanded by Rear Adm. William T. Sampson of the United States Navy attacked the archipelago 's capital, San Juan. Though the damage inflicted on the city was minimal, the Americans established a blockade in the city 's harbor, San Juan Bay. On June 22, the cruiser Isabel II and the destroyer Terror delivered a Spanish counterattack, but were unable to break the blockade and the Terror was damaged.
The land offensive began on July 25, when 1,300 infantry soldiers led by Nelson A. Miles disembarked off the coast of Guánica. The first organized armed opposition occurred in Yauco in what became known as the Battle of Yauco.
This encounter was followed by the Battle of Fajardo. The United States seized control of Fajardo on August 1, but were forced to withdraw on August 5 after a group of 200 Puerto Rican - Spanish soldiers led by Pedro del Pino gained control of the city, while most civilian inhabitants fled to a nearby lighthouse. The Americans encountered larger opposition during the Battle of Guayama and as they advanced towards the main island 's interior. They engaged in crossfire at Guamaní River Bridge, Coamo and Silva Heights and finally at the Battle of Asomante. The battles were inconclusive as the allied soldiers retreated.
A battle in San Germán concluded in a similar fashion with the Spanish retreating to Lares. On August 9, 1898, American troops that were pursuing units retreating from Coamo encountered heavy resistance in Aibonito in a mountain known as Cerro Gervasio del Asomante and retreated after six of their soldiers were injured. They returned three days later, reinforced with artillery units and attempted a surprise attack. In the subsequent crossfire, confused soldiers reported seeing Spanish reinforcements nearby and five American officers were gravely injured, which prompted a retreat order. All military actions in Puerto Rico were suspended on August 13, after U.S. President William McKinley and French Ambassador Jules Cambon, acting on behalf of the Spanish Government, signed an armistice whereby Spain relinquished its sovereignty over Puerto Rico.
With defeats in Cuba and the Philippines, and both of its fleets destroyed, Spain sued for peace and negotiations were opened between the two parties. After the sickness and death of British consul Edward Henry Rawson - Walker, American admiral George Dewey requested the Belgian consul to Manila, Édouard André, to take Rawson - Walker 's place as intermediary with the Spanish Government.
Hostilities were halted on August 12, 1898, with the signing in Washington of a Protocol of Peace between the United States and Spain. After over two months of difficult negotiations, the formal peace treaty, the Treaty of Paris, was signed in Paris on December 10, 1898, and was ratified by the United States Senate on February 6, 1899.
In the treaty, the United States gained all of Spain 's colonies outside Africa except Cuba, namely the Philippines, Guam and Puerto Rico; Cuba itself became a U.S. protectorate. The treaty came into force in Cuba on April 11, 1899, with Cubans participating only as observers. Having been occupied since July 17, 1898, and thus under the jurisdiction of the United States Military Government (USMG), Cuba formed its own civil government and gained independence on May 20, 1902, with the announced end of USMG jurisdiction over the island. However, the U.S. imposed various restrictions on the new government, including prohibiting alliances with other countries, and reserved the right to intervene. The U.S. also established a perpetual lease of Guantánamo Bay.
The war lasted ten weeks. John Hay (the United States Ambassador to the United Kingdom), writing from London to his friend Theodore Roosevelt, declared that it had been "a splendid little war ''. The press showed Northerners and Southerners, blacks and whites fighting against a common foe, helping to ease the scars left from the American Civil War. Exemplary of this was the fact that four former Confederate States Army generals had served in the war, now in the US Army and all of them again carrying similar ranks. These officers included Matthew Butler, Fitzhugh Lee, Thomas L. Rosser and Joseph Wheeler, though only the latter had seen action. Still, in an exciting moment during the Battle of Las Guasimas, Wheeler apparently forgot for a moment which war he was fighting, having supposedly called out "Let 's go, boys! We 've got the damn Yankees on the run again! ''
The war marked American entry into world affairs. Since then, the U.S. has had a significant hand in various conflicts around the world, and entered many treaties and agreements. The Panic of 1893 was over by this point, and the U.S. entered a long and prosperous period of economic and population growth, and technological innovation that lasted through the 1920s.
The war redefined national identity, served as a solution of sorts to the social divisions plaguing the American mind, and provided a model for all future news reporting.
The idea of American imperialism changed in the public 's mind after the short and successful Spanish -- American War. Due to the United States ' powerful influence diplomatically and militarily, Cuba 's status after the war relied heavily upon American actions. Two major developments emerged from the Spanish -- American War: one, it greatly enforced the United States ' vision of itself as a "defender of democracy '' and as a major world power, and two, it had severe implications for Cuban -- American relations in the future. As historian Louis Pérez argued in his book Cuba in the American Imagination: Metaphor and the Imperial Ethos, the Spanish -- American War of 1898 "fixed permanently how Americans came to think of themselves: a righteous people given to the service of righteous purpose ''.
The war greatly reduced the Spanish Empire. Spain had been declining as an imperial power since the early 19th century as a result of Napoleon 's invasion. The loss of Cuba caused a national trauma because of the affinity of peninsular Spaniards with Cuba, which was seen as another province of Spain rather than as a colony. Spain retained only a handful of overseas holdings: Spanish West Africa (Spanish Sahara), Spanish Guinea, Spanish Morocco, and the Canary Islands.
The Spanish soldier Julio Cervera Baviera, who served in the Puerto Rican Campaign, published a pamphlet in which he blamed the natives of that colony for its occupation by the Americans, saying, "I have never seen such a servile, ungrateful country (i.e., Puerto Rico)... In twenty - four hours, the people of Puerto Rico went from being fervently Spanish to enthusiastically American... They humiliated themselves, giving in to the invader as the slave bows to the powerful lord. '' He was challenged to a duel by a group of young Puerto Ricans for writing this pamphlet.
Culturally, a new wave called the Generation of ' 98 originated as a response to this trauma, marking a renaissance in Spanish culture. Economically, the war benefited Spain, because after the war large sums of capital held by Spaniards in Cuba and the United States were returned to the peninsula and invested in Spain. This massive flow of capital (equivalent to 25 % of the gross domestic product of one year) helped to develop the large modern firms in Spain in the steel, chemical, financial, mechanical, textile, shipyard, and electrical power industries. However, the political consequences were serious. The defeat in the war began the weakening of the fragile political stability that had been established earlier by the rule of Alfonso XII.
The Teller Amendment, which was enacted on April 20, 1898, was a promise from the United States to the Cuban people that it was not declaring war to annex Cuba, but to help it gain its independence from Spain. The Platt Amendment was a move by the United States ' government to shape Cuban affairs without violating the Teller Amendment.
The U.S. Congress had passed the Teller Amendment before the war, promising Cuban independence. However, the Senate passed the Platt Amendment as a rider to an Army appropriations bill, forcing a peace treaty on Cuba which prohibited it from signing treaties with other nations or contracting a public debt. The Platt Amendment was pushed by imperialists who wanted to project U.S. power abroad (in contrast to the Teller Amendment, which was pushed by anti-imperialists who called for a restraint on U.S. rule). The amendment granted the United States the right to stabilize Cuba militarily as needed. In addition, the Platt Amendment permitted the United States to deploy Marines to Cuba if its freedom and independence were ever threatened or jeopardized by an external or internal force. The Platt Amendment also provided for a permanent American naval base in Cuba. Guantánamo Bay was established after the signing of the Cuban -- American Treaty of Relations in 1903. Thus, even though Cuba technically gained its independence after the war ended, the United States government ensured that it retained some form of power and control over Cuban affairs.
The U.S. annexed the former Spanish colonies of Puerto Rico, the Philippines and Guam. The notion of the United States as an imperial power, with colonies, was hotly debated domestically with President McKinley and the Pro-Imperialists winning their way over vocal opposition led by Democrat William Jennings Bryan, who had supported the war. The American public largely supported the possession of colonies, but there were many outspoken critics such as Mark Twain, who wrote The War Prayer in protest.
Roosevelt returned to the United States a war hero, and he was soon elected governor of New York and then became vice president. At the age of 42, he became the youngest man to assume the presidency following the assassination of President William McKinley.
The war served to further repair relations between the American North and South. The war gave both sides a common enemy for the first time since the end of the Civil War in 1865, and many friendships were formed between soldiers of northern and southern states during their tours of duty. This was an important development, since many soldiers in this war were the children of Civil War veterans on both sides.
The African - American community strongly supported the rebels in Cuba, supported entry into the war, and gained prestige from their wartime performance in the Army. Spokesmen noted that 33 African - American seamen had died in the Maine explosion. The most influential Black leader, Booker T. Washington, argued that his race was ready to fight. War offered them a chance "to render service to our country that no other race can '', because, unlike Whites, they were "accustomed '' to the "peculiar and dangerous climate '' of Cuba. One of the Black units that served in the war was the 9th Cavalry Regiment. In March 1898, Washington promised the Secretary of the Navy that war would be answered by "at least ten thousand loyal, brave, strong black men in the south who crave an opportunity to show their loyalty to our land, and would gladly take this method of showing their gratitude for the lives laid down, and the sacrifices made, that Blacks might have their freedom and rights. ''
In 1904, the United Spanish War Veterans was created from smaller groups of the veterans of the Spanish American War. Today, that organization is defunct, but it left an heir in the Sons of Spanish -- American War Veterans, created in 1937 at the 39th National Encampment of the United Spanish War Veterans. According to data from the United States Department of Veterans Affairs, the last surviving U.S. veteran of the conflict, Nathan E. Cook, died on September 10, 1992, at age 106. (If the data is to be believed, Cook, born October 10, 1885, would have been only 12 years old when he served in the war.)
The Veterans of Foreign Wars of the United States (VFW) was formed in 1914 from the merger of two veterans organizations which both arose in 1899: the American Veterans of Foreign Service and the National Society of the Army of the Philippines. The former was formed for veterans of the Spanish -- American War, while the latter was formed for veterans of the Philippine -- American War. Both organizations were formed in response to the general neglect veterans returning from the war experienced at the hands of the government.
To pay the costs of the war, Congress passed an excise tax on long - distance phone service. At the time, it affected only wealthy Americans who owned telephones. However, the Congress neglected to repeal the tax after the war ended four months later, and the tax remained in place for over 100 years until, on August 1, 2006, it was announced that the U.S. Department of the Treasury and the IRS would no longer collect the tax.
The change in sovereignty of Puerto Rico, like the occupation of Cuba, brought about major changes in both the insular and U.S. economies. Before 1898 the sugar industry in Puerto Rico had been in decline for nearly half a century. In the second half of the nineteenth century, technological advances increased the capital requirements needed to remain competitive in the sugar industry. Agriculture began to shift toward coffee production, which required less capital and land accumulation. However, these trends were reversed with U.S. hegemony. Early U.S. monetary and legal policies made it both harder for local farmers to continue operations and easier for American businesses to accumulate land. This, along with the large capital reserves of American businesses, led to a resurgence in the Puerto Rican sugar industry in the form of large American - owned agro-industrial complexes.
At the same time, the inclusion of Puerto Rico into the U.S. tariff system as a customs area, effectively treating Puerto Rico as a state with respect to internal or external trade, increased the codependence of the insular and mainland economies and benefitted sugar exports with tariff protection. In 1897 the United States purchased 19.6 percent of Puerto Rico 's exports while supplying 18.5 percent of its imports. By 1905 these figures jumped to 84 percent and 85 percent, respectively. However, coffee was not protected, as it was not a product of the mainland. At the same time, Cuba and Spain, traditionally the largest importers of Puerto Rican coffee, now subjected Puerto Rico to previously nonexistent import tariffs. These two effects led to a decline in the coffee industry. From 1897 to 1901 coffee went from 65.8 percent of exports to 19.6 percent while sugar went from 21.6 percent to 55 percent. The tariff system also provided a protected market place for Puerto Rican tobacco exports. The tobacco industry went from nearly nonexistent in Puerto Rico to a major part of the country 's agricultural sector.
The Spanish -- American War was the first U.S. war in which the motion picture camera played a role. The Library of Congress archives contain many films and film clips from the war. In addition, a few feature films have been made about the war.
The United States awards and decorations of the Spanish -- American War were as follows:
The governments of Spain and Cuba also issued a wide variety of military awards to honor Spanish, Cuban, and Philippine soldiers who had served in the conflict.
It has been a splendid little war; begun with the highest motives, carried on with magnificent intelligence and spirit, favored by the fortune which loves the brave. It is now to be concluded, I hope, with that firm good nature which is after all the distinguishing trait of our American character.
who is running in the general election 2017 | United Kingdom General election, 2017 - wikipedia
The United Kingdom general election of 2017 took place on Thursday 8 June. Each of the 650 constituencies elected one Member of Parliament (MP) to the House of Commons. Under the Fixed - term Parliaments Act 2011 an election had not been due until 7 May 2020, but a call by Prime Minister Theresa May for a snap election was ratified by the necessary supermajority in a 522 -- 13 vote in the House of Commons on 19 April 2017.
The Conservative Party (which had governed as a senior coalition partner from 2010 and as a single - party majority government from 2015) was defending a working majority of 17 seats against the Labour Party, the official opposition led by Jeremy Corbyn. May had said that she hoped to secure a larger majority for the Conservative Party in order to "strengthen (her) hand in (the forthcoming Brexit) negotiations ''.
Opinion polls had shown consistent leads for the Conservatives over Labour. From a 20 - point lead, the Conservatives ' lead began to diminish in the final weeks of the campaign. In a surprising result, the Conservatives made a net loss of 13 seats with 42.3 % of the vote (its highest share since 1983), while Labour made a net gain of 30 seats with 40.0 % (its highest since 2001). In terms of vote share for the two main parties, this was the closest result since February 1974 and the highest combined share since 1970. The Scottish National Party and the Liberal Democrats, the third - and fourth - largest parties, both lost vote share; media coverage characterised the election as a return to two - party politics. The SNP, which won 56 of the 59 Scottish seats at the previous general election in 2015, lost 21 seats. The Liberal Democrats made a net gain of four seats. UKIP, the third - largest party in 2015 by number of votes, saw its share reduced from 12.6 % to 1.8 % and lost its only seat. Plaid Cymru gained one seat, giving it four seats. The Green Party retained its single seat, but saw its share of the vote reduced. In Northern Ireland, the DUP won 10 seats, Sinn Féin won seven, and independent unionist Sylvia Hermon retained her seat. The SDLP and UUP lost all their seats. The Conservatives remained in power as a minority government, having secured a confidence and supply deal with the DUP.
Negotiation positions following the UK 's invocation of Article 50 of the Treaty on European Union in March 2017 to leave the EU were expected to feature significantly in the campaign, but did not. The campaign was interrupted by two major terrorist attacks in Manchester and London, with national security becoming a prominent issue in the final weeks of campaigning.
Each parliamentary constituency of the United Kingdom elects one MP to the House of Commons using the "first past the post '' system. If one party obtains a majority of seats, then that party is entitled to form the Government, with its leader as Prime Minister. If the election results in no single party having a majority, then there is a hung parliament. In this case, the options for forming the Government are either a minority government or a coalition.
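As a rough illustration only (not drawn from the article, and using invented party names and figures), the Python sketch below shows the two mechanics described above: a constituency is won by simple plurality under "first past the post", and the national outcome is either a majority government or a hung parliament depending on whether any party reaches the 326 - seat threshold in a 650 - seat House.

```python
# Illustrative sketch of first-past-the-post mechanics; party names and
# vote/seat figures below are invented examples, not real election data.

TOTAL_SEATS = 650
MAJORITY = TOTAL_SEATS // 2 + 1  # 326 seats needed for an outright majority


def constituency_winner(votes):
    """Return the party with the most votes in a single constituency (plurality)."""
    return max(votes, key=votes.get)


def parliament_outcome(seats_by_party):
    """Classify the national result as a majority government or a hung parliament."""
    leader, seats = max(seats_by_party.items(), key=lambda kv: kv[1])
    if seats >= MAJORITY:
        return f"{leader} can form a single-party majority government ({seats} seats)"
    return f"Hung parliament: {leader} is the largest party ({seats} seats, {MAJORITY} needed)"


# One constituency decided by simple plurality (invented figures)
print(constituency_winner({"Party A": 21000, "Party B": 19500, "Party C": 8000}))

# A hypothetical national seat distribution with no overall majority
print(parliament_outcome({"Party A": 318, "Party B": 262, "Party C": 35, "Others": 35}))
```

In the hung-parliament case the sketch only reports the largest party; as the paragraph above notes, the actual options are then a minority government or a coalition.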
The Sixth Periodic Review of Westminster constituencies is not due to report until 2018, and therefore this general election took place under existing boundaries, enabling direct comparisons with the results by constituency in 2015.
To vote in the general election, one had to be:
Individuals had to be registered to vote by midnight twelve working days before polling day (22 May). Anyone who qualified as an anonymous elector had until midnight on 31 May to register. A person who has two homes (such as a university student who has a term - time address and lives at home during holidays) may be registered to vote at both addresses, as long as they are not in the same electoral area, but can vote in only one constituency at the general election.
On 18 May, The Independent reported that more than 1.1 million people between 18 and 35 had registered to vote since the election was announced on 18 April. Of those, 591,730 were under the age of 25.
The Fixed - term Parliaments Act 2011 introduced fixed - term Parliaments to the United Kingdom, with elections scheduled every five years following the general election on 7 May 2015. This removed the power of the Prime Minister, using the royal prerogative, to dissolve Parliament before its five - year maximum length. The Act permits early dissolution if the House of Commons votes by a supermajority of two - thirds of the entire membership of the House.
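To make the supermajority rule concrete (an illustrative calculation, not part of the Act's text): two - thirds of the entire 650 - seat membership is 433.3, so at least 434 MPs must vote in favour, and the 522 -- 13 division of 19 April 2017 reported above therefore cleared the threshold comfortably. A minimal Python check, assuming the 650 - seat figure given earlier:

```python
import math

# Two-thirds of the whole 650-seat membership of the House of Commons,
# not merely of those voting (an illustrative check of the rule above).
TOTAL_MPS = 650
THRESHOLD = math.ceil(TOTAL_MPS * 2 / 3)  # 434 votes in favour required

votes_in_favour = 522  # the division of 19 April 2017, as reported above
print(THRESHOLD, votes_in_favour >= THRESHOLD)  # prints: 434 True
```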
On 18 April 2017, the Prime Minister Theresa May announced she would seek an election on 8 June, despite previously ruling out an early election. A House of Commons motion to allow this was passed on 19 April, with 522 votes for and 13 against, a majority of 509, meeting the required two - thirds majority. The motion was supported by the Conservatives, Labour, the Liberal Democrats and the Greens, while the SNP abstained. Nine Labour MPs, one SDLP MP and three independents (Sylvia Hermon and two former SNP MPs, Natalie McGarry and Michelle Thomson) voted against the motion.
Labour leader Jeremy Corbyn supported the early election, as did Liberal Democrat leader Tim Farron and the Green Party. The SNP stated that it was in favour of fixed - term parliaments, and would abstain in the House of Commons vote. UKIP leader Paul Nuttall and First Minister of Wales Carwyn Jones criticised May for being opportunistic in the timing of the election, motivated by the then strong position of the Conservative Party in the opinion polls.
On 25 April, the election date was confirmed as 8 June, with dissolution on 3 May. The government announced that it intended for the next parliament to assemble on 13 June, with the state opening on 19 June.
The key dates are listed below (all times are BST):
The cost to the taxpayer of organising the election was £ 140 million -- slightly less than the EU referendum.
Most candidates were representatives of a political party registered with the Electoral Commission. Candidates not belonging to a registered party could use an "independent '' label, or no label at all.
The leader of the party commanding a majority of support in the House of Commons is the person who is called on by the monarch to form a government as Prime Minister, while the leader of the largest party not in government becomes the Leader of the Opposition. Other parties also form shadow ministerial teams. The leaders of the SNP, Plaid Cymru and the DUP are not MPs; hence, they appoint separate leaders in the House of Commons.
The Conservative Party and the Labour Party have been the two biggest parties since 1922, and have supplied all Prime Ministers since 1935. Both parties changed their leader after the 2015 election. David Cameron, who had been the leader of the Conservative Party since 2005 and Prime Minister since 2010, was replaced in July 2016 by Theresa May following the referendum on the United Kingdom 's membership of the European Union. Jeremy Corbyn replaced Ed Miliband as Leader of the Labour Party and Leader of the Opposition in September 2015, and was re-elected leader in September 2016.
While the Liberal Democrats and their predecessors had long been the third - largest party in British politics, they returned only 8 MPs in 2015 -- 49 fewer than at the previous election. Tim Farron became the Liberal Democrat leader in July 2015, following the resignation of Nick Clegg. Led by First Minister of Scotland Nicola Sturgeon, the SNP stands only in Scotland; it won 56 of 59 Scottish seats in 2015. UKIP, then led by Nigel Farage, who was later replaced by Diane James and then by Paul Nuttall in 2016, won 12.7 % of the vote in 2015 but gained only one MP, Douglas Carswell, who left the party in March 2017 to sit as an independent. After securing 3.8 % of the vote and one MP in the previous general election, Green Party leader Natalie Bennett was succeeded by joint leaders Caroline Lucas and Jonathan Bartley in September 2016. Smaller parties that contested the 2015 election and chose not to stand candidates in 2017 included Mebyon Kernow, the Communist Party of Britain, the Scottish Socialist Party and the National Front.
In Northern Ireland, the Democratic Unionist Party (DUP), Sinn Féin, the Social Democratic and Labour Party (SDLP), the Ulster Unionist Party (UUP), and the Alliance Party contested the 2017 election. Sinn Féin maintained its abstentionist policy. The DUP, Sinn Féin, the SDLP, the UUP and the Alliance Party had all changed their leaders since the 2015 election. The Conservatives, Greens and four other minor parties also stood. Despite contesting 10 seats in 2015, UKIP did not stand in Northern Ireland.
3,304 candidates stood for election, down from 3,631 in the previous general election. The Conservatives stood in 637 seats, Labour in 631 (including jointly with the Co-operative Party in 50) and the Liberal Democrats in 629. UKIP stood in 377 constituencies, down from 624 in 2015, while the Greens stood in 468, down from 573. The SNP contested all 59 Scottish seats and Plaid Cymru stood in all 40 Welsh seats. In Great Britain, 183 candidates stood as independents; minor parties included the Christian Peoples Alliance which contested 31 seats, the Yorkshire Party which stood in 21, the Official Monster Raving Loony Party in 12, the British National Party in 10, the Pirate Party in 10, the English Democrats in 7, the Women 's Equality Party in 7, the Social Democratic Party in 6, the National Health Action Party in 5 and the Workers Revolutionary Party in 5, while an additional 79 candidates stood for 46 other registered political parties.
In Wales, 213 candidates stood for election. Labour, Conservatives, Plaid Cymru, and Liberal Democrats contested all forty seats and there were 32 UKIP and 10 Green candidates. In Scotland the SNP, the Conservatives, Labour and the Liberal Democrats stood in all 59 seats while UKIP contested 10 seats and the Greens only 3.
Of the 109 candidates in Northern Ireland, Sinn Féin, the SDLP and the Alliance contested all 18 seats; the DUP stood in 17, the UUP in 14 and the Conservatives and Greens stood in 7 each. People Before Profit and the Workers ' Party contested two seats while Traditional Unionist Voice and the new Citizens Independent Social Thought Alliance stood in one each; four independents including incumbent Sylvia Hermon also stood.
Unlike in previous elections, the timetable of the snap election required parties to select candidates in just under three weeks, to meet the 11 May deadline.
For the Conservatives, local associations in target seats were offered a choice of three candidates by the party 's headquarters from an existing list of candidates, without inviting applications; candidates in non-target seats were to be appointed directly; and MPs were to be confirmed by a meeting of their local parties. Labour required sitting MPs to express their intention to stand, automatically re-selecting those that did. Labour advertised for applications from party members for all remaining seats by 23 April. Having devolved selections to its Scottish and Welsh parties, Labour 's National Executive Committee endorsed all parliamentary candidates on 3 May except for Rochdale, the seat of suspended MP Simon Danczuk. On 7 May Steve Rotheram announced he was standing down as MP for Liverpool Walton following his election as Liverpool City Region mayor, leaving five days to appoint a candidate by close of nominations.
The SNP confirmed on 22 April that its 54 sitting MPs would be re-selected and that its suspended members Natalie McGarry and Michelle Thomson would not be nominated as SNP candidates; the party subsequently selected candidates for McGarry 's and Thomson 's former seats, as well as for the three Scottish constituencies it did not win in 2015. The Liberal Democrats had already selected 326 candidates in 2016 and over 70 in 2017 before the election was called. Meetings of local party members from UKIP, the Greens and Plaid Cymru selected their candidates. Parties in Northern Ireland were not believed to have already selected candidates due to the Assembly elections in March.
Ken Clarke, the Father of the House of Commons, had said he would retire in 2020, but opted to stand again in the 2017 election. Former Conservative employment minister Esther McVey was selected to contest Tatton and Zac Goldsmith was adopted as the Conservative candidate for Richmond Park, having lost the 2016 by - election as an independent after previously serving as the constituency 's Conservative MP since 2010.
After coming second in the Stoke - on - Trent Central by - election earlier in the year, UKIP leader Paul Nuttall announced he would contest Boston and Skegness. Tony Lloyd, a former Labour MP for Manchester Central who served as Greater Manchester Police and Crime Commissioner from 2012 and interim Mayor of Greater Manchester since 2015, was selected to contest Rochdale. The former Labour MP Simon Danczuk stood as an independent candidate, after being banned from standing as a Labour candidate and then leaving the party.
A number of former Liberal Democrat ministers who were defeated in 2015 stood for election in their former seats, including Vince Cable in Twickenham, Ed Davey in Kingston and Surbiton, Jo Swinson in East Dunbartonshire, and Simon Hughes in Bermondsey and Old Southwark. After David Ward, the former MP for Bradford East, was dropped as a candidate by the Liberal Democrats for anti-semitism, he ran as an independent in his former seat.
Ahead of the general election, crowdfunding groups such as More United and Open Britain were formed to promote candidates of similar views standing for election, and a "progressive alliance '' was proposed. Former UKIP donor Arron Banks suggested a "patriotic alliance '' movement. Tactical voting to keep the Conservatives out of government was suggested on social media. Gina Miller, who took the government to court over Article 50, set out plans to tour marginal constituencies in support of pro-EU candidates.
Within a few days of the election being announced, the Green Party of England and Wales and the SNP each proposed to collaborate with Labour and the Liberal Democrats to prevent a Conservative majority government. Lib Dem leader Tim Farron quickly reaffirmed his party 's opposition to an electoral pact or coalition with Labour, describing Corbyn as "electorally toxic '' and citing concerns over Labour 's position on Brexit. On 22 April the Liberal Democrats also ruled out a coalition deal with the Conservatives and SNP. Labour ruled out an electoral pact with the SNP, Liberal Democrats and Greens.
Notwithstanding national arrangements, the Liberal Democrats, the Greens and UKIP indicated they might not stand in every constituency. The Green Party of England and Wales chose not to contest 22 seats explicitly "to increase the chance of a progressive candidate beating the Conservatives '', including in South West Surrey, the seat of Health Secretary Jeremy Hunt, in favour of the National Health Action Party candidate. The Scottish Green Party contested just three constituencies. The Liberal Democrats agreed to stand down in Brighton Pavilion. After indicating they may not nominate candidates in seats held by strongly pro-Brexit Conservative MPs, UKIP nominated 377 candidates; it was suggested this would help the Conservatives in marginal seats.
In Northern Ireland, there were talks between the DUP and UUP. Rather than engaging in a formal pact, the DUP agreed not to contest Fermanagh and South Tyrone, while the UUP chose not to stand in four constituencies. Talks took place between Sinn Féin, the SDLP and the Green Party in Northern Ireland about an anti-Brexit agreement (the Alliance Party were approached but declined to be involved) but no agreement was reached; the Greens said there was "too much distance '' between the parties, Sinn Féin 's abstentionist policy was criticised, and the SDLP admitted an agreement was unlikely. On 8 May the SDLP rejected Sinn Féin 's call for them to stand aside in some seats.
Prior to the calling of the general election, the Liberal Democrats gained Richmond Park from the Conservatives in a by - election, a seat characterised by its high remain vote in the 2016 EU referendum. The Conservatives held the safe seat of Sleaford and North Hykeham in December 2016. In by - elections on 23 February 2017, Labour held Stoke - on - Trent Central but lost Copeland to the Conservatives, the first time a governing party had gained a seat in a by - election since 1982.
The general election came soon after the Northern Ireland Assembly election on 2 March. Talks on power - sharing between the DUP and Sinn Féin had failed to reach a conclusion, with Northern Ireland thus facing either another Assembly election, or the imposition of direct rule. The deadline was subsequently extended to 29 June.
Local elections in England, Scotland and Wales took place on 4 May. These saw large gains by the Conservatives, and large losses by Labour and UKIP. Notably, the Conservatives won metro mayor elections in Tees Valley and the West Midlands, areas traditionally seen as Labour heartlands. Initially scheduled for 4 May, a by - election in Manchester Gorton was cancelled; the seat was contested on 8 June along with all the other seats.
On 6 May, a letter from Church of England Archbishops Justin Welby and John Sentamu stressed the importance of education, housing, communities and health.
All parties suspended campaigning for a time in the wake of the 2017 Manchester Arena bombing on 22 May. The SNP had been scheduled to release their manifesto for the election but this was delayed. Campaigning resumed on 25 May.
Major political parties also suspended campaigning for a second time on 4 June, following the June 2017 London Bridge attack. UKIP chose to continue campaigning. There were unsuccessful calls for polling day to be postponed.
The UK 's withdrawal from the European Union was expected to be a key issue in the campaign, but featured less than expected. May said she called the snap election to secure a majority for her Brexit negotiations. UKIP support a "clean, quick and efficient Brexit '' and, launching his party 's election campaign, Nuttall stated that Brexit is a "job half done '' and UKIP MPs are needed to "see this through to the end ''.
Labour had supported Brexit in the previous parliament, but proposed different priorities for negotiations. The Liberal Democrats and Greens called for a deal to keep the UK in the single market and a second referendum on any deal proposed between the EU and the UK.
The Conservative manifesto committed the party to leaving the single market and customs union, but sought a "deep and special partnership '' through a comprehensive free trade and customs agreement. It proposed seeking to remain part of some EU programmes where it would "be reasonable that we make a contribution '', staying as a signatory of the European Convention on Human Rights over the next parliament, and maintaining the Human Rights Act during Brexit negotiations. Parliament would be able to amend or repeal EU legislation once converted into UK law, and have a vote on the final agreement.
Two major terrorist attacks took place during the election campaign, with parties arguing about the best way to prevent such events. After the second attack, May focused on global co-operation to tackle Islamist ideology and on tackling the use of the internet by terrorist groups. After the first attack, Labour criticised cuts in police numbers under the Conservative government. Corbyn also linked the Manchester attack to British foreign policy. The Conservatives stated that spending on counter-terrorism for both the police and other agencies had risen.
Former Conservative strategist Steve Hilton said Theresa May should be "resigning not seeking re-election '', because her police cuts and security failures had led to the attacks. Corbyn backed calls for May to resign, but said she should be removed by voters. May said that police budgets for counter-terrorism had been maintained and that Corbyn had voted against counter-terrorism legislation.
The Conservative manifesto proposed more government control and regulation of the internet, including forcing internet companies to restrict access to extremist and adult content. After the London attack, Theresa May called for international agreements to regulate the internet. The Conservative stance on regulation of the internet and social media was criticised by Farron and the Open Rights Group.
On 6 June, May promised longer prison sentences for people convicted of terrorism and restrictions on the freedom of movement or deportation of militant suspects when it is thought they present a threat but there is not enough evidence to prosecute them, stating that she would change human rights laws to do so if necessary.
The UK 's nuclear weapons, including the renewal of the Trident system, also featured in the campaign. The Conservatives and Liberal Democrats favoured Trident renewal. Labour 's manifesto committed to Trident renewal; Corbyn confirmed renewal would take place under Labour but declined to speak explicitly in favour. He also declined to answer whether as prime minister he would use nuclear weapons if the UK was under imminent nuclear threat.
Social care became a major election issue after the Conservative Party 's manifesto included new proposals, which were subsequently altered after criticism. The previous coalition government had commissioned a review by Andrew Dilnot into how to fund social care. Measures that were seen to disadvantage pensioners were also in the Conservative manifesto: eliminating the pension triple lock and Winter Fuel Payments for all pensioners.
The question of a proposed Scottish independence referendum was also thought likely to influence the campaign in Scotland. On 28 March 2017, the Scottish Parliament approved a motion requesting that Westminster pass a Section 30 order giving the Parliament the authority to hold a second independence referendum, suggesting that there had been a "material change '' in the terms of the failed independence referendum in 2014 as a result of Britain 's vote to leave the EU. The SNP hoped to hold a second independence referendum once the Brexit terms were clear but before Britain left the EU; May had said her government would not approve an independence referendum before Brexit negotiations had finished.
Labour was thought to have attracted a significant number of student voters with its pledge to abolish tuition fees, which stood at £ 9,000 a year in England, and to bring back student grants.
Although Labour and the Liberal Democrats both rejected election pacts with each other and with the Greens and the SNP, and although the Liberal Democrats ruled out a coalition deal with the Conservatives, the Conservatives campaigned on this theme, using the phrase "coalition of chaos ''. Similar messages against a potential Lib -- Lab pact were credited with securing a Conservative win in the 1992 and 2015 elections. On 19 April, May warned against a Labour -- SNP -- Lib Dem pact that would "divide our country ''. After the hung result led the Conservatives to seek DUP support for a minority government, this rhetoric was mocked by opponents.
May launched the Conservative campaign with a focus on Brexit, lower domestic taxes and avoiding a Labour -- Lib Dem -- SNP "coalition of chaos '', but she refused to commit not to raise taxes. On 30 April, May stated that it was her intention to lower taxes if the Conservatives won the general election, but only explicitly ruled out raising VAT. May reiterated her commitment to spending 0.7 % of GNI on foreign aid.
Theresa May hired Lynton Crosby, the campaign manager for the Conservatives in the 2015 general election, as well as Barack Obama 's 2012 campaign manager, Jim Messina. The Conservative campaign was noted for the use of targeted adverts on social media, in particular attacking Corbyn. The repeated use of the phrase "strong and stable '' in the Conservatives ' campaigning attracted attention and criticism. Some expressed concern that the party may have restricted media access to the prime minister. While some speculated that an investigation into campaign spending by the Conservatives in the 2015 general election was a factor behind the snap election, on 10 May the Crown Prosecution Service said that despite evidence of inaccurate spending returns, no further action was required.
On 7 May the Conservatives promised to replace the 1983 Mental Health Act, to employ an additional 10,000 NHS mental health workers by 2020 and to tackle discrimination against those with mental health problems. May indicated that the Conservatives would maintain their net immigration target, and promised to implement a cap on "rip - off energy prices '', a policy that appeared in Labour 's 2015 manifesto. May indicated she would permit a free vote among Conservative MPs on repealing the ban on fox hunting in England and Wales. On 11 May the Conservatives promised above - inflation increases in defence spending alongside its NATO commitment to spend at least 2 % of GDP on defence.
In a speech in Tynemouth the next day, May said Labour had "deserted '' working - class voters, criticised Labour 's policy proposals and said Britain 's future depended on making a success of Brexit. On 14 May the Conservatives proposed a "new generation '' of social housing, paid from the existing capital budget, offering funding to local authorities and changing compulsory purchase rules. The following day May promised "a new deal for workers '' that would maintain workers ' rights currently protected by the EU after Brexit, put worker representation on company boards, introduce a statutory right to unpaid leave to care for a relative and increase the National Living Wage in line with average earnings until 2022. The proposals were characterised as an "unabashed pitch for Labour voters ''; however Labour and the GMB trade union criticised the government 's past record on workers ' rights.
Unveiling the Conservative manifesto in Halifax on 18 May, May promised a "mainstream government that would deliver for mainstream Britain ''. It proposed to balance the budget by 2025, raise spending on the NHS by £ 8bn per year and on schools by £ 4bn per year by 2022, remove the ban on grammar schools, means - test the winter fuel allowance, replace the state pension "triple lock '' with a "double lock '' and require executive pay to be approved by a vote of shareholders. It dropped the 2015 pledge not to raise income tax or national insurance contributions but maintained a commitment to freeze VAT. New sovereign wealth funds for infrastructure, rules to prevent foreign takeovers of "critical national infrastructure '' and institutes of technology were also proposed. The manifesto was noted for its interventionist approach to industry, its lack of tax cuts and its increased spending commitments on public services. On Brexit it committed to leaving the single market and customs union while seeking a "deep and special partnership '' and promised a vote in parliament on the final agreement. It was also noted for containing policies similar to those found in Labour 's 2015 general election manifesto.
The manifesto also proposed reforms to social care in England that would raise the threshold for free care from £ 23,250 to £ 100,000, while including property in the means test and permitting deferred payment after death. After attracting substantial media attention, four days after the manifesto launch, May stated that the proposed social care reforms would now include an "absolute limit '' on costs in contrast to the rejection of a cap in the manifesto. She criticised the "fake '' portrayal of the policy in recent days by Labour and other critics, who had termed it a "dementia tax ''. Evening Standard editor and former Conservative Chancellor George Osborne called the policy change a "U-turn ''.
Corbyn launched the Labour campaign focusing on public spending, and argued that services were being underfunded, particularly education. Labour 's shadow Brexit secretary Keir Starmer stated that the party would replace the existing Brexit white paper with new negotiating priorities that emphasise the benefits of the single market and customs union, that the residence rights of EU nationals would be guaranteed and that the principle of free movement would have to end. Corbyn emphasised Labour 's support for a "jobs - first Brexit '' that "safeguards the future of Britain 's vital industries ''.
Labour proposed the creation of four new bank holidays, marking the feast days of the patron saints of the United Kingdom 's constituent nations. On 27 April the party pledged to build 1 million new homes over five years. Labour 's proposal to employ 10,000 new police officers was overshadowed when Shadow Home Secretary Diane Abbott cited incorrect figures in an LBC interview on how it would be funded. Labour later confirmed that the £ 300 million cost would be funded by reversing cuts to capital gains taxes, although it was noted that the party had also pledged some of those savings towards other expenditure plans.
On 7 May, Shadow Chancellor John McDonnell ruled out rises in VAT, and in income tax and employee national insurance contributions for those with earnings below £ 80,000 per year. The following day Labour outlined plans to ban junk food TV adverts and parking charges at NHS hospitals. Labour promised an additional £ 4.8 billion for education, funded by raising corporation tax from 19 % to 26 %.
A draft copy of Labour 's manifesto was leaked to the Daily Mirror and The Daily Telegraph on 10 May. It included pledges to renationalise the National Grid, the railways and the Royal Mail, and create publicly owned energy companies. The draft was noted for including commitments on workers ' rights, a ban on fracking, and the abolition of university tuition fees in England. The draft manifesto included a commitment to the Trident nuclear deterrent, but suggested a future government would be "extremely cautious '' about using it. The next day Labour 's Clause V meeting endorsed the manifesto after amendments from shadow cabinet members and trade unions present.
In a speech at Chatham House on 12 May, Corbyn set out his foreign policy, saying he would reshape Britain 's foreign relations, avoid the use of nuclear weapons, and while Labour supported Trident renewal he would initiate a defence review in government. Corbyn stated that he would halt all weapons sales from the UK to Saudi Arabia citing the violations of human rights in the Saudi Arabian - led intervention in Yemen. After the June 2017 London Bridge attack, Corbyn said that a conversation should take place "with Saudi Arabia and other Gulf states that have funded and fuelled extremist ideology ''.
On 14 May, Labour revealed plans to extend stamp duty by introducing a financial transactions tax, which McDonnell claimed would raise up to £ 5.6 bn per year. The next day Corbyn set out plans to spend £ 37bn on the NHS in England over a five - year parliament, including £ 10bn on IT upgrades and building repairs.
Launching its manifesto on 16 May, Labour revealed it would nationalise the water industry, provide 30 hours per week of free childcare for two to four - year - olds, charge companies a levy on annual earnings above £ 330,000, lower the 45p income tax rate threshold to £ 80,000 per year, and reintroduce the 50p tax rate for those earning more than £ 123,000 per year. Labour said it would raise an additional £ 48.6 bn in tax revenue per year and insisted its policies were fully costed, though it was noted no costings were provided for its nationalisation pledges. Compared to the leaked draft, the manifesto was noted for toughening Labour 's position on defence and Trident, confirming that outside the EU free movement would have to end, qualifying support for airport expansion, and clarifying the party 's stance on Israel - Palestine, as well as other changes. After initial confusion, Labour clarified it would not reverse the government 's freeze on most working - age benefits.
In an interview following the manifesto launch, Unite general secretary Len McCluskey said victory for Labour in the general election would be "extraordinary '' and that winning just 200 seats (compared to 229 seats held at the time) would be a "successful '' result; the following morning he clarified he was now "optimistic '' about Labour 's chances.
The SNP, keen to maintain its position as the third - largest party in the House of Commons, made the need to protect Scotland 's interests in the Brexit negotiations a central part of its campaign. The SNP manifesto called for a vote on independence to be held "at the end of the Brexit process '', set out "anti-austerity '' plans to invest £ 118bn in UK public services over the next five years, pledged to increase the minimum wage to £ 10 an hour and called for Scotland to have control over immigration and to remain in the EU single market after Brexit. With the polls closing, Nicola Sturgeon told the Today programme that the SNP could support a Labour government "on an issue - by - issue basis '' in the event of a hung parliament and she would be open to forming a "progressive alternative to a Conservative government ''.
Central themes of the Liberal Democrat campaign were an offer of a second referendum on any eventual Brexit deal and a desire for the UK to stay in the single market. The party reportedly targeted seats which had voted to remain in the EU, such as Twickenham, Oxford West and Abingdon, and Vauxhall. Bob Marshall - Andrews, a Labour MP from 1997 to 2010, announced he would support the Liberal Democrats.
The party reported a surge in membership after the election was called, passing 100,000 on 24 April, having grown by 12,500 in the preceding week. The party also reported raising £ 500,000 in donations in the first 48 hours after May 's announcement of an early election.
An early issue raised in the campaign was Tim Farron 's views, as a Christian, regarding gay sex and LGBT rights. After declining to state whether he thought gay sex was a sin, Farron affirmed that he believed neither being gay nor having gay sex are sinful.
The party proposed raising income tax by 1p to fund the NHS, and maintaining the triple - lock on the state pension. The Liberal Democrats also promised an additional £ 7 billion to protect per - pupil funding in education; they said it would be partly funded by remaining in the EU single market. The party pledged on 11 May to accept 50,000 refugees from Syria over five years, with Farron saying that the £ 4.3 billion costs would over time be repaid in taxes by those refugees that settle in Britain.
On 12 May the party revealed plans to legalise cannabis and extend paid paternity leave. Farron proposed financial incentives for graduates joining the armed forces and committed to NATO 's 2 % of GDP defence spending target. The next day the Liberal Democrats promised to end the cap on public - sector pay increases and repeal the Investigatory Powers Act. On 16 May the Liberal Democrats proposed an entrepreneurs ' allowance, to review business rates and to increase access to credit.
Policies emphasised during their manifesto launch on 17 May included a second referendum on a Brexit deal with the option to remain a member of the EU, discounted bus passes for 16 - to 21 - year - olds, the reinstatement of housing benefit for 18 - to 21 - year - olds, a £ 3bn plan to build 300,000 new houses a year by 2022 and support for renters to build up equity in their rented properties.
Paul Nuttall announced that UKIP 's manifesto would seek to ban the burqa, outlaw sharia law, impose a temporary moratorium on new Islamic schools and require annual checks against female genital mutilation (FGM) for high - risk girls. In response to the proposed burqa ban UKIP 's foreign affairs spokesperson James Carver resigned, labelling the policy "misguided ''.
Despite losing all 145 of the seats it was defending in the 2017 local elections (but gaining one from Labour in Burnley), Nuttall insisted voters would return to UKIP in the general election. On 8 May UKIP proposed a net migration target of zero within five years.
Within hours of the election being announced, Corbyn, Farron and Sturgeon called for televised debates. The Prime Minister 's office initially opposed the idea. On 19 April the BBC and ITV announced they planned to host leaders ' debates, as they had done in the 2010 and 2015 elections, whether or not May took part. Labour subsequently ruled out Corbyn taking part in television debates without May.
Broadcaster Andrew Neil separately interviewed the party leaders in The Andrew Neil Interviews on BBC One, starting on 22 May with Theresa May. The 2017 Manchester Arena bombing led to the interviews with Nuttall, Farron, Sturgeon and Corbyn being rescheduled. ITV Tonight also ran a series of programmes with the major party leaders.
Sky News and Channel 4 hosted an election programme on 29 May where May and Corbyn were individually interviewed by Jeremy Paxman after taking questions from a studio audience. The BBC held two debates to which all seven main party leaders were invited, on 31 May in Cambridge and 6 June in Manchester; both May and Corbyn stated they would not attend the 31 May debate. May said that she had already debated Corbyn many times in parliament, and that she would be meeting the public instead. Corbyn announced on the day that he would attend the debate in Cambridge, calling on May to do the same. Instead Amber Rudd appeared for the Conservatives.
The BBC hosted separate debates for the English regions, and for both Scotland and Wales, and also a Question Time special with May and Corbyn separately answering questions from voters on 2 June, chaired by David Dimbleby. Sturgeon and Farron were expected to do the same on 4 June, but after the June 2017 London Bridge attack it was rescheduled to 5 June and instead presented by Nick Robinson. The BBC also hosted two back - to - back episodes of a special election programme titled Election Questions on 4 June, first in Bristol with Green Party co-leader Jonathan Bartley followed by Nuttall, and second in Swansea with Plaid Cymru leader Leanne Wood. The party leaders were individually questioned by a studio audience.
STV planned to host a live TV debate in Glasgow with four Scottish party leaders on 24 May, but it was postponed, owing to the Manchester Arena bombing. The debate was rescheduled for Tuesday 6 June.
Newspapers, organisations and individuals endorsed parties or individual candidates for the election.
Former UKIP leader Nigel Farage announced that he would not stand, saying he could be more effective as an MEP. UKIP major donor Arron Banks, who had earlier indicated his intention to stand in Clacton to defeat Douglas Carswell, withdrew in favour of the UKIP candidate after Carswell announced he would be standing down.
Plaid Cymru leader Leanne Wood chose not to contest a Westminster seat, nor did former Labour MP and shadow chancellor Ed Balls.
In the 2015 general election, polling companies underestimated the Conservative Party vote and overestimated the Labour Party vote, and so failed to predict the result accurately. Afterwards they began changing their polling practices; recommendations from a review by the British Polling Council were expected to result in further changes.
The first - past - the - post system used in UK general elections means that national shares of the vote do not translate directly into numbers of seats won in Parliament. Various commentators and pollsters therefore used polling data and other information to produce seat predictions, some of which are listed below:
An exit poll, conducted by GfK and Ipsos MORI on behalf of the BBC, ITV and Sky News, was published at the close of voting at 10 pm, predicting the number of seats for each party, with the Conservatives as the largest party but short of an overall majority. Actual results were close to the prediction.
Results for all constituencies except Kensington were reported by the morning after the election. The Conservatives remained the largest single party in terms of seats and votes, but were short of a parliamentary majority. The Conservatives won 317 seats with 42.4 % of the vote while the Labour Party won 262 seats with 40.0 % of the vote. The election resulted in the third hung parliament since the Second World War, the others having followed the elections of February 1974 and 2010. YouGov correctly estimated the result after employing "controversial '' methodology.
In England, Labour made a net gain of 21 seats, taking 25 constituencies from the Conservatives and two from the Liberal Democrats. Their gains were predominantly in London and university towns and cities, most notably achieving victories in Battersea, Canterbury, Kensington and Ipswich from the Conservatives by narrow margins; they also lost five seats to the Conservatives, largely in the Midlands, and were unable to regain Copeland which had been lost in a February by - election. The Conservatives experienced a net loss of 22 seats, the first time since 1997 that the party suffered a net loss of seats. They gained Clacton from UKIP and Southport from the Liberal Democrats in addition to the six gains from Labour. The Liberal Democrats took five seats from the Conservatives, including Twickenham, won back by Vince Cable, and Kingston and Surbiton, won by Ed Davey, but lost two seats to Labour: Leeds North West and Sheffield Hallam, the seat of former party leader Nick Clegg. Richmond Park, which the Liberal Democrats had won in a 2016 by - election, was narrowly lost to the Conservatives. Caroline Lucas remained the sole Green Party MP, retaining Brighton Pavilion.
In Scotland, the Conservatives, Labour and the Liberal Democrats all gained seats from the SNP, whose losses were attributed to opposition to a second Scottish independence referendum, contributing to tactical voting for unionist parties. The Conservatives placed second in Scotland for the first time since 1992, won their largest number of seats in Scotland since 1983 and recorded their highest share of the vote there since 1979. With thirteen seats, the Scottish Conservatives became the largest unionist party in Scotland for the first time since 1955. Labour gained six seats from the SNP while the Liberal Democrats gained three. Having won 56 of 59 Scottish seats at the last general election, the SNP lost a total of 21 seats, and majorities in their remaining seats were greatly reduced. High - profile losses included SNP Commons leader Angus Robertson in Moray and former party leader and First Minister Alex Salmond in Gordon.
In Wales, Labour held 25 seats and gained Cardiff North, Gower and Vale of Clwyd from the Conservatives, leaving the Welsh Tories with eight seats. Plaid Cymru retained their three existing seats and gained Ceredigion, the Lib Dems ' only seat in Wales.
In Northern Ireland the SDLP lost its three seats (Foyle and South Down to Sinn Féin and Belfast South to the DUP) while the UUP lost its two seats (Fermanagh and South Tyrone to Sinn Féin and South Antrim to the DUP). With the Alliance Party failing to win any seats or regain Belfast East, this left the DUP with ten seats (up from eight) and Sinn Féin with seven (up from four), while independent unionist Sylvia Hermon held North Down. Recording its best result since partition, Sinn Féin confirmed it would continue its abstentionist policy, leaving no nationalist MPs taking their seats in the House of Commons.
UKIP failed to win any seats, with its vote share falling from 12.6 % at the previous general election to just 1.8 %; party leader Paul Nuttall came third in Boston and Skegness. The Greens ' vote share dropped from 3.8 % to 1.6 %.
The result was noted for increased vote shares for Labour (up 9.6 percentage points) and the Conservatives (up 5.5 percentage points), with a combined 82.3 % share of the vote, up from 67.2 % in 2015. This was the highest combined share of the vote for the two main parties since 1970. It was suggested this indicated a return to two - party politics. The election was characterised by higher turnout, possibly among younger voters, which may have contributed to Labour 's increased vote share. Research company Ipsos Mori considered age to be one of the most significant factors behind the result; compared to the 2015 general election under - 45s tended to opt more for Labour and over-54s for the Conservatives. It found 60 % of those aged 18 - 24 voted Labour while 61 % of over-64s voted Conservative. The swing to Labour was high in those seats with large numbers of young people.
In terms of social grade, Labour increased its share of middle - class voters (defined as ABC1) by 12 percentage points compared to the previous election while the Conservatives increased their share of working - class voters (C2DE) by 12 percentage points. Political scientist John Curtice found that the Conservatives tended to experience a greater increase in vote share in seats with a higher proportion of working - class voters, particularly those that voted for Leave in the EU referendum. Many of Labour 's most successful results occurred in seats that voted Remain by a large margin in 2016.
Compared to previous elections, turnout for private renters increased (from 51 % in 2010 to 65 %) and favoured Labour to a greater degree, with the party achieving a 23 - point lead over the Conservatives among private renters; the Conservatives maintained a 14 - point lead among homeowners. In terms of education, YouGov found a one - point lead for the Conservatives among university graduates in 2015 had flipped to a 17 - point lead for Labour in 2017. For those with low educational qualifications, the Conservatives led by 22 points, up from 8 points in 2015.
It was suggested that UKIP 's decline boosted both main parties, but tended to help Labour retain seats in the North of England and the Midlands against the Conservatives, though it may have also benefited the Conservatives in predominantly working - class seats. Ipsos Mori found that UKIP 's collapse was consistent across all age groups.
Published in August 2017, the British Election Study (BES), which surveyed 30,000 voters, found that despite a relatively low profile in the campaign, Brexit was considered to be the single most important issue facing the country by over a third of respondents. It found more than half of UKIP voters in 2015 went to the Conservatives, while 18 % went to Labour. Remain voters tended to favour Labour, with the party particularly gaining among Remain voters who previously supported other parties, despite perceived uncertainty over its position on the single market. Voters who prioritised controlling immigration correlated strongly with voting Conservative, while those who prioritised access to the single market tended to opt for Labour or the Liberal Democrats.
The BES study indicated the importance of the campaign period. A pre-election survey found 41 % for the Conservatives and 27 % for Labour, but by the election 19 % of voters had switched party. Unlike the previous election, where both main parties achieved similar shares of late - switchers, Labour won 54 % of them while the Conservatives won 19 %. The gap in likeability between the party leaders also narrowed over the course of the campaign.
Newly elected MPs included Britain 's first turbaned Sikh MP, Tan Dhesi, the first woman Sikh MP, Preet Gill, and the first MP of Palestinian descent, Layla Moran.
A record number of women and LGBT+ MPs were elected. 208 women MPs were elected to Parliament, the first time more than 200 MPs were women, beating the previous high of 196 in the last Parliament. For the first time, a majority of MPs were educated at state comprehensive schools. More MPs who are known to be disabled were elected in 2017 than in 2015.
After all 650 constituencies had been declared, the results were:
Seats won and votes won by each party, as shares of the total.
Corbyn and Farron called on May to resign. On 9 June, May apologised to candidates who lost their seats and confirmed she would continue as party leader and prime minister, with the intention of forming a minority government with support from the Democratic Unionist Party in order to ensure "certainty ''. May 's joint chiefs of staff Nick Timothy and Fiona Hill resigned, replaced by Gavin Barwell, who had lost his seat in the election.
On 10 June, a survey of 1,500 ConservativeHome readers found that almost two thirds of Conservative Party members wanted Theresa May to resign. A YouGov poll of 1,720 adults for the Sunday Times had 48 % saying Theresa May should resign, with 38 % against. A Survation poll of 1,036 adults online for the Mail on Sunday had 49 % of people wanting her resignation, with 38 % against. On 11 June George Osborne, former Chancellor of the Exchequer, described May as a "dead woman walking ''.
In a post-election reshuffle carried out on 11 June, May promoted her close ally Damian Green to become First Secretary of State and brought Michael Gove into the cabinet as environment secretary, making Andrea Leadsom Leader of the House of Commons. Liz Truss, David Lidington and David Gauke changed roles, while eleven cabinet members including key figures such as Boris Johnson, Amber Rudd, Michael Fallon, Philip Hammond and David Davis remained in post.
Negotiations between the Conservatives and DUP started on 9 June. On 12 June it was reported that the State Opening of Parliament, scheduled for 19 June, could be delayed. DUP sources informed the BBC that the Grenfell Tower fire on 14 June would delay finalisation of an agreement. On 15 June it was announced that the Queen 's Speech would occur on 21 June. A confidence and supply deal was reached on 26 June, with the DUP backing the Conservatives in key votes in the House of Commons over the course of the parliament. The agreement included additional funding of £ 1 billion for Northern Ireland, highlighted mutual support for Brexit and national security, expressed commitment to the Good Friday Agreement, and indicated that policies such as the state pension triple lock and winter fuel payments would be maintained.
After achieving just 1.8 % of the popular vote, down from 12.7 % in 2015, and failing to win any seats, Paul Nuttall resigned as UKIP leader on 9 June. A leadership election followed.
Ian Blackford became the new SNP leader in Westminster on 14 June following Angus Robertson 's defeat.
On 14 June Brian Paddick resigned as home affairs spokesperson for the Liberal Democrats over concerns about Farron 's "views on various issues '' during the campaign. Later that day, Farron announced his resignation as leader of the Liberal Democrats, citing conflict between his Christian faith and serving as leader. He remained as leader until Sir Vince Cable was elected unopposed on 20 July.
The Conservative Party campaign was widely criticised by those within and outwith the party. Points of criticism included the initial decision to call the election (against which Lynton Crosby had advised); the control of the campaign by a small team of May 's joint chiefs of staff Nick Timothy and Fiona Hill, who were more experienced with policy work than campaigning; the presidential style of the campaign focusing on the figure of Theresa May, while most of the Cabinet were sidelined (particularly the exclusion of Chancellor of the Exchequer Hammond, with reports that May would sack him after the election); and a poorly designed manifesto that offered little hope and the contents of which were not shared with Cabinet members until shortly before its release. In July, Prime Minister Theresa May admitted she had "shed a tear '' upon seeing the election exit poll, and suggested the manifesto 's lack of appeal to younger voters had played a part in Conservative shortcomings.
|
when did they stop giving the global war on terrorism award | Global War on Terrorism Service Medal - Wikipedia
The Global War on Terrorism Service Medal (GWOT - SM) is a military award of the United States Armed Forces which was created through Executive Order 13289 on 12 March 2003, by President George W. Bush. The medal recognizes those military service members who have supported operations to counter terrorism in the War on Terror from 11 September 2001, to a date yet to be determined.
In September 2002, the U.S. Department of Defense sent a request to the U.S. Army Institute of Heraldry to provide a design for a Global War on Terrorism Service Medal. In January 2003, a design was completed, which was then approved and made official in March 2003.
According to the U.S. Department of Defense, the Global War on Terrorism Service Medal will cease being awarded when Presidential Proclamation 7463, "Declaration of National Emergency by Reason of Certain Terrorist Attacks '', delivered on 14 September 2001, is terminated by the U.S. government.
The following are the seven established operations for the Global War on Terrorism Service Medal recognized by the Department of Defense:
The Coast Guard awards the medal for different operations (see below).
To receive the Global War on Terrorism Service Medal, a military service member must have served on active duty during a designated anti-terrorism operation for a minimum of 30 consecutive or 60 non-consecutive days. For those who were engaged in combat, killed, or wounded in the line of duty, the time requirement is waived.
The initial authorized operation for the Global War on Terrorism Service Medal was the so - called "Airport Security Operation '' which occurred between 27 September 2001 and 31 May 2002. Additional operations, for which the Global War on Terrorism Service Medal is authorized, include the active military campaigns of Operation Enduring Freedom, Operation Noble Eagle, and Operation Iraqi Freedom. Future operations are at the discretion of United States component commanders upon approval from the United States Department of Defense.
In 2004, the Defense Department and military service branches began publishing directives, messages, and orders specifying that the Global War on Terrorism Service Medal would be awarded not only for direct participation in specific operations, but also to any personnel who performed support duty for an anti-terrorism operation without directly participating. The phrase "support '' was further defined as any administrative, logistics, planning, operational, technical, or readiness activity that provided support to an operation of the Global War on Terrorism. As a result of this blanket term, the Global War on Terrorism Service Medal became an eligible award for most personnel of the United States Armed Forces who performed service from 11 September 2001 through March 2004.
With the orders granting the GWOT - SM for "support duty '', the medal has become essentially the same type of award as the National Defense Service Medal, and graduates of training schools, ROTC, and service academies are typically presented both awards at the same time. The primary difference between the NDSM and the GWOT - SM is that the NDSM is automatic as soon as a person joins the military, whereas the GWOT - SM may only be presented after thirty days of active duty in a unit (or three months in the case of the Reserve Component). The regulations for Reservists and National Guardsmen are also not as well defined for the GWOT - SM as they are for the NDSM, since the presentation of the NDSM to reservists and National Guardsmen was codified and clarified as far back as the Persian Gulf War.
The U.S. Army 's regulations state that all soldiers "on active duty, including Reserve Component Soldiers (sic) mobilized, or Army National Guard Soldiers (sic) activated on or after 11 September 2001 to a date to be determined having served 30 consecutive days or 60 nonconsecutive days are authorized the GWOTSM. '' The GWOT - SM was awarded automatically to all service members on Active Duty between 11 September 2001 and 31 March 2004. While the award is no longer automatic, the termination "date to be determined '' has not been set. The Battalion Commander is the approval authority for the GWOT - SM. Service members are still eligible for the medal provided they meet the criteria in AR 600 - 8 - 22.
U.S. Army soldiers serving on active duty primarily in a training status (basic training, advanced individual training, officer training courses, etc.) are not authorized award of the GWOT - SM for the active duty time they are in training. The criteria for the award specifically state that a soldier has to serve on active duty in support of a designated GWOT operation (Operation Noble Eagle ("ONE ''), Operation Enduring Freedom ("OEF ''), Operation Iraqi Freedom ("OIF ''), Operation New Dawn ("OND ''), Operation Inherent Resolve ("OIR ''), and Operation Freedom 's Sentinel ("OFS '')) for 30 consecutive days or 60 nonconsecutive days. Army soldiers in a training status are not considered to be supporting these designated operations.
Regulations for rating the GWOT - SM are the same in the Navy, the Marine Corps, and Military Sealift Command for those who serve on active duty, on reserve duty, or in a support role: essentially, 30 days of consecutive duty or 60 days of non-consecutive duty in support of approved organizations. Personnel who are still in their initial career training are not eligible; eligibility begins when they reach their first permanent duty station. Civilian Mariners (CIVMARs) attached to Military Sealift Command 's supply ships may be eligible for the Global War on Terrorism Civilian Service Medal.
Air Force service members were first awarded the GWOT - SM for conducting airport security operations in the fall and winter of 2001. It was subsequently awarded for participation or support of Operations Noble Eagle, Enduring Freedom, and Iraqi Freedom. Members must be assigned, attached or mobilized to a unit participating in or serving in support of these designated operations for thirty consecutive days or sixty nonconsecutive days. Personnel who are not deployed may be eligible for service in support of the Global War on Terrorism. Examples of these duties are maintaining and loading weapons systems for combat missions, securing installations against terrorism, augmenting command posts or crisis action teams, and processing personnel for deployment.
Coast Guard regulations concerning the award of the GWOT - SM, "From 11 September 2001 to 30 January 2005 '': Awarded to all Coast Guard active duty and reserve members on active duty during the eligibility period. To qualify, members must have served on active duty for a period of not less than 30 consecutive days or 60 non-consecutive days following initial accession point training. Service while assigned to training duty as a student, cadet, officer candidate, and duty under instruction (DUINS), does not count toward eligibility. This includes both training and summer cruises for the U.S. Coast Guard Academy and Officer Candidate School. For reservists, "active duty '' includes ADT and IDT service in an operational vice classroom setting.
From 31 January 2005 to a date to be determined: Eligible service members must be or have been assigned, attached, or mobilized to a unit participating in or serving in direct support of specified Global War on Terrorism operations (e.g., NOBLE EAGLE, LIBERTY SHIELD, NEPTUNE SHIELD, PORT SHIELD, ENDURING FREEDOM, IRAQI FREEDOM, or Area Commander - designated GWOT operations) for 30 consecutive or 60 cumulative days, or meet one of the following criteria: (a) Be engaged in actual combat regardless of time served in the operation; or (b) While participating in the operation, regardless of time, be killed, wounded, or injured requiring medical evacuation. ''
The medal is a bronze color metal disc 1.25 inches in diameter. The obverse depicts an eagle with spread wings. On the breast of the eagle is a shield of thirteen vertical bars. In the eagle 's right claw is an olive branch and in the left claw are three arrows. The eagle is surmounted by a terrestrial globe with the inscription above "WAR ON TERRORISM SERVICE MEDAL. '' On the reverse is a laurel wreath on a plain field. The medal is suspended from an Old Glory Blue ribbon 1.375 inches wide with stripes of golden yellow, scarlet and white.
Only one award of this medal may be authorized for any individual; no bronze or silver 3⁄16 inch service stars are prescribed for second or subsequent awards.
Although qualifying circumstances would be extremely rare, bronze 3⁄16 inch battle stars were applicable for personnel who were engaged in actual combat against the enemy involving grave danger of death or serious bodily injury. Only a Combatant Command could initiate a request for a GWOT - SM (or Global War on Terrorism Expeditionary Medal) battle star. This request would have contained the specific unit (s) or individual (s) engaged in actual combat, the duration for which combat was sustained, and a detailed description of the actions against the enemy. The Chairman of the Joint Chiefs of Staff was the approving authority for the specific battle stars.
To date there have been no battle stars authorized for the Global War on Terrorism Service Medal. The Military Decorations and Awards Review Results released in January 2016 resolved to "eliminate authority for battle stars '' in regard to the GWOT - SM.
|
how many states have a town called springfield | List of the Most Common U.S. place names - wikipedia
This is a list of the most common U.S. place names (cities, towns, villages, boroughs and census - designated places (CDP)), with the number of times that name occurs (in parentheses). Some states have more than one occurrence of the same name. Cities with populations over 100,000 are in bold.
The United States Postal Service has published a list of the most common city and post office names within the United States, as of 2017:
Notes: there are 4 Centerville, Wisconsin locations; Riverside, Connecticut is a neighborhood.
|
when did india become a direct colony of britain | Colonial India - wikipedia
Colonial India was the part of the Indian subcontinent which was under the jurisdiction of European colonial powers during the Age of Discovery. European power was exerted both by conquest and trade, especially in spices. The search for the wealth and prosperity of India led to the discovery of the Americas by Christopher Columbus in 1492. Only a few years later, near the end of the 15th century, Portuguese sailor Vasco da Gama became the first European to re-establish direct trade links with India since Roman times, reaching it by circumnavigating Africa (c. 1497 -- 1499). Having arrived in Calicut, which by then was one of the major trading ports of the eastern world, he obtained permission to trade in the city from the Saamoothiri Rajah.
Trading rivalries among the seafaring European powers brought other European powers to India. The Dutch Republic, England, France, and Denmark - Norway all established trading posts in India in the early 17th century. As the Mughal Empire disintegrated in the early 18th century, and then as the Maratha Empire became weakened after the third battle of Panipat, many relatively weak and unstable Indian states which emerged were increasingly open to manipulation by the Europeans, through dependent Indian rulers.
In the later 18th century Great Britain and France struggled for dominance, partly through proxy Indian rulers but also by direct military intervention. The defeat of the redoubtable Indian ruler Tipu Sultan in 1799 marginalised the French influence. This was followed by a rapid expansion of British power through the greater part of the Indian subcontinent in the early 19th century. By the middle of the century the British had already gained direct or indirect control over almost all of India. British India, consisting of the directly - ruled British presidencies and provinces, contained the most populous and valuable parts of the British Empire and thus became known as "the jewel in the British crown ''.
Long after the decline of the Roman Empire 's sea - borne trade with India, the Portuguese were the next Europeans to sail there for the purpose of trade, first arriving by ship in May 1498. The closing of the traditional trade routes in western Asia by the Ottoman Empire, and rivalry with the Italian states, sent Portugal in search of an alternate sea route to India. The first successful voyage to India was by Vasco da Gama in 1498, when after sailing around the Cape of Good Hope he arrived in Calicut, now in Kerala. Having arrived there, he obtained from Saamoothiri Rajah permission to trade in the city. The navigator was received with traditional hospitality, but an interview with the Saamoothiri (Zamorin) failed to produce any definitive results. Vasco da Gama requested permission to leave a factor behind in charge of the merchandise he could not sell; his request was refused, and the king insisted that Gama should pay customs duty like any other trader, which strained their relations.
Though the Portuguese presence in India began in 1498, Portuguese colonial rule lasted from 1505 to 1961. The Portuguese Empire established the first European trading centre at Kollam, Kerala. In 1505 King Manuel I of Portugal appointed Dom Francisco de Almeida as the first Portuguese viceroy in India, followed in 1509 by Dom Afonso de Albuquerque. In 1510 Albuquerque conquered the city of Goa, which had been controlled by Muslims. He inaugurated the policy of marrying Portuguese soldiers and sailors with local Indian girls, the consequence of which was a great miscegenation in Goa and other Portuguese territories in Asia. Another feature of the Portuguese presence in India was their will to evangelise and promote Catholicism. In this, the Jesuits played a fundamental role, and to this day the Jesuit missionary Saint Francis Xavier is revered among the Catholics of India.
The Portuguese established a chain of outposts along India 's west coast and on the island of Ceylon in the early 16th century. They built the St. Angelo Fort at Kannur to guard their possessions in North Malabar. Goa was their prized possession and the seat of Portugal 's viceroy. Portugal 's northern province included settlements at Daman, Diu, Chaul, Baçaim, Salsette, and Mumbai. The rest of the northern province, with the exception of Daman and Diu, was lost to the Maratha Empire in the early 18th century.
In 1661 Portugal was at war with Spain and needed assistance from England. This led to the marriage of Princess Catherine of Portugal to Charles II of England, who imposed a dowry that included the insular and less inhabited areas of southern Bombay while the Portuguese managed to retain all the mainland territory north of Bandra up to Thana and Bassein. This was the beginning of the English presence in India.
The Dutch East India Company established trading posts on different parts along the Indian coast. For some while, they controlled the Malabar southwest coast (Pallipuram, Cochin, Cochin de Baixo / Santa Cruz, Quilon (Coylan), Cannanore, Kundapura, Kayamkulam, Ponnani) and the Coromandel southeastern coast (Golkonda, Bhimunipatnam, Kakinada, Palikol, Pulicat, Parangippettai, Negapatnam) and Surat (1616 -- 1795). They conquered Ceylon from the Portuguese. The Dutch also established trading stations in Travancore and coastal Tamil Nadu as well as at Rajshahi in present - day Bangladesh, Pipely, Hugli - Chinsura, and Murshidabad in present - day West Bengal, Balasore (Baleshwar or Bellasoor) in Odisha, and Ava, Arakan, and Syriam in present - day Myanmar (Burma). Ceylon was lost at the Congress of Vienna in the aftermath of the Napoleonic Wars, where the Dutch having fallen subject to France, saw their colonies raided by Britain. The Dutch later became less involved in India, as they had the Dutch East Indies (now Indonesia) as their prized possession.
At the end of the 16th century, England and the United Netherlands began to challenge Portugal 's monopoly of trade with Asia, forming private joint - stock companies to finance the voyages: the English (later British) East India Company, and the Dutch East India Company, which were chartered in 1600 and 1602 respectively. These companies were intended to carry on the lucrative spice trade, and they focused their efforts on the areas of production, the Indonesian archipelago and especially the "Spice Islands '', and on India as an important market for the trade. The close proximity of London and Amsterdam across the North Sea, and the intense rivalry between England and the Netherlands, inevitably led to conflict between the two companies, with the Dutch gaining the upper hand in the Moluccas (previously a Portuguese stronghold) after the withdrawal of the English in 1622, but with the English enjoying more success in India, at Surat, after the establishment of a factory in 1613.
The Netherlands ' more advanced financial system and the three Anglo - Dutch Wars of the 17th century left the Dutch as the dominant naval and trading power in Asia. Hostilities ceased after the Glorious Revolution of 1688, when the Dutch prince William of Orange ascended the English throne, bringing peace between the Netherlands and England. A deal between the two nations left the more valuable spice trade of the Indonesian archipelago to the Netherlands and the textiles industry of India to England, but textiles overtook spices in terms of profitability, so that by 1720, in terms of sales, the English company had overtaken the Dutch. The English East India Company shifted its focus from Surat -- a hub of the spice trade network -- to Fort St. George.
In 1757 Mir Jafar, the commander in chief of the army of the Nawab of Bengal, along with Jagat Seth, Maharaja Krishna Nath, Umi Chand and some others, secretly connived with the British, asking support to overthrow the Nawab in return for trade grants. The British forces, whose sole duty until then was guarding Company property, were numerically inferior to the Bengali armed forces. At the Battle of Plassey on 23 June 1757, fought between the British under the command of Robert Clive and the Nawab, Mir Jafar 's forces betrayed the Nawab and helped defeat him. Jafar was installed on the throne as a British subservient ruler. The battle transformed British perspective as they realised their strength and potential to conquer smaller Indian kingdoms and marked the beginning of the imperial or colonial era in South Asia.
British policy in Asia during the 19th century was chiefly concerned with expanding and protecting its hold on India, viewed as its most important colony and the key to the rest of Asia. The East India Company drove the expansion of the British Empire in Asia. The company 's army had first joined forces with the Royal Navy during the Seven Years ' War, and the two continued to cooperate in arenas outside India: the eviction of Napoleon from Egypt (1799), the capture of Java from the Netherlands (1811), the acquisition of Singapore (1819) and Malacca (1824), and the defeat of Burma (1826).
From its base in India, the company had also been engaged in an increasingly profitable opium export trade to China since the 1730s. This trade, unlawful in China since it was outlawed by the Qing dynasty in 1729, helped reverse the trade imbalances resulting from the British imports of tea, which saw large outflows of silver from Britain to China. In 1839, the confiscation by the Chinese authorities at Canton of 20,000 chests of opium led Britain to attack China in the First Opium War, and the seizure by Britain of the island of Hong Kong, at that time a minor settlement.
The British had direct or indirect control over all of present - day India before the middle of the 19th century. In 1857, a local rebellion by an army of sepoys escalated into the Rebellion of 1857, which took six months to suppress with heavy loss of life on both sides; while the loss of British lives was in the range of a few thousand, the loss on the Indian side was in the hundreds of thousands. The trigger for the Rebellion has been a subject of controversy. The resistance, although short - lived, was triggered by British East India Company attempts to expand its control of India. According to Olson, several reasons may have triggered the Rebellion. For example, Olson concludes that the East India Company 's attempt to annex and expand its direct control of India, by arbitrary laws such as the Doctrine of Lapse, combined with employment discrimination against Indians, contributed to the 1857 Rebellion. The East India Company officers lived like princes, the company finances were in shambles, and the company 's effectiveness in India was examined by the British crown after 1858. As a result, the East India Company lost its powers of government and British India formally came under direct British rule, with an appointed Governor - General of India. The East India Company was dissolved the following year in 1858. A few years later, Queen Victoria took the title of Empress of India.
India suffered a series of serious crop failures in the late 19th century, leading to widespread famines in which at least 10 million people died. Responding to earlier famines as threats to the stability of colonial rule, the East India Company had already begun to concern itself with famine prevention during the early colonial period. This greatly expanded during the Raj, in which commissions were set up after each famine to investigate the causes and implement new policies, which took until the early 1900s to have an effect.
The slow but momentous reform movement developed gradually into the Indian Independence Movement. During the years of World War I, the hitherto bourgeois "home - rule '' movement was transformed into a popular mass movement by Mahatma Gandhi, a pacifist. Apart from Gandhi, other revolutionaries such as Bagha Jatin, Khudiram Bose, Bhagat Singh, Chandrashekar Azad, Surya Sen, Subhas Chandra Bose, and Pradyumn Ananth Pendyala were not against use of violence to oppose the British rule. The independence movement attained its objective with the independence of Pakistan and India on 14 and 15 August 1947 respectively.
Conservative elements in England consider the independence of India to be the moment that the British Empire ceased to be a world power, following Curzon 's dictum that, "(w)hile we hold on to India, we are a first - rate power. If we lose India, we will decline to a third - rate power. ''
Following the Portuguese, English, and Dutch, the French also established trading bases in India. Their first establishment was in Pondicherry on the Coromandel Coast in southeastern India in 1674. Subsequent French settlements were Chandernagore in Bengal, northeastern India in 1688, Yanam in Andhra Pradesh in 1723, Mahe in 1725, and Karaikal in 1739. The French were constantly in conflict with the Dutch and later on mainly with the British in India. At the height of French power in the mid-18th century, the French occupied large areas of southern India and the area lying in today 's northern Andhra Pradesh and Odisha. Between 1744 and 1761, the British and the French repeatedly attacked and conquered each other 's forts and towns in southeastern India and in Bengal in the northeast. After some initial French successes, the British decisively defeated the French in Bengal in the Battle of Plassey in 1757 and in the southeast in 1761 in the Battle of Wandiwash, after which the British East India Company was the supreme military and political power in southern India as well as in Bengal. In the following decades it gradually increased the size of the territories under its control. The enclaves of Pondichéry, Karaikal, Yanam, Mahé and Chandernagore were returned to France in 1816 and were integrated with the Republic of India in 1954.
Denmark -- Norway held colonial possessions in India for more than 200 years, but the Danish presence in India was of little significance to the major European powers as they presented neither a military nor a mercantile threat. Denmark -- Norway established trading outposts in Tranquebar, Tamil Nadu (1620), Serampore, West Bengal (1755), Calicut, Kerala (1752) and the Nicobar Islands (1750s). At one time, the main Danish and Swedish East Asia companies together imported more tea to Europe than the British did. Their outposts lost economic and strategic importance, and Tranquebar, the last Dano - Norwegian outpost, was sold to the British on 16 October 1868.
The Spanish were briefly given territorial rights to India by Pope Alexander VI on 25 September 1493 by the bull Dudum siquidem before these rights were removed by the Treaty of Tordesillas less than one year later. The Andaman and Nicobar Islands were briefly occupied by the Japanese Empire during World War II.
The wars that took place involving the British East India Company or British India during the Colonial era:
|
captain america's love interest in the first movie | Peggy Carter - wikipedia
Margaret "Peggy '' Carter is a fictional character appearing in American comic books published by Marvel Comics. She is usually depicted as a supporting character in books featuring Captain America. Created by writer Stan Lee and artist Jack Kirby, she first appeared in Tales of Suspense # 77 as a World War II love interest of Steve Rogers in flashback sequences. She would later be better known as a relative of Captain America 's modern - day significant other Sharon Carter.
Hayley Atwell portrays the character in the Marvel Cinematic Universe, beginning with the 2011 film Captain America: The First Avenger, and continuing in the Marvel One - Shot Agent Carter, the 2014 film Captain America: The Winter Soldier, the television series Marvel 's Agents of S.H.I.E.L.D. and Marvel 's Agent Carter, and the 2015 films Avengers: Age of Ultron and Ant - Man.
The character first appeared, unnamed, as a wartime love interest of Captain America in Tales of Suspense # 75 (single panel) and # 77 (May 1966), by writer Stan Lee and artist Jack Kirby. She appeared again as the older sister of Sharon Carter in Captain America # 161 (May 1973). She was later retconned as Sharon 's aunt due to the unaging nature of comic book characters (see Captain America Vol. 5 # 25 (April 2007)). The character has appeared frequently in Captain America stories set during World War II.
An unnamed blonde British agent, Agent Zero, is rescued from Berlin by the Young Allies after her capture by the Red Skull and then joins up with Captain America in Young Allies # 1 (Summer 1941).
Peggy Carter joins the French Resistance as a teenager and becomes a skilled fighter, who serves on several operations alongside Captain America. The two fall in love, but an exploding shell gives her amnesia, and she is sent to live with her parents in Virginia.
With Captain America thought dead, she lives a quiet life for many years. After Captain America reemerges in the present day, unaged after having been in suspended animation, Carter encounters the villain Doctor Faustus, whom Captain America saves her from.
Peggy Carter is portrayed by Hayley Atwell in the Marvel Cinematic Universe. This version is depicted as a British agent rather than an American.
|
the devil is a part timer season 1 episode 14 english dub | List of the Devil is a Part - Timer! Episodes - wikipedia
The Devil Is a Part - Timer! is a 2013 fantasy comedy Japanese anime series based on the light novels written by Satoshi Wagahara. In another dimension, the Dark Lord Satan and his forces of evil are defeated by the Hero Emilia Justina. Satan and his Demon general Alciel are forced to flee through a portal which drops them off in modern day Japan. With their magic slowly depleting in an unfamiliar world, they are forced to assume the lives of normal human beings in order to survive. The Hero Emilia Justina follows them through the portal and she too is met with the same circumstances. Although Emilia still harbors negative feelings towards Satan for his past acts of evil, they become unlikely allies in order to survive.
The anime is produced by White Fox and directed by Naoto Hosoda, with series composition by Masahiro Yokotani, character designs by Atsushi Ikariya, art direction by Yoshito Takamine and sound direction by Jin Aketagawa. The thirteen - episode series aired from April 4 to June 27, 2013 on Tokyo MX and was later broadcast on KBS, SUN - TV, BS Nittele, TVA and AT - X. Pony Canyon released the series in Japan on six Blu - ray and DVD volumes starting on July 3, 2013. The anime was acquired by Funimation for streaming in North America. Manga Entertainment later licensed the series for distribution in the United Kingdom. This was followed by an acquisition by Siren Visual for home media distribution in Australia and New Zealand and online streaming on AnimeLab in 2014.
The opening theme is "Zero!! '' by Minami Kuribayashi while the ending theme is "Gekka '' (月 花, lit. "Moon Flower '') by Nano Ripe. "Star Chart '' (スター チャート) by Nano Ripe is used as the ending theme of episode 5 while "Tsumabiku Hitori '' (ツマビクヒトリ) by Nano Ripe is used as the ending theme of episode 13.
Chiho has a strange dream in which Satan leaves Earth and returns to Ente Isla, leaving her behind, much to her distress. When she recalls the dream to Sadao the next day, he confirms that he does n't have enough magic to return, since he used it all repairing the city after his and Sariel 's battle the previous day. At the same time, they discover Sariel in MgRonald 's freezer, since Sadao 's magical bamboo had redirected his portal. Sariel immediately falls head over heels for Mayumi, much to her disgust. Meanwhile, Emi also has a strange dream, in which Satan conquers Sasazuka. When she goes to visit Sadao and company, Shirō asks Sadao for some time off, which makes her suspicious and prompts her to enlist Emeralda to see what they might be up to. Later, Emi notices numerous boxes being delivered to Sadao 's apartment and grows even more suspicious. At work, Sadao suddenly asks Mayumi for a shift change, which causes Chiho to fear that he might return to Ente Isla, but Sadao apologetically keeps their motives hidden from her. Chiho and Emi decide to follow Sadao the next day to find out what they have been up to and discover that he, along with Shirō, had taken extra jobs to pay for Hanzo 's GPS trackers, which had saved them from Sariel. As Emi questions Sadao about the boxes, they return to the Villa Rosa to discover that Hanzo had been caught in a purchasing scam whereby he was tricked into buying useless items. Sadao and Emi head over to the retailer to claim a refund and discover that Shirō had been working there, along with a threatening manager who refuses a refund due to a contract Hanzo had signed. Emi is able to resolve the situation via a cooling - off period. Eventually, Shirō and Sadao celebrate their success by going out to a restaurant, while everyone goes back to living normally in the human world.
Pony Canyon began releasing the series in Japan on Blu - ray and DVD volumes starting on July 3, 2013. The complete series was released on Blu - ray and DVD format by Funimation on July 22, 2014, Siren Visual on September 17, 2014 and Manga Entertainment on October 27, 2014. These releases contained English and Japanese audio options and English subtitles.
|
where are they playing the armed forces bowl | Armed Forces Bowl - wikipedia
The Armed Forces Bowl, officially the Lockheed Martin Armed Forces Bowl for sponsorship purposes, is an annual postseason college football bowl game. The game is played in the 44,008 - seat Amon G. Carter Stadium on the campus of Texas Christian University in Fort Worth, Texas, featuring teams from a variety of collegiate football conferences; in addition, the independent United States Military Academy (Army) is also eligible to participate. The game was initially known as the Fort Worth Bowl (2003 -- 2005).
The contest is one of 14 bowls produced by ESPN Events (previously ESPN Regional Television) and has been televised annually on ESPN since its inception. Armed Forces Insurance is the official Insurance Partner of the Armed Forces Bowl and has sponsored the Great American Patriot Award, presented at halftime at the Bowl, since 2006.
The bowl game was inaugurated in 2003 as the PlainsCapital Fort Worth Bowl, reflecting the sponsorship of PlainsCapital Bank. The bank 's sponsorship ended in 2004, and the 2005 game was without corporate sponsorship.
In 2006, Fort Worth - based Bell Helicopter Textron took over sponsorship, and thus the game became officially known as the Bell Helicopter Armed Forces Bowl. The Bell sponsorship ended in 2013. During this time, the 2010 and 2011 editions of the Armed Forces Bowl were held at Gerald J. Ford Stadium on the campus of Southern Methodist University in the Dallas enclave of University Park, while Amon G. Carter Stadium was undergoing a major renovation. The game returned to Amon Carter Stadium in Fort Worth in 2012, after construction on that stadium was completed.
Since the name was changed to the Armed Forces Bowl, one of the three FBS - playing service academies (Army, Navy, and Air Force) has appeared in the game nine times, in twelve playings through the 2017 game. Contractual tie - ins with the American Athletic Conference (home of Navy), the Mountain West Conference (home of Air Force) and independent Army assures that one of those schools could appear in the game every year, if bowl eligible and not already committed to another bowl.
Alltel was to assume the title sponsorship and naming rights to the game beginning in 2014, which would have been titled the Alltel Wireless Bowl to promote its mobile division, but the deal fell through. Instead, Lockheed Martin became the game 's sponsor. The company has a major presence in the Dallas - Fort Worth Metroplex: the company 's Lockheed Martin Aeronautics division is based in Fort Worth while its Lockheed Martin Missiles and Fire Control division is based in nearby Grand Prairie, Texas.
The bowl 's partnership with the Big 12 Conference ended with the 2005 season. From 2006 to 2009, the Mountain West Conference was signed to provide a team to face either a team from the Pacific - 10 Conference or Conference USA (depending on the year; Pac - 10 teams would play in odd - numbered years while C - USA teams would play in even - numbered years). As such, the 2006 and 2008 games featured Conference USA teams Tulsa and Houston, respectively, whereas California represented the Pac - 10 in 2007. The Pac - 10 was unable to send a representative to the game in 2009, so Conference USA sent Houston to the game for a second consecutive year. In 2010, since the Mountain West did not have enough eligible teams and Army was bowl eligible, Army played SMU in the Armed Forces Bowl.
Following the 2013 football season, the Armed Forces Bowl signed multi-year agreements with the American Athletic Conference, Big Ten Conference, Big 12 Conference, Mountain West Conference, Army and Navy to set bowl match - ups for the next six seasons (Navy would later join the American Athletic Conference).
Starting with the 2008 game, two MVPs are selected, one from each team.
Won: Boise State, BYU, Cincinnati, Kansas, Louisiana Tech, Rice, Utah
Lost: Marshall, Middle Tennessee, Pittsburgh, San Diego State, SMU, TCU
Through the December 2017 playing, there have been 15 games (30 total appearances).
|
who wrote the passionate shepherd to his love | The Passionate Shepherd to his love - wikipedia
The Passionate Shepherd to His Love, known for its first line "Come live with me and be my love '', is a poem written by the English poet Christopher Marlowe and published in 1599 (six years after the poet 's death). In addition to being one of the best - known love poems in the English language, it is considered one of the earliest examples of the pastoral style of British poetry in the late Renaissance period. It is composed in iambic tetrameter (four feet of unstressed / stressed syllables), with seven (sometimes six, depending on the version) stanzas each composed of two rhyming couplets. It is often used for scholastic purposes for its regular meter and rhythm.
The poem was the subject of a well - known "reply '' by Walter Raleigh, called "The Nymph 's Reply to the Shepherd ''. The interplay between the two poems reflects the relationship that Marlowe had with Raleigh. Marlowe was young, his poetry romantic and rhythmic, and in the Passionate Shepherd he idealises the love object (the Nymph). Raleigh was an old courtier and an accomplished poet himself. His attitude is more jaded, and in writing "The Nymph 's Reply, '' it is clear that he is rebuking Marlowe for being naive and juvenile in both his writing style and the Shepherd 's thoughts about love. Subsequent responses to Marlowe have come from John Donne, C. Day Lewis, William Carlos Williams, Ogden Nash, W.D. Snodgrass, Douglas Crase and Greg Delanty, and Robert Herrick.
In about 1846 the composer William Sterndale Bennett set the words as a four - part madrigal. The poem was adapted for the lyrics of the 1930s - style swing song performed by Stacey Kent at the celebratory ball in the 1995 film of William Shakespeare 's Richard III. It was also the third of the Liebeslieder Polkas for Mixed Chorus and Piano Five Hands, supposedly written by fictional composer P.D.Q. Bach (Peter Schickele) and performed by the Swarthmore College Chorus in 1980. Other songs to draw lyrics from the poem include The Prayer Chain song "Antarctica '' (1996) from the album of the same name, and The Real Tuesday Weld song "Let It Come Down '' from their album The Last Werewolf (2011).
|
who sang do you love me in the sixties | Do You Love Me - wikipedia
"Do You Love Me '' is a 1962 hit single recorded by The Contours for Motown 's Gordy Records label. Written and produced by Motown CEO Berry Gordy, Jr., "Do You Love Me? '' was the Contours ' only Top 40 single on the Billboard Hot 100 chart in the United States. Notably, the record achieved this feat twice, once in 1962 and again in 1988. A main point of the song is to name the Mashed Potato, The Twist, and a variation of the title "I like it like that '', as "You like it like this '', and many other fad dances of the 1960s.
The song is noted for the spoken recitation heard in the introduction which goes: "You broke my heart / ' Cause I could n't dance / You did n't even want me around / And now I 'm back / To let you know / I can really shake ' em down ''
The song is noted for its false ending at 2:26.
Berry Gordy wrote "Do You Love Me '' with the intention that The Temptations, who had no Top 40 hits to their name yet, would record it. However, when Gordy wanted to locate the group and record the song, they were nowhere to be found (the Temptations had not been made aware of Gordy 's intentions and had departed Motown 's Hitsville USA recording studio for a local Detroit gospel music showcase).
After spending some time looking for the Temptations, Gordy ran into the Contours (Billy Gordon, Hubert Johnson, Billy Hoggs, Joe Billingslea, Sylvester Potts, and guitarist Hugh Davis) in the hallway. Wanting to record and release "Do You Love Me '' as soon as possible, Gordy decided to let them record his "sure - fire hit '' instead of the Temptations. The Contours, who were in danger of being dropped from the label after their first two singles ("Whole Lotta ' Woman '' and "The Stretch '') failed to chart, were so elated at Gordy 's offer that they immediately began hugging and thanking him.
"Do You Love Me, '' the fifth release on Gordy Records, became a notably successful dance record, built around Gordon 's screaming vocals. Selling over a million copies, "Do You Love Me '' peaked at number three on the Billboard Hot 100 for three weeks starting on October 20, 1962 and was a number - one hit on the Billboard R&B Singles chart. An album featuring the single, Do You Love Me (Now That I Can Dance), was also released. None of the Contours ' future singles lived up to the success of "Do You Love Me '', although its success won the group a headlining position on Motown 's very first Motor Town Revue tour.
Like many American R&B songs of the 1960s, "Do You Love Me? '' was covered by a number of British Invasion groups. Three British groups who recorded their own versions of the song were Brian Poole and the Tremeloes (who hit number one with it in the UK Singles Chart after learning it from Liverpool 's Faron 's Flamingos), the Dave Clark Five, and The Hollies on their 1964 album Stay with the Hollies. The song has also been covered by The Sonics, The Kingsmen, Paul Revere & the Raiders, and Johnny Thunders and the Heartbreakers. In 1965, Bob Marley and the Wailers recorded the song "Playboy, '' which incorporates "Do You Love Me '' 's chorus. The song was covered by the English glam rock band Mud for their album Mud Rock (1974). The song was one of the highlights of The Blues Brothers ' live set. Bruce Springsteen frequently ended his shows in the mid-1980s with the song, as part of a medley with "Twist and Shout ''. Alvin and the Chipmunks covered the song for their 1988 album The Chipmunks and The Chipettes: Born to Rock. Indie band Steadman has a well - known cover of the song, played in the style of the Dave Clark Five. Andy Fraser covered the song in the style of the 1980s back in 1984. Westlife performed a live version for their The Greatest Hits Tour in 2003. German girl group Preluders covered the song for their cover album Prelude to History in 2004. In 1992 David Hasselhoff famously sang the song in an episode of Baywatch. In November 2013, The Overtones covered the song for their album Saturday Night at the Movies. In February 2015, Chester See & Andy Lange uploaded a cover of "Do You Love Me '' to See 's YouTube channel. In 2017 Colt Prattes, Nicole Scherzinger, and J. Quinton Johnson covered the song for ABC 's Dirty Dancing movie remake.
"Do You Love Me '' is featured prominently in the 1987 film Dirty Dancing, reviving the record 's popularity. Re-issued as a single from the More Dirty Dancing soundtrack album, "Do You Love Me '' became a hit for the second time, peaking at number eleven on the Billboard Hot 100 in August 1988. The Contours, by then composed of Joe Billingslea and three new members, joined Ronnie Spector and Bill Medley, among others, on a ' Dirty Dancing Tour ' resulting from the success of the film. The song also appeared in the episode "The End '' in season 5 of TV series Supernatural. The song was also made into a music video in the Tiny Toon Adventures episode "Toon TV ''. David Hasselhoff performed it on Baywatch in the 1992 episode "The Reunion ''. He later performed "Do You Love Me '' with Kids Incorporated in 1984 in the Season 1 episode "School 's For Fools ''. Kids Incorporated covered "Do You Love Me '' in 1991 during the Season 7 episode "Teen Spotlight ''.
With the re-release of the single in 1988, Motown also released a 12 '' maxi - single (Motown 68009) with an extended dance remix, running 6:26. The remix was also included in the late 1988 Motown CD reissue of the album Do You Love Me (Now That I Can Dance) on Motown 37463 - 5415 - 2. This remix only appears on the CD and cassette tape issues, as the vinyl LP of the same release has the original 2:54 hit version.
This song was on the soundtracks of Dirty Dancing (1987), Sleepwalkers (1992), and Getting Even with Dad (1994). It was featured in the 1979 movie The Wanderers. The song also had an appearance in Beethoven 's 2nd, where George Newton (Charles Grodin) dances to the song while preparing his breakfast. It was also featured in Teen Wolf Too (1987), sung by Jason Bateman.
|
the world is but a stage merchant of venice | All the world 's a stage - wikipedia
"All the world 's a stage '' is the phrase that begins a monologue from William Shakespeare 's As You Like It, spoken by the melancholy Jaques in Act II Scene VII. The speech compares the world to a stage and life to a play, and catalogues the seven stages of a man 's life, sometimes referred to as the seven ages of man: infant, schoolboy, lover, soldier, justice, Pantalone and old age, facing imminent death. It is one of Shakespeare 's most frequently quoted passages.
All the world 's a stage,
And all the men and women merely players;
They have their exits and their entrances,
And one man in his time plays many parts,
His acts being seven ages. At first, the infant,
Mewling and puking in the nurse 's arms.
Then the whining schoolboy, with his satchel
And shining morning face, creeping like snail
Unwillingly to school. And then the lover,
Sighing like furnace, with a woeful ballad
Made to his mistress ' eyebrow. Then a soldier,
Full of strange oaths and bearded like the pard,
Jealous in honor, sudden and quick in quarrel,
Seeking the bubble reputation
Even in the cannon 's mouth. And then the justice,
In fair round belly with good capon lined,
With eyes severe and beard of formal cut,
Full of wise saws and modern instances;
And so he plays his part. The sixth age shifts
Into the lean and slippered pantaloon,
With spectacles on nose and pouch on side;
His youthful hose, well saved, a world too wide
For his shrunk shank, and his big manly voice,
Turning again toward childish treble, pipes
And whistles in his sound. Last scene of all,
That ends this strange eventful history,
Is second childishness and mere oblivion,
Sans teeth, sans eyes, sans taste, sans everything.
The comparison of the world to a stage and people to actors long predated Shakespeare. Richard Edwardes 's play Damon and Pythias, written in the year Shakespeare was born, contains the lines, "Pythagoras said that this world was like a stage / Whereon many play their parts; the lookers - on, the sage ''. When it was founded in 1599 Shakespeare 's own theatre, The Globe, may have used the motto Totus mundus agit histrionem (All the world plays the actor), the Latin text of which is derived from a 12th - century treatise. Ultimately the words derive from quod fere totus mundus exercet histrionem (because almost the whole world are actors) attributed to Petronius, a phrase which had wide circulation in England at the time.
In his own earlier work, The Merchant of Venice, Shakespeare also had one of his main characters, Antonio, comparing the world to a stage:
I hold the world but as the world, Gratiano;
A stage where every man must play a part,
And mine a sad one.
In his work The Praise of Folly, first printed in 1511, Renaissance humanist Erasmus asks, "For what else is the life of man but a kind of play in which men in various costumes perform until the director motions them off the stage. ''
Likewise the division of human life into a series of ages was a commonplace of art and literature, which Shakespeare would have expected his audiences to recognize. The number of ages varied, with three and four being the most common among ancient writers such as Aristotle. The concept of seven ages derives from medieval philosophy, which constructed groups of seven, as in the seven deadly sins, for theological reasons. The seven ages model dates from the 12th century. King Henry V had a tapestry illustrating the seven ages of man.
According to T.W. Baldwin, Shakespeare 's version of the concept of the ages of man is based primarily upon Palingenius ' book Zodiacus Vitae, a school text he would have studied at the Stratford Grammar School, which also enumerates stages of human life. He also takes elements from Ovid and other sources known to him.
|
how did the renaissance contribute to the scientific revolution (5 points) | Scientific Revolution - wikipedia
The Scientific Revolution was a series of events that marked the emergence of modern science during the early modern period, when developments in mathematics, physics, astronomy, biology (including human anatomy) and chemistry transformed the views of society about nature. The Scientific Revolution took place in Europe towards the end of the Renaissance period and continued through the late 18th century, influencing the intellectual social movement known as the Enlightenment. While its dates are debated, the publication in 1543 of Nicolaus Copernicus 's De revolutionibus orbium coelestium (On the Revolutions of the Heavenly Spheres) is often cited as marking the beginning of the Scientific Revolution.
The concept of a scientific revolution taking place over an extended period emerged in the eighteenth century in the work of Jean Sylvain Bailly, who saw a two - stage process of sweeping away the old and establishing the new. The beginning of the Scientific Revolution, the Scientific Renaissance, was focused on the recovery of the knowledge of the ancients; this is generally considered to have ended in 1632 with the publication of Galileo 's Dialogue Concerning the Two Chief World Systems. The completion of the Scientific Revolution is attributed to the "grand synthesis '' of Isaac Newton 's 1687 Principia. The work formulated the laws of motion and universal gravitation, thereby completing the synthesis of a new cosmology. By the end of the 18th century, the Age of Enlightenment that followed the Scientific Revolution had given way to the "Age of Reflection. ''
Great advances in science have been termed "revolutions '' since the 18th century. In 1747, Clairaut wrote that "Newton was said in his own lifetime to have created a revolution ''. The word was also used in the preface to Lavoisier 's 1789 work announcing the discovery of oxygen. "Few revolutions in science have immediately excited so much general notice as the introduction of the theory of oxygen... Lavoisier saw his theory accepted by all the most eminent men of his time, and established over a great part of Europe within a few years from its first promulgation. ''
In the 19th century, William Whewell described the revolution in science itself -- the scientific method -- that had taken place in the 15th -- 16th century. "Among the most conspicuous of the revolutions which opinions on this subject have undergone, is the transition from an implicit trust in the internal powers of man 's mind to a professed dependence upon external observation; and from an unbounded reverence for the wisdom of the past, to a fervid expectation of change and improvement. '' This gave rise to the common view of the Scientific Revolution today:
A new view of nature emerged, replacing the Greek view that had dominated science for almost 2,000 years. Science became an autonomous discipline, distinct from both philosophy and technology and came to be regarded as having utilitarian goals.
The Scientific Revolution is traditionally assumed to start with the Copernican Revolution (initiated in 1543) and to be complete in the "grand synthesis '' of Isaac Newton 's 1687 Principia. Much of the change of attitude came from Francis Bacon whose "confident and emphatic announcement '' in the modern progress of science inspired the creation of scientific societies such as the Royal Society, and Galileo who championed Copernicus and developed the science of motion.
In the 20th century, Alexandre Koyré introduced the term "scientific revolution '', centering his analysis on Galileo. The term was popularized by Butterfield in his Origins of Modern Science. Thomas Kuhn 's 1962 work The Structure of Scientific Revolutions emphasized that different theoretical frameworks -- such as Einstein 's relativity theory and Newton 's theory of gravity, which it replaced -- can not be directly compared.
The transformation of scientific inquiry also extended to the social sciences, which inherited from natural philosophy debates that authors such as Francis Bacon and Descartes had pursued. Out of this arose disciplines that sought to describe the social world through laws modelled on those of the natural world: sociology, social policy, and the specialized study of morality. Authors such as Auguste Comte and Herbert Spencer connected these fields to the spread of the inductive method from the 16th century to the 19th century; sociology was born in this period of great development for the sciences, and the history of science itself came to take in subjects such as the new psychology, morality and sociology (cf. Guglielmo Rinzivillo, Natura, cultura e induzione nell'età delle scienze. Fatti e idee del movimento scientifico in Francia e Inghilterra, Rome, New Culture, 2015, Series GNOSEIS, ISBN 9788868124977, DOI 10.4458/4977).
The period saw a fundamental transformation in scientific ideas across mathematics, physics, astronomy, and biology, in the institutions supporting scientific investigation, and in the more widely held picture of the universe. The Scientific Revolution led to the establishment of several modern sciences. In 1984, Joseph Ben - David wrote:
Rapid accumulation of knowledge, which has characterized the development of science since the 17th century, had never occurred before that time. The new kind of scientific activity emerged only in a few countries of Western Europe, and it was restricted to that small area for about two hundred years. (Since the 19th century, scientific knowledge has been assimilated by the rest of the world).
Many contemporary writers and modern historians claim that there was a revolutionary change in world view. In 1611 the English poet, John Donne, wrote:
(The) new Philosophy calls all in doubt,
The Element of fire is quite put out;
The Sun is lost, and th'earth, and no man 's wit
Can well direct him where to look for it.
Mid-20th - century historian Herbert Butterfield was less disconcerted, but nevertheless saw the change as fundamental:
Since that revolution overturned the authority in science not only of the Middle Ages but of the ancient world -- since it ended not only in the eclipse of scholastic philosophy but in the destruction of Aristotelian physics -- it outshines everything since the rise of Christianity and reduces the Renaissance and Reformation to the rank of mere episodes, mere internal displacements within the system of medieval Christendom... (It) looms so large as the real origin both of the modern world and of the modern mentality that our customary periodization of European history has become an anachronism and an encumbrance.
The history professor Peter Harrison credits Christianity with having contributed to the rise of the Scientific Revolution:
historians of science have long known that religious factors played a significantly positive role in the emergence and persistence of modern science in the West. Not only were many of the key figures in the rise of science individuals with sincere religious commitments, but the new approaches to nature that they pioneered were underpinned in various ways by religious assumptions... Yet, many of the leading figures in the scientific revolution imagined themselves to be champions of a science that was more compatible with Christianity than the medieval ideas about the natural world that they replaced.
The Scientific Revolution was built upon the foundation of ancient Greek learning and science in the Middle Ages, as it had been elaborated and further developed by Roman / Byzantine science and medieval Islamic science. Some scholars have noted a direct tie between "particular aspects of traditional Christianity '' and the rise of science. The "Aristotelian tradition '' was still an important intellectual framework in the 17th century, although by that time natural philosophers had moved away from much of it. Key scientific ideas dating back to classical antiquity had changed drastically over the years, and in many cases been discredited. The ideas that remained, which were transformed fundamentally during the Scientific Revolution, include:
Ancient precedent existed for alternative theories and developments which prefigured later discoveries in the areas of physics and mechanics; but in light of the limited number of works that survived translation, in a period when many books were lost to warfare, such developments remained obscure for centuries and are traditionally held to have had little effect on the re-discovery of such phenomena. The invention of the printing press, by contrast, made the wide dissemination of such incremental advances of knowledge commonplace. Meanwhile, significant progress in geometry, mathematics, and astronomy was made in medieval times.
It is also true that many of the important figures of the Scientific Revolution shared in the general Renaissance respect for ancient learning and cited ancient pedigrees for their innovations. Nicolaus Copernicus (1473 -- 1543), Galileo Galilei (1564 -- 1642), Kepler (1571 -- 1630) and Newton (1642 -- 1727) all traced different ancient and medieval ancestries for the heliocentric system. In the Axioms Scholium of his Principia, Newton said its axiomatic three laws of motion were already accepted by mathematicians such as Huygens (1629 -- 1695), Wallis, Wren and others. While preparing a revised edition of his Principia, Newton attributed his law of gravity and his first law of motion to a range of historical figures.
Despite these qualifications, the standard theory of the history of the Scientific Revolution claims that the 17th century was a period of revolutionary scientific changes. Not only were there revolutionary theoretical and experimental developments, but that even more importantly, the way in which scientists worked was radically changed. For instance, although intimations of the concept of inertia are suggested sporadically in ancient discussion of motion, the salient point is that Newton 's theory differed from ancient understandings in key ways, such as an external force being a requirement for violent motion in Aristotle 's theory.
Under the scientific method as conceived in the 17th century, natural and artificial circumstances were set aside as a research tradition of systematic experimentation was slowly accepted by the scientific community. The philosophy of using an inductive approach to obtain knowledge -- to abandon assumption and to attempt to observe with an open mind -- was in contrast with the earlier, Aristotelian approach of deduction, by which analysis of known facts produced further understanding. In practice, many scientists and philosophers believed that a healthy mix of both was needed -- the willingness to question assumptions, yet also to interpret observations assumed to have some degree of validity.
By the end of the Scientific Revolution the qualitative world of book - reading philosophers had been changed into a mechanical, mathematical world to be known through experimental research. Though it is certainly not true that Newtonian science was like modern science in all respects, it conceptually resembled ours in many ways. Many of the hallmarks of modern science, especially with regard to its institutionalization and professionalization, did not become standard until the mid-19th century.
The Aristotelian scientific tradition 's primary mode of interacting with the world was through observation and searching for "natural '' circumstances through reasoning. Coupled with this approach was the belief that rare events which seemed to contradict theoretical models were aberrations, telling nothing about nature as it "naturally '' was. During the Scientific Revolution, changing perceptions about the role of the scientist in respect to nature, the value of evidence, experimental or observed, led towards a scientific methodology in which empiricism played a large, but not absolute, role.
By the start of the Scientific Revolution, empiricism had already become an important component of science and natural philosophy. Prior thinkers, including the early - 14th - century nominalist philosopher William of Ockham, had begun the intellectual movement toward empiricism.
The term British empiricism came into use to describe philosophical differences perceived between two of its founders Francis Bacon, described as empiricist, and René Descartes, who was described as a rationalist. Thomas Hobbes, George Berkeley, and David Hume were the philosophy 's primary exponents, who developed a sophisticated empirical tradition as the basis of human knowledge.
An influential formulation of empiricism was John Locke 's An Essay Concerning Human Understanding (1689), in which he maintained that the only true knowledge that could be accessible to the human mind was that which was based on experience. He wrote that the human mind was created as a tabula rasa, a "blank tablet, '' upon which sensory impressions were recorded and built up knowledge through a process of reflection.
The philosophical underpinnings of the Scientific Revolution were laid out by Francis Bacon, who has been called the father of empiricism. His works established and popularised inductive methodologies for scientific inquiry, often called the Baconian method, or simply the scientific method. His demand for a planned procedure of investigating all things natural marked a new turn in the rhetorical and theoretical framework for science, much of which still surrounds conceptions of proper methodology today.
Bacon proposed a great reformation of all process of knowledge for the advancement of learning divine and human, which he called Instauratio Magna (The Great Instauration). For Bacon, this reformation would lead to a great advancement in science and a progeny of new inventions that would relieve mankind 's miseries and needs. His Novum Organum was published in 1620. He argued that man is "the minister and interpreter of nature '', that "knowledge and human power are synonymous '', that "effects are produced by the means of instruments and helps '', and that "man while operating can only apply or withdraw natural bodies; nature internally performs the rest '', and later that "nature can only be commanded by obeying her ''. Here is an abstract of the philosophy of this work, that by the knowledge of nature and the using of instruments, man can govern or direct the natural work of nature to produce definite results. Therefore, that man, by seeking knowledge of nature, can reach power over it -- and thus reestablish the "Empire of Man over creation '', which had been lost by the Fall together with man 's original purity. In this way, he believed, would mankind be raised above conditions of helplessness, poverty and misery, while coming into a condition of peace, prosperity and security.
For this purpose of obtaining knowledge of and power over nature, Bacon outlined in this work a new system of logic he believed to be superior to the old ways of syllogism, developing his scientific method, consisting of procedures for isolating the formal cause of a phenomenon (heat, for example) through eliminative induction. For him, the philosopher should proceed through inductive reasoning from fact to axiom to physical law. Before beginning this induction, though, the enquirer must free his or her mind from certain false notions or tendencies which distort the truth. In particular, he found that philosophy was too preoccupied with words, particularly discourse and debate, rather than actually observing the material world: "For while men believe their reason governs words, in fact, words turn back and reflect their power upon the understanding, and so render philosophy and science sophistical and inactive. ''
Bacon considered that it is of greatest importance to science not to keep doing intellectual discussions or seeking merely contemplative aims, but that it should work for the bettering of mankind 's life by bringing forth new inventions, having even stated that "inventions are also, as it were, new creations and imitations of divine works ''. He explored the far - reaching and world - changing character of inventions, such as the printing press, gunpowder and the compass.
Bacon first described the experimental method.
There remains simple experience; which, if taken as it comes, is called accident, if sought for, experiment. The true method of experience first lights the candle (hypothesis), and then by means of the candle shows the way (arranges and delimits the experiment); commencing as it does with experience duly ordered and digested, not bungling or erratic, and from it deducing axioms (theories), and from established axioms again new experiments.
William Gilbert was an early advocate of this method. He passionately rejected both the prevailing Aristotelian philosophy and the Scholastic method of university teaching. His book De Magnete was written in 1600, and he is regarded by some as the father of electricity and magnetism. In this work, he describes many of his experiments with his model Earth called the terrella. From these experiments, he concluded that the Earth was itself magnetic and that this was the reason compasses point north.
De Magnete was influential not only because of the inherent interest of its subject matter, but also for the rigorous way in which Gilbert described his experiments and his rejection of ancient theories of magnetism. According to Thomas Thomson, "Gilbert ('s)... book on magnetism published in 1600, is one of the finest examples of inductive philosophy that has ever been presented to the world. It is the more remarkable, because it preceded the Novum Organum of Bacon, in which the inductive method of philosophizing was first explained. ''
Galileo Galilei has been called the "father of modern observational astronomy '', the "father of modern physics '', the "father of science '', and "the Father of Modern Science ''. His original contributions to the science of motion were made through an innovative combination of experiment and mathematics.
Galileo was one of the first modern thinkers to clearly state that the laws of nature are mathematical. In The Assayer he wrote "Philosophy is written in this grand book, the universe... It is written in the language of mathematics, and its characters are triangles, circles, and other geometric figures;... '' His mathematical analyses are a further development of a tradition employed by late scholastic natural philosophers, which Galileo learned when he studied philosophy. He ignored Aristotelianism. In broader terms, his work marked another step towards the eventual separation of science from both philosophy and religion; a major development in human thought. He was often willing to change his views in accordance with observation. In order to perform his experiments, Galileo had to set up standards of length and time, so that measurements made on different days and in different laboratories could be compared in a reproducible fashion. This provided a reliable foundation on which to confirm mathematical laws using inductive reasoning.
Galileo showed an appreciation for the relationship between mathematics, theoretical physics, and experimental physics. He understood the parabola, both in terms of conic sections and in terms of the ordinate (y) varying as the square of the abscissa (x). Galileo further asserted that the parabola was the theoretically ideal trajectory of a uniformly accelerated projectile in the absence of friction and other disturbances. He conceded that there are limits to the validity of this theory, noting on theoretical grounds that a projectile trajectory of a size comparable to that of the Earth could not possibly be a parabola, but he nevertheless maintained that for distances up to the range of the artillery of his day, the deviation of a projectile 's trajectory from a parabola would be only very slight.
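As a brief worked illustration in modern notation (a reconstruction for clarity, not Galileo 's own formulation), consider a projectile launched horizontally with constant speed $v$ while falling with constant downward acceleration $g$, with air resistance neglected:

$$x = v t, \qquad y = \tfrac{1}{2} g t^{2} \quad\Longrightarrow\quad y = \frac{g}{2 v^{2}}\, x^{2}.$$

The ordinate $y$ thus varies as the square of the abscissa $x$, which is precisely the parabolic trajectory described above.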
Scientific knowledge, according to the Aristotelians, was concerned with establishing true and necessary causes of things. To the extent that medieval natural philosophers used mathematical problems, they confined them to theoretical analyses of local speed and other such qualities. The actual measurement of a physical quantity, and the comparison of that measurement to a value computed on the basis of theory, was largely limited to the mathematical disciplines of astronomy and optics in Europe.
In the 16th and 17th centuries, European scientists increasingly applied quantitative measurement to physical phenomena on the Earth. Galileo maintained strongly that mathematics provided a kind of necessary certainty that could be compared to God 's: "... with regard to those few (mathematical propositions) which the human intellect does understand, I believe its knowledge equals the Divine in objective certainty... ''
Galileo anticipates the concept of a systematic mathematical interpretation of the world in his book Il Saggiatore:
Philosophy (i.e., physics) is written in this grand book -- I mean the universe -- which stands continually open to our gaze, but it can not be understood unless one first learns to comprehend the language and interpret the characters in which it is written. It is written in the language of mathematics, and its characters are triangles, circles, and other geometrical figures, without which it is humanly impossible to understand a single word of it; without these, one is wandering around in a dark labyrinth.
Aristotle recognized four kinds of causes, of which the most important, where applicable, is the "final cause ''. The final cause was the aim, goal, or purpose of some natural process or man - made thing. Until the Scientific Revolution, it was very natural to see such aims in nature: a child 's growth, for example, leads to a mature adult. Intelligence was assumed only in the purpose of man - made artifacts; it was not attributed to other animals or to nature.
In "mechanical philosophy '' no field or action at a distance is permitted, particles or corpuscles of matter are fundamentally inert. Motion is caused by direct physical collision. Where natural substances had previously been understood organically, the mechanical philosophers viewed them as machines. As a result, Isaac Newton 's theory seemed like some kind of throwback to "spooky action at a distance ''. According to Thomas Kuhn, Newton and Descartes held the teleological principle that God conserved the amount of motion in the universe:
Gravity, interpreted as an innate attraction between every pair of particles of matter, was an occult quality in the same sense as the scholastics ' "tendency to fall '' had been... By the mid eighteenth century that interpretation had been almost universally accepted, and the result was a genuine reversion (which is not the same as a retrogression) to a scholastic standard. Innate attractions and repulsions joined size, shape, position and motion as physically irreducible primary properties of matter.
Newton had also specifically attributed the inherent power of inertia to matter, against the mechanist thesis that matter has no inherent powers. But whereas Newton vehemently denied that gravity was an inherent power of matter, his collaborator Roger Cotes made gravity an inherent power of matter as well, as set out in his famous preface to the Principia 's 1713 second edition, which he edited, thereby contradicting Newton himself. And it was Cotes 's interpretation of gravity rather than Newton 's that came to be accepted.
The first moves towards the institutionalization of scientific investigation and dissemination took the form of the establishment of societies, where new discoveries were aired, discussed and published. The first scientific society to be established was the Royal Society of London. This grew out of an earlier group, centred around Gresham College in the 1640s and 1650s. According to a history of the College:
The scientific network which centred on Gresham College played a crucial part in the meetings which led to the formation of the Royal Society.
These physicians and natural philosophers were influenced by the "new science '', as promoted by Francis Bacon in his New Atlantis, from approximately 1645 onwards. A group known as The Philosophical Society of Oxford was run under a set of rules still retained by the Bodleian Library.
On 28 November 1660, a committee of 12 announced the formation of a "College for the Promoting of Physico - Mathematical Experimental Learning '', which would meet weekly to discuss science and run experiments. At the second meeting, Robert Moray announced that the King approved of the gatherings, and a Royal charter was signed on 15 July 1662 creating the "Royal Society of London '', with Lord Brouncker serving as the first President. A second Royal Charter was signed on 23 April 1663, with the King noted as the Founder and with the name of "the Royal Society of London for the Improvement of Natural Knowledge ''; Robert Hooke was appointed as Curator of Experiments in November. This initial royal favour has continued, and since then every monarch has been the patron of the Society.
The Society 's first Secretary was Henry Oldenburg. Its early meetings included experiments performed first by Robert Hooke and then by Denis Papin, who was appointed in 1684. These experiments varied in subject area, and were important in some cases and trivial in others. The society began publication of Philosophical Transactions in 1665, the oldest and longest - running scientific journal in the world, which established the important principles of scientific priority and peer review.
The French established the Academy of Sciences in 1666. In contrast to the private origins of its British counterpart, the Academy was founded as a government body by Jean - Baptiste Colbert. Its rules were set down in 1699 by King Louis XIV, when it received the name of ' Royal Academy of Sciences ' and was installed in the Louvre in Paris.
As the Scientific Revolution was not marked by any single change, the following new ideas contributed to what is called the Scientific Revolution. Many of them were revolutions in their own fields.
For almost five millennia, the geocentric model of the Earth as the center of the universe had been accepted by all but a few astronomers. In Aristotle 's cosmology, Earth 's central location was perhaps less significant than its identification as a realm of imperfection, inconstancy, irregularity and change, as opposed to the "heavens '' (Moon, Sun, planets, stars), which were regarded as perfect, permanent, unchangeable, and in religious thought, the realm of heavenly beings. The Earth was even composed of different material, the four elements "earth '', "water '', "fire '', and "air '', while sufficiently far above its surface (roughly the Moon 's orbit), the heavens were composed of a different substance called "aether ''. The heliocentric model that replaced it involved not only the radical displacement of the earth to an orbit around the sun, but also its sharing a placement with the other planets, which implied a universe of heavenly components made from the same changeable substances as the Earth. Heavenly motions no longer needed to be governed by a theoretical perfection, confined to circular orbits.
Copernicus ' 1543 work on the heliocentric model of the solar system tried to demonstrate that the sun was the center of the universe. Few were bothered by this suggestion, and the pope and several archbishops were interested enough by it to want more detail. His model was later used to create the calendar of Pope Gregory XIII. However, the idea that the earth moved around the sun was doubted by most of Copernicus ' contemporaries. It contradicted not only empirical observation, due to the absence of an observable stellar parallax, but more significantly at the time, the authority of Aristotle.
The discoveries of Johannes Kepler and Galileo gave the theory credibility. Kepler was an astronomer who, using the accurate observations of Tycho Brahe, proposed that the planets move around the sun not in circular orbits, but in elliptical ones. Together with his other laws of planetary motion, this allowed him to create a model of the solar system that was an improvement over Copernicus ' original system. Galileo 's main contributions to the acceptance of the heliocentric system were his mechanics, the observations he made with his telescope, as well as his detailed presentation of the case for the system. Using an early theory of inertia, Galileo could explain why rocks dropped from a tower fall straight down even if the earth rotates. His observations of the moons of Jupiter, the phases of Venus, the spots on the sun, and mountains on the moon all helped to discredit the Aristotelian philosophy and the Ptolemaic theory of the solar system. Through their combined discoveries, the heliocentric system gained support, and at the end of the 17th century it was generally accepted by astronomers.
These developments culminated in the work of Isaac Newton. Newton 's Principia formulated the laws of motion and universal gravitation, which dominated scientists ' view of the physical universe for the next three centuries. By deriving Kepler 's laws of planetary motion from his mathematical description of gravity, and then using the same principles to account for the trajectories of comets, the tides, the precession of the equinoxes, and other phenomena, Newton removed the last doubts about the validity of the heliocentric model of the cosmos. This work also demonstrated that the motion of objects on Earth and of celestial bodies could be described by the same principles. His prediction that the Earth should be shaped as an oblate spheroid was later vindicated by other scientists. His laws of motion were to be the solid foundation of mechanics; his law of universal gravitation combined terrestrial and celestial mechanics into one great system that seemed to be able to describe the whole world in mathematical formulae.
As well as proving the heliocentric model, Newton also developed the theory of gravitation. In 1679, Newton began to consider gravitation and its effect on the orbits of planets with reference to Kepler 's laws of planetary motion. This followed stimulation by a brief exchange of letters in 1679 -- 80 with Robert Hooke, who had been appointed to manage the Royal Society 's correspondence, and who opened a correspondence intended to elicit contributions from Newton to Royal Society transactions. Newton 's reawakening interest in astronomical matters received further stimulus by the appearance of a comet in the winter of 1680 -- 1681, on which he corresponded with John Flamsteed. After the exchanges with Hooke, Newton worked out proof that the elliptical form of planetary orbits would result from a centripetal force inversely proportional to the square of the radius vector (see Newton 's law of universal gravitation -- History and De motu corporum in gyrum). Newton communicated his results to Edmond Halley and to the Royal Society in De motu corporum in gyrum, in 1684. This tract contained the nucleus that Newton developed and expanded to form the Principia.
The Principia was published on 5 July 1687 with encouragement and financial help from Edmond Halley. In this work, Newton stated the three universal laws of motion, which contributed to many advances during the Industrial Revolution that soon followed and were not to be improved upon for more than 200 years. Many of these advancements continue to be the underpinnings of non-relativistic technologies in the modern world. He used the Latin word gravitas (weight) for the effect that would become known as gravity, and defined the law of universal gravitation.
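In modern notation (a later formalization; Newton stated the law geometrically, and the constant G was introduced long afterwards), the law of universal gravitation can be written as

\[ F = G \frac{m_{1} m_{2}}{r^{2}}, \]

an attractive force between any two masses m_1 and m_2, proportional to each mass and inversely proportional to the square of the distance r between them. It is this inverse - square dependence that yields the elliptical planetary orbits described by Kepler.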
Newton 's postulate of an invisible force able to act over vast distances led to him being criticised for introducing "occult agencies '' into science. Later, in the second edition of the Principia (1713), Newton firmly rejected such criticisms in a concluding General Scholium, writing that it was enough that the phenomena implied a gravitational attraction, as they did; but they did not so far indicate its cause, and it was both unnecessary and improper to frame hypotheses of things that were not implied by the phenomena. (Here Newton used what became his famous expression "hypotheses non fingo '').
The writings of Greek physician Galen had dominated European medical thinking for over a millennium. The Flemish scholar Vesalius demonstrated mistakes in Galen 's ideas. Vesalius dissected human corpses, whereas Galen dissected animal corpses. Published in 1543, Vesalius ' De humani corporis fabrica was a groundbreaking work of human anatomy. It emphasized the priority of dissection and what has come to be called the "anatomical '' view of the body, seeing human internal functioning as an essentially corporeal structure filled with organs arranged in three - dimensional space. This was in stark contrast to many of the anatomical models used previously, which had strong Galenic / Aristotelean elements, as well as elements of astrology.
Besides the first good description of the sphenoid bone, he showed that the sternum consists of three portions and the sacrum of five or six; and described accurately the vestibule in the interior of the temporal bone. He not only verified the observation of Etienne on the valves of the hepatic veins, but he described the vena azygos, and discovered the canal which passes in the fetus between the umbilical vein and the vena cava, since named ductus venosus. He described the omentum, and its connections with the stomach, the spleen and the colon; gave the first correct views of the structure of the pylorus; observed the small size of the caecal appendix in man; gave the first good account of the mediastinum and pleura and the fullest description of the anatomy of the brain yet advanced. He did not understand the inferior recesses; and his account of the nerves is confused by regarding the optic as the first pair, the third as the fifth and the fifth as the seventh.
Further groundbreaking work was carried out by William Harvey, who published De Motu Cordis in 1628. Harvey made a detailed analysis of the overall structure of the heart, going on to an analysis of the arteries, showing how their pulsation depends upon the contraction of the left ventricle, while the contraction of the right ventricle propels its charge of blood into the pulmonary artery. He noticed that the two ventricles move together almost simultaneously and not independently, as his predecessors had thought.
In the eighth chapter, Harvey estimated the capacity of the heart, how much blood is expelled through each pump of the heart, and the number of times the heart beats in half an hour. From these estimations, he demonstrated that according to Galen 's theory that blood was continually produced in the liver, the absurdly large figure of 540 pounds of blood would have to be produced every day. Having this simple mathematical proportion at hand -- which would imply a seemingly impossible role for the liver -- Harvey went on to demonstrate how the blood circulated in a circle by means of countless experiments initially done on serpents and fish: tying their veins and arteries in separate periods of time, Harvey noticed the modifications which occurred; indeed, as he tied the veins, the heart would become empty, while as he did the same to the arteries, the organ would swell up.
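To illustrate the form of the calculation (the per - beat and per - minute figures below are illustrative assumptions chosen only to reproduce the 540 - pound figure cited above, not the exact numbers Harvey used):

\[ \tfrac{1}{10}\ \text{oz per beat} \times 60\ \text{beats per minute} \times 1440\ \text{minutes per day} = 8640\ \text{oz per day} = 540\ \text{lb per day}. \]

Even with deliberately modest inputs, the daily total far exceeds any plausible rate of blood production by the liver, which was the point of the argument.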
This process was later performed on the human body: the physician tied a tight ligature onto the upper arm of a person. This would cut off blood flow from the arteries and the veins. When this was done, the arm below the ligature was cool and pale, while above the ligature it was warm and swollen. The ligature was loosened slightly, which allowed blood from the arteries to come into the arm, since arteries are deeper in the flesh than the veins. When this was done, the opposite effect was seen in the lower arm. It was now warm and swollen. The veins were also more visible, since now they were full of blood.
Various other advances in medical understanding and practice were made. French physician Pierre Fauchard founded the science of dentistry as it is known today, and he has been named "the father of modern dentistry ''. Surgeon Ambroise Paré (c. 1510 -- 1590) was a leader in surgical techniques and battlefield medicine, especially the treatment of wounds, and Herman Boerhaave (1668 -- 1738) is sometimes referred to as a "father of physiology '' due to his exemplary teaching in Leiden and his textbook Institutiones medicae (1708).
Chemistry, and its antecedent alchemy, became an increasingly important aspect of scientific thought in the course of the 16th and 17th centuries. The importance of chemistry is indicated by the range of important scholars who actively engaged in chemical research. Among them were the astronomer Tycho Brahe, the chemical physician Paracelsus, Robert Boyle, Thomas Browne and Isaac Newton. Unlike the mechanical philosophy, the chemical philosophy stressed the active powers of matter, which alchemists frequently expressed in terms of vital or active principles -- of spirits operating in nature.
Practical attempts to improve the refining of ores and their extraction to smelt metals were an important source of information for early chemists in the 16th century, among them Georg Agricola (1494 -- 1555), who published his great work De re metallica in 1556. His work describes the highly developed and complex processes of mining metal ores, metal extraction and metallurgy of the time. His approach removed the mysticism associated with the subject, creating the practical base upon which others could build.
English chemist Robert Boyle (1627 -- 1691) is considered to have refined the modern scientific method for alchemy and to have separated chemistry further from alchemy. Although his research clearly has its roots in the alchemical tradition, Boyle is largely regarded today as the first modern chemist, and therefore one of the founders of modern chemistry, and one of the pioneers of modern experimental scientific method. Although Boyle was not the original discoverer, he is best known for Boyle 's law, which he presented in 1662: the law describes the inversely proportional relationship between the absolute pressure and volume of a gas, if the temperature is kept constant within a closed system.
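In modern notation (a later formalization of the relationship described above), the law states that for a fixed amount of gas held at constant temperature the product of pressure and volume is constant:

\[ P_{1} V_{1} = P_{2} V_{2} \qquad (T\ \text{constant}). \]

For example, compressing such a gas to half its original volume (V_2 = V_1 / 2) doubles its pressure (P_2 = 2 P_1).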
Boyle is also credited for his landmark publication The Sceptical Chymist in 1661, which is seen as a cornerstone book in the field of chemistry. In the work, Boyle presents his hypothesis that every phenomenon was the result of collisions of particles in motion. Boyle appealed to chemists to experiment and asserted that experiments denied the limiting of chemical elements to only the classic four: earth, fire, air, and water. He also pleaded that chemistry should cease to be subservient to medicine or to alchemy, and rise to the status of a science. Importantly, he advocated a rigorous approach to scientific experiment: he believed all theories must be tested experimentally before being regarded as true. The work contains some of the earliest modern ideas of atoms, molecules, and chemical reaction, and marks the beginning of the history of modern chemistry.
Important work was done in the field of optics. Johannes Kepler published Astronomiae Pars Optica (The Optical Part of Astronomy) in 1604. In it, he described the inverse - square law governing the intensity of light, reflection by flat and curved mirrors, and principles of pinhole cameras, as well as the astronomical implications of optics such as parallax and the apparent sizes of heavenly bodies. Astronomiae Pars Optica is generally recognized as the foundation of modern optics (though the law of refraction is conspicuously absent).
Willebrord Snellius (1580 -- 1626) found the mathematical law of refraction, now known as Snell 's law, in 1621. Subsequently René Descartes (1596 -- 1650) showed, by using geometric construction and the law of refraction (also known as Descartes ' law), that the angular radius of a rainbow is 42 ° (i.e. the angle subtended at the eye by the edge of the rainbow and the rainbow 's centre is 42 °). He also independently discovered the law of refraction, and his essay on optics was the first published mention of this law.
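In modern form (using refractive indices, a formalization later than either Snellius or Descartes), the law of refraction relates the angles of incidence and refraction at the boundary between two media:

\[ n_{1} \sin\theta_{1} = n_{2} \sin\theta_{2}. \]

Taking approximate modern values as an illustration, light passing from air (n ≈ 1.00) into water (n ≈ 1.33) at a 45 ° angle of incidence is bent to roughly 32 °, since sin 45 ° / 1.33 ≈ 0.53.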
Christiaan Huygens (1629 -- 1695) wrote several works in the area of optics. These included the Opera reliqua (also known as Christiani Hugenii Zuilichemii, dum viveret Zelhemii toparchae, opuscula posthuma) and the Traité de la lumière.
Isaac Newton investigated the refraction of light, demonstrating that a prism could decompose white light into a spectrum of colours, and that a lens and a second prism could recompose the multicoloured spectrum into white light. He also showed that the coloured light does not change its properties by separating out a coloured beam and shining it on various objects. Newton noted that regardless of whether it was reflected or scattered or transmitted, it stayed the same colour. Thus, he observed that colour is the result of objects interacting with already - coloured light rather than objects generating the colour themselves. This is known as Newton 's theory of colour. From this work he concluded that any refracting telescope would suffer from the dispersion of light into colours. The interest of the Royal Society encouraged him to publish his notes On Colour (later expanded into Opticks). Newton argued that light is composed of particles or corpuscles that are refracted by accelerating toward the denser medium, but he had to associate them with waves to explain the diffraction of light.
In his Hypothesis of Light of 1675, Newton posited the existence of the ether to transmit forces between particles. In 1704, Newton published Opticks, in which he expounded his corpuscular theory of light. He considered light to be made up of extremely subtle corpuscles, that ordinary matter was made of grosser corpuscles and speculated that through a kind of alchemical transmutation "Are not gross Bodies and Light convertible into one another,... and may not Bodies receive much of their Activity from the Particles of Light which enter their Composition? ''
Dr. William Gilbert, in De Magnete, invented the New Latin word electricus from ἤλεκτρον (elektron), the Greek word for "amber ''. Gilbert undertook a number of careful electrical experiments, in the course of which he discovered that many substances other than amber, such as sulphur, wax, glass, etc., were capable of manifesting electrical properties. Gilbert also discovered that a heated body lost its electricity and that moisture prevented the electrification of all bodies, due to the now well - known fact that moisture impaired the insulation of such bodies. He also noticed that electrified substances attracted all other substances indiscriminately, whereas a magnet only attracted iron. The many discoveries of this nature earned for Gilbert the title of founder of the electrical science. By investigating the forces on a light metallic needle, balanced on a point, he extended the list of electric bodies, and found also that many substances, including metals and natural magnets, showed no attractive forces when rubbed. He noticed that dry weather with north or east wind was the most favourable atmospheric condition for exhibiting electric phenomena -- an observation liable to misconception until the difference between conductor and insulator was understood.
Robert Boyle also worked frequently on the new science of electricity, and added several substances to Gilbert 's list of electrics. He left a detailed account of his researches under the title of Experiments on the Origin of Electricity. Boyle, in 1675, stated that electric attraction and repulsion can act across a vacuum. One of his important discoveries was that electrified bodies in a vacuum would attract light substances, indicating that the electrical effect did not depend upon the air as a medium. He also added resin to the then known list of electrics.
This was followed in 1660 by Otto von Guericke, who invented an early electrostatic generator. By the end of the 17th century, researchers had developed practical means of generating electricity by friction with an electrostatic generator, but the development of electrostatic machines did not begin in earnest until the 18th century, when they became fundamental instruments in the studies about the new science of electricity. The first usage of the word electricity is ascribed to Sir Thomas Browne in his 1646 work, Pseudodoxia Epidemica. In 1729 Stephen Gray (1666 -- 1736) demonstrated that electricity could be "transmitted '' through metal filaments.
As an aid to scientific investigation, various tools, measuring aids and calculating devices were developed in this period.
John Napier introduced logarithms as a powerful mathematical tool. With the help of the prominent mathematician Henry Briggs, their logarithmic tables embodied a computational advance that made calculations by hand much quicker. Napier 's bones, a set of numbered rods, served as a multiplication tool based on the system of lattice multiplication. The way was opened to later scientific advances, particularly in astronomy and dynamics.
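A small worked example of the shortcut the tables provided (using modern base - 10 logarithms in the spirit of the Briggs tables; the particular numbers are illustrative):

\[ \log_{10}(347 \times 29) = \log_{10} 347 + \log_{10} 29 \approx 2.5403 + 1.4624 = 4.0027, \qquad 10^{4.0027} \approx 10\,063. \]

A multiplication is thus reduced to two table look - ups, one addition, and one reverse look - up.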
At Oxford University, Edmund Gunter built the first analog device to aid computation. The ' Gunter 's scale ' was a large plane scale, engraved with various scales, or lines. Natural lines, such as the line of chords and the lines of sines and tangents, were placed on one side of the scale, and the corresponding artificial or logarithmic ones were on the other side. This calculating aid was a predecessor of the slide rule. It was William Oughtred (1575 -- 1660) who first used two such scales sliding by one another to perform direct multiplication and division, and he is thus credited as the inventor of the slide rule in 1622.
Blaise Pascal (1623 -- 1662) invented the mechanical calculator in 1642. The introduction of his Pascaline in 1645 launched the development of mechanical calculators first in Europe and then all over the world. Gottfried Leibniz (1646 -- 1716), building on Pascal 's work, became one of the most prolific inventors in the field of mechanical calculators; he was the first to describe a pinwheel calculator, in 1685, and invented the Leibniz wheel, used in the arithmometer, the first mass - produced mechanical calculator. He also refined the binary number system, the foundation of virtually all modern computer architectures.
John Hadley (1682 -- 1744) was the inventor of the octant, the precursor to the sextant (invented by John Bird), which greatly improved the science of navigation.
Denis Papin (1647 -- 1712) was best known for his pioneering invention of the steam digester, the forerunner of the steam engine. The first working steam engine was patented in 1698 by the inventor Thomas Savery, as a "... new invention for raising of water and occasioning motion to all sorts of mill work by the impellent force of fire, which will be of great use and advantage for drayning mines, serveing townes with water, and for the working of all sorts of mills where they have not the benefitt of water nor constant windes. '' (sic) The invention was demonstrated to the Royal Society on 14 June 1699 and the machine was described by Savery in his book The Miner 's Friend; or, An Engine to Raise Water by Fire (1702), in which he claimed that it could pump water out of mines. Thomas Newcomen (1664 -- 1729) perfected the practical steam engine for pumping water, the Newcomen steam engine. Consequently, Thomas Newcomen can be regarded as a forefather of the Industrial Revolution.
Abraham Darby I (1678 -- 1717) was the first, and most famous, of three generations of the Darby family who played an important role in the Industrial Revolution. He developed a method of producing high - grade iron in a blast furnace fueled by coke rather than charcoal. This was a major step forward in the production of iron as a raw material for the Industrial Revolution.
Refracting telescopes first appeared in the Netherlands in 1608, apparently the product of spectacle makers experimenting with lenses. The inventor is unknown but Hans Lippershey applied for the first patent, followed by Jacob Metius of Alkmaar. Galileo was one of the first scientists to use this new tool for his astronomical observations in 1609.
The reflecting telescope was described by James Gregory in his book Optica Promota (1663). He argued that a mirror shaped like part of a conic section would correct the spherical aberration that flawed the accuracy of refracting telescopes. His design, the "Gregorian telescope '', however, remained unbuilt.
In 1666, Isaac Newton argued that the faults of the refracting telescope were fundamental because the lens refracted light of different colors differently. He concluded that light could not be refracted through a lens without causing chromatic aberrations. From these experiments Newton concluded that no improvement could be made in the refracting telescope. However, he was able to demonstrate that the angle of reflection remained the same for all colors, so he decided to build a reflecting telescope. It was completed in 1668 and is the earliest known functional reflecting telescope.
Fifty years later, John Hadley developed ways to make precision aspheric and parabolic objective mirrors for reflecting telescopes, building the first parabolic Newtonian telescope and a Gregorian telescope with accurately shaped mirrors. These were successfully demonstrated to the Royal Society.
The invention of the vacuum pump paved the way for the experiments of Robert Boyle and Robert Hooke into the nature of vacuum and atmospheric pressure. The first such device was made by Otto von Guericke in 1654. It consisted of a piston and an air gun cylinder with flaps that could suck the air from any vessel that it was connected to. In 1657, he pumped the air out of two conjoined hemispheres and demonstrated that a team of sixteen horses were incapable of pulling it apart. The air pump construction was greatly improved by Robert Hooke in 1658.
Evangelista Torricelli (1607 -- 1647) was best known for his invention of the mercury barometer. The motivation for the invention was to improve on the suction pumps that were used to raise water out of the mines. Torricelli constructed a sealed tube filled with mercury, set vertically into a basin of the same substance. The column of mercury fell downwards, leaving a Torricellian vacuum above.
Surviving instruments from this period tend to be made of durable metals such as brass, gold, or steel, although examples such as telescopes made of wood, pasteboard, or with leather components exist. Those instruments that exist in collections today tend to be robust examples, made by skilled craftspeople for and at the expense of wealthy patrons. These may have been commissioned as displays of wealth. In addition, the instruments preserved in collections may not have received heavy use in scientific work; instruments that had visibly received heavy use were typically destroyed, deemed unfit for display, or excluded from collections altogether. It is also postulated that the scientific instruments preserved in many collections were chosen because they were more appealing to collectors, by virtue of being more ornate, more portable, or made with higher - grade materials.
Intact air pumps are particularly rare. One surviving example included a glass sphere to permit demonstrations inside the vacuum chamber, a common use; its base was wooden, and the cylindrical pump was brass. Other vacuum chambers that survived were made of brass hemispheres.
Instrument makers of the late seventeenth and early eighteenth century were commissioned by organizations seeking help with navigation, surveying, warfare, and astronomical observation. The increase in uses for such instruments, and their widespread use in global exploration and conflict, created a need for new methods of manufacture and repair, which would be met by the Industrial Revolution.
The idea that modern science emerged as a kind of revolution has been debated among historians. A weakness of the idea of a scientific revolution is the lack of a systematic approach to the question of knowledge in the period between the 14th and 17th centuries, leading to misunderstandings about the value and role of modern authors. From this standpoint, the continuity thesis is the hypothesis that there was no radical discontinuity between the intellectual development of the Middle Ages and the developments of the Renaissance and early modern period; it has been deeply and widely documented by the works of scholars like Pierre Duhem, John Hermann Randall, Alistair Crombie and William A. Wallace, who demonstrated the preexistence of a wide range of ideas used by proponents of the Scientific Revolution thesis to substantiate their claims. Thus, the idea of a scientific revolution following the Renaissance is -- according to the continuity thesis -- a myth. Some continuity theorists point to earlier intellectual revolutions occurring in the Middle Ages, usually referring to either a European Renaissance of the 12th century or a medieval Muslim scientific revolution, as a sign of continuity.
Another contrary view has been recently proposed by Arun Bala in his dialogical history of the birth of modern science. Bala proposes that the changes involved in the Scientific Revolution -- the mathematical realist turn, the mechanical philosophy, the atomism, the central role assigned to the Sun in Copernican heliocentrism -- have to be seen as rooted in multicultural influences on Europe. He sees specific influences in Alhazen 's physical optical theory, Chinese mechanical technologies leading to the perception of the world as a machine, the Hindu - Arabic numeral system, which carried implicitly a new mode of mathematical atomic thinking, and the heliocentrism rooted in ancient Egyptian religious ideas associated with Hermeticism.
Bala argues that by ignoring such multicultural impacts we have been led to a Eurocentric conception of the Scientific Revolution. However, he clearly states: "The makers of the revolution -- Copernicus, Kepler, Galileo, Descartes, Newton, and many others -- had to selectively appropriate relevant ideas, transform them, and create new auxiliary concepts in order to complete their task... In the ultimate analysis, even if the revolution was rooted upon a multicultural base it is the accomplishment of Europeans in Europe. '' Critics note that lacking documentary evidence of transmission of specific scientific ideas, Bala 's model will remain "a working hypothesis, not a conclusion ''.
A third approach takes the term "Renaissance '' literally as a "rebirth ''. A closer study of Greek philosophy and Greek mathematics demonstrates that nearly all of the so - called revolutionary results of the so - called scientific revolution were in actuality restatements of ideas that were in many cases older than those of Aristotle and in nearly all cases at least as old as Archimedes. Aristotle even explicitly argues against some of the ideas that were espoused during the Scientific Revolution, such as heliocentrism. The basic ideas of the scientific method were well known to Archimedes and his contemporaries, as demonstrated in the well - known discovery of buoyancy. Atomism was first thought of by Leucippus and Democritus. Lucio Russo claims that science as a unique approach to objective knowledge was born in the Hellenistic period (c. 300 B.C), but was extinguished with the advent of the Roman Empire. This approach to the Scientific Revolution reduces it to a period of relearning classical ideas that is very much an extension of the Renaissance. This view does not deny that a change occurred but argues that it was a reassertion of previous knowledge (a renaissance) and not the creation of new knowledge. It cites statements from Newton, Copernicus and others in favour of the Pythagorean worldview as evidence.
More recent analyses of the Scientific Revolution have criticized not only the Eurocentric ideologies it spread, but also the dominance of male scientists of the time. Science as it is known today, and the original theories on which modern science is based, was built largely by men, and the contributions women made during this period tend to be obscured. Scholars have examined the participation of women in 17th - century science and have found that, even in fields as ordinary as domestic knowledge, women were making advances. Because the surviving texts of the period record so little, it is not fully known whether women helped these scientists develop the ideas they did. The period also shaped the women scientists who came after it: the astronomer Annie Jump Cannon, for example, benefitted from the laws and theories it produced and made several advances in the centuries following the Scientific Revolution. It was an important period for the future of science, including the eventual incorporation of women into fields built on these developments.
who played the mom in house of 1000 corpses | House of 1000 Corpses - wikipedia
House of 1000 Corpses is a 2003 American exploitation horror film written, co-scored and directed by Rob Zombie in his directorial debut. The film stars Sid Haig, Bill Moseley, Sheri Moon, and Karen Black as members of the Firefly family. Set on Halloween, the film sees the Firefly family torturing and mutilating a group of teenagers who are traveling across the country writing a book. The film explores a number of genres, and features elements of the supernatural. Zombie cited American horror films The Texas Chain Saw Massacre (1974) and The Hills Have Eyes (1977) as influences on House of 1000 Corpses, as well as other films released during the 1970s.
Initially filmed in 2000, House of 1000 Corpses was purchased by Universal Pictures, and thus a large portion of it was filmed on the Universal Studios backlots. The film was made with a budget of $7 million. Zombie worked with Scott Humphrey on the score of the film. House of 1000 Corpses featured a graphic amount of blood and gore, and controversial scenes involving masturbation and necrophilia. The project was ultimately shelved by the company prior to its release due to fears of an NC - 17 rating. Zombie later managed to re-purchase the rights to the work, eventually selling it to Lions Gate Entertainment. The film received a theatrical release on April 11, 2003, nearly three years after filming had concluded.
House of 1000 Corpses received a generally negative reaction following its release. The film was critically panned, with the film 's various side - plots and main cast being criticized by multiple critics. The film earned over $3 million in its opening weekend, and would later go on to gross over $16 million worldwide. Despite its initial negative reception, the film went on to develop a cult following. Zombie later directed the film 's sequel, The Devil 's Rejects (2005), in which the Firefly family are on the run from the police. Zombie later developed a haunted house attraction for Universal Studios Hollywood based on the film. This would be the final film performance of Dennis Fimple, before his death in August 2002. The film was dedicated to his memory.
On October 30, 1977, an attempted robbery of a gas station by two masked men goes wrong when a distraction allows the owner to shoot both of them in self - defense. Later on, Jerry Goldsmith, Bill Hudley, Mary Knowles, and Denise Willis are on the road in hopes of writing a book on offbeat roadside attractions. When the four meet Captain Spaulding, the owner of a gas station and "The Museum of Monsters & Madmen '', they learn the local legend of Dr. Satan. As they take off in search of the tree from which Dr. Satan was hanged, they pick up a young hitchhiker named Baby, who claims to live only a few miles away. Shortly after, the vehicle 's tire bursts in what is later seen to be a trap and Baby takes Bill to her family 's house. Moments later, Baby 's half - brother, Rufus, picks up the stranded passengers and takes them to the family home.
There they meet Baby 's family: Mother Firefly; Otis Driftwood, Baby 's adopted brother; Grampa Hugo; and Baby 's deformed giant half - brother, Tiny. While being treated to dinner, the guests learn from Mother Firefly that her ex-husband, Earl, had previously tried to burn down the Firefly house with Tiny inside. After dinner, the family puts on a Halloween show for their guests and Baby offends Mary by flirting with Bill. After Mary threatens Baby, Rufus tells them their car is repaired. As they leave, Otis and Tiny, disguised as scarecrows, attack the couples in the driveway and take them prisoner. The next day, Otis kills Bill and mutilates his body for art. Mary is tied up in a barn, Denise is tied to a bed while dressed up for Halloween, and Jerry is partially scalped for failing to guess Baby 's favorite movie star.
When Denise does n't come home, her father Don calls the police to report her missing. Two deputies, George Wydell and Steve Naish, find the couples ' abandoned car in a field with a tortured victim in the trunk. Don, who was once a cop, is called to the scene to help the deputies search. They arrive at the Firefly house and Wydell questions Mother Firefly about the missing teens. Mother Firefly shoots Wydell in the neck, killing him, while Don and Steve find other bodies in the barn and are then killed by Otis. Later that night, the three remaining teenagers are dressed as rabbits, and taken out to an abandoned well. Mary attempts to run away, but is stabbed to death by Baby moments later. One of the family members dons the uniform of one of the deputies and drives their car while drunk.
Meanwhile, Jerry and Denise are lowered into the well, where a group of undead men pull Jerry away, leaving Denise to find her way through an underground lair. As she wanders through the tunnels, she encounters Dr. Satan and a number of mental patients; Jerry is on Dr. Satan 's operating table being vivisected. Dr. Satan tells his mutated assistant, who turns out to be Earl, Mother Firefly 's ex-husband, to capture Denise, but Denise outwits him and escapes the chambers by crawling to the surface. She makes her way to the main road, where she encounters Captain Spaulding, who gives her a ride in his car. She passes out from exhaustion in the front seat, and Otis suddenly appears in the backseat with a knife. Denise later wakes up to find herself strapped to Dr. Satan 's operating table, with Dr. Satan standing there.
The rest of the Firefly family includes Tiny (Matthew McGrory), Tiny 's brother Rufus (Robert Allen Mukes), and Grandpa Hugo (Dennis Fimple). Other cast members include Tom Towles as Lieutenant George Wydell, Walton Goggins as Deputy Steve Naish, Harrison Young as Denise 's father Don, and Walter Phelan as Dr. Satan.
Rob Zombie rose to fame as a member of the band White Zombie, before later beginning a solo career. Zombie 's debut album, Hellbilly Deluxe (1998), drew influence from classic horror films, as did his music videos for "Living Dead Girl '' (1999) and "Superbeast '' (1999). The album was a commercial success, selling over three million copies in the United States. Prior to working on House of 1000 Corpses, Zombie had done animation for Beavis and Butt - head Do America (1996), directed music videos, and attempted to write a script for The Crow: Salvation to no avail. In 1999, Zombie designed a haunted maze attraction for Universal Studios. The project was a success, and was seen as instrumental in reviving the studio 's annual Halloween Horror Nights. Bill Moseley presented Zombie an award for his contribution that same year, while Zombie formed a friendship with the company. The studio later began working on an animated Frankenstein film which Zombie hoped to be a part of, though plans for it were ultimately scrapped.
The idea for House of 1000 Corpses came to Zombie while he was designing a haunted house attraction for the studio titled "The House of 1000 Corpses ''. He later pitched the film to Universal to a positive reception. Zombie later stated "I was in the office of the head of production or something and he asked me if I had any movie ideas and I pitched him Corpses, which was very rough at the time, because I was n't ready and I made it up on the spot. He liked it, I went home, wrote a 12 - page treatment and met up with them. Two months later, we were shooting. '' Production on the film began in May 2000. Zombie stated that the film was finished by Halloween of 2000. The house was launched the following year, though the title was changed to "American Nightmare '' due to the shelving of the film. Despite the name change, the house still featured numerous references to the film, and the theatrical trailer played while customers waited.
The starting budget of the film was $3 -- 4 million, though the film 's final budget is debatable. Zombie at one point claimed that the film was made solely with the initial $4 million, though in a later interview claimed the film had received a budget of anywhere between $7 - 14 million. Zombie later admitted that he initially knew he did n't have the funding for a good ending but hedged his bets that if he shot what he could on what remained of his budget, the studio would kick in more funds to make a better ending. "I knew the ending sucked, so I let it suck and they said, "The movie 's great but the ending sucks '' and I know. So they gave me more money and we shot a more elaborate ending, bigger sets, the whole razzamatazz. '' The original film featured more characters, with Zombie mentioning a "skunk ape thing '', and featured footage of the four teenagers on their road trip. Universal hoped that the film would focus more on the group of kids, with Zombie stating "nobody gives a shit about the kids ''. Zombie claimed the film was not initially meant to feature elements of black humor, claiming it "turned out a little wackier and campier than I originally intended. But as we were shooting, that 's the tone that it was turning out to be. Movies sometimes dictate their own course, so I just sort of went with it. ''
The film was shot on a twenty - five - day shooting schedule. Two weeks were spent filming on the Universal Studios Hollywood backlots -- the house featured in the film is the same house used in The Best Little Whorehouse in Texas (1982), and can be seen on Universal Studios ' tram tours. Zombie stated that filming on the lot was at times difficult, as the amusement park would often be open and would ruin takes. The remaining eleven days of the shoot were spent on a ranch in Valencia, California. The scene involving Bill being transformed into "Fishboy '' was initially much longer, featuring gory details of the creation of the monster. Zombie stated the scene was created after Universal passed on the film. Scenes featuring Baby masturbating with a skeleton, along with other cutaway scenes, were filmed in Zombie 's basement after initial filming for the project had concluded. Zombie later cited home recordings from the Manson family as inspiration for the Firefly family 's "bizarre '' rants. Zombie often filmed two versions of scenes, one of which would be less gory, in an attempt to please Universal.
Jake McKinnon could not see well when dressed as The Professor, and almost hit actress Erin Daniels with a real ax during the film 's climax. Zombie later claimed he simply hoped Daniels would move out of the way in time. When Denise calls her father from a telephone booth, a sign for a missing dog head can be seen hanging in the booth; this was in fact a real item found by Zombie and used for the film. In the early stages of the film 's development, Grandpa Hugo was to be revealed as the murderous Dr. Satan, who at the time was simply referred to as the mad doctor. The legend of the mad doctor was to be a ploy by the Firefly family to lure victims in, though this idea was later scrapped. This led to Grandpa Hugo receiving much less screen - time. The character of Dr. Satan was inspired by a 1950s billboard - sized poster advertising a "live spook show starring a magician called Dr. Satan '' that Zombie has in his house.
The film 's main cast consisted of the murderous Firefly family, the four teenagers and various police officers attempting to find the group, among others. Sid Haig was cast as Captain Spaulding, a man who dresses as a clown and owns a gas station and museum of curiosities. Haig claimed he had to "get in touch with (his) own insanity '' for the role. His relation to the Firefly family is not revealed in the film, though he is working with them to some extent; however, the sequel establishes that he is the father of Baby. Bill Moseley starred as Otis B. Driftwood, who was adopted into the Firefly family. Sheri Moon Zombie portrayed Baby Firefly, who became known for her high - pitched laugh and sexual nature. Karen Black was cast as Mother Firefly, the protective mother to the family. Matthew McGrory portrayed Tiny Firefly, a tall man who was left deformed after a house fire started by his father. Robert Allen Mukes portrayed Rufus "RJ '' Firefly, Jr. Dennis Fimple was selected to play Grampa Hugo Firefly. He died following filming, and the finished product was dedicated to him.
The names of the members of the Firefly family were taken from Groucho Marx characters. Captain Spaulding was a character in Animal Crackers (1930), Otis B. Driftwood was a character in A Night at the Opera (1935), Rufus T. Firefly was taken from Duck Soup (1933), and Hugo Z. Hackenbush was derived from A Day at the Races (1937). Although House of 1000 Corpses only alludes to this, the connection is more prominent in the film 's sequel, where the names become integral to the plot. Zombie acknowledged that viewers were meant to "root for '' the Firefly family as opposed to the group of teens, though claims it was n't intentional: "Yeah, I wanted the audience to cheer ' em. I did n't consciously think of it at the time, because I was trying to make Bill and Jerry likeable. But it 's like when you saw Beetlejuice and you could tell all Tim Burton cared about was Beetlejuice. ''
Erin Daniels portrayed Denise Willis. Chris Hardwick was cast as Jerry Goldsmith, a young man who was seen as "hyper '' and "wise - cracking ''. The character Bill Hudley was portrayed by Rainn Wilson. House of 1000 Corpses served as one of Wilson 's first films, though he would later go on to find mainstream success following the film 's release. Mary Knowles, Bill 's girlfriend, was played by Jennifer Jostyn. Mary was seen as the most confrontational of the group, often clashing with Baby over Baby 's flirting with Bill. Harrison Young was selected to play Don Willis, the father of Denise, who later goes looking for her and her group of friends. Tom Towles and Walton Goggins portrayed Lieutenant George Wydell and Deputy Steve Naish, respectively; the pair work with Don to find the missing group.
Irwin Keyes was cast as Ravelli, the assistant to Captain Spaulding who helps run the tourist attraction. Michael J. Pollard portrayed Stucky, a friend of Captain Spaulding and Ravelli 's. Chad Bannon and David Reynolds played Killer Karl and Richard "Little Dick '' Wick, two men who try to rob Captain Spaulding 's shop and are murdered. William H. Basset had a small role in the film as Sheriff Frank Huston. Joe Dobbs III played Gerry Ober, a man who works at the liquor store; he is later given the nickname "Goober '' by Baby. Gregg Gibbs appeared as Dr. Wolfenstein during the "murder ride ''. Zombie made a cameo appearance as Dr. Wolfenstein 's assistant. Despite initially planning to appear as Dr. Wolfenstein, Zombie opted to be his assistant instead, believing he would look "normal '' in costume. Walter Phelan was cast as Dr. Satan, whose real name was S. Quintin Quale.
The score for the film was composed by Zombie, alongside Canadian producer Scott Humphrey. Much of the production work for the soundtrack to the film was done in Humphrey 's studio, The Chop Shop. The film 's score featured similar musical themes to Zombie 's releases, consisting of heavy metal influences. MTV said the music mixed "snippets of ominous hillbilly dialogue with grim horror movie rock. '' While making the movie, Zombie joked with his manager that he should do a cover of "Brick House '' (1977), originally performed by the Commodores. His manager later got both Lionel Richie and female rapper Trina to appear on a cover of the song with Zombie, under the title "Brick House 2003 ''. Aside from making audio clips and snippets for the film, Zombie recorded a variety of new songs for the film 's soundtrack. The song "House of 1000 Corpses '', taken from Zombie 's album The Sinister Urge (2001), is also present. The soundtrack, released on March 25, 2003, made an appearance on the Billboard 200 chart in the United States. The film 's score is also available on home video releases as an isolated audio track.
Prior to agreeing to release the film through Universal, Zombie reportedly told the studio of the film 's nature, stating "I was really blatant when I talked to them. I did n't want to get into a situation where they thought I was making something mainstream. And I told them that I wanted to make a drive - in movie, something very gritty and nasty and weird. '' Production of the film was completed in 2000, and it was set for release through Universal. The studio completed a theatrical trailer for the film, which was shown in theaters and prior to the Universal ride created by Zombie. Zombie later received a call from Stacey Snider, head of Universal, requesting a meeting. Zombie recalled fearing that the studio would demand a re-shoot, though he later learned that Snider 's fears of the film receiving an NC - 17 rating had led to the company 's refusal to release the film. The film remained shelved for several months, with Zombie eventually purchasing the rights to the film from Universal. Zombie claimed that many urged him to scrap the film following the fallout with Universal, though he continued to search for a new distributor.
Zombie later made a deal with MGM to release the film, with MGM slating an October 2002 release. However, MGM backed out of releasing the film following a controversial remark from Zombie, who claimed that the company had no morals for agreeing to release it. Zombie then announced plans to release the film himself, without the backing of a production company. Despite this, Zombie eventually caught the eye of Lions Gate Entertainment, the final studio to sign on for the project. Lions Gate, attempting to venture into new types of films, hoped releasing a horror film would provide more opportunities. The film was cut and edited in an attempt to achieve an R - rating, with Zombie claiming that most of the cut footage featured Sheri Moon Zombie 's character.
The first public screening of the film occurred in Argentina on March 13, 2003. House of 1000 Corpses received its theatrical release on April 11, 2003. The film made its debut in the United Kingdom at Fright Fest, and was the fastest - selling event of the night. House of 1000 Corpses grossed $3,460,666 on a limited opening weekend, while boasting $2,522,026 on its official opening. The film opened in second at the box office, behind the comedy film Anger Management (2003). It went on to gross $12,634,962 in the United States alone, with an additional $4,194,583 accumulated internationally. The film 's total reported worldwide gross is $16,829,545. According to Zombie, Lions Gate Entertainment made back their investment on the film on the first day, and shortly afterwards approached Zombie about a sequel to the film. Having already begun developing ideas for a sequel, Zombie quickly began work on a follow - up.
It received a home video release on August 12, 2003. For the main menu of the film, Zombie had Sid Haig perform in character as an added bonus. The Blu - ray edition of the film was released on September 18, 2007; it features the added menu content with Haig, as well as the bonus features found on the initial release. The film was released alongside The Devil 's Rejects (2005) in a combo - pack on January 4, 2011. Zombie spoke in 2003 of releasing a "super-duper deluxe '' edition of the film on DVD, which he hoped would include the footage of the scrapped characters, as well as deleted footage from the film 's death scenes; Zombie also claimed the "fishboy '' scene was initially much gorier, and he hoped to include added footage for that scene. Despite these plans, this version of the film has yet to be released.
House of 1000 Corpses received a generally negative critical reception upon its release. Frank Scheck of The Hollywood Reporter wrote that the film "lives up to the spirit but not the quality of its inspirations '' and is ultimately a "cheesy and ultragory exploitation horror flick '' and "strangely devoid of thrills, shocks or horror. '' Clint Morris of Film Threat slammed the film as "an hour and a half of undecipherable plot '' and found the film to be "sickening '' overall. James Brundage of Filmcritic.com wrote that the film was simply "hick after hick, cheap scary image after cheap scary image, lots of southern accents and psychotic murders, '' and was "too highbrow to be a good cheap horror movie, too lowbrow to be satire, and too boring to bear the value of the ticket. '' Slant Magazine gave the film two out of four stars, stating "If not for the blink - and - miss sideshow attractions and stockpile of memorable quotes, (House of) 1000 Corpses would have been easier to shrug off. This vintage curio is proudly and humorously derivative but that familiar aftertaste is that of wasted opportunities. '' The New York Times also had a negative review of the film, writing "As much as film buffs might enjoy recognizing references to Motel Hell and other drive - in classics, Mr. Zombie 's encyclopedic approach to the genre results in a crowded, frenzied film in which no single idea is developed to a satisfying payoff. '' The review also criticized the cutaway scenes and home footage used throughout the film, adding "Mr. Zombie is both too much of a stylist, always cutting away to oddball inserts, black - and - white flashbacks, negative images and much else, and too little: he is not in enough control of his means to let a mood grow and fester. And festering is what this kind of film is all about. ''
JoBlo.com had a more positive view of the film, claiming it "slaps together just the right amount of creepy atmosphere, nervous laughter, cheap scares, fun rides and blood and guts to satisfy any major fan of the macabre." Horror Express gave the film a generally positive review, claiming "He has succeeded for the most part, but really this is only a film Rob Zombie could do. Beyond the harkening back to the old days, there are instances where Zombie's signature style comes through. It's a style he has honed over the years through his videos, animations and music. Grotesque imagery is shown through skewed camera angles as grinning faces watch on. A use of bright fluorescents almost creates a deceptive atmosphere of childlike innocence as the devils perfect their craft on screen." Entertainment Weekly gave it a C+, saying "House of 1000 Corpses isn't coherent, exactly, but what dripping-ghoul horror movie is these days? The new rule is, It's not hip to make sense when you're raising hell." Maitland McDonagh of TV Guide gave the film two out of four stars, stating "It is ugly -- in the distinctively washed out, grainy, slightly burned manner of low-budget '70s films -- gory and single-mindedly mean, none of which is a criticism since that's exactly what it wants to be." The project was well received by the LA Times, which wrote "Let's give the devil his due. Zombie, who displays a natural flair for the cinematic, has a real appreciation and knowledge of horror pictures and a Diane Arbus-like affinity for sleazy, bizarre Americana and schlock culture. Throughout his fast-moving movie he inserts vintage clips in a witty, telling manner, and as to be expected, Zombie, with Scott Humphries, has come through with a rip-roaring score for his picture." The film has a 19% "rotten" rating on the review website Rotten Tomatoes, based on eighty reviews, with the site's consensus stating "There's an abundance of gore in this derivative horror movie, but little sense or wit." It has a Metacritic score of 31, signifying "generally unfavorable reviews".
Since its initial release, the film has gone on to gain a cult following. In his 2007 review of the Blu - ray release, Christopher Monfette of IGN called the film "fun as hell '', writing "House of 1000 Corpses is a messy film -- veering this way and that across the genre map with no discernible destination. But viewed less a movie and more as an experience, the film offers a certain degree of inspired insanity and a healthy dose of carnival - like madness ''. Zombie has acknowledged the film 's cult status, stating "Now, a decade later, it 's become a pretty loved movie among people. It 's great that we have this big celebration. I love seeing Sid Haig and the other actors get such great attention from it. The funny thing is, ten years becomes a long time. I 'll meet someone who 's eighteen - years - old, and that 's always been a film that they 've loved. It 's funny that the film 's been around that long to be like that for some people. '' CinemaBlend also wrote of the film 's cult status, stating "While his Halloween films were a mess, Zombie did bring something new with his original films House of 1000 Corpses and The Devil 's Rejects, developing a cult following for his movies on top of the one he earned for his music. Say what you will about him as a director, but there 's no denying that he has a unique vision. '' Zombie has gone on to dismiss the film following its release, saying "The first film (I directed), which people seems to love, is just a calamitous mess. Well, when it came out it seemed like everyone hated it. Now everyone acts like it 's beloved in some way. All I see is flaw, upon flaw, upon flaw... upon flaw. '' The success of the film led to a sequel, released in 2005, and later a seasonal haunted house attraction at Universal Studios Hollywood.
|
name 3 reasons opponents of socialism believe in | Criticisms of socialism - wikipedia
Criticism of socialism refers to any critique of socialist models of economic organization and their feasibility, as well as the political and social implications of adopting such a system. Some criticisms are not directed toward socialism as a system, but rather toward the socialist movement, socialist political parties or existing socialist states. Some critics consider socialism to be a purely theoretical concept that should be criticized on theoretical grounds (such as in the socialist calculation debate); others hold that certain historical examples exist and that they can be criticized on practical grounds.
Economic liberals and right libertarians view private ownership of the means of production and the market exchange as natural entities or moral rights which are central to their conceptions of freedom and liberty, and view the economic dynamics of capitalism as immutable and absolute. Therefore, they perceive public ownership of the means of production, cooperatives and economic planning as infringements upon liberty.
According to the Austrian school economist Ludwig von Mises, an economic system that does not utilize money, financial calculation and market pricing will be unable to effectively value capital goods and coordinate production, and therefore socialism is impossible because it lacks the necessary information to perform economic calculation in the first place. Another central argument leveled against socialist systems based on economic planning is based on the use of dispersed knowledge. Socialism is unfeasible in this view because information can not be aggregated by a central body and effectively used to formulate a plan for an entire economy, because doing so would result in distorted or absent price signals.
Other economists criticize models of socialism based on neoclassical economics for their reliance on the faulty and unrealistic assumptions of economic equilibrium and Pareto efficiency.
Some philosophers have also criticized the aims of socialism, arguing that equality erodes individual diversity and that the establishment of an equal society would have to entail strong coercion. Critics of the socialist political movement often criticize the internal conflicts of the movement as creating a sort of "responsibility void".
Because there are many models of socialism, most critiques are only focused on a specific type of socialism. Therefore, the criticisms presented below may not apply to all forms of socialism, and many will focus on the experience of Soviet - type economies. It is also important to note that different models of socialism conflict with each other over questions of property ownership, economic coordination and how socialism is to be achieved -- so critics of specific models of socialism might be advocates of a different type of socialism.
The economic calculation problem is a criticism of central economic planning. It was first proposed by Ludwig von Mises in 1920 and later expounded by Friedrich Hayek. The problem referred to is that of how to distribute resources rationally in an economy. The free market relies on the price mechanism, wherein people individually have the ability to decide how resources should be distributed based on their willingness to give money for specific goods or services. The price conveys embedded information about the abundance of resources as well as their desirability which in turn allows, on the basis of individual consensual decisions, corrections that prevent shortages and surpluses; Mises and Hayek argued that this is the only possible solution, and without the information provided by market prices, socialism lacks a method to rationally allocate resources. Those who agree with this criticism argue it is a refutation of socialism and that it shows that a socialist planned economy could never work. The debate raged in the 1920s and 1930s, and that specific period of the debate has come to be known by economic historians as the Socialist Calculation Debate.
Ludwig von Mises argued in his famous 1920 article "Economic Calculation in the Socialist Commonwealth" that the pricing systems in socialist economies were necessarily deficient, because if the government owned the means of production, then no prices could be obtained for capital goods: they were merely internal transfers of goods in a socialist system and not "objects of exchange", unlike final goods. Therefore they were unpriced, and hence the system would be necessarily inefficient, since the central planners would not know how to allocate the available resources efficiently. This led him to declare "... that rational economic activity is impossible in a socialist commonwealth." Mises developed his critique of socialism more completely in his 1922 book Socialism: An Economic and Sociological Analysis.
Friedrich Hayek argued in 1977 that "prices are an instrument of communication and guidance which embody more information than we directly have '', and therefore "the whole idea that you can bring about the same order based on the division of labor by simple direction falls to the ground ''. He further argued that "if you need prices, including the prices of labor, to direct people to go where they are needed, you can not have another distribution except the one from the market principle. ''
Ludwig von Mises argued that a socialist system based upon a planned economy would not be able to allocate resources effectively due to the lack of price signals. Because the means of production would be controlled by a single entity, approximating prices for capital goods in a planned economy would be impossible. His argument was that socialism must fail economically because of the economic calculation problem -- the impossibility of a socialist government being able to make the economic calculations required to organize a complex economy. Mises projected that without a market economy there would be no functional price system, which he held essential for achieving rational and efficient allocation of capital goods to their most productive uses. Socialism would fail as demand can not be known without prices, according to Mises.
The socialist planner, therefore, is left trying to steer the collectivist economy blindfolded. He can not know what products to produce, the relative quantities to produce, and the most economically appropriate way to produce them with the resources and labor at his central command. This leads to "planned chaos '' or to the "planned anarchy '' to which Pravda referred... Even if we ignore the fact that the rulers of socialist countries have cared very little for the welfare of their own subjects; even if we discount the lack of personal incentives in socialist economies; and even if we disregard the total lack of concern for the consumer under socialism; the basic problem remains the same: the most well - intentioned socialist planner just does not know what to do.
The heart of Mises ' argument against socialism is that central planning by the government destroys the essential tool -- competitively formed market prices -- by which people in a society make rational economic decisions.
These arguments were elaborated by subsequent Austrian School economists such as Friedrich Hayek and by students of Mises such as Hans Sennholz.
The anarcho - capitalist economist Hans - Hermann Hoppe argues that, in the absence of prices for the means of production, there is no cost - accounting which would direct labor and resources to the most valuable uses. Hungarian economist Janos Kornai has written that "the attempt to realize market socialism... produces an incoherent system, in which there are elements that repel each other: the dominance of public ownership and the operation of the market are not compatible. ''
Proponents of laissez - faire capitalism argue that although private monopolies do n't have any actual competition, there are many potential competitors watching them, and if they were delivering inadequate service, or charging an excessive amount for a good or service, investors would start a competing enterprise.
In her book How We Survived Communism & Even Laughed, Slavenka Drakulić claims that a major contributor to the fall of socialist planned economies in the former Soviet bloc was the failure to produce the basic consumer goods that its people desired. She argues that, because of the makeup of the leadership of these regimes, the concerns of women got particularly short shrift. She illustrates this, in particular, by the system 's failure to produce washing machines. If a state - owned industry is able to keep operating with losses, it may continue operating indefinitely producing things that are not in high consumer demand. If consumer demand is too low to sustain the industry with voluntary payments by consumers then it is tax - subsidized. This prevents resources (capital and labor) from being applied to satisfying more urgent consumer demands. According to economist Milton Friedman "The loss part is just as important as the profit part. What distinguishes the private system from a government socialist system is the loss part. If an entrepreneur 's project does n't work, he closes it down. If it had been a government project, it would have been expanded, because there is not the discipline of the profit and loss element. ''
Proponents of chaos theory argue that it is impossible to make accurate long - term predictions for highly complex systems such as an economy.
Pierre - Joseph Proudhon raises similar calculational issues in his General Idea of the Revolution in the 19th Century but also proposes certain voluntary arrangements, which would also require economic calculation.
Leon Trotsky, a proponent of decentralized planning, argued that centralized economic planning would be "insoluble without the daily experience of millions, without their critical review of their own collective experience, without their expression of their needs and demands and could not be carried out within the confines of the official sanctums '', and "Even if the Politburo consisted of seven universal geniuses, of seven Marxes, or seven Lenins, it will still be unable, all on its own, with all its creative imagination, to assert command over the economy of 170 million people. ''
Mises argued that real - world implementation of free market and socialist principles provided empirical evidence for which economic system leads to greatest success:
The only certain fact about Russian affairs under the Soviet regime with regard to which all people agree is: that the standard of living of the Russian masses is much lower than that of the masses in the country which is universally considered as the paragon of capitalism, the United States of America. If we were to regard the Soviet regime as an experiment, we would have to say that the experiment has clearly demonstrated the superiority of capitalism and the inferiority of socialism.
According to Tibor R. Machan, "Without a market in which allocations can be made in obedience to the law of supply and demand, it is difficult or impossible to funnel resources with respect to actual human preferences and goals. ''
In contrast, market socialism can be viewed as an alternative to the traditional socialist model that addresses this lack of a marketplace. Theoretically, the fundamental difference between a traditional socialist economy and a market socialist economy is the existence of a market for the means of production and capital goods.
Central planning is also criticized by elements of the radical left. Libertarian socialist economist Robin Hahnel notes that even if central planning overcame its inherent inhibitions of incentives and innovation it would nevertheless be unable to maximize economic democracy and self - management, which he believes are concepts that are more intellectually coherent, consistent and just than mainstream notions of economic freedom.
As Hahnel explains, "Combined with a more democratic political system, and redone to closer approximate a best case version, centrally planned economies no doubt would have performed better. But they could never have delivered economic self - management, they would always have been slow to innovate as apathy and frustration took their inevitable toll, and they would always have been susceptible to growing inequities and inefficiencies as the effects of differential economic power grew. Under central planning neither planners, managers, nor workers had incentives to promote the social economic interest. Nor did impending markets for final goods to the planning system enfranchise consumers in meaningful ways. But central planning would have been incompatible with economic democracy even if it had overcome its information and incentive liabilities. And the truth is that it survived as long as it did only because it was propped up by unprecedented totalitarian political power. ''
Milton Friedman, an economist, argued that socialism, by which he meant state ownership over the means of production, impedes technological progress due to competition being stifled. As evidence, he said that we need only look to the U.S. to see where socialism fails, by observing that the most technologically backward areas are those where government owns the means of production. Without a reward system, it is argued, many inventors or investors would not risk time or capital for research. This was one of the reasons for the United States patent system and copyright law.
Socialism has proved no more efficient at home than abroad. What are our most technologically backward areas? The delivery of first class mail, the schools, the judiciary, the legislative system -- all mired in outdated technology. No doubt we need socialism for the judicial and legislative systems. We do not for mail or schools, as has been shown by Federal Express and others, and by the ability of many private schools to provide superior education to underprivileged youngsters at half the cost of government schooling...
We all justly complain about the waste, fraud and inefficiency of the military. Why? Because it is a socialist activity -- one that there seems no feasible way to privatize. But why should we be any better at running socialist enterprises than the Russians or Chinese?
Some critics of socialism argue that income sharing reduces individual incentives to work, and therefore incomes should be individualized as much as possible. Critics of socialism have argued that in any society where everyone holds equal wealth there can be no material incentive to work, because one does not receive rewards for work well done. They further argue that incentives increase productivity for all people and that the loss of those effects would lead to stagnation. John Stuart Mill wrote in The Principles of Political Economy (1848):
It is the common error of Socialists to overlook the natural indolence of mankind; their tendency to be passive, to be the slaves of habit, to persist indefinitely in a course once chosen. Let them once attain any state of existence which they consider tolerable, and the danger to be apprehended is that they will thenceforth stagnate; will not exert themselves to improve, and by letting their faculties rust, will lose even the energy required to preserve them from deterioration. Competition may not be the best conceivable stimulus, but it is at present a necessary one, and no one can foresee the time when it will not be indispensable to progress.
However, he later altered his views and adopted a socialist perspective, adding chapters to his Principles of Political Economy in defense of a socialist outlook, and defending some socialist causes. Within this revised work he also made the radical proposal that the whole wage system be abolished in favour of a co-operative wage system. Nonetheless, some of his views on the idea of flat taxation remained, albeit in a slightly toned down form.
The economist John Kenneth Galbraith has criticized communal forms of socialism that promote egalitarianism in terms of wages / compensation as unrealistic in its assumptions about human motivation:
This hope (that egalitarian reward would lead to a higher level of motivation), one that spread far beyond Marx, has been shown by both history and human experience to be irrelevant. For better or worse, human beings do not rise to such heights. Generations of socialists and socially oriented leaders have learned this to their disappointment and more often to their sorrow. The basic fact is clear: the good society must accept men and women as they are.
According to the economist Hans-Hermann Hoppe, countries where the means of production are socialized are not as prosperous as those where the means of production are under private control. Ludwig von Mises, a classical liberal economist, argued that aiming for more equal incomes through state intervention necessarily leads to a reduction in national income, and therefore in average income. In his view, the socialist who nonetheless chooses the objective of a more equal distribution of income, on the assumption that the marginal utility of income to a poor person is greater than that to a rich person, is thereby expressing a preference for a lower average income over income inequality at a higher average income. Mises sees no rational justification for this preference, and he also states that there is little evidence that the objective of greater income equality is actually achieved.
Friedrich Hayek, in The Road to Serfdom, argued that the more even distribution of wealth through the nationalization of the means of production advocated by certain socialists cannot be achieved without a loss of political, economic and human rights. According to Hayek, to achieve control over the means of production and the distribution of wealth it is necessary for such socialists to acquire significant powers of coercion. Hayek argued that the road to socialism leads society to totalitarianism, and that fascism and Nazism were the inevitable outcome of socialist trends in Italy and Germany during the preceding period.
Hayek was critical of the bias shown by university teachers and intellectuals towards socialist ideals. He argued that socialism is not a working class movement as socialists contend, but rather "the construction of theorists, deriving from certain tendencies of abstract thought with which for a long time only the intellectuals were familiar; and it required long efforts by the intellectuals before the working classes could be persuaded to adopt it as their program. ''
Peter Self criticizes the traditional socialist planned economy and argues against pursuing "extreme equality '' because he believes it requires "strong coercion '' and does not allow for "reasonable recognition (for) different individual needs, tastes (for work or leisure) and talents. '' He recommends market socialism instead.
Objectivists criticize socialism as devaluing the individual, and making people incapable of choosing their own values, as decisions are made centrally. They also reject socialism 's indifference to property rights.
Some critics of socialism have argued that, in a socialist state, the leadership will either become corrupted or be replaced by corrupt people, and this would prevent the goals of socialism from being realized.
Lord Acton 's aphorism that "power tends to corrupt '' has been used by critics of socialism to argue that the leadership of a socialist state would be more susceptible to corruption than others, because a socialist state has a broader scope than other states. Milton Friedman argued that the absence of private economic activity would enable political leaders to grant themselves coercive powers. Winston Churchill, in his campaign against socialist candidate Clement Attlee in the 1945 elections, claimed that socialism requires totalitarian methods, including a political police, in order to achieve its goals.
Friedrich Hayek made a slightly different but related argument. He conceded that the leaders of the socialist movement had idealistic motives and did not argue that they would become corrupt or resort to totalitarian methods once in power. However, he argued that the kind of state structure they wish to set up would eventually attract a new generation of leaders motivated by cynical ambition rather than any ideals, and these new leaders would enact repressive measures while at the same time giving up attempts to implement the original goals of socialism.
|
i wanna soak up the sun i wanna tell everyone | Soak up the Sun - wikipedia
"Soak Up the Sun '' is the title of a song recorded by American artist Sheryl Crow. It was released in March 2002 as the lead single from her album C'mon C'mon. The song, which features backing vocals by Liz Phair, peaked at number - one on the Billboard Adult Top 40 chart and hit number 5 on the Hot Adult Contemporary Tracks chart and # 17 on the Hot 100 chart. In addition, "Soak Up the Sun '' (remixed by noted DJ Victor Calderone) spent one week at # 1 on the Billboard Hot Dance Club Play chart in June 2002; to date, this is Crow 's only song to top this chart. It reached # 16 on the UK Singles Chart. It was covered by the Kidz Bop Kids in 2003. The song was also included on the album "Nolee Mix '' which was released to promote the My Scene dolls. The song was a staple of radio airplay during the summer of 2002.
The video, directed by Wayne Isham and shot on the North Shore of Oahu, Hawaii, features Crow performing on the beach, as well as various vacationers surfing in the ocean and jumping off a waterfall.
|
how many film awards are there in india | List of film awards - wikipedia
This is a list of groups, organizations, and festivals that recognize achievements in cinema, usually by awarding various prizes. The awards sometimes also have popular unofficial names (such as the ' Oscar ' for Hollywood 's Academy Awards), which are mentioned if applicable. Many awards are simply identified by the name of the group presenting the award.
Awards have been divided into four major categories: critics ' awards, voted on (usually annually) by a group of critics; festival awards, awards presented to the best film shown in a particular film festival; industry awards, which are selected by professionals working in some branch of the movie industry; and audience awards, which are voted by the general public.
As of 1998, The Variety Guide to Film Festivals states that "the Triple Crown of international competitive festivals '' are Cannes, Venice and Berlin.
(This is not intended to be a complete list of film festivals, but to showcase the distinctively named awards given at some festivals.)
|
who sings before you turn off the lights | World Class Wreckin ' Cru - wikipedia
World Class Wreckin ' Cru was an electro group, best known for its contributions to early rap and its association with Dr. Dre, DJ Yella, and Michel'le.
World Class Wreckin' Cru debuted in a club owned by one of the early West Coast DJs, Alonzo Williams. Before he opened "Eve After Dark" in 1979, Alonzo was one of the most popular DJs in the Los Angeles area. He began producing dances under the name Disco Construction, named after the funk group Brass Construction. Seeing the popularity of this new craze, he entered the market of running nightclub performances. The club opened with Detroit-born Andre Manuel, aka Unknown DJ, directing a music program whose main influences had an East Coast flavour: Soulsonic Force, Orbit and Scorpio.
As the 1980s arrived, so did electronic funk, with sampled drum beats fused with the old school rap format. Disco Construction spawned a subgroup called the Wreckin' Cru, made up of Lonzo's roadies; with the later addition of "World Class", this became the name of the recording group. Lonzo hired local DJs Antoine "Yella" Carraby and Andre "Dr. Dre" Young, who later became the original Mix Masters for KDAY. The Wreckin' Cru also performed in various shows around L.A., including opening for New Edition, as well as multiple shows promoted by Lonzo.
Alonzo Williams created the label Kru-Cut, which began releasing the Wreckin' Cru's music through the mid-1980s with very minimal resources, distributed through Macola Records, a local independent record manufacturing and distributing company. The Cru released the single "Slice", followed by "Surgery" and a full-length album titled World Class. The success of their early releases led to a major record deal with CBS/Epic Records. After being signed to CBS Records, Lonzo was asked if he had any other acts. Having seen a group featuring Dre's cousin Jinx perform in a rap contest, Lonzo offered up the teenage group C.I.A. (Cru' In Action), starring O'Shea "Ice Cube" Jackson, Dre's cousin Tony "Sir Jinx" Wheaton and Darrell "K-Dee" Johnson, who with Dre would record a demo tape called "She's a Skag". The group was then signed to a single deal with CBS.
After being released from CBS, the World Class Wreckin' Cru went on to have their biggest hit ever, "Turn Off the Lights". The Cru were known as a dance and romance act, with songs like "Surgery", "Juice", "Cabbage Patch", "Lovers" and "Turn Off the Lights".
By 1985 Kru-Cut had produced the group's debut album, World Class. Cli-N-Tel then left to take control of his own direction in life, with Shakespeare stepping up as the prominent MC as the group signed with Epic Records. With Epic they released a string of singles as well as their second album, Rapped in Romance.
World Class's success was building an army of fans in the underground scene and earning each member a valued reputation. At this stage the look of Prince and of Michael Jackson's Thriller outfits was in fashion: glitzy purple leather suits and sequin suits, not unlike the glam-rock look running parallel to this scene. Dre, however, was growing tired of the image; he considered Lonzo's direction soft and yearned to control his own expression in music.
At this time Dre was working on side projects in the Kru-Cut studio for local entrepreneur Eric "Eazy-E" Wright, and Ice Cube was enlisted to ghostwrite for the World Class single "House Calls"/"Cabbage Patch" in 1987. Shortly after came the slow jam hit "Before You Turn Off the Lights", featuring Michel'le (Dre's girlfriend), which peaked at number 54 on the Billboard Black Singles chart in April 1988. When Dre was arrested for the last time (after frequent previous visits) and jailed for failing to appear for multiple traffic violations (he owed $166, enough to qualify for a warrant in 1986), Lonzo was not going to bail him out a third time; Eazy-E paid the amount in return for Dre's production work for his new record company, Ruthless Records. Eazy and Dre's combined collaboration of talent became known as N.W.A. Lonzo released a further LP, Phases in Life, in 1990 under his solo identity. Of the remaining World Class members, Shakespeare later pursued a position as a pastor, while Michel'le dated Dr. Dre and released an album on Ruthless Records.
|
when did circumcision become common in the united states | History of male circumcision - wikipedia
The oldest documentary evidence of male circumcision comes from ancient Egypt. Circumcision was common, although not universal, among ancient Semitic peoples. In the aftermath of the conquests of Alexander the Great, however, Greek dislike of circumcision (they regarded a man as truly "naked '' only if his prepuce was retracted) led to a decline in its incidence among many peoples that had previously practiced it.
Circumcision has ancient roots among several ethnic groups in sub-equatorial Africa, and is still performed on adolescent boys to symbolize their transition to warrior status or adulthood. Male circumcision and/or subincision, often as part of an intricate coming-of-age ritual, was a common practice among Australian Aborigines and Pacific islanders at first contact with Western travellers. It is still practiced in the traditional way by a proportion of the population. According to the National Hospital Discharge Survey, as of 2008 the rate of circumcision of infant boys in United States hospitals was 55.9%.
The origin of male circumcision is not known with certainty. It has been variously proposed that it began as a religious sacrifice, as a rite of passage marking a boy's entrance into adulthood, as a form of sympathetic magic to ensure virility or fertility, as a means of reducing sexual pleasure, as an aid to hygiene where regular bathing was impractical, as a means of marking those of higher social status, as a means of humiliating enemies and slaves by symbolic castration, as a means of differentiating a circumcising group from their non-circumcising neighbors, as a means of discouraging masturbation or other socially proscribed sexual behaviors, as a means of removing "excess" pleasure, as a means of increasing a man's attractiveness to women, as a demonstration of one's ability to endure pain, as a male counterpart to menstruation or the breaking of the hymen, as an imitation of the rare natural absence of the foreskin in an important leader, as a way to repel demonesses, or as a display of disgust at the smegma produced by the foreskin. Removing the foreskin can prevent or treat a medical condition known as phimosis. It has been suggested that the custom of circumcision gave advantages to tribes that practiced it and thus led to its spread.
Darby describes these theories as "conflicting '', and states that "the only point of agreement among proponents of the various theories is that promoting good health had nothing to do with it. '' Immerman et al. suggest that circumcision causes lowered sexual arousal of pubescent males, and hypothesize that this was a competitive advantage to tribes practising circumcision, leading to its spread. Wilson suggests that circumcision reduces insemination efficiency, reducing a man 's capacity for extra-pair fertilizations by impairing sperm competition. Thus, men who display this signal of sexual obedience may gain social benefits if married men are selected to offer social trust and investment preferentially to peers who are less threatening to their paternity. It is possible that circumcision arose independently in different cultures for different reasons.
"The distribution of circumcision and initiation rites throughout Africa, and the frequent resemblance between details of ceremonial procedure in areas thousands of miles apart, indicate that the circumcision ritual has an old tradition behind it and in its present form is the result of a long process of development. ''
African cultural history is conveniently spoken of in terms of language group. The Niger -- Congo speakers of today extend from Senegal to Kenya to South Africa and all points between. In the historic period, the Niger -- Congo speaking peoples predominantly have and have had male circumcision that occurred in young warrior initiation schools, the schools of Senegal and Gambia being not so very different from those of the Kenyan Gikuyu and South African Zulu. Their common ancestor was a horticultural group five, perhaps seven, thousand years ago from an area of the Cross River in modern Nigeria. From that area a horticultural frontier moved outward into West Africa and the Congo Basin. Certainly the warrior schools with male circumcision were a part of the ancestral society 's cultural repertoire.
Male circumcision in East Africa is a rite of passage from childhood to adulthood, but is only practiced in some nations (tribes). Some peoples in East Africa do not practice male circumcision (for example the Luo of western Kenya).
Amongst the Gikuyu (Kikuyu) people of Kenya and the Maasai people of Kenya and Tanzania, male circumcision has historically been the graduation element of an educational program that taught tribal beliefs, practices, culture, religion and history to youth who were on the verge of becoming full - fledged members of society. The circumcision ceremony was very public, and required a display of courage under the knife in order to maintain the honor and prestige of the young man and his family. The only form of anesthesia was a bath in the cold morning waters of a river, which tended to numb the senses to a minor degree. The youths being circumcised were required to maintain a stoic expression and not to flinch from the pain.
After circumcision, young men became members of the warrior class, and were free to date and marry. The graduants became a fraternity that served together, and continued to have mutual obligation to each other for life.
In the modern context in East Africa, the physical element of male circumcision remains (in the societies that have historically practiced it) but without most of the other accompanying rites, context and programs. For many, the operation is now performed in private on one individual, in a hospital or doctor 's office. Anesthesia is often used in such settings. There are tribes however, that do not accept this modernized practice. They insist on circumcision in a group ceremony, and a test of courage at the banks of a river. This more traditional approach is common amongst the Meru and the Kisii tribes of Kenya.
Despite the loss of the rites and ceremonies that accompanied male circumcision in the past, the physical operation remains crucial to personal identity and pride, and acceptance in society. Uncircumcised men in these communities risk being "outed '', and subjected to ridicule as "boys ''. There have been many cases of forced circumcision of men from such communities who are discovered to have escaped the ritual.
In some South African ethnic groups, circumcision has roots in several belief systems, and is performed most of the time on teenage boys:
The young men in the eastern Cape belong to the Xhosa ethnic group for whom circumcision is considered part of the passage into manhood... A law was recently introduced requiring initiation schools to be licensed and only allowing circumcisions to be performed on youths aged 18 and older. But Eastern Cape provincial Health Department spokesman Sizwe Kupelo told Reuters news agency that boys as young as 11 had died. Each year thousands of young men go into the bush alone, without water, to attend initiation schools. Many do not survive the ordeal.
Sixth Dynasty (2345 -- 2181 BCE) tomb artwork in Egypt has been thought to be the oldest documentary evidence of circumcision, the most ancient depiction being a bas - relief from the necropolis at Saqqara (c. 2400 BCE) with the inscriptions reading: "The ointment is to make it acceptable. '' and "Hold him so that he does not fall ''. In the oldest written account, by an Egyptian named Uha, in the 23rd century BCE, he describes a mass circumcision and boasts of his ability to stoically endure the pain: "When I was circumcised, together with one hundred and twenty men... there was none thereof who hit out, there was none thereof who was hit, and there was none thereof who scratched and there was none thereof who was scratched. ''
Herodotus, writing in the 5th century BCE, wrote that the Egyptians "practise circumcision for the sake of cleanliness, considering it better to be cleanly than comely. '' Gollaher (2000) considered circumcision in ancient Egypt to be a mark of passage from childhood to adulthood. He mentions that the alteration of the body and ritual of circumcision were supposed to give access to ancient mysteries reserved solely for the initiated. (See also Clement of Alexandria, Stromateis 1.15) The content of those mysteries are unclear but are likely to be myths, prayers, and incantations central to Egyptian religion. The Egyptian Book of the Dead, for example, tells of the sun god Ra cutting himself, the blood creating two minor guardian deities. The Egyptologist Emmanuel vicomte de Rougé interpreted this as an act of circumcision. Circumcisions were performed by priests in a public ceremony, using a stone blade. It is thought to have been more popular among the upper echelons of the society, although it was not universal and those lower down the social order are known to have had the procedure done. The Egyptian hieroglyph for "penis '' depicts either a circumcised or an erect organ.
Circumcision was also adopted by some Semitic peoples living in or around Egypt. Herodotus reported that circumcision is only practiced by the Egyptians, Colchians, Ethiopians, Phoenicians, the ' Syrians of Palestine ', and "the Syrians who dwell about the rivers Thermodon and Parthenius, as well as their neighbours the Macronians and Macrones ''. He also reports, however, that "the Phoenicians, when they come to have commerce with the Greeks, cease to follow the Egyptians in this custom, and allow their children to remain uncircumcised. ''
According to Genesis, God told Abraham to circumcise himself, his household and his slaves as an everlasting covenant in their flesh (see also Abrahamic covenant). Those who were not circumcised were to be "cut off" from their people. Covenants in biblical times were often sealed by severing an animal, with the implication that the party who breaks the covenant will suffer a similar fate. In Hebrew, the verb meaning to seal a covenant translates literally as "to cut". It is presumed by Jewish scholars that the removal of the foreskin symbolically represents such a sealing of the covenant. Moses might not have been circumcised; one of his sons was not, nor were some of his followers while traveling through the desert. Moses's wife Zipporah circumcised their son when God threatened to kill Moses.
According to Hodges, ancient Greek aesthetics of the human form considered circumcision a mutilation of a previously perfectly shaped organ. Greek artwork of the period portrayed penises as covered by the foreskin (sometimes in exquisite detail), except in the portrayal of satyrs, lechers, and barbarians. This dislike of the appearance of the circumcised penis led to a decline in the incidence of circumcision among many peoples that had previously practiced it throughout Hellenistic times.
In Egypt, only the priestly caste retained circumcision, and by the 2nd century, the only circumcising groups in the Roman Empire were Jews, Jewish Christians, Egyptian priests, and the Nabatean Arabs. Circumcision was sufficiently rare among non-Jews that being circumcised was considered conclusive evidence of Judaism (or Early Christianity and others derogatorily called Judaizers) in Roman courts -- Suetonius in Domitian 12.2 described a court proceeding in which a ninety - year - old man was stripped naked before the court to determine whether he was evading the head tax placed on Jews and Judaizers.
Cultural pressures to circumcise operated throughout the Hellenistic world: when the Judean king John Hyrcanus conquered the Idumeans, he forced them to become circumcised and convert to Judaism, but their ancestors the Edomites had practiced circumcision in pre-Hellenistic times.
Some Jews tried to hide their circumcision status, as told in 1 Maccabees. This was mainly for social and economic benefits and also so that they could exercise in gymnasiums and compete in sporting events. Techniques for restoring the appearance of an uncircumcised penis were known by the 2nd century BCE. In one such technique, a copper weight (called the Judeum pondum) was hung from the remnants of the circumcised foreskin until, in time, they became sufficiently stretched to cover the glans. The 1st - century writer Celsus described two surgical techniques for foreskin restoration in his medical treatise De Medicina. In one of these, the skin of the penile shaft was loosened by cutting in around the base of the glans. The skin was then stretched over the glans and allowed to heal, giving the appearance of an uncircumcised penis. This was possible because the Abrahamic covenant of circumcision defined in the Bible was a relatively minor circumcision; named milah, this involved cutting off the foreskin that extended beyond the glans. Jewish religious writers denounced such practices as abrogating the covenant of Abraham in 1 Maccabees and the Talmud.
Because of these attempts, and for other reasons, a second more radical step was added to the circumcision procedure. This was added around 140 CE, and was named Brit Peri'ah. In this step, the foreskin was cut further back, to the ridge behind the glans penis, called the coronal sulcus. The inner mucosal tissue was removed by use of a sharp finger nail or implement, including the excising and removal of the frenulum from the underside of the glans. Later during the Talmudic period (500 -- 625 CE) a third step, known as Metzitzah, began to be practiced. In this step the mohel would suck the blood from the circumcision wound with his mouth to remove what was believed to be bad excess blood. As it actually increases the likelihood of infections such as tuberculosis and venereal diseases, modern day mohels use a glass tube placed over the infant 's penis for suction of the blood. In many Jewish ritual circumcisions this step of Metzitzah has been eliminated.
First Maccabees tells us that the Seleucids forbade the practice of brit milah, and punished those who performed it -- as well as the infants who underwent it -- with death.
The 1st - century Jewish author Philo Judaeus (20 BCE - 50 CE) defended Jewish circumcision on several grounds, including health, cleanliness and fertility. He also thought that circumcision should be done as early as possible as it would not be as likely to be done by someone 's own free will. He claimed that the foreskin prevented semen from reaching the vagina and so should be done as a way to increase the nation 's population. He also noted that circumcision should be performed as an effective means to reduce sexual pleasure: "The legislators thought good to dock the organ which ministers to such intercourse thus making circumcision the symbol of excision of excessive and superfluous pleasure. '' There was also division in Pharisaic Judaism between Hillel the Elder and Shammai on the issue of circumcision of proselytes.
The Jewish philosopher Maimonides (1135 -- 1204) insisted that faith should be the only reason for circumcision. He recognised that it was "a very hard thing '' to have done to oneself but that it was done to "quell all the impulses of matter '' and "perfect what is defective morally. '' Sages at the time had recognised that the foreskin heightened sexual pleasure. Maimonides reasoned that the bleeding and loss of protective covering rendered the penis weakened and in so doing had the effect of reducing a man 's lustful thoughts and making sex less pleasurable. He also warned that it is "hard for a woman with whom an uncircumcised man has had sexual intercourse to separate from him. ''
A 13th - century French disciple of Maimonides, Isaac ben Yediah claimed that circumcision was an effective way of reducing a woman 's sexual desire. With a non-circumcised man, he said, she always orgasms first and so her sexual appetite is never fulfilled, but with a circumcised man she receives no pleasure and hardly ever orgasms "because of the great heat and fire burning in her. ''
Flavius Josephus in Jewish Antiquities book 20, chapter 2 records the story of King Izates who having been persuaded by a Jewish merchant named Ananias to embrace the Jewish religion, decided to get circumcised so as to follow Jewish law. Despite being reticent for fear of reprisals from his non-Jewish subjects he was eventually persuaded to do it by a Galileean Jew named Eleazar on the grounds that it was one thing to read the Law and another thing to practice it. Despite his mother Helen and Ananias 's fear of the consequences, Josephus said that God looked after Izates and his reign was peaceful and blessed.
The Council of Jerusalem in Acts of the Apostles 15 addressed the issue of whether circumcision was required of new converts to Christianity. Both Simon Peter and James the Just spoke against requiring circumcision in Gentile converts and the Council ruled that circumcision was not necessary. However, Acts 16 and many references in the Letters of Paul show that the practice was not immediately eliminated. Paul of Tarsus, who was said to be directly responsible for one man's circumcision in Acts 16:1-3 and who appeared to praise Jewish circumcision in Romans 3:2, said that circumcision didn't matter in 1 Corinthians 7:19 and then increasingly turned against the practice, accusing those who promoted circumcision of wanting to make a good showing in the flesh and boasting or glorying in the flesh in Galatians 6:11-13. In a later letter, Philippians 3:2, he is reported as warning Christians to beware the "mutilation" (Strong's G2699). Circumcision was so closely associated with Jewish men that Jewish Christians were referred to as "those of the circumcision" (e.g. Colossians 3:20), or conversely Christians who were circumcised were referred to as Jewish Christians or Judaizers. These terms (circumcised/uncircumcised) are generally interpreted to mean Jews and Greeks, who were predominant; however, it is an oversimplification, as 1st-century Iudaea Province also had some Jews who no longer circumcised, and some Greeks (called Proselytes or Judaizers) and others such as Egyptians, Ethiopians, and Arabs who did. According to the Gospel of Thomas saying 53, Jesus says:
Parallels to Thomas 53 are found in Paul 's Romans 2: 29, Philippians 3: 3, 1 Corinthians 7: 19, Galatians 6: 15, Colossians 2: 11 -- 12.
In John 's Gospel 7: 23 Jesus is reported as giving this response to those who criticized him for healing on the Sabbath:
This passage has been seen as a comment on the Rabbinic belief that circumcision heals the penis (Jerusalem Bible, note to John 7: 23) or as a criticism of circumcision.
Europeans, with the exception of the Jews, did not practice male circumcision. A rare exception occurred in Visigothic Spain, where, during an armed campaign, King Wamba ordered the circumcision of everyone who had committed atrocities against the civilian population.
As part of an attempted reconciliation of Coptic and Catholic practices, the Catholic Church condemned the observance of circumcision as a mortal sin and ordered against its practice at the Council of Basel-Florence in 1442. According to UNAIDS, the papal bull of Union with the Copts issued during that council stated that circumcision was merely unnecessary for Christians; El-Hout and Khauli, however, regard it as a condemnation of the procedure.
In the 18th century, Edward Gibbon referred to circumcision as a "singular mutilation '' practised only by Jews and Turks and as "a painful and often dangerous rite ''... (R. Darby)
In 1753 in London there was a proposal for Jewish emancipation. It was furiously opposed by the pamphleteers of the time, who spread the fear that Jewish emancipation meant universal circumcision. Men were urged to protect:
These negative attitudes remained well into the 19th century. English explorer Sir Richard Burton observed that "Christendom practically holds circumcision in horror ''.
Although negative attitudes prevailed for much of the 19th century, this began to change in the latter part of the century, especially in the English - speaking world. This shift can be seen in the account on circumcision in the Encyclopædia Britannica. The ninth edition, published in 1876, discusses the practice as a religious rite among Jews, Muslims, the ancient Egyptians and tribal peoples in various parts of the world. The author of the entry rejected sanitary explanations of the procedure in favour of a religious one: "like other body mutilations... (it is) of the nature of a representative sacrifice ''. (R. Darby)
However, by 1910 the entry (in the Encyclopædia Britannica) had been turned on its head:
"This surgical operation, which is commonly prescribed for purely medical reasons, is also an initiation or religious ceremony among Jews and Muslims ''.
Now it was primarily a medical procedure and only after that a religious ritual. The entry explained that "in recent years the medical profession has been responsible for its considerable extension among other than Jewish children... for reasons of health '' (11th edition, Vol. 6).
By 1929 the entry is much reduced in size and consists merely of a brief description of the operation, which is "done as a preventive measure in the infant '' and "performed chiefly for purposes of cleanliness ''. Ironically, readers are then referred to the entries for "Mutilation '' and "Deformation '' for a discussion of circumcision in its religious context (14th edition, 1929, Vol. 5). (R. Darby)
There were two related concerns that led to the widespread adoption of this surgical procedure at this time. The first was a growing belief within the medical community regarding the efficacy of circumcision in reducing the risk of contracting sexually transmitted diseases, such as syphilis. The second was the notion that circumcision would lessen the urge towards masturbation, or "self abuse '' as it was often called.
The tradition of male circumcision is said to have been practiced within the British royal family, with varying accounts of which monarch started it: either Queen Victoria, on account of her rumored adherence to British Israelism and the notion that she was a descendant of King David (or on the advice of her personal physician), or her grandfather King George. The German-born King George was also the Prince-Elector of Hanover, and rumors existed that the prince electors were circumcised. This is highly dubious, since there is no evidence that Victoria was a supporter of the British Israelite movement, and the links between the royal family and the ancient House of David were first proposed by its followers only in the 1870s, long after she bore her sons (there is likewise no evidence that her sons, particularly Edward, were circumcised); nor is there any indication that the prince electors (or George himself) were circumcised, or that the king introduced the practice upon his arrival in Britain and his ascension to the throne in 1714. If male members of the royal family were circumcised, the reason was their embrace of a custom popular amongst the upper classes in the late 19th and 20th centuries. Prince Charles and his brothers are believed to have been circumcised (the former by a reputable rabbi and mohel), but the supposed tradition ended before the births of William and his brother Harry, due to their mother's objections. Speculation arose in the media that William's son may have been circumcised following his birth in 2013, but this is also highly unlikely.
The first medical doctor to advocate for the adoption of circumcision was the eminent English physician Jonathan Hutchinson. In 1855, he published a study in which he compared the rates of contraction of venereal disease amongst the gentile and Jewish populations of London. Although his manipulation and usage of the data have since been shown to have been flawed (the protection that Jews appear to have is more likely due to cultural factors), his study appeared to demonstrate that circumcised men were significantly less vulnerable to such disease. (A 2006 systematic review concluded that the evidence "strongly indicates that circumcised men are at lower risk of chancroid and syphilis.")
Hutchinson was a notable leader in the campaign for medical circumcision for the next fifty years, publishing A plea for circumcision in the British Medical Journal (1890), where he contended that the foreskin "... constitutes a harbour for filth, and is a constant source of irritation. It conduces to masturbation, and adds to the difficulties of sexual continence. It increases the risk of syphilis in early life, and of cancer in the aged. '' As can be seen, he was also a convert to the idea that circumcision would prevent masturbation, a great Victorian concern. In an 1893 article, On circumcision as a preventive of masturbation, he wrote: "I am inclined to believe that (circumcision) may often accomplish much, both in breaking the habit (of masturbation) as an immediate result, and in diminishing the temptation to it subsequently. ''
Nathaniel Heckford, a paediatrician at the East London Hospital for Children, wrote Circumcision as a Remedial Measure in Certain Cases of Epilepsy, Chorea, etc. (1865), in which he argued that circumcision acted as an effective remedial measure in the prevention of certain cases of epilepsy and chorea.
These increasingly common medical beliefs were even applied to females. The controversial obstetrical surgeon Isaac Baker Brown founded the London Surgical Home for Women in 1858, where he worked on advancing surgical procedures. In 1866, Baker Brown described the use of clitoridectomy, the removal of the clitoris, as a cure for several conditions, including epilepsy, catalepsy and mania, which he attributed to masturbation. In On the Curability of Certain Forms of Insanity, Epilepsy, Catalepsy, and Hysteria in Females, he gave a 70 per cent success rate using this treatment.
However, during 1866, Baker Brown began to receive negative feedback from within the medical profession from doctors who opposed the use of clitoridectomies and questioned the validity of Baker Brown 's claims of success. An article appeared in The Times in December, which was favourable towards Baker Brown 's work but suggested that Baker Brown had treated women of unsound mind. He was also accused of performing clitoridectomies without the consent or knowledge of his patients or their families. In 1867 he was expelled from the Obstetrical Society of London for carrying out the operations without consent. Baker Brown 's ideas were more accepted in the United States, where, from the 1860s, the operation was being used to cure hysteria, nymphomania, and in young girls what was called "rebellion '' or "unfeminine aggression ''.
Lewis Sayre, New York orthopedic surgeon, became a prominent advocate for circumcision in America. In 1870, he examined a five - year - old boy who was unable to straighten his legs, and whose condition had so far defied treatment. Upon noting that the boy 's genitals were inflamed, Sayre hypothesized that chronic irritation of the boy 's foreskin had paralyzed his knees via reflex neurosis. Sayre circumcised the boy, and within a few weeks, he recovered from his paralysis. After several additional incidents in which circumcision also appeared effective in treating paralyzed joints, Sayre began to promote circumcision as a powerful orthopedic remedy. Sayre 's prominence within the medical profession allowed him to reach a wide audience.
As more practitioners tried circumcision as a treatment for otherwise intractable medical conditions, sometimes achieving positive results, the list of ailments reputed to be treatable through circumcision grew. By the 1890s, hernia, bladder infections, kidney stones, insomnia, chronic indigestion, rheumatism, epilepsy, asthma, bedwetting, Bright 's disease, erectile dysfunction, syphilis, insanity, and skin cancer had all been linked to the foreskin, and many physicians advocated universal circumcision as a preventive health measure.
Specific medical arguments aside, several hypotheses have been raised in explaining the public 's acceptance of infant circumcision as preventive medicine. The success of the germ theory of disease had not only enabled physicians to combat many of the postoperative complications of surgery, but had made the wider public deeply suspicious of dirt and bodily secretions. Accordingly, the smegma that collects under the foreskin was viewed as unhealthy, and circumcision readily accepted as good penile hygiene. Secondly, moral sentiment of the day regarded masturbation as not only sinful, but also physically and mentally unhealthy, stimulating the foreskin to produce the host of maladies of which it was suspected. In this climate, circumcision could be employed as a means of discouraging masturbation. All About the Baby, a popular parenting book of the 1890s, recommended infant circumcision for precisely this purpose. (However, in a 1992 survey of 1,410 men in the United States, Laumann found that circumcised men were more likely to report masturbating at least once a month.) As hospitals proliferated in urban areas, childbirth, at least among the upper and middle classes, was increasingly under the care of physicians in hospitals rather than with midwives in the home. It has been suggested that once a critical mass of infants was being circumcised in the hospital, circumcision became a class marker of those wealthy enough to afford a hospital birth.
During the same time period, circumcision was becoming easier to perform. William Stewart Halsted 's 1885 discovery of hypodermic cocaine as a local anaesthetic made it easier for doctors without expertise in the use of chloroform and other general anaesthetics to perform minor surgeries. Also, several mechanically aided circumcision techniques, forerunners of modern clamp - based circumcision methods, were first published in the medical literature of the 1890s, allowing surgeons to perform circumcisions more safely and successfully.
By the 1920s, advances in the understanding of disease had undermined much of the original medical basis for preventive circumcision. Doctors continued to promote it, however, as good penile hygiene and as a preventive for a handful of conditions local to the penis: balanitis, phimosis, and penile cancer.
Circumcision in English - speaking countries arose in a climate of negative attitudes towards sex, especially concerning masturbation. In her 1978 article The Ritual of Circumcision, Karen Erickson Paige writes: "The current medical rationale for circumcision developed after the operation was in wide practice. The original reason for the surgical removal of the foreskin, or prepuce, was to control ' masturbatory insanity ' -- the range of mental disorders that people believed were caused by the ' polluting ' practice of ' self - abuse. ' ''
"Self - abuse '' was a term commonly used to describe masturbation in the 19th century. According to Paige, "treatments ranged from diet, moral exhortations, hydrotherapy, and marriage, to such drastic measures as surgery, physical restraints, frights, and punishment. Some doctors recommended covering the penis with plaster of Paris, leather, or rubber; cauterization; making boys wear chastity belts or spiked rings; and in extreme cases, castration. '' Paige details how circumcision became popular as a masturbation remedy:
In the 1890s, it became a popular technique to prevent, or cure, masturbatory insanity. In 1891 the president of the Royal College of Surgeons of England published On Circumcision as Preventive of Masturbation, and two years later another British doctor wrote Circumcision: Its Advantages and How to Perform It, which listed the reasons for removing the "vestigial '' prepuce. Evidently the foreskin could cause "nocturnal incontinence, '' hysteria, epilepsy, and irritation that might "give rise to erotic stimulation and, consequently, masturbation. '' Another physician, P.C. Remondino, added that "circumcision is like a substantial and well - secured life annuity... it insures better health, greater capacity for labor, longer life, less nervousness, sickness, loss of time, and less doctor bills. '' No wonder it became a popular remedy.
At the same time that circumcision was being advocated for men, clitoridectomies (removal of the clitoris) were also performed for the same reason (to treat female masturbators). The US "Orificial Surgery Society '' for female "circumcision '' operated until 1925, and clitoridectomies and infibulations would continue to be advocated by some through the 1930s. As late as 1936, L.E. Holt, an author of pediatric textbooks, advocated male and female circumcision as a treatment for masturbation.
One of the leading advocates of circumcision was John Harvey Kellogg. He advocated the consumption of Kellogg 's corn flakes to prevent masturbation, and he believed that circumcision would be an effective way to eliminate masturbation in males.
Covering the organs with a cage has been practiced with entire success. A remedy which is almost always successful in small boys is circumcision, especially when there is any degree of phimosis. The operation should be performed by a surgeon without administering an anesthetic, as the brief pain attending the operation will have a salutary effect upon the mind, especially if it be connected with the idea of punishment, as it may well be in some cases. The soreness which continues for several weeks interrupts the practice, and if it had not previously become too firmly fixed, it may be forgotten and not resumed. If any attempt is made to watch the child, he should be so carefully surrounded by vigilance that he can not possibly transgress without detection. If he is only partially watched, he soon learns to elude observation, and thus the effect is only to make him cunning in his vice.
Robert Darby (2003), writing in the Medical Journal of Australia, noted that some 19th - century circumcision advocates -- and their opponents -- believed that the foreskin was sexually sensitive:
In the 19th century the role of the foreskin in erotic sensation was well understood by physicians who wanted to cut it off precisely because they considered it the major factor leading boys to masturbation. The Victorian physician and venereologist William Acton (1814 -- 1875) damned it as "a source of serious mischief '', and most of his contemporaries concurred. Both opponents and supporters of circumcision agreed that the significant role the foreskin played in sexual response was the main reason why it should be either left in place or removed. William Hammond, a Professor of Mind in New York in the late 19th century, commented that "circumcision, when performed in early life, generally lessens the voluptuous sensations of sexual intercourse '', and both he and Acton considered the foreskin necessary for optimal sexual function, especially in old age. Jonathan Hutchinson, English surgeon and pathologist (1828 -- 1913), and many others, thought this was the main reason why it should be excised.
Born in the United Kingdom during the late - nineteenth century, John Maynard Keynes and his brother Geoffrey were both circumcised in boyhood due to their parents ' concern about their masturbatory habits. Mainstream pediatric manuals continued to recommend circumcision as a deterrent against masturbation until the 1950s.
Infant circumcision was taken up in the United Kingdom, the United States, Australia, New Zealand, and the English - speaking parts of Canada. Although it is difficult to determine historical circumcision rates, one estimate of infant circumcision rates in the United States holds that 30 % of newborn American boys were being circumcised in 1900, 55 % in 1925, and 72 % in 1950.
In South Korea, circumcision was largely unknown before the establishment of the United States trusteeship in 1945 and the spread of American influence. More than 90 % of South Korean high school boys are now circumcised at an average age of 12 years, which makes South Korea a unique case. However, circumcision rates there are now declining.
Decline in circumcision in the English - speaking world began in the postwar period. The British paediatrician Douglas Gairdner published a famous study in 1949, The fate of the foreskin, described as "a model of perceptive and pungent writing. '' It revealed that for the years 1942 -- 1947, about 16 children per year in England and Wales had died because of circumcision, a rate of about 1 per 6000 circumcisions. The article had an influential impact on medical practice and public opinion.
In 1949, a lack of consensus in the medical community as to whether circumcision carried with it any notable health benefit motivated the United Kingdom 's newly formed National Health Service to remove infant circumcision from its list of covered services. Since then, circumcision has been an out - of - pocket cost to parents, and the proportion of circumcised men is around 9 %.
Similar trends have operated in Canada (where public medical insurance is universal, and where private insurance does not replicate services already paid from the public purse): individual provincial health insurance plans began delisting non-therapeutic circumcision in the 1980s. Manitoba, the final province to delist non-therapeutic circumcision, did so in 2005. The practice has also declined to about nine percent of newborn boys in Australia and is almost unknown in New Zealand.
|
who plays hammer man in blood and bone | Blood and Bone - wikipedia
Blood and Bone is a 2009 American direct - to - DVD martial arts film directed by Ben Ramsey and written by Michael Andrews. The film stars Michael Jai White, Eamonn Walker and Julian Sands, and features martial artist Matt Mullins, former professional wrestler Ernest "The Cat '' Miller, MMA fighters Bob Sapp, Kimbo Slice, Maurice Smith, and Gina Carano and former professional bodybuilder Melvin Anthony.
Fresh out of prison, Isaiah Bone (Michael Jai White) moves to Los Angeles, where underground fights are being held. One night, after watching a match involving local champion Hammerman, Bone makes a deal with promoter Pinball to get him into the fight scene for 20 % of his earnings; 40 % if Pinball puts his own money on the line. On that same night, Bone encounters mob boss James and his girlfriend Angela Soto. Bone enters his first underground fight and quickly defeats his opponent with only two kicks. Pinball explains to Bone that Angela was previously married, but James set up her husband Danny on a triple - homicide, sending him to jail. When James learned that she was pregnant, he had her undergo an abortion. Since then, Angela has fallen into drug addiction.
Over the next few nights, Bone makes a name for himself in the underground fighting scene, defeating every fighter in his path and earning himself and Pinball thousands in cash. At the same time, he bonds with the people who live at his apartment building: Tamara - the landlady who manages his apartment, Roberto - an elderly Latin - American man whom he plays chess with, and Jared - a young boy Tamara adopted after his father was sent to prison. Then, after making the once - undefeated Hammerman fall to the ground, Bone is offered a deal by James. The international underground fighting scene is run by a league of rich, powerful men known as the "Consortium '', but mainly by a black market arms dealer named Franklin McVeigh, and James wants Bone to square off against Pretty Boy Price, the reigning champion. After telling James he will consider the offer, Bone reveals to Angela that he was cell mates and close friends with Danny. Then one day, Danny was murdered by an inmate named JC. Angela reveals that shortly after Danny went to prison, she gave birth to a son, but lost custody of him and does not know if he is still alive. Bone promises to bring her to her son, but sends her to a drug rehab clinic until she is ready.
The next morning, James offers McVeigh US $5 million to get the Consortium to approve and schedule a fight between Bone and Price. That night, Bone discovers that Roberto has been murdered in front of the apartment, mauled to death by James ' dogs because he witnessed one of James ' street killings. Bone declines the offer to fight for James, mainly because he never agreed to it; as a result, James orders his thugs to hunt down Bone and Pinball. James ' bodyguard Teddy D and his thugs head to the rehab clinic to pick up Angela, only to have Bone and Pinball dispatch them. Transmitting his location through Teddy D 's GPS phone, James has the duo follow him to McVeigh 's mansion. There, Bone is ordered by James to fight Price and win back his money, or else he will have Angela, Tamara, Jared and Pinball killed. During the conversation, Bone uses the GPS phone to secretly record James ' revelation that he had Danny set up and murdered, and transmits the recording to Pinball 's cell phone; Pinball then sends the video to the police. Bone faces Price, but at the point where he is close to defeating him, he taps the ground and forfeits the fight. An infuriated James grabs his katana and attacks Bone, but Bone is thrown a jian by McVeigh 's bodyguard to even the odds. Bone drops the sword and uses the sheath instead, beating James with it. In the middle of the melee, Bone blocks a sword slash, causing James to cut off his own hand. Bone leaves before the police arrive at the mansion to arrest James.
The next day, Angela is reunited with Jared. Tamara, who had threatened to kick Bone out due to his association with James, offers him to stay; he declines, saying he will only cause further trouble. Before exiting the apartment, he leaves her an envelope full of cash and asks her to take Angela in once she is rehabilitated. He also parts ways with Pinball, saying he has business to take care of.
During the end credits, James is attacked and sodomized in prison by JC and his gang with a shiv.
Kam Williams of News Blaze gave the film 2.5 stars out of 4, calling it "a decent, low - budget splatter affair for folks who care more about body counts and orchestrated martial arts stunts than a coherent storyline. Michael Jai White makes a convincing case here that he deserves future consideration as an action star on the order of a Vin Diesel or a Sylvester Stallone ''.
|
where was the fertile crescent and who lived there | Fertile Crescent - wikipedia
The Fertile Crescent (also known as the "cradle of civilization '') is a crescent - shaped region where agriculture and early human civilizations such as Sumer and Ancient Egypt flourished due to inundations from the surrounding Nile, Euphrates, and Tigris rivers. Technological advances in the region include the development of writing, glass, the wheel, agriculture, and the use of irrigation.
Modern - day countries with significant territory within the Fertile Crescent are Iraq, Syria, Lebanon, Cyprus, Jordan, Israel, Palestine, Egypt, as well as the southeastern fringe of Turkey and the western fringes of Iran. The term is also used in international geopolitics and diplomatic relations.
The term "Fertile Crescent '' was popularized by archaeologist James Henry Breasted in Outlines of European History (1914) and Ancient Times, A History of the Early World (1916). Breasted wrote:
This fertile crescent is approximately a semicircle, with the open side toward the south, having the west end at the southeast corner of the Mediterranean, the center directly north of Arabia, and the east end at the north end of the Persian Gulf (see map, p. 100). It lies like an army facing south, with one wing stretching along the eastern shore of the Mediterranean and the other reaching out to the Persian Gulf, while the center has its back against the northern mountains. The end of the western wing is Palestine; Assyria makes up a large part of the center; while the end of the eastern wing is Babylonia. This great semicircle, for lack of a name, may be called the Fertile Crescent. It may also be likened to the shores of a desert - bay, upon which the mountains behind look down -- a bay not of water but of sandy waste, some eight hundred kilometres across, forming a northern extension of the Arabian desert and sweeping as far north as the latitude of the northeast corner of the Mediterranean. This desert - bay is a limestone plateau of some height -- too high indeed to be watered by the Tigris and Euphrates, which have cut cañons obliquely across it. Nevertheless, after the meager winter rains, wide tracts of the northern desert - bay are clothed with scanty grass, and spring thus turns the region for a short time into grasslands. The history of Western Asia may be described as an age - long struggle between the mountain peoples of the north and the desert wanderers of these grasslands -- a struggle which is still going on -- for the possession of the Fertile Crescent, the shores of the desert - bay. There is no name, either geographical or political, which includes all of this great semicircle (see map, p. 100). Hence we are obliged to coin a term and call it the Fertile Crescent.
In current usage, the Fertile Crescent includes Iraq, Kuwait, and surrounding portions of Iran and Turkey, as well as the rest of the Levantine coast of the Mediterranean Sea, Syria, Jordan, Israel, and Lebanon. Water sources include the Jordan River. The inner boundary is delimited by the dry climate of the Syrian Desert to the south. Around the outer boundary are the Anatolian highlands to the north and the Sahara Desert to the west.
As crucial as rivers and marshlands were to the rise of civilization in the Fertile Crescent, they were not the only factor. The area is geographically important as the "bridge '' between Africa and Eurasia, which has allowed it to retain a greater amount of biodiversity than either Europe or North Africa, where climate changes during the Ice Age led to repeated extinction events when ecosystems became squeezed against the waters of the Mediterranean Sea. The Saharan pump theory posits that this Middle Eastern land - bridge was extremely important to the modern distribution of Old World flora and fauna, including the spread of humanity.
The area has borne the brunt of the tectonic divergence between the African and Arabian plates and the converging Arabian and Eurasian plates, which has made the region a very diverse zone of high snow - covered mountains.
The Fertile Crescent had many diverse climates, and major climatic changes encouraged the evolution of many "r '' type annual plants, which produce more edible seeds than "K '' type perennial plants. The region 's dramatic variety in elevation gave rise to many species of edible plants for early experiments in cultivation. Most importantly, the Fertile Crescent was home to the eight Neolithic founder crops important in early agriculture (i.e., wild progenitors to emmer wheat, einkorn, barley, flax, chick pea, pea, lentil, bitter vetch), and four of the five most important species of domesticated animals -- cows, goats, sheep, and pigs; the fifth species, the horse, lived nearby. The Fertile Crescent flora comprises a high percentage of plants that can self - pollinate, but may also be cross-pollinated. These plants, called "selfers '', were one of the geographical advantages of the area because they did not depend on other plants for reproduction.
Related regions: Mesopotamia, Egypt, Persia, Anatolia, and the Levant.
As well as possessing many sites with the skeletal and cultural remains of both pre-modern and early modern humans (e.g., at Tabun and Es Skhul caves in Israel), later Pleistocene hunter - gatherers, and Epipalaeolithic semi-sedentary hunter - gatherers (the Natufians), the Fertile Crescent is most famous for its sites related to the origins of agriculture. The western zone around the Jordan and upper Euphrates rivers gave rise to the first known Neolithic farming settlements (referred to as Pre-Pottery Neolithic A (PPNA)), which date to around 9,000 BCE and include sites such as Göbekli Tepe and Jericho.
This region, alongside Mesopotamia (which lies to the east of the Fertile Crescent, between the rivers Tigris and Euphrates), also saw the emergence of early complex societies during the succeeding Bronze Age. There is also early evidence from the region for writing and the formation of hierarchical state - level societies. This has earned the region the nickname "The cradle of civilization ''.
It is in this region where the first libraries appeared, some 5,000 years ago. The oldest known library was found in northern Syria, in the ruins of Ebla, a major commercial center that was destroyed around 1650 BCE.
Both the Tigris and Euphrates start in the Taurus Mountains of what is modern - day Turkey. Farmers in southern Mesopotamia had to protect their fields from flooding each year; northern Mesopotamia, by contrast, had just enough rain to make some farming possible. To protect against flooding, farmers in the south built levees.
Since the Bronze Age, the region 's natural fertility has been greatly extended by irrigation works, upon which much of its agricultural production continues to depend. The last two millennia have seen repeated cycles of decline and recovery as past works have fallen into disrepair through the replacement of states, to be replaced under their successors. Another ongoing problem has been salination -- gradual concentration of salt and other minerals in soils with a long history of irrigation.
Prehistoric seedless figs were discovered at Gilgal I in the Jordan Valley, suggesting that fig trees were being planted some 11,400 years ago. Cereals were already grown in Syria as long as 9,000 years ago. Small cats (Felis silvestris) also were domesticated in this region.
Linguistically, the Fertile Crescent was a region of great diversity. Historically, Semitic languages generally prevailed in the lowlands, whilst in the mountainous areas to the east and north a number of generally unrelated languages were found including Elamite, Kassite, and Hurro - Urartian. The precise affiliation of these, and their date of arrival, remain topics of scholarly discussion. However, given lack of textual evidence for the earliest era of prehistory, this debate is unlikely to be resolved in the near future.
The evidence which does exist suggests that by the third millennium BCE and into the second, several language groups already existed in the region. These included:
Links between Hurro - Urartian and Hattic and the indigenous languages of the Caucasus have frequently been suggested, but are not generally accepted.
Coordinates: 36 ° N 40 ° E
|
why manufacture of sulphuric acid is called contact process | Contact process - wikipedia
The contact process is the current method of producing sulfuric acid in the high concentrations needed for industrial processes. Platinum used to be the catalyst for this reaction; however, as it is susceptible to reacting with arsenic impurities in the sulfur feedstock, vanadium (V) oxide (V₂O₅) is now preferred.
This process was patented in 1831 by British vinegar merchant Peregrine Phillips. In addition to being a far more economical process for producing concentrated sulfuric acid than the previous lead chamber process, the contact process also produces sulfur trioxide and oleum.
The process can be divided into five stages:
Purification of the air and sulfur dioxide (SO₂) is necessary to avoid catalyst poisoning (i.e., the loss of catalytic activity). The gas is then washed with water and dried with sulfuric acid.
To conserve energy, the mixture is heated by exhaust gases from the catalytic converter by heat exchangers.
Sulfur dioxide and dioxygen then react as follows:

2 SO₂ (g) + O₂ (g) ⇌ 2 SO₃ (g)

The reaction is reversible and exothermic.
According to Le Chatelier 's principle, a lower temperature should be used to shift the chemical equilibrium towards the right, hence increasing the percentage yield. However, too low a temperature will lower the formation rate to an uneconomical level. Hence, to increase the reaction rate, high temperatures (450 ° C), medium pressures (1 - 2 atm), and vanadium (V) oxide (V₂O₅) are used to ensure an adequate (> 95 %) conversion. The catalyst only serves to increase the rate of reaction; it does not change the position of the thermodynamic equilibrium. The mechanism for the action of the catalyst comprises two steps: SO₂ is first oxidised to SO₃ by V₂O₅, which is itself reduced to V₂O₄; the reduced vanadium oxide is then re-oxidised back to V₂O₅ by oxygen in the gas stream.
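To give a sense of the size of this trade-off, the short Python sketch below applies the van 't Hoff and Arrhenius relations to the oxidation step. The reaction enthalpy of roughly −197 kJ per mole of reaction is standard textbook data, but the activation energy, the two temperatures compared, and the function name are assumptions made purely for illustration, so the output shows the shape of the trade-off rather than plant-design values.

```python
# Illustrative sketch (not from the article) of the temperature trade-off for
# 2 SO2 + O2 <=> 2 SO3: cooling favours the equilibrium yield of SO3
# (van 't Hoff) but slows the reaction (Arrhenius).
import math

R = 8.314          # gas constant, J/(mol*K)
DELTA_H = -197e3   # J per mole of reaction, approximate textbook enthalpy
E_A = 100e3        # J/mol, assumed activation energy (hypothetical value)

def temperature_ratios(t1_c, t2_c):
    """Return (K(T2)/K(T1), k(T2)/k(T1)) for a change from t1_c to t2_c in Celsius."""
    t1, t2 = t1_c + 273.15, t2_c + 273.15
    dinv = 1.0 / t2 - 1.0 / t1
    k_eq_ratio = math.exp(-DELTA_H / R * dinv)  # van 't Hoff relation
    rate_ratio = math.exp(-E_A / R * dinv)      # Arrhenius relation
    return k_eq_ratio, rate_ratio

k_eq, rate = temperature_ratios(400.0, 600.0)
print(f"400 °C -> 600 °C: equilibrium constant x {k_eq:.1e}, rate x {rate:.0f}")
# The equilibrium constant drops by a factor of a few thousand while the rate
# rises roughly 60-fold, which is why a compromise temperature of about 450 °C
# plus a catalyst is used rather than simply cooling the converter.
```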
Hot sulfur trioxide passes through the heat exchanger and is dissolved in concentrated H₂SO₄ in the absorption tower to form oleum:

H₂SO₄ (l) + SO₃ (g) → H₂S₂O₇ (l)
Note that directly dissolving SO₃ in water is impractical due to the highly exothermic nature of the reaction. Acidic vapor or mists are formed instead of a liquid.
Oleum is reacted with water to form concentrated H₂SO₄:

H₂S₂O₇ (l) + H₂O (l) → 2 H₂SO₄ (l)
This includes the dusting tower, cooling pipes, scrubbers, drying tower, arsenic purifier and testing box. Sulfur dioxide has many impurities, such as vapours, dust particles and arsenous oxide. Therefore, it must be purified to avoid catalyst poisoning (i.e., the destruction of catalytic activity and loss of efficiency). In this process, the gas is washed with water and dried by sulfuric acid. In the dusting tower, the sulfur dioxide is exposed to steam, which removes the dust particles. After the gas is cooled, the sulfur dioxide enters the washing tower, where it is sprayed with water to remove any soluble impurities. In the drying tower, sulfuric acid is sprayed on the gas to remove the moisture from it. Finally, arsenic oxide is removed when the gas is exposed to ferric hydroxide.
The next step in the Contact Process is DCDA, or Double Contact Double Absorption. In this process the product gases (SO₂ and SO₃) are passed through absorption towers twice to achieve further absorption and conversion of SO₂ to SO₃ and production of higher grade sulfuric acid.
SO₂ - rich gases enter the catalytic converter, usually a tower with multiple catalyst beds, and are converted to SO₃, achieving the first stage of conversion. The exit gases from this stage contain both SO₂ and SO₃, which are passed through intermediate absorption towers where sulfuric acid is trickled down packed columns and SO₃ reacts with water, increasing the sulfuric acid concentration. Though SO₂ too passes through the tower, it is unreactive and comes out of the absorption tower.
This stream of gas containing SO₂, after necessary cooling, is passed through the catalytic converter bed column again, achieving up to 99.8 % conversion of SO₂ to SO₃, and the gases are again passed through the final absorption column, thus not only achieving high conversion efficiency for SO₂ but also enabling production of a higher concentration of sulfuric acid.
The industrial production of sulfuric acid involves proper control of temperatures and flow rates of the gases as both the conversion efficiency and absorption are dependent on these.
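As a rough illustration of what these conversion figures mean for output, the sketch below works the sulfur-to-acid mass balance for one tonne of sulfur at the single-pass (about 95 %) and DCDA (99.8 %) conversions quoted above. The pure sulfur feed, the complete absorption of SO₃, and the function and variable names are illustrative assumptions, not figures from the article.

```python
# Minimal sketch (not from the article): sulfur-to-acid mass balance for the
# contact process, assuming a pure sulfur feed and complete absorption of SO3.

M_S = 32.06      # molar mass of sulfur, g/mol
M_H2SO4 = 98.08  # molar mass of sulfuric acid, g/mol

def h2so4_yield_kg(sulfur_kg, so2_to_so3_conversion):
    """Mass of H2SO4 (kg) obtainable from sulfur_kg of sulfur.

    The chain S -> SO2 -> SO3 -> H2SO4 is 1:1 in sulfur atoms, so the only
    loss modelled here is the SO2 -> SO3 conversion step.
    """
    mol_sulfur = sulfur_kg * 1000.0 / M_S
    mol_acid = mol_sulfur * so2_to_so3_conversion
    return mol_acid * M_H2SO4 / 1000.0

for label, conversion in [("single pass (~95 %)", 0.95), ("DCDA (99.8 %)", 0.998)]:
    print(f"{label}: {h2so4_yield_kg(1000.0, conversion):,.0f} kg H2SO4 per tonne of sulfur")
```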
|
who has the highest batting average in baseball | List of major League baseball career batting average Leaders - wikipedia
In baseball, the batting average (BA) is defined by the number of hits divided by at bats. It is usually reported to three decimal places and pronounced as if it were multiplied by 1,000: a player with a batting average of .300 is "batting three - hundred. '' A point (or percentage point) is understood to be .001. If necessary to break ties, batting averages could be taken to more than three decimal places.
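A minimal Python sketch of that arithmetic is shown below. The hit and at-bat totals are career figures commonly cited for Ty Cobb and are used only as an illustration; the function name and rounding behaviour are likewise illustrative rather than an official scoring rule.

```python
# Minimal sketch of the batting-average arithmetic described above. The sample
# totals are career figures commonly cited for Ty Cobb, used here only as an
# illustration.

def batting_average(hits, at_bats, places=3):
    """Hits divided by at bats, rounded to `places` decimal places."""
    if at_bats == 0:
        return 0.0
    return round(hits / at_bats, places)

print(f"{batting_average(4189, 11434):.3f}")    # 0.366, read aloud as "batting three sixty-six"
print(batting_average(4189, 11434, places=5))   # 0.36636, useful for breaking ties
```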
Outfielder Ty Cobb, whose career ended in 1928, has the highest batting average in Major League Baseball (MLB) history. He batted .366 over 24 seasons, mostly with the Detroit Tigers. In addition, he won a record 11 batting titles for leading the American League in BA over the course of an entire season. He batted over .360 in 11 consecutive seasons from 1909 to 1919. Rogers Hornsby has the second highest BA of all - time, at .358. He won seven batting titles in the National League (NL) and has the highest NL average in a single season since 1900, when he batted .424 in 1924. He batted over .370 in six consecutive seasons.
Shoeless Joe Jackson is the only other player to finish his career with a batting average of .350 or higher. He batted .356 over 13 seasons before he was permanently suspended from organized baseball in 1921 for his role in the Black Sox Scandal. Lefty O'Doul first came to the major leagues as a pitcher, but after developing a sore arm, he converted to an outfielder and won two batting titles. The fifth player on the list, and the last with at least a .345 BA, is Ed Delahanty. Delahanty 's career was cut short when he fell into Niagara Falls and died during the 1903 season.
The last player to bat .400 in a season, Ted Williams, ranks tied for seventh on the all - time career BA list. Babe Ruth hit for a career .342 average and is tied for ninth on the list. Miguel Cabrera holds the highest career batting average among active players.
|
who was ordered to guard the sword of kings | Swiss Guards - wikipedia
Swiss Guards (French: Gardes Suisses; German: Schweizergarde) are the Swiss soldiers who have served as guards at foreign European courts since the late 15th century.
Foreign military service was outlawed by the revised Swiss Federal Constitution of 1874, with the only exception being the Pontifical Swiss Guard (Latin: Pontificia Cohors Helvetica, Cohors Pedestris Helvetiorum a Sacra Custodia Pontificis; Italian: Guardia Svizzera Pontificia) stationed in Vatican City. The modern Papal Swiss Guard serves as both a ceremonial unit and a bodyguard. Established in 1506, it is one of the oldest military units in the world.
The earliest Swiss guard unit to be established on a permanent basis was the Hundred Swiss (Cent Suisses), which served at the French court from 1490 to 1817. This small force was complemented in 1567 by a Swiss Guards regiment. In the 18th and early 19th centuries several other Swiss Guard units existed for periods in various European courts.
In addition to small household and palace units, Swiss mercenary regiments have served as regular line troops in various armies; notably those of France, Spain and Naples (see Swiss mercenaries). They were considered the most effective mercenaries of the 15th century until their battle - drill was improved upon by the German Landsknechte. At the Battle of Marignano (1515), the Landsknecht in French service defeated the Swiss pikemen.
There were two different corps of Swiss mercenaries performing guard duties for the Kings of France: the Hundred Swiss (Cent Suisses), serving within the Palace as essentially bodyguards and ceremonial troops, and the Swiss Guards (Gardes Suisses), guarding the entrances and outer perimeter. In addition the Gardes Suisses served in the field as a fighting regiment in times of war.
Francis I of France used some 120,000 Swiss mercenaries in the Italian Wars. After the Perpetual Peace of 1516, a separate Soldbündnis or service pact between the Swiss Confederacy and the kingdom of France was concluded in 1521, allowing the service of Swiss mercenary regiments as regular parts of the French armed forces. This arrangement outlasted three centuries, with four Swiss regiments participating in the French invasion of Russia in 1812, foreign military service of Swiss citizens being finally outlawed in 1848 with the formation of Switzerland as a federal state.
The Hundred Swiss were created in 1480 when Louis XI retained a Swiss company for his personal guard. By 1496 they comprised one hundred guardsmen plus about twenty - seven officers and sergeants. Their main role was the protection of the King within the palace as the garde du dedans du Louvre (the Louvre indoor guard), but in the earlier part of their history they accompanied the King to war. In the Battle of Pavia (1525) the Hundred Swiss of Francis I of France were slain before Francis was captured by the Spanish. The Hundred Swiss shared the indoor guard with the King 's Bodyguards (Garde du Corps), who were Frenchmen.
The Hundred Swiss were armed with halberds, the blade of which carried the Royal arms in gold, as well as gold - hilted swords. Their ceremonial dress as worn until 1789 comprised an elaborate 16th century Swiss costume covered with braiding and livery lace. A less ornate dark blue and red uniform with bearskin headdress was worn for ordinary duties.
The Cent Suisses company was disbanded after Louis XVI of France left the Palace of Versailles in October 1789. It was, however, refounded on 15 July 1814 with an establishment of 136 guardsmen and eight officers. The Hundred Swiss accompanied Louis XVIII into exile in Belgium the following year and returned with him to Paris following the Battle of Waterloo. The unit then resumed its traditional role of palace guards at the Tuileries but in 1817 it was replaced by a new guard company drawn from the French regiments of the Royal Guard.
In 1616, Louis XIII of France gave a regiment of Swiss infantry the name of Gardes suisses (Swiss Guards). The new regiment had the primary role of protecting the doors, gates and outer perimeters of the various royal palaces. In its early years this unit was officially a regiment of the line, but it was generally regarded as part of the Maison militaire du roi de France.
During the 17th and 18th centuries the Swiss Guards maintained a reputation for discipline and steadiness in both peacetime service and foreign campaigning. Their officers were all Swiss and their rate of pay substantially higher than that of the regular French soldiers. Internal discipline was maintained according to Swiss codes which were significantly harsher than those of the regular French Army.
By the end of the 17th century the Swiss Guards were formally part of the Maison militaire du roi. As such they were brigaded with the Regiment of French Guards (Gardes Françaises), with whom they shared the outer guard, and were in peace - time stationed in barracks on the outskirts of Paris. Like the eleven Swiss regiments of line infantry in French service, the Gardes suisses wore red coats. The line regiments had black, yellow or light blue facings but the Swiss Guards were distinguished by dark blue lapels and cuffs edged in white embroidery. Only the grenadier company wore bearskins while the other companies wore the standard tricorn headdress of the French infantry. The Guards were recruited from all the Swiss cantons. The nominal establishment was 1,600 men though actual numbers normally seem to have been below this.
The most famous episode in the history of the Swiss Guards was their defence of the Tuileries Palace in central Paris during the French Revolution. Of the nine hundred Swiss Guards defending the Palace on 10 August 1792, about six hundred were killed during the fighting or massacred after surrender. One group of sixty Swiss were taken as prisoners to the Paris City Hall before being killed by the crowd there. An estimated hundred and sixty more died in prison of their wounds, or were killed during the September Massacres that followed. Apart from less than a hundred Swiss who escaped from the Tuileries, some hidden by sympathetic Parisians, the only survivors of the regiment were a three - hundred - strong detachment that had been sent to Normandy to escort grain convoys a few days before 10 August. The Swiss officers were mostly amongst those massacred, although Major Karl Josef von Bachmann in command at the Tuileries was formally tried and guillotined in September, still wearing his red uniform coat. Two Swiss officers, the captains Henri de Salis and Joseph Zimmermann, did however survive and went on to reach senior rank under Napoleon and the Restoration.
There appears to be no truth in the charge that Louis XVI caused the defeat and destruction of the Guards by ordering them to lay down their arms when they could still have held the Tuileries. Rather, the Swiss ran low on ammunition and were overwhelmed by superior numbers when fighting broke out spontaneously after the Royal Family had been escorted from the Palace to take refuge with the National Assembly. A note has survived written by the King ordering the Swiss to retire from the Palace and return to their barracks but this was only acted on after their position had become untenable. The regimental standards had been secretly buried by the adjutant shortly before the regiment was summoned to the Tuileries on the night of 8 / 9 August, indicating that the likely end was foreseen. They were discovered by a gardener and ceremoniously burned by the new Republican authorities on 14 August. The barracks of the Guard at Courbevoie were stormed by the local National Guard and the few Swiss still on duty there also killed.
The heroic but futile stand of the Swiss is commemorated by Bertel Thorvaldsen 's Lion Monument in Lucerne, dedicated in 1821, which shows a dying lion collapsed upon broken symbols of the French monarchy. An inscription on the monument lists the twenty - six Swiss officers who died on 10 August and 2 -- 3 September 1792, and records that approximately 760 Swiss Guardsmen were killed on those days.
The French Revolution abolished mercenary troops in its citizen army, but Napoleon and the Bourbon Restoration both made use of Swiss troops. Four Swiss infantry regiments were employed by Napoleon, serving in both Spain and Russia. Two of the eight infantry regiments included in the Garde Royale from 1815 to 1830 were Swiss and can be regarded as successors of the old Gardes suisses. When the Tuileries was stormed again, in the July Revolution (29 July 1830), the Swiss regiments, fearful of another massacre, were withdrawn or melted into the crowd. They were not used again. In 1831 disbanded veterans of the Swiss regiments and another foreign unit, the Hohenlohe Regiment, were recruited into the newly raised French Foreign Legion for service in Algeria.
The Pontifical Swiss Guard (German: Päpstliche Schweizergarde; French: Garde suisse pontificale; Italian: Guardia Svizzera Pontificia; Latin: Pontificia Cohors Helvetica or Cohors Pedestris Helvetiorum a Sacra Custodia Pontificis) is an exception to the Swiss rulings of 1874 and 1927. A small force maintained by the Holy See, it is responsible for the safety of the Pope, including the security of the Apostolic Palace. The Swiss Guard serves as the de facto military of Vatican City.
Swiss Guard units similar to those of France were in existence at several other Royal Courts at the dates indicated below:
The Swiss constitution, as amended in 1874, forbade all military capitulations and recruitment of Swiss by foreign powers, although volunteering in foreign armies continued until prohibited outright in 1927. The Papal Swiss Guard (see above) remains an exception to this prohibition, reflecting the unique political status of the Vatican and the bodyguard - like role of the unit.
When writing Hamlet, Shakespeare assumed (perhaps relying on his sources) that the royal house of Denmark employed a Swiss Guard: In Act IV, Scene v (line 98) he has King Claudius exclaim "Where are my Switzers? Let them guard the door ''. However, it may also be due to the word "Swiss '' having become a generic term for a royal guard in popular European usage. Coincidentally, the present - day gatekeepers of the royal palace of Copenhagen are known as schweizere, "Swiss ''.
|
who was the girl in the opening scene of jaws | Susan Backlinie - wikipedia
Susan Backlinie (born Susan Jane Swindall on September 1, 1946) is a former actress and stuntwoman. She is known for her role as Chrissie Watkins, the first shark victim in Steven Spielberg 's blockbuster Jaws (1975).
Along with being a stuntwoman specializing in swimming work, she was also an animal trainer. She currently works as a computer accountant in her birthplace of Ventura, California.
Backlinie 's appearance in Jaws took three days to shoot, with Backlinie strapped into a harness while the crew struggled to get the desired effects. Backlinie also appeared in Spielberg 's film 1941 parodying her role in Jaws. Instead of being attacked by a shark during a midnight swim, she 's "picked up '' by the periscope of a Japanese submarine. The scene has been described as the best joke in what is otherwise widely considered one of Spielberg 's least successful films. Backlinie also appeared in the 1977 film Day of the Animals, regarded by some as a Jaws clone about nature gone bad.
When Jaws co-star Richard Dreyfuss saw a daily of her performance of being attacked by the shark, he told her it absolutely terrified him.
Contrary to rumor, Backlinie 's startled reaction and screams of anguish were not due to her being injured by the harness that yanked her back and forth in the water. However, she was attached to a line that was anchored to the ocean floor beneath her, and she was intentionally not warned when she would be first pulled under water. This helped provoke a more genuine expression of surprise from her initially, but the remainder of her performance was her own as an actress.
She appeared in her own pictorial ("The Lady and the Lion '') in the January 1973 issue of Penthouse magazine.
|
when was the writ of habeas corpus created | Habeas corpus in the United States - wikipedia
Habeas corpus (/ ˈheɪbiəs ˈkɔːrpəs /) is a recourse in law challenging the reasons or conditions of a person 's confinement under color of law. A petition for habeas corpus is filed with a court that has jurisdiction over the custodian, and if granted, a writ is issued directing the custodian to bring the confined person before the court for examination into those reasons or conditions. The Suspension Clause of the United States Constitution specifically included the English common law procedure in Article One, Section 9, clause 2, which demands that "The privilege of the writ of habeas corpus shall not be suspended, unless when in cases of rebellion or invasion the public safety may require it. ''
United States law affords persons the right to petition the federal courts for a writ of habeas corpus. Individual states also afford persons the ability to petition their own state court systems for habeas corpus pursuant to their respective constitutions and laws when held or sentenced by state authorities.
Federal habeas review did not extend to those in state custody until almost a century after the nation 's founding. During the Civil War and Reconstruction, as later during the War on Terrorism, the right to petition for a writ of habeas corpus was substantially curtailed for persons accused of engaging in certain conduct. In reaction to the former, and to ensure state courts enforced federal law, a Reconstruction Act for the first time extended the right of federal court habeas review to those in the custody of state courts (prisons and jails), expanding the writ essentially to all imprisoned on American soil. The federal habeas statute that resulted, with substantial amendments, is now at 28 U.S.C. § 2241. For many decades, the great majority of habeas petitions reviewed in federal court have been filed by those imprisoned in state prisons by state courts for state crimes (e.g., murder, rape, robbery, etc.), since in the American system, crime has historically been a matter of state law.
The privilege of habeas corpus is not a right against unlawful arrest, but rather a right to be released from imprisonment after such arrest. If one believes the arrest is without legal merit and subsequently refuses to come willingly, he still may be guilty of resisting arrest, which can sometimes be a crime in and of itself (even if the initial arrest itself was illegal) depending on the state.
Habeas corpus derives from the English common law where the first recorded usage was in 1305, in the reign of King Edward I of England. The procedure for the issuing of writs of habeas corpus was first codified by the Habeas Corpus Act 1679, following judicial rulings which had restricted the effectiveness of the writ. A previous act had been passed in 1640 to overturn a ruling that the command of the Queen was a sufficient answer to a petition of habeas corpus. Winston Churchill, in his chapter on the English Common Law in The Birth of Britain, explains the process thus:
Only the King had a right to summon a jury. Henry (II) accordingly did not grant it to private courts... But all this was only a first step. Henry also had to provide means whereby the litigant, eager for royal justice, could remove his case out of the court of his lord into the court of the King. The device which Henry used was the royal writ... and any man who could by some fiction fit his own case to the wording of one of the royal writs might claim the King 's justice.
The writ of habeas corpus was issued by a superior court in the name of the Monarch, and commanded the addressee (a lower court, sheriff, or private subject) to produce the prisoner before the Royal courts of law. Petitions for habeas corpus could be made by the prisoner himself or by a third party on his behalf, and as a result of the Habeas Corpus Acts could be made regardless of whether the court was in session, by presenting the petition to a judge.
The 1679 Act remains important in 21st century cases. This Act and the historical body of British practice that relies upon it has been used to interpret the habeas rights granted by the United States Constitution, while taking into account the understanding of the writ held by the framers of the Constitution.
The Suspension Clause of Article I does not expressly establish a right to the writ of habeas corpus; rather, it prevents Congress from restricting it. There has been much scholarly debate over whether the Clause positively establishes a right under the federal constitution, merely exists to prevent Congress from prohibiting state courts from granting the writ, or protects a pre-existing common law right enforceable by federal judges. However, in the cases of Immigration and Naturalization Service v. St. Cyr (2001), and Boumediene v. Bush (2008) the U.S. Supreme Court suggested that the Suspension Clause protects "the writ as it existed in 1789, '' that is, as a writ which federal judges could issue in the exercise of their common law authority.
Regardless of whether the writ is positively guaranteed by the constitution, habeas corpus was first established by statute in the Judiciary Act of 1789. This statutory writ applied only to those held in custody by officials of the executive branch of the federal government and not to those held by state governments, which independently afford habeas corpus pursuant to their respective constitutions and laws. From 1789 until 1866, the federal writ of habeas corpus was largely restricted to prisoners in federal custody, at a time when no direct appeals from federal criminal convictions were provided for by law. Habeas corpus remained the only means for judicial review of federal capital convictions until 1889, and the only means for review of federal convictions for other "infamous crimes '' until 1891. Until 1983 the writ of habeas corpus remained the only way that decisions of military courts could be reviewed by the Supreme Court.
The authority of federal courts to review the claims of prisoners in state custody was not clearly established until Congress adopted a statute (28 U.S.C. § 2254) granting federal courts that authority in 1867, as part of the post-Civil War Reconstruction. The U.S. Supreme Court in the case of Waley v. Johnson (1942) interpreted this authority broadly to allow the writ to be used to challenge convictions or sentences in violation of a defendant 's constitutional rights where no other remedy was available.
The U.S. Congress grants federal district courts, the Supreme Court, and all Article III federal judges, acting in their own right, jurisdiction under 28 U.S.C. § 2241 to issue writs of habeas corpus to release prisoners held by any government entity within the country from custody, subject to certain limitations, if the prisoner --
In the 1950s and 1960s, decisions by the Warren Court greatly expanded the use and scope of the federal writ, largely due to the "constitutionalizing '' of criminal procedure by applying the Bill of Rights, in part, to state courts using the incorporation doctrine. This afforded state prisoners many more opportunities to claim that their convictions were unconstitutional, which provided grounds for habeas corpus relief. In the last thirty years, decisions by the Burger and Rehnquist Courts have somewhat narrowed the writ. The Antiterrorism and Effective Death Penalty Act of 1996 (AEDPA) further limited the use of the federal writ by imposing a one - year statute of limitations and dramatically increasing the federal judiciary 's deference to decisions previously made in state court proceedings, either on appeal or in a state court habeas corpus action.
One of AEDPA 's most controversial changes is the requirement that any constitutional right invoked to vacate a state court conviction rooted in a mistake of law by the state court must have "resulted in a decision that was contrary to, or involved an unreasonable application of, clearly established Federal law, as determined by the Supreme Court of the United States. '' Thus, a U.S. Court of Appeals must ignore its own precedents and affirm a state court decision contrary to its precedents, if the U.S. Supreme Court has never squarely addressed a particular issue of federal law.
On April 27, 1861, the right of habeas corpus was unilaterally suspended by President Abraham Lincoln in Maryland during the American Civil War. Lincoln had received word that anti-war Maryland officials intended to destroy the railroad tracks between Annapolis and Philadelphia, which was a vital supply line for the army preparing to fight the south. (Indeed, soon after, the Maryland legislature would simultaneously vote to stay in the Union and to close these rail lines, in an apparent effort to prevent war between its northern and southern neighbors.) Lincoln did not issue a sweeping order; it only applied to the Maryland route. Lincoln chose to suspend the writ over a proposal to bombard Baltimore, favored by his General - in - Chief Winfield Scott. Lincoln was also motivated by requests by generals to set up military courts to rein in his political opponents, "Copperheads, '' or Peace Democrats, so named because they did not want to resort to war to force the southern states back into the Union, as well as to intimidate those in the Union who supported the Confederate cause. Congress was not yet in session to consider a suspension of the writs; however, when it came into session it failed to pass a bill favored by Lincoln to sanction his suspensions. During this period one sitting U.S. Congressman from the opposing party, as well as the mayor, police chief, entire Board of Police, and the city council of Baltimore were arrested without charge and imprisoned indefinitely without trial.
Lincoln 's action was rapidly challenged in court and overturned by the federal circuit court in Maryland (led by the Chief Justice of the Supreme Court, Roger B. Taney) in Ex Parte Merryman. Chief Justice Taney ruled the suspension unconstitutional, stating that only Congress could suspend habeas corpus. Lincoln and his Attorney General Edward Bates not only ignored the Chief Justice 's order, but when Lincoln 's dismissal of the ruling was criticized in an editorial by prominent Baltimore newspaper editor Frank Key Howard, they had the editor also arrested by federal troops without charge or trial. Ironically, the troops imprisoned Howard, who was Francis Scott Key 's grandson, in Fort McHenry, which, as he noted, was the same fort where the Star Spangled Banner had been waving "o'er the land of the free '' in his grandfather 's song. (In 1863, Howard wrote about his experience as a "political prisoner '' at Fort McHenry in the book Fourteen Months in the American Bastille; two of the publishers selling the book were then arrested.)
When Congress convened in July 1861, it failed to support Lincoln 's unilateral suspension of habeas corpus. A joint resolution was introduced into the Senate to approve of the president 's suspension of the writ of habeas corpus, but filibustering by Senate Democrats, who did not support it, and opposition to its imprecise wording by Sen. Lyman Trumbull prevented a vote on the resolution before the end of the first session, and the resolution was not taken up again. Sen. Trumbull himself introduced a bill to suspend habeas corpus, but it failed to come to a vote before the end of the first session.
Shortly thereafter, on September 17, 1861, the day the Maryland legislature was to reconvene, Lincoln imprisoned one third of the members of the Maryland General Assembly without charges or hearings in further defiance of the Chief Justice 's ruling. Thus, the legislative session had to be cancelled.
On February 14, 1862, the war was firmly in progress and Lincoln ordered most prisoners released, putting an end to court challenges for the time being. He again suspended habeas corpus on his own authority in September that same year, however, in response to resistance to his calling up of the militia.
When Congress met again in December 1862, the House of Representatives passed a bill indemnifying the president for his suspension of habeas corpus. The Senate amended the bill, and the compromise reported out of the conference committee altered it to remove the indemnity and to suspend habeas corpus on Congress 's own authority. That bill, the Habeas Corpus Suspension Act, was signed into law March 3, 1863. Lincoln exercised his powers under it in September, suspending habeas corpus throughout the Union in any case involving prisoners of war, spies, traitors, or military personnel. The suspension of habeas corpus remained in effect until Andrew Johnson revoked it on December 1, 1865.
General Ambrose E. Burnside had former - Congressman Clement Vallandigham arrested in May 1863 for continuing to express sympathy for the Confederate cause after having been warned to cease doing so. Vallandigham was tried by a military tribunal and sentenced to two years in a military prison. Lincoln quickly commuted his sentence to banishment to the Confederacy. Vallandigham appealed his sentence, arguing that the Enrollment Act did not authorize his trial by a military tribunal rather than in ordinary civilian courts, that he was not ordinarily subject to court martial, and that Gen. Burnside could not expand the jurisdiction of military courts on his own authority. The Supreme Court did not address the substance of Vallandigham 's appeal, instead denying that it possessed the jurisdiction to review the proceedings of military tribunals without explicit congressional authorization.
In 1864, Lambdin P. Milligan and four others were accused of planning to steal Union weapons and invade Union prisoner - of - war camps and were sentenced to hang by a military court. However, their execution was not set until May 1865, so they were able to argue the case after the war ended. In Ex parte Milligan (1866), the U.S. Supreme Court decided that Congress 's 1863 suspension of the writ did not empower the President to try and convict citizens before military tribunals where the civil courts were open and operational. This was one of the key Supreme Court cases of the American Civil War that dealt with wartime civil liberties and martial law.
In the Confederacy, Jefferson Davis also suspended habeas corpus and imposed martial law. This was in part to maintain order and spur industrial growth in the South to compensate for the economic loss inflicted by its secession.
Following the end of the Civil War, numerous groups arose in the South to oppose Reconstruction, including the Ku Klux Klan. In response, Congress passed the Enforcement Acts in 1870 -- 71. One of these, the Civil Rights Act of 1871, permitted the president to suspend habeas corpus if conspiracies against federal authority were so violent that they could not be checked by ordinary means. That same year, President Ulysses S. Grant suspended the writ of habeas corpus in nine South Carolina counties; the Act 's sunset clause ended that suspension with the close of the next regular session of Congress.
In response to continuing unrest, the Philippine Commission availed itself of an option in the Philippine Organic Act of 1902, 32 Stat. 692, and on January 31, 1905, requested that Governor - General Luke Edward Wright suspend the writ of habeas corpus. He did so the same day, and habeas corpus was suspended until he revoked his proclamation on October 15, 1905. The suspension gave rise to the United States Supreme Court case Fisher v. Baker, 203 U.S. 174 (1906).
Immediately following the attack on Pearl Harbor, the governor of Hawaii, Joseph Poindexter, invoked the Hawaiian Organic Act, 31 Stat. 141 (1900), suspended habeas corpus, and declared martial law. Hawaii was governed by Lieutenant Generals Walter Short, Delos Emmons, and Robert C. Richardson, Jr. for the remainder of the war. In Duncan v. Kahanamoku, 327 U.S. 304 (1946), the United States Supreme Court held that the declaration of martial law did not permit the trial of civilians in military tribunals for offenses unrelated to the military (in this case, public drunkenness).
In 1942, eight German saboteurs, including two U.S. citizens, who had secretly entered the United States to attack its civil infrastructure as part of Operation Pastorius were convicted by a secret military tribunal set up by President Franklin D. Roosevelt. In Ex parte Quirin (1942), the U.S. Supreme Court decided that the writ of habeas corpus did not apply, and that the military tribunal had jurisdiction to try the saboteurs, due to their status as unlawful combatants.
The period of martial law in Hawaii ended in October 1944. It was held in Duncan v. Kahanamoku (1946) that although the initial imposition of martial law in December 1941 may have been lawful, due to the Pearl Harbor attack and threat of imminent invasion, by 1944 the imminent threat had receded and civilian courts could again function in Hawaii. The Organic Act therefore did not authorize the military to continue to keep civilian courts closed.
After the end of the war, several German prisoners held in American - occupied Germany petitioned the District Court for the District of Columbia for a writ of habeas corpus. In Johnson v. Eisentrager (1950) the U.S. Supreme Court decided that the American court system had no jurisdiction over German war criminals who had been captured in Germany, and had never entered U.S. soil.
In 1996, following the Oklahoma City bombing, Congress passed (91 -- 8 in the Senate, 293 -- 133 in the House) and President Clinton signed into law the Antiterrorism and Effective Death Penalty Act of 1996 (AEDPA). The AEDPA was intended to "deter terrorism, provide justice for victims, provide for an effective death penalty, and for other purposes. '' The AEDPA introduced one of the few limitations on habeas corpus. For the first time, its Section 101 set a statute of limitations of one year following conviction for prisoners to seek the writ. The Act also limits the power of federal judges to grant relief unless the state court 's adjudication of the claim resulted in a decision that was contrary to, or involved an unreasonable application of, clearly established federal law as determined by the Supreme Court, or was based on an unreasonable determination of the facts in light of the evidence presented in the state court proceeding.
It barred second or successive petitions generally but with several exceptions. Petitioners who had already filed a federal habeas petition were required first to secure authorization from the appropriate United States Court of Appeals, to ensure that such an exception was at least facially made out.
The November 13, 2001, Presidential Military Order purported to give the President of the United States the power to detain non-citizens suspected of connection to terrorists or terrorism as enemy combatants. As such, that person could be held indefinitely, without charges being filed against him or her, without a court hearing, and without legal counsel. Many legal and constitutional scholars contended that these provisions were in direct opposition to habeas corpus, and the United States Bill of Rights and, indeed, in Hamdi v. Rumsfeld (2004) the U.S. Supreme Court re-confirmed the right of every American citizen to access habeas corpus even when declared to be an enemy combatant. The Court affirmed the basic principle that habeas corpus could not be revoked in the case of a citizen.
In Hamdan v. Rumsfeld (2006) Salim Ahmed Hamdan petitioned for a writ of habeas corpus, arguing that the military commissions set up by the Bush administration to try detainees at Guantanamo Bay "violate both the UCMJ and the four Geneva Conventions. '' In a 5 - 3 ruling the Court rejected Congress 's attempts to strip the court of jurisdiction over habeas corpus appeals by detainees at Guantánamo Bay. Congress had previously passed the Department of Defense Appropriations Act, 2006, which stated in Section 1005 (e), "Procedures for Status Review of Detainees Outside the United States '':
(1) Except as provided in section 1005 of the Detainee Treatment Act of 2005, no court, justice, or judge shall have jurisdiction to hear or consider an application for a writ of habeas corpus filed by or on behalf of an alien detained by the Department of Defense at Guantanamo Bay, Cuba.
(2) The jurisdiction of the United States Court of Appeals for the District of Columbia Circuit on any claims with respect to an alien under this paragraph shall be limited to the consideration of whether the status determination... was consistent with the standards and procedures specified by the Secretary of Defense for Combatant Status Review Tribunals (including the requirement that the conclusion of the Tribunal be supported by a preponderance of the evidence and allowing a rebuttable presumption in favor of the Government 's evidence), and to the extent the Constitution and laws of the United States are applicable, whether the use of such standards and procedures to make the determination is consistent with the Constitution and laws of the United States.
On September 29, the U.S. House and Senate approved, by a vote of 65 - 34, the Military Commissions Act of 2006, a bill which suspended habeas corpus for any alien determined to be an "unlawful enemy combatant engaged in hostilities or having supported hostilities against the United States ''. (This was the vote on the bill to approve the military trials for detainees; an amendment to remove the suspension of habeas corpus failed 48 - 51.) President Bush signed the Military Commissions Act of 2006 (MCA) into law on October 17, 2006. With the MCA 's passage, the law broadened the restriction from "alien detained... at Guantanamo Bay '' to:
Except as provided in section 1005 of the Detainee Treatment Act of 2005, no court, justice, or judge shall have jurisdiction to hear or consider an application for a writ of habeas corpus filed by or on behalf of an alien detained by the United States who has been determined by the United States to have been properly detained as an enemy combatant or is awaiting such determination. '' § 1005 (e) (1), 119 Stat. 2742.
The Supreme Court ruled in Boumediene v. Bush that the MCA amounted to an unconstitutional encroachment on habeas corpus rights, and established jurisdiction for federal courts to hear petitions for habeas corpus from Guantanamo detainees tried under the Act. Under the MCA, the law restricted habeas appeals to only those aliens detained as enemy combatants, or awaiting such determination. Left unchanged was the provision that, after such determination is made, it is subject to appeal in federal courts, including a review of whether the evidence warrants the determination. If the status was upheld, then the prisoner 's imprisonment was deemed lawful; if not, then the government could change the prisoner 's status to something else, at which point the habeas restrictions no longer applied.
There is, however, no legal time limit which would force the government to provide a Combatant Status Review Tribunal hearing. Prisoners were, but are no longer, legally prohibited from petitioning any court for any reason before a CSRT hearing takes place.
In January 2007, Attorney General Alberto Gonzales told the Senate Judiciary Committee that in his opinion: "There is no express grant of habeas in the Constitution. There 's a prohibition against taking it away. '' He was challenged by Sen. Arlen Specter, who asked him to explain how it is possible to prohibit something from being taken away without it first being granted. Robert Parry wrote in the Baltimore Chronicle & Sentinel:
Applying Gonzales 's reasoning, one could argue that the First Amendment does n't explicitly say Americans have the right to worship as they choose, speak as they wish or assemble peacefully. Ironically, Gonzales may be wrong in another way about the lack of specificity in the Constitution 's granting of habeas corpus rights. Many of the legal features attributed to habeas corpus are delineated in a positive way in the Sixth Amendment...
The Department of Justice has taken the position in litigation that the Military Commissions Act of 2006 does not amount to a suspension of the writ of habeas corpus. The U.S. Court of Appeals for the D.C. Circuit agreed in a 2 - 1 decision, on February 20, 2007, which the U.S. Supreme Court initially declined to review. The U.S. Supreme Court then reversed its decision to deny review and took up the case in June 2007. In June 2008, the court ruled 5 - 4 that the act did suspend habeas and found it unconstitutional.
On June 7, 2007, the Habeas Corpus Restoration Act of 2007 was approved by the Senate Judiciary Committee with an 11 -- 8 vote split along party lines, with all but one Republican voting against it. Although the Act would restore statutory habeas corpus to enemy combatants, it would not overturn the provisions of the AEDPA which set a statute of limitations on habeas corpus claims from ordinary civilian federal and state prisoners.
On June 11, 2007, a federal appeals court ruled that Ali Saleh Kahlah al - Marri, a legal resident of the United States, could not be detained indefinitely without charge. In a two - to - one ruling by the U.S. Court of Appeals for the Fourth Circuit, the Court held the President of the United States lacks legal authority to detain al - Marri without charge; all three judges ruled that al - Marri is entitled to traditional habeas corpus protections which give him the right to challenge his detainment in a U.S. Court.
In July 2008, the U.S. Court of Appeals for the Fourth Circuit ruled that "if properly designated an enemy combatant pursuant to the legal authority of the President, such persons may be detained without charge or criminal proceedings for the duration of the relevant hostilities. ''
On October 7, 2008, U.S. District Judge Ricardo M. Urbina ruled that 17 Uyghurs, Muslims from China 's northwestern Xinjiang region, must be brought to appear in his court in Washington, DC, three days later: "Because the Constitution prohibits indefinite detentions without cause, the continued detention is unlawful. ''
On January 21, 2009, President Barack Obama issued an executive order regarding the Guantanamo Bay Naval Base and the individuals held there. This order stated that the detainees "have the constitutional privilege of the writ of habeas corpus. ''
Following the December 1, 2011, vote by the United States Senate to reject an NDAA amendment proscribing the indefinite detention of U.S. citizens, the ACLU has argued that the legitimacy of Habeas Corpus is threatened: "The Senate voted 38 - 60 to reject an important amendment (that) would have removed harmful provisions authorizing the U.S. military to pick up and imprison without charge or trial civilians, including American citizens, anywhere in the world... We 're disappointed that, despite robust opposition to the harmful detention legislation from virtually the entire national security leadership of the government, the Senate said ' no ' to the Udall amendment and ' yes ' to indefinite detention without charge or trial. '' The New York Times has stated that the vote leaves the constitutional rights of U.S. citizens "ambiguous, '' with some senators including Carl Levin and Lindsey Graham arguing that the Supreme Court had already approved holding Americans as enemy combatants, and other senators, including Dianne Feinstein and Richard Durbin, asserting the opposite.
On April 20, 2015, a New York Supreme Court justice issued an order to "show cause & writ of habeas corpus '' in a proceeding on behalf of two chimpanzees used in research at the State University of New York at Stony Brook. The justice, Barbara Jaffe, amended her order later in the day by striking the reference to habeas corpus.
Habeas corpus is an action often taken after sentencing by a defendant who seeks relief for some perceived error in his criminal trial. There are a number of such post-trial actions and proceedings, their differences being potentially confusing and thus bearing some explanation. Some of the most common are an appeal to which the defendant has a right, a writ of certiorari, a writ of coram nobis and a writ of habeas corpus.
An appeal to which the defendant has a right cannot be abridged by the court which is, by designation of its jurisdiction, obligated to hear the appeal. In such an appeal, the appellant feels that some error has been made in his trial, necessitating an appeal. A matter of importance is the basis on which such an appeal might be filed: generally appeals as a matter of right may only address issues which were originally raised in trial (as evidenced by documentation in the official record). Any issue not raised in the original trial may not be considered on appeal and will be considered waived via estoppel. A convenient test for whether a petition is likely to succeed on the grounds of error is confirming that the alleged error was raised at trial and is documented in the official record.
A writ of certiorari, otherwise known simply as cert, is an order by a higher court directing a lower court to send the record of a case for review, and is the next logical step in post-trial procedure. In the United States, a writ of cert is usually issued only by the Supreme Court, although some states retain a similar procedure. Unlike the aforementioned appeal, a writ of cert is not a matter of right. A writ of cert must be petitioned for, and the higher court issues such writs on a limited basis, subject to constraints such as time. In another sense, a writ of cert is like an appeal in its constraints; it too may only seek relief on grounds raised in the original trial.
A petition for a writ of error coram nobis or error coram vobis challenges a final judgment in a criminal proceeding. Use of this type of petition varies from jurisdiction to jurisdiction, but is usually limited to situations where it was not possible to raise this issue earlier on direct appeal. These petitions focus on issues outside the original premises of the trial, i.e., issues that require new evidence or those that could not otherwise be raised by direct appeal or writs of cert. These often fall in two logical categories: (1) that the trial lawyer was ineffectual or incompetent or (2) that some constitutional right has been violated.
In 2004, there were about 19,000 non-capital federal habeas corpus petitions filed and there were about 210 capital federal habeas corpus petitions filed in U.S. District Court. The vast majority of these were from state prisoners, not from those held in federal prisons. There are about 60 habeas corpus cases filed in the U.S. Supreme Court 's original jurisdiction each year. The U.S. Courts of Appeal do not have original jurisdiction over habeas corpus petitions.
In 1992, less than 1 % of federal habeas corpus petitions involved death penalty sentences, although 21 % involved life sentences. At that time about 23 % had been convicted of homicide, about 39 % had been convicted of other serious violent crimes, about 27 % had been convicted of serious non-violent crimes, and about 12 % were convicted of other offenses. These are almost exclusively state offenses and thus petitions filed by state prisoners.
Exhaustion of state - court remedies often takes five to ten years after a conviction, so only state prisoners facing longer prison sentences are able to avail themselves of federal habeas corpus rights without facing a summary dismissal for failure to exhaust state remedies. The lack of state remedies to exhaust also means that the timeline for federal death penalty habeas review is much shorter than the timeline for state death penalty habeas review (which can drag on literally for decades).
In 2004, the percentage of federal habeas corpus petitions involving state death sentences was still about 1 % of the total.
About 63 % of issues raised in habeas corpus petitions by state court prisoners are dismissed on procedural grounds at the U.S. District Court level, and about 35 % of the issues are dismissed on the merits, based on the allegations in the petition ("on the merits '' is used loosely here). About 2 % are either "remanded '' to a state court for further proceedings (which poses an interesting problem of federalism -- the federal court usually issues a writ to the state prison to release the prisoner, but only if the state court does not hold a certain proceeding within a certain time), or, far less frequently, resolved favorably to the prisoner on the merits outright. About 57 % of habeas corpus issues dismissed on procedural grounds in 1992 were dismissed for a failure to exhaust state remedies.
Success rates are not uniform, however. James Liebman, Professor of Law at Columbia Law School, stated in 1996 that his study found that when habeas corpus petitions in death penalty cases were traced from conviction to completion of the case that there was "a 40 percent success rate in all capital cases from 1978 to 1995. '' Similarly, a study by Ronald Tabek in a law review article puts the success rate in habeas corpus cases involving death row inmates even higher, finding that between "1976 and 1991, approximately 47 % of the habeas petitions filed by death row inmates were granted. '' Most habeas corpus petitioners in death penalty cases are represented by attorneys, but most habeas corpus petitioners in non-death penalty cases represent themselves. This is because federal funds are not available to non-capital state habeas petitioners to pay for attorneys unless there is good cause, there being no federal right to counsel in such matters. However, in state capital cases, the federal government provides funding for the representation of all capital habeas petitioners.
Thus, about 20 % of successful habeas corpus petitions involve death penalty cases.
These success rates predate the major revisions in habeas corpus law that restricted the availability of federal habeas corpus relief when AEDPA was adopted in 1996. Post-AEDPA, however, the great disparity in success rates remains, with the federal courts ' overturning of state capital cases being a major reason that many states have been unable to carry out a majority of the capital sentences imposed and have long backlogs.
The time required to adjudicate habeas corpus petitions varies greatly based upon factors including the number of issues raised, whether the adjudication is on procedural grounds or on the merits, and the nature of the claims raised.
In 1992, U.S. District Courts took an average of two and a half years to adjudicate habeas corpus petitions in death penalty cases raising multiple issues that were resolved on the merits, about half of that time - length for other multiple issue homicide cases, and about nine months in cases resolved on procedural grounds.
AEDPA was designed to reduce the disposition times of federal habeas corpus petitions. But AEDPA has had little impact in non-capital cases, where a majority of cases are dismissed on procedural grounds, very few prisoners prevail, and most prisoners are not represented by attorneys. The disposition time in capital cases actually increased 250 % from the time of AEDPA 's passage to 2004.
In 1991, the average number of federal habeas corpus petitions filed in the United States was 14 per 1,000 people in state prison, but this ranged greatly from state to state from a low of 4 per 1,000 in Rhode Island to a high of 37 per 1,000 in Missouri.
The Antiterrorism and Effective Death Penalty Act of 1996 (AEDPA) produced a brief surge in the number of habeas corpus filings by state prisoners, as deadlines imposed by the act encouraged prisoners to file sooner than they otherwise might have, but this had run its course by 2000, and by 2004 habeas corpus petition filing rates per 1,000 prisoners were similar to pre-AEDPA filing rates.
There was a temporary surge in habeas corpus petitions filed by federal prisoners in 2005 as a result of the Booker decision by the U.S. Supreme Court.
|
what is the predominant religion in the ukraine | Religion in Ukraine - wikipedia
Religion in Ukraine (Razumkov 2016 survey)
Religion in Ukraine is diverse, with a majority of the population adhering to Christianity. A 2016 survey conducted by the Razumkov Centre found that 70 % of the population declared themselves believers. About 65.4 % of the population declared adherence to various types of Orthodoxy (25 % Orthodoxy of the Kiev Patriarchate, 21.2 % just Orthodox, 15 % Orthodoxy of the Moscow Patriarchate, 1.8 % Ukrainian Autocephalous Orthodox Church, and 2 % other types of Orthodoxy), 7.1 % just Christians, 6.5 % Greek Rite Catholics, 1.9 % Protestants, 1.1 % Muslims and 1.0 % Latin Rite Catholics. Judaism and Hinduism were each the religions of 0.2 % of the population. A further 16.3 % declared themselves non-religious or did not identify in the above religions. According to the surveys conducted by Razumkov in the 2000s and early 2010s, such proportions have remained relatively constant throughout the last decade, while the proportion of believers overall has decreased from 76 % in 2014 to 70 % in 2016. (p. 22).
As of 2016, Christianity is particularly strong in the westernmost Ukrainian regions, where most Greek Catholics live alongside the Orthodox population. In central, southern and eastern regions, Christians constitute a smaller proportion of the total population, particularly low in the easternmost region of Donbass. Another religion present in Ukraine besides Christianity is Rodnovery (Slavic native faith), which comprises Ukrainian - and Russian - language communities (some Rodnover organizations call the religion Православ'я Pravoslavya, "Orthodoxy '', thus functioning in homonymy with Christian Orthodox churches). Hinduism has been spread since the collapse of the Soviet Union by Indian missionaries and by the International Society for Krishna Consciousness, and is particularly present in the Donbass region. Crimean Tatars professing Islam represent a significant part of the population in Crimea, which prior to 2014 was a subject of Ukraine but has been occupied by Russia since that year. As of 2016, excluding Crimea, where Muslims formed 15 % of the population in 2013, only Donbass maintains a larger community of Muslims compared to other Ukrainian regions (6 %).
A 2006 survey by the same Razumkov Center found that 62.5 % of all respondents were unaware of their religious affiliation, not religious or not affiliated to any religious body, 33.6 % were Christians (26.8 % Orthodox, 5.9 % Catholics, and 0.9 % Protestants), 0.1 % were Jewish, and 3.8 % were members of other religions.
Since before the outbreak of the War in Donbass in 2014, but even more violently so from that year onwards, there has been unrest between pro-Ukrainian and pro-Russian religious groups in the country.
In pre-historic times and in the early Middle Ages, the territories of present - day Ukraine supported different tribes practising their traditional pagan religions (though note for example the Tengrism of Old Great Bulgaria in the Ukrainian lands in the 7th century CE). Byzantine - rite Christianity first became prominent about the turn of the first millennium. (Later traditions and legends relate that in the first century CE the Apostle Andrew himself had visited the site where the city of Kiev would later arise.)
In the 10th century the emerging state of Kievan Rus ' came increasingly under the cultural influence of the Byzantine Empire. The first recorded Rus ' convert to Eastern Orthodoxy, the Princess Saint Olga, visited Constantinople in 945 or 957. In the 980s, according to tradition, Olga 's grandson, Knyaz (Prince) Vladimir had his people baptised in the Dnieper River. This began a long history of the dominance of Eastern Orthodoxy in Ruthenia, a religious ascendancy that would later influence both Ukraine and Russia. Domination of "Little Russia '' by Moscow (from the 17th century onwards) and by Saint Petersburg (from 1721) eventually led to the decline of Uniate Catholicism (officially founded in 1596) in the Ukrainian lands under Tsarist control.
Judaism has existed in the Ukrainian lands for approximately 2000 years: Jewish traders appeared in Greek colonies. After the 7th century Judaism influenced the neighbouring Khazar Khaganate. From the 13th century Ashkenazi Jewish presence in Ukraine increased significantly. In the 18th century a new teaching of Judaism originated and became established in the Ukrainian lands - Hasidism.
The Golden Horde (which adopted Islam in 1313) and the Sunni Ottoman Empire (which conquered the Ukrainian littoral in the 1470s) brought Islam to their subject territories in present - day Ukraine. Crimean Tatars accepted Islam as the state religion (1313 -- 1502) of the Golden Horde, and later ruled as vassals of the Ottoman Empire (until the late 18th century).
During the period of Soviet rule (c. 1917 -- 1991) the governing Soviet authorities officially promoted atheism and taught it in schools, while promoting various levels of persecution of religious believers and of their organizations. Only a small fraction of people remained official church - goers in that period, and the number of non-believers increased.
The 20th century saw schisms within Eastern Orthodoxy in Ukrainian territory.
As of 2016, according to a survey held by the Razumkov Center, 70 % of the total respondents declared themselves to be believers, while 10.1 % were uncertain whether they believed or not, 7.2 % were uninterested in beliefs, 6.3 % were unbelievers, 2.7 % were atheists, and a further 3.9 % found it difficult to answer the question. Of the total respondents, about 65.4 % of the population declared themselves adherents of various types of Orthodoxy (25 % Orthodoxy of the Kievan Patriarchate, 21.2 % just Orthodox, 15 % Orthodoxy of the Moscow Patriarchate, 1.8 % Ukrainian Autocephalous Orthodox Church, and 2 % other types of Orthodoxy), 7.1 % just Christians, 6.5 % Greek Rite Catholics, 1.9 % Protestants, 1.1 % Muslims and 1.0 % Latin Rite Catholics. Judaism and Hinduism were the religions of 0.2 % of the population each. A further 16.3 % of the population were non-believers or believed in some other religion.
Among those Ukrainians who declared themselves Orthodox believers, 38.1 % declared themselves members of the Ukrainian Orthodox Church of the Kievan Patriarchate, while 23.0 % declared themselves members of the Ukrainian Orthodox Church of the Moscow Patriarchate. A further 2.7 % were members of the Ukrainian Autocephalous Orthodox Church. Among the remaining Orthodox Ukrainians, 32.3 % declared themselves "just Orthodox '', without affiliation to any patriarchate, while a further 3.1 % declared that they "did not know '' which patriarchate or Orthodox church they belonged to.
It is important to note that the Orthodox Church of the Kievan Patriarchate is considered schismatic, doctrinally errant and ethnophyletic by the Orthodox Church of the Moscow Patriarchate (officially called the Ukrainian Orthodox Church, which is the Ukrainian branch of the Russian Orthodox Church) and is not recognised as part of the broader Eastern Orthodox Church.
A February 2015 survey by Razumkov Centre, SOCIS, Rating and KIIS gathered data on religious affiliation at the oblast level.
As of 2016, 81.9 % of the population of Ukraine declared to believe in Christianity.
According to the same survey, 65.4 % of the total population adhered to Orthodoxy.
Orthodoxy is stronger in central (76.7 %) and southern Ukraine (71.0 %), while it comprises about two thirds of the total population in eastern Ukraine (63.2 %), and a particularly low proportion of the population in western Ukraine (57.0 %) and the Donbass (50.6 %).
As of 2016 the Ukrainian Orthodox Church of the Moscow Patriarchate has 12,334 officially registered churches, far more than any other religious body in Ukraine. The church is headed by the Metropolitan of Kiev and all Ukraine, Onuphrius (Berezovsky), and it predominantly uses Church Slavonic for its services.
The Ukrainian Orthodox Church of the Kievan Patriarchate was formed after the declaration of independence of Ukraine in 1991, and has been headed since 1995 by Patriarch Filaret (Denysenko) with the title Patriarch of Kiev and all Rus - Ukraine, who was earlier the Russian Orthodox Metropolitan of Kiev and all Ukraine. The church claims direct lineage to the Kievan Metropolia of Petro Mohyla. As of 2016, the church has 4,921 registered communities. They use both Ukrainian and common Slavonic as liturgical languages.
The Ukrainian Autocephalous Orthodox Church was founded in 1919 in Kiev, banned during the Soviet era, and then legalized in 1989. As of 2016, it is constituted by 1,217 communities divided in two branches.
In the interest of the possible future unification of the country 's Orthodox churches, it did not name a patriarch to succeed the late Patriarch Dmitriy. The Autocephalous Church was formally headed in the country by Metropolitan Methodij of Ternopil and Podil; however, the large eparchies of Kharkiv - Poltava, Lviv, Rivne - Volyn, and Tavriya have officially broken relations with Methodij and have asked to be placed under the direct jurisdiction of the Istanbul - based Ecumenical Patriarch Bartholomew. The church uses Ukrainian as its liturgical language.
There are also communities belonging to the Russian Orthodox Old - Rite Church and other Old Believers, to the Russian Orthodox Church Outside Russia, to the Ruthenian Orthodox Church, to various branches of the True Orthodox Church - Catacombism (including the Ruthenian True Orthodox Church, the Ukrainian True Orthodox Church and the Church of the Goths), to the Romanian Orthodox Church (Metropolis of Bessarabia), to the Ukrainian Autocephalous Orthodox Church Canonical, and to a variety of other minor Christian Orthodox churches.
Adherents of Oriental Orthodox Christianity in Ukraine are mainly ethnic Armenians. Historical ties between the peoples of Ukraine and Armenia have resulted in a significant Armenian diaspora presence in Ukraine throughout history and up to modern times. Most ethnic Armenians in Ukraine are adherents of the Armenian Apostolic Church, one of the main churches of Oriental Orthodoxy, which is distinguished from Eastern Orthodoxy by its miaphysite christology. In spite of those theological differences, relations between the Armenian Apostolic Church and the various Eastern Orthodox churches in Ukraine are friendly. There is an Armenian eparchy (diocese) in Ukraine, centered at the Armenian Cathedral of Lviv, and there are also many Armenian churches and other monuments on the territory of Ukraine.
Byzantine Rite Catholicism is the religion of 6.5 % of the population of Ukraine as of 2016. This church is largely concentrated in western Ukraine, where it accounts for a significant proportion of the population (29.9 %). Latin Rite Catholicism, by contrast, is the religion of 1.0 % of the population of Ukraine, mostly in western (1.4 %) and central (1.9 %) regions. Catholicism is largely absent in eastern Ukraine and non-existent in Donbass.
As of 2016, there are 4,733 registered Catholic churches, among which 3,799 belong to the two Byzantine Rite churches, 933 to the Latin Church, and one to the Armenian Catholic Church.
The Ukrainian Greek Catholic Church traditionally constituted the second largest group of believers after the Christian Orthodox churches. The Union of Brest formed the Church in 1596 to unify Orthodox and Catholic believers. Outlawed by the Soviet Union in 1946 and legalized in 1989, the church was for forty - three years the single largest banned religious community in the world. Major Archbishop Sviatoslav Shevchuk is the present head of the Ukrainian Greek Catholic Church. The church uses Ukrainian as its liturgical language.
The Church of the Latin Rite is traditionally associated with historical pockets of citizens of Polish ancestry who lived mainly in the central and western regions. It uses Polish, Latin, Ukrainian and Russian as liturgical languages.
Main concentrations of the Ruthenian Catholic Church are in Trans - Carpathia near the Hungarian border. This community has multiple ties in Hungary, Slovakia and the United States.
The Armenian Catholic Church has a very small presence. As of 2016 there is only one officially registered church belonging to Armenian Catholics.
As of 2016, Protestants make up 1.9 % of the population of Ukraine, with a strong concentration in western Ukraine (3.6 %). In the country there are communities of Evangelicalism, Baptists, Charismatic Christianity, as well as Methodists, Mennonites, Lutherans, Presbyterians, and others. There is also a Sub-Carpathian Reformed Church with about 140,000 members, which is one of the earliest Protestant communities in the country. The Embassy of God of Sunday Adelaja maintains a significant presence throughout the country, as do other neopentecostal groups.
As of 2016, there are 2,973 Evangelical churches, 2,853 churches of the Baptists, 1,082 Seventh - day Adventist churches, 128 Calvinist churches, 79 Lutheran churches, 1,337 churches of Charismatic Christianity, and 1,347 other organizations belonging to the Protestant spectrum (including 928 Jehovah 's Witnesses ' halls and 44 Mormon congregations). In total, as of 2016 there are 9,799 registered Protestant groups in Ukraine.
As of 2016, there are 29 registered communities belonging to the Church of the East.
Jehovah 's Witnesses claim to have 265,985 adherents, as reported in the movement 's 2013 Yearbook. In 2010 the Church of Jesus Christ of Latter - day Saints (Mormons) dedicated its Kyiv Ukraine Temple, and in 2012 claimed a membership more than 11,000 in 57 congregations in Ukraine.
As of 2016, Islam is the religion of 1.1 % of the population of Ukraine. Muslims are mostly concentrated in Donbass, where they make up 6.0 % of the population. In the same year, there were 229 registered Islamic organizations. In Crimea, which was incorporated by Russia in 2014, Crimean Tatar Muslims make up about 15 % of the population. At various periods, a major part of the southern steppes of modern Ukraine was inhabited by Turkic peoples, most of whom had been Muslims since the fall of the Khazar Khaganate.
The Crimean Tatars are the only indigenous Muslim ethnic group in the country. The Nogays, another Muslim group who lived in the steppes of southern Ukraine, emigrated to Turkey in the 18th and 19th centuries. In addition, there are Muslim communities in all major Ukrainian cities, representing Soviet - era migrants from Muslim backgrounds. There are approximately 150 mosques in Ukraine. Many mosques use the Crimean Tatar language, Arabic, Azeri, Tatar and Russian.
There is no single administrative centre; instead, there are five: the Spiritual Administration of Crimea Muslims (DUMK) - Crimean Tatars; the Spiritual Administration of Ukraine Muslims (DUMU) - peoples of the Caucasus, Crimean and Volga Tatars, Pakistanis, Afghans, Arabs, Russians, Ukrainians; the Spiritual Centre of Ukraine Muslims (DCUM) - Volga Tatars; the Kiev muftiat - Kazan and Volga Tatars; and UMMA - Arabs, Uzbeks, Russians, Ukrainians.
The size of the Jewish population of Ukraine has varied over time. Jews are primarily an ethnicity, closely linked with the religion of Judaism. Jews in Ukraine are estimated to number between 100,000 and 300,000. However, ethnic Jews may be irreligious or practise religions other than Judaism. It is estimated that only 35 - 40 % of the Jewish population of Ukraine is religious. Most observant Jews follow Orthodox Judaism, but there are also communities of Chabad - Lubavitch as well as Reform, Conservative and Reconstructionist Judaism.
Judaic congregations use Russian, Hebrew, Yiddish and Ukrainian languages. As of 2016, 0.2 % of the population of Ukraine was found to be constituted by Jews believing in Judaism. There are, in the same year, 271 officially registered Jewish religious communities.
Hinduism is a minority faith in Ukraine. The International Society for Krishna Consciousness managed to propagate the Hindu faith through their missionary activities. As of 2016, Hindu believers constituted 0.2 % of the population of Ukraine, with a slightly higher proportion in Donbass (0.6 %). In the same year, there were 85 Hindu, Hindu - inspired and other Eastern religions - inspired organizations in the country, among which 42 are Krishna Consciousness congregations.
The Slavic native faith (Rodnovery, Ukrainian: Рідновірство Ridnovirstvo, Рідновір'я Ridnovirya or Рiдна Вiра Ridna Vira; otherwise called Православ'я Pravoslavya -- "Orthodoxy '') is represented in Ukraine by numerous organizations. As of 2016 there are 138 registered communities divided between the Church of the Native Ukrainian National Faith (Рідна Українська Національна Віра, RUNVira) -- 72 churches, the Ancestral Fire of Native Orthodoxy (Родового Вогнища Рідної Православної Віри) -- 21 churches, the Church of the Ukrainian Gentiles (Церкви Українських Язичників) -- 7 churches, the Federation of Ukrainian Rodnovers (Об'єднання Рідновірів України) -- 6 churches, and other organizations -- 32 churches. The Federation of Ukrainian Rodnovers was founded in 1998 by Halyna Lozko and has chapters in Kiev, Kharkiv, Odessa, Boryspil, Chernihiv, Mykolaiv, Lviv and Yuzhnoukrainsk. There are many other unregistered groups and federations, for instance the Ancestral Fire of the Slavic Native Faith (Родове Вогнище Слов'янської Рідної Віри), the Wrath of God Native Orthodox Faith, the North Caucasian Scythian Regional Fire, the Order of the Knights of the Solar God and the Rodoliubye Russian Rodnover Community.
Lev Sylenko founded the Church of the Native Ukrainian National Faith (RUNVira) in 1966 in Chicago, United States, and only opened their first temple in the mother country of Ukraine after the breakup of the Soviet Union in 1991. The current headquarters of RUNVira is in Spring Glen, New York, United States. The doctrine of the Church of the Native Ukrainian National Faith, "Sylenkoism '' or "Dazhbogism '', is monist and centered around the god Dazhbog.
Sociologists estimated that there were between 1,000 and 95,000 Rodnovers (0.2 %) in Ukraine in the early 2000s. Since 1999, the Book of Veles, a central philosophical text for many Rodnover churches, has been taught in Ukrainian school curricula.
Rodnovery has taken a significant role in the War in Donbass. The Azov Battalion, fighting for Kiev, is mainly composed of Ukrainian Rodnovers. It has been reported that it has been collaborating with the Kievan Patriarchate and taking over its churches. On the other side of the spectrum, Russian Rodnovers who fight for Moscow have organized into battalions such as the Svarog Battalion (of the Vostok Brigade), the Svarozhich Battalion (Slavic Unification and Revival Battalion), and the Rusich Company. Donbass has been documented as being a stronghold of Rodnovery; there are Russian Rodnover organizations which are reorganizing local villages and society according to traditional Indo - European trifunctionalism (according to which males are born to play one out of three roles in society, whether priests, warriors or farmers).
A group of worshippers of the gods according to the Roman way, called Sarmatia (the Roman name for modern - day Ukraine), is building a Temple of Jupiter Perennus or Jupiter Perunnus (Perun) in the city of Poltava. The site was attacked and desecrated by Christians in 2011, but construction work continues to the present.
As of 2016, there were 241 officially registered churches belonging to various new religious movements (including the aforementioned Hindu - inspired ones, 11 congregations of the Bahá'í Faith, six congregations of the Church of the Last Testament -- Vissarionism, and others), 58 registered Buddhist groups, and various registered churches for minority ethnicities -- including two Chinese Taoist churches, one Korean Methodist church, four Jewish Karaite churches, eight churches for Christian Jews and 35 churches of Messianic Judaism.
In December 1996, the All - Ukrainian Council of Churches and Religious Organizations was formed with the objective of uniting around 90 - 95 % of the religious communities of Ukraine. Since the end of 2003, the Council of Representatives of the Christian Churches of Ukraine has existed in parallel to the council to promote the principles of Christianity and religious freedom in Ukraine. Affiliation with either or both of the assemblies is voluntary.
In 2007, the council included representatives of 19 organizations, while in 2013 it included only 18. The Council of Christian Churches included representatives from 9 churches.
Cathedral of the Assumption, Kiev (1078, rebuilt in 2000)
St. Michael's Golden - Domed Cathedral, Kiev (1113, rebuilt in 1999)
Church of Our Lady Pyrohoshcha, Kiev (1132, rebuilt in 1997 -- 1998)
Church of St. Nicholas, Kiev (1899 -- 1909)
Trinity Cathedral, Chernihiv (1695)
Trinity Cathedral, Sumy (1913)
Latin Cathedral, Lviv (approx. 1493)
St. George 's Cathedral, Lviv (1760)
Church of St. Elizabeth, Lviv (1903 -- 1911)
Church of the Nativity, Ternopil (1602 -- 1608)
Holy Resurrection Cathedral, Ivano - Frankivsk (1729 -- 1763)
Transfiguration Cathedral, Vinnytsia (1758)
Church of St. Martin, Mukachevo (1904)
Church of the Nativity of the Blessed Virgin, Stryi (1425, rebuilt in 1891).
Saint Bartholomew church, Drohobych (15th century, rebuilt in 1906 -- 1913)
Church of St. Stanislaus, Chortkiv (1619, rebuilt in the early 20th century)
Church of St. Anna, Bar (1811)
Church of the Assumption of the Heart of Jesus, Chernivtsi (1892 -- 1894)
Neo-Gothic church of St. Anne, Ozeriany (1875)
Central Lutheran Cathedral Ukraine St. Paul, Odesa (1827)
Church of St. Nicholas, Dnipro (1895)
Church of the Assumption of the Blessed Virgin Mary, Otynia (1905 -- 1918)
Evangelical Presbyterian Church, Odesa (1896)
|
what is the last thing amina sees before dying | Amina - wikipedia
Amina (also Aminatu; d. 1610) was a Hausa Muslim Warrior Queen of Zazzau (now Zaria), in what is now north west Nigeria. She is the subject of many legends, but is believed by historians to have been a real ruler. There is controversy among scholars as to the date of her reign, one school placing her in the mid-15th century, and a second placing her reign in the mid to late 16th century.
The earliest source to mention Amina is Muhammed Bello 's history Infaq al - Maysur, composed around 1836. He claims that she was "the first to establish government among them, '' and that she forced Katsina, Kano and other regions to pay tribute to her. Bello, unfortunately, provided no chronological details about her. She is also mentioned in the Kano Chronicle, a well - regarded and detailed history of the city of Kano, composed in the late 19th century but incorporating earlier documentary material. According to this chronicle, she was a contemporary of Muhammad Dauda, who ruled from 1421 -- 38; Amina conquered as far as Nupe and Kwarafa, collected tribute from far and wide and ruled for 34 years. A number of scholars accept this information and date her reign to the early to mid-15th century.
There is also a local chronicle of Zaria itself, written in the 19th century (it goes up to 1902) and published in 1910, that gives a list of the rulers and the duration of their reigns. Amina is not mentioned in this chronicle, but oral tradition in the early 20th century held her to be the daughter of Bakwa Turunku, whose reign is dated by the chronicle from 1492 -- 1522, and on this basis some scholars date her reign to the early 16th century. Abdullahi Smith, using similar discrepancies, places her reign after 1576.
More recent oral tradition has a series of lively stories about the queen, and these have found their way into popular culture. Among them: Amina was a fierce warrior and loved fighting. As a child, her grandmother Marka, the favorite wife of her grandfather Sarkin Nohir, once caught her holding a dagger. What shocked Marka was not that Amina held the dagger, but that she held it exactly as a warrior would. As an adult, she refused to marry for fear of losing power. She helped Zazzau (Zaria) become the center of trade and gain more land. Her mother, Bakwa, died when Amina was 36 years old, leaving her to rule over Zaria.
|
what country sank the lusitania in 1915 and why did this upset the united states | Sinking of the RMS Lusitania - wikipedia
The sinking of the Cunard ocean liner RMS Lusitania occurred on Friday, 7 May 1915 during the First World War, as Germany waged submarine warfare against the United Kingdom which had implemented a naval blockade of Germany. The ship was identified and torpedoed by the German U-boat U-20 and sank in 18 minutes. The vessel went down 11 miles (18 km) off the Old Head of Kinsale, Ireland, killing 1,198 and leaving 761 survivors. The sinking turned public opinion in many countries against Germany, contributed to the American entry into World War I and became an iconic symbol in military recruiting campaigns of why the war was being fought.
Lusitania fell victim to torpedo attack relatively early in the First World War, before tactics for evading submarines were properly implemented or understood. The contemporary investigations in both the United Kingdom and the United States into the precise causes of the ship 's loss were obstructed by the needs of wartime secrecy and a propaganda campaign to ensure all blame fell upon Germany. Argument over whether the ship was a legitimate military target raged back and forth throughout the war as both sides made misleading claims about the ship. At the time she was sunk, she was carrying over 4 million rounds of small - arms ammunition (. 303 caliber), almost 5,000 shrapnel shell casings (for a total of some 50 tons), and 3,240 brass percussion fuses, in addition to 1,266 passengers and a crew of 696. Several attempts have been made over the years since the sinking to dive to the wreck seeking information about precisely how the ship sank, and argument continues to the present day.
When Lusitania was built, her construction and operating expenses were subsidised by the British government, with the provision that she could be converted to an Armed Merchant Cruiser if need be. At the outbreak of the First World War, the British Admiralty considered her for requisition as an armed merchant cruiser, and she was put on the official list of AMCs.
The Admiralty then cancelled their earlier decision and decided not to use her as an AMC after all; large liners such as Lusitania consumed enormous quantities of coal (910 tons / day, or 37.6 tons / hour) and became a serious drain on the Admiralty 's fuel reserves, so express liners were deemed inappropriate for the role when smaller cruisers would do. They were also very distinctive; so smaller liners were used as transports instead. Lusitania remained on the official AMC list and was listed as an auxiliary cruiser in the 1914 edition of Jane 's All the World 's Fighting Ships, along with Mauretania.
At the outbreak of hostilities, fears for the safety of Lusitania and other great liners ran high. During the ship 's first eastbound crossing after the war started, she was painted in a drab grey colour scheme in an attempt to mask her identity and make her more difficult to detect visually. When it turned out that the German Navy was kept in check by the Royal Navy, and their commerce threat almost entirely evaporated, it very soon seemed that the Atlantic was safe for ships like Lusitania, if the bookings justified the expense of keeping them in service.
Many of the large liners were laid up over the autumn and winter of 1914 -- 1915, in part due to falling demand for passenger travel across the Atlantic, and in part to protect them from damage due to mines or other dangers. Among the most recognisable of these liners, some were eventually used as troop transports, while others became hospital ships. Lusitania remained in commercial service; although bookings aboard her were by no means strong during that autumn and winter, demand was strong enough to keep her in civilian service. Economizing measures were taken, however. One of these was the shutting down of her No. 4 boiler room to conserve coal and crew costs; this reduced her maximum speed from over 25 to 21 knots (46 to 39 km / h). Even so, she was the fastest first - class passenger liner left in commercial service.
With apparent dangers evaporating, the ship 's disguised paint scheme was also dropped and she was returned to civilian colours. Her name was picked out in gilt, her funnels were repainted in their traditional Cunard livery, and her superstructure was painted white again. One alteration was the addition of a bronze / gold coloured band around the base of the superstructure just above the black paint.
By early 1915, a new threat to British shipping began to materialise: U-boats (submarines). At first, the Germans only used them to attack naval vessels, and they achieved only occasional -- but sometimes spectacular -- successes. U-boats then began to attack merchant vessels at times, although almost always in accordance with the old cruiser rules. Desperate to gain an advantage on the Atlantic, the German government decided to step up their submarine campaign. On 4 February 1915, Germany declared the seas around the British Isles a war zone: from 18 February, Allied ships in the area would be sunk without warning. This was not wholly unrestricted submarine warfare, since efforts would be taken to avoid sinking neutral ships.
Lusitania was scheduled to arrive in Liverpool on 6 March 1915. The Admiralty issued her specific instructions on how to avoid submarines. Despite a severe shortage of destroyers, Admiral Henry Oliver ordered HMS Louis and Laverock to escort Lusitania, and took the further precaution of sending the Q ship Lyons to patrol Liverpool Bay. The destroyer commander attempted to discover the whereabouts of Lusitania by telephoning Cunard, who refused to give out any information and referred him to the Admiralty. At sea, the ships contacted Lusitania by radio, but did not have the codes used to communicate with merchant ships. Captain Daniel Dow of Lusitania refused to give his own position except in code, and since he was, in any case, some distance from the positions he gave, continued to Liverpool unescorted.
It seems that, in response to this new submarine threat, some alterations were made to Lusitania and her operation. She was ordered not to fly any flags in the war zone; a number of warnings, plus advice, were sent to the ship 's commander to help him decide how best to protect his ship against the new threat; and it also seems that her funnels were most likely painted a dark grey to help make her less visible to enemy submarines. Clearly, there was no hope of disguising her actual identity, since her profile was so well - known, and no attempt was made to paint out the ship 's name at the prow.
Captain Dow, apparently suffering from stress from operating his ship in the war zone, and after a significant "false flag '' controversy, left the ship; Cunard later explained that he was "tired and really ill. '' He was replaced with a new commander, Captain William Thomas Turner, who had previously commanded Lusitania, Mauretania, and Aquitania in the years before the war.
On 17 April 1915, Lusitania left Liverpool on her 201st transatlantic voyage, arriving in New York on 24 April. A group of German -- Americans, hoping to avoid controversy if Lusitania were attacked by a U-boat, discussed their concerns with a representative of the German Embassy. The embassy decided to warn passengers before her next crossing not to sail aboard Lusitania, and on 22 April placed a warning advertisement in 50 American newspapers, including those in New York:
Notice! Travellers intending to embark on the Atlantic voyage are reminded that a state of war exists between Germany and her allies and Great Britain and her allies; that the zone of war includes the waters adjacent to the British Isles; that, in accordance with formal notice given by the Imperial German Government, vessels flying the flag of Great Britain, or any of her allies, are liable to destruction in those waters and that travellers sailing in the war zone on the ships of Great Britain or her allies do so at their own risk. Imperial German Embassy Washington, D.C. 22 April 1915
This warning was printed adjacent to an advertisement for Lusitania 's return voyage. The warning led to some agitation in the press and worried the ship 's passengers and crew.
While many British passenger ships had been called into duty for the war effort, Lusitania remained on her traditional route between Liverpool and New York. She departed Pier 54 in New York on 1 May 1915 on her return trip to Liverpool with 1,959 people aboard. In addition to her crew of 694, she carried 1,265 passengers, mostly British nationals as well as a large number of Canadians, along with 128 Americans. Her First Class accommodations, for which she was quite famous on the North Atlantic run, were booked at just over half capacity at 290. Second Class was severely overbooked with 601 passengers, far exceeding the maximum capacity of 460. While a large number of small children and infants helped reduce the squeeze into the limited number of two - and four - berth cabins, the situation was ultimately rectified by allowing some Second Class passengers to occupy empty First Class cabins. In Third Class, the situation was considered to be the norm for an eastbound crossing, with only 373 travelling in accommodations designed for 1,186.
Captain Turner, known as "Bowler Bill '' for his favourite shoreside headgear, had returned to his old command of Lusitania. He was commodore of the Cunard Line and a highly experienced master mariner, and had relieved Daniel Dow, the ship 's regular captain. Dow had been instructed by his chairman, Alfred Booth, to take some leave, due to the stress of captaining the ship in U-boat infested sea lanes and because of his protestations that the ship should not become an armed merchant cruiser, making her a prime target for German forces. Turner tried to calm the passengers by explaining that the ship 's speed made her safe from attack by submarine. However, Cunard had shut down one of the ship 's four boiler rooms to reduce costs on sparsely subscribed wartime voyages, reducing her top speed from 25.5 to around 22 knots.
Lusitania steamed out of New York at noon on 1 May, two hours behind schedule, because of a last - minute transfer of forty - one passengers and crew from the recently requisitioned Cameronia. Shortly after departure three German - speaking men were found on board hiding in a steward 's pantry. Detective Inspector William Pierpoint of the Liverpool police, who was travelling in the guise of a first - class passenger, interrogated them before locking them in the cells for further questioning when the ship reached Liverpool. Also among the crew was an Englishman, Neal Leach, who had been working as a tutor in Germany before the war. Leach had been interned but later released by Germany. The German embassy in Washington was notified about Leach 's arrival in America, where he met known German agents. Leach and the three German stowaways went down with the ship, but they had probably been tasked with spying on Lusitania and her cargo. Most probably, Pierpoint, who survived the sinking, would already have been informed about Leach.
As the liner steamed across the ocean, the British Admiralty had been tracking the movements of U-20, commanded by Kapitänleutnant Walther Schwieger, through wireless intercepts and radio direction finding. The submarine left Borkum on 30 April, heading north - west across the North Sea. On 2 May she had reached Peterhead and proceeded around the north of Scotland and Ireland, and then along the western and southern coasts of Ireland, to enter the Irish Sea from the south. Although the submarine 's departure, destination, and expected arrival time were known to Room 40 in the Admiralty, the activities of the decoding department were considered so secret that they were unknown even to the normal intelligence division which tracked enemy ships or to the trade division responsible for warning merchant vessels. Only the very highest officers in the Admiralty saw the information and passed on warnings only when they felt it essential.
On 27 March, Room 40 had intercepted a message which clearly demonstrated that the Germans had broken the code used to pass messages to British merchant ships. Cruisers protecting merchant ships were warned not to use the code to give directions to shipping because it could just as easily attract enemy submarines as steer ships away from them. However, Queenstown (now Cobh) was not given this warning and continued to give directions in the compromised code, which was not changed until after Lusitania 's sinking. At this time, the Royal Navy was significantly involved with operations leading up to the landings at Gallipoli, and the intelligence department had been undertaking a program of misinformation to convince Germany to expect an attack on her northern coast. As part of this, ordinary cross-channel traffic to the Netherlands was halted from 19 April and false reports were leaked about troop ship movements from ports on Britain 's western and southern coasts. This led to a demand from the German army for offensive action against the expected troop movements and consequently, a surge in German submarine activity on the British west coast. The fleet was warned to expect additional submarines, but this warning was not passed on to those sections of the navy dealing with merchant vessels. The return of the battleship Orion from Devonport to Scotland was delayed until 4 May and she was given orders to stay 100 miles (160 km) from the Irish coast.
On 5 May, U-20 stopped a merchant schooner, Earl of Lathom, off the Old Head of Kinsale, examined her papers, then ordered her crew to leave before sinking the schooner with gunfire. At 22: 30 that evening, the Royal Navy sent an uncoded warning to all ships -- "Submarines active off the south coast of Ireland '' -- and at midnight an addition was made to the regular nightly warnings, "submarine off Fastnet ''. On 6 May, U-20 fired a torpedo at Cayo Romano, a British steamer from Cuba flying a neutral flag, off Fastnet Rock, narrowly missing her by a few feet. The same day U-20 sank the 6,000 ton steamer Candidate, then failed to get off a shot at the 16,000 ton liner Arabic -- although she kept a straight course, the liner was too fast -- but sank another 6,000 ton British cargo ship flying no flag, Centurion, all in the region of the Coningbeg light ship. The specific mention of a submarine was dropped from the midnight broadcast on 6 -- 7 May, as news of the new sinkings had not yet reached the navy at Queenstown, and it was correctly assumed that there was no longer a submarine at Fastnet.
Captain Turner of Lusitania was given a warning message twice on the evening of 6 May, and took what he felt were prudent precautions. That evening a Seamen 's Charities fund concert took place throughout the ship and the captain was obliged to attend the event in the first - class lounge.
At about 11: 00 on 7 May, the Admiralty radioed another warning to all ships, probably as a result of a request by Alfred Booth, who was concerned about Lusitania: "U-boats active in southern part of Irish Channel. Last heard of twenty miles south of Coningbeg Light Vessel ''. Booth and all of Liverpool had received news of the sinkings, which the admiralty had known about by at least 3: 00 that morning. Turner adjusted his heading northeast, not knowing that this report related to events of the previous day and apparently thinking submarines would be more likely to keep to the open sea, so that Lusitania would be safer close to land. At 13: 00 another message was received, "Submarine five miles south of Cape Clear proceeding west when sighted at 10: 00 am ''. This report was entirely inaccurate as no submarine had been at that location, but gave the impression that at least one submarine had been safely passed.
U-20 was low on fuel and had only three torpedoes left. On the morning of 7 May, visibility was poor and Schwieger decided to head for home. He submerged at 11: 00 after sighting a fishing boat which he believed might be a British patrol and shortly after was passed while still submerged by a ship at high speed. This was the cruiser Juno returning to Queenstown, travelling fast and zig - zagging having received warning of submarine activity off Queenstown at 07: 45. The Admiralty considered these old cruisers highly vulnerable to submarines, and indeed Schwieger attempted to target the ship.
On the morning of 6 May, Lusitania was 750 miles (1,210 km) west of southern Ireland. By 05: 00 on 7 May she reached a point 120 miles (190 km) west south west of Fastnet Rock (off the southern tip of Ireland), where she met the patrolling boarding vessel Partridge. By 06: 00, heavy fog had arrived and extra lookouts were posted. As the ship came closer to Ireland, Captain Turner ordered depth soundings to be made and, at 08: 00, for speed to be reduced to 18 knots, then to 15 knots, and for the foghorn to be sounded. Some of the passengers were disturbed that the ship appeared to be advertising her presence. By 10: 00 the fog began to lift; by noon it had been replaced by bright sunshine over a clear, smooth sea, and speed was increased to 18 knots.
U-20 surfaced again at 12: 45 as visibility was now excellent. At 13: 20 something was sighted and Schwieger was summoned to the conning tower: at first it appeared to be several ships because of the number of funnels and masts, but this resolved into one large steamer appearing over the horizon. At 13: 25 the submarine submerged to a periscope depth of 11 metres and set a course to intercept the liner at her maximum submerged speed of 9 knots. When the ships had closed to 2 miles (3.2 km), Lusitania turned away and Schwieger feared he had lost his target, but she turned again, this time onto a near ideal course to bring her into position for an attack. At 14: 10, with the target at 700 m range, he ordered one gyroscopic torpedo to be fired, set to run at a depth of three metres.
In Schwieger 's own words, recorded in the log of U-20:
Torpedo hits starboard side right behind the bridge. An unusually heavy detonation takes place with a very strong explosive cloud. The explosion of the torpedo must have been followed by a second one (boiler or coal or powder?)... The ship stops immediately and heels over to starboard very quickly, immersing simultaneously at the bow... the name Lusitania becomes visible in golden letters.
U-20 's torpedo officer, Raimund Weisbach, viewed the destruction through the vessel 's periscope and felt the explosion was unusually severe. Within six minutes, Lusitania 's forecastle began to submerge.
On board the Lusitania, Leslie Morton, an eighteen - year - old lookout at the bow, had spotted thin lines of foam racing toward the ship. He shouted, "Torpedoes coming on the starboard side! '' through a megaphone, thinking the bubbles came from two projectiles. The torpedo struck Lusitania under the bridge, sending a plume of debris, steel plating and water upward and knocking lifeboat number five off its davits. "It sounded like a million - ton hammer hitting a steam boiler a hundred feet high, '' one passenger said. A second, more powerful explosion followed, sending a geyser of water, coal, dust and debris high above the deck. Schwieger 's log entries attest that he only launched one torpedo. Some doubt the validity of this claim, contending that the German government subsequently altered the published fair copy of Schwieger 's log, but accounts from other U-20 crew members corroborate it. The entries were also consistent with intercepted radio reports sent to Germany by U-20 once she had returned to the North Sea, before any possibility of an official coverup.
Contemporary illustrations of the attack: a German drawing incorrectly shows the torpedo striking the port side of the ship; an English drawing shows the disputed "second torpedo ''; a third shows Lusitania sinking as Irish fishermen race to the rescue, although the launching of the lifeboats was in fact more chaotic.
At 14: 12, Captain Turner ordered Quartermaster Johnston, stationed at the ship 's wheel, to steer ' hard - a-starboard ' towards the Irish coast, which Johnston confirmed, but the ship could not be steadied on the course and rapidly ceased to respond to the wheel. Turner signalled for the engines to be reversed to halt the ship, but although the signal was received in the engine room, nothing could be done. Steam pressure had collapsed from 195 psi before the explosion to 50 psi and falling afterwards. Lusitania 's wireless operator sent out an immediate SOS, which was acknowledged by a coastal wireless station. Shortly afterward he transmitted the ship 's position, 10 miles (16 km) south of the Old Head of Kinsale. At 14: 14 electrical power failed, plunging the cavernous interior of the ship into darkness. Radio signals continued on emergency batteries, but electric lifts failed, trapping passengers and crew, and bulkhead doors that had been closed as a precaution before the attack could not be reopened to release trapped men.
About one minute after the electrical power failed, Captain Turner gave the order to abandon ship. Water had flooded the ship 's starboard longitudinal compartments, causing a 15 - degree list to starboard.
Lusitania 's severe starboard list complicated the launch of her lifeboats. Ten minutes after the torpedoing, when she had slowed enough to start putting boats in the water, the lifeboats on the starboard side swung out too far to step aboard safely. While it was still possible to board the lifeboats on the port side, lowering them presented a different problem. As was typical for the period, the hull plates of Lusitania were riveted, and as the lifeboats were lowered they dragged on the inch high rivets, which threatened to seriously damage the boats before they landed in the water.
Many lifeboats overturned while loading or lowering, spilling passengers into the sea; others were overturned by the ship 's motion when they hit the water. It has been claimed that some boats, because of the negligence of some officers, crashed down onto the deck, crushing other passengers and sliding down towards the bridge; this has been disputed by passenger and crew testimony. Some crewmen lost their grip on the ropes used to lower the lifeboats, spilling the occupants into the sea; other boats tipped on launch as panicking people jumped into them. Lusitania had 48 lifeboats, more than enough for all the crew and passengers, but only six were successfully lowered, all from the starboard side. Lifeboat 1 overturned as it was being lowered, spilling its original occupants into the sea, but it managed to right itself shortly afterwards and was later filled with people from the water. Lifeboats 9 and 11 managed to reach the water safely with a few people, but both later picked up many swimmers. Lifeboats 13 and 15 also safely reached the water, each overloaded with around 70 people. Finally, Lifeboat 21 reached the water safely and cleared the ship moments before her final plunge. A few of her collapsible lifeboats washed off her decks as she sank and provided flotation for some survivors.
Two lifeboats on the port side cleared the ship as well. Lifeboat 14 was lowered and launched safely, but because the boat plug was not in place, it filled with seawater and sank almost immediately after reaching the water. Later, Lifeboat 2 floated away from the ship with new occupants (its previous ones having been spilled into the sea when they upset the boat) after they removed a rope and one of the ship 's "tentacle - like '' funnel stays. They rowed away shortly before the ship sank.
There was panic and disorder on the decks. Schwieger had been observing this through U-20 's periscope, and by 14: 25, he dropped the periscope and headed out to sea. Later in the war, Schwieger was killed in action when, as commander of U-88, he was chased by HMS Stonecrop, hit a British mine, and sank on 5 September 1917, north of Terschelling. There were no survivors from U-88's sinking.
Illustrations of the sinking show the track of Lusitania and casualties and survivors in the water and in lifeboats, including a painting by William Lionel Wyllie. The second explosion made passengers believe U-20 had torpedoed Lusitania a second time.
Captain Turner was on the deck near the bridge clutching the ship 's logbook and charts when a wave swept upward towards the bridge and the rest of the ship 's forward superstructure, knocking him overboard into the sea. He managed to swim and find a chair floating in the water which he clung to. He survived, having been pulled unconscious from the water after spending three hours there. Lusitania 's bow slammed into the bottom about 100 metres (330 ft) below at a shallow angle because of her forward momentum as she sank. Along the way, some boilers exploded, including one that caused the third funnel to collapse; the remaining funnels collapsed soon after. As he had taken the ship 's logbook and charts with him, Turner 's last navigational fix had been only two minutes before the torpedoing, and he was able to remember the ship 's speed and bearing at the moment of the sinking. This was accurate enough to locate the wreck after the war. The ship travelled about two miles (3 km) from the time of the torpedoing to her final resting place, leaving a trail of debris and people behind. After her bow sank completely, Lusitania 's stern rose out of the water, enough for her propellers to be seen, and went under.
Lusitania sank in only 18 minutes, 11.5 miles (19 km) off the Old Head of Kinsale. It took several hours for help to arrive from the Irish coast, and by the time it did, many in the 52 ° F (11 ° C) water had succumbed to the cold. By the day 's end, 764 passengers and crew from Lusitania had been rescued and landed at Queenstown. Of the 1,959 passengers and crew aboard Lusitania at the time of her sinking, 1,195 had been lost. In the days following the disaster, the Cunard Line offered local fishermen and sea merchants a cash reward for recovering the bodies floating throughout the Irish Sea, some as far away as the Welsh coast. In all, only 289 bodies were recovered, 65 of which were never identified. Many of the victims were buried at either Queenstown, where 148 bodies were interred in the Old Church Cemetery, or the Church of St. Multose in Kinsale, but the bodies of the remaining 885 victims were never recovered.
Two days before, U-20 had sunk Earl of Lathom, but first allowed the crew to escape in boats. According to international maritime law, any military vessel stopping an unarmed civilian ship was required to allow those on board time to escape before sinking it. These conventions had been drawn up before the invention of the submarine and took no account of the severe risk a small vessel, such as a submarine, faced if it gave up the advantage of a surprise attack. Schwieger could have allowed the crew and passengers of Lusitania to take to the boats, but he considered the danger of being rammed or fired upon by deck guns too great. Merchant ships had, in fact, been advised to steer directly at any U-boat that surfaced, and a cash bonus had been offered for any that were sunk, though the advice was carefully worded so as not to amount to an order to ram. Only once during the war was this feat accomplished by a commercial vessel: in 1918 the White Star liner RMS Olympic, sister ship to Titanic, rammed and sank SM U-103 in the English Channel.
According to Bailey and Ryan, Lusitania was travelling without any flag and her name painted over with darkish dye.
One story -- an urban legend -- states that when Lieutenant Schwieger of U-20 gave the order to fire, his quartermaster, Charles Voegele, would not take part in an attack on women and children, and refused to pass on the order to the torpedo room -- a decision for which he was court - martialed and imprisoned at Kiel until the end of the war. This rumour persisted until 1972, when the French daily paper Le Monde published a letter to the editor.
Immediately following the sinking, on 8 May, the local county coroner John Hogan opened an inquest in Kinsale into the deaths of two males and three females whose bodies had been brought ashore by a local boat, Heron. Most of the survivors (and dead) had been taken to Queenstown instead of Kinsale, which was closer. On 10 May Captain Turner gave evidence as to the events of the sinking where he described that the ship had been struck by one torpedo between the third and fourth funnels. This had been followed immediately by a second explosion. He acknowledged receiving general warnings about submarines, but had not been informed of the sinking of Earl of Lathom. He stated that he had received other instructions from the admiralty which he had carried out but was not permitted to discuss. The coroner brought in a verdict that the deceased had drowned following an attack on an unarmed non-combatant vessel contrary to international law. Half an hour after the inquest had concluded and its results given to the press, the Crown Solicitor for Cork, Harry Wynne, arrived with instructions to halt it. Captain Turner was not to give evidence and no statements should be made about any instructions given to shipping about avoiding submarines.
The formal Board of Trade investigation into the sinking was presided over by Wreck Commissioner Lord Mersey and took place in the Westminster Central Hall from 15 -- 18 June 1915 with further sessions at the Westminster Palace Hotel on 1 July and Caxton Hall on 17 July. Lord Mersey had a background in commercial rather than maritime law but had presided over a number of important maritime investigations, including that into the loss of Titanic. He was assisted by four assessors, Admiral Sir Frederick Samuel Inglefield, Lieutenant Commander Hearn and two merchant navy captains, D. Davies and J. Spedding. The Attorney General, Sir Edward Carson, represented the Board of Trade, assisted by the Solicitor General, F.E. Smith. Butler Aspinall, who had previously represented the Board of Trade at the Titanic inquiry, was retained to represent Cunard. A total of 36 witnesses were called, Lord Mersey querying why more of the survivors would not be giving evidence. Most of the sessions were public but two on 15 and 18 June were held in camera when evidence regarding navigation of the ship was presented.
Statements were collected from all the crew. These were all written out for presentation to the inquiry on standard forms in identical handwriting with similar phrasing. Quartermaster Johnston later described that pressure had been placed upon him to be loyal to the company, and that it had been suggested to him it would help the case if two torpedoes had struck the ship, rather than the one which he described. Giving evidence to the tribunal he was not asked about torpedoes. Other witnesses who claimed that only one torpedo had been involved were refused permission to testify. In contrast to his statement at the inquest, Captain Turner stated that two torpedoes had struck the ship, not one. In an interview in 1933, Turner reverted to his original statement that there had been only one torpedo. Most witnesses said there had been two, but a couple said three, possibly involving a second submarine. Clem Edwards, representing the seamen 's union, attempted to introduce evidence about which watertight compartments had been involved but was prevented from doing so by Lord Mersey.
It was during the closed hearings that the Admiralty tried to lay the blame on Captain Turner, their intended line being that Turner had been negligent. The roots of this view began in the first reports about the sinking from Vice-Admiral Coke commanding the navy at Queenstown. He reported that "ship was especially warned that submarines were active on south coast and to keep mid-channel course avoiding headlands also position of submarine off Cape Clear at 10: 00 was communicated by W / T to her ''. Captain Webb, Director of the Trade Division, began to prepare a dossier of signals sent to Lusitania which Turner may have failed to observe. First Sea Lord Fisher noted on one document submitted by Webb for review: "As the Cunard company would not have employed an incompetent man its a certainty that Captain Turner is not a fool but a knave. I hope that Turner will be arrested immediately after the enquiry whatever the verdict ''. First Lord Winston Churchill noted: "I consider the Admiralty 's case against Turner should be pressed by a skilful counsel and that Captain Webb should attend as a witness, if not employed as an assessor. We will pursue the captain without check ''. In the event, both Churchill and Fisher were replaced in their positions before the enquiry because of the failures of the Gallipoli campaign.
Part of the proceedings turned on the question of proper evasive tactics against submarines. It was put to Captain Turner that he had failed to comply with Admiralty instructions to travel at high speed, maintain a zig - zag course and keep away from shore. Naval instructions about zig - zagging were read to the captain, who confirmed that he had received them, though he later added that they did not appear to be as he recollected. This was unsurprising, since the regulations quoted had only been approved on 25 April, after Lusitania 's last arrival in New York, and were not distributed until 13 May, after she sank. Lusitania had slowed to 15 knots at one point because of fog, but had otherwise maintained 18 knots passing Ireland. That was faster than all but nine other ships in the British merchant fleet could achieve and comfortably faster than the submarine. Although he might have achieved 21 knots and had given orders to raise steam ready to do so, he was also under orders to time his arrival at Liverpool for high tide so that the ship would not have to wait to enter port; he therefore chose to travel more slowly. At the time, no ship had been torpedoed while travelling at more than 15 knots. Although the Admiralty instructed ships to keep well offshore and it was claimed that Turner had been only 8 miles (13 km) away, his actual distance when hit was thirteen miles (21 km). In practice, only ships travelling within five miles (8.0 km) of shore were ordinarily censured for being too close.
Turner stated that he had discussed the matter of what course the ship should take, with his two most senior officers, Captain Anderson and Chief Officer Piper, neither of whom survived. The three had agreed that the Admiralty warning of "submarine activity 20 miles (32 km) south of Coningbeg '' effectively overrode other Admiralty advice to keep to ' mid channel ', which was precisely where the submarine had been reported. He had, therefore, ordered the change of course at 12: 40, intending to bring the ship closer to land and then take a course north of the reported submarine.
At one point in the proceedings, Smith attempted to press a point he was making, by quoting from a signal sent to British ships. Lord Mersey queried which message this was, and it transpired that the message in question existed in the version of evidence given to Smith by the Board of Trade Solicitor, Sir Ellis Cunliffe, but not in versions given to others. Cunliffe explained the discrepancy by saying that different versions of the papers had been prepared for use, depending whether the enquiry had been in camera or not, but the message quoted appeared never to have existed. Lord Mersey observed that it was his job to get at the truth, and thereafter became more critical of Admiralty evidence.
On 10 June, just before the hearing, significant changes were made to the Defence of the Realm Act, which made it an offence to collect or publish information about the nature, use, or carriage of "war materials '' for any reason. Previously, this had only been an offence if the information was collected to aid the enemy. This was used to prohibit discussion about the ship 's cargo. The rifle cartridges carried by Lusitania were mentioned during the case, Lord Mersey stating that "the 5,000 cases of ammunition on board were 50 yards away from where the torpedo struck the ship ''.
An additional hearing took place on 1 July, at the insistence of Joseph Marichal, who was threatening to sue Cunard for their poor handling of the disaster. He testified that the second explosion had sounded to him like the rattling of machine gun fire and appeared to be below the second class dining room at the rear of the ship where he had been seated. Information about Marichal 's background was sought out by the British government and leaked to the press so as to discredit him.
Captain Turner, the Cunard Company, and the Royal Navy were absolved of any negligence, and all blame was placed on the German government. Lord Mersey found that Turner "exercised his judgment for the best '' and that the blame for the disaster "must rest solely with those who plotted and with those who committed the crime ''.
Two days after he closed the inquiry, Lord Mersey waived his fees for the case and formally resigned. His last words on the subject were: "The Lusitania case was a damned, dirty business! '' The full report has never been made available to the public. A copy was thought to exist amongst Lord Mersey 's private papers after his death, but has since proved untraceable.
In the United States, 67 claims for compensation were lodged against Cunard, which were all heard together in 1918. Judge Julius Mayer, who was chosen to hear the case, had previously presided over the case brought following the loss of the Titanic, where he had ruled in favour of the shipping company. Mayer was a conservative who was considered a safe pair of hands with matters of national interest, and whose favourite remark to lawyers was to "come to the point ''. The case was to be heard without a jury. The two sides agreed beforehand that no question would be raised regarding whether Lusitania had been armed or carrying troops or ammunition. Thirty - three witnesses who could not travel to the US gave statements in England to Commissioner R.V. Wynne. Evidence produced in open court for the Mersey investigation was considered, but evidence from the British closed sessions was not. The Defence of the Realm Act was invoked so that British witnesses could not give evidence on any subject it covered. Statements had been collected in Queenstown immediately after the sinking by the American Consul, Wesley Frost, but these were not produced.
Captain Turner gave his evidence in Britain and now offered a more spirited defence of his actions. He argued that up until the time of the sinking he had no reason to think that zig - zagging in a fast ship would help; indeed, he had since commanded another ship which was sunk while zig - zagging. His position was supported by evidence from other captains, who said that prior to the sinking of Lusitania no merchant ships zig - zagged. Turner had argued that maintaining a steady course for 30 minutes was necessary to take a four - point bearing and precisely confirm the ship 's position, but on this point he received less support, with other captains arguing that a two - point bearing could have been taken in five minutes and would have been sufficiently accurate.
Many witnesses testified that portholes across the ship had been open at the time of the sinking, and an expert witness confirmed that such a porthole three feet under water would let in four tons of water per minute. Testimony varied on how many torpedoes there had been, and whether the strike occurred between the first and second funnel, or third and fourth. The nature of the official cargo was considered, but experts considered that under no conditions could the cargo have exploded. A record exists that Crewman Jack Roper wrote to Cunard in 1919 requesting expenses for his testimony in accord with the line indicated by Cunard.
Mayer 's judgement was that "the cause of the sinking was the illegal act of the Imperial German Government '', that two torpedoes had been involved, that the captain had acted properly and emergency procedures had been up to the standard then expected. He ruled that further claims for compensation should be addressed to the German government (which eventually paid $2.5 million in 1925).
On 8 May Dr. Bernhard Dernburg, the former German Colonial Secretary, made a statement in Cleveland, Ohio, in which he attempted to justify the sinking of Lusitania. At the time Dernburg was recognised as the official spokesman of the Imperial German government in the United States. Dernburg said that because Lusitania "carried contraband of war '' and also because she "was classed as an auxiliary cruiser '' Germany had had a right to destroy her regardless of any passengers aboard. Dernburg further said that the warnings given by the German Embassy before her sailing, plus the 18 February note declaring the existence of "war zones '' relieved Germany of any responsibility for the deaths of the American citizens aboard. He referred to the ammunition and military goods declared on Lusitania 's manifest and said that "vessels of that kind '' could be seized and destroyed under the Hague rules without any respect to a war zone.
The following day the German government issued an official communication regarding the sinking in which it said that the Cunard liner Lusitania "was yesterday torpedoed by a German submarine and sank '', that Lusitania "was naturally armed with guns, as were recently most of the English mercantile steamers '' and that "as is well known here, she had large quantities of war material in her cargo ''.
Dudley Field Malone, Collector of the Port of New York, issued an official denial to the German charges, saying that Lusitania had been inspected before her departure and no guns were found, mounted or unmounted. Malone stated that no merchant ship would have been allowed to arm itself in the Port and leave the harbour. Assistant Manager of the Cunard Line, Herman Winter, denied the charge that she carried munitions:
She had aboard 4,200 cases of cartridges, but they were cartridges for small arms, packed in separate cases... they certainly do not come under the classification of ammunition. The United States authorities would not permit us to carry ammunition, classified as such by the military authorities, on a passenger liner. For years we have been sending small - arms cartridges abroad on the Lusitania.
The fact that Lusitania had been carrying shell casings and rifle cartridges was not made known to the British public at the time, as it was felt that, although allowed under the regulations of the time, it would be used in German propaganda.
The sinking was severely criticised by and met with disapproval in Turkey and Austria - Hungary, while in the German press, the sinking was deplored by Vorwärts, the daily newspaper of the Social Democratic Party of Germany, and also by Captain Persius, an outspoken naval critic who wrote for the Berliner Tageblatt.
One Catholic Centre Party newspaper, the Kölnische Volkszeitung (de), stated: "The sinking of the giant English steamship is a success of moral significance which is still greater than material success. With joyful pride we contemplate this latest deed of our Navy. It will not be the last. The English wish to abandon the German people to death by starvation. We are more humane. We simply sank an English ship with passengers who, at their own risk and responsibility, entered the zone of operations. ''
In the aftermath of the sinking, the German government tried to justify it by claiming in an official statement that she had been armed with guns, and had "large quantities of war material '' in her cargo. They also stated that since she was classed as an auxiliary cruiser, Germany had had a right to destroy her regardless of any passengers aboard, and that the warnings issued by the German Embassy before her sailing, plus the 18 February note declaring the existence of "war zones '', relieved Germany of any responsibility for the deaths of American citizens aboard. While it was true that Lusitania had been fitted with gun mounts as part of government loan requirements during her construction, to enable rapid conversion into an Armed Merchant Cruiser (AMC) in the event of war, the guns themselves were never fitted. However, she was still listed officially as an AMC. Her cargo had included an estimated 4,200,000 rifle cartridges, 1,250 empty shell cases, and 18 cases of non-explosive fuses, all of which were listed in her manifest, but the cartridges were not officially classed as ammunition by the Cunard Line. Various theories have been put forward over the years that she had also carried undeclared high explosives that were detonated by the torpedo and helped to sink her, but this has never been proven.
Schwieger was condemned in the Allied press as a war criminal.
Of the 139 US citizens aboard Lusitania, 128 lost their lives, and there was massive outrage in Britain and America. The Nation called it "a deed for which a Hun would blush, a Turk be ashamed, and a Barbary pirate apologize '', and the British felt that the Americans had to declare war on Germany. However, US President Woodrow Wilson refused to overreact. He said at Philadelphia on 10 May 1915:
There is such a thing as a man being too proud to fight. There is such a thing as a nation being so right that it does not need to convince others by force that it is right.
When Germany began its submarine campaign against Britain, Wilson had warned that the US would hold the German government strictly accountable for any violations of American rights. On 1 May he stated that "no warning that an unlawful and inhumane act will be committed '' could be accepted as a legitimate excuse for that act.
During the weeks after the sinking, the issue was hotly debated within the administration. Secretary of State William Jennings Bryan urged compromise and restraint. The US, he believed, should try to persuade the British to abandon their interdiction of foodstuffs and limit their mine - laying operations at the same time as the Germans were persuaded to curtail their submarine campaign. He also suggested that the US government issue an explicit warning against US citizens travelling on any belligerent ships. Despite being sympathetic to Bryan 's antiwar feelings, Wilson insisted that the German government must apologise for the sinking, compensate US victims, and promise to avoid any similar occurrence in the future.
Backed by State Department second - in - command Robert Lansing, Wilson made his position clear in three notes to the German government issued on 13 May, 9 June, and 21 July.
The first note affirmed the right of Americans to travel as passengers on merchant ships and called for the Germans to abandon submarine warfare against commercial vessels, whatever flag they sailed under; it also cited attacks on three other ships: the Falaba, the Cushing, and the Gulflight.
In the second note, Wilson rejected the German arguments that the British blockade was illegal, and was a cruel and deadly attack on innocent civilians, and their charge that Lusitania had been carrying munitions. William Jennings Bryan considered Wilson 's second note too provocative and resigned in protest after failing to moderate it, to be replaced by Robert Lansing who later said in his memoirs that following the tragedy he always had the "conviction that we (the United States) would ultimately become the ally of Britain ''.
The third note, of 21 July, issued an ultimatum, to the effect that the US would regard any subsequent sinkings as "deliberately unfriendly ''.
While the American public and leadership were not ready for war, the path to an eventual declaration of war had been set as a result of the sinking of Lusitania. On 19 August U-24 sank the White Star liner SS Arabic, with the loss of 44 passengers and crew, three of whom were American. The German government, while insisting on the legitimacy of its campaign against Allied shipping, disavowed the sinking of Arabic; it offered an indemnity and pledged to order submarine commanders to abandon unannounced attacks on merchant and passenger vessels.
The British public, press, and government in general were upset at Wilson 's actions -- not realising it reflected general US opinion at the time. They sneered "too proud or too scared? ''. Shells that did not explode at the front were called "Wilsons ''.
Germany, however, continued to sink merchant vessels bound for Britain, particularly after the Battle of Jutland in late May 1916.
German Chancellor Theobald von Bethmann - Hollweg persuaded the Kaiser to forbid action against ships flying neutral flags and the U-boat war was postponed once again on 27 August, as it was realised that British ships could easily fly neutral flags.
There was disagreement over this move between the navy 's admirals (headed by Alfred von Tirpitz) and Bethmann - Hollweg. Backed by Army Chief of Staff Erich von Falkenhayn, Kaiser Wilhelm II endorsed the Chancellor 's solution, and Tirpitz and the Admiralty backed down. The German restriction order of 9 September 1915 stated that attacks were allowed only on ships that were definitely British, while neutral ships were to be treated under the Prize Law rules, and no attacks on passenger liners were to be permitted at all. The war situation demanded that there could be no possibility of orders being misinterpreted, and on 18 September Henning von Holtzendorff, the new head of the German Admiralty, issued a secret order: all U-boats operating in the English Channel and off the west coast of the United Kingdom were recalled, and the U-boat war would continue only in the North Sea, where it would be conducted under the Prize Law rules.
In January 1917 the German Government announced it would now conduct full unrestricted submarine warfare. Once again Woodrow Wilson was furious, and on 6 April 1917 the United States Congress followed Wilson 's request to declare war on Germany. The US buildup of forces was at first slow, but the German Spring Offensive of March 1918, which at first went well for the Germans with the Allies barely holding the line, was reversed with the arrival by April 1918 of two million American troops.
It was in the interests of the British to keep US citizens aware of German actions and attitudes. One over-enthusiastic propagandist 's fabricated story was circulated that in some regions of Germany, schoolchildren were given a holiday to celebrate the sinking of Lusitania. This story was based on the popular reception given the Goetz medal (see below) and was so effective that James W. Gerard, the US ambassador to Germany, recounted it being told in his memoir of his time in Germany, Face to Face with Kaiserism (1918), though without vouching for its validity.
In August 1915, the Munich medallist and sculptor Karl X. Goetz (1875 -- 1950), who had produced a series of propagandist and satirical medals as a running commentary on the war, privately struck a small run of medals (fewer than 500) as a limited - circulation satirical attack on the Cunard Line for trying to continue business as usual during wartime. Goetz blamed both the British government and the Cunard Line for allowing Lusitania to sail despite the German embassy 's warnings. Popular demand led to many unauthorised copies being made.
One side of the popular medal showed Lusitania sinking laden with guns (incorrectly depicted sinking stern first) with the motto "KEINE BANNWARE! '' ("NO CONTRABAND! ''), while the reverse showed a skeleton selling Cunard tickets with the motto "Geschäft Über Alles '' ("Business Above All '').
Goetz had put an incorrect date for the sinking on the medal, an error he later blamed on a mistake in a newspaper story about the sinking: instead of 7 May, he had put "5. Mai '', two days before the actual sinking. Not realising his error, Goetz made copies of the medal and sold them in Munich and also to some numismatic dealers with whom he conducted business.
The British Foreign Office obtained a copy of the medal, photographed it, and sent copies to the United States where it was published in the New York Times on 5 May 1916. Many popular magazines ran photographs of the medal, and it was falsely claimed that it had been awarded to the crew of the U-boat.
Emile Henry Lacombe wrote a letter to the New York Times advancing a conspiracy theory about the German sinking of the Lusitania in 1915. His letter was published Monday 22 October 1917 on page 14 titled "A NEW THEORY OF THE LUSITANIA SINKING. The Evidence of the German Medal Dated May 5 and the Report of the Explosive "Cigars '' on Board. ''
The Goetz medal attracted so much attention that Lord Newton, who was in charge of Propaganda at the Foreign Office in 1916, decided to exploit the anti-German feeling it had aroused and asked department store entrepreneur Harry Gordon Selfridge to reproduce the medal. The replica medals were produced in an attractive case and were an exact copy of the German medal, and were sold for a shilling apiece. The cases stated that the medals had been distributed in Germany "to commemorate the sinking of Lusitania '', and they came with a propaganda leaflet which strongly denounced the Germans and used the medal 's incorrect date (5 May) to claim, falsely, that the sinking of Lusitania was premeditated rather than merely incident to Germany 's larger plan to sink any ship in a combat zone without warning. The head of the Lusitania Souvenir Medal Committee later estimated that 250,000 were sold, proceeds being given to the Red Cross and St. Dunstan 's Blinded Soldiers and Sailors Hostel. Unlike the original Goetz medals, which were sand - cast from bronze, the British copies were of diecast iron and of poorer quality; however, a few original medals were also made in iron. Originals usually have "KGoetz '' on the edge. Over the years various other copies have been made.
Belatedly realising his mistake, Goetz issued a corrected medal with the date of "7. Mai ''. The Bavarian government, alarmed at the strong worldwide reaction to Goetz 's work, suppressed the medal and ordered confiscation in April 1917. The original German medals can easily be distinguished from the English copies because the date is in German, i.e. with a dot behind the number; the English version was altered to read ' May ' rather than ' Mai '. After the war Goetz expressed his regret that his work had been the cause of increasing anti-German feelings, but it remains a celebrated propaganda act.
Circa 1920 the French medallist René Baudichon created a counterblast to the Goetz medal. The Baudichon medal is in bronze, 54 millimetres (2.1 in) diameter and weighs 79.51 grams (2.805 oz). The obverse shows Liberty as depicted on the Statue of Liberty but holding a raised sword and rising from a stormy sea. Behind her the sun is breaking through clouds and six ships are steaming. Signed R Baudichon. Legend: Ultrix America Juris, 1917 U.S.A 1918 (America avenger of right). The reverse shows a view of the starboard quarter of the Lusitania correctly depicted sinking bow first. In the foreground there is a capsized lifeboat. The upper field shows a child drowning, head, hands and feet above the water; RB monogram. Legend: Lusitania May 7, 1915.
There is no footage of the sinking.
The "Prize rules '' or "Cruiser rules '', laid down by the Hague Conventions of 1899 and 1907, governed the seizure of vessels at sea during wartime, although changes in technology such as radio and the submarine eventually made parts of them irrelevant. Merchant ships were to be warned by warships, and their passengers and crew allowed to abandon ship before they were sunk, unless the ship resisted or tried to escape, or was in a convoy protected by warships. Limited armament on a merchant ship, such as one or two guns, did not necessarily affect the ship 's immunity to attack without warning, and neither did a cargo of munitions or materiel.
In November 1914 the British announced that the entire North Sea was now a War Zone, and issued orders restricting the passage of neutral shipping into and through the North Sea to special channels where supervision would be possible (the other approaches having been mined). It was in response to this, and to the British Admiralty 's order of 31 January 1915 that British merchant ships should fly neutral colours as a ruse de guerre, that Admiral Hugo von Pohl, commander of the German High Seas Fleet, published a warning in the Deutscher Reichsanzeiger (Imperial German Gazette) on 4 February 1915:
(1) The waters around Great Britain and Ireland, including the whole of the English Channel, are hereby declared to be a War Zone. From February 18 onwards every enemy merchant vessel encountered in this zone will be destroyed, nor will it always be possible to avert the danger thereby threatened to the crew and passengers.
(2) Neutral vessels also will run a risk in the War Zone, because in view of the hazards of sea warfare and the British authorization of January 31 of the misuse of neutral flags, it may not always be possible to prevent attacks on enemy ships from harming neutral ships.
In response, the Admiralty issued orders on 10 February 1915 which directed merchant ships to escape from hostile U-boats when possible, but "if a submarine comes up suddenly close ahead of you with obvious hostile intention, steer straight for her at your utmost speed... '' Further instructions ten days later advised armed steamers to open fire on a submarine even if it had not yet fired. Given the extreme vulnerability of a submarine to ramming or even small - caliber shellfire, a U-boat that surfaced and gave warning against a merchantman which had been given such instructions was putting itself in great danger. The Germans knew of these orders, even though they were intended to be secret, copies having been obtained from captured ships and from wireless intercepts. Bailey and Ryan, in their "The Lusitania Disaster '', put much emphasis on these Admiralty orders to merchantmen, arguing that it was unreasonable to expect a submarine to surface and give warning under such circumstances. In their opinion this, rather than the munitions, the nonexistent armament, or any other suggested reason, is the best rationale for the Germans ' actions in the sinking.
The cargo included 4,200,000 rounds of Remington .303 rifle / machine - gun cartridges, 1,250 cases of empty 3 - inch (76 mm) fragmentation shell casings and eighteen cases of percussion fuses, all of which were listed on the ship 's two - page manifest, filed with US Customs after she departed New York on 1 May. However, these munitions were classed as small arms ammunition, were non-explosive in bulk, and were clearly marked as such. It was perfectly legal under American shipping regulations for the liner to carry these; experts agreed they were not to blame for the second explosion. Allegations that the ship was carrying more controversial cargo, such as fine aluminium powder concealed as cheese on her cargo manifests, or guncotton (pyroxylene) disguised as casks of beef, have never been proven. In the 1960s, American diver John Light made several dives on the wreck, trying to prove the existence of contraband explosives in Lusitania 's cargo hold, ignited by the torpedo. Light claimed to have found a large hole on Lusitania 's port side, opposite where the torpedo had struck, though later expeditions disproved his findings.
In 1993, Dr. Robert Ballard, the explorer who had discovered the wrecks of Titanic and Bismarck, conducted an in - depth exploration of the wreck of Lusitania. Ballard tried to confirm John Light 's report of a large hole on the port side of the wreck, but found no evidence of one. During his investigation, Ballard noted a large quantity of coal on the sea bed near the wreck, and after consulting an explosives expert advanced the theory of a coal dust explosion. He believed dust in the bunkers would have been thrown into the air by the vibration from the explosion; the resulting cloud would have been ignited by a spark, causing the second explosion. In the years since he first advanced this theory, it has been argued that this is nearly impossible: critics say the coal dust would have been too damp to be stirred into the air in explosive concentrations by the torpedo impact, and that the coal bunker where the torpedo struck would in any case have been flooded almost immediately by seawater flowing through the damaged hull plates.
In 2007, marine forensic investigators considered that an explosion in the ship 's steam - generating plant could be a plausible explanation for the second explosion. However, accounts from the few survivors who managed to escape from the forward two boiler rooms reported that the ship 's boilers did not explode. Leading Fireman Albert Martin later testified he thought the torpedo actually entered the boiler room and exploded between a group of boilers, which was a physical impossibility. It is also known the forward boiler room filled with steam, and steam pressure feeding the turbines dropped dramatically following the second explosion. These point toward a failure, of one sort or another, in the ship 's steam - generating plant. It is possible the failure came, not directly from one of the boilers in boiler room no. 1, but rather in the high - pressure steam lines to the turbines.
The original torpedo damage alone, striking the ship on the starboard coal bunker of boiler room no. 1, would probably have sunk the ship without a second explosion. This first blast was enough to cause, on its own, serious off - centre flooding, although the sinking would possibly have been slower. The deficiencies of the ship 's original watertight bulkhead design exacerbated the situation, as did the many portholes which had been left open for ventilation.
The wreck of Lusitania lies on her starboard side at an approximately 30 - degree angle in 305 feet (93 metres) of sea water. She is severely collapsed onto her starboard side, a result of the force with which she slammed into the sea floor, and over the decades Lusitania has deteriorated significantly faster than Titanic because of corrosion in the winter tides. The keel has an "unusual curvature '', in a boomerang shape, which may be related to a lack of strength from the loss of her superstructure. The beam is reduced, with the funnels missing, presumably due to deterioration. The bow is the most prominent portion of the wreck; the stern was damaged by depth charging in the Second World War and by the removal of three of the four propellers by Oceaneering International in 1982. Prominent features on Lusitania include her still - legible name, some bollards with their ropes still intact, pieces of the ruined promenade deck, some portholes, the prow and the remaining propeller. Recent expeditions to the wreck have revealed that Lusitania is in surprisingly poor condition compared to Titanic, as her hull has already started to collapse.
the railway labor act of 1926 applies to all of the following except | Railway Labor Act - wikipedia
The Railway Labor Act is a United States federal law that governs labor relations in the railroad and airline industries. The Act, passed in 1926 and amended in 1934 and 1936, seeks to substitute bargaining, arbitration and mediation for strikes as the means of resolving labor disputes. Its provisions were originally enforced by the Board of Mediation, and later by its successor, the National Mediation Board.
The Great Railroad Strike of 1877 lasted for six weeks and was eventually put down with the intervention of federal troops. Congress later passed the Arbitration Act of 1888, which authorized the creation of arbitration panels with the power to investigate the causes of labor disputes and to issue non-binding arbitration awards. The Act was a complete failure: only one panel was ever convened under the Act, and that one, in the case of the 1894 Pullman Strike, issued its report only after the strike had been ended by a federal court injunction, backed by federal troops.
Congress attempted to correct these shortcomings with the Erdman Act, passed in 1898. The Act likewise provided for voluntary arbitration, but made any award issued by the panel binding and enforceable in federal court. It also outlawed discrimination against employees for union activities, prohibited "yellow dog contracts '' (in which an employee agrees not to join a union while employed), and required both sides to maintain the status quo during any arbitration proceedings and for three months after an award was issued. The arbitration procedures were rarely used. A successor statute, the Newlands Act of 1913, which created the Board of Mediation, proved to be more effective. The Newlands Act was largely superseded when the federal government nationalized the railroads in 1917, following the nation 's entry into World War I. (See United States Railroad Administration.)
The Adamson Act, passed in 1916, provided workers with an eight - hour day, at the same daily wage they had received previously for a ten - hour day, and required time and a half pay for overtime work. Another law passed in the same year, amid increasing concerns about the war in Europe, gave President Woodrow Wilson the power to "take possession of and assume control of any system of transportation '' for transportation of troops and war material.
Wilson exercised that authority on December 26, 1917. While Congress considered nationalizing the railroads on a permanent basis after World War I, the Wilson administration announced that it was returning the railroad system to its owners. Congress tried to preserve, on the other hand, the most successful features of the federal wartime administration, the adjustment boards, by creating a Railroad Labor Board (RLB) with the power to issue non-binding proposals for the resolution of labor disputes, as part of the Esch -- Cummins Act (Transportation Act of 1920).
The RLB soon destroyed whatever moral authority its decisions might have had in a series of decisions. In 1921 it ordered a twelve percent reduction in employees ' wages, which the railroads were quick to implement. The following year, when shop employees of the railroads launched a national strike, the RLB issued a declaration that purported to outlaw the strike; the Department of Justice then obtained an injunction that carried out that declaration. From that point forward railway unions refused to have anything to do with the RLB.
The RLA was the product of negotiations between the major railroad companies and the unions that represented their employees. Like its predecessors, it relied on boards of adjustment, established by the parties, to resolve labor disputes, with a government - appointed Board of Mediation to attempt to resolve those disputes that the boards of adjustment could not. The RLA promoted voluntary arbitration as the best method for resolving those disputes that the Board of Mediation could not settle.
Congress strengthened the procedures in the 1934 amendments to the Act, which created a procedure for resolving whether a union had the support of the majority of employees in a particular "craft or class, '' while turning the Board of Mediation into a permanent agency, the National Mediation Board (NMB), with broader powers.
Congress extended the RLA to cover airline employees in 1936.
Unlike the National Labor Relations Act (NLRA), which adopts a less interventionist approach to the way the parties conduct collective bargaining or resolve their disputes arising under collective bargaining agreements, the RLA specifies both (1) the negotiation and mediation procedures that unions and employers must exhaust before they may change the status quo and (2) the methods for resolving "minor '' disputes over the interpretation or application of collective bargaining agreements.
The RLA permits strikes over major disputes only after the union has exhausted the RLA 's negotiation and mediation procedures and bars almost all strikes over minor disputes. The RLA also authorizes the courts to enjoin strikes if the union has not exhausted those procedures.
On the other hand, the RLA imposes fewer restrictions on the tactics that unions may use when they do have the right to strike. The RLA, unlike the NLRA, allows secondary boycotts against other RLA - regulated carriers and permits employees to engage in other types of strikes, such as intermittent strikes, that might be unprotected under the NLRA.
The RLA categorizes all labor disputes as either "major '' disputes, which concern the making or modification of the collective bargaining agreement between the parties, or "minor '' disputes, which involve the interpretation or application of collective bargaining agreements. Unions can strike over major disputes only after they have exhausted the RLA 's "almost interminable '' negotiation and mediation procedures. They can not, however, strike over minor disputes, either during the arbitration procedures or after an award is issued.
The federal courts have the power to enjoin a strike over a major dispute if the union has not exhausted the RLA 's negotiation and mediation procedures. The Norris - LaGuardia Act dictates the procedures that the court must follow. Once the NMB releases the parties from mediation, however, they retain the power to engage in strikes or lockouts, even if they subsequently resume negotiations or the NMB offers mediation again.
The federal courts likewise have the power to enjoin a union from striking over arbitrable disputes, that is, minor disputes. The court may, however, also require the employer to restore the status quo as a condition of any injunctive relief against a strike.
Major dispute bargaining is handled through the "Section 6 '' process, named for the section of the Act that describes the bargaining process. The railroad carriers have formed a coalition for national handling of Railway Labor Act bargaining under Section 6, named the National Carriers Conference Committee (NCCC). The railroad unions also form coalitions of various unions to increase bargaining power in the Section 6 process.
Carriers may lawfully replace strikers engaged in a lawful strike, but they may not discharge them except for misconduct or eliminate their jobs in order to retaliate against them for striking. It is not clear whether an employer may discharge workers for striking before all of the RLA 's bargaining and mediation processes have been exhausted.
The employer must also allow strikers to displace replacements hired on a temporary basis and permanent replacements who have not completed the training required before they can become active employees. The employer may, however, allow less senior employees who crossed the picket line to keep the jobs they were given after crossing the line, even if the seniority rules in effect before the strike would have required the employer to reassign those jobs to returning strikers.
The NMB has the responsibility for conducting elections when a union claims to represent a carrier 's employees. The NMB defines the craft or class of employees eligible to vote, which almost always extends to all of the employees performing a particular job function throughout the company 's operations, rather than just those at a particular site or in a particular region.
A union seeking to represent an unorganized group of employees must produce signed and dated authorization cards or other proof of support from at least 50% of the craft or class. A party attempting to oust an incumbent union must produce evidence of support from a majority of the craft or class, and the NMB must then conduct an election. If the employees are unrepresented and the employer agrees, the NMB may certify the union based on the authorization cards alone.
The NMB usually uses mail ballots to conduct elections, unlike the National Labor Relations Board (NLRB), which has historically preferred walk - in elections under the NLRA. The NMB can order a rerun election if it determines that either an employer or union has interfered with employees ' free choice.
Unlike the NLRA, which gives the NLRB nearly exclusive power to enforce the Act, the RLA allows employees to sue in federal court to challenge an employer 's violation of the Act. The courts can grant employees reinstatement and backpay, along with other forms of equitable relief.
|
which mode of reasoning comes into play when seeking to explain and to predict natural phenomena | Reason - wikipedia
Reason is the capacity for consciously making sense of things, establishing and verifying facts, applying logic, and changing or justifying practices, institutions, and beliefs based on new or existing information. It is closely associated with such characteristically human activities as philosophy, science, language, mathematics, and art and is normally considered to be a distinguishing ability possessed by humans. Reason, or an aspect of it, is sometimes referred to as rationality.
Reasoning is associated with thinking, cognition, and intellect. The philosophical field of logic studies ways in which humans reason formally through argument. Reasoning may be subdivided into forms of logical reasoning (forms associated with the strict sense): deductive reasoning, inductive reasoning, abductive reasoning; and other modes of reasoning considered more informal, such as intuitive reasoning and verbal reasoning. Along these lines, a distinction is often drawn between logical, discursive reasoning (reason proper), and intuitive reasoning, in which the reasoning process through intuition -- however valid -- may tend toward the personal and the subjectively opaque. In some social and political settings logical and intuitive modes of reasoning may clash, while in other contexts intuition and formal reason are seen as complementary rather than adversarial. For example, in mathematics, intuition is often necessary for the creative processes involved with arriving at a formal proof, arguably the most difficult of formal reasoning tasks.
Reasoning, like habit or intuition, is one of the ways by which thinking moves from one idea to a related idea. For example, reasoning is the means by which rational individuals understand sensory information from their environments, or conceptualize abstract dichotomies such as cause and effect, truth and falsehood, or ideas regarding notions of good or bad. Reasoning, as a part of executive decision making, is also closely identified with the ability to self - consciously change, in terms of goals, beliefs, attitudes, traditions, and institutions, and therefore with the capacity for freedom and self - determination.
In contrast to the use of "reason '' as an abstract noun, a reason is a consideration given which either explains or justifies events, phenomena, or behavior. Reasons justify decisions, reasons support explanations of natural phenomena; reasons can be given to explain the actions (conduct) of individuals.
Using reason, or reasoning, can also be described more plainly as providing good, or the best, reasons. For example, when evaluating a moral decision, "morality is, at the very least, the effort to guide one 's conduct by reason -- that is, doing what there are the best reasons for doing -- while giving equal (and impartial) weight to the interests of all those affected by what one does. ''
Psychologists and cognitive scientists have attempted to study and explain how people reason, e.g. which cognitive and neural processes are engaged, and how cultural factors affect the inferences that people draw. The field of automated reasoning studies how reasoning may or may not be modeled computationally. Animal psychology considers the question of whether animals other than humans can reason.
In the English language and other modern European languages, "reason '', and related words, represent words which have always been used to translate Latin and classical Greek terms in the sense of their philosophical usage.
The earliest major philosophers to publish in English, such as Francis Bacon, Thomas Hobbes, and John Locke also routinely wrote in Latin and French, and compared their terms to Greek, treating the words "logos '', "ratio '', "raison '' and "reason '' as inter-changeable. The meaning of the word "reason '' in senses such as "human reason '' also overlaps to a large extent with "rationality '' and the adjective of "reason '' in philosophical contexts is normally "rational '', rather than "reasoned '' or "reasonable ''. Some philosophers, Thomas Hobbes for example, also used the word ratiocination as a synonym for "reasoning ''.
The proposal that reason gives humanity a special position in nature has been argued to be a defining characteristic of western philosophy and later western modern science, starting with classical Greece. Philosophy can be described as a way of life based upon reason, and in the other direction reason has been one of the major subjects of philosophical discussion since ancient times. Reason is often said to be reflexive, or "self - correcting, '' and the critique of reason has been a persistent theme in philosophy. It has been defined in different ways, at different times, by different thinkers about human nature.
For many classical philosophers, nature was understood teleologically, meaning that every type of thing had a definitive purpose which fit within a natural order that was itself understood to have aims. Perhaps starting with Pythagoras or Heraclitus, the cosmos is even said to have reason. Reason, by this account, is not just one characteristic that humans happen to have, and that influences happiness amongst other characteristics. Reason was considered of higher stature than other characteristics of human nature, such as sociability, because it is something humans share with nature itself, linking an apparently immortal part of the human mind with the divine order of the cosmos itself. Within the human mind or soul (psyche), reason was described by Plato as being the natural monarch which should rule over the other parts, such as spiritedness (thumos) and the passions. Aristotle, Plato 's student, defined human beings as rational animals, emphasizing reason as a characteristic of human nature. He defined the highest human happiness or well being (eudaimonia) as a life which is lived consistently, excellently and completely in accordance with reason.
The conclusions to be drawn from the discussions of Aristotle and Plato on this matter are amongst the most debated in the history of philosophy. But teleological accounts such as Aristotle 's were highly influential for those who attempt to explain reason in a way which is consistent with monotheism and the immortality and divinity of the human soul. For example, in the neo-platonist account of Plotinus, the cosmos has one soul, which is the seat of all reason, and the souls of all individual humans are part of this soul. Reason is for Plotinus both the provider of form to material things, and the light which brings individuals souls back into line with their source. Such neo-Platonist accounts of the rational part of the human soul were standard amongst medieval Islamic philosophers, and under this influence, mainly via Averroes, came to be debated seriously in Europe until well into the renaissance, and they remain important in Iranian philosophy.
The early modern era was marked by a number of significant changes in the understanding of reason, starting in Europe. One of the most important of these changes involved a change in the metaphysical understanding of human beings. Scientists and philosophers began to question the teleological understanding of the world. Nature was no longer assumed to be human - like, with its own aims or reason, and human nature was no longer assumed to work according to anything other than the same "laws of nature '' which affect inanimate things. This new understanding eventually displaced the previous world view that derived from a spiritual understanding of the universe.
Accordingly, in the 17th century, René Descartes explicitly rejected the traditional notion of humans as "rational animals, '' suggesting instead that they are nothing more than "thinking things '' along the lines of other "things '' in nature. Any grounds of knowledge outside that understanding was, therefore, subject to doubt.
In his search for a foundation of all possible knowledge, Descartes deliberately decided to throw into doubt all knowledge -- except that of the mind itself in the process of thinking:
At this time I admit nothing that is not necessarily true. I am therefore precisely nothing but a thinking thing; that is a mind, or intellect, or understanding, or reason -- words of whose meanings I was previously ignorant.
This eventually became known as epistemological or "subject - centred '' reason, because it is based on the knowing subject, who perceives the rest of the world and itself as a set of objects to be studied, and successfully mastered by applying the knowledge accumulated through such study. Breaking with tradition and many thinkers after him, Descartes explicitly did not divide the incorporeal soul into parts, such as reason and intellect, describing them as one indivisible incorporeal entity.
A contemporary of Descartes, Thomas Hobbes described reason as a broader version of "addition and subtraction '' which is not limited to numbers. This understanding of reason is sometimes termed "calculative '' reason. Similar to Descartes, Hobbes asserted that "No discourse whatsoever, can end in absolute knowledge of fact, past, or to come '' but that "sense and memory '' is absolute knowledge.
In the late 17th century, through the 18th century, John Locke and David Hume developed Descartes ' line of thought still further. Hume took it in an especially skeptical direction, proposing that there could be no possibility of deducing relationships of cause and effect, and therefore no knowledge is based on reasoning alone, even if it seems otherwise.
Hume famously remarked that, "We speak not strictly and philosophically when we talk of the combat of passion and of reason. Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them. '' Hume also took his definition of reason to unorthodox extremes by arguing, unlike his predecessors, that human reason is not qualitatively different from either simply conceiving individual ideas, or from judgments associating two ideas, and that "reason is nothing but a wonderful and unintelligible instinct in our souls, which carries us along a certain train of ideas, and endows them with particular qualities, according to their particular situations and relations. '' It followed from this that animals have reason, only much less complex than human reason.
In the 18th century, Immanuel Kant attempted to show that Hume was wrong by demonstrating that a "transcendental '' self, or "I '', was a necessary condition of all experience. Therefore, suggested Kant, on the basis of such a self, it is in fact possible to reason both about the conditions and limits of human knowledge. And so long as these limits are respected, reason can be the vehicle of morality, justice, aesthetics, theories of knowledge (epistemology), and understanding.
In the formulation of Kant, who wrote some of the most influential modern treatises on the subject, the great achievement of reason (German: Vernunft) is that it is able to exercise a kind of universal law - making. Kant was able therefore to re-formulate the basis of moral - practical, theoretical and aesthetic reasoning, on "universal '' laws.
Here practical reasoning is the self - legislating or self - governing formulation of universal norms, and theoretical reasoning the way humans posit universal laws of nature.
Under practical reason, the moral autonomy or freedom of human beings depends on their ability to behave according to laws that are given to them by the proper exercise of that reason. This contrasted with earlier forms of morality, which depended on religious understanding and interpretation, or nature for their substance.
According to Kant, in a free society each individual must be able to pursue their goals however they see fit, so long as their actions conform to principles given by reason. He formulated such a principle, called the "categorical imperative '', which would justify an action only if it could be universalized:
Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.
In contrast to Hume then, Kant insists that reason itself (German Vernunft) has natural ends itself, the solution to the metaphysical problems, especially the discovery of the foundations of morality. Kant claimed that this problem could be solved with his "transcendental logic '' which unlike normal logic is not just an instrument, which can be used indifferently, as it was for Aristotle, but a theoretical science in its own right and the basis of all the others.
According to Jürgen Habermas, the "substantive unity '' of reason has dissolved in modern times, such that it can no longer answer the question "How should I live? '' Instead, the unity of reason has to be strictly formal, or "procedural. '' He thus described reason as a group of three autonomous spheres (on the model of Kant 's three critiques): cognitive - instrumental reason, the sphere of science and technology; moral - practical reason, the sphere of law and morality; and aesthetic - expressive reason, the sphere of art and criticism.
For Habermas, these three spheres are the domain of experts, and therefore need to be mediated with the "lifeworld '' by philosophers. In drawing such a picture of reason, Habermas hoped to demonstrate that the substantive unity of reason, which in pre-modern societies had been able to answer questions about the good life, could be made up for by the unity of reason 's formalizable procedures.
Hamann, Herder, Kant, Hegel, Kierkegaard, Nietzsche, Heidegger, Foucault, Rorty, and many other philosophers have contributed to a debate about what reason means, or ought to mean. Some, like Kierkegaard, Nietzsche, and Rorty, are skeptical about subject - centred, universal, or instrumental reason, and even skeptical toward reason as a whole. Others, including Hegel, believe that it has obscured the importance of intersubjectivity, or "spirit '' in human life, and attempt to reconstruct a model of what reason should be.
Some thinkers, e.g. Foucault, believe there are other forms of reason, neglected but essential to modern life, and to our understanding of what it means to live a life according to reason.
In the last several decades, a number of proposals have been made to "re-orient '' this critique of reason, or to recognize the "other voices '' or "new departments '' of reason:
For example, in opposition to subject - centred reason, Habermas has proposed a model of communicative reason that sees it as an essentially cooperative activity, based on the fact of linguistic intersubjectivity.
Nikolas Kompridis has proposed a widely encompassing view of reason as "that ensemble of practices that contributes to the opening and preserving of openness '' in human affairs, and a focus on reason 's possibilities for social change.
The philosopher Charles Taylor, influenced by the 20th century German philosopher Martin Heidegger, has proposed that reason ought to include the faculty of disclosure, which is tied to the way we make sense of things in everyday life, as a new "department '' of reason.
In the essay "What is Enlightenment? '', Michel Foucault proposed a concept of critique based on Kant 's distinction between "private '' and "public '' uses of reason. This distinction, as suggested, has two dimensions:
The terms "logic '' or "logical '' are sometimes used as if they were identical with the term "reason '' or with the concept of being "rational '', or sometimes logic is seen as the most pure or the defining form of reason. For example in modern economics, rational choice is assumed to equate to logically consistent choice.
Reason and logic can however be thought of as distinct, although logic is one important aspect of reason. Author Douglas Hofstadter, in Gödel, Escher, Bach, characterizes the distinction in this way. Logic is done inside a system while reason is done outside the system by such methods as skipping steps, working backward, drawing diagrams, looking at examples, or seeing what happens if you change the rules of the system.
Reason is a type of thought, and the word "logic '' involves the attempt to describe rules or norms by which reasoning operates, so that orderly reasoning can be taught. The oldest surviving writings to explicitly consider the rules by which reason operates are the works of the Greek philosopher Aristotle, especially the Prior Analytics and Posterior Analytics. Although the Ancient Greeks had no separate word for logic as distinct from language and reason, Aristotle 's newly coined word "syllogism '' (syllogismos) identified logic clearly for the first time as a distinct field of study. When Aristotle referred to "the logical '' (hē logikē), he was referring more broadly to rational thought.
As pointed out by philosophers such as Hobbes, Locke and Hume, some animals are also clearly capable of a type of "associative thinking '', even to the extent of associating causes and effects. A dog once kicked, can learn how to recognize the warning signs and avoid being kicked in the future, but this does not mean the dog has reason in any strict sense of the word. It also does not mean that humans acting on the basis of experience or habit are using their reason.
Human reason requires more than being able to associate two ideas, even if those two ideas might be described by a reasoning human as a cause and an effect, perceptions of smoke, for example, and memories of fire. For reason to be involved, the association of smoke and the fire would have to be thought through in a way which can be explained, for example as cause and effect. In the explanation of Locke, for example, reason requires the mental use of a third idea in order to make this comparison by use of syllogism.
More generally, reason in the strict sense requires the ability to create and manipulate a system of symbols, as well as indices and icons, according to Charles Sanders Peirce, the symbols having only a nominal, though habitual, connection to either smoke or fire. One example of such a system of artificial symbols and signs is language.
The connection of reason to symbolic thinking has been expressed in different ways by philosophers. Thomas Hobbes described the creation of "Markes, or Notes of remembrance '' (Leviathan Ch. 4) as speech. He used the word speech as an English version of the Greek word logos, so that speech did not need to be communicated. When communicated, such speech becomes language, and the marks or notes of remembrance are called "Signes '' by Hobbes. Going further back, although Aristotle is a source of the idea that only humans have reason (logos), he does mention that animals with imagination, for whom sense perceptions can persist, come closest to having something like reasoning and nous, and even uses the word "logos '' in one place to describe the distinctions which animals can perceive in such cases.
Reason and imagination rely on similar mental processes. Imagination is not only found in humans. Aristotle, for example, stated that phantasia (imagination: that which can hold images or phantasmata) and phronein (a type of thinking that can judge and understand in some sense) also exist in some animals. According to him, both are related to the primary perceptive ability of animals, which gathers the perceptions of different senses and defines the order of the things that are perceived without distinguishing universals, and without deliberation or logos. But this is not yet reason, because human imagination is different.
In their writings about the origin of language, Terrence Deacon and Merlin Donald also connect reason not only to language, but also to mimesis. More specifically, they describe the ability to create language as part of an internal modeling of reality specific to humankind. Other results are consciousness, and imagination or fantasy. In contrast, modern proponents of a genetic pre-disposition to language itself include Noam Chomsky and Steven Pinker, with whom Donald and Deacon can be contrasted.
If reason is symbolic thinking, and peculiarly human, then this implies that humans have a special ability to maintain a clear consciousness of the distinctness of "icons '' or images and the real things they represent. Starting with a modern author, Merlin Donald writes:
A dog might perceive the "meaning '' of a fight that was realistically play - acted by humans, but it could not reconstruct the message or distinguish the representation from its referent (a real fight). (...) Trained apes are able to make this distinction; young children make this distinction early -- hence, their effortless distinction between play - acting an event and the event itself
In classical descriptions, an equivalent description of this mental faculty is eikasia, in the philosophy of Plato. This is the ability to perceive whether a perception is an image of something else, related somehow but not the same, and therefore allows humans to perceive that a dream or memory or a reflection in a mirror is not reality as such. What Klein refers to as dianoetic eikasia is the eikasia concerned specifically with thinking and mental images, such as those mental symbols, icons, signes, and marks discussed above as definitive of reason. Explaining reason from this direction: human thinking is special in the way that we often understand visible things as if they were themselves images of our intelligible "objects of thought '' as "foundations '' (hypothēses in Ancient Greek). This thinking (dianoia) is "... an activity which consists in making the vast and diffuse jungle of the visible world depend on a plurality of more ' precise ' noēta. ''
Both Merlin Donald and the Socratic authors such as Plato and Aristotle emphasize the importance of mimesis, often translated as imitation or representation. Donald writes:
Imitation is found especially in monkeys and apes (... but...) Mimesis is fundamentally different from imitation and mimicry in that it involves the invention of intentional representations. (...) Mimesis is not absolutely tied to external communication.
Mimēsis is a concept, now popular again in academic discussion, that was particularly prevalent in Plato 's works; within Aristotle, it is discussed mainly in the Poetics. In Michael Davis 's account of the theory of man in this work:
It is the distinctive feature of human action, that whenever we choose what we do, we imagine an action for ourselves as though we were inspecting it from the outside. Intentions are nothing more than imagined actions, internalizings of the external. All action is therefore imitation of action; it is poetic...
Donald like Plato (and Aristotle, especially in On Memory and Recollection), emphasizes the peculiarity in humans of voluntary initiation of a search through one 's mental world. The ancient Greek anamnēsis, normally translated as "recollection '' was opposed to mneme or memory. Memory, shared with some animals, requires a consciousness not only of what happened in the past, but also that something happened in the past, which is in other words a kind of eikasia "... but nothing except man is able to recollect. '' Recollection is a deliberate effort to search for and recapture something once known. Klein writes that, "To become aware of our having forgotten something means to begin recollecting. '' Donald calls the same thing autocueing, which he explains as follows: "Mimetic acts are reproducible on the basis of internal, self - generated cues. This permits voluntary recall of mimetic representations, without the aid of external cues -- probably the earliest form of representational thinking. ''
In a celebrated paper in modern times, the fantasy author and philologist J.R.R. Tolkien wrote in his essay "On Fairy Stories '' that the terms "fantasy '' and "enchantment '' are connected to not only "... the satisfaction of certain primordial human desires... '' but also "... the origin of language and of the mind. ''
Looking at logical categorizations of different types of reasoning, the traditional main division made in philosophy is between deductive reasoning and inductive reasoning. Formal logic has been described as the science of deduction. The study of inductive reasoning is generally carried out within the field known as informal logic or critical thinking.
Logic, a subdivision of philosophy, is the study of reasoning. Deduction is a form of reasoning in which a conclusion follows necessarily from the stated premises. A deduction is also the conclusion reached by a deductive reasoning process. One classic example of deductive reasoning is that found in syllogisms like the following:
1. All humans are mortal.
2. Socrates is a human.
3. Therefore, Socrates is mortal.
The reasoning in this argument is valid, because there is no way in which the premises, 1 and 2, could be true and the conclusion, 3, be false.
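This notion of validity can be made concrete computationally. The following minimal Python sketch is an illustration only: the function name and the encoding of premises as Boolean functions are invented for the example. It brute - forces a truth table to confirm that no assignment makes every premise true while the conclusion is false, here for the valid form modus ponens.

```python
from itertools import product

def is_valid(premises, conclusion, variables):
    """Return True if no truth assignment makes every premise true while the conclusion is false."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(premise(env) for premise in premises) and not conclusion(env):
            return False  # counterexample found: the argument form is invalid
    return True

# Modus ponens: "if p then q" and "p" together entail "q".
premises = [lambda env: (not env["p"]) or env["q"],  # p -> q
            lambda env: env["p"]]                    # p
conclusion = lambda env: env["q"]                    # q

print(is_valid(premises, conclusion, ["p", "q"]))  # prints True: the form is deductively valid
```

Replacing the conclusion with its negation would make the function return False, which illustrates that validity is a property of the argument 's form rather than of the truth of its individual statements.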
Induction is a form of inference producing propositions about unobserved objects or types, either specifically or generally, based on previous observation. It is used to ascribe properties or relations to objects or types based on previous observations or experiences, or to formulate general statements or laws based on limited observations of recurring phenomenal patterns.
Inductive reasoning contrasts strongly with deductive reasoning in that, even in the best, or strongest, cases of inductive reasoning, the truth of the premises does not guarantee the truth of the conclusion. Instead, the conclusion of an inductive argument follows with some degree of probability. Relatedly, the conclusion of an inductive argument contains more information than is already contained in the premises. Thus, this method of reasoning is ampliative.
A classic example of inductive reasoning comes from the empiricist David Hume: the sun has risen in the east every morning up until now; therefore, it is inferred that the sun will also rise in the east tomorrow.
Abductive reasoning, or argument to the best explanation, is a form of inductive reasoning, since the conclusion in an abductive argument does not follow with certainty from its premises and concerns something unobserved. What distinguishes abduction from the other forms of reasoning is an attempt to favour one conclusion above others, by attempting to falsify alternative explanations or by demonstrating the likelihood of the favoured conclusion, given a set of more or less disputable assumptions. For example, when a patient displays certain symptoms, there might be various possible causes, but one of these is preferred above others as being more probable.
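A rough computational picture of abduction is to score each candidate explanation by its prior plausibility multiplied by how well it accounts for the observations, and to prefer the highest - scoring one. The Python sketch below is only an illustration under invented assumptions: the causes, symptoms, and probabilities are made up for the example and are not medical data.

```python
# Hypothetical prior probabilities and symptom likelihoods for each candidate cause.
causes = {
    "common cold": {"prior": 0.30, "likelihoods": {"cough": 0.8, "fever": 0.3}},
    "influenza":   {"prior": 0.10, "likelihoods": {"cough": 0.9, "fever": 0.9}},
    "allergy":     {"prior": 0.20, "likelihoods": {"cough": 0.4, "fever": 0.05}},
}

def best_explanation(observed_symptoms):
    """Score each cause by prior * product of symptom likelihoods and return the best."""
    scores = {}
    for cause, info in causes.items():
        score = info["prior"]
        for symptom in observed_symptoms:
            score *= info["likelihoods"].get(symptom, 0.01)  # small default for unmodelled symptoms
        scores[cause] = score
    return max(scores, key=scores.get), scores

print(best_explanation(["cough", "fever"]))  # influenza scores highest for this symptom set
```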
Analogical reasoning is reasoning from the particular to the particular. It is often used in case - based reasoning, especially legal reasoning. An example follows:
Analogical reasoning can be viewed as a form of inductive reasoning from a single example, but if it is intended as inductive reasoning it is a bad example, because inductive reasoning typically uses a large number of examples to reason from the particular to the general. Analogical reasoning often leads to wrong conclusions. For example:
Flawed reasoning in arguments is known as fallacious reasoning. Bad reasoning within arguments can be because it commits either a formal fallacy or an informal fallacy.
Formal fallacies occur when there is a problem with the form, or structure, of the argument. The word "formal '' refers to this link to the form of the argument. An argument that contains a formal fallacy will always be invalid.
An informal fallacy is an error in reasoning that occurs due to a problem with the content, rather than mere structure, of the argument.
Philosophy is sometimes described as a life of reason, with normal human reason pursued in a more consistent and dedicated way than usual. Two categories of problem have long been discussed by philosophers concerning reason, essentially reasonings about reasoning itself as a human aim, or philosophizing about philosophizing. The first question is whether we can be confident that reason can achieve knowledge of truth better than other ways of trying to achieve such knowledge. The other question is whether a life of reason, a life that aims to be guided by reason, can be expected to achieve a happy life more so than other ways of life (whether such a life of reason results in knowledge or not).
Since classical times a question has remained constant in philosophical debate (which is sometimes seen as a conflict between movements called Platonism and Aristotelianism) concerning the role of reason in confirming truth. People use logic, deduction, and induction, to reach conclusions they think are true. Conclusions reached in this way are considered more certain than sense perceptions on their own. On the other hand, if such reasoned conclusions are only built originally upon a foundation of sense perceptions, then, our most logical conclusions can never be said to be certain because they are built upon the very same fallible perceptions they seek to better.
This leads to the question of what types of first principles, or starting points of reasoning, are available for someone seeking to come to true conclusions. In Greek, "first principles '' are archai, "starting points '', and the faculty used to perceive them is sometimes referred to in Aristotle and Plato as nous which was close in meaning to awareness or consciousness.
Empiricism (sometimes associated with Aristotle but more correctly associated with British philosophers such as John Locke and David Hume, as well as their ancient equivalents such as Democritus) asserts that sensory impressions are the only available starting points for reasoning and attempting to attain truth. This approach always leads to the controversial conclusion that absolute knowledge is not attainable. Idealism, (associated with Plato and his school), claims that there is a "higher '' reality, from which certain people can directly arrive at truth without needing to rely only upon the senses, and that this higher reality is therefore the primary source of truth.
Philosophers such as Plato, Aristotle, Al - Farabi, Avicenna, Averroes, Maimonides, Aquinas and Hegel are sometimes said to have argued that reason must be fixed and discoverable -- perhaps by dialectic, analysis, or study. In the vision of these thinkers, reason is divine or at least has divine attributes. Such an approach allowed religious philosophers such as Thomas Aquinas and Étienne Gilson to try to show that reason and revelation are compatible. According to Hegel, "... the only thought which Philosophy brings with it to the contemplation of History, is the simple conception of reason; that reason is the Sovereign of the World; that the history of the world, therefore, presents us with a rational process. ''
Since the 17th century rationalists, reason has often been taken to be a subjective faculty, or rather the unaided ability (pure reason) to form concepts. For Descartes, Spinoza and Leibniz, this was associated with mathematics. Kant attempted to show that pure reason could form concepts (time and space) that are the conditions of experience. Kant made his argument in opposition to Hume, who denied that reason had any role to play in experience.
After Plato and Aristotle, western literature often treated reason as being the faculty that trained the passions and appetites. Stoic philosophy by contrast considered all passions bad. After the critiques of reason in the early Enlightenment the appetites were rarely discussed or conflated with the passions. Some Enlightenment camps took after the Stoics to say Reason should oppose Passion rather than order it, while others like the Romantics believed that Passion displaces Reason, as in the maxim "follow your heart ''.
Reason has been seen as a slave, or judge, of the passions, notably in the work of David Hume, and more recently of Freud. Reasoning which claims that the object of a desire is demanded by logic alone is called rationalization.
Rousseau first proposed, in his second Discourse, that reason and political life are not natural and are possibly harmful to mankind. He asked what really can be said about what is natural to mankind. What, other than reason and civil society, "best suits his constitution ''? Rousseau saw "two principles prior to reason '' in human nature. First, we hold an intense interest in our own well - being. Secondly, we object to the suffering or death of any sentient being, especially one like ourselves. These two passions lead us to desire more than we could achieve. We become dependent upon each other, and on relationships of authority and obedience. This effectively puts the human race into slavery. Rousseau says that he almost dares to assert that nature does not destine men to be healthy. According to Velkley, "Rousseau outlines certain programs of rational self - correction, most notably the political legislation of the Contrat Social and the moral education in Émile. All the same, Rousseau understands such corrections to be only ameliorations of an essentially unsatisfactory condition, that of socially and intellectually corrupted humanity. ''
This quandary presented by Rousseau led to Kant 's new way of justifying reason as freedom to create good and evil. These therefore are not to be blamed on nature or God. In various ways, German Idealism after Kant, and major later figures such as Nietzsche, Bergson, Husserl, Scheler, and Heidegger, remain pre-occupied with problems coming from the metaphysical demands or urges of reason. The influence of Rousseau and these later writers is also large upon art and politics. Many writers (such as Nikos Kazantzakis) extol passion and disparage reason. In politics, modern nationalism comes from Rousseau 's argument that rationalist cosmopolitanism brings man ever further from his natural state.
Another view on reason and emotion was proposed in the 1994 book titled Descartes ' Error by Antonio Damasio. In it, Damasio presents the "Somatic Marker Hypothesis '' which states that emotions guide behavior and decision - making. Damasio argues that these somatic markers (known collectively as "gut feelings '') are "intuitive signals '' that direct our decision making processes in a certain way that can not be solved with rationality alone. Damasio further argues that rationality requires emotional input in order to function.
There are many religious traditions, some of which are explicitly fideist and others of which claim varying degrees of rationalism. Secular critics sometimes accuse all religious adherents of irrationality, since they claim such adherents are guilty of ignoring, suppressing, or forbidding some kinds of reasoning concerning some subjects (such as religious dogmas, moral taboos, etc.). Though the theologies and religions such as classical monotheism typically do not claim to be irrational, there is often a perceived conflict or tension between faith and tradition on the one hand, and reason on the other, as potentially competing sources of wisdom, law and truth.
Religious adherents sometimes respond by arguing that faith and reason can be reconciled, or have different non-overlapping domains, or that critics engage in a similar kind of irrationalism:
Some commentators have claimed that Western civilization can be almost defined by its serious testing of the limits of tension between "unaided '' reason and faith in "revealed '' truths -- figuratively summarized as Athens and Jerusalem, respectively. Leo Strauss spoke of a "Greater West '' that included all areas under the influence of the tension between Greek rationalism and Abrahamic revelation, including the Muslim lands. He was particularly influenced by the great Muslim philosopher Al - Farabi. To consider to what extent Eastern philosophy might have partaken of these important tensions, Strauss thought it best to consider whether dharma or tao may be equivalent to Nature (by which we mean physis in Greek). According to Strauss the beginning of philosophy involved the "discovery or invention of nature '' and the "pre-philosophical equivalent of nature '' was supplied by "such notions as ' custom ' or ' ways ' '', which appear to be really universal in all times and places. The philosophical concept of nature or natures as a way of understanding archai (first principles of knowledge) brought about a peculiar tension between reasoning on the one hand, and tradition or faith on the other.
Although there is this special history of debate concerning reason and faith in the Islamic, Christian and Jewish traditions, the pursuit of reason is sometimes argued to be compatible with the practice of other religions of a different nature, such as Hinduism, because they do not define their tenets in such an absolute way.
Aristotle famously described reason (with language) as a part of human nature, which means that it is best for humans to live "politically '' meaning in communities of about the size and type of a small city state (polis in Greek). For example...
It is clear, then, that a human being is more of a political (politikon = of the polis) animal (zōion) than is any bee or than any of those animals that live in herds. For nature, as we say, makes nothing in vain, and humans are the only animals who possess reasoned speech (logos). Voice, of course, serves to indicate what is painful and pleasant; that is why it is also found in other animals, because their nature has reached the point where they can perceive what is painful and pleasant and express these to each other. But speech (logos) serves to make plain what is advantageous and harmful and so also what is just and unjust. For it is a peculiarity of humans, in contrast to the other animals, to have perception of good and bad, just and unjust, and the like; and the community in these things makes a household or city (polis). (...) By nature, then, the drive for such a community exists in everyone, but the first to set one up is responsible for things of very great goodness. For as humans are the best of all animals when perfected, so they are the worst when divorced from law and right. The reason is that injustice is most difficult to deal with when furnished with weapons, and the weapons a human being has are meant by nature to go along with prudence and virtue, but it is only too possible to turn them to contrary uses. Consequently, if a human being lacks virtue, he is the most unholy and savage thing, and when it comes to sex and food, the worst. But justice is something political (to do with the polis), for right is the arrangement of the political community, and right is discrimination of what is just. (Aristotle 's Politics 1253a 1.2. Peter Simpson 's translation, with Greek terms inserted in square brackets.)
The concept of human nature being fixed in this way, implied, in other words, that we can define what type of community is always best for people. This argument has remained a central argument in all political, ethical and moral thinking since then, and has become especially controversial since firstly Rousseau 's Second Discourse, and secondly, the Theory of Evolution. Already in Aristotle there was an awareness that the polis had not always existed and had needed to be invented or developed by humans themselves. The household came first, and the first villages and cities were just extensions of that, with the first cities being run as if they were still families with Kings acting like fathers.
Friendship (philia) seems to prevail (in) man and woman according to nature (kata phusin); for people are by nature (tēi phusei) pairing (sunduastikon) more than political (politikon = of the polis), inasmuch as the household (oikos) is prior (proteron = earlier) and more necessary than the polis and making children is more common (koinoteron) with the animals. In the other animals, community (koinōnia) goes no further than this, but people live together (sumoikousin) not only for the sake of making children, but also for the things for life; for from the start the functions (erga) are divided, and are different (for) man and woman. Thus they supply each other, putting their own into the common (eis to koinon). It is for these (reasons) that both utility (chrēsimon) and pleasure (hēdu) seem to be found in this kind of friendship. (Nicomachean Ethics, VIII. 12.1162 a. Rough literal translation with Greek terms shown in square brackets.)
Rousseau in his Second Discourse finally took the shocking step of claiming that this traditional account has things in reverse: with reason, language and rationally organized communities all having developed over a long period of time merely as a result of the fact that some habits of cooperation were found to solve certain types of problems, and that once such cooperation became more important, it forced people to develop increasingly complex cooperation -- often only to defend themselves from each other.
In other words, according to Rousseau, reason, language and rational community did not arise because of any conscious decision or plan by humans or gods, nor because of any pre-existing human nature. As a result, he claimed, living together in rationally organized communities like modern humans is a development with many negative aspects compared to the original state of man as an ape. If anything is specifically human in this theory, it is the flexibility and adaptability of humans. This view of the animal origins of distinctive human characteristics later received support from Charles Darwin 's Theory of Evolution.
The two competing theories concerning the origins of reason are relevant to political and ethical thought because, according to the Aristotelian theory, a best way of living together exists independently of historical circumstances. According to Rousseau, we should even doubt that reason, language and politics are a good thing, as opposed to being simply the best option given the particular course of events that lead to today. Rousseau 's theory, that human nature is malleable rather than fixed, is often taken to imply, for example by Karl Marx, a wider range of possible ways of living together than traditionally known.
However, while Rousseau 's initial impact encouraged bloody revolutions against traditional politics, including both the French Revolution and the Russian Revolution, his own conclusions about the best forms of community seem to have been remarkably classical, in favor of city - states such as Geneva, and rural living.
Scientific research into reasoning is carried out within the fields of psychology and cognitive science. Psychologists attempt to determine whether or not people are capable of rational thought in a number of different circumstances.
Assessing how well someone engages in reasoning is the project of determining the extent to which the person is rational or acts rationally. It is a key research question in the psychology of reasoning. Rationality is often divided into its respective theoretical and practical counterparts.
Experimental cognitive psychologists carry out research on reasoning behaviour. Such research may focus, for example, on how people perform on tests of reasoning such as intelligence or IQ tests, or on how well people 's reasoning matches ideals set by logic (see, for example, the Wason test). Experiments examine how people make inferences from conditionals e.g., If A then B and how they make inferences about alternatives, e.g., A or else B. They test whether people can make valid deductions about spatial and temporal relations, e.g., A is to the left of B, or A happens after B, and about quantified assertions, e.g., All the A are B. Experiments investigate how people make inferences about factual situations, hypothetical possibilities, probabilities, and counterfactual situations.
Developmental psychologists investigate the development of reasoning from birth to adulthood. Piaget 's theory of cognitive development was the first complete theory of reasoning development. Subsequently, several alternative theories were proposed, including the neo-Piagetian theories of cognitive development.
The biological functioning of the brain is studied by neurophysiologists and neuropsychologists. Research in this area includes research into the structure and function of normally functioning brains, and of damaged or otherwise unusual brains. In addition to carrying out research into reasoning, some psychologists, for example, clinical psychologists and psychotherapists work to alter people 's reasoning habits when they are unhelpful.
In artificial intelligence and computer science, scientists study and use automated reasoning for diverse applications, including automated theorem proving, the formal semantics of programming languages, and formal specification in software engineering.
Meta - reasoning is reasoning about reasoning. In computer science, a system performs meta - reasoning when it is reasoning about its own operation. This requires a programming language capable of reflection, the ability to observe and modify its own structure and behaviour.
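A small Python sketch can illustrate the kind of reflection described above; the class and method names are invented for the example. The program first inspects its own structure at runtime and then modifies that structure by attaching a new method.

```python
import inspect

class Reasoner:
    def deduce(self, fact):
        return f"deduced from {fact}"

    def report_own_structure(self):
        # The object observes its own structure: list its callable methods by name.
        return [name for name, _ in inspect.getmembers(self, predicate=inspect.ismethod)]

agent = Reasoner()
print(agent.report_own_structure())  # ['deduce', 'report_own_structure']

# The program modifies its own behaviour: add a method to the class at runtime.
def reconsider(self, fact):
    return f"re-examining {fact}"

Reasoner.reconsider = reconsider
print(agent.reconsider("an earlier conclusion"))  # the new method is available on existing instances
```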
A species could benefit greatly from better abilities to reason about, predict and understand the world. French social and cognitive scientists Dan Sperber and Hugo Mercier argue that there could have been other forces driving the evolution of reason. They point out that reasoning is very difficult for humans to do effectively, and that it is hard for individuals to doubt their own beliefs (confirmation bias). Reasoning is most effective when it is done as a collective - as demonstrated by the success of projects like science. They suggest that there are not just individual, but group selection pressures at play. Any group that managed to find ways of reasoning effectively would reap benefits for all its members, increasing their fitness. This could also help explain why humans, according to Sperber, are not optimized to reason effectively alone. Their argumentative theory of reasoning claims that reason may have more to do with winning arguments than with the search for the truth.
|
when did the us adopt the electoral college | Electoral College (United states) - wikipedia
The United States Electoral College is the mechanism established by the United States Constitution for the indirect election of the President of the United States and Vice President of the United States. Citizens of the United States vote in each state and the District of Columbia at a general election to choose a slate of "electors '' pledged to vote for a particular party 's candidate.
The Twelfth Amendment requires each elector to cast one vote for president and another vote for vice president. In each state and the District of Columbia, electors are chosen every four years on the Tuesday after the first Monday in November, and then meet to cast ballots on the first Monday after the second Wednesday in December. The candidates who receive a majority of electoral votes among the states are elected President and Vice President of the United States when the Electoral College vote is certified by Congress in January.
Each state chooses electors, totaling in number to that state 's combined total of senators and representatives. There are a total of 538 electors, corresponding to the 435 representatives and 100 senators, plus the three electors for the District of Columbia as provided by the Twenty - third Amendment. The Constitution bars any federal official, elected or appointed, from being an elector. The Office of the Federal Register is charged with administering the Electoral College. Since the mid-19th century when all electors have been popularly chosen, the Electoral College has elected the candidate who received the most popular votes nationwide, except in four elections: 1876, 1888, 2000, and 2016. In 1824, there were six states in which electors were legislatively appointed, rather than popularly elected, so the true national popular vote is uncertain; the electors failed to select a winning candidate, so the matter was decided by the House of Representatives.
All states except California (before 1913), Maine, and Nebraska have chosen electors on a "winner - take - all '' basis since the 1880s. Under the winner - take - all system, the state 's electors are awarded to the candidate with the most votes in that state, thus maximizing the state 's influence in the national election. Maine and Nebraska use the "congressional district method '', selecting one elector within each congressional district by popular vote and awarding two electors by a statewide popular vote. Although no elector is required by federal law to honor their pledge, there have been very few occasions when an elector voted contrary to a pledge and never once has it impacted the final outcome of a national election.
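The winner - take - all rule can be sketched in a few lines of Python. This is a simplified illustration with invented state names, vote totals, and elector counts, and it ignores the Maine and Nebraska congressional district method described above: every elector in a state is awarded to whichever candidate receives the most votes statewide.

```python
def allocate_winner_take_all(state_results, electors_by_state):
    """Award every elector in each state to the candidate with the most votes there."""
    totals = {}
    for state, votes in state_results.items():
        winner = max(votes, key=votes.get)  # statewide plurality winner
        totals[winner] = totals.get(winner, 0) + electors_by_state[state]
    return totals

# Hypothetical two-state example with made-up vote counts.
state_results = {
    "State A": {"Candidate X": 2_100_000, "Candidate Y": 1_900_000},
    "State B": {"Candidate X": 800_000, "Candidate Y": 1_200_000},
}
electors_by_state = {"State A": 29, "State B": 10}

print(allocate_winner_take_all(state_results, electors_by_state))
# {'Candidate X': 29, 'Candidate Y': 10}
```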
If no candidate for president receives a majority of electoral votes for president, the Twelfth Amendment provides that the House of Representatives will select the president, with each of the fifty state delegations casting one vote. If no candidate for vice president receives a majority of electoral votes for vice president, then the Senate will select the vice president, with each of the 100 senators having one vote.
The Constitutional Convention in 1787 used the Virginia Plan as the basis for discussions, as the Virginia delegation had proposed it first. The Virginia Plan called for the Congress to elect the president. Delegates from a majority of states agreed to this mode of election. However, a committee formed to work out various details, including the mode of election of the president, recommended instead that the election be by a group of people apportioned among the states in the same numbers as their representatives in Congress (the formula for which had been resolved in lengthy debates resulting in the Connecticut Compromise and Three - Fifths Compromise), but chosen by each state "in such manner as its Legislature may direct. '' Committee member Gouverneur Morris explained the reasons for the change; among others, there were fears of "intrigue '' if the president were chosen by a small group of men who met together regularly, as well as concerns for the independence of the president if he were elected by the Congress. However, once the Electoral College had been decided on, several delegates (Mason, Butler, Morris, Wilson, and Madison) openly recognized its ability to protect the election process from cabal, corruption, intrigue, and faction. Some delegates, including James Wilson and James Madison, preferred popular election of the executive. Madison acknowledged that while a popular vote would be ideal, it would be difficult to get consensus on the proposal given the prevalence of slavery in the South:
There was one difficulty however of a serious nature attending an immediate choice by the people. The right of suffrage was much more diffusive in the Northern than the Southern States; and the latter could have no influence in the election on the score of Negroes. The substitution of electors obviated this difficulty and seemed on the whole to be liable to the fewest objections.
The Convention approved the Committee 's Electoral College proposal, with minor modifications, on September 6, 1787. Delegates from states with smaller populations or limited land area, such as Connecticut, New Jersey and Maryland, generally favored the Electoral College because it gave some consideration to the states as such. Under the compromise providing for a runoff among the top five candidates, the small states expected that the House of Representatives, with each state delegation casting one vote, would decide most elections.
In The Federalist Papers, James Madison explained his views on the selection of the president and the Constitution. In Federalist No. 39, Madison argued the Constitution was designed to be a mixture of state - based and population - based government. Congress would have two houses: the state - based Senate and the population - based House of Representatives. Meanwhile, the president would be elected by a mixture of the two modes.
Alexander Hamilton in Federalist No. 68 laid out what he believed were the key advantages of the Electoral College. The electors would come directly from the people, and from them alone, for that purpose only and for that time only. This avoided a party - run legislature, or a permanent body that could be influenced by foreign interests before each election. Hamilton explained that the election was to take place among all the states, so no corruption in any state could taint "the great body of the people '' in their selection. The choice was to be made by a majority of the Electoral College, as majority rule is critical to the principles of republican government. Hamilton argued that electors meeting in the state capitals could have information unavailable to the general public. Hamilton also argued that since no federal officeholder could be an elector, none of the electors would be beholden to any presidential candidate.
Another consideration was that the decision would be made without "tumult and disorder '', as it would be a broad - based one made simultaneously in various locales where the decision - makers could deliberate reasonably, not in one place where decision - makers could be threatened or intimidated. If the Electoral College did not achieve a decisive majority, then the House of Representatives was to choose the president from among the top five candidates, ensuring that the officer selected to administer the laws would have both ability and good character. Hamilton was also concerned that somebody unqualified, but with a talent for "low intrigue, and the little arts of popularity '', might attain high office.
Additionally, in the Federalist No. 10, James Madison argued against "an interested and overbearing majority '' and the "mischiefs of faction '' in an electoral system. He defined a faction as "a number of citizens whether amounting to a majority or minority of the whole, who are united and actuated by some common impulse of passion, or of interest, adverse to the rights of other citizens, or to the permanent and aggregate interests of the community. '' What was then called republican government (i.e., federalism, as opposed to direct democracy), with its varied distribution of voter rights and powers, would countervail against factions. Madison further postulated in the Federalist No. 10 that the greater the population and expanse of the Republic, the more difficulty factions would face in organizing due to such issues as sectionalism.
Although the United States Constitution refers to "Electors '' and "electors '', neither the phrase "Electoral College '' nor any other name is used to describe the electors collectively. It was not until the early 19th century that the name "Electoral College '' came into general usage as the collective designation for the electors selected to cast votes for president and vice president. The phrase was first written into federal law in 1845, and today the term appears in 3 U.S.C. § 4, in the section heading and in the text as "college of electors. ''
Article II, Section 1, Clause 2 of the Constitution states:
Each State shall appoint, in such Manner as the Legislature thereof may direct, a Number of Electors, equal to the whole Number of Senators and Representatives to which the State may be entitled in the Congress: but no Senator or Representative, or Person holding an Office of Trust or Profit under the United States, shall be appointed an Elector.
Article II, Section 1, Clause 4 of the Constitution states:
The Congress may determine the Time of chusing (sic) the Electors, and the Day on which they shall give their Votes; which Day shall be the same throughout the United States.
Article II, Section 1, Clause 3 of the Constitution provided the original plan by which the electors chose the president and vice president. Under the original plan, the candidate who received a majority of votes from the electors would become president; the candidate receiving the second most votes would become vice president.
The original plan of the Electoral College was based upon several assumptions and anticipations of the Framers of the Constitution.
According to the text of Article II, however, each state government was free to have its own plan for selecting its electors, and the Constitution does not explicitly require states to popularly elect their electors. Several different methods for selecting electors are described at length below.
The emergence of political parties and nationally - coordinated election campaigns soon complicated matters in the elections of 1796 and 1800. In 1796, Federalist Party candidate John Adams won the presidential election. Finishing in second place was Democratic - Republican Party candidate Thomas Jefferson, the Federalists ' opponent, who became the vice president. This resulted in the President and Vice President not being of the same political party.
In 1800, the Democratic - Republican Party again nominated Jefferson for president, and also nominated Aaron Burr for vice president. After the election, Jefferson and Burr both obtained a majority of electoral votes, but tied one another with 73 votes each. Since ballots did not distinguish between votes for president and votes for vice president, every ballot cast for Burr technically counted as a vote for him to become president, despite Jefferson clearly being his party 's first choice. Lacking a clear winner by constitutional standards, the election had to be decided by the House of Representatives pursuant to the Constitution 's contingency election provision.
Having already lost the presidential contest, Federalist Party representatives in the lame duck House session seized upon the opportunity to embarrass their opposition and attempted to elect Burr over Jefferson. The House deadlocked for 35 ballots as neither candidate received the necessary majority vote of the state delegations in the House (the votes of nine states were needed for an election). Jefferson achieved electoral victory on the 36th ballot, but only after Federalist Party leader Alexander Hamilton -- who disfavored Burr 's personal character more than Jefferson 's policies -- had made known his preference for Jefferson.
Responding to the problems from those elections, the Congress proposed the Twelfth Amendment in 1803 -- prescribing electors cast separate ballots for president and vice president -- to replace the system outlined in Article II, Section 1, Clause 3. By June 1804, the states had ratified the amendment in time for the 1804 election.
Alexander Hamilton described the framers ' view of how electors would be chosen, "A small number of persons, selected by their fellow - citizens from the general mass, will be most likely to possess the information and discernment requisite to such complicated (tasks). '' The founders assumed this would take place district by district. That plan was carried out by many states until the 1880s. For example, in Massachusetts in 1820, the rule stated "the people shall vote by ballot, on which shall be designated who is voted for as an Elector for the district. '' In other words, the people did not place the name of a candidate for a president on the ballot, instead they voted for their local elector, whom they trusted later to cast a responsible vote for president.
Some states reasoned that the favorite presidential candidate among the people in their state would have a much better chance if all of the electors selected by their state were sure to vote the same way -- a "general ticket '' of electors pledged to a party candidate. So the slate of electors chosen by the state were no longer free agents, independent thinkers, or deliberative representatives. They became "voluntary party lackeys and intellectual non-entities. '' Once one state took that strategy, the others felt compelled to follow suit in order to compete for the strongest influence on the election.
When James Madison and Hamilton, two of the most important architects of the Electoral College, saw this strategy being taken by some states, they protested strongly. Madison and Hamilton both made it clear this approach violated the spirit of the Constitution. According to Hamilton, the selection of the president should be "made by men most capable of analyzing the qualities adapted to the station (of president). '' According to Hamilton, the electors were to analyze the list of potential presidents and select the best one. He also used the term "deliberate. '' Hamilton considered a pre-pledged elector to violate the spirit of Article II of the Constitution insofar as such electors could make no "analysis '' or "deliberate '' concerning the candidates. Madison agreed entirely, saying that when the Constitution was written, all of its authors assumed individual electors would be elected in their districts and it was inconceivable a "general ticket '' of electors dictated by a state would supplant the concept. Madison wrote to George Hay,
The district mode was mostly, if not exclusively in view when the Constitution was framed and adopted; & was exchanged for the general ticket (many years later).
The founders assumed that electors would be elected by the citizens of their district and that each elector was to be free to analyze and deliberate regarding who was best suited to be president.
Madison and Hamilton were so upset by what they saw as a distortion of the framers ' original intent that they advocated for a constitutional amendment to prevent anything other than the district plan: "the election of Presidential Electors by districts, is an amendment very proper to be brought forward '', Madison told George Hay in 1823. Hamilton went further. He actually drafted an amendment to the Constitution mandating the district plan for selecting electors.
In 1789, at - large popular vote, the winner - take - all method, began with Pennsylvania and Maryland; Virginia and Delaware used a district plan by popular vote, and in the five other states participating in the election (Connecticut, Georgia, New Hampshire, New Jersey and South Carolina), state legislatures chose. By 1800, Virginia and Rhode Island voted at - large; Kentucky, Maryland and North Carolina voted popularly by district; and eleven states voted by state legislature. Beginning in 1804 there was a definite trend towards the winner - take - all system for statewide popular vote.
States using their state legislature to choose presidential electors have included fourteen states from all regions of the country. By 1832, only South Carolina used the state legislature, and it abandoned the method after 1860. States using popular vote by district have included ten states from all regions of the country. By 1832 there was only Maryland, and from 1836 district plans fell out of use until the 20th century, though Michigan used a district plan for 1892 only.
Since 1836, statewide, winner - take - all popular voting for electors has been the almost universal practice. As of 2016, Maine (from 1972) and Nebraska (from 1996) use the district plan, with two at - large electors assigned to support the winner of the statewide popular vote.
Section 2 of the Fourteenth Amendment allows for a state 's representation in the House of Representatives to be reduced if a state unconstitutionally denies people the right to vote. The reduction is in keeping with the proportion of people denied a vote. This amendment refers to "the right to vote at any election for the choice of electors for President and Vice President of the United States, '' among other elections, the only place in the Constitution mentioning electors being selected by popular vote.
On May 8, 1866, during a debate on the Fourteenth Amendment, Thaddeus Stevens, the leader of the Republicans in the House of Representatives, delivered a speech on the amendment 's intent. Regarding Section 2, he said:
The second section I consider the most important in the article. It fixes the basis of representation in Congress. If any State shall exclude any of her adult male citizens from the elective franchise, or abridge that right, she shall forfeit her right to representation in the same proportion. The effect of this provision will be either to compel the States to grant universal suffrage or so shear them of their power as to keep them forever in a hopeless minority in the national Government, both legislative and executive.
Federal law (2 U.S.C. § 6) implements Section 2 's mandate.
Even though the aggregate national popular vote is calculated by state officials, media organizations, and the Federal Election Commission, the people only indirectly elect the president, as the national popular vote is not the basis for electing the president or vice president. The president and vice president of the United States are elected by the Electoral College, which consists of 538 presidential electors from the fifty states and Washington, D.C. Presidential electors are selected on a state - by - state basis, as determined by the laws of each state. Since the election of 1824, most states have appointed their electors on a winner - take - all basis, based on the statewide popular vote on Election Day. Maine and Nebraska are the only two current exceptions, as both states use the congressional district method. Although ballots list the names of the presidential and vice presidential candidates (who run on a ticket), voters actually choose electors when they vote for president and vice president. These presidential electors in turn cast electoral votes for those two offices. Electors usually pledge to vote for their party 's nominee, but some "faithless electors '' have voted for other candidates or refrained from voting.
A candidate must receive an absolute majority of electoral votes (currently 270) to win the presidency or the vice presidency. If no candidate receives a majority in the election for president or vice president, the election is determined via a contingency procedure established by the Twelfth Amendment. In such a situation, the House chooses one of the top three presidential electoral vote - winners as the president, while the Senate chooses one of the top two vice presidential electoral vote - winners as vice president.
A state 's number of electors equals its number of representatives plus two, one for each of the two senators the state has in the United States Congress. The number of representatives is based on the respective populations, determined every 10 years by the United States Census. Each representative represents on average 711,000 persons.
Under the Twenty - third Amendment, Washington, D.C., is allocated as many electors as it would have if it were a state, but no more electors than the least populous state. The least populous state (which is Wyoming according to the 2010 Census) has three electors; thus, D.C. can not have more than three electors. Even if D.C. were a state, its population would entitle it to only three electors; based on its population per electoral vote, D.C. has the second highest per - capita Electoral College representation, after Wyoming.
Currently, there are a total of 538 electors, corresponding to the 435 representatives and 100 senators, plus the three electors allocated to Washington, D.C. The six states with the most electors are California (55), Texas (38), New York (29), Florida (29), Illinois (20) and Pennsylvania (20). The seven smallest states by population -- Alaska, Delaware, Montana, North Dakota, South Dakota, Vermont, and Wyoming -- have three electors each. This is because each of these states is entitled to one representative and two senators.
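As a rough illustration of the allocation arithmetic described above, the following minimal Python sketch computes the elector total for a state as its House delegation plus two, and confirms that 435 representatives, 100 senators, and the three electors for Washington, D.C. sum to 538. The two sample House seat counts are assumptions included only for illustration.

```python
# Minimal sketch of the elector arithmetic: electors = House seats + 2 Senate seats.
# The sample House seat counts below are illustrative assumptions, not data from this article.
house_seats = {
    "California": 53,  # assumed largest delegation (yields 55 electors)
    "Wyoming": 1,      # assumed smallest delegation (yields 3 electors)
}

def electors_for_state(representatives: int) -> int:
    """One elector per representative, plus two for the state's senators."""
    return representatives + 2

for state, seats in house_seats.items():
    print(state, electors_for_state(seats))

# Nationally: 435 representatives + 100 senators + 3 electors for D.C. = 538,
# so an absolute majority of the Electoral College is 538 // 2 + 1 = 270.
total_electors = 435 + 100 + 3
print(total_electors, total_electors // 2 + 1)  # 538 270
```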
Candidates for elector are nominated by state chapters of nationally oriented political parties in the months prior to Election Day. In some states, the electors are nominated by voters in primaries, the same way other presidential candidates are nominated. In some states, such as Oklahoma, Virginia and North Carolina, electors are nominated in party conventions. In Pennsylvania, the campaign committee of each candidate names their respective electoral college candidates (an attempt to discourage faithless electors). Varying by state, electors may also be elected by state legislatures, or appointed by the parties themselves.
Article II, Section 1, Clause 2 of the Constitution requires each state legislature to determine how electors for the state are to be chosen, but it disqualifies any person holding a federal office, either elected or appointed, from being an elector. Under Section 3 of the Fourteenth Amendment, any person who has sworn an oath to support the United States Constitution in order to hold either a state or federal office, and later rebelled against the United States directly or by giving assistance to those doing so, is disqualified from being an elector. However, the Congress may remove this disqualification by a two - thirds vote in each House.
Since the Civil War, all states have chosen presidential electors by popular vote. This process has been normalized to the point that the names of the electors appear on the ballot in only eight states: Rhode Island, Tennessee, Louisiana, Arizona, Idaho, Oklahoma, North Dakota and South Dakota.
The Tuesday following the first Monday in November has been fixed as the day for holding federal elections, called the Election Day. In 48 states and Washington, D.C., the "winner - takes - all method '' is used (electors selected as a single bloc). Maine and Nebraska use the "congressional district method '', selecting one elector within each congressional district by popular vote and selecting the remaining two electors by a statewide popular vote. This method has been used in Maine since 1972 and in Nebraska since 1996.
The current system of choosing electors is called the "short ballot ''. In most states, voters choose a slate of electors, and only a few states list on the ballot the names of proposed electors. In some states, if a voter wants to write in a candidate for president, the voter is also required to write in the names of proposed electors.
After the election, each state prepares seven Certificates of Ascertainment, each listing the candidates for president and vice president, their pledged electors, and the total votes each candidacy received. One certificate is sent, as soon after Election Day as practicable, to the National Archivist in Washington D.C. The Certificates of Ascertainment are mandated to carry the State Seal, and the signature of the Governor (in the case of the District of Columbia, the Certificate is signed by the Mayor of the District of Columbia.)
The Electoral College never meets as one body. Electors meet in their respective state capitals (electors for the District of Columbia meet within the District) on the Monday after the second Wednesday in December, at which time they cast their electoral votes on separate ballots for president and vice president.
Although procedures in each state vary slightly, the electors generally follow a similar series of steps, and the Congress has constitutional authority to regulate the procedures the states follow. The meeting is opened by the election certification official -- often that state 's secretary of state or equivalent -- who reads the Certificate of Ascertainment. This document sets forth who was chosen to cast the electoral votes. The attendance of the electors is taken and any vacancies are noted in writing. The next step is the selection of a president or chairman of the meeting, sometimes also with a vice chairman. The electors sometimes choose a secretary, often not himself an elector, to take the minutes of the meeting. In many states, political officials give short speeches at this point in the proceedings.
When the time for balloting arrives, the electors choose one or two people to act as tellers. Some states provide for the placing in nomination of a candidate to receive the electoral votes (the candidate for president of the political party of the electors). Each elector submits a written ballot with the name of a candidate for president. In New Jersey, the electors cast ballots by checking the name of the candidate on a pre-printed card; in North Carolina, the electors write the name of the candidate on a blank card. The tellers count the ballots and announce the result. The next step is the casting of the vote for vice president, which follows a similar pattern.
Each state 's electors must complete six Certificates of Vote. Each Certificate of Vote must be signed by all of the electors and a Certificate of Ascertainment must be attached to each of the Certificates of Vote. Each Certificate of Vote must include the names of those who received an electoral vote for either the office of president or of vice president. The electors certify the Certificates of Vote and copies of the Certificates are then sent in the following fashion:
A staff member of the President of the Senate collects the Certificates of Vote as they arrive and prepares them for the joint session of the Congress. The Certificates are arranged -- unopened -- in alphabetical order and placed in two special mahogany boxes. Alabama through Missouri (including the District of Columbia) are placed in one box and Montana through Wyoming are placed in the other box. Before 1950, the Secretary of State 's office oversaw the certifications, but since then the Office of Federal Register in the Archivist 's office reviews them to make sure the documents sent to the archive and Congress match and that all formalities have been followed, sometimes requiring states to correct the documents.
Faithless electors are those who either cast electoral votes for someone other than the candidate of the party that they pledged to vote for or who abstain. Twenty - nine states plus the District of Columbia have passed laws to punish faithless electors, although none have ever been enforced. Many constitutional scholars claim that state restrictions would be struck down if challenged based on Article II and the Twelfth Amendment. In 1952, the constitutionality of state pledge laws was brought before the Supreme Court in Ray v. Blair, 343 U.S. 214 (1952). The Court ruled in favor of state laws requiring electors to pledge to vote for the winning candidate, as well as removing electors who refuse to pledge. As stated in the ruling, electors are acting as a functionary of the state, not the federal government. Therefore, states have the right to govern the process of choosing electors. The constitutionality of state laws punishing electors for actually casting a faithless vote, rather than refusing to pledge, has never been decided by the Supreme Court. However, in his dissent in Ray v. Blair, Justice Robert Jackson wrote: "no one faithful to our history can deny that the plan originally contemplated what is implicit in its text -- that electors would be free agents, to exercise an independent and nonpartisan judgment as to the men best qualified for the Nation 's highest offices. ''
While many laws punish a faithless elector only after the fact, states like Michigan also specify a faithless elector 's vote be voided.
As electoral slates are typically chosen by the political party or the party 's presidential nominee, electors usually have high loyalty to the party and its candidate: a faithless elector runs a greater risk of party censure than of criminal charges.
In 2000, elector Barbara Lett - Simmons of Washington, D.C., chose not to vote, rather than voting for Al Gore as she had pledged to do. In 2016, seven electors voted contrary to their pledges. Faithless electors have never changed the outcome of any presidential election.
The Twelfth Amendment mandates Congress assemble in joint session to count the electoral votes and declare the winners of the election. The session is ordinarily required to take place on January 6 in the calendar year immediately following the meetings of the presidential electors. Since the Twentieth Amendment, the newly elected Congress declares the winner of the election; all elections before 1936 were determined by the outgoing House.
The meeting is held at 1: 00 pm in the Chamber of the U.S. House of Representatives. The sitting vice president is expected to preside, but in several cases the President pro tempore of the Senate has chaired the proceedings. The vice president and the Speaker of the House sit at the podium, with the vice president in the seat of the Speaker of the House. Senate pages bring in the two mahogany boxes containing each state 's certified vote and place them on tables in front of the senators and representatives. Each house appoints two tellers to count the vote (normally one member of each political party). Relevant portions of the Certificate of Vote are read for each state, in alphabetical order.
Members of Congress can object to any state 's vote count, provided objection is presented in writing and is signed by at least one member of each house of Congress. An objection supported by at least one senator and one representative will be followed by the suspension of the joint session and by separate debates and votes in each House of Congress; after both Houses deliberate on the objection, the joint session is resumed. A state 's certificate of vote can be rejected only if both Houses of Congress vote to accept the objection. In that case, the votes from the State in question are simply ignored. The votes of Arkansas and Louisiana were rejected in the presidential election of 1872.
Objections to the electoral vote count are rarely raised, although one did occur during the vote count in 2001 after the close 2000 presidential election between Governor George W. Bush of Texas and the Vice President of the United States, Al Gore. Gore, who as vice president was required to preside over his own Electoral College defeat (by five electoral votes), denied the objections, all of which had been raised by a handful of representatives and would have favored his candidacy, because no senator would agree to jointly object. Objections were again raised in the vote count of the 2004 election, and on that occasion the objection was presented by one representative and one senator. Although the joint session was suspended, the objections were quickly disposed of and rejected by both Houses of Congress. If there are no objections or all objections are overruled, the presiding officer simply includes a state 's votes, as declared in the certificate of vote, in the official tally.
After the certificates from all states are read and the respective votes are counted, the presiding officer simply announces the final result of the vote and, provided the required absolute majority of votes was achieved, declares the names of the persons elected president and vice president. This announcement concludes the joint session and formalizes the recognition of the president - elect and of the vice president - elect. The senators then depart from the House Chamber. The final tally is printed in the Senate and House journals.
The Twelfth Amendment requires the House of Representatives to go into session immediately to vote for a president if no candidate for president receives a majority of the electoral votes (since 1964, 270 of the 538 electoral votes).
In this event, the House of Representatives is limited to choosing from among the three candidates who received the most electoral votes for president. Each state delegation votes en bloc -- each delegation having a single vote; the District of Columbia does not receive a vote. A candidate must receive an absolute majority of state delegation votes (i.e., at present, a minimum of 26 votes) in order for that candidate to become the President - elect. Additionally, delegations from at least two - thirds of all the states must be present for voting to take place. The House continues balloting until it elects a president.
The House of Representatives has chosen the president only twice: in 1801 under Article II, Section 1, Clause 3; and in 1825 under the Twelfth Amendment.
If no candidate for vice president receives an absolute majority of electoral votes, then the Senate must go into session to elect a vice president. The Senate is limited to choosing from the two candidates who received the most electoral votes for vice president. Normally this would mean two candidates, one less than the number of candidates available in the House vote. However, the text is written in such a way that all candidates with the most and second most electoral votes are eligible for the Senate election -- this number could theoretically be larger than two. The Senate votes in the normal manner in this case (i.e., ballots are individually cast by each senator, not by state delegations). However, two - thirds of the senators must be present for voting to take place.
Additionally, the Twelfth Amendment states a "majority of the whole number '' of senators (currently 51 of 100) is necessary for election. Further, the language requiring an absolute majority of Senate votes precludes the sitting vice president from breaking any tie which might occur, although some academics and journalists have speculated to the contrary.
The only time the Senate chose the vice president was in 1837. In that instance, the Senate adopted an alphabetical roll call and voting aloud. The rules further stated, "(I)f a majority of the number of senators shall vote for either the said Richard M. Johnson or Francis Granger, he shall be declared by the presiding officer of the Senate constitutionally elected Vice President of the United States ''; the Senate chose Johnson.
Section 3 of the Twentieth Amendment specifies if the House of Representatives has not chosen a president - elect in time for the inauguration (noon EST on January 20), then the vice president - elect becomes acting president until the House selects a president. Section 3 also specifies Congress may statutorily provide for who will be acting president if there is neither a president - elect nor a vice president - elect in time for the inauguration. Under the Presidential Succession Act of 1947, the Speaker of the House would become acting president until either the House selects a president or the Senate selects a vice president. Neither of these situations has ever occurred.
Before the advent of the short ballot in the early 20th century, as described above, the most common means of electing the presidential electors was through the general ticket. The general ticket is quite similar to the current system and is often confused with it. In the general ticket, voters cast ballots for individuals running for presidential elector (while in the short ballot, voters cast ballots for an entire slate of electors). In the general ticket, the state canvass would report the number of votes cast for each candidate for elector, a complicated process in states like New York with multiple positions to fill. Both the general ticket and the short ballot are often considered at - large or winner - takes - all voting. The short ballot was adopted by the various states at different times; it was adopted for use by North Carolina and Ohio in 1932. Alabama was still using the general ticket as late as 1960 and was one of the last states to switch to the short ballot.
The question of the extent to which state constitutions may constrain the legislature 's choice of a method of choosing electors has been touched on in two U.S. Supreme Court cases. In McPherson v. Blacker, 146 U.S. 1 (1892), the Court cited Article II, Section 1, Clause 2 which states that a state 's electors are selected "in such manner as the legislature thereof may direct '' and wrote these words "operat(e) as a limitation upon the state in respect of any attempt to circumscribe the legislative power ''. In Bush v. Palm Beach County Canvassing Board, 531 U.S. 70 (2000), a Florida Supreme Court decision was vacated (not reversed) based on McPherson. On the other hand, three dissenting justices in Bush v. Gore, 531 U.S. 98 (2000), wrote: "(N)othing in Article II of the Federal Constitution frees the state legislature from the constraints in the State Constitution that created it. ''
In the earliest presidential elections, state legislative choice was the most common method of choosing electors. A majority of the state legislatures selected presidential electors in both 1792 (9 of 15) and 1800 (10 of 16), and half of them did so in 1812. Even in the 1824 election, a quarter of state legislatures (6 of 24) chose electors. In that election, Andrew Jackson lost in spite of having pluralities of both the popular and electoral votes, with the outcome being decided by the six state legislatures choosing the electors. Some state legislatures simply chose electors, while other states used a hybrid method in which state legislatures chose from a group of electors elected by popular vote. By 1828, with the rise of Jacksonian democracy, only Delaware and South Carolina used legislative choice. Delaware ended its practice the following election (1832), while South Carolina continued using the method until it seceded from the Union in December 1860. South Carolina used the popular vote for the first time in the 1868 election.
Excluding South Carolina, legislative appointment was used in only four situations after 1832.
Legislative appointment was brandished as a possibility in the 2000 election. Had the recount continued, the Florida legislature was prepared to appoint the Republican slate of electors to avoid missing the federal safe - harbor deadline for choosing electors.
The Constitution gives each state legislature the power to decide how its state 's electors are chosen and it can be easier and cheaper for a state legislature to simply appoint a slate of electors than to create a legislative framework for holding elections to determine the electors. As noted above, the two situations in which legislative choice has been used since the Civil War have both been because there was not enough time or money to prepare for an election. However, appointment by state legislature can have negative consequences: bicameral legislatures can deadlock more easily than the electorate. This is precisely what happened to New York in 1789 when the legislature failed to appoint any electors.
Another method used early in U.S. history was to divide the state into electoral districts. By this method, voters in each district would cast their ballots for the electors they supported, and the winner in each district would become the elector. This was similar to how states are currently divided into congressional districts. The difference, however, was that every state always had two more electoral districts than congressional districts. As with congressional districts, moreover, this method is vulnerable to gerrymandering.
Under a proportional system, electors would be selected in proportion to the votes cast for their candidate or party, rather than being selected by the statewide plurality vote.
There are two versions of the congressional district method: one has been implemented in Maine and Nebraska; another has been proposed in Virginia. Under the implemented congressional district method, the electoral votes are distributed based on the popular vote winner within each of the states ' congressional districts; the statewide popular vote winner receives two additional electoral votes.
In 2013, a different version of the congressional district method was proposed in Virginia. This version would distribute Virginia 's electoral votes based on the popular vote winner within each of Virginia 's congressional districts; the two statewide electoral votes would be awarded based on which candidate won the most congressional districts, rather than on who won Virginia 's statewide popular vote.
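To make the two variants concrete, here is a minimal Python sketch using entirely hypothetical district results: the first function follows the implemented Maine/Nebraska rule (one elector per district winner, two at - large electors to the statewide popular vote winner), and the second follows the 2013 Virginia proposal described above (the two at - large electors go to whichever candidate carried the most districts).

```python
from collections import Counter

def allocate_maine_nebraska(district_winners, statewide_winner):
    """One electoral vote per congressional district winner,
    plus two at-large electors for the statewide popular vote winner."""
    votes = Counter(district_winners)
    votes[statewide_winner] += 2
    return dict(votes)

def allocate_virginia_proposal(district_winners):
    """2013 Virginia proposal: the two at-large electors go to the candidate
    who carried the most congressional districts, not the statewide vote."""
    votes = Counter(district_winners)
    winner_of_most_districts = votes.most_common(1)[0][0]
    votes[winner_of_most_districts] += 2
    return dict(votes)

# Hypothetical three-district state (a Nebraska-sized delegation):
district_winners = ["A", "A", "B"]   # candidate carrying each district
statewide_winner = "B"               # assume B narrowly wins the statewide vote
print(allocate_maine_nebraska(district_winners, statewide_winner))   # {'A': 2, 'B': 3}
print(allocate_virginia_proposal(district_winners))                  # {'A': 4, 'B': 1}
```

Under these assumed results, the same five electoral votes split 2 to 3 under one rule and 4 to 1 under the other, which is precisely the distinction drawn above.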
The congressional district method can be implemented more easily than other alternatives to the winner - takes - all method, since major parties resist the proportional method for the relative boost it would give third parties. State legislation is sufficient to adopt it. Advocates of the congressional district method believe the system would encourage higher voter turnout and give presidential candidates an incentive to broaden their campaigns into non-competitive states. Winner - take - all systems ignore thousands of popular votes; in Democratic California there are Republican districts, and in Republican Texas there are Democratic districts. Because candidates have an incentive to campaign in competitive districts, a district plan would give them an incentive to actively campaign in over thirty states rather than in only seven "swing '' states. Opponents of the system, however, argue that candidates might spend time only in certain battleground districts instead of the entire state, and that gerrymandering could become exacerbated as political parties attempt to draw as many safe districts as they can.
Unlike a simple congressional district comparison, the district plan with its statewide popular vote bonus would, in the 2008 election, have given Obama 56 % of the Electoral College versus the 68 % he did win; it "would have more closely approximated the percentage of the popular vote won (53 %) ''.
Of the 43 states whose electoral votes could be affected by the congressional district method, only Maine and Nebraska apply it today. Maine has four electoral votes, based on its two representatives and two senators. Nebraska has two senators and three representatives, giving it five electoral votes. Maine began using the congressional district method in the election of 1972. Nebraska has used the congressional district method since the election of 1992 (Schwartz, Maralee, "Nebraska 's Vote Change '', The Washington Post, April 7, 1991). Michigan used the system for the 1892 presidential election, and several other states used various forms of the district plan before 1840: Virginia, Delaware, Maryland, Kentucky, North Carolina, Massachusetts, Illinois, Maine, Missouri, and New York.
The congressional district method allows a state the chance to split its electoral votes between multiple candidates. Prior to 2008, neither Maine nor Nebraska had ever split their electoral votes. Nebraska split its electoral votes for the first time in 2008, giving John McCain its statewide electors and those of two congressional districts, while Barack Obama won the electoral vote of Nebraska 's 2nd congressional district. Following the 2008 split, some Nebraska Republicans made efforts to discard the congressional district method and return to the winner - takes - all system. In January 2010, a bill was introduced in the Nebraska legislature to revert to a winner - take - all system; the bill died in committee in March 2011. Republicans had also passed bills in 1995 and 1997 to eliminate the congressional district method in Nebraska, but those bills were vetoed by Democratic Governor Ben Nelson.
In 2010, Republicans in Pennsylvania, who controlled both houses of the legislature as well as the governorship, put forward a plan to change the state 's winner - takes - all system to a congressional district method system. Pennsylvania had voted for the Democratic candidate in the five previous presidential elections, so some saw this as an attempt to take away Democratic electoral votes. Although Democrat Barack Obama won Pennsylvania in 2008, he won only 55 % of Pennsylvania 's popular vote. The district plan would have awarded him 11 of its 21 electoral votes, a 52.4 % share closer to his share of the popular vote, even allowing for Republican gerrymandering of the districts. The plan later lost support. Other Republicans, including Michigan state representative Pete Lund, RNC Chairman Reince Priebus, and Wisconsin Governor Scott Walker, have floated similar ideas.
Arguments between proponents and opponents of the current electoral system include four separate but related topics: indirect election, disproportionate voting power by some states, the winner - takes - all distribution method (as chosen by 48 of the 50 states), and federalism. Arguments against the Electoral College in common discussion mostly focus on the allocation of the voting power among the states. Gary Bugh 's research of congressional debates over proposed constitutional amendments to abolish the Electoral College reveals reform opponents have often appealed to a traditional version of representation, whereas reform advocates have tended to reference a more democratic view.
The elections of 1876, 1888, 2000, and 2016 produced an Electoral College winner who did not receive at least a plurality of the nationwide popular vote. In 1824, there were six states in which electors were legislatively appointed, rather than popularly elected, so it is uncertain what the national popular vote would have been if all presidential electors had been popularly elected. When no candidate received a majority of electoral votes in 1824, the election was decided by the House of Representatives and so could be considered distinct from the latter four elections in which all of the states had popular selection of electors. The true national popular vote was also uncertain in the 1960 election, and the plurality for the winner depends on how votes for Alabama electors are allocated.
Opponents of the Electoral College claim such outcomes do not logically follow the normative concept of how a democratic system should function. One view is that the Electoral College violates the principle of political equality, since presidential elections are not decided by the one - person one - vote principle. Outcomes of this sort are attributable to the federal nature of the system. Supporters of the Electoral College argue that candidates must build a popular base that is geographically broader and more diverse in voter interests than either a simple national plurality or majority. Nor is this feature attributable to having intermediate elections of presidents; it is caused instead by the winner - takes - all method of allocating each state 's slate of electors. Allocation of electors in proportion to the state 's popular vote could reduce this effect.
Elections where the winning candidate loses the national popular vote typically result when the winner builds the requisite configuration of states (and thus captures their electoral votes) by small margins, while the losing candidate secures large voter margins in the remaining states. In that case, the very large margins secured by the losing candidate in the other states aggregate to a plurality of the ballots cast nationally. However, commentators question the legitimacy of this national popular vote, pointing out that the national popular vote observed under the Electoral College system does not reflect the popular vote that would be observed under a national popular vote system, as each electoral institution produces different incentives for, and strategy choices by, presidential campaigns. Because the national popular vote is irrelevant under the Electoral College system, it is generally presumed that candidates base their campaign strategies around the existence of the Electoral College; in any close race, candidates campaign to maximize electoral votes by focusing their get - out - the - vote efforts in crucially needed swing states rather than attempting to maximize national popular vote totals by using limited campaign resources to run up margins or close gaps in states considered "safe '' for themselves or their opponents, respectively. Conversely, the institutional structure of a national popular vote system would encourage candidates to pursue voter turnout wherever votes could be found, even in "safe '' states they are already expected to win and in "safe '' states they have no hope of winning.
Educational YouTuber CGP Grey, who has produced several short videos criticizing the Electoral College, has illustrated how it is technically possible to win the necessary 270 electoral votes while winning only 22 % of the overall popular vote, by winning the barest simple majority of the 40 smallest states and the District of Columbia. Though the current political geography of the United States makes such an election unlikely (it would require winning both reliably Democratic jurisdictions like Massachusetts and D.C. and reliably Republican states like Utah and Alaska), he argues that a system in which such a result is even remotely possible is "indefensible ''.
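The arithmetic behind a lower bound of that kind can be sketched as a greedy estimate: carry the jurisdictions that need the fewest ballots per electoral vote by the barest possible majority and receive no votes anywhere else. The Python sketch below is a heuristic under stated assumptions, not the calculation Grey himself used; the toy data and the threshold of seven electoral votes are placeholders chosen only so the example runs, and real electoral vote and turnout figures for each state would be needed to reproduce a figure such as 22 %.

```python
def minimal_winning_share(states, needed=270):
    """Greedy estimate of the smallest national popular vote share that can
    still yield `needed` electoral votes, assuming bare 50%-plus-one wins in
    the cheapest states (fewest ballots per electoral vote) and no votes
    elsewhere. `states` is a list of (electoral_votes, ballots_cast) pairs."""
    total_ballots = sum(ballots for _, ballots in states)
    ordered = sorted(states, key=lambda s: (s[1] // 2 + 1) / s[0])
    won_ev = won_ballots = 0
    for ev, ballots in ordered:
        if won_ev >= needed:
            break
        won_ev += ev
        won_ballots += ballots // 2 + 1  # barest possible majority in that state
    return won_ballots / total_ballots

# Placeholder three-state example with a toy threshold of 7 electoral votes:
toy = [(3, 100), (4, 200), (6, 300)]
print(f"{minimal_winning_share(toy, needed=7):.1%}")  # 33.7%
```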
The United States is the only country that elects a politically powerful president via an electoral college and the only one in which a candidate can become president without having obtained the highest number of votes in the sole or final round of popular voting.
According to this criticism, the Electoral College encourages political campaigners to focus on a few so - called "swing states '' while ignoring the rest of the country. Populous states in which pre-election poll results show no clear favorite are inundated with campaign visits, saturation television advertising, get - out - the - vote efforts by party organizers and debates, while "four out of five '' voters in the national election are "absolutely ignored '', according to one assessment. Since most states use a winner - takes - all arrangement in which the candidate with the most votes in that state receives all of the state 's electoral votes, there is a clear incentive to focus almost exclusively on only a few key undecided states; in recent elections, these states have included Pennsylvania, Ohio, and Florida in 2004 and 2008, and also Colorado in 2012. In contrast, states with large populations such as California, Texas, and New York, have in recent elections been considered "safe '' for a particular party -- Democratic for California and New York and Republican for Texas -- and therefore campaigns spend less time and money there. Many small states are also considered to be "safe '' for one of the two political parties and are also generally ignored by campaigners: of the 13 smallest states, six are reliably Democratic, six are reliably Republican, and only New Hampshire is considered as a swing state, according to critic George C. Edwards III. In the 2008 election, campaigns did not mount nationwide efforts but rather focused on select states.
Except in closely fought swing states, voter turnout has little effect on the outcome, due to entrenched political party domination in most states. The Electoral College decreases the advantage a political party or campaign might gain from encouraging voters to turn out, except in those swing states. If the presidential election were decided by a national popular vote, in contrast, campaigns and parties would have a strong incentive to work to increase turnout everywhere. Individuals would similarly have a stronger incentive to persuade their friends and neighbors to turn out to vote. The differences in turnout between swing states and non-swing states under the current electoral college system suggest that replacing the Electoral College with direct election by popular vote would likely increase turnout and participation significantly.
According to this criticism, the electoral college reduces elections to a mere count of electors for a particular state, and, as a result, it obscures any voting problems within a particular state. For example, if a particular state blocks some groups from voting, perhaps by voter suppression methods such as imposing reading tests, poll taxes, registration requirements, or legally disfranchising specific minority groups, then voting inside that state would be reduced, but as the state 's electoral count would be the same, disenfranchisement has no effect on the overall electoral tally. Critics contend that such disenfranchisement is partially obscured by the Electoral College. A related argument is the Electoral College may have a dampening effect on voter turnout: there is no incentive for states to reach out to more of its citizens to include them in elections because the state 's electoral count remains fixed in any event. According to this view, if elections were by popular vote, then states would be motivated to include more citizens in elections since the state would then have more political clout nationally. Critics contend the electoral college system insulates states from negative publicity as well as possible federal penalties for disenfranching subgroups of citizens.
Legal scholars Akhil Amar and Vikram Amar have argued the original Electoral College compromise was enacted partially because it enabled the southern states to disenfranchise their slave populations. It permitted southern states to disfranchise large numbers of slaves while allowing these states to maintain political clout within the federation by using the three - fifths compromise. They noted that constitutional Framer James Madison believed the question of counting slaves had presented a serious challenge but that "the substitution of electors obviated this difficulty and seemed on the whole to be liable to the fewest objections. '' Akhil and Vikram Amar added that:
The founders ' system also encouraged the continued disfranchisement of women. In a direct national election system, any state that gave women the vote would automatically have doubled its national clout. Under the Electoral College, however, a state had no such incentive to increase the franchise; as with slaves, what mattered was how many women lived in a state, not how many were empowered... a state with low voter turnout gets precisely the same number of electoral votes as if it had a high turnout. By contrast, a well - designed direct election system could spur states to get out the vote.
Territories of the United States, such as Puerto Rico, the Northern Mariana Islands, the U.S. Virgin Islands, American Samoa, and Guam, are not entitled to electors in presidential elections. Constitutionally, only U.S. states (per Article II, Section 1, Clause 2) and Washington, D.C. (per the Twenty - third Amendment) are entitled to electors. Guam has held non-binding straw polls for president since the 1980s to draw attention to this fact. This means that roughly 4 million Americans do not have the right to vote in presidential elections. Various scholars consequently conclude that the U.S. national - electoral process is not fully democratic.
Researchers have variously attempted to measure which states ' voters have the greatest impact in such an indirect election.
Each state gets a minimum of three electoral votes, regardless of population, which gives low - population states a disproportionate number of electors per capita. For example, an electoral vote represents nearly four times as many people in California as in Wyoming. Sparsely populated states are likely to be increasingly overrepresented in the electoral college over time, because Americans are increasingly moving to big cities, most of which are in big states. This analysis gives a strong advantage to the smallest states, but ignores any extra influence that comes from larger states ' ability to deliver their votes as a single bloc.
Countervailing analyses which do take into consideration the sizes of the electoral voting blocs, such as the Banzhaf power index (BPI) model based on probability theory, lead to very different conclusions about voters ' relative power. In 1968, John F. Banzhaf III (who developed the Banzhaf power index) determined that a voter in the state of New York had, on average, 3.312 times as much voting power in presidential elections as a voter in any other U.S. state. It was found that, based on the 1990 census and districting, individual voters in California, the largest state, had 3.3 times more individual power to choose a president than voters of Montana, the largest of the minimum 3 elector states. Because Banzhaf 's method ignores the demographic makeup of the states, it has been criticized for treating votes like independent coin - flips. More empirically based models of voting yield results which seem to favor larger states less.
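For readers unfamiliar with the Banzhaf power index mentioned above, the Python sketch below computes it for a tiny, purely hypothetical weighted voting game by counting the coalitions in which each voter is decisive (a "swing ''). Brute - force enumeration grows exponentially, so applying it to all 51 Electoral College blocs would need more efficient techniques (such as generating functions); the weights and the quota here are assumptions for illustration only.

```python
from itertools import combinations

def banzhaf_index(weights, quota):
    """Normalized Banzhaf power index for a weighted voting game.
    weights: dict mapping voter -> voting weight; quota: weight needed to win.
    A voter 'swings' a winning coalition if removing that voter drops the
    coalition below the quota."""
    voters = list(weights)
    swings = dict.fromkeys(voters, 0)
    for r in range(1, len(voters) + 1):
        for coalition in combinations(voters, r):
            total = sum(weights[v] for v in coalition)
            if total >= quota:
                for v in coalition:
                    if total - weights[v] < quota:
                        swings[v] += 1
    total_swings = sum(swings.values())
    return {v: swings[v] / total_swings for v in voters}

# Hypothetical four-bloc game: despite very unequal weights, A, B and C turn
# out to have equal power and D has none.
print(banzhaf_index({"A": 55, "B": 38, "C": 29, "D": 3}, quota=63))
# {'A': 0.333..., 'B': 0.333..., 'C': 0.333..., 'D': 0.0}
```

The point of the toy game matches the one made above: with bloc voting, relative power can diverge sharply from raw voting weight.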
In practice, the winner - take - all manner of allocating a state 's electors generally decreases the importance of minor parties. However, it has been argued the Electoral College is not a cause of the two - party system, and that it had a tendency to improve the chances of third - party candidates in some situations.
Proponents of the Electoral College claim that it prevents a candidate from winning the presidency by simply winning in heavily populated urban areas, and pushes candidates to make a wider geographic appeal than they would if they simply had to win the national popular vote. They believe that adoption of the popular vote would shift the disproportionate focus to large cities at the expense of rural areas.
Proponents of a national popular vote for president dismiss such arguments, pointing out that the combined population of the 50 biggest cities (not including metropolitan areas) only amounts to 15 % of the population, and that candidates in popular vote elections for governor and U.S. Senate, and for statewide allocation of electoral votes, do not ignore voters in less populated areas. In addition, it is already possible to win the required 270 electoral votes by winning only the 11 most populous states; what currently prevents such a result is the organic political diversity between those states (three reliably Republican states, four swing states, and four reliably Democratic states), not any inherent quality of the Electoral College itself. If all of those states came to lean reliably for one party, then the Electoral College itself would bring about an urban - centric victory.
The United States of America is a federal coalition which consists of component states. Proponents of the current system argue the collective opinion of even a small state merits attention at the federal level greater than that given to a small, though numerically equivalent, portion of a very populous state. The system also allows each state the freedom, within constitutional bounds, to design its own laws on voting and enfranchisement without an undue incentive to maximize the number of votes cast.
For many years early in the nation 's history, up until the Jacksonian Era, many states appointed their electors by a vote of the state legislature, and proponents argue that, in the end, the election of the President must still come down to the decisions of each state, or the federal nature of the United States will give way to a single massive, centralized government.
In his book A More Perfect Constitution, Professor Larry Sabato elaborated on this advantage of the Electoral College, arguing to "mend it, do n't end it '', in part because of its usefulness in forcing candidates to pay attention to lightly populated states and reinforcing the role of the state in federalism.
Instead of decreasing the power of minority groups by depressing voter turnout, proponents argue that by making the votes of a given state an all - or - nothing affair, minority groups can provide the critical edge that allows a candidate to win. This encourages candidates to court a wide variety of such minorities and advocacy groups.
Proponents of the Electoral College see its negative effect on third parties as beneficial. They argue the two party system has provided stability because it encourages a delayed adjustment during times of rapid political and cultural change. They believe it protects the most powerful office in the country from control by what these proponents view as regional minorities until they can moderate their views to win broad, long - term support across the nation. Advocates of a national popular vote for president suggest that this effect would also be true in popular vote elections. Of 918 elections for governor between 1948 and 2009, for example, more than 90 % were won by candidates securing more than 50 % of the vote, and none have been won with less than 35 % of the vote.
According to this argument, the fact that the Electoral College is made up of real people instead of mere numbers allows for human judgment and flexibility in making a decision if a candidate dies or becomes legally disabled around the time of the election. Advocates of the current system argue that human electors would be in a better position to choose a suitable replacement than the general voting public. According to this view, electors could act decisively during the critical interval between when ballot choices become fixed in state ballots and mid-December, when the electors formally cast their ballots. In the election of 1872, losing Liberal Republican candidate Horace Greeley died during this interval, which resulted in disarray for the Democratic Party, which had also supported Greeley; the Greeley electors were, however, able to split their votes among different alternative candidates. A situation in which the winning candidate died has never happened. In the election of 1912, Vice President Sherman died shortly before the election, when it was too late for states to remove his name from their ballots; accordingly, Sherman was listed posthumously, but the eight electoral votes that Sherman would have received were cast instead for Nicholas Murray Butler.
Some supporters of the Electoral College note that it isolates the impact of any election fraud, or other such problems, to the state where it occurs. It prevents instances where a party dominant in one state may dishonestly inflate the votes for a candidate and thereby affect the election outcome. For instance, recounts occur only on a state - by - state basis, not nationwide. Results in a single state where the popular vote is very close -- such as Florida in 2000 -- can decide the national election.
The closest the United States has come to abolishing the Electoral College occurred during the 91st Congress (1969 -- 1971). The presidential election of 1968 resulted in Richard Nixon receiving 301 electoral votes (56 % of electors), Hubert Humphrey 191 (35.5 %) and George Wallace 46 (8.5 %) with 13.5 % of the popular vote. However, Nixon had only received 511,944 more popular votes than Humphrey, 43.5 % to 42.9 %, less than 1 % of the national total.
Representative Emanuel Celler (D -- New York), Chairman of the House Judiciary Committee, responded to public concerns over the disparity between the popular vote and electoral vote by introducing House Joint Resolution 681, a proposed Constitutional amendment which would have replaced the Electoral College with a simpler plurality system based on the national popular vote. Under this system, the pair of candidates who had received the highest number of votes would win the presidency and vice presidency, provided they won at least 40 % of the national popular vote. If no pair received 40 % of the popular vote, a runoff election would be held in which the choice of President and Vice President would be made from the two pairs of persons who had received the highest number of votes in the first election. The word "pair '' was defined as "two persons who shall have consented to the joining of their names as candidates for the offices of President and Vice President ''.
On April 29, 1969, the House Judiciary Committee voted 28 to 6 to approve the proposal. Debate on the proposal before the full House of Representatives ended on September 11, 1969, and the proposal was passed with bipartisan support on September 18, 1969, by a vote of 339 to 70.
On September 30, 1969, President Richard Nixon gave his endorsement for adoption of the proposal, encouraging the Senate to pass its version of the proposal which had been sponsored as Senate Joint Resolution 1 by Senator Birch Bayh (D -- Indiana).
On October 8, 1969, the New York Times reported that 30 state legislatures were "either certain or likely to approve a constitutional amendment embodying the direct election plan if it passes its final Congressional test in the Senate ''. Ratification by 38 state legislatures would have been needed for adoption. The paper also reported that 6 other states had yet to state a preference, 6 were leaning toward opposition and 8 were solidly opposed.
On August 14, 1970, the Senate Judiciary Committee sent its report advocating passage of the proposal to the full Senate. The Judiciary Committee had approved the proposal by a vote of 11 to 6. The six members who opposed the plan, Democratic Senators James Eastland of Mississippi, John Little McClellan of Arkansas and Sam Ervin of North Carolina, along with Republican Senators Roman Hruska of Nebraska, Hiram Fong of Hawaii and Strom Thurmond of South Carolina, all argued that although the present system had potential loopholes, it had worked well throughout the years. Senator Bayh indicated that supporters of the measure were about a dozen votes shy of the 67 needed for the proposal to pass the full Senate. He called upon President Nixon to attempt to persuade undecided Republican senators to support the proposal. However, Nixon, while not reneging on his previous endorsement, chose not to make any further personal appeals to back the proposal.
On September 8, 1970, the Senate commenced open debate on the proposal, and the proposal was quickly filibustered. The lead objectors to the proposal were mostly Southern senators and conservatives from small states, both Democrats and Republicans, who argued that abolishing the Electoral College would reduce their states' political influence. On September 17, 1970, a motion for cloture, which would have ended the filibuster, received 54 votes in favor to 36 against, failing to receive the then-required two-thirds majority of senators voting. A second motion for cloture on September 29, 1970 also failed, by 53 to 34. Thereafter, the Senate Majority Leader, Mike Mansfield of Montana, moved to lay the proposal aside so the Senate could attend to other business. However, the proposal was never considered again and died when the 91st Congress ended on January 3, 1971.
On March 22, 1977, President Jimmy Carter wrote a reform letter to Congress that also expressed his support for essentially abolishing the Electoral College. The letter read in part:
My fourth recommendation is that the Congress adopt a Constitutional amendment to provide for direct popular election of the President. Such an amendment, which would abolish the Electoral College, will ensure that the candidate chosen by the voters actually becomes President. Under the Electoral College, it is always possible that the winner of the popular vote will not be elected. This has already happened in three elections, 1824, 1876, and 1888. In the last election, the result could have been changed by a small shift of votes in Ohio and Hawaii, despite a popular vote difference of 1.7 million. I do not recommend a Constitutional amendment lightly. I think the amendment process must be reserved for an issue of overriding governmental significance. But the method by which we elect our President is such an issue. I will not be proposing a specific direct election amendment. I prefer to allow the Congress to proceed with its work without the interruption of a new proposal.
President Carter's proposed program for reform of the Electoral College was very liberal for a modern President of that era, and in some aspects of the package went beyond original expectations. Newspapers like The New York Times saw Carter's proposal at the time as "a modest surprise '', because Carter had previously indicated that he would be interested only in eliminating the electors while retaining the electoral vote system in a modified form.
Newspaper reaction to President Carter 's proposal ranged from some editorials praising the proposal to other editorials, like that in the Chicago Tribune, criticizing the President for proposing the end of the Electoral College.
In a letter to The New York Times, Representative Jonathan B. Bingham highlighted the danger of the "flawed, outdated mechanism of the Electoral College '' by underscoring how a shift of less than 10,000 votes in two key states would have led to President Gerald Ford being reelected despite Jimmy Carter 's nationwide 1.7 million - vote margin.
On January 5, 2017, Representative Steve Cohen introduced a joint resolution proposing a constitutional amendment that would replace the Electoral College with the popular election of the President and Vice President. Unlike the Bayh -- Celler amendment's 40 % threshold for election, Cohen's proposal requires only that a candidate have the "greatest number of votes '' to be elected.
Several states plus the District of Columbia have joined the National Popular Vote Interstate Compact. Those jurisdictions joining the compact agree to pledge their electors to the winner of the national popular vote. The Compact will not come into effect until the states that have joined collectively control a majority of all electoral votes (at least 270). As of 2017, 10 states and the District of Columbia have joined the compact; collectively, these jurisdictions control 165 electoral votes, which is 61 % of the 270 required for the Compact to take effect. Only strongly "blue '' states have joined the compact, each of which returned large victory margins for Barack Obama in the 2012 election.
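The trigger condition described above can be illustrated with a minimal Python sketch. This is an illustration only, not an official tally: the function name is arbitrary, and the 2017 membership is represented by the single aggregate 165-vote figure cited above rather than a state-by-state breakdown.

```python
NPV_THRESHOLD = 270  # a majority of the 538 total electoral votes

def compact_in_effect(member_electoral_votes):
    """True once the joined jurisdictions collectively control at least 270 electoral votes."""
    return sum(member_electoral_votes.values()) >= NPV_THRESHOLD

# Hypothetical input using only the aggregate figure cited above (165 votes as of 2017).
members_2017 = {"10 states + DC (combined)": 165}
print(compact_in_effect(members_2017))   # False, since 165 < 270
print(round(165 / NPV_THRESHOLD, 2))     # 0.61, i.e. about 61% of the threshold
```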
The Compact is based on the current rule in Article II, Section 1, Clause 2 of the Constitution that gives each state legislature the plenary power to determine how it chooses its electors, though some have suggested that Article I, Section 10, Clause 3 of the Constitution requires congressional consent before the Compact could be enforceable. Some scholars note that any attempted implementation of the NPV interstate compact would face court challenges to its constitutionality: not only are states forbidden to enter into any agreement or compact with another state without the consent of Congress, but, these critics argue, the Compact would also overturn the constitutional arrangement of electing the president of the United States by the people state by state.
|
which lane is the fast lane in the uk | Driving in the United Kingdom - wikipedia
Driving in the United Kingdom is governed by various legal powers and in some cases is subject to the passing of a driving test. The government produces a Highway Code that details the requirements for all road users, including drivers. Unlike most countries in the world, UK traffic drives on the left.
British roads are limited for most vehicles by the National Speed Limit. Road signs in the UK use imperial units, so speed limits are posted in miles per hour. Speed limits are the maximum speed at which certain drivers may legally drive on a road rather than a defined appropriate speed, and in some cases the nature of a road may dictate that one should drive significantly more slowly than the speed limit allows. By default this restricts some vehicles to 30 mph in built-up areas, and some light vehicles to 60 mph on single carriageways and 70 mph on dual carriageways and motorways, with some large vehicles, or some of those towing trailers, subject to reduced limits on some roads depending on the class of both road and vehicle. Sections of road subject to the national or in-town speed limit only require limit marker signs at the start of the section, without repeaters, provided the presence or absence of street lighting corresponds to the limit in force. Speed limits of 5 mph, 20 mph, 30 mph, 40 mph, 50 mph and 60 mph are also used on roads in the UK where the national or in-town speed limit is deemed inappropriate, with repeater signs posted at regular intervals.
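As a rough illustration of the default limits for a car described above, here is a minimal Python sketch. The road-type labels, the function name and the "posted sign overrides the default" behaviour are assumptions made for illustration; the lower limits that apply to large vehicles or vehicles towing trailers are not modelled.

```python
# Default national speed limits for a car, as described above (mph).
DEFAULT_LIMITS_MPH = {
    "built-up area": 30,
    "single carriageway": 60,
    "dual carriageway": 70,
    "motorway": 70,
}

def applicable_limit(road_type, posted_limit=None):
    """A posted local limit takes precedence over the default national limit."""
    return posted_limit if posted_limit is not None else DEFAULT_LIMITS_MPH[road_type]

print(applicable_limit("single carriageway"))              # 60
print(applicable_limit("built-up area", posted_limit=20))  # 20
```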
Drivers on dual carriageways (which may or may not be motorways) are usually expected to use the left-most lane unless overtaking other vehicles on the road, unless signs or road markings indicate that the left-most lane (s) is only for traffic leaving at the next junction. Drivers who wish to overtake a slower vehicle are thus expected to move out from their lane (having used the indicator lights to warn other road - users of their intention to do so), pass the slower vehicle and return to the left-most lane. This enables faster traffic to overtake unhindered if it wishes to do so. On the UK 's busiest roads, where there may be four or more lanes in each direction, there is often a situation where overtaking becomes continual as each successive lane moves at a slightly faster speed than that to its left.
On motorways an extra left lane termed the ' hard shoulder ' is usually present for use only when a vehicle has broken down. It is illegal to drive in this lane unless indicated otherwise, for example on one of the increasing number of Smart Motorways.
The action of undertaking, where the driver moves to the left of a slower-moving vehicle to get past it, is, although not illegal, discouraged on motorways under Rule 268 of the Highway Code. The rule does allow undertaking to occur in conditions that cause the left-hand lane to move faster than the right-hand lane, so that traffic can keep up with the flow of its own lane.
There are two broad categories of pedestrian crossing to aid the safe passage across major roads by those travelling on foot. There are no laws against jaywalking in the UK.
Driving licences may be obtained by any UK resident over the age of 17, subject to certain conditions. Initially, a provisional licence is issued, which restricts the holder to driving whilst accompanied by a driver who is at least 21 years old, who has held a full licence in the category of vehicle they are supervising the learner driver in for at least three years, and does not allow the provisional licence holder to drive on motorways. The provisional licence may be exchanged for a full licence after the holder has passed the practical driving test. Upon reaching the age of 70, drivers may apply to have their licences renewed with their doctors ' permission.
Many foreign driving licences permit one to drive in the UK, but they must be exchanged for British licences after a year. Drivers from the USA, however, must take a British test if they wish to drive in the UK for more than a year after arriving in the country. This is because US driver licensing is carried out by individual states, but the US Constitution does not permit individual states to enter bilateral treaties with other sovereign governments. However, driving licences from the European Union, Norway, Iceland, Liechtenstein and Switzerland are valid in the United Kingdom.
Some of the rules of the road are enforced by the police, while others are enforced by council wardens. Speed cameras are common. Red-light and bus lane cameras are also used. Motorists convicted of certain traffic offences, and certain non-traffic offences, may have "points '' added to their licences: some traffic offences, such as exceeding the speed limit by a small amount, typically warrant three points, and motorists with twelve points face a driving ban. A speeding offence, driving through a red light or driving in a bus lane is normally punished with three to six points.
|
story behind the song somewhere over the rainbow | Over the Rainbow - wikipedia
"Over the Rainbow '' is a ballad, with music by Harold Arlen and lyrics by Yip Harburg. It was written for the movie The Wizard of Oz and was sung by actress Judy Garland, in her starring role as Dorothy Gale. It won the Academy Award for Best Original Song and became Garland 's signature song, as well as one of the most enduring standards of the 20th century.
About five minutes into the film, Dorothy sings the song after failing to get Aunt Em, Uncle Henry, and the farmhands to listen to her relate an unpleasant incident involving her dog, Toto, and the town spinster, Miss Gulch (Margaret Hamilton). Aunt Em tells her to "find yourself a place where you wo n't get into any trouble ''. This prompts her to walk off by herself, musing to Toto, "Some place where there is n't any trouble. Do you suppose there is such a place, Toto? There must be. It 's not a place you can get to by a boat, or a train. It 's far, far away. Behind the moon, beyond the rain... '', at which point she begins singing.
The song is number one on the "Songs of the Century '' list compiled by the Recording Industry Association of America and the National Endowment for the Arts. The American Film Institute also ranked the song the greatest movie song of all time on the list of "AFI 's 100 Years... 100 Songs ''.
The very first artist to record the song was actually big band singer Bea Wain, but MGM prohibited the release until The Wizard of Oz (1939) had opened and audiences heard Judy Garland perform it.
It was adopted (along with Irving Berlin 's "White Christmas '' (1942)) by American troops in Europe in World War II, as a symbol of the United States.
In April 2005, the United States Postal Service issued a commemorative stamp recognizing lyricist Yip Harburg 's accomplishments; it features the opening lyric from the song.
The song was used as an audio wakeup call in the STS - 88 space shuttle mission in Flight Day 4, which was dedicated to astronaut Robert D. Cabana from his daughter, Sara.
The song was honored with the 2014 Towering Song Award by the Songwriters Hall of Fame and was sung at its dinner on June 12, 2014 by Jackie Evancho.
In April 2016, The Daily Telegraph listed the song as number 8 on its list of the 100 greatest songs of all time.
In March 2017, Garland 's original rendition of the song was selected for preservation in the National Recording Registry by the Library of Congress as being "culturally, historically, or artistically significant. ''
The song's sequence and the entirety of the Kansas scenes were directed by King Vidor, though he was not credited. The song was initially deleted from the film after a preview in San Luis Obispo, because MGM chief executive Louis B. Mayer and producer Mervyn LeRoy thought it "slowed down the picture '' and that "the song sounds like something for Jeanette MacDonald, not for a little girl singing in a barnyard. '' However, the persistence of associate producer Arthur Freed and Garland's vocal coach and mentor Roger Edens in arguing to keep it in the film eventually paid off; it is for this sequence that the film is best known and remembered.
At the start of the film, part of the song is played by the MGM orchestra over the opening credits. A reprise of it was deleted after being filmed. An additional chorus was to be sung by Dorothy while she was locked in the Witch 's castle, helplessly awaiting death as the hourglass ran out. However, although the visual portion of that reprise is presumably lost, the soundtrack of it survives and was included in the 2 - CD Deluxe Edition of the film 's soundtrack, released by Rhino Entertainment in 1995. In that extremely intense and fear - filled rendition, Dorothy cries her way through it, unable to finish, concluding with a tear - filled, "I 'm frightened, Auntie Em, I 'm frightened! '' This phrase was retained in the film and is followed immediately by Aunt Em 's brief appearance in the crystal ball, where she is soon replaced by the visage of the witch (Hamilton), mocking and taunting Dorothy before turning toward the camera to cackle. Another instrumental version is played in the underscore in the final scene, and over the closing credits.
On October 7, 1938, Judy Garland first recorded the song on the MGM soundstages, using an arrangement by Murray Cutter.
In September 1939, a studio recording of the song, not from the actual film soundtrack, was recorded and released as a single by Decca Records.
In March 1940, that same recording was included on a Decca 78 - RPM four - record studio cast album entitled The Wizard of Oz. Although this is not the version of the song featured in the film, Decca would continue to re-release the so - called "Cast Album '' well into the 1960s after it was re-issued as a single - record 33⅓ RPM LP.
It was not until 1956, when MGM released the true soundtrack album from the film, that the film version of the song was made available to the public. The 1956 soundtrack release was timed to coincide with the television premiere of the film. The soundtrack version has been re-released several times over the years, including in a "Deluxe Edition '' from Rhino Records in 1995.
Following the film 's release in 1939, the song became Garland 's signature song and she would perform it for the next 30 years, until her death in 1969. She performed it without altering it, singing exactly as she did for the film. She explained her fidelity by saying that she was staying true to the character of Dorothy and to the message of really being somewhere over the rainbow.
In 2017, Garland 's recorded rendition of the film was selected for preservation in the National Recording Registry by the Library of Congress as being "culturally, historically, or artistically significant. ''
An introductory verse ("When all the world is a hopeless jumble... '') that was not used in the film is often used in theatrical productions of The Wizard of Oz and is included in the piano sheet music book of songs from the film. It was also used in renditions by Frank Sinatra, by Doris Day on her album Hooray For Hollywood (1958) (Vol. 1), by Tony Bennett on his albums Tony Bennett Sings a String of Harold Arlen (1961) and Here 's to the Ladies (1995), by Ella Fitzgerald, by Sarah Vaughan, and by Norma Waterson, among others. Garland herself sang the introductory verse only once, on a 1948 radio broadcast of The Louella Parsons Show. Lyrics for a second verse ("Once by a word only lightly spoken... '') appear in the British edition of the sheet music.
The first German recording, sung in English, was made by the swing orchestra of Heinz Wehner in March 1940. Wehner, at that time a well-known German swing artist, also provided the vocals. The first German-language version was sung by Inge Brandenburg in 1960.
Israel Kamakawiwoʻole's album Facing Future, released in 1993, included a ukulele medley of the song and Louis Armstrong's "What a Wonderful World ''. It reached number 12 on Billboard's Hot Digital Tracks chart the week of January 31, 2004 (for the survey week ending January 18, 2004). In the UK, it was released as a single under the title "Somewhere Over the Rainbow '' and entered the UK Official Singles Chart in April 2007 at number 68. The single also returned to the German Singles Chart in September 2010; after only two weeks on that chart, it had already received gold status for having sold 150,000 copies. In October 2010, it reached No. 1 in the German charts, and in 2011 it was certified 5x Gold for selling more than 750,000 copies. It stayed 12 non-consecutive weeks at the top spot and was the most successful single in Germany in 2010. As of March 2012, it was the second best-selling download ever in Germany, with digital sales between 500,000 and 600,000. In France, it debuted at number 4 in December 2010 and reached number one. In the USA, it was certified Platinum for 1,000,000 downloads sold, and as of October 2014 it had sold over 4.2 million digital copies. In Switzerland, it also received Platinum, for 30,000 copies sold.
This version of the song has been used in several commercials, films and television programs including Finding Forrester, Meet Joe Black, 50 First Dates, Son of the Mask, Snakes on a Plane, Charmed, South Pacific, Cold Case, ER, Life on Mars, Horizon, and Scrubs. The Kamakawiwoʻole version of the song was covered by the cast of Glee on the season one finale, "Journey, '' and included on the extended play Glee: The Music, Journey to Regionals, charting at number 30 in the UK, 31 in Canada and Ireland, 42 in Australia, and 43 in the US. Cliff Richard recorded his own version of the medley based on this one with a medley of "Over the Rainbow / What A Wonderful World '' released as a single from the album Wanted, which charted in the UK in 2001 and Aselin Debison recorded the medley for her 2002 album Sweet is the Melody.
This version of the song was recorded in 1988 in Honolulu, in just one take. Israel called the recording studio at 3 am and was given 15 minutes to arrive by Milan Bertosa. Bertosa is quoted as saying, "And in walks the largest human being I had seen in my life. Israel was probably like 500 pounds. And the first thing at hand is to find something for him to sit on. '' The building security found Israel a big steel chair. "Then I put up some microphones, do a quick sound check, roll tape, and the first thing he does is ' Somewhere Over the Rainbow. ' He played and sang, one take, and it was over. ''
Eva Cassidy recorded a version of the song for the 1992 Chuck Brown / Eva Cassidy album The Other Side. After her death in 1996, it was included in her posthumously - released compilation album Songbird, released in 1998 and was released as a CD single in 2001. It was popularised by the BBC on BBC Radio 2 and on the television show Top of the Pops 2; the latter featured a video recording of Cassidy performing it. This publicity helped push sales of the compilation album Songbird to number 1 in the UK charts. Her recording was selected by the BBC in the UK for their Songs of the Century album in the year 1999. Her performance of it at Blues Alley was published for the first time in January 2011 on her Simply Eva album.
Danielle Hope, the winner of the Wizard of Oz - themed BBC talent show Over the Rainbow, released a cover version of the song. It was released by digital download on 23 May 2010 and a CD single was released on 31 May 2010. As it was recorded before a winner was announced, runners - up Lauren Samuels and Sophie Evans also recorded versions of it. These were both later made available for download on 6 June 2010. All three finalists appeared on the CD single 's B - side: a Wizard of Oz medley.
The single was a charity record, raising money for both the BBC Performing Arts Fund and Prostate UK.
|
who won the gold medal in basketball 2016 | Basketball at the 2016 Summer Olympics - wikipedia
Basketball at the 2016 Summer Olympics in Rio de Janeiro, Brazil was held from 6 to 21 August 2016. The preliminary and knockout matches for men were played inside Carioca Arena 1 in Olympic Park, which seats up to 16,000 spectators, while the women's preliminary matches were played in the Youth Arena. This marked the first time that the men's and women's Olympic tournaments were played in separate venues. The host country, Brazil, failed to reach the quarterfinals in either the men's or the women's tournament after being eliminated in the group stage. The same three countries took all of the medals in both tournaments: the United States (which won both gold medals), Serbia and Spain.
Carioca Arena 1, the largest of the three Carioca Arenas, and the Youth Arena were the arenas used for the basketball tournaments. The Ginásio do Maracanãzinho, site of the 1954 FIBA World Championship and the 1963 FIBA World Championship, hosted the indoor volleyball tournaments instead.
Carioca Arena 1 hosted the entire men 's tournament and the women 's knockout stage, while Youth Arena hosted the women 's preliminary round.
Each National Olympic Committee could enter up to one 12 - player men's team and up to one 12 - player women's team.
Just as in 2012, the Olympic hosts were not guaranteed an Olympic berth. On 9 August 2015, it was announced that the Brazil men 's and women 's national teams would compete in the Olympic Basketball Tournament at the 2016 Rio Games after FIBA 's Central Board decided to grant them automatic places at its meeting in Tokyo.
Both the men's and women's competitions consisted of two stages: a group stage followed by a knockout stage.
The teams were divided into two groups of six countries, playing every team in their group once. Two points were awarded for a victory, one for a loss. The top four teams per group qualified for the quarter - finals.
The knockout stage was a single - elimination tournament consisting of three rounds. Semi-final losers played for the bronze medal.
The following referees were selected for the tournament.
|
how many player man city sign this season | 2017 -- 18 Manchester City FC. Season - wikipedia
The 2017 -- 18 season is Manchester City 's 116th season of competitive football, 89th season in the top division of English football and 21st season in the Premier League since the league was first created, with City as one of the original 22 founder - members. In addition to the Premier League, the club will also compete in the FA Cup, EFL Cup and UEFA Champions League, the latter being their seventh consecutive season competing in the competition.
The season covers the period from 1 July 2017 to 30 June 2018.
City began the season in excellent form, setting several new club and English football records. They have established national records for consecutive away (11) and overall (20) victories in all competitions; set a new English top-division record of 16 consecutive league wins; and broken club records by winning 10 consecutive away league matches, going 28 consecutive games unbeaten in all competitions, and going 26 consecutive games unbeaten in the league.
On 1 November, Sergio Agüero scored his 178th City goal in a 4 -- 2 Champions League away victory over Napoli to become the club's all-time leading goalscorer, surpassing the previous record held by Eric Brook.
On 16 May 2017, Manchester City announced they would face Manchester United as part of the 2017 International Champions Cup. Matches against Tottenham Hotspur and Real Madrid were also confirmed as part of the same tournament. A standalone pre-season friendly against West Ham United also took place in Iceland. A friendly against partner club Girona was also scheduled to take place after the first match of the Premier League season.
Last updated: 3 December 2017 Source: Competitions
Last updated: 16 December 2017. Source: Premier League
Last updated: 16 December 2017. Source: Statto.com Ground: A = Away; H = Home. Result: D = Draw; L = Loss; W = Win; P = Postponed.
On 14 June 2017, Manchester City 's Premier League fixtures were announced.
Manchester City entered the competition in the third round and were drawn at home to Burnley.
Manchester City entered the competition in the third round and were drawn away to West Bromwich Albion. A home tie against Wolverhampton Wanderers was confirmed for the fourth round. The club were handed an away tie against Leicester City in the quarter - finals.
On 24 August 2017, Manchester City were drawn into Group F alongside Shakhtar Donetsk, Napoli and Feyenoord. The club topped their group in the group stages and were handed a round of 16 two legged tie against Basel.
Last updated: 16 December 2017 Source: mancity.com Ordered by squad number. Appearances include league and cup appearances, including as substitute.
Appearances (Apps.) numbers are for appearances in competitive games only, including sub appearances. Red card numbers in parentheses represent red cards overturned for wrongful dismissal.
Awarded to the player that receives the most votes in a poll conducted each month on the club 's official website
Transfer expenditure -- Summer: £ 210.35 m; Winter: --; Total: £ 210.35 m
Transfer income -- Summer: £ 68.24 m; Winter: --; Total: £ 68.24 m
Net expenditure -- Summer: £ 142.11 m; Winter: --; Total: £ 142.11 m
|
what is the current strength of lok sabha | Lok Sabha - Wikipedia
Current composition: Government coalition (335) -- National Democratic Alliance (335); Opposition parties (210) -- United Progressive Alliance (50), Janata Parivar parties (6), unaligned parties (144), others (10).
The Lok Sabha (House of the People) is the Lower house of India 's bicameral Parliament, with the Upper house being the Rajya Sabha. Members of the Lok Sabha are elected by adult universal suffrage and a first - past - the - post system to represent their respective constituencies, and they hold their seats for five years or until the body is dissolved by the President on the advice of the council of ministers. The house meets in the Lok Sabha Chambers of the Sansad Bhavan in New Delhi.
The maximum strength of the House allotted by the Constitution of India is 552. Currently the House has 545 seats, made up of up to 543 elected members and, at a maximum, 2 members of the Anglo - Indian community nominated by the President of India. A total of 131 seats (24.03 %) are reserved for representatives of Scheduled Castes (84) and Scheduled Tribes (47). The quorum for the House is 10 % of the total membership. The Lok Sabha, unless sooner dissolved, continues to operate for five years from the date appointed for its first meeting. However, while a proclamation of emergency is in operation, this period may be extended by Parliament by law.
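A quick arithmetic check of the seat figures quoted above, as a minimal Python sketch; the numbers are taken directly from the paragraph, and rounding the 10 % quorum up to a whole number of members is an assumption made here.

```python
import math

# Seat figures quoted in the paragraph above.
elected, nominated = 543, 2
total_seats = elected + nominated              # 545
reserved = 84 + 47                             # 131 seats reserved for SC and ST

print(total_seats)                             # 545
print(round(100 * reserved / total_seats, 1))  # 24.0 -- consistent with the 24.03% cited
print(math.ceil(total_seats / 10))             # quorum of 10% of membership, rounded up: 55
```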
An exercise to redraw Lok Sabha constituencies ' boundaries is carried out by a Delimitation Commission every decade based on the Indian census, last of which was conducted in 2001. This exercise earlier also included redistribution of seats among states based on demographic changes but that provision of the mandate of the commission was suspended in 1976 following a constitutional amendment to incentivise the family planning programme which was being implemented. The 16th Lok Sabha was elected in May 2014 and is the latest to date.
The Lok Sabha has its own television channel, Lok Sabha TV, headquartered within the premises of Parliament.
A major portion of the Indian subcontinent was under British rule from 1858 to 1947. During this period, the office of the Secretary of State for India (along with the Council of India) was the authority through whom British Parliament exercised its rule in the Indian sub-continent, and the office of Viceroy of India was created, along with an Executive Council in India, consisting of high officials of the British government. The Indian Councils Act 1861 provided for a Legislative Council consisting of the members of the Executive Council and non-official members. The Indian Councils Act 1892 established legislatures in each of the provinces of British India and increased the powers of the Legislative Council. Although these Acts increased the representation of Indians in the government, their power still remained limited, and the electorate very small. The Indian Councils Act 1909 and the Government of India Act 1919 further expanded the participation of Indians in the administration. The Indian Independence Act, passed by the British parliament on 18 July 1947, divided British India (which did not include the Princely States) into two new independent countries, India and Pakistan, which were to be dominions under the Crown until they had each enacted a new constitution. The Constituent Assembly was divided into two for the separate nations, with each new Assembly having sovereign powers transferred to it for the respective dominion.
The Constitution of India was adopted on 26 November 1949 and came into effect on 26 January 1950, proclaiming India to be a sovereign, democratic republic. This contained the founding principles of the law of the land which would govern India in its new form, which now included all the princely states which had not acceded to Pakistan.
According to Article 79 (Part V - The Union.) of the Constitution of India, the Parliament of India consists of the President of India and the two Houses of Parliament known as the Council of States (Rajya Sabha) and the House of the People (Lok Sabha).
The Lok Sabha (House of the People) was duly constituted for the first time on 17 April 1952, after the first General Elections held from 25 October 1951 to 21 February 1952.
Article 84 (Part V. -- The Union) of Indian Constitution sets qualifications for being a member of Lok Sabha, which are as follows:
However, a member can be disqualified of being a member of Parliament:
A seat in the Lok Sabha will become vacant in the following circumstances: (during normal functioning of the House)
Furthermore, as per article 101 (Part V. -- The Union) of Indian Constitution; A person can not be: (1) a member of both Houses of Parliament and provision shall be made by Parliament by law for the vacation by a person who is chosen a member of both Houses of his seat in one House or the other. (2) a member both of Parliament and of a House of the Legislature of a State.
System of elections in Lok Sabha
Members of the Lok Sabha are directly elected by the people of India, on the basis of Universal Suffrage. For the purpose of holding direct elections to Lok Sabha; each state is divided into territorial constituencies. In this respect, the constitution of India makes the following two provisions:
Note: The expression population here refers to the population ascertained at the preceding census (2001 Census) of which relevant figure have been published.
The Lok Sabha has certain powers that make it more powerful than the Rajya Sabha.
In conclusion, it is clear that the Lok Sabha is more powerful than the Rajya Sabha in almost all matters. Even in those matters in which the Constitution has placed both Houses on an equal footing, the Lok Sabha has more influence due to its greater numerical strength. This is typical of any Parliamentary democracy, with the lower House always being more powerful than the upper.
The Rules of Procedure and Conduct of Business in Lok Sabha, and Directions issued by the Speaker from time to time thereunder, regulate the procedure in Lok Sabha. The items of business, notice of which is received from the Ministers / Private Members and admitted by the Speaker, are included in the daily List of Business which is printed and circulated to members in advance. The time for the various items of business to be taken up in the House is allotted by the House on the recommendations of the Business Advisory Committee. The Speaker presides over the sessions of the House and regulates procedure.
Three sessions of Lok Sabha take place in a year:
When in session, Lok Sabha holds its sittings usually from 11 A.M. to 1 P.M. and from 2 P.M. to 6 P.M. On some days the sittings are continuously held without observing lunch break and are also extended beyond 6 P.M. depending upon the business before the House. Lok Sabha does not ordinarily sit on Saturdays and Sundays and other closed holidays.
The first hour of every sitting is called Question Hour. Asking questions in Parliament is the free and unfettered right of members, and during Question Hour they may ask questions of ministers on different aspects of administration and government policy in the national and international spheres. Every minister whose turn it is to answer to questions has to stand up and answer for his department 's acts of omission or commission.
Questions are of three types -- Starred, Unstarred and Short Notice. A Starred Question is one to which a member desires an oral answer in the House and which is distinguished by an asterisk mark. An unstarred Question is one which is not called for oral answer in the house and on which no supplementary questions can consequently be asked. An answer to such a question is given in writing. Minimum period of notice for starred / unstarred question is 10 clear days. If the questions given notice of are admitted by the Speaker, they are listed and printed for answer on the dates allotted to the Ministries to which the subject matter of the question pertains.
The normal period of notice does not apply to short notice questions which relate to matters of urgent public importance. However, a Short Notice Question may be answered only on short notice if so permitted by the Speaker and the Minister concerned is prepared to answer it at shorter notice. A short notice question is taken up for answer immediately after the Question Hour, popularly known as Zero Hour.
Zero Hour: The time immediately following the Question Hour has come to be known as "Zero Hour ''. It starts at around 12 noon (hence the name) and members can, with prior notice to the Speaker, raise issues of importance during this time. Typically, discussions on important Bills, the Budget, and other issues of national importance take place from 2 pm onwards.
After the Question Hour, the House takes up miscellaneous items of work before proceeding to the main business of the day. These may consist of one or more of the following: Adjournment Motions, Questions involving breaches of Privileges, Papers to be laid on the Table, Communication of any messages from Rajya Sabha, Intimations regarding President 's assent to Bills, Calling Attention Notices, Matters under Rule 377, Presentation of Reports of Parliamentary Committee, Presentation of Petitions, miscellaneous statements by Ministers, Motions regarding elections to Committees, Bills to be withdrawn or introduced.
The main business of the day may be consideration of a Bill or financial business or consideration of a resolution or a motion.
Legislative proposals in the form of a Bill can be brought forward either by a Minister or by a private member. In the former case it is known as Government Bill and in the latter case it is known as a Private Members ' Bill. Every Bill passes through three stages -- called three readings -- before it is passed. To become law it must be passed by both the Houses of Parliament, Lok Sabha and Rajya Sabha, and then assented to by the president.
The presentation, discussion of, and voting on the annual General and Railways budgets -- followed by the passing of the Appropriations Bill and the Finance Bill -- is a long, drawn - out process that takes up a major part of the time of the House during its Budget Session every year.
Among other kinds of business that come up before the House are resolutions and motions. Resolutions and motions may be brought forward by Government or by private members. Government may move a resolution or a motion for obtaining the sanction to a scheme or opinion of the House on an important matter of policy or on a grave situation. Similarly, a private member may move a resolution or motion in order to draw the attention of the House and of the Government to a particular problem. The last two and half hours of sitting on every Friday are generally allotted for transaction of private members ' business. While private members ' bills are taken up on one Friday, private members ' resolutions are taken up on the succeeding Friday, and so on.
A Half - an - Hour Discussion can be raised on a matter of sufficient public importance which has been the subject of a recent question in Lok Sabha irrespective of the fact whether the question was answered orally or the answer was laid on the Table of the House and the answer which needs elucidation on a matter of fact. Normally not more than half an hour is allowed for such a discussion. Usually, half - an - hour discussion is listed on Mondays, Wednesdays and Fridays only. In one session, a member is allowed to raise not more than two half - an - hour discussions. During the discussion, the member, who has given notice, makes a short statement and not more than four members, who have intimated earlier and have secured one of the four places in the ballot, are permitted to ask a question each for further elucidating any matter of fact. Thereafter, the Minister concerned replies. There is no formal motion before the House nor voting.
Members may raise discussions on matters of urgent public importance with the permission of the Speaker. Such discussions may take place on two days in a week. No formal motion is moved in the House nor is there any voting on such a discussion.
After the member who initiates discussion on an item of business has spoken, other members can speak on that item of business in such order as the Speaker may call upon them. Only one member can speak at a time and all speeches are directed to the Chair. A matter requiring the decision of the House is decided by means of a question put by the Speaker on a motion made by a member.
A division is one of the forms in which the decision of the House is ascertained. Normally, when a motion is put to the House members for and against it indicate their opinion by saying "Aye '' or "No '' from their seats. The Chair goes by the voices and declares that the motion is either accepted or rejected by the House. If a member challenges the decision, the Chair orders that the lobbies be cleared. Then the division bell is rung and an entire network of bells installed in the various parts and rooms in Parliament House and Parliament House Annexe rings continuously for three and a half minutes. Members and Ministers rush to the Chamber from all sides. After the bell stops, all the doors to the Chamber are closed and nobody can enter or leave the Chamber till the division is over. Then the Chair puts the question for second time and declares whether in its opinion the "Ayes '' or the "Noes '', have it. If the opinion so declared is again challenged, the Chair asks the votes to be recorded by operating the Automatic Vote Recording Equipment.
With the announcement of the Speaker for recording the votes, the Secretary - General presses the button of a key board. Then a gong sounds serving as a signal to members for casting their votes. For casting a vote each member present in the Chamber has to press a switch and then operate one of the three push buttons fixed in his seat. The push switch must be kept pressed simultaneously until the gong sounds for the second time after 10 seconds. There are two Indicator Boards installed in the wall on either side of the Speaker 's Chair in the Chamber. Each vote cast by a member is flashed here. Immediately after the votes are cast, they are totalled mechanically and the details of the results are flashed on the Result Indicator Boards installed in the railings of the Speaker 's and Diplomatic Galleries.
Divisions are normally held with the aid of the Automatic Vote Recording Equipment. Where so directed by the Speaker in terms of relevant provision in the Rules of Procedure etc. in Lok Sabha, Divisions may be held either by distribution of ' Aye ' / ' No ' and ' Abstention ' slips to members in the House or by the members recording their votes by going into the lobbies. There is an Indicator Board in the machine room showing the name of each member. The result of Division and vote cast by each member with the aid of Automatic Vote Recording Equipment appear on this Board also. Immediately a photograph of the Indicator Board is taken. Later the Photograph is enlarged and the names of members who voted ' Ayes ' and for ' Noes ' are determined with the help of the photograph and incorporated in Lok Sabha Debates.
Three versions of the Lok Sabha Debates are prepared, viz. the Hindi version, the English version and the Original version. Only the Hindi and English versions are printed. The Original version, in cyclostyled form, is kept in the Parliament Library for record and reference. The Hindi version contains all Questions asked and Answers given thereto in Hindi and the speeches made in Hindi, as well as a verbatim Hindi translation of Questions and Answers and of speeches made in English or in regional languages. The English version contains Lok Sabha proceedings in English and the English translation of the proceedings which take place in Hindi or in any regional language. The Original version, however, contains proceedings in Hindi or in English as they actually take place in the House and also the English / Hindi translation of speeches made in regional languages.
If conflicting legislation is enacted by the two Houses, a joint sitting is held to resolve the differences. In such a session, the members of the Lok Sabha would generally prevail, since the Lok Sabha includes more than twice as many members as the Rajya Sabha.
Speaker and Deputy Speaker
As per Article 93 of the Indian Constitution, the Lok Sabha has a Speaker and a Deputy Speaker. In the Lok Sabha, the lower House of the Indian Parliament, both presiding officers -- the Speaker and the Deputy Speaker -- are elected from among its members by a simple majority of members present and voting in the House. As such, no specific qualifications are prescribed for being elected the Speaker; the Constitution only requires that the Speaker be a member of the House. But an understanding of the Constitution and the laws of the country and the rules of procedure and conventions of Parliament is considered a major asset for the holder of the office of the Speaker. Vacation and resignation of, and removal from, the offices of Speaker and Deputy Speaker are dealt with under Article 94 of the Constitution of India. As per Article 94, a Speaker or a Deputy Speaker should vacate his / her office (a) if he / she ceases to be a member of the House of the People, (b) if he / she resigns, or (c) if he / she is removed from office by a resolution of the House of the People passed by a majority.
The Speaker of Lok Sabha is at once a member of the House and also its Presiding Officer. The Speaker of the Lok Sabha conducts the business in the house. He / she decides whether a bill is a money bill or not. He / she maintains discipline and decorum in the house and can punish a member for their unruly behaviour by suspending them. He / she permits the moving of various kinds of motions and resolutions like the motion of no confidence, motion of adjournment, motion of censure and calling attention notice as per the rules. The Speaker decides on the agenda to be taken up for discussion during the meeting. It is the Speaker of the Lok Sabha who presides over joint sittings called in the event of disagreement between the two Houses on a legislative measure. Following the 52nd Constitution amendment, the Speaker is vested with the power relating to the disqualification of a member of the Lok Sabha on grounds of defection. The Speaker makes obituary references in the House, formal references to important national and international events and the valedictory address at the conclusion of every Session of the Lok Sabha and also when the term of the House expires. Though a member of the House, the Speaker does not vote in the House except on those rare occasions when there is a tie at the end of a decision. Till date, the Speaker of the Lok Sabha has not been called upon to exercise this unique casting vote. While the office of Speaker is vacant due to absence / resignation / removal, the duties of the office shall be performed by the Deputy Speaker or, if the office of Deputy Speaker is also vacant, by such member of the House of the People as the President may appoint for the purpose.
Shri G.V. Mavalankar was the first Speaker of the Lok Sabha (15 May 1952 - 27 February 1956) and Shri M. Ananthasayanam Ayyangar was the first Deputy Speaker (30 May 1952 -- 7 March 1956). In the 16th Lok Sabha, Sumitra Mahajan was elected as Speaker on 3 June 2014, becoming its second woman Speaker, and Shri M. Thambidurai was elected as Deputy Speaker.
The Lok Sabha has also a separate non-elected Secretariat staff.
Lok Sabha is constituted after the general election as follows:
Currently elected members of 16th Lok Sabha by their political party (As of 14 November 2017):
|
who is currently the longest serving supreme court justice | List of United States Supreme Court justices by time in office - wikipedia
A total of 113 Justices have served on the Supreme Court of the United States, the highest judicial body in the United States. Justices have life tenure, and so they serve until they die in office, resign or retire, or are impeached and removed from office (which has never happened; the one impeached Justice was acquitted). The longest - serving member of the current court is Associate Justice Anthony Kennedy, with a term to date of 10,852 days (29 years). For the 104 non-incumbent justices, the mean length of service was 6,112 days (16.7 years) with a standard deviation of 3,620 days (9.9 years). The median length of service was 5,740 days (15.7 years). Their period of service ranges from William O. Douglas 's 13,358 days (36 years) on the Court to the 163 - day tenure of Thomas Johnson.
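The day counts quoted above can be converted into the approximate year figures given in parentheses with a short Python sketch; an average year of 365.25 days is assumed, since the article does not state its own conversion.

```python
# Convert the day counts quoted above into approximate years.
DAYS_PER_YEAR = 365.25  # assumption: average year length, not stated in the article

tenures_days = {
    "Kennedy (term to date)": 10852,
    "Douglas (longest)": 13358,
    "Johnson (shortest)": 163,
    "mean, non-incumbents": 6112,
    "median, non-incumbents": 5740,
}

for label, days in tenures_days.items():
    print(f"{label}: {days / DAYS_PER_YEAR:.1f} years")
# Kennedy ~29.7, Douglas ~36.6, Johnson ~0.4, mean ~16.7, median ~15.7 --
# matching the rounded figures in the paragraph above.
```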
A nominee who was confirmed by the United States Senate but declined to serve, such as Robert H. Harrison, or who died before taking his seat, such as Edwin M. Stanton, is not considered to have served as a Justice. The Term Start date is the day the Justice took the oath of office, with the Term End date being the date of the Justice 's death, resignation, or retirement. A highlighted row indicates that the Justice is currently serving on the Court.
|
when is air force day celebrated in india | Indian Air Force - Wikipedia
The Indian Air Force (IAF; IAST: Bhāratīya Vāyu Senā) is the air arm of the Indian armed forces. Its complement of personnel and aircraft assets ranks fourth amongst the air forces of the world. Its primary mission is to secure Indian airspace and to conduct aerial warfare during armed conflict. It was officially established on 8 October 1932 as an auxiliary air force of the British Empire, which honored India's aviation service during World War II by conferring the prefix Royal. After India gained independence from the United Kingdom in 1947, the name Royal Indian Air Force was retained and the force served the Dominion of India. With the country's transition to a republic in 1950, the prefix Royal was removed after only three years.
Since 1950 the IAF has been involved in four wars with neighboring Pakistan and one with the People 's Republic of China. Other major operations undertaken by the IAF include Operation Vijay, Operation Meghdoot, Operation Cactus and Operation Poomalai. The IAF 's mission expands beyond engagement with hostile forces, with the IAF participating in United Nations peacekeeping missions.
The President of India holds the rank of Supreme Commander of the IAF. As of 1 July 2017, 139,576 personnel are in service with the Indian Air Force. The Chief of Air Staff, an air chief marshal, is a four - star officer and is responsible for the bulk of operational command of the Air Force. There is never more than one serving ACM at any given time in the IAF. The rank of Marshal of the Air Force has been conferred by the President of India on only one occasion in history, to Arjan Singh. On 26 January 2002 Singh became the first and, so far, only five - star rank officer of the IAF.
The IAF 's mission is defined by the Armed Forces Act of 1947, the Constitution of India, and the Air Force Act of 1950. It decrees that in the aerial battlespace:
Defence of India and every part there of including preparation for defence and all such acts as may be conducive in times of war to its prosecution and after its termination to effective demobilisation.
In practice, this is taken as a directive meaning the IAF bears the responsibility of safeguarding Indian airspace and thus furthering national interests in conjunction with the other branches of the armed forces. The IAF provides close air support to the Indian Army troops on the battlefield as well as strategic and tactical airlift capabilities. The Integrated Space Cell is operated by the Indian Armed Forces, the civilian Department of Space, and the Indian Space Research Organisation. By uniting the civilian run space exploration organizations and the military faculty under a single Integrated Space Cell the military is able to efficiently benefit from innovation in the civilian sector of space exploration, and the civilian departments benefit as well.
The Indian Air Force, with highly trained crews and pilots and access to modern military assets, gives India the capacity to conduct rapid-response evacuation, search - and - rescue (SAR) operations, and delivery of relief supplies to affected areas via cargo aircraft. The IAF provided extensive assistance to relief operations during natural calamities such as the Gujarat cyclone in 1998, the tsunami in 2004, and the North India floods in 2013. The IAF has also undertaken relief missions such as Operation Rainbow in Sri Lanka.
The Indian Air Force was established on 8 October 1932 in British India as an auxiliary air force of the Royal Air Force. The enactment of the Indian Air Force Act 1932 stipulated its auxiliary status and enforced the adoption of Royal Air Force uniforms, badges, brevets and insignia. On 1 April 1933, the IAF commissioned its first squadron, No. 1 Squadron, with four Westland Wapiti biplanes and five Indian pilots. The Indian pilots were led by British RAF commanding officer Flight Lieutenant (later Air Vice Marshal) Cecil Bouchier.
During World War II, the IAF played an instrumental role in halting the advance of the Japanese army in Burma, where the first IAF air strike was executed. The target for this first mission was the Japanese military base in Arakan, after which IAF strike missions continued against the Japanese airbases at Mae Hong Son, Chiang Mai and Chiang Rai in northern Thailand.
The IAF was mainly involved in strike, close air support, aerial reconnaissance, bomber escort and pathfinding missions for RAF and USAAF heavy bombers. RAF and IAF pilots would train by flying with their non-native air wings to gain combat experience and communication proficiency. IAF pilots participated in air operations in Europe as part of the RAF.
During the war, the IAF experienced a phase of steady expansion. New aircraft added to the fleet included the US - built Vultee Vengeance, Douglas DC - 3, the British Hawker Hurricane, Supermarine Spitfire, and Westland Lysander.
In recognition of the valiant service by the IAF, King George VI conferred the prefix "Royal '' in 1945. Thereafter the IAF was referred to as the Royal Indian Air Force. In 1950, when India became a republic, the prefix was dropped and it reverted to being the Indian Air Force.
After it became independent from the British Empire in 1947, British India was partitioned into the new states of the Dominion of India and the Dominion of Pakistan. Along the lines of the geographical partition, the assets of the air force were divided between the new countries. India 's air force retained the name of the Royal Indian Air Force, but three of the ten operational squadrons and facilities, located within the borders of Pakistan, were transferred to the Royal Pakistan Air Force. The RIAF Roundel was changed to an interim ' Chakra ' roundel derived from the Ashoka Chakra.
Around the same time, conflict broke out between them over the control of the princely state of Jammu & Kashmir. With Pakistani forces moving into the state, its Maharaja decided to accede to India in order to receive military help. The day after the Instrument of Accession was signed, the RIAF was called upon to transport troops into the war zone, a task that depended heavily on sound logistics management. This led to the eruption of full - scale war between India and Pakistan, though there was no formal declaration of war. During the war, the RIAF did not engage the Pakistan Air Force in air - to - air combat; however, it did provide effective transport and close air support to the Indian troops.
When India became a republic in 1950, the prefix ' Royal ' was dropped from the Indian Air Force. At the same time, the current IAF roundel was adopted.
The IAF saw significant conflict in 1960, when Belgium 's 75 - year rule over Congo ended abruptly, engulfing the nation in widespread violence and rebellion. The IAF activated No. 5 Squadron, equipped with English Electric Canberra, to support the United Nations Operation in the Congo. The squadron started undertaking operational missions in November. The unit remained there until 1966, when the UN mission ended. Operating from Leopoldville and Kamina, the Canberras soon destroyed the rebel Air Force and provided the UN ground forces with its only long - range air support force.
In late 1961, the Indian government decided to attack the Portuguese colony of Goa after years of disagreement between New Delhi and Lisbon. The Indian Air Force was requested to provide support elements to the ground force in what was called Operation Vijay. Probing flights by some fighters and bombers were carried out from 8 -- 18 December to draw out the Portuguese Air Force, but to no avail. On 18 December, two waves of Canberra bombers bombed the runway of Dabolim airfield, taking care not to bomb the terminals and the ATC tower. Two Portuguese transport aircraft (a Super Constellation and a DC - 6) found on the airfield were left alone so that they could be captured intact. However, the Portuguese pilots managed to fly the aircraft out of the still - damaged airfield and made their getaway to Portugal. Hunters attacked the wireless station at Bambolim. Vampires were used to provide air support to the ground forces. In Daman, Mystères were used to strike Portuguese gun positions. Ouragans (called Toofanis in the IAF) bombed the runways at Diu and destroyed the control tower, wireless station and the meteorological station. After the Portuguese surrendered, the former colony was integrated into India.
In 1962, border disagreements between China and India escalated to a war when China mobilised its troops across the Indian border. During the Sino - Indian War, India 's military planners failed to deploy and effectively use the IAF against the invading Chinese forces. This resulted in India losing a significant amount of advantage to the Chinese; especially in Jammu and Kashmir.
Three years after the Sino - Indian conflict, in 1965, Pakistan launched Operation Gibraltar, its strategy to infiltrate Jammu and Kashmir and start a rebellion against Indian rule. This came to be known as the Second Kashmir War. This was the first time the IAF actively engaged an enemy air force. However, instead of providing close air support to the Indian Army, the IAF carried out independent raids against PAF bases. These bases were situated deep inside Pakistani territory, making IAF fighters vulnerable to anti-aircraft fire. During the course of the conflict, the PAF enjoyed technological superiority over the IAF and had achieved substantial strategic and tactical advantage due to its sudden attack and wholehearted diplomatic and military support from the US and Britain. The IAF was restrained by the government from retaliating against PAF attacks in the eastern sector, while a substantive part of its combat force remained deployed there, against the possibility of Chinese intervention, and could not be transferred to the western sector. Moreover, international (UN) stipulations and norms did not permit military force to be introduced into the Indian state of J&K beyond what was agreed during the 1949 ceasefire. Despite this, the IAF was able to prevent the PAF from gaining air superiority over conflict zones. The small and nimble IAF Folland Gnats proved effective against the F - 86 Sabres of the PAF, earning the type the nickname "Sabre Slayers ''. By the time the conflict had ended, the IAF had lost 60 -- 70 aircraft, while the PAF lost 43 aircraft. More than 60 % of the IAF 's aircraft losses occurred in ground - attack missions, to enemy ground fire, since fighter - bomber aircraft would carry out repeated dive attacks on the same target. According to Air Chief Marshal Arjan Singh of the Indian Air Force, despite having been qualitatively inferior, the IAF achieved air superiority in three days in the 1965 war.
After the 1965 war, the IAF underwent a series of changes to improve its capabilities. In 1966, the Para Commandos regiment was created. To increase its logistics supply and rescue operations ability, the IAF inducted 72 HS 748s which were built by Hindustan Aeronautics Limited (HAL) under license from Avro. India started to put more stress on indigenous manufacture of fighter aircraft. As a result, HAL HF - 24 Marut, designed by the famed German aerospace engineer Kurt Tank, were inducted into the air force. HAL also started developing an improved version of the Folland Gnat, known as HAL Ajeet. At the same time, the IAF also started inducting Mach 2 capable Soviet MiG - 21 and Sukhoi Su - 7 fighters.
By late 1971, the intensification of the independence movement in erstwhile East Pakistan led to the Bangladesh Liberation War between India and Pakistan. On 22 November 1971, 10 days before the start of a full - scale war, four PAF F - 86 Sabre jets attacked Indian and Mukti Bahini positions at Garibpur, near the international border. Two of the four PAF Sabres were shot down and one damaged by the IAF 's Folland Gnats. On 3 December, India formally declared war against Pakistan following massive preemptive strikes by the PAF against Indian Air Force installations in Srinagar, Ambala, Sirsa, Halwara and Jodhpur. However, the IAF did not suffer significantly because the leadership had anticipated such a move and precautions were taken. The Indian Air Force was quick to respond to Pakistani air strikes, following which the PAF carried out mostly defensive sorties.
Within the first two weeks, the IAF had carried out almost 12,000 sorties over East Pakistan and also provided close air support to the advancing Indian Army. The IAF also assisted the Indian Navy in its operations against the Pakistani Navy and Maritime Security Agency in the Bay of Bengal and Arabian Sea. On the western front, the IAF destroyed more than 20 Pakistani tanks, 4 APCs and a supply train during the Battle of Longewala. The IAF undertook strategic bombing of West Pakistan by carrying out raids on oil installations in Karachi, the Mangla Dam and a gas plant in Sindh. A similar strategy was also deployed in East Pakistan and, as the IAF achieved complete air superiority on the eastern front, the ordnance factories, runways, and other vital areas of East Pakistan were severely damaged. By the time Pakistani forces surrendered, the IAF had destroyed 94 PAF aircraft. The IAF was able to conduct a wide range of missions -- troop support; air combat; deep penetration strikes; para-dropping behind enemy lines; feints to draw enemy fighters away from the actual target; bombing; and reconnaissance. In contrast, the Pakistan Air Force, which was solely focused on air combat, was blown out of the subcontinent 's skies within the first week of the war. Those PAF aircraft that survived took refuge at Iranian air bases or in concrete bunkers, refusing to offer a fight. Hostilities officially ended at 14: 30 GMT on 17 December, after the fall of Dacca on 15 December. India claimed large gains of territory in West Pakistan (although pre-war boundaries were recognised after the war), and the independence of Pakistan 's East wing as Bangladesh was confirmed. The IAF had flown over 16,000 sorties on both East and West fronts, including sorties by transport aircraft and helicopters, while the PAF flew about 30 and 2,840 sorties in the East and West respectively. More than 80 percent of the IAF 's sorties were close - support and interdiction, and according to neutral assessments about 45 IAF aircraft were lost, while Pakistan lost 75 aircraft, not including any F - 6s, Mirage IIIs, or the six Jordanian F - 104s which failed to return to their donors. The imbalance in air losses was explained by the IAF 's considerably higher sortie rate and its emphasis on ground - attack missions. On the ground Pakistan suffered most, with 9,000 killed and 25,000 wounded, while India lost 3,000 dead and 12,000 wounded. The loss of armoured vehicles was similarly imbalanced. This represented a major defeat for Pakistan. Towards the end of the war, IAF 's transport planes dropped leaflets over Dhaka urging the Pakistani forces to surrender, demoralising Pakistani troops in East Pakistan.
In 1984, India launched Operation Meghdoot to capture the Siachen Glacier in the contested Kashmir region. In Op Meghdoot, IAF 's Mi - 8, Chetak and Cheetah helicopters airlifted hundreds of Indian troops to Siachen. Launched on 13 April 1984, this military operation was unique because of Siachen 's inhospitable terrain and climate. The military action was successful, given the fact that under a previous agreement, neither Pakistan nor India had stationed any personnel in the area. With India 's successful Operation Meghdoot, it gained control of the Siachen Glacier. India has established control over all of the 70 kilometres (43 mi) long Siachen Glacier and all of its tributary glaciers, as well as the three main passes of the Saltoro Ridge immediately west of the glacier -- Sia La, Bilafond La, and Gyong La. Pakistan controls the glacial valleys immediately west of the Saltoro Ridge. According to TIME magazine, India gained more than 1,000 square miles (3,000 km²) of territory because of its military operations in Siachen.
Following the inability to negotiate an end to the Sri Lankan Civil War, and to provide humanitarian aid through an unarmed convoy of ships, the Indian Government decided to carry out an airdrop of the humanitarian supplies on the evening of 4 June 1987 designated Operation Poomalai (Tamil: Garland) or Eagle Mission 4. Five An - 32s escorted by four Mirage 2000 of 7 Sqn AF, ' The Battleaxes ', carried out the supply drop which faced no opposition from the Sri Lankan Armed Forces. Another Mirage 2000 orbited 150 km away, acting as an airborne relay of messages to the entire fleet since they would be outside radio range once they descended to low levels. The Mirage 2000 escort formation was led by Wg Cdr Ajit Bhavnani, with Sqn Ldrs Bakshi, NA Moitra and JS Panesar as his team members and Sqn Ldr KG Bewoor as the relay pilot. Sri Lanka accused India of "blatant violation of sovereignty ''. India insisted that it was acting only on humanitarian grounds.
In 1987, the IAF supported the Indian Peace Keeping Force (IPKF) in northern and eastern Sri Lanka in Operation Pawan. About 70,000 sorties were flown by the IAF 's transport and helicopter force in support of nearly 100,000 troops and paramilitary forces without a single aircraft lost or mission aborted. IAF An - 32s maintained a continuous air link between air bases in South India and Northern Sri Lanka transporting men, equipment, rations and evacuating casualties. Mi - 8s supported the ground forces and also provided air transportation to the Sri Lankan civil administration during the elections. Mi - 25s of No. 125 Helicopter Unit were utilised to provide suppressive fire against militant strong points and to interdict coastal and clandestine riverine traffic.
On the night of 3 November 1988, the Indian Air Force mounted special operations to airlift a parachute battalion group from Agra, non-stop over 2,000 kilometres to the remote Indian Ocean archipelago of the Maldives in response to Maldivian president Gayoom 's request for military help against a mercenary invasion in Operation Cactus. The IL - 76s of No. 44 Squadron landed at Hulhule at 0030 hours and the Indian paratroopers secured the airfield and restored Government rule at Male within hours. Four Mirage 2000 aircraft of 7 Sqn, led by Wg Cdr AV ' Doc ' Vaidya, carried out a show of force early that morning, making low - level passes over the islands.
On 11 May 1999, the Indian Air Force was called in to provide close air support, using helicopters, to the Indian Army at the height of the ongoing Kargil conflict. The IAF strike was code named Operation Safed Sagar. The first strikes were launched on 26 May, when the Indian Air Force struck infiltrator positions with fighter aircraft and helicopter gunships. The initial strikes saw MiG - 27s carrying out offensive sorties, with MiG - 21s and later MiG - 29s providing fighter cover. The IAF also deployed its radars and the MiG - 29 fighters in vast numbers to keep check on Pakistani military movements across the border. Srinagar Airport was at this time closed to civilian air - traffic and dedicated to the Indian Air Force.
On 27 May, the Indian Air Force suffered its first fatality when it lost a MiG - 21 and a MiG - 27 in quick succession. The following day, while on an offensive sortie, a Mi - 17 was shot down by three Stinger missiles and lost its entire crew of four. Following these losses the IAF immediately withdrew helicopters from offensive roles as a measure against the threat of man - portable air - defence systems (MANPADs). On 30 May, the Mirage 2000s were introduced in an offensive role, as they were deemed better in performance under the high - altitude conditions of the conflict zone. Mirage 2000s were not only better equipped to counter the MANPAD threat compared to the MiGs, but also gave the IAF the ability to carry out aerial raids at night. The MiG - 29s were used extensively to provide fighter escort to the Mirage 2000. Radar transmissions of Pakistani F - 16s were picked up repeatedly, but these aircraft stayed away. The Mirages successfully targeted enemy camps and logistic bases in Kargil and severely disrupted their supply lines. Mirage 2000s were used for strikes on Muntho Dhalo and the heavily defended Tiger Hill and paved the way for their early recapture. At the height of the conflict, the IAF was conducting over forty sorties daily over the Kargil region. By 26 July, the Indian forces had successfully repulsed the Pakistani forces from Kargil.
On 10 August 1999, IAF MiG - 21s intercepted a Pakistan Navy Breguet Atlantique which was flying over Sir Creek, a disputed territory. The aircraft was shot down, killing all 16 Pakistani Navy personnel on board. India claimed that the Atlantique was on a mission to gather information on IAF air defence, a charge emphatically rejected by Pakistan, which argued that the unarmed aircraft was on a training mission.
Since the late 1990s, the Indian Air Force has been modernising its fleet to counter challenges in the new century. The fleet size of the IAF has decreased to 33 squadrons during this period because of the retirement of older aircraft. Still, India maintains the fourth largest air force in the world. The IAF plans to raise its strength to 42 squadrons. Self - reliance is the main aim that is being pursued by the defence research and manufacturing agencies.
On 20 August 2013, the Indian Air Force created a world record by performing the highest landing of a C - 130J at the Daulat Beg Oldi airstrip in Ladakh, at a height of 16,614 feet (5,065 metres). The medium - lift aircraft will be used to deliver troops and supplies and to improve communication networks. The aircraft belonged to the Veiled Vipers squadron based at Hindon Air Force Station.
On 13 July 2014, two MiG - 21s were sent from Jodhpur Air Base to investigate a Turkish Airlines aircraft over Jaisalmer when it repeated an identification code already provided by another commercial passenger plane that had entered Indian airspace before it. The flights were on their way to Mumbai and Delhi, and the planes were later allowed to proceed after their credentials were verified.
On 25 July 2014, an advanced light helicopter crashed in a field near Sitapur in Uttar Pradesh, on its way to Allahabad from Bareilly. At least 7 people were killed as a result.
On 28 March 2014, C - 130J - 30 KC - 3803 crashed near Gwalior, India, killing all 5 personnel aboard. The aircraft was conducting low level penetration training by flying at around 300 ft when it ran into wake turbulence from another aircraft in the formation, which caused it to crash.
On 2 January 2016, the Pathankot Air Force Station was attacked by terrorists resulting in seven casualties.
On 22 November 2017 at 10: 40 AM, the IAF conducted the first air launch of the Mach 2.8 BrahMos surface - attack missile.
The President of India is the Supreme Commander of all Indian armed forces and by virtue of that fact is the national Commander - in - chief of the Air Force. The Chief of the Air Staff, with the rank of air chief marshal, is the Commander of the Indian Air Force. He is assisted by six officers, all with the rank of air marshal.
In January 2002, the government conferred the rank of Marshal of the Air Force on Arjan Singh making him the first and only Five - star officer with the Indian Air Force and ceremonial chief of the air force.
The Indian Air Force is divided into five operational and two functional commands. Each Command is headed by an Air Officer Commanding - in - Chief with the rank of Air Marshal. The purpose of an operational command is to conduct military operations using aircraft within its area of responsibility, whereas the responsibility of functional commands is to maintain combat readiness. Aside from the Training Command at Bangalore, the primary flight training is done at the Air Force Academy, Dundigul (located in Hyderabad), followed by operational training at various other schools. Advanced officer training for command positions is also conducted at the Defence Services Staff College; specialised advanced flight training schools are located at Bidar, Karnataka and Hakimpet, Telangana (also the location for helicopter training). Technical schools are found at a number of other locations.
Within each operational command are anywhere from nine to sixteen bases or stations, each commanded by an air commodore. A station typically has one wing and one or two squadrons assigned to it.
A wing is a formation intermediate between a command and a squadron. It generally consists of two or three IAF squadrons and helicopter units, along with forward base support units (FBSU). FBSUs do not have or host any squadrons or helicopter units but act as transit airbases for routine operations. In times of war, they can become fully fledged air bases playing host to various squadrons. In all, about 47 wings and 19 FBSUs make up the IAF. Wings are typically commanded by a group captain.
Squadrons are the field units and formations attached to static locations. Thus, a flying squadron or unit is a sub-unit of an air force station which carries out the primary task of the IAF. A fighter squadron consists of 18 aircraft; all fighter squadrons are headed by a commanding officer with the rank of wing commander. Some transport squadrons and helicopter units are headed by a commanding officer with the rank of group captain.
Flights are sub-divisions of squadrons, commanded by a squadron leader. Each flight consists of two sections.
The smallest unit is the section, led by a flight lieutenant. Each section consists of three aircraft.
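Taken together, the complements stated above imply a simple arithmetic for a fighter squadron 's structure; the breakdown below is an inference from those figures, not an official establishment table:

\[
3\ \text{aircraft per section} \times 2\ \text{sections per flight} = 6\ \text{aircraft per flight}, \qquad \frac{18\ \text{aircraft per squadron}}{6\ \text{aircraft per flight}} = 3\ \text{flights per squadron}.
\]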
Within this formation structure, the IAF has several service branches for day - to - day operations.
In September 2004, the IAF established its own special operations unit called the Garud Commando Force, consisting of approximately 1,500 personnel. To raise this special force, volunteers from existing trades were called and sent for commando and specialised training at various institutes of the army and other forces; airmen who successfully completed all courses were inducted into the Garud force. At the same time, special recruitment and selection from various IAF training institutes were carried out to bring young air warriors into the Garud SF. In this way the IAF obtained two sets of personnel for its special forces: experienced senior airmen who had served in various IAF units, and younger airmen who could be groomed and brought up to special forces standards. The unit derives its name from Garuda, a divine mythical bird of Hindu mythology. Garud is tasked with the protection of critical installations; during hostilities, Garuds undertake combat search and rescue, rescue of downed airmen and other forces from behind enemy lines, suppression of enemy air defence (SEAD), radar busting, combat control, missile and munitions guidance ("lasing '' of targets) and other missions in support of air operations. It has been suggested that they undertake an offensive role, including raids on enemy air bases, during times of war.
Apart from protecting air bases from sabotage and attacks by commando raids, they are also tasked to seal off weapons systems, fighter hangars and other major systems during intrusions and conflicts, and to provide disaster relief during calamities.
An Integrated Space Cell, which will be jointly operated by all the three services of the Indian armed forces, the civilian Department of Space and the Indian Space Research Organisation (ISRO) has been set up to utilise more effectively the country 's space - based assets for military purposes. This command will leverage space technology including satellites. Unlike an aerospace command, where the air force controls most of its activities, the Integrated Space Cell envisages co-operation and co-ordination between the three services as well as civilian agencies dealing with space.
India currently has 10 remote sensing satellites in orbit. Though most are not meant to be dedicated military satellites, some have a spatial resolution of 1 metre or below, which can also be used for military applications. Noteworthy satellites include the Technology Experiment Satellite (TES), which has a panchromatic camera (PAN) with a resolution of 1 metre; the RISAT - 2, which is capable of imaging in all - weather conditions and has a resolution of one metre; and the CARTOSAT - 2, CARTOSAT - 2A and CARTOSAT - 2B, which carry panchromatic cameras with a resolution of 80 centimetres (black and white only).
The Surya Kiran Aerobatic Team (SKAT) (Surya Kiran is Sanskrit for Sun Rays) is an aerobatics demonstration team of the Indian Air Force. They were formed in 1996 and are successors to the Thunderbolts. The team has a total of 13 pilots (selected from the fighter stream of the IAF) and operate 9 HAL HJT - 16 Kiran Mk. 2 trainer aircraft painted in a "day - glo orange '' and white colour scheme. The Surya Kiran team were conferred squadron status in 2006, and presently have the designation of 52 Squadron ("The Sharks ''). The team is based at the Indian Air Force Station at Bidar. The IAF has begun the process of converting Surya Kirans to BAE Hawks.
Sarang (Sanskrit for Peacock) is the Helicopter Display Team of the Indian Air Force. The team was formed in October 2003 and their first public performance was at the Asian Aerospace Show, Singapore, 2004. The team flies four HAL Dhruvs painted in red and white with a peacock figure at each side of the fuselage. The team is based at the Indian Air Force base at Air Force Station Sulur, Coimbatore.
Over the years reliable sources provided notably divergent estimates of the personnel strength of the Indian Air Force after analysing open - source intelligence. The public policy organisation GlobalSecurity.org had estimated that the IAF had an estimated strength of 110,000 active personnel in 1994. In 2006, Anthony Cordesman estimated that strength to be 170,000 in the International Institute for Strategic Studies (IISS) publication "The Asian Conventional Military Balance in 2006 ''. In 2010, James Hackett revised that estimate to an approximate strength of 127,000 active personnel in the IISS publication "Military Balance 2010 ''.
As of 1 July 2017, the Indian Air Force has a sanctioned strength of 12,550 officers (12,404 serving with 146 under strength), and 142,529 airmen (127,172 serving with 15,357 under strength).
The rank structure of the Indian Air Force is based on that of the Royal Air Force. The highest rank attainable in the IAF is Marshal of the Indian Air Force, conferred by the President of India after exceptional service during wartime. MIAF Arjan Singh is the only officer to have achieved this rank. The head of the Indian Air Force is the Chief of the Air Staff, who holds the rank of Air Chief Marshal.
Anyone holding Indian citizenship can apply to be an officer in the Air Force as long as they satisfy the eligibility criteria. There are four entry points to become an officer. Male applicants, who are between the ages of 16½ and 19 and have passed high school graduation, can apply at the Intermediate level. Men and women applicants, who have graduated from college (three - year course) and are between the ages of 18 and 28, can apply at the Graduate level entry. Graduates of engineering colleges can apply at the Engineer level if they are between the ages of 18 and 28 years. The age limit for the flying and ground duty branch is 23 years of age and for technical branch is 28 years of age. After completing a master 's degree, men and women between the ages of 18 and 28 years can apply at the Post Graduate level. Post graduate applicants do not qualify for the flying branch. For the technical branch the age limit is 28 years and for the ground duty branch it is 25. At the time of application, all applicants below 25 years of age must be single. The IAF selects candidates for officer training from these applicants. After completion of training, a candidate is commissioned as a Flying Officer.
The duty of an airman is to make sure that all the air and ground operations run smoothly. From operating Air Defence systems to fitting missiles, they are involved in all activities of an air base and give support to various technical and non-technical jobs. The airmen of Technical trades are responsible for maintaining, repairing and preparing for use the propulsion systems of aircraft and other airborne weapon delivery systems, radar, voice / data transmission and reception equipment, the latest airborne weapon delivery systems, all types of light mechanical, hydraulic and pneumatic systems of airborne missiles, aero engines, aircraft fuelling equipment and heavy duty mechanical vehicles, cranes and loading equipment. The competent and qualified airmen from Technical trades also participate in flying as Flight Engineers, Flight Signallers and Flight Gunners. The recruitment of personnel below officer rank is conducted through All India Selection Tests and recruitment rallies. All India Selection Tests are conducted at the 15 Airmen Selection Centres (ASCs) located all over India. These centres are under the direct functional control of the Central Airmen Selection Board (CASB), with administrative control and support by the respective commands. The role of the CASB is to carry out selection and enrolment of airmen from the Airmen Selection Centres for their respective commands. Candidates initially take a written test at the time of application. Those passing the written test undergo a physical fitness test, an interview conducted in English, and a medical examination. Candidates for training are selected from individuals passing the battery of tests, on the basis of their performance. Upon completion of training, an individual becomes an Airman. Some MWOs and WOs are granted an honorary commission in the last year of their service as an honorary Flying Officer or Flight Lieutenant before retiring from the service.
Non-combatants enrolled (NCs (E)) were established in British India as personal assistants to the officer class, and are equivalent to the orderly or sahayak of the Indian Army.
Almost all the commands have some percentage of civilian strength, who are central government employees. These are regular ranks which are prevalent in ministries. They are usually not posted outside their stations and are employed in administrative and non-technical work.
The Indian Armed Forces have set up numerous military academies across India for training its personnel, such as the National Defence Academy (NDA). Besides the tri-service institutions, the Indian Air Force has a Training Command and several training establishments. While technical and other support staff are trained at various Ground Training Schools, the pilots are trained at the Air Force Academy, Dundigul (located in Hyderabad). The Pilot Training Establishment at Allahabad, the Air Force Administrative College at Coimbatore, the Institute of Aerospace Medicine at Bangalore, the Air Force Technical College, Bangalore at Jalahalli, the Tactics and Air Combat and Defence Establishment at Gwalior, and the Paratrooper 's Training School at Agra are some of the other training establishments of the IAF.
The Indian Air Force has aircraft and equipment of Russian (erstwhile Soviet Union), British, French, Israeli, US and Indian origins, with Russian aircraft dominating its inventory. HAL produces some of the Russian and British aircraft in India under licence. The exact number of aircraft in service with the Indian Air Force can not be determined with precision from open sources. Various reliable sources provide notably divergent estimates for a variety of high - visibility aircraft. Flight International estimates there to be around 1,721 aircraft in service with the IAF, while the International Institute for Strategic Studies provides a similar estimate of 1,724 aircraft. Both sources agree there are approximately 900 combat capable (fighter, attack etc.) aircraft in the IAF.
The IAF is currently training crews to operate the indigenously developed DRDO AEW&CS flying on the Embraer ERJ 145 aircraft. The IAF also operates the EL / W - 2090 Phalcon AEW&C incorporated in a Beriev A-50 platform. A total of 3 such systems are currently in service, with possible orders for 2 more. The two extra Phalcons are currently under negotiation over price differences between Russia and India. India is also going ahead with Project India, an in - house AWACS program to develop and deliver 6 Phalcon - class AWACS, based on DRDO work on the smaller AEW&CS.
The IAF currently operates 7 Ilyushin Il - 78 MKIs in the aerial refuelling (tanker) role.
For strategic airlift operations the IAF uses the Ilyushin Il - 76, known as Gajraj (Hindi for King Elephant) in Indian service. The IAF operated 17 Il - 76s in 2010, which are in the process of being replaced by C - 17 Globemaster IIIs.
The IAF C - 130Js are used by special forces for combined Army - Air Force operations. India purchased six C - 130Js; however one crashed at Gwalior on 28 March 2014 while on a training mission, killing all 5 on board and destroying the aircraft. The Antonov An - 32, known in Indian service as the Sutlej (named after Sutlej River), serves as a medium transport aircraft in the IAF. The aircraft is also used in bombing roles and para-dropping operations. The IAF currently operates 105 An - 32s, all of which are being upgraded. The Dornier Do 228 serves as light transport aircraft in the IAF. The IAF also operates Boeing 737s and Embraer ECJ - 135 Legacy aircraft as VIP transports and passenger airliners for troops. Other VIP transport aircraft are used for both the President of India and the Prime Minister of India under the call sign Air India One.
The Hawker Siddeley HS 748 once formed the backbone of the IAF 's transport fleet, but are now used mainly for training and communication duties. A replacement is under consideration.
The HAL HPT - 32 Deepak is the IAF 's basic flight training aircraft for cadets. The HPT - 32 was grounded in July 2009 following a crash that killed two senior flight instructors, but was revived in May 2010 and is to be fitted with a parachute recovery system (PRS) to enhance survivability during an emergency in the air and to bring the trainer down safely. The HPT - 32 is to be phased out soon and has been replaced in the basic training role by the Swiss Pilatus PC - 7 MkII. The IAF uses the HAL HJT - 16 Kiran mk. I for intermediate flight training of cadets, while the HJT - 16 Kiran mk. II provides advanced flight and weapons training. The HAL HJT - 16 Kiran Mk. 2 is also operated by the Surya Kiran Aerobatic Team (SKAT) of the IAF. The Kiran is to be replaced by the HAL HJT - 36 Sitara. The BAE Hawk Mk 132 serves as an advanced jet trainer in the IAF and is progressively replacing the Kiran Mk. II. The IAF has begun the process of converting the Surya Kiran display team to Hawks. A total of 106 BAE Hawk trainers have been ordered by the IAF, of which 39 have entered service as of July 2010. The IAF has also ordered 72 Pipistrel Virus SW 80 microlight aircraft for basic training purposes.
The HAL Dhruv serves primarily as a light utility helicopter in the IAF. In addition to transport and utility roles, newer Dhruvs are also used as attack helicopters. 4 Dhruvs are also operated by the Indian Air Force Sarang Helicopter Display Team. The HAL Chetak is a light utility helicopter and is used primarily for training, rescue and light transport roles in the IAF. The HAL Chetak is being gradually replaced by HAL Dhruv. The HAL Cheetah is a light utility helicopter used for high altitude operations. It is used for both transport and search - and - rescue missions in the IAF.
The Mil Mi - 8 and the Mil Mi - 17, Mi - 17 1V and Mi - 17V 5 are operated by the IAF for medium lift strategic and utility roles. The Mi - 8 is being progressively replaced by the Mi - 17 series of helicopters. The IAF has ordered 22 Boeing AH - 64E Apache attack helicopters, 68 HAL Light Combat Helicopters (LCH), 35 HAL Rudra attack helicopters, 15 CH - 47F Chinook heavy lift helicopters and 150 Mi - 17V - 5s to replace and augment its existing fleet of Mi - 8s, Mi - 17s and Mi - 24s. The Mil Mi - 26 serves as a heavy lift helicopter in the IAF. It can also be used to transport troops or as a flying ambulance. The IAF currently operates 3 Mi - 26s.
The Mil Mi - 35 serves primarily as an attack helicopter in the IAF. The Mil Mi - 35 can also act as a low - capacity troop transport. The IAF currently operates 2 squadrons (No. 104 Firebirds and No. 125 Gladiators) of Mi - 25 / 35s.
The IAF currently uses the IAI Searcher II and IAI Heron for reconnaissance and surveillance purposes. The IAI Harpy serves as an Unmanned Combat Aerial Vehicle (UCAV) which is designed to attack radar systems. The IAF also operates the DRDO Lakshya which serves as realistic towed aerial sub-targets for live fire training.
The SPYDER (Surface - to - air PYthon and DERby) is an Israeli short and medium range mobile air defence system developed by Rafael Advanced Defense Systems with assistance from Israel Aerospace Industries (IAI). The SPYDER is a low - level, quick - reaction surface - to - air missile system capable of engaging aircraft, helicopters, unmanned air vehicles, drones, and precision - guided munitions. It provides air defence for fixed assets and for point and area defence for mobile forces in combat areas. Six SPYDER - MRs along with 300 Python - 5 surface - to - air missiles (SAMs) and 300 Derby SAMs are in service with the Indian Air Force.
The S - 125 Pechora and the 9K33 Osa surface - to - air missile systems in service are being replaced with the Akash medium range surface - to - air missile system. A total of 8 squadrons have been ordered so far, out of which 2 squadrons have been delivered and stationed at Gwalior and Pune.
The IAF currently operates the Prithvi - II short - range ballistic missile (SRBM). The Prithvi - II is an IAF - specific variant of the Prithvi ballistic missile.
The number of aircraft in the IAF has been decreasing from the late 1990s due to the retirement of older aircraft and several crashes. To deal with the depletion of force levels, the IAF has started to modernise its fleet. This includes both the upgrade of existing aircraft, equipment and infrastructure as well as induction of new aircraft and equipment, both indigenous and imported. As new aircraft enter service and numbers recover, the IAF plans to have a fleet of 42 squadrons.
On 3 January 2017, Minister of Defence Manohar Parrikar addressed a media conference and announced plans for a competition to select a Strategic Partner to deliver "... 200 new single engine fighters to be made in India, which will easily cost around (USD) $45 million apiece without weaponry '', with an expectation that Lockheed Martin (USA) and Saab (Sweden) will pitch the F - 16 Block 70 and Gripen, respectively. An MoD official said that a global tender will be put to market in the first quarter of 2018, with a private company nominated as the strategic partner 's production agency, followed by a two or more year process to evaluate technical and financial bids and conduct trials, before the final government - to - government deal in 2021. This represents 11 squadrons of aircraft plus several ' attrition ' aircraft. India is also planning to set up an assembly line for the American Lockheed Martin F - 16 Fighting Falcon Block 70 in Bengaluru. It is not yet confirmed whether the IAF will induct these aircraft or not.
In 2018, the current defence minister, Nirmala Sitharaman, gave the go - ahead to scale up the manufacturing of the Tejas at HAL and also to export the Tejas. She was quoted as saying, "We are not ditching the LCA. We have not gone for anything instead of Tejas. We are very confident that Tejas Mark II will be a big leap forward to fulfil the single engine fighter requirement of the forces. '' The IAF committed to buy 201 Mark - II variants of the Tejas, taking the total order of Tejas to 324. The government also scrapped the plan to import single engine fighters, leading to a reduction in reliance on imports and thereby strengthening the domestic defence industry.
The IAF also submitted a request for information to international suppliers for a stealth unmanned combat air vehicle (UCAV).
The IAF has placed orders for 120 HAL Tejas fighters, 36 Dassault Rafale multi-role fighters, 112 Pilatus PC - 7 MkII basic trainers, 72 HAL HJT - 36 Sitara trainers, 72 Pipistrel Virus SW 80 microlight aircraft, 10 C - 17 Globemaster III strategic air - lifters, 65 HAL Light Combat Helicopters and 139 Mi - 17V - 5 helicopters, and has also ordered 18 Israeli SPYDER surface - to - air missile (SAM) units. The IAF has further ordered 6 Airbus A330 tanker aircraft, 22 AH - 64E Apache Longbow heavy attack helicopters, 15 CH - 47F medium lift helicopters and IAI Harop UCAVs.
Indian defence companies such as HAL and DRDO are developing several aircraft for the IAF such as the HAL Tejas, Advanced Medium Combat Aircraft (AMCA), DRDO AEW&CS (revived from the Airavat Project), NAL Saras, HAL HJT - 36 Sitara, HAL HTT - 40, HAL Light Combat Helicopter (LCH), HAL Light Utility Helicopter (LUH), DRDO Rustom and AURA (Autonomous Unmanned Research Aircraft) UCAV. DRDO has developed the Akash missile system for the IAF and is developing the Maitri SAM with MBDA. DRDO is also developing the Prithvi II ballistic missile.
HAL has undertaken the joint development of the Sukhoi / HAL FGFA (Fifth Generation Fighter Aircraft) (a derivative project of the Sukhoi Su - 57) with Russia 's United Aircraft Corporation (UAC). HAL is also close to developing its own fifth generation fighter aircraft, the HAL AMCA, which is expected to be inducted by 2028. DRDO has entered into a joint venture with Israel Aerospace Industries (IAI) to develop the Barak 8 SAM. DRDO is developing the air - launched version of the BrahMos cruise missile in a joint venture with Russia 's NPO Mashinostroeyenia. DRDO has now successfully developed the nuclear capable Nirbhay cruise missile.
The Air Force Network (AFNET), a robust digital information grid that enabled quick and accurate threat responses, was launched in 2010, helping the IAF become a truly network - centric air force. AFNET is a secure communication network linking command and control centres with offensive aircraft, sensor platforms and ground missile batteries. Integrated Air Command and Control System (IACCS), an automated system for Air Defence operations will ride the AFNet backbone integrating ground and airborne sensors, weapon systems and command and control nodes. Subsequent integration with civil radar and other networks shall provide an integrated Air Situation Picture, and reportedly acts as a force multiplier for intelligence analysis, mission control, and support activities like maintenance and logistics. The design features multiple layers of security measures, including encryption and intrusion prevention technologies, to hinder and deter espionage efforts.
|
where did the sun and the planets in our solar system come from | Formation and evolution of the Solar System - wikipedia
The formation and evolution of the Solar System began 4.6 billion years ago with the gravitational collapse of a small part of a giant molecular cloud. Most of the collapsing mass collected in the center, forming the Sun, while the rest flattened into a protoplanetary disk out of which the planets, moons, asteroids, and other small Solar System bodies formed.
This model, known as the nebular hypothesis, was first developed in the 18th century by Emanuel Swedenborg, Immanuel Kant, and Pierre - Simon Laplace. Its subsequent development has interwoven a variety of scientific disciplines including astronomy, physics, geology, and planetary science. Since the dawn of the space age in the 1950s and the discovery of extrasolar planets in the 1990s, the model has been both challenged and refined to account for new observations.
The Solar System has evolved considerably since its initial formation. Many moons have formed from circling discs of gas and dust around their parent planets, while other moons are thought to have formed independently and later been captured by their planets. Still others, such as Earth 's Moon, may be the result of giant collisions. Collisions between bodies have occurred continually up to the present day and have been central to the evolution of the Solar System. The positions of the planets often shifted due to gravitational interactions. This planetary migration is now thought to have been responsible for much of the Solar System 's early evolution.
In roughly 5 billion years, the Sun will cool and expand outward to many times its current diameter (becoming a red giant), before casting off its outer layers as a planetary nebula and leaving behind a stellar remnant known as a white dwarf. In the far distant future, the gravity of passing stars will gradually reduce the Sun 's retinue of planets. Some planets will be destroyed, others ejected into interstellar space. Ultimately, over the course of tens of billions of years, it is likely that the Sun will be left with none of the original bodies in orbit around it.
Ideas concerning the origin and fate of the world date from the earliest known writings; however, for almost all of that time, there was no attempt to link such theories to the existence of a "Solar System '', simply because it was not generally thought that the Solar System, in the sense we now understand it, existed. The first step toward a theory of Solar System formation and evolution was the general acceptance of heliocentrism, which placed the Sun at the centre of the system and the Earth in orbit around it. This concept had developed for millennia (Aristarchus of Samos had suggested it as early as 250 BC), but was not widely accepted until the end of the 17th century. The first recorded use of the term "Solar System '' dates from 1704.
The current standard theory for Solar System formation, the nebular hypothesis, has fallen into and out of favour since its formulation by Emanuel Swedenborg, Immanuel Kant, and Pierre - Simon Laplace in the 18th century. The most significant criticism of the hypothesis was its apparent inability to explain the Sun 's relative lack of angular momentum when compared to the planets. However, since the early 1980s studies of young stars have shown them to be surrounded by cool discs of dust and gas, exactly as the nebular hypothesis predicts, which has led to its re-acceptance.
Understanding of how the Sun is expected to continue to evolve required an understanding of the source of its power. Arthur Stanley Eddington 's confirmation of Albert Einstein 's theory of relativity led to his realisation that the Sun 's energy comes from nuclear fusion reactions in its core, fusing hydrogen into helium. In 1935, Eddington went further and suggested that other elements also might form within stars. Fred Hoyle elaborated on this premise by arguing that evolved stars called red giants created many elements heavier than hydrogen and helium in their cores. When a red giant finally casts off its outer layers, these elements would then be recycled to form other star systems.
The nebular hypothesis says that the Solar System formed from the gravitational collapse of a fragment of a giant molecular cloud. The cloud was about 20 parsec (65 light years) across, while the fragments were roughly 1 parsec (three and a quarter light - years) across. The further collapse of the fragments led to the formation of dense cores 0.01 -- 0.1 pc (2,000 -- 20,000 AU) in size. One of these collapsing fragments (known as the pre-solar nebula) formed what became the Solar System. The composition of this region with a mass just over that of the Sun (M☉) was about the same as that of the Sun today, with hydrogen, along with helium and trace amounts of lithium produced by Big Bang nucleosynthesis, forming about 98 % of its mass. The remaining 2 % of the mass consisted of heavier elements that were created by nucleosynthesis in earlier generations of stars. Late in the life of these stars, they ejected heavier elements into the interstellar medium.
The oldest inclusions found in meteorites, thought to trace the first solid material to form in the pre-solar nebula, are 4568.2 million years old, which is one definition of the age of the Solar System. Studies of ancient meteorites reveal traces of stable daughter nuclei of short - lived isotopes, such as iron - 60, that only form in exploding, short - lived stars. This indicates that one or more supernovae occurred near the Sun while it was forming. A shock wave from a supernova may have triggered the formation of the Sun by creating relatively dense regions within the cloud, causing these regions to collapse. Because only massive, short - lived stars produce supernovae, the Sun must have formed in a large star - forming region that produced massive stars, possibly similar to the Orion Nebula. Studies of the structure of the Kuiper belt and of anomalous materials within it suggest that the Sun formed within a cluster of between 1,000 and 10,000 stars with a diameter of between 6.5 and 19.5 light years and a collective mass of 3,000 M☉. This cluster began to break apart between 135 million and 535 million years after formation. Several simulations of our young Sun interacting with close - passing stars over the first 100 million years of its life produce anomalous orbits observed in the outer Solar System, such as detached objects.
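A short decay calculation illustrates why the survival of short - lived isotopes points to a nearby, recent supernova; the half - life of roughly 2.6 million years for iron - 60 is an assumed standard value, not a figure from the text above:

\[
N(t) = N_0 \left(\tfrac{1}{2}\right)^{t / t_{1/2}}, \qquad t_{1/2} \approx 2.6\ \text{Myr} \;\Rightarrow\; N(26\ \text{Myr}) \approx \left(\tfrac{1}{2}\right)^{10} N_0 \approx 10^{-3}\, N_0 .
\]

Because essentially none of the original iron - 60 would survive more than a few tens of millions of years, the presence of its stable daughter nuclei in the oldest meteorites implies that the isotope was injected shortly before or during the collapse of the pre-solar nebula.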
Because of the conservation of angular momentum, the nebula spun faster as it collapsed. As the material within the nebula condensed, the atoms within it began to collide with increasing frequency, converting their kinetic energy into heat. The centre, where most of the mass collected, became increasingly hotter than the surrounding disc. Over about 100,000 years, the competing forces of gravity, gas pressure, magnetic fields, and rotation caused the contracting nebula to flatten into a spinning protoplanetary disc with a diameter of about 200 AU and form a hot, dense protostar (a star in which hydrogen fusion has not yet begun) at the centre.
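The spin - up described here follows directly from conservation of angular momentum. As an idealised sketch (treating the collapsing cloud as a body of fixed mass M and fixed shape factor k, an assumption not made in the text above):

\[
L = I\,\omega \approx k M R^{2} \omega = \text{constant} \quad\Rightarrow\quad \omega \propto \frac{1}{R^{2}},
\]

so halving the radius of the contracting nebula roughly quadruples its rotation rate.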
At this point in its evolution, the Sun is thought to have been a T Tauri star. Studies of T Tauri stars show that they are often accompanied by discs of pre-planetary matter with masses of 0.001 -- 0.1 M☉. These discs extend to several hundred AU -- the Hubble Space Telescope has observed protoplanetary discs of up to 1000 AU in diameter in star - forming regions such as the Orion Nebula -- and are rather cool, reaching a surface temperature of only about 1000 kelvin at their hottest. Within 50 million years, the temperature and pressure at the core of the Sun became so great that its hydrogen began to fuse, creating an internal source of energy that countered gravitational contraction until hydrostatic equilibrium was achieved. This marked the Sun 's entry into the prime phase of its life, known as the main sequence. Main - sequence stars derive energy from the fusion of hydrogen into helium in their cores. The Sun remains a main - sequence star today.
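The hydrostatic equilibrium mentioned here is the balance, at each radius r inside the star, between the outward pressure gradient and the inward pull of gravity; in standard notation (not given in the text above):

\[
\frac{dP}{dr} = -\,\frac{G\,M(r)\,\rho(r)}{r^{2}},
\]

where P is the pressure, ρ the local density, and M(r) the mass enclosed within radius r. Once hydrogen fusion supplies enough energy to maintain this balance, further gravitational contraction stops.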
The various planets are thought to have formed from the solar nebula, the disc - shaped cloud of gas and dust left over from the Sun 's formation. The currently accepted method by which the planets formed is accretion, in which the planets began as dust grains in orbit around the central protostar. Through direct contact, these grains formed into clumps up to 200 metres in diameter, which in turn collided to form larger bodies (planetesimals) of ~ 10 kilometres (km) in size. These gradually increased through further collisions, growing at the rate of centimetres per year over the course of the next few million years.
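The growth rate quoted above can be checked with a rough back - of - the - envelope calculation; the specific figures below are illustrative assumptions consistent with the ranges given in the text:

\[
1\ \text{cm/yr} \times 10^{6}\ \text{yr} = 10^{6}\ \text{cm} = 10\ \text{km},
\]

so accretion at centimetres per year, sustained over a million years, adds on the order of 10 km to a body 's size -- enough for the ~ 10 km planetesimals to keep growing substantially over the few million years described.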
The inner Solar System, the region of the Solar System inside 4 AU, was too warm for volatile molecules like water and methane to condense, so the planetesimals that formed there could only form from compounds with high melting points, such as metals (like iron, nickel, and aluminium) and rocky silicates. These rocky bodies would become the terrestrial planets (Mercury, Venus, Earth, and Mars). These compounds are quite rare in the Universe, comprising only 0.6 % of the mass of the nebula, so the terrestrial planets could not grow very large. The terrestrial embryos grew to about 0.05 Earth masses (M⊕) and ceased accumulating matter about 100,000 years after the formation of the Sun; subsequent collisions and mergers between these planet - sized bodies allowed terrestrial planets to grow to their present sizes (see Terrestrial planets below).
When the terrestrial planets were forming, they remained immersed in a disk of gas and dust. The gas was partially supported by pressure and so did not orbit the Sun as rapidly as the planets. The resulting drag and, more importantly, gravitational interactions with the surrounding material caused a transfer of angular momentum, and as a result the planets gradually migrated to new orbits. Models show that density and temperature variations in the disk governed this rate of migration, but the net trend was for the inner planets to migrate inward as the disk dissipated, leaving the planets in their current orbits.
The giant planets (Jupiter, Saturn, Uranus, and Neptune) formed further out, beyond the frost line, which is the point between the orbits of Mars and Jupiter where the material is cool enough for volatile icy compounds to remain solid. The ices that formed the Jovian planets were more abundant than the metals and silicates that formed the terrestrial planets, allowing the giant planets to grow massive enough to capture hydrogen and helium, the lightest and most abundant elements. Planetesimals beyond the frost line accumulated up to 4 M⊕ within about 3 million years. Today, the four giant planets comprise just under 99 % of all the mass orbiting the Sun. Theorists believe it is no accident that Jupiter lies just beyond the frost line. Because the frost line accumulated large amounts of water via evaporation from infalling icy material, it created a region of lower pressure that increased the speed of orbiting dust particles and halted their motion toward the Sun. In effect, the frost line acted as a barrier that caused material to accumulate rapidly at ~ 5 AU from the Sun. This excess material coalesced into a large embryo (or core) on the order of 10 M⊕, which began to accumulate an envelope via accretion of gas from the surrounding disc at an ever - increasing rate. Once the envelope mass became about equal to the solid core mass, growth proceeded very rapidly, reaching about 150 Earth masses ~ 10 years thereafter and finally topping out at 318 M⊕. Saturn may owe its substantially lower mass simply to having formed a few million years after Jupiter, when there was less gas available to consume.
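A rough blackbody estimate shows why the frost line falls between the orbits of Mars and Jupiter; the equilibrium temperature of about 280 K at 1 AU and the ~ 170 K condensation temperature of water ice in the nebula are assumed standard values, not figures from the text above:

\[
T(r) \approx 280\ \text{K}\left(\frac{1\ \text{AU}}{r}\right)^{1/2} \lesssim 170\ \text{K} \quad\Rightarrow\quad r \gtrsim \left(\frac{280}{170}\right)^{2} \text{AU} \approx 2.7\ \text{AU},
\]

which places the boundary for water ice a little beyond the orbit of Mars, consistent with the description above.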
T Tauri stars like the young Sun have far stronger stellar winds than more stable, older stars. Uranus and Neptune are thought to have formed after Jupiter and Saturn did, when the strong solar wind had blown away much of the disc material. As a result, the planets accumulated little hydrogen and helium -- not more than 1 M⊕ each. Uranus and Neptune are sometimes referred to as failed cores. The main problem with formation theories for these planets is the timescale of their formation. At the current locations it would have taken millions of years for their cores to accrete. This means that Uranus and Neptune may have formed closer to the Sun -- near or even between Jupiter and Saturn -- and later migrated or were ejected outward (see Planetary migration below). Motion in the planetesimal era was not all inward toward the Sun; the Stardust sample return from Comet Wild 2 has suggested that materials from the early formation of the Solar System migrated from the warmer inner Solar System to the region of the Kuiper belt.
After between three and ten million years, the young Sun 's solar wind would have cleared away all the gas and dust in the protoplanetary disc, blowing it into interstellar space, thus ending the growth of the planets.
The planets were originally thought to have formed in or near their current orbits. However, this view underwent radical change during the late 20th and early 21st centuries. Currently, it is thought that the Solar System looked very different after its initial formation: several objects at least as massive as Mercury were present in the inner Solar System, the outer Solar System was much more compact than it is now, and the Kuiper belt was much closer to the Sun.
At the end of the planetary formation epoch the inner Solar System was populated by 50 -- 100 Moon - to Mars - sized planetary embryos. Further growth was possible only because these bodies collided and merged, which took less than 100 million years. These objects would have gravitationally interacted with one another, tugging at each other 's orbits until they collided, growing larger until the four terrestrial planets we know today took shape. One such giant collision is thought to have formed the Moon (see Moons below), while another removed the outer envelope of the young Mercury.
One unresolved issue with this model is that it can not explain how the initial orbits of the proto - terrestrial planets, which would have needed to be highly eccentric to collide, produced the remarkably stable and nearly circular orbits they have today. One hypothesis for this "eccentricity damping '' is that the terrestrials formed in a disc of gas still not expelled by the Sun. The "gravitational drag '' of this residual gas would have eventually lowered the planets ' energy, smoothing out their orbits. However, such gas, if it existed, would have prevented the terrestrial planets ' orbits from becoming so eccentric in the first place. Another hypothesis is that gravitational drag occurred not between the planets and residual gas but between the planets and the remaining small bodies. As the large bodies moved through the crowd of smaller objects, the smaller objects, attracted by the larger planets ' gravity, formed a region of higher density, a "gravitational wake '', in the larger objects ' path. As they did so, the increased gravity of the wake slowed the larger objects down into more regular orbits.
The outer edge of the terrestrial region, between 2 and 4 AU from the Sun, is called the asteroid belt. The asteroid belt initially contained more than enough matter to form 2 -- 3 Earth - like planets, and, indeed, a large number of planetesimals formed there. As with the terrestrials, planetesimals in this region later coalesced and formed 20 -- 30 Moon - to Mars - sized planetary embryos; however, the proximity of Jupiter meant that after this planet formed, 3 million years after the Sun, the region 's history changed dramatically. Orbital resonances with Jupiter and Saturn are particularly strong in the asteroid belt, and gravitational interactions with more massive embryos scattered many planetesimals into those resonances. Jupiter 's gravity increased the velocity of objects within these resonances, causing them to shatter upon collision with other bodies, rather than accrete.
As Jupiter migrated inward following its formation (see Planetary migration below), resonances would have swept across the asteroid belt, dynamically exciting the region 's population and increasing their velocities relative to each other. The cumulative action of the resonances and the embryos either scattered the planetesimals away from the asteroid belt or excited their orbital inclinations and eccentricities. Some of those massive embryos too were ejected by Jupiter, while others may have migrated to the inner Solar System and played a role in the final accretion of the terrestrial planets. During this primary depletion period, the effects of the giant planets and planetary embryos left the asteroid belt with a total mass equivalent to less than 1 % that of the Earth, composed mainly of small planetesimals. This is still 10 -- 20 times more than the current mass in the main belt, which is now about 1 / 2,000 M⊕. A secondary depletion period that brought the asteroid belt down close to its present mass is thought to have followed when Jupiter and Saturn entered a temporary 2: 1 orbital resonance (see below).
The inner Solar System 's period of giant impacts probably played a role in the Earth acquiring its current water content (~ 6 × 10²¹ kg) from the early asteroid belt. Water is too volatile to have been present at Earth 's formation and must have been subsequently delivered from outer, colder parts of the Solar System. The water was probably delivered by planetary embryos and small planetesimals thrown out of the asteroid belt by Jupiter. A population of main - belt comets discovered in 2006 has also been suggested as a possible source for Earth 's water. In contrast, comets from the Kuiper belt or farther regions delivered not more than about 6 % of Earth 's water. The panspermia hypothesis holds that life itself may have been deposited on Earth in this way, although this idea is not widely accepted.
According to the nebular hypothesis, the outer two planets may be in the "wrong place ''. Uranus and Neptune (known as the "ice giants '') exist in a region where the reduced density of the solar nebula and longer orbital times render their formation highly implausible. The two are instead thought to have formed in orbits near Jupiter and Saturn, where more material was available, and to have migrated outward to their current positions over hundreds of millions of years.
The migration of the outer planets is also necessary to account for the existence and properties of the Solar System 's outermost regions. Beyond Neptune, the Solar System continues into the Kuiper belt, the scattered disc, and the Oort cloud, three sparse populations of small icy bodies thought to be the points of origin for most observed comets. At their distance from the Sun, accretion was too slow to allow planets to form before the solar nebula dispersed, and thus the initial disc lacked enough mass density to consolidate into a planet. The Kuiper belt lies between 30 and 55 AU from the Sun, while the farther scattered disc extends to over 100 AU, and the distant Oort cloud begins at about 50,000 AU. Originally, however, the Kuiper belt was much denser and closer to the Sun, with an outer edge at approximately 30 AU. Its inner edge would have been just beyond the orbits of Uranus and Neptune, which were in turn far closer to the Sun when they formed (most likely in the range of 15 -- 20 AU), and in 50 % of simulations ended up in opposite locations, with Uranus farther from the Sun than Neptune.
According to the Nice model, after the formation of the Solar System, the orbits of all the giant planets continued to change slowly, influenced by their interaction with the large number of remaining planetesimals. After 500 -- 600 million years (about 4 billion years ago) Jupiter and Saturn fell into a 2: 1 resonance: Saturn orbited the Sun once for every two Jupiter orbits. This resonance created a gravitational push against the outer planets, possibly causing Neptune to surge past Uranus and plough into the ancient Kuiper belt. The planets scattered the majority of the small icy bodies inwards, while themselves moving outwards. These planetesimals then scattered off the next planet they encountered in a similar manner, moving the planets ' orbits outwards while they moved inwards. This process continued until the planetesimals interacted with Jupiter, whose immense gravity sent them into highly elliptical orbits or even ejected them outright from the Solar System. This caused Jupiter to move slightly inward. Those objects scattered by Jupiter into highly elliptical orbits formed the Oort cloud; those objects scattered to a lesser degree by the migrating Neptune formed the current Kuiper belt and scattered disc. This scenario explains the Kuiper belt 's and scattered disc 's present low mass. Some of the scattered objects, including Pluto, became gravitationally tied to Neptune 's orbit, forcing them into mean - motion resonances. Eventually, friction within the planetesimal disc made the orbits of Uranus and Neptune circular again.
In contrast to the outer planets, the inner planets are not thought to have migrated significantly over the age of the Solar System, because their orbits have remained stable following the period of giant impacts.
Another question is why Mars came out so small compared with Earth. A study by Southwest Research Institute, San Antonio, Texas, published June 6, 2011 (called the Grand Tack Hypothesis), proposes that Jupiter had migrated inward to 1.5 AU. After Saturn formed, migrated inward, and established the 2: 3 mean motion resonance with Jupiter, the study assumes that both planets migrated back to their present positions. Jupiter thus would have consumed much of the material that would have created a bigger Mars. The same simulations also reproduce the characteristics of the modern asteroid belt, with dry asteroids and water - rich objects similar to comets. However, it is unclear whether conditions in the solar nebula would have allowed Jupiter and Saturn to move back to their current positions, and according to current estimates this possibility appears unlikely. Moreover, alternative explanations for the small mass of Mars exist.
Gravitational disruption from the outer planets ' migration would have sent large numbers of asteroids into the inner Solar System, severely depleting the original belt until it reached today 's extremely low mass. This event may have triggered the Late Heavy Bombardment that occurred approximately 4 billion years ago, 500 -- 600 million years after the formation of the Solar System. This period of heavy bombardment lasted several hundred million years and is evident in the cratering still visible on geologically dead bodies of the inner Solar System such as the Moon and Mercury. The oldest known evidence for life on Earth dates to 3.8 billion years ago -- almost immediately after the end of the Late Heavy Bombardment.
Impacts are thought to be a regular (if currently infrequent) part of the evolution of the Solar System. That they continue to happen is evidenced by the collision of Comet Shoemaker -- Levy 9 with Jupiter in 1994, the 2009 Jupiter impact event, the Tunguska event, the Chelyabinsk meteor and the impact feature Meteor Crater in Arizona. The process of accretion, therefore, is not complete, and may still pose a threat to life on Earth.
Over the course of the Solar System 's evolution, comets were ejected out of the inner Solar System by the gravity of the giant planets, and sent thousands of AU outward to form the Oort cloud, a spherical outer swarm of cometary nuclei at the farthest extent of the Sun 's gravitational pull. Eventually, after about 800 million years, the gravitational disruption caused by galactic tides, passing stars and giant molecular clouds began to deplete the cloud, sending comets into the inner Solar System. The evolution of the outer Solar System also appears to have been influenced by space weathering from the solar wind, micrometeorites, and the neutral components of the interstellar medium.
The evolution of the asteroid belt after Late Heavy Bombardment was mainly governed by collisions. Objects with large mass have enough gravity to retain any material ejected by a violent collision. In the asteroid belt this usually is not the case. As a result, many larger objects have been broken apart, and sometimes newer objects have been forged from the remnants in less violent collisions. Moons around some asteroids currently can only be explained as consolidations of material flung away from the parent object without enough energy to entirely escape its gravity.
Moons have come to exist around most planets and many other Solar System bodies. These natural satellites originated by one of three possible mechanisms: co-formation from a disc of material around the planet, formation from debris thrown up by a large impact, or capture of a passing object.
Jupiter and Saturn have several large moons, such as Io, Europa, Ganymede and Titan, which may have originated from discs around each giant planet in much the same way that the planets formed from the disc around the Sun. This origin is indicated by the large sizes of the moons and their proximity to the planet. These attributes are impossible to achieve via capture, while the gaseous nature of the primaries also makes formation from collision debris unlikely. The outer moons of the giant planets tend to be small and have eccentric orbits with arbitrary inclinations. These are the characteristics expected of captured bodies. Most such moons orbit in the direction opposite the rotation of their primary. The largest irregular moon is Neptune 's moon Triton, which is thought to be a captured Kuiper belt object.
Moons of solid Solar System bodies have been created by both collisions and capture. Mars 's two small moons, Deimos and Phobos, are thought to be captured asteroids. The Earth 's Moon is thought to have formed as a result of a single, large head - on collision. The impacting object probably had a mass comparable to that of Mars, and the impact probably occurred near the end of the period of giant impacts. The collision kicked into orbit some of the impactor 's mantle, which then coalesced into the Moon. The impact was probably the last in the series of mergers that formed the Earth. It has been further hypothesized that the Mars - sized object may have formed at one of the stable Earth -- Sun Lagrangian points (either L4 or L5) and drifted from its position. The moons of trans - Neptunian objects Pluto (Charon) and Orcus (Vanth) may also have formed by means of a large collision: the Pluto -- Charon, Orcus -- Vanth and Earth -- Moon systems are unusual in the Solar System in that the satellite 's mass is at least 1 % that of the larger body.
Astronomers estimate that the Solar System as we know it today will not change drastically until the Sun has fused almost all the hydrogen fuel in its core into helium, beginning its evolution from the main sequence of the Hertzsprung -- Russell diagram and into its red - giant phase. Even so, the Solar System will continue to evolve until then.
The Solar System is chaotic over million - and billion - year timescales, with the orbits of the planets open to long - term variations. One notable example of this chaos is the Neptune -- Pluto system, which lies in a 3: 2 orbital resonance. Although the resonance itself will remain stable, it becomes impossible to predict the position of Pluto with any degree of accuracy more than 10 -- 20 million years (the Lyapunov time) into the future. Another example is Earth 's axial tilt, which, due to friction raised within Earth 's mantle by tidal interactions with the Moon (see below), will be incomputable at some point between 1.5 and 4.5 billion years from now.
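As a rough illustration of what the Lyapunov time means here (a standard definition, not something stated in this article): a small initial uncertainty \delta(0) in Pluto 's position along its orbit grows roughly exponentially,

\delta(t) \approx \delta(0)\, e^{t / t_L},

where t_L is the Lyapunov time of 10 -- 20 million years. After a few multiples of t_L the uncertainty has been amplified by many factors of e, which is why Pluto 's position becomes effectively unpredictable on that timescale even though the resonance keeping its orbit confined remains stable.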
The outer planets ' orbits are chaotic over longer timescales, with a Lyapunov time in the range of 2 -- 230 million years. In all cases this means that the position of a planet along its orbit ultimately becomes impossible to predict with any certainty (so, for example, the timing of winter and summer becomes uncertain), but in some cases the orbits themselves may change dramatically. Such chaos manifests most strongly as changes in eccentricity, with some planets ' orbits becoming significantly more -- or less -- elliptical.
Ultimately, the Solar System is stable in that none of the planets are likely to collide with each other or be ejected from the system in the next few billion years. Beyond this, within five billion years or so Mars 's eccentricity may grow to around 0.2, such that it lies on an Earth - crossing orbit, leading to a potential collision. In the same timescale, Mercury 's eccentricity may grow even further, and a close encounter with Venus could theoretically eject it from the Solar System altogether or send it on a collision course with Venus or Earth. This could happen within a billion years, according to numerical simulations in which Mercury 's orbit is perturbed.
The evolution of moon systems is driven by tidal forces. A moon will raise a tidal bulge in the object it orbits (the primary) due to the differential gravitational force across the diameter of the primary. If a moon is revolving in the same direction as the planet 's rotation and the planet is rotating faster than the moon orbits, the bulge will constantly be pulled ahead of the moon. In this situation, angular momentum is transferred from the rotation of the primary to the revolution of the satellite. The moon gains energy and gradually spirals outward, while the primary rotates more slowly over time.
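A minimal sketch of the rate of this outward spiral, using the standard constant - Q tidal model (a generic textbook relation, not something given in this article; all symbols below are introduced here for illustration):

\frac{da}{dt} = \operatorname{sgn}(\Omega_p - n)\, \frac{3 k_2}{Q}\, \frac{m}{M_p} \left( \frac{R_p}{a} \right)^{5} n\, a,

where a is the moon 's orbital radius, n its mean motion, \Omega_p the primary 's spin rate, M_p and R_p the primary 's mass and radius, m the moon 's mass, k_2 the primary 's tidal Love number, and Q its tidal dissipation factor. The sign term captures the two regimes discussed here and below: \Omega_p > n (the Earth -- Moon case) gives slow outward migration, while a moon that orbits faster than the primary spins, or orbits retrograde (Phobos, Triton), spirals inward.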
The Earth and its Moon are one example of this configuration. Today, the Moon is tidally locked to the Earth; one of its revolutions around the Earth (currently about 29 days) is equal to one of its rotations about its axis, so it always shows one face to the Earth. The Moon will continue to recede from Earth, and Earth 's spin will continue to slow gradually. In about 50 billion years, if they survive the Sun 's expansion, the Earth and Moon will become tidally locked to each other; each will be caught up in what is called a "spin -- orbit resonance '' in which the Moon will circle the Earth in about 47 days and both Moon and Earth will rotate around their axes in the same time, each only visible from one hemisphere of the other. Other examples are the Galilean moons of Jupiter (as well as many of Jupiter 's smaller moons) and most of the larger moons of Saturn.
A different scenario occurs when the moon is either revolving around the primary faster than the primary rotates, or is revolving in the direction opposite the planet 's rotation. In these cases, the tidal bulge lags behind the moon in its orbit. In the former case, the direction of angular momentum transfer is reversed, so the rotation of the primary speeds up while the satellite 's orbit shrinks. In the latter case, the angular momentum of the rotation and revolution have opposite signs, so transfer leads to decreases in the magnitude of each (that cancel each other out). In both cases, tidal deceleration causes the moon to spiral in towards the primary until it either is torn apart by tidal stresses, potentially creating a planetary ring system, or crashes into the planet 's surface or atmosphere. Such a fate awaits the moons Phobos of Mars (within 30 to 50 million years), Triton of Neptune (in 3.6 billion years), Metis and Adrastea of Jupiter, and at least 16 small satellites of Uranus and Neptune. Uranus 's Desdemona may even collide with one of its neighboring moons.
A third possibility is where the primary and moon are tidally locked to each other. In that case, the tidal bulge stays directly under the moon, there is no transfer of angular momentum, and the orbital period will not change. Pluto and Charon are an example of this type of configuration.
Prior to the 2004 arrival of the Cassini -- Huygens spacecraft, the rings of Saturn were widely thought to be much younger than the Solar System and were not expected to survive beyond another 300 million years. Gravitational interactions with Saturn 's moons were expected to gradually sweep the rings ' outer edge toward the planet, with abrasion by meteorites and Saturn 's gravity eventually taking the rest, leaving Saturn unadorned. However, data from the Cassini mission led scientists to revise that early view. Observations revealed 10 km - wide icy clumps of material that repeatedly break apart and reform, keeping the rings fresh. Saturn 's rings are far more massive than the rings of the other giant planets. This large mass is thought to have preserved Saturn 's rings since it first formed 4.5 billion years ago, and is likely to preserve them for billions of years to come.
In the long term, the greatest changes in the Solar System will come from changes in the Sun itself as it ages. As the Sun burns through its supply of hydrogen fuel, it gets hotter and burns the remaining fuel even faster. As a result, the Sun is growing brighter at a rate of ten percent every 1.1 billion years. In one billion years ' time, as the Sun 's radiation output increases, its circumstellar habitable zone will move outwards, making the Earth 's surface too hot for liquid water to exist there naturally. At this point, all life on land will become extinct. Evaporation of water, a potent greenhouse gas, from the oceans ' surface could accelerate temperature increase, potentially ending all life on Earth even sooner. During this time, it is possible that as Mars 's surface temperature gradually rises, carbon dioxide and water currently frozen under the surface regolith will release into the atmosphere, creating a greenhouse effect that will heat the planet until it achieves conditions parallel to Earth today, providing a potential future abode for life. By 3.5 billion years from now, Earth 's surface conditions will be similar to those of Venus today.
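Treating the quoted brightening rate as compounding gives a quick back - of - the - envelope extrapolation (an illustration only; the compounding form and the rounded results are assumptions, not figures from this article):

L(t) \approx L_\odot (1.1)^{t / 1.1\,\text{Gyr}}, \quad L(1\ \text{Gyr}) \approx 1.09\, L_\odot, \quad L(3.5\ \text{Gyr}) \approx 1.35\, L_\odot.

Because the distance of the habitable zone scales roughly as \sqrt{L}, a Sun about 35 % brighter pushes that zone outward by roughly 16 %, consistent with the Earth 's surface becoming too hot for liquid water long before the red - giant phase begins.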
Around 5.4 billion years from now, the core of the Sun will become hot enough to trigger hydrogen fusion in its surrounding shell. This will cause the outer layers of the star to expand greatly, and the star will enter a phase of its life in which it is called a red giant. Within 7.5 billion years, the Sun will have expanded to a radius of 1.2 AU -- 256 times its current size. At the tip of the red giant branch, as a result of the vastly increased surface area, the Sun 's surface will be much cooler (about 2600 K) than now and its luminosity much higher -- up to 2,700 current solar luminosities. For part of its red giant life, the Sun will have a strong stellar wind that will carry away around 33 % of its mass. During these times, it is possible that Saturn 's moon Titan could achieve surface temperatures necessary to support life.
As the Sun expands, it will swallow the planets Mercury and Venus. Earth 's fate is less clear; although the Sun will envelop Earth 's current orbit, the star 's loss of mass (and thus weaker gravity) will cause the planets ' orbits to move farther out. If it were only for this, Venus and Earth would probably escape incineration, but a 2008 study suggests that Earth will likely be swallowed up as a result of tidal interactions with the Sun 's weakly bound outer envelope.
Gradually, the hydrogen burning in the shell around the solar core will increase the mass of the core until it reaches about 45 % of the present solar mass. At this point the density and temperature will become so high that the fusion of helium into carbon will begin, leading to a helium flash; the Sun will shrink from around 250 to 11 times its present (main - sequence) radius. Consequently, its luminosity will decrease from around 3,000 to 54 times its current level, and its surface temperature will increase to about 4770 K. The Sun will become a horizontal - branch star, burning helium in its core in a stable fashion much like it burns hydrogen today. The helium - fusing stage will last only 100 million years. Eventually, it will have to again resort to the reserves of hydrogen and helium in its outer layers and will expand a second time, turning into what is known as an asymptotic giant. Here the luminosity of the Sun will increase again, reaching about 2,090 present luminosities, and it will cool to about 3500 K. This phase lasts about 30 million years, after which, over the course of a further 100,000 years, the Sun 's remaining outer layers will fall away, ejecting a vast stream of matter into space and forming a halo known (misleadingly) as a planetary nebula. The ejected material will contain the helium and carbon produced by the Sun 's nuclear reactions, continuing the enrichment of the interstellar medium with heavy elements for future generations of stars.
This is a relatively peaceful event, nothing akin to a supernova, which the Sun is too small to undergo as part of its evolution. Any observer present to witness this occurrence would see a massive increase in the speed of the solar wind, but not enough to destroy a planet completely. However, the star 's loss of mass could send the orbits of the surviving planets into chaos, causing some to collide, others to be ejected from the Solar System, and still others to be torn apart by tidal interactions. Afterwards, all that will remain of the Sun is a white dwarf, an extraordinarily dense object, 54 % its original mass but only the size of the Earth. Initially, this white dwarf may be 100 times as luminous as the Sun is now. It will consist entirely of degenerate carbon and oxygen, but will never reach temperatures hot enough to fuse these elements. Thus the white dwarf Sun will gradually cool, growing dimmer and dimmer.
As the Sun dies, its gravitational pull on the orbiting bodies such as planets, comets and asteroids will weaken due to its mass loss. All remaining planets ' orbits will expand; if Venus, Earth, and Mars still exist, their orbits will lie roughly at 1.4 AU (210,000,000 km), 1.9 AU (280,000,000 km), and 2.8 AU (420,000,000 km). They and the other remaining planets will become dark, frigid hulks, completely devoid of any form of life. They will continue to orbit their star, their speed slowed due to their increased distance from the Sun and the Sun 's reduced gravity. Two billion years later, when the Sun has cooled to the 6000 -- 8000 K range, the carbon and oxygen in the Sun 's core will freeze, with over 90 % of its remaining mass assuming a crystalline structure. Eventually, after billions more years, the Sun will finally cease to shine altogether, becoming a black dwarf.
The Solar System travels alone through the Milky Way in a circular orbit approximately 30,000 light years from the Galactic Centre. Its speed is about 220 km / s. The period required for the Solar System to complete one revolution around the Galactic Centre, the galactic year, is in the range of 220 -- 250 million years. Since its formation, the Solar System has completed at least 20 such revolutions.
Various scientists have speculated that the Solar System 's path through the galaxy is a factor in the periodicity of mass extinctions observed in the Earth 's fossil record. One hypothesis supposes that vertical oscillations made by the Sun as it orbits the Galactic Centre cause it to regularly pass through the galactic plane. When the Sun 's orbit takes it outside the galactic disc, the influence of the galactic tide is weaker; as it re-enters the galactic disc, as it does every 20 -- 25 million years, it comes under the influence of the far stronger "disc tides '', which, according to mathematical models, increase the flux of Oort cloud comets into the Solar System by a factor of 4, leading to a massive increase in the likelihood of a devastating impact.
However, others argue that the Sun is currently close to the galactic plane, and yet the last great extinction event was 15 million years ago. Therefore, the Sun 's vertical position can not alone explain such periodic extinctions; these researchers propose instead that extinctions occur when the Sun passes through the galaxy 's spiral arms. Spiral arms are home not only to larger numbers of molecular clouds, whose gravity may distort the Oort cloud, but also to higher concentrations of bright blue giants, which live for relatively short periods and then explode violently as supernovae.
Although the vast majority of galaxies in the Universe are moving away from the Milky Way, the Andromeda Galaxy, the largest member of the Local Group of galaxies, is heading toward it at about 120 km / s. In 4 billion years, Andromeda and the Milky Way will collide, causing both to deform as tidal forces distort their outer arms into vast tidal tails. If this initial disruption occurs, astronomers calculate a 12 % chance that the Solar System will be pulled outward into the Milky Way 's tidal tail and a 3 % chance that it will become gravitationally bound to Andromeda and thus a part of that galaxy. After a further series of glancing blows, during which the likelihood of the Solar System 's ejection rises to 30 %, the galaxies ' supermassive black holes will merge. Eventually, in roughly 6 billion years, the Milky Way and Andromeda will complete their merger into a giant elliptical galaxy. During the merger, if there is enough gas, the increased gravity will force the gas to the centre of the forming elliptical galaxy. This may lead to a short period of intensive star formation called a starburst. In addition, the infalling gas will feed the newly formed black hole, transforming it into an active galactic nucleus. The force of these interactions will likely push the Solar System into the new galaxy 's outer halo, leaving it relatively unscathed by the radiation from these collisions.
It is a common misconception that this collision will disrupt the orbits of the planets in the Solar System. Although it is true that the gravity of passing stars can detach planets into interstellar space, distances between stars are so great that the likelihood of the Milky Way -- Andromeda collision causing such disruption to any individual star system is negligible. Although the Solar System as a whole could be affected by these events, the Sun and planets are not expected to be disturbed.
However, over time, the cumulative probability of a chance encounter with a star increases, and disruption of the planets becomes all but inevitable. Assuming that the Big Crunch or Big Rip scenarios for the end of the Universe do not occur, calculations suggest that the gravity of passing stars will have completely stripped the dead Sun of its remaining planets within 1 quadrillion (10¹⁵) years. This point marks the end of the Solar System. Although the Sun and planets may survive, the Solar System, in any meaningful sense, will cease to exist.
The time frame of the Solar System 's formation has been determined using radiometric dating. Scientists estimate that the Solar System is 4.6 billion years old. The oldest known mineral grains on Earth are approximately 4.4 billion years old. Rocks this old are rare, as Earth 's surface is constantly being reshaped by erosion, volcanism, and plate tectonics. To estimate the age of the Solar System, scientists use meteorites, which were formed during the early condensation of the solar nebula. Almost all meteorites (see the Canyon Diablo meteorite) are found to have an age of 4.6 billion years, suggesting that the Solar System must be at least this old.
Studies of discs around other stars have also done much to establish a time frame for Solar System formation. Stars between one and three million years old have discs rich in gas, whereas discs around stars more than 10 million years old have little to no gas, suggesting that giant planets within them have ceased forming.
|
where is the blind spot located in relation to fovea | Blind spot (vision) - wikipedia
A blind spot, also known as a scotoma, is an obscuration of the visual field. A particular blind spot known as the physiological blind spot, "blind point '', or punctum caecum in medical literature, is the place in the visual field that corresponds to the lack of light - detecting photoreceptor cells on the optic disc of the retina, where the optic nerve passes out of the eye.
Because there are no cells to detect light on the optic disc, the corresponding part of the field of vision is invisible. The brain interpolates the blind spot based on surrounding detail and information from the other eye, so the blind spot is not normally perceived.
The blind spot is located about 12 -- 15 ° temporally and 1.5 ° below the horizontal and is roughly 7.5 ° high and 5.5 ° wide.
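For a rough sense of where that falls on the retina relative to the fovea (an approximation assuming a schematic eye in which 1 ° of visual angle corresponds to about 0.29 mm of retina; these conversion figures are not from this article):

d \approx 15^{\circ} \times 0.29\ \text{mm/deg} \approx 4.4\ \text{mm},

so the optic disc lies roughly 4 -- 5 mm nasal to the fovea and slightly above the horizontal meridian, while the blind spot appears on the temporal side and slightly below in the visual field because the retinal image is inverted.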
Although all vertebrates have this blind spot, cephalopod eyes, which are only superficially similar, do not. In them, the optic nerve approaches the receptors from behind, so it does not create a break in the retina.
The first documented observation of the phenomenon was in the 1660s by Edme Mariotte in France. At the time it was generally thought that the point at which the optic nerve entered the eye should actually be the most sensitive portion of the retina; however, Mariotte 's discovery disproved this theory.
|
bob's big boy restaurants in southern california | Bob 's Big Boy - wikipedia
Coordinates: 34 ° 09 ′ 09 '' N, 118 ° 20 ′ 46 '' W (34.15258, - 118.346154)
Bob 's Big Boy is a restaurant chain founded by Bob Wian in Southern California in 1936, originally named Bob 's Pantry. It is now part of Big Boy Restaurants International, the current primary trademark owner and franchisor of the Big Boy system. As of September 2017, only five Bob 's Big Boy Restaurants remain in operation, all in Southern California. Those five locations are in Burbank (Toluca Lake), Calimesa, Downey, Norco, and Northridge.
Wian created the Big Boy hamburger less than a year after opening his original location by slicing a bun into three slices and adding two hamburger patties.
Bob Wian entered Glendale High School as the Great Depression started in 1929. His father 's furniture business bankrupt, Wian washed dishes in the school cafeteria to pay for lunch. Not being a committed student -- he never took homework home -- classmates voted Wian most unlikely to succeed. But his father 's business failure and classmates ' doubts would lead Wian to success.
After graduation in 1933, Wian found work as the overnight dishwasher at a Los Angeles White Log Coffee Shop, a West Coast chain similar to White Castle. Suddenly he was interested in how a restaurant worked and how it could be improved; he became determined to own a restaurant or even a chain. And he was intent on proving his classmates wrong.
Now ambitious, Wian was promoted to fry cook then manager. He learned the White Log system, its merchandising and pricing of foods, and use of a central commissary; Wian would later apply these to Big Boy. He would also adopt White Log 's pancake batter recipe. And at White Log he befriended fellow fry cook Bennie Washam, who would later sketch the original Big Boy mascot.
Wanting wider experience, Wian quit and took a dishwashing job with his favorite Glendale restaurant, Lionel Sternberger 's Rite Spot. Again he was promoted to counterman and fry cook. The man who hired Wian and was his boss, Leonard Dunagen, would later be hired by Wian and became vice president of Bob 's Big Boy.
Wian discovered how Rite Spot made its chili, hamburgers, and red hamburger relish -- the same relish Wian would use on the Big Boy hamburger. And he learned the importance of consistency in foods served.
The Rite Spot also offered curb service, as Bob 's Big Boy would in several years. (His sister Dottie was a carhop at the Rite Spot before moving to Bob 's Big Boy.) However, Wian 's first drive - in work was at a Pig Stand. The restaurants used pig shaped die cut menus and some had a big pig in front; similarly Bob 's would use Big Boy shaped die cut menus and later display large Big Boy statues out in front.
Wian also patronized other restaurants looking for additional menu items, attempting to recreate the favored items at home, and sometimes prodding food suppliers for how they were made. Bob 's hot fudge sundae, for example, was adopted from the sundae served at C.C. Brown 's Ice Cream Parlor.
Wian claimed that there was nothing new at Bob 's Big Boy -- excepting the double - deck Big Boy hamburger -- and that he was building Big Boy in his mind while at these previous jobs. Confident from his restaurant employment and encouraged by his father, he was already looking for a location when The Pantry was placed for sale.
According to a 2013 Los Angeles Times article, Bob Wian started the 10 - stool Bob 's Pantry hamburger stand at 900 E. Colorado in Glendale in 1936. This stand expanded adding carhop service and was eventually razed and replaced by a McAllister designed drive - in in 1956. This location was known as "Bob 's # 1 '' and remained as a Bob 's until it was closed and demolished in 1989. A second Glendale location at Broadway and Maryland was known as "Bob 's # 4 '', while the Toluca Lake location was known as "Bob 's # 6 ''. Wian sold the chain to Marriott in 1967.
In August 1936, a 22 - year - old Robert "Bob '' Wian, an expectant father making $19 a week, quit his job and sold his 1933 DeSoto Roadster for $300 as down payment on a ten stool hamburger stand called "The Pantry ''. He cleaned the place until it "shine (d) like a brand new penny '', borrowed $50 from his dad for meat and supplies, and reopened as "Bob 's Pantry ''. Six months later Wian assembled his special double decker hamburger. Created as a joke for a customer wanting something different, the novel hamburger began drawing business. The "snappy '' name given the popular sandwich provided a new name for his restaurant: Bob 's Big Boy.
Wian expanded the small restaurant and opened a second drive - in in Burbank in 1938 launching drive - in curb service at both locations. During World War II Wian experienced shortages of both meat and manpower, and one of the four Bob 's then in operation closed.
Soon after the war -- in 1946 -- Wian formed Robert C. Wian Enterprises to assume his restaurant business. In the late 1940s Wian licensed two operators in the East to sell his Big Boy hamburger, Frisch 's Big Boy in Cincinnati and Eat'n Park Big Boy in Pittsburgh; this served Wian 's goal to procure and maintain a national trademark. In 1951, the third licensee Alex Schoenbaum of Shoney 's Big Boy sold Wian on a formal franchising system and with the popularity of the drive - in restaurant a series of franchising and subfranchising Big Boy followed in the 1950s. The franchisees were required to sell the Big Boy hamburger and use their own name with Big Boy, not Bob 's.
By 1951 eight Bob 's Big Boy were in operation. The Bob 's imprint of the first (1956) edition of Adventures of the Big Boy comic book lists ten locations, including one in Arizona, while a legal filing claims twelve locations. The eighteenth Bob 's opened in 1963. And the chain 's 1965 menu lists 23 California restaurants (including one opening in late 1965 and another in 1966) and 6 Arizona restaurants.
Wian provided his workers health insurance and a profit sharing plan, which included the option of employees to franchise a Bob 's. In 1955 the first such unit opened in Phoenix, another opened in Tucson in 1962 and three more locations by 1968. (By 1974 there were nine Phoenix metropolitan area Bob 's, including one under construction, when the units were acquired by JB 's Big Boy for $2.7 million.)
In 1961, a merger was proposed with the John R. Thompson Co., a Chicago - based restaurant operator, but talks broke off. Five years later another merger was proposed, and in May 1967, Bob Wian sold Big Boy to the Marriott Corporation. The sale included Wian 's 22 company owned Bob 's Big Boys. (Another 580 franchised Big Boy restaurants operated in 38 states nationwide.)
After the merger, Wian remained as president of Marriott 's new "Big Boy Restaurants of America '' division. Used to dealing with franchisees and store managers on a personal level, he became overwhelmed with the increasing size of the chain, describing it as "this monster I built ''. Unhappy with Marriott 's new focus on rapid growth and corporate profits over his own approach and practices, Wian became discouraged and resigned as president in May 1968. He accepted membership on Marriott 's board, but with his guidance never sought, Wian likewise quit that position in the summer of 1969. Although he attended the 1972 annual Big Boy Executive Conference, Wian avoided Bob 's Big Boys and refused invitations to special events at the restaurants. He remained close friends with long time associates at Big Boy.
Marriott began rapid expansion using the Bob 's name that it now owned. By 1971 there were 49 California Bob 's, and by 1979, 132. It bought the Ken 's Big Boy franchise in the Washington, DC - Baltimore metropolitan area, using the Bob 's name instead. In the mid-1970s Bob 's Big Boy expanded into Alaska and Hawaii. Marriott also bought the 39 - unit Manners Big Boy chain in 1974 which may have been renamed Bob 's Big Boy in 1979. The 26 operating Cleveland - area restaurants were sold to and rebranded Elias Brothers Big Boy in 1985.
In 1987 Marriott sold the Big Boy trademark to Elias Brothers, the Michigan Big Boy franchisee, but retained the Bob 's Big Boy name and restaurants as a franchisee. At the time, Marriott operated 208 Bob 's Big Boys, including company - owned Bob 's in Maryland, Virginia, New York, New Jersey, Pennsylvania, Florida, and the District of Columbia. Many of these eastern Bob 's were sited at rest stops on interstate toll roads, often conversions of recently purchased Howard Johnson 's restaurants, while others were in territory that belonged to previous franchisees. Several Bob 's Big Boy restaurants, such as five in the Harrisburg, Pennsylvania area and two along the Ohio Turnpike were not owned by Marriott.
Marriott kept its company - owned Bob 's units under franchise after the sale to Elias Brothers, and the number of such Bob 's increased to 238 by 1989 when Marriott decided to divest of its food service operations. In 1991, already having converted some San Diego stores to Allie 's, named after J. Willard Marriott 's wife Alice, it sold 104 California Bob 's (to a company which outbid Elias Brothers) removing the units from the Bob 's chain and the Big Boy system. Toll road Bob 's Big Boys remained in service longer due to Marriott 's contractual obligations, but are no longer in operation. Privately - owned eastern US Bob 's were also sold.
When Robert Liggett (i.e., Big Boy Restaurants International) bought Big Boy from the bankrupt Elias Brothers in 2000, ten western Bob 's Big Boys were in operation, dropping to eight by 2006. (The last Bob 's in Hawaii closed after suffering a fire in 2009.) Now limited to California, Bob 's grew to 16 restaurants by 2011, but started to decline again. Although Big Boy Restaurants International expected to open 140 California units by 2018, in 2017 only five Bob 's Big Boy Restaurants remain, all in the Greater Los Angeles Area of Southern California.
The Bob 's Big Boy Restaurant located at 4211 Riverside Drive in Burbank, California, is the oldest remaining Bob 's Big Boy in the United States. Built in 1949 by local residents Scott MacDonald and Ward Albert, it was designed by Los Angeles architect Wayne McAllister, "incorporating the 1940s transitional design of streamline moderne style, while anticipating the freeform 1950s coffee shop architecture. The towering Bob 's sign is an integral part of the building design and its most prominent feature. '' The building is said to have "made McAllister 's reputation, '' and he is credited with creating the restaurant 's circular drive - through design.
The restaurant was designated a California Point of Historical Interest in 1993. McAllister worked to preserve the structure as a historic landmark. McAllister was also the architect for the original Lawry 's restaurant on La Cienega Boulevard in Beverly Hills, the original Sands Hotel casino and Desert Inn casino in Las Vegas, and designed 40 coffee shops in the Los Angeles area in the late 1940s, and each with a distinctive look. Coffee shops started in Los Angeles because of the popularity of automobiles, and then spread across the United States.
The design of the Toluca Lake Bob 's represents a distinct period in the region 's architectural history, a style often referred to as Googie architecture. The building features a curving windowed facade and expansive roof overhangs with 1950s "free - form '' style of cantilevered roofs and tall display signs.
The Riverside Drive Bob 's Big Boy was designed as a drive - in, in which car hops brought food to the cars, and now operates a drive - thru window. In 1993, the tower sign was renovated, the dining room updated and an outdoor dining area added. Carhop service was reintroduced on weekends and a weekly classic car show is hosted in the parking lot.
Bob Hope and other movie personalities such as Mickey Rooney, Debbie Reynolds, Jonathan Winters, Dana Andrews, Martha Raye, Alexis Smith and Craig Stevens, were once regulars at the restaurant. Hope frequented the Burbank drive - in because it afforded him privacy.
Famed British musical group The Beatles dined at the Burbank location during their 1965 U.S. Tour. The table is the last booth on the right as one walks in, where the windows facing out towards Riverside Drive end. For many years a plaque described the event; the plaque has been stolen many times by fans, and has been replaced each time. Many regulars to the restaurant call this table and booth "The Beatles Booth. ''
|
who is the founder of the baha'i faith | Bahá'í Faith - wikipedia
The Bahá'í Faith (Persian: بهائی Bahā'i) is a religion teaching the essential worth of all religions, and the unity and equality of all people. Established by Bahá'u'lláh in 1863, it initially grew in the Middle East and now has between 5 and 7 million adherents, known as Bahá'ís, spread out into most of the world 's countries and territories, with the highest concentration in Iran.
The religion was born in Iran, where it has faced ongoing persecution since its inception. It grew from the mid-19th century Bábí religion, whose founder taught that God would soon send a prophet in the manner of Jesus or Muhammad. In 1863, after being banished from his native Iran, Bahá'u'lláh announced that he was this prophet. He was further exiled, spending over a decade in the prison city of Akka in the Ottoman province of Syria, in what is now Israel. Following Bahá'u'lláh's death in 1892, leadership of the religion fell to his son ` Abdu'l - Bahá (1844 - 1921), and later to his great - grandson Shoghi Effendi (1897 - 1957). Bahá'ís around the world annually elect local, regional, and national Spiritual Assemblies that govern the affairs of the religion, and every five years the members of all National Spiritual Assemblies elect the Universal House of Justice, the nine - member supreme governing institution of the worldwide Bahá'í community, which sits in Haifa, Israel near the shrine of the Báb.
Bahá'í teachings are in some ways similar to other monotheistic faiths: God is considered single and all - powerful. However, Bahá'u'lláh taught that religion is orderly and progressively revealed by one God through Manifestations of God who are the founders of major world religions throughout history; Buddha, Moses, Jesus, and Muhammad being the most recent in the period before the Báb and Bahá'u'lláh. As such, Bahá'ís regard the major religions as fundamentally unified in purpose, though varied in social practices and interpretations. There is a similar emphasis on the unity of all people, openly rejecting notions of racism and nationalism. At the heart of Bahá'í teachings is the goal of a unified world order that ensures the prosperity of all nations, races, creeds, and classes.
Letters written by Bahá'u'lláh to various individuals, including some heads of state, have been collected and canonized into a body of Bahá'í scripture that includes works by his son ` Abdu'l - Bahá, and also the Báb, who is regarded as Bahá'u'lláh's forerunner. Prominent among Bahá'í literature are the Kitáb - i - Aqdas, Kitáb - i - Íqán, Some Answered Questions, and The Dawn - Breakers.
In English - language use, the word Bahá'í is used either as an adjective to refer to the Bahá'í Faith or as a term for a follower of Bahá'u'lláh. The word is not a noun meaning the religion as a whole. It is derived from the Arabic Bahá ' (بهاء), meaning "glory '' or "splendor '', although the word Bahá'í is actually derived from its use as a loan word in Persian; in particular, the "- í '' suffix is Persian rather than Arabic. The term "Bahaism '' (or "Baha'ism '') is still used, mainly in a pejorative sense.
Three core principles establish a basis for Bahá'í teachings and doctrine: the unity of God, the unity of religion, and the unity of humanity. From these postulates stems the belief that God periodically reveals his will through divine messengers, whose purpose is to transform the character of humankind and to develop, within those who respond, moral and spiritual qualities. Religion is thus seen as orderly, unified, and progressive from age to age.
The Bahá'í writings describe a single, personal, inaccessible, omniscient, omnipresent, imperishable, and almighty God who is the creator of all things in the universe. The existence of God and the universe is thought to be eternal, without a beginning or end. Though inaccessible directly, God is nevertheless seen as conscious of creation, with a will and purpose that is expressed through messengers termed Manifestations of God.
Bahá'í teachings state that God is too great for humans to fully comprehend, or to create a complete and accurate image of, by themselves. Therefore, human understanding of God is achieved through his revelations via his Manifestations. In the Bahá'í religion, God is often referred to by titles and attributes (for example, the All - Powerful, or the All - Loving), and there is a substantial emphasis on monotheism; such doctrines as the Trinity are seen as compromising, if not contradicting, the Bahá'í view that God is single and has no equal. The Bahá'í teachings state that the attributes which are applied to God are used to translate Godliness into human terms and also to help individuals concentrate on their own attributes in worshipping God to develop their potentialities on their spiritual path. According to the Bahá'í teachings the human purpose is to learn to know and love God through such methods as prayer, reflection, and being of service to others.
Bahá'í notions of progressive religious revelation result in their accepting the validity of the well known religions of the world, whose founders and central figures are seen as Manifestations of God. Religious history is interpreted as a series of dispensations, where each manifestation brings a somewhat broader and more advanced revelation that is rendered as a text of scripture and passed on through history with greater or lesser reliability but at least true in substance, suited for the time and place in which it was expressed. Specific religious social teachings (for example, the direction of prayer, or dietary restrictions) may be revoked by a subsequent manifestation so that a more appropriate requirement for the time and place may be established. Conversely, certain general principles (for example, neighbourliness, or charity) are seen to be universal and consistent. In Bahá'í belief, this process of progressive revelation will not end; it is, however, believed to be cyclical. Bahá'ís do not expect a new manifestation of God to appear within 1000 years of Bahá'u'lláh's revelation.
Bahá'í beliefs are sometimes described as syncretic combinations of earlier religious beliefs. Bahá'ís, however, assert that their religion is a distinct tradition with its own scriptures, teachings, laws, and history. While the religion was initially seen as a sect of Islam, most religious specialists now see it as an independent religion, with its religious background in Shi'a Islam being seen as analogous to the Jewish context in which Christianity was established. Muslim institutions and clergy, both Sunni and Shia, consider Bahá'ís to be deserters or apostates from Islam, which has led to Bahá'ís being persecuted. Bahá'ís describe their faith as an independent world religion, differing from the other traditions in its relative age and in the appropriateness of Bahá'u'lláh's teachings to the modern context. Bahá'u'lláh is believed to have fulfilled the messianic expectations of these precursor faiths.
The Bahá'í writings state that human beings have a "rational soul '', and that this provides the species with a unique capacity to recognize God 's status and humanity 's relationship with its creator. Every human is seen to have a duty to recognize God through his messengers, and to conform to their teachings. Through recognition and obedience, service to humanity and regular prayer and spiritual practice, the Bahá'í writings state that the soul becomes closer to God, the spiritual ideal in Bahá'í belief. When a human dies, the soul passes into the next world, where its spiritual development in the physical world becomes a basis for judgment and advancement in the spiritual world. Heaven and Hell are taught to be spiritual states of nearness or distance from God that describe relationships in this world and the next, and not physical places of reward and punishment achieved after death.
The Bahá'í writings emphasize the essential equality of human beings, and the abolition of prejudice. Humanity is seen as essentially one, though highly varied; its diversity of race and culture are seen as worthy of appreciation and acceptance. Doctrines of racism, nationalism, caste, social class, and gender - based hierarchy are seen as artificial impediments to unity. The Bahá'í teachings state that the unification of humanity is the paramount issue in the religious and political conditions of the present world.
Shoghi Effendi, the Guardian of the religion from 1921 to 1957, wrote the following summary of what he considered to be the distinguishing principles of Bahá'u'lláh's teachings, which, he said, together with the laws and ordinances of the Kitáb - i - Aqdas constitute the bedrock of the Bahá'í Faith:
The independent search after truth, unfettered by superstition or tradition; the oneness of the entire human race, the pivotal principle and fundamental doctrine of the Faith; the basic unity of all religions; the condemnation of all forms of prejudice, whether religious, racial, class or national; the harmony which must exist between religion and science; the equality of men and women, the two wings on which the bird of human kind is able to soar; the introduction of compulsory education; the adoption of a universal auxiliary language; the abolition of the extremes of wealth and poverty; the institution of a world tribunal for the adjudication of disputes between nations; the exaltation of work, performed in the spirit of service, to the rank of worship; the glorification of justice as the ruling principle in human society, and of religion as a bulwark for the protection of all peoples and nations; and the establishment of a permanent and universal peace as the supreme goal of all mankind -- these stand out as the essential elements (which Bahá'u'lláh proclaimed).
The following principles are frequently listed as a quick summary of the Bahá'í teachings. They are derived from transcripts of speeches given by ` Abdu'l - Bahá during his tour of Europe and North America in 1912. The list is not authoritative and a variety of such lists circulate.
With specific regard to the pursuit of world peace, Bahá'u'lláh prescribed a world - embracing collective security arrangement for the establishment of a temporary era of peace referred to in the Baha'i teachings as the Lesser Peace. For the establishment of a lasting peace (The Most Great Peace) and the purging of the ' overwhelming Corruptions ' it is necessary that all the people of the world universally unite under a universal Faith.
The Bahá'í teachings speak of both a "Greater Covenant '', being universal and endless, and a "Lesser Covenant '', being unique to each religious dispensation. The Lesser Covenant is viewed as an agreement between a Messenger of God and his followers and includes social practices and the continuation of authority in the religion. At this time Bahá'ís view Bahá'u'lláh's revelation as a binding lesser covenant for his followers; in the Bahá'í writings being firm in the covenant is considered a virtue to work toward. The Greater Covenant is viewed as a more enduring agreement between God and humanity, where a Manifestation of God is expected to come to humanity about every thousand years, at times of turmoil and uncertainty. With unity as an essential teaching of the religion, Bahá'ís follow an administration they believe is divinely ordained, and therefore see attempts to create schisms and divisions as efforts that are contrary to the teachings of Bahá'u'lláh. Schisms have occurred over the succession of authority, but any Bahá'í divisions have had relatively little success and have failed to attract a sizeable following. The followers of such divisions are regarded as Covenant - breakers and shunned, essentially excommunicated.
The canonical texts are the writings of the Báb, Bahá'u'lláh, ` Abdu'l - Bahá, Shoghi Effendi and the Universal House of Justice, and the authenticated talks of ` Abdu'l - Bahá. The writings of the Báb and Bahá'u'lláh are considered as divine revelation, the writings and talks of ` Abdu'l - Bahá and the writings of Shoghi Effendi as authoritative interpretation, and those of the Universal House of Justice as authoritative legislation and elucidation. Some measure of divine guidance is assumed for all of these texts. Some of Bahá'u'lláh's most important writings include the Kitáb - i - Aqdas, literally the Most Holy Book, which defines many laws and practices for individuals and society, the Kitáb - i - Íqán, literally the Book of Certitude, which became the foundation of much of Bahá'í belief, the Gems of Divine Mysteries, which includes further doctrinal foundations, and the Seven Valleys and the Four Valleys which are mystical treatises.
Although the Bahá'í teachings have a strong emphasis on social and ethical issues, there exist a number of foundational texts that have been described as mystical. The Seven Valleys is considered Bahá'u'lláh's "greatest mystical composition. '' It was written to a follower of Sufism, in the style of ` Attar, the Persian Muslim poet, and sets forth the stages of the soul 's journey towards God. It was first translated into English in 1906, becoming one of the earliest books of Bahá'u'lláh available to the West. The Hidden Words is another book written by Bahá'u'lláh during the same period, containing 153 short passages in which Bahá'u'lláh claims to have taken the basic essence of certain spiritual truths and written them in brief form.
The Bahá'í Faith formed from the Persian religion of the Báb, a merchant who began preaching a new interpretation of Shia Islam in 1844. The Báb 's claim to divine revelation was rejected by the generality of Islamic clergy in Iran, ending in his public execution by authorities in 1850. The Báb taught that God would soon send a new messenger, and Bahá'ís consider Bahá'u'lláh to be that person. Although they are distinct movements, the Báb is so interwoven into Bahá'í theology and history that Bahá'ís celebrate his birth, death, and declaration as holy days, consider him one of their three central figures (along with Bahá'u'lláh and ` Abdu'l - Bahá), and a historical account of the Bábí movement (The Dawn - Breakers) is considered one of three books that every Bahá'í should "master '' and read "over and over again ''.
The Bahá'í community was mostly confined to the Persian and Ottoman empires until after the death of Bahá'u'lláh in 1892, at which time he had followers in 13 countries of Asia and Africa. Under the leadership of his son, ` Abdu'l - Bahá, the religion gained a footing in Europe and America, and was consolidated in Iran, where it still suffers intense persecution. After the death of ` Abdu'l - Bahá in 1921, the leadership of the Bahá'í community entered a new phase, evolving from a single individual to an administrative order with both elected bodies and appointed individuals.
On the evening of 22 May 1844, Siyyid ` Alí - Muhammad of Shiraz proclaimed that he was "the Báb '' (الباب "the Gate ''), referring to his later claim to the status of Mahdi, the Twelfth Imam of Shi ` a Islam. His followers were therefore known as Bábís. As the Báb 's teachings spread, which the Islamic clergy saw as a threat, his followers came under increased persecution and torture. The conflicts escalated in several places to military sieges by the Shah 's army. The Báb himself was imprisoned and eventually executed in 1850.
Bahá'ís see the Báb as the forerunner of the Bahá'í Faith, because the Báb 's writings introduced the concept of "He whom God shall make manifest '', a Messianic figure whose coming, according to Bahá'ís, was announced in the scriptures of all of the world 's great religions, and whom Bahá'u'lláh, the founder of the Bahá'í Faith, claimed to be in 1863. The Báb 's tomb, located in Haifa, Israel, is an important place of pilgrimage for Bahá'ís. The remains of the Báb were brought secretly from Iran to the Holy Land and eventually interred in the tomb built for them in a spot specifically designated by Bahá'u'lláh. The Báb 's main written works that have been translated into English are collected in Selections from the Writings of the Báb, drawn from an estimated 135 works.
Mírzá Husayn ` Alí Núrí was one of the early followers of the Báb, and later took the title of Bahá'u'lláh. Bábís faced a period of persecution that peaked in 1852 - 53 after a few individuals made a failed attempt to assassinate the Shah. Although they acted alone, the government responded with collective punishment, killing many Bábís. Bahá'u'lláh was put in prison. He claimed that in 1853, while incarcerated in the dungeon of the Síyáh - Chál in Tehran, he received the first intimations that he was the one anticipated by the Báb when he received a visit from the Maid of Heaven.
Shortly thereafter he was expelled from Tehran to Baghdad, in the Ottoman Empire; then to Constantinople (now Istanbul); and then to Adrianople (now Edirne). In 1863, at the time of his banishment from Baghdad to Constantinople, Bahá'u'lláh declared his claim to a divine mission to his family and followers. Tensions then grew between him and Subh - i - Azal, the appointed leader of the Bábís who did not recognize Bahá'u'lláh's claim. Throughout the rest of his life Bahá'u'lláh gained the allegiance of most of the Bábís, who came to be known as Bahá'ís. Beginning in 1866, he began declaring his mission as a Messenger of God in letters to the world 's religious and secular rulers, including Pope Pius IX, Napoleon III, and Queen Victoria.
Bahá'u'lláh was banished by Sultan Abdülâziz a final time in 1868 to the Ottoman penal colony of ` Akká, in present - day Israel. Towards the end of his life, the strict and harsh confinement was gradually relaxed, and he was allowed to live in a home near ` Akká, while still officially a prisoner of that city. He died there in 1892. Bahá'ís regard his resting place at Bahjí as the Qiblih to which they turn in prayer each day.
` Abbás Effendi was Bahá'u'lláh's eldest son, known by the title of ` Abdu'l - Bahá (Servant of Bahá). His father left a Will that appointed ` Abdu'l - Bahá as the leader of the Bahá'í community, and designated him as the "Centre of the Covenant '', "Head of the Faith '', and the sole authoritative interpreter of Bahá'u'lláh's writings. ` Abdu'l - Bahá had shared his father 's long exile and imprisonment, which continued until ` Abdu'l - Bahá 's own release as a result of the Young Turk Revolution in 1908. Following his release he led a life of travelling, speaking, teaching, and maintaining correspondence with communities of believers and individuals, expounding the principles of the Bahá'í Faith.
There are over 27,000 extant documents by ` Abdu'l - Bahá, mostly letters, of which only a fraction have been translated into English. Among the more well known are The Secret of Divine Civilization, the Tablet to Auguste - Henri Forel, and Some Answered Questions. Additionally, notes taken of a number of his talks during his journeys to the West were published in various volumes, such as Paris Talks.
Bahá'u'lláh's Kitáb - i - Aqdas and The Will and Testament of ` Abdu'l - Bahá are foundational documents of the Bahá'í administrative order. Bahá'u'lláh established the elected Universal House of Justice, and ` Abdu'l - Bahá established the appointed hereditary Guardianship and clarified the relationship between the two institutions. In his Will, ` Abdu'l - Bahá appointed his eldest grandson, Shoghi Effendi, as the first Guardian of the Bahá'í Faith; Shoghi Effendi served as head of the religion for 36 years, until his death.
Shoghi Effendi throughout his lifetime translated Bahá'í texts; developed global plans for the expansion of the Bahá'í community; developed the Bahá'í World Centre; carried on a voluminous correspondence with communities and individuals around the world; and built the administrative structure of the religion, preparing the community for the election of the Universal House of Justice. He died in 1957 under conditions that did not allow for a successor to be appointed.
At local, regional, and national levels, Bahá'ís elect members to nine - person Spiritual Assemblies, which run the affairs of the religion. There are also appointed individuals working at various levels, from the local to the international, who perform the function of propagating the teachings and protecting the community. The latter do not serve as clergy, which the Bahá'í Faith does not have. The Universal House of Justice, first elected in 1963, remains the successor and supreme governing body of the Bahá'í Faith, and its 9 members are elected every five years by the members of all National Spiritual Assemblies. Any male Bahá'í, 21 years or older, is eligible to be elected to the Universal House of Justice; all other positions are open to male and female Bahá'ís.
In 1937, Shoghi Effendi launched a seven - year plan for the Bahá'ís of North America, followed by another in 1946. In 1953, he launched the first international plan, the Ten Year World Crusade. This plan included extremely ambitious goals for the expansion of Bahá'í communities and institutions, the translation of Bahá'í texts into several new languages, and the sending of Bahá'í pioneers into previously unreached nations. He announced in letters during the Ten Year Crusade that it would be followed by other plans under the direction of the Universal House of Justice, which was elected in 1963 at the culmination of the Crusade. The House of Justice then launched a nine - year plan in 1964, and a series of subsequent multi-year plans of varying length and goals followed, guiding the direction of the international Bahá'í community.
Annually, on 21 April, the Universal House of Justice sends a ' Ridván ' message to the worldwide Bahá'í community, which generally gives an update on the progress made concerning the current plan, and provides further guidance for the year to come. The Bahá'ís around the world are currently being encouraged to focus on capacity building through children 's classes, youth groups, devotional gatherings, and a systematic study of the religion known as study circles. Further focuses are involvement in social action and participation in the prevalent discourses of society. The years from 2001 until 2021 represent four successive five - year plans, culminating in the centennial anniversary of the passing of ` Abdu'l - Bahá.
A Bahá'í published document reported 4.74 million Bahá'ís in 1986, growing at a rate of 4.4 %. Bahá'í sources since 1991 usually estimate the worldwide Bahá'í population to be above 5 million. The World Christian Encyclopedia estimated 7.1 million Bahá'ís in the world in 2000, representing 218 countries, and 7.3 million in 2010, according to the same source. It further states: "The Baha'i Faith is the only religion to have grown faster in every United Nations region over the past 100 years than the general population; Baha'i was thus the fastest - growing religion between 1910 and 2010, growing at least twice as fast as the population of almost every UN region. '' This source 's only systematic flaw was to consistently have a higher estimate of Christians than other cross-national data sets.
From its origins in the Persian and Ottoman Empires, by the early 20th century there were a number of converts in South and South East Asia, Europe, and North America. During the 1950s and 1960s, vast travel teaching efforts brought the religion to almost every country and territory of the world. By the 1990s, Bahá'ís were developing programs for systematic consolidation on a large scale, and the early 21st century saw large influxes of new adherents around the world. The Bahá'í Faith is currently the largest religious minority in Iran, Panama, Belize, and South Carolina; the second largest international religion in Bolivia, Zambia, and Papua New Guinea; and the third largest international religion in Chad and Kenya.
According to The World Almanac and Book of Facts 2004:
The majority of Bahá'ís live in Asia (3.6 million), Africa (1.8 million), and Latin America (900,000). According to some estimates, the largest Bahá'í community in the world is in India, with 2.2 million Bahá'ís, next is Iran, with 350,000, the US, with 150,000, and Brazil, with 60,000. Aside from these countries, numbers vary greatly. Currently, no country has a Bahá'í majority.
The Bahá'í Faith is a medium - sized religion and was listed in The Britannica Book of the Year (1992 -- present) as the second most widespread of the world 's independent religions in terms of the number of countries represented. According to Britannica, the Bahá'í Faith (as of 2010) is established in 221 countries and territories and has an estimated seven million adherents worldwide. Additionally, Bahá'ís have self - organized in most of the nations of the world.
The Bahá'í religion was ranked by the Foreign Policy magazine as the world 's second fastest growing religion by percentage (1.7 %) in 2007.
The following are a few examples from Bahá'u'lláh's teachings on personal conduct that are required or encouraged of his followers.
The following are a few examples from Bahá'u'lláh's teachings on personal conduct that are prohibited or discouraged.
While some of the laws from the Kitáb - i - Aqdas are applicable at the present time, others are dependent upon the existence of a predominantly Bahá'í society, such as the punishments for arson or murder. The laws, when not in direct conflict with the civil laws of the country of residence, are binding on every Bahá'í, and the observance of personal laws, such as prayer or fasting, is the sole responsibility of the individual.
The purpose of marriage in the Bahá'í Faith is mainly to foster spiritual harmony, fellowship and unity between a man and a woman and to provide a stable and loving environment for the rearing of children. The Bahá'í teachings on marriage call it a fortress for well - being and salvation and place marriage and the family as the foundation of the structure of human society. Bahá'u'lláh highly praised marriage, discouraged divorce, and required chastity outside of marriage; Bahá'u'lláh taught that a husband and wife should strive to improve the spiritual life of each other. Interracial marriage is also highly praised throughout Bahá'í scripture.
Bahá'ís intending to marry are asked to obtain a thorough understanding of the other 's character before deciding to marry. Although parents should not choose partners for their children, once two individuals decide to marry, they must receive the consent of all living biological parents, whether they are Bahá'í or not. The Bahá'í marriage ceremony is simple; the only compulsory part of the wedding is the reading of the wedding vows prescribed by Bahá'u'lláh which both the groom and the bride read, in the presence of two witnesses. The vows are "We will all, verily, abide by the Will of God. ''
Bahá'u'lláh prohibited a mendicant and ascetic lifestyle. Monasticism is forbidden, and Bahá'ís are taught to practice spirituality while engaging in useful work. The importance of self - exertion and service to humanity in one 's spiritual life is emphasised further in Bahá'u'lláh's writings, where he states that work done in the spirit of service to humanity enjoys a rank equal to that of prayer and worship in the sight of God.
Most Bahá'í meetings occur in individuals ' homes, local Bahá'í centers, or rented facilities. Worldwide, there are currently seven Bahá'í Houses of Worship, with an eighth near completion in Chile, and a further seven planned as of April 2012. Bahá'í writings refer to an institution called a "Mashriqu'l - Adhkár '' (Dawning - place of the Mention of God), which is to form the center of a complex of institutions including a hospital, university, and so on. The first Mashriqu'l - Adhkár ever built, in ` Ishqábád, Turkmenistan, has been the most complete House of Worship.
The Bahá'í calendar is based upon the calendar established by the Báb. The year consists of 19 months, each having 19 days, with four or five intercalary days, to make a full solar year. The Bahá'í New Year corresponds to the traditional Persian New Year, called Naw Rúz, and occurs on the vernal equinox, near 21 March, at the end of the month of fasting. Bahá'í communities gather at the beginning of each month at a meeting called a Feast for worship, consultation and socializing.
Each of the 19 months is given a name which is an attribute of God; some examples include Bahá ' (Splendour), ' Ilm (Knowledge), and Jamál (Beauty). The Bahá'í week is familiar in that it consists of seven days, with each day of the week also named after an attribute of God. Bahá'ís observe 11 Holy Days throughout the year, with work suspended on 9 of these. These days commemorate important anniversaries in the history of the religion.
The symbols of the religion are derived from the Arabic word Bahá ' (بهاء "splendor '' or "glory ''), with a numerical value of 9, which is why the most common symbol is the nine - pointed star. The ringstone symbol and calligraphy of the Greatest Name are also often encountered. The former consists of two five - pointed stars interspersed with a stylized Bahá ' whose shape is meant to recall the three onenesses, while the latter is a calligraphic rendering of the phrase Yá Bahá'u'l - Abhá (يا بهاء الأبهى "O Glory of the Most Glorious! '').
The five - pointed star is the symbol of the Bahá'í Faith. In the Bahá'í Faith, the star is known as the Haykal (Arabic: "temple '' ), and it was initiated and established by the Báb. The Báb and Bahá'u'lláh wrote various works in the form of a pentagram.
Since its inception the Bahá'í Faith has been involved in socio - economic development, beginning by giving greater freedom to women and promoting female education as a priority concern; that involvement was given practical expression by creating schools, agricultural coops, and clinics.
The religion entered a new phase of activity when a message of the Universal House of Justice dated 20 October 1983 was released. Bahá'ís were urged to seek out ways, compatible with the Bahá'í teachings, in which they could become involved in the social and economic development of the communities in which they lived. Worldwide in 1979 there were 129 officially recognized Bahá'í socio - economic development projects. By 1987, the number of officially recognized development projects had increased to 1482.
Bahá'u'lláh wrote of the need for world government in this age of humanity 's collective life. Because of this emphasis the international Bahá'í community has chosen to support efforts of improving international relations through organizations such as the League of Nations and the United Nations, with some reservations about the present structure and constitution of the UN. The Bahá'í International Community is an agency under the direction of the Universal House of Justice in Haifa, and has consultative status with the following organizations:
The Bahá'í International Community has offices at the United Nations in New York and Geneva and representations to United Nations regional commissions and other offices in Addis Ababa, Bangkok, Nairobi, Rome, Santiago, and Vienna. In recent years an Office of the Environment and an Office for the Advancement of Women were established as part of its United Nations Office. The Bahá'í Faith has also undertaken joint development programs with various other United Nations agencies. In the 2000 Millennium Forum of the United Nations a Bahá'í was invited as the only non-governmental speaker during the summit.
Bahá'ís continue to be persecuted in Islamic countries, as Islamic leaders do not recognize the Bahá'í Faith as an independent religion, but rather as apostasy from Islam. The most severe persecutions have occurred in Iran, where over 200 Bahá'ís were executed between 1978 and 1998, and in Egypt. The rights of Bahá'ís have been restricted to greater or lesser extents in numerous other countries, including Afghanistan, Indonesia, Iraq, Morocco, Yemen and several countries in sub-Saharan Africa.
The marginalization of the Iranian Bahá'ís by current governments is rooted in historical efforts by Muslim clergy to persecute the religious minority. When the Báb started attracting a large following, the clergy hoped to stop the movement from spreading by stating that its followers were enemies of God. These clerical directives led to mob attacks and public executions. Starting in the twentieth century, in addition to repression aimed at individual Bahá'ís, centrally directed campaigns that targeted the entire Bahá'í community and its institutions were initiated. In one case in Yazd in 1903 more than 100 Bahá'ís were killed. Bahá'í schools, such as the Tarbiyat boys ' and girls ' schools in Tehran, were closed in the 1930s and 1940s, Bahá'í marriages were not recognized and Bahá'í texts were censored.
During the reign of Mohammad Reza Pahlavi, to divert attention from economic difficulties in Iran and from a growing nationalist movement, a campaign of persecution against the Bahá'ís was instituted. An approved and coordinated anti-Bahá'í campaign (to incite public passion against the Bahá'ís) started in 1955 and it included the spreading of anti-Bahá'í propaganda on national radio stations and in official newspapers. In the late 1970s the Shah 's regime consistently lost legitimacy due to criticism that it was pro-Western. As the anti-Shah movement gained ground and support, revolutionary propaganda was spread which alleged that some of the Shah 's advisors were Bahá'ís. Bahá'ís were portrayed as economic threats, and as supporters of Israel and the West, and societal hostility against the Bahá'ís increased.
Since the Islamic Revolution of 1979 Iranian Bahá'ís have regularly had their homes ransacked or have been banned from attending university or from holding government jobs, and several hundred have received prison sentences for their religious beliefs, most recently for participating in study circles. Bahá'í cemeteries have been desecrated and property has been seized and occasionally demolished, including the House of Mírzá Buzurg, Bahá'u'lláh's father. The House of the Báb in Shiraz, one of three sites to which Bahá'ís perform pilgrimage, has been destroyed twice.
According to a US panel, attacks on Bahá'ís in Iran increased under Mahmoud Ahmadinejad 's presidency. The United Nations Commission on Human Rights revealed an October 2005 confidential letter from Command Headquarters of the Armed Forces of Iran ordering its members to identify Bahá'ís and to monitor their activities. Due to these actions, the Special Rapporteur of the United Nations Commission on Human Rights stated on 20 March 2006, that she "also expresses concern that the information gained as a result of such monitoring will be used as a basis for the increased persecution of, and discrimination against, members of the Bahá'í faith, in violation of international standards. The Special Rapporteur is concerned that this latest development indicates that the situation with regard to religious minorities in Iran is, in fact, deteriorating. ''
On 14 May 2008, members of an informal body known as the "Friends '' that oversaw the needs of the Bahá'í community in Iran were arrested and taken to Evin prison. The Friends ' court case was postponed several times, but finally got underway on 12 January 2010. Other observers were not allowed in the court. Even the defence lawyers, who for two years had had minimal access to the defendants, had difficulty entering the courtroom. The chairman of the U.S. Commission on International Religious Freedom said that it seems that the government has already predetermined the outcome of the case and is violating international human rights law. Further sessions were held on 7 February 2010, 12 April 2010 and 12 June 2010. On 11 August 2010 it became known that the court sentence was 20 years imprisonment for each of the seven prisoners, which was later reduced to ten years. After the sentence, they were transferred to Gohardasht prison. In March 2011 the sentences were reinstated to the original 20 years. On 3 January 2010, Iranian authorities detained ten more members of the Baha'i minority, reportedly including Leva Khanjani, granddaughter of Jamaloddin Khanjani, one of seven Baha'i leaders jailed since 2008; in February, they arrested his son, Niki Khanjani.
The Iranian government claims that the Bahá'í Faith is not a religion, but is instead a political organization, and hence refuses to recognize it as a minority religion. However, the government has never produced convincing evidence supporting its characterization of the Bahá'í community. Also, the government 's statements that Bahá'ís who recanted their religion would have their rights restored attest to the fact that Bahá'ís are persecuted solely for their religious affiliation. The Iranian government also accuses the Bahá'í Faith of being associated with Zionism because the Bahá'í World Centre is located in Haifa, Israel. These accusations against the Bahá'ís have no basis in historical fact, and are used by the Iranian government to make the Bahá'ís "scapegoats ''. In fact it was the Iranian leader Naser al - Din Shah Qajar who banished Bahá'u'lláh from Persia to the Ottoman Empire, and Bahá'u'lláh was later exiled by the Ottoman Sultan, at the behest of the Persian Shah, to territories further away from Iran and finally to Acre in Syria, which only a century later was incorporated into the state of Israel.
Bahá'í institutions and community activities have been illegal under Egyptian law since 1960. All Bahá'í community properties, including Bahá'í centers, libraries, and cemeteries, have been confiscated by the government and fatwas have been issued charging Bahá'ís with apostasy.
The Egyptian identification card controversy began in the 1990s when the government modernized the electronic processing of identity documents, which introduced a de facto requirement that documents must list the person 's religion as Muslim, Christian, or Jewish (the only three religions officially recognized by the government). Consequently, Bahá'ís were unable to obtain government identification documents (such as national identification cards, birth certificates, death certificates, marriage or divorce certificates, or passports) necessary to exercise their rights in their country unless they lied about their religion, which conflicts with Bahá'í religious principle. Without documents, they could not be employed, educated, treated in hospitals, travel outside of the country, or vote, among other hardships. Following a protracted legal process culminating in a court ruling favorable to the Bahá'ís, the interior minister of Egypt released a decree on 14 April 2009, amending the law to allow Egyptians who are not Muslim, Christian, or Jewish to obtain identification documents that list a dash in place of one of the three recognized religions. The first identification cards were issued to two Bahá'ís under the new decree on 8 August 2009.
|
describe any five features of central highlands of india | Eastern Himalaya - wikipedia
The Eastern Himalayas extend from the westernmost part of Kaligandaki Valley in central Nepal to northwest Yunnan in China, also encompassing Bhutan, North - East India (its northeastern states of Sikkim and the North Bengal hills), southeastern Tibet, and parts of northern Myanmar. This region is widely considered a biodiversity hotspot, with notable biocultural diversity.
The Eastern Himalayas have a much more sophisticated geomorphic history and pervasive topographic features than the Central Himalayas. In the southwest of the Sub-Himalayas lies the Singalila Ridge, the western end of a group of uplands in Nepal. Most of the Sub-Himalayas are in Nepal; a small portion reaches into Sikkim, India and a fragment is in the southern half of Bhutan.
The Buxa range of Indo - Bhutan is also a part of the ancient rocks of the Himalayas. The ancient folds, running mainly along an east - west axis, were worn down during a long period of denudation lasting into Cretaceous times, possibly over a hundred million years. During this time the Carboniferous and Permian rocks disappeared from the surface, except in its north near Hatisar in Bhutan and in the long trench extending from Jaldhaka River to Torsa River, where limestone and coal deposits are preserved in discontinuous basins. Limestone deposits also appear in Bhutan on the southern flanks of the Lower Himalayas. The rocks of the highlands are mainly sandstones of the Devonian age, with limestones and shales of the same period in places. The core of the mountain is exposed across the center, where Paleozoic rocks, mainly Cambrian and Silurian slates and Takhstasang gneiss outcrops, are visible in the northwest and northeast, the latter extending to western Arunachal Pradesh in India.
In the Mesozoic era the whole of the worn - down plateau was under sea. In this expansive shallow sea, which covered most of Assam (India) and Bhutan, chalk deposits formed from seawater tides oscillating between land and sea levels. During subsequent periods, tertiary rocks were laid down. The Paro metamorphic belt may be found overlying Chasilakha - Soraya gneiss in some places. Silurian metamorphics in other places suggest long denudation of the surface. This was the time of Alpine mountain formation, and much of the movement in the palaeozoic region was probably connected with it. The Chomolhari tourmaline granites of Bhutan, stretching westwards from the Paro chu and at much depth below the present surface, were formed during this period of uplift, fracture and subsidence.
The climate of the Eastern Himalayas is characterized by temperate summers and chilly winters. The hot season commences around the middle of April, reaches its maximum in June, and ends by late August. The average summer temperature is generally 20 ° C (68 ° F). The average annual rainfall is 500 millimetres (20 in). Snowfall is common at higher elevations.
In the valleys of Rangeet, Teesta, and Chumbi most precipitation during winter takes the form of snowfall. Snow accumulation in the valleys greatly reduces the area 's wintertime temperature. The northeast monsoon is the predominant feature of the Eastern Himalayan region 's weather, while on the southern slopes cold season precipitation is more important.
Agricultural conditions vary throughout the region. In the highlands the soil is morainic, and the hill slopes are cut by the locals into successive steps or terraces only a few meters broad, thus preventing water run - off and allowing spring crops to thrive. The region 's economy relied mostly on shifting cultivation, supplemented by hunting, fishing and barter trade. Agriculture does not produce sufficient yields to meet local needs. The region 's economy remained stagnant and at subsistence levels for centuries due to the lack of capital, investor access, or entrepreneurial knowledge. Inhabitants also relied heavily on wild and semi-cultivated species for food and herbal medicines.
The Eastern Himalayas consist of five distinct political / national territories:
The Eastern Himalayas sustain a diverse array of wildlife, including many rare species of fauna and flora. Nepal features, among other rare Nepali animal species, snow leopards in its Himalayan region, and one - horned rhinos, Asian elephants and wild Water Buffaloes in its southern region, making the country one of the world 's greatest Biodiversity hotspots. Three major river basins of Nepal, namely the Karnali, Narayani and Koshi Basins, feature highly dense forests and shelter no less than 5 % of the world 's butterfly species and 8 % of the world 's bird species. Preserving this diverse wilderness is essential for the area 's and the world 's biodiversity. The area has many ecological projects intended to ensure the survival and growth of many species.
The national flower of Bhutan is Meconopsis gakyidiana, commonly called the blue poppy (though taxonomically it is not of the poppy genus or species, but only poppy - like; thus it can not produce opium or its derivatives). The blue poppy was the source of an ecological mystery for nearly a century, due to its misclassification as Meconopsis grandis. In 2017, after three years of field work and taxonomic studies, its classification was corrected by Bhutanese and Japanese researchers. The problem may have originally arisen from the now understood finding that some Himalayan flora readily hybridize with each other and produce viable seeds, causing wider (and unknown) morphological diversity.
|
where is the beach from chitty chitty bang bang | Chitty Chitty Bang Bang - wikipedia
Chitty Chitty Bang Bang is a 1968 British musical adventure fantasy film, directed by Ken Hughes and written by Roald Dahl and Hughes, loosely based on Ian Fleming 's 1964 novel Chitty - Chitty - Bang - Bang: The Magical Car. The film stars Dick Van Dyke, Sally Ann Howes, Adrian Hall, Heather Ripley, Lionel Jeffries, James Robertson Justice, Robert Helpmann and Gert Fröbe.
The film was produced by Albert R. Broccoli, the regular co-producer of the James Bond series of films (also based on Ian Fleming novels). John Stears supervised the special effects. Irwin Kostal supervised and conducted the music, while the musical numbers, written by Richard M. and Robert B. Sherman of Mary Poppins, were staged by Marc Breaux and Dee Dee Wood. The song "Chitty Chitty Bang Bang '' was nominated for an Academy Award.
The story opens with a montage of European Grand Prix races in which one particular car appears to win every race it runs in from 1907 through 1908. However, in its final 1909 race, the car crashes and catches fire, ending its racing career. The car eventually ends up in an old garage in rural England, where two children, Jeremy and Jemima Potts, have grown fond of it. However, a man in the junkyard intends to buy the car from the garage owner, Mr. Coggins, for scrap. The children, who live with their widowed father Caractacus Potts, an eccentric inventor, and the family 's equally peculiar grandfather, implore their father to buy the car, but Caractacus ca n't afford it. While playing truant from school, they meet Truly Scrumptious, a beautiful upper - class woman with her own motor car, who brings them home to report their truancy to their father. After she leaves, Caractacus promises the children that he will save the car, but is taken aback at the cost he has committed himself to. He looks for ways to raise money to avoid letting them down.
The next morning, Potts discovers that the sweets produced by a machine he has invented can be played like a flute. He tries to sell the "Toot Sweets '' to Truly 's father, Lord Scrumptious, a major confectionery manufacturer. He is almost successful until the whistle attracts a pack of dogs who overrun the factory, resulting in Caractacus 's proposition being rejected.
Caractacus next takes his automatic hair - cutting machine to a carnival to raise money, but his invention accidentally ruins the hair of a customer. Potts eludes the man by joining a song - and - dance act. He becomes the centre of the show and earns enough in tips to buy the car and rebuild it. They name the car "Chitty Chitty Bang Bang '' for the unusual noise of its engine. In the first trip in the car, Caractacus, the children, and Truly picnic on the beach. Caractacus tells them a tale about nasty Baron Bomburst, the tyrant of fictional Vulgaria, who wants to steal Chitty Chitty Bang Bang.
As Potts tells his story, the quartet and the car are stranded by high tide and are attacked by pirates working for the Baron. All of a sudden, Chitty deploys huge flotation devices and transforms into a power boat, and they escape Bomburst 's yacht and return to shore. The Baron sends two spies to capture the car, but they capture Lord Scrumptious, then Grandpa Potts, mistaking each for the car 's creator. Caractacus, Truly, and the children see Grandpa being taken away by airship, and they give chase. When they accidentally drive off a cliff, Chitty sprouts wings and propellers and begins to fly. They follow the airship to Vulgaria and find a land without children; the Baroness Bomburst abhors them and imprisons any she finds. Grandpa has been ordered by the Baron to make another floating car, and he bluffs his abilities to avoid being executed. The Potts ' party is hidden by the local Toymaker, who now works only for the childish Baron. Chitty is discovered and taken to the castle. While Caractacus and the toymaker search for Grandpa and Truly searches for food, the children are caught by the Baroness ' Child Catcher.
The Toymaker takes Truly and Caractacus to a grotto beneath the castle where the townspeople have been hiding their children. They concoct a scheme to free the children and the village from the Baron. The Toymaker sneaks them into the castle disguised as life - size dolls for the Baron 's birthday. Caractacus snares the Baron, and the children swarm into the banquet hall, overcoming the Baron 's palace guards and guests. In the ensuing chaos, the Baron, Baroness, and the evil Child Catcher are captured. The Potts family and Truly fly back to England. When they arrive home, Lord Scrumptious surprises Caractacus with an offer to buy the Toot Sweet as a canine confection. Caractacus, realising that he will be rich, rushes to tell Truly the news. They kiss, and Truly agrees to marry him. As they drive home, he acknowledges the importance of pragmatism, as the car takes off into the air again.
The cast includes:
The part of Truly Scrumptious had originally been offered to Julie Andrews, to reunite her with Van Dyke after their success in Mary Poppins. Andrews rejected the role specifically because she considered the part too close to the Poppins mould. Instead, Sally Ann Howes was given the role. Dick Van Dyke was cast after he turned down the role of Fagin in another 1968 musical, Oliver! (which ended up going to Ron Moody).
The Caractacus Potts inventions in the film were created by Rowland Emett; by 1976, Time magazine, describing Emett 's work, said no term other than "Fantasticator... could remotely convey the diverse genius of the perky, pink - cheeked Englishman whose pixilations, in cartoon, watercolor and clanking 3 - D reality, range from the celebrated Far Tottering and Oyster Creek Railway to the demented thingamabobs that made the 1968 movie Chitty Chitty Bang Bang a minuscule classic. ''
Six Chitty - Chitty Bang - Bang cars were created for the film, only one of which was fully functional. At a 1973 auction in Florida, one of them sold for $37,000, equal to $203,972 today. The original "hero '' car, in a condition described as fully functional and road - going, was offered at auction on 15 May 2011 by a California - based auction house. The car sold for $805,000, less than the $1 -- 2 million it was expected to reach. It was purchased by New Zealand film director Sir Peter Jackson.
The film was the tenth most popular at the US box office in 1969.
Time began its review saying the film is a "picture for the ages -- the ages between five and twelve '' and ends noting that "At a time when violence and sex are the dual sellers at the box office, Chitty Chitty Bang Bang looks better than it is simply because it 's not not all all bad bad ''; the film 's "eleven songs have all the rich melodic variety of an automobile horn. Persistent syncopation and some breathless choreography partly redeem it, but most of the film 's sporadic success is due to Director Ken Hughes 's fantasy scenes, which make up in imagination what they lack in technical facility. ''
The New York Times critic Renata Adler wrote, "in spite of the dreadful title, Chitty Chitty Bang Bang... is a fast, dense, friendly children 's musical, with something of the joys of singing together on a team bus on the way to a game ''; Adler called the screenplay "remarkably good '' and the film 's "preoccupation with sweets and machinery seems ideal for children ''; she ends her review on the same note as Time: "There is nothing coy, or stodgy or too frightening about the film; and this year, when it has seemed highly doubtful that children ought to go to the movies at all, Chitty Chitty Bang Bang sees to it that none of the audience 's terrific eagerness to have a good time is betrayed or lost. ''
Film critic Roger Ebert reviewed the film (Chicago Sun Times, 24 December 1968). He wrote: "Chitty Chitty Bang Bang contains about the best two - hour children 's movie you could hope for, with a marvelous magical auto and lots of adventure and a nutty old grandpa and a mean Baron and some funny dances and a couple of (scary) moments. ''
In 2008 film critic and historian Leonard Maltin considered the picture "one big Edsel, with totally forgettable score and some of the shoddiest special effects ever. '' In 2008, Entertainment Weekly called Helpmann 's depiction of the Child Catcher one of the "50 Most Vile Movie Villains. ''
As of March 2014, the film has a 65 % "Fresh '' rating (17 of 26 reviews) on Rotten Tomatoes.
The film was nominated for the American Film Institute 's 2006 AFI 's Greatest Movie Musicals list.
The original soundtrack album, as was typical of soundtrack albums, presented mostly songs with very few instrumental tracks. The songs were also edited, with specially recorded intros and outros and most instrumental portions removed, both because of time limitations of the vinyl LP and the belief that listeners would not be interested in listening to long instrumental dance portions during the songs.
The soundtrack has been released on CD four times, the first two releases using the original LP masters rather than going back to the original music masters to compile a more complete soundtrack album with underscoring and complete versions of songs. The 1997 Rykodisc release included several quick bits of dialogue from the film between some of the tracks and has gone out of circulation. On 24 February 2004, a few short months after MGM released the movie on a 2 - Disc Special Edition DVD, Varèse Sarabande reissued a newly remastered soundtrack album without the dialogue tracks, restoring it to its original 1968 LP format.
In 2011, Kritzerland released the definitive soundtrack album, a 2 - CD set featuring the Original Soundtrack Album plus bonus tracks, music from the Song and Picture Book Album on disc 1, and the Richard Sherman Demos, as well as six Playback Tracks (including a long version of international covers of the theme song). Inexplicably, this release was limited to only 1,000 units.
In April 2013, Perseverance Records re-released the Kritzerland double CD set with expansive new liner notes by John Trujillo and a completely new designed booklet by Perseverance regular James Wingrove.
Chitty Chitty Bang Bang was released numerous times in the VHS format. In 1998 the film saw its first DVD release. The year 2003 brought a two - disc "Special Edition '' release. On 2 November 2010, 20th Century Fox released a two - disc Blu - ray and DVD combination set featuring the extras from the 2003 release as well as new features. The 1993 LaserDisc release by MGM / UA Home Video was the first home video release with the proper 2.20: 1 Super Panavision 70 aspect ratio.
The film did not follow Fleming 's novel closely. A separate novelisation of the film was published at the time of the film 's release. It basically followed the film 's story but with some differences of tone and emphasis, e.g. it mentioned that Caractacus Potts had had difficulty coping after the death of his wife, and it made it clearer that the sequences including Baron Bomburst were extended fantasy sequences. It was written by John Burke.
|