question | context
---|---
who did jackie have a baby with on roseanne | List of Roseanne characters - wikipedia
The following is a list of major characters in the television series Roseanne.
Played by Roseanne Barr. Roseanne, in a takeoff of her stand-up comedic and presumed real-life persona, is a bossy, loud, caustic, overweight, and dominant woman. She constantly tries to control the lives of her sister, husband, children, co-workers, and friends. Despite her dominating nature, Roseanne is a loving mother who works hard and makes as much time for her kids as possible. She and her younger sister Jackie are the daughters of Beverly and Al Harris. Roseanne is married to Dan Conner and, when the series begins, they have three children: Becky, Darlene and David Jacob ("D.J."); a fourth child, Jerry Garcia, is born late in the series.
She and her family deal with the many hardships of poverty, obesity, and domestic troubles with humor. She has always had trouble with her weight, inspiring an episode in which she and Dan try to lose weight. She works at the Wellman Plastics factory at the beginning of the show's run and quits that job after a conflict with the new egotistical, domineering boss, Mr. Faber; she leads a walkout that includes most of her friends. She has several periods of unemployment and holds jobs as a fast-food employee, a telemarketer, a bartender, and a shampoo woman and hair sweeper at a beauty salon. Subsequently, she works for several years as a waitress in the luncheonette at Rodbell's department store in the Lanford Mall (much to the chagrin of daughters Becky and Darlene, who regularly hang out there).
Roseanne eventually co-owns a successful restaurant called the Lanford Lunch Box with Jackie, her mother Bev, Nancy, and her former boss from the luncheonette, Leon, after Bev sells her share in the restaurant to him. In the show's last years, Roseanne and Jackie win a lottery prize in excess of $108,000,000. At the end of the series, it is revealed that she didn't win the lottery at all and that most of what had happened on the show was actually from a book of her own writing. She also reveals that Dan died of a heart attack at the end of the previous season.
In June 2010, Entertainment Weekly named Roseanne one of the 100 Greatest Characters of the Last 20 Years. In 2009, she was listed in the Top 5 Classic TV Moms by Film.com. In May 2012, she was one of the 12 moms chosen by users of iVillage on their list of "Mommy Dearest: The TV Moms You Love". AOL named her the 11th Most Memorable Female TV Character.
Played by John Goodman. Dan Conner is the husband of Roseanne and father of Becky, Darlene, DJ, and Jerry. Dan is a loving, easygoing drywall contractor.
During the final episode, Roseanne reveals that the entire series was written as a book based on her life and family, and that she changed certain elements she hadn't liked; most notably, during the final season, Dan and Roseanne are shown living apart after Dan cheats on Roseanne, but he had actually died from his heart attack at Darlene and David's wedding near the end of Season 8. Writing that he cheated was her way of explaining her feelings of loss and abandonment after his death. The potential absence of Dan from all or most of the next season prompted Phil Rosenthal of the Los Angeles Daily News to describe it as a rare occasion where ending the show would be preferable to doing without him. Robinson described Goodman's potential absence as leaving a tremendous void, owing to his ability to make those acting with him better. The revelation that Dan had actually died, and that the series within the show was a work of fiction, was not well received.
In an article about television dads, The Post and Courier editor Mindy Spar discussed how '90s TV dads became goofier than dads from earlier decades, calling Dan more like one of the children than the father. IGN editor Edgar Arce called Dan Conner a prototypical everyman.
An article in the Sarasota Herald-Tribune praised Dan and Roseanne's relationship, calling it realistic and commenting that while they mock each other, viewers can feel their love as they deal with the kinds of problems real families face. Daily News editor David Bianculli stated that while they were among the most entertaining and realistic couples on television, they were one of the least during their separation. Their relationship was included in TV Guide's list of the best TV couples of all time.
Played by Laurie Metcalf. Jackie is Roseanne's neurotic younger sister, three years her junior, but a loving and devoted aunt to her nieces and nephews. In the episode "Labor Day," it is revealed that Jackie's real name may originally have been Marjorie: her mother reveals that Roseanne couldn't pronounce it and wound up calling her "My Jackie," which led to Jackie's name.
Jackie is an intelligent, warm, highly sensitive underachiever with chronically low self-esteem. Roseanne seems to be in charge of Jackie's life, which is a frequent cause of conflict between the two; however, Jackie sometimes enjoys having Roseanne mother her, especially when she feels vulnerable. Jackie's character becomes more animated and colorful as the series progresses. Jackie holds numerous jobs: working in the Wellman Plastics factory for several years until the walkout, then becoming a police officer until she is injured on the job, later a truck driver, and finally opening the Lanford Lunch Box with Roseanne and Nancy (her mother is a silent partner). Jackie often comes up with seemingly off-the-wall "crazy" ideas, but many of her unconventional ideas actually work. Her romantic relationships are frequently unstable, including one in which she dates a man named Fisher and becomes the victim of domestic violence. She eventually marries Dan's co-worker Fred, who impregnated her during a one-night stand. Jackie gives birth to their son, Andy, two months before she marries Fred.
At one point Jackie, unhappy with the self-absorbed couch potato Fred, starts going out with another couple, and then with just the other man. Dan sees her and warns her about Fred's possible reaction, but later accidentally blurts it out to Fred. Fred returns home upset, accuses Jackie of "adultery," and refuses to speak to her. Roseanne, with Bev's help, gets Fred to realize that his accusations against Jackie are unjustified, and he returns. By this point, however, Jackie is realizing that she's happier being single. The marriage proves to be short-lived because Jackie finds Fred boring, petty, and self-centered, and they fall out of love. Jackie is crushed by the divorce at first, but moves on to become a successful single mom to Andy.
Despite Jackie's apparent flightiness in the early episodes, she is actually the backbone of the Conner/Harris family in many ways, as Roseanne admits in the last episode. In the final episode, it is also revealed that the character of Jackie had come out as a lesbian during the final season, and that Roseanne knew but had simply always pictured Jackie with a man.
Played by Michael Fishman (played by Sal Barone in the pilot episode). Born in 1981, DJ is the youngest Conner child for the majority of the series, until Roseanne gives birth to Jerry in season 7. He is more simple-minded, naive, and boisterous than his older sisters, who frequently taunt him. He is portrayed as something of a simpleton in school and as odd; for instance, he peeks at both Darlene and Becky naked at different times. In the first episode of season three, it is stated that while Becky and Darlene were planned pregnancies for the Conners, DJ was a "surprise." As he grows up, DJ's storylines deal with more mature topics such as masturbation and human sexuality. Later episodes depict DJ as having a close, brother-like friendship with Becky's husband, Mark. He also develops an interest in filmmaking, so much so that he repeatedly asks Darlene if he can videotape her giving birth. He was one of the only characters whose story wasn't changed in the final episode.
Played by Sara Gilbert. Dan and Roseanne's second child, Darlene is artistic, sarcastic, tomboyish, intelligent, and slightly rough. Born in 1977, she is in the early seasons a tomboy who loves sports and has trouble in school, but during high school she becomes moody, artistic, an animal rights activist, and a vegetarian, to more closely match the real-life views and personality of Gilbert. During her freshman year of high school, she begins dating David, coincidentally the younger brother of Becky's husband, Mark. Darlene possesses the same sarcasm and domineering attitude as her mother, often causing the two to clash. Her bossy nature is best seen with David, who usually gives in to her will. Darlene is a very talented writer. Drawing on David's artistic talents, the two begin working seriously on a graphic novel, and Darlene eventually applies for and is awarded early admission and a scholarship to an exclusive art school in Chicago before she finishes high school, which Roseanne allows her to attend after realizing it is her only chance to get out of Lanford and make a better life for herself as the writer Roseanne was never able to be. While in college, she meets a boy named Jimmy, whom she dates while still with David. David is aware she is dating Jimmy and eventually tells her she has to choose between them; she chooses Jimmy. Jimmy later dumps her because he couldn't get close to her. Darlene realizes she still loves David after the two go to a movie together, and Roseanne talks them into reconciling. Darlene later becomes pregnant by David at Disney World, and the two marry soon after. She eventually finishes art school and later gives birth to her daughter, Harris Conner Healy. Harris is born about three months prematurely and almost doesn't survive; David and Darlene must decide whether to keep her on the breathing machine or take her off and let nature take its course. The two decide she should get a chance to "experience life without being hooked up to all those machines," and surprisingly Harris pulls through. In the series finale, it is revealed that Darlene had actually been dating Mark all along in reality, but was written in Roseanne's book as having been with David instead, as Roseanne felt it made more sense.
Sara Gilbert was almost passed over for the role for not being cute enough.
Played alternately by Lecy Goranson (Seasons 1–5 and 8) and Sarah Chalke (Seasons 6–7 and 9, guest starring in Season 8). Born March 15, 1975, Becky, the oldest of Dan and Roseanne's children, has a complex personality: she is quite bright and an overachiever, but is also somewhat quick-tempered and sometimes angry with her parents and younger sister, Darlene. She dates her rebellious biker boyfriend Mark Healy against her parents' wishes and then, at age 17, leaves home to marry him and move to Minneapolis. Later, Becky and Mark return home to live with Roseanne and then move out again into a trailer. In the final episode, it is revealed that she is pregnant with the couple's first child.
During the show's fifth season, Goranson left to attend Vassar College. At first, her character Becky is merely absent from the show, explained in the story when she marries and moves away to live with her husband, Mark. During the sixth season, however, the show's producers recast Becky with actress Sarah Chalke.
Because cast and crew believed that the eighth season of Roseanne would be its last, Goranson had signed back on only for that season. These changes are addressed within the show and become a running gag throughout season 8, as Goranson and Chalke alternate in the role of Becky depending on Goranson's availability. During season eight, Goranson is credited in the opening sequence as a full-time cast member and appears in 11 of 25 episodes; Sarah Chalke makes five appearances and is credited as a guest star. In the show's ninth season, Chalke replaced Goranson full-time, and no further "in-jokes" about Becky's casting are made.
At the end of the Season 9 finale, it is hinted that Becky is pregnant, though she did not want to mention it to the family. In the same episode, it is also revealed that Becky had actually been dating David all along in reality, but was written in Roseanne's book as having been with Mark instead, as Roseanne felt it made more sense.
Played by Johnny Galecki. David is Mark's younger brother; he first appeared in "The Bowling Show" (episode 4.14), in which his first name was given as Kevin. In a season 6 episode, Roseanne comments that David isn't his real name, just something that Darlene made up. It is therefore possible that Kevin really is his name, although everyone, including his own family, calls him David, and Roseanne might not have been serious. Whereas Mark was a rebellious delinquent at first, David is friendlier to the Conners. He is shy, polite, thoughtful, sensitive, soft-spoken, artistic, and intelligent. He later enters a relationship with Darlene and collaborates with her by illustrating graphic novels that she writes, though he tends to be the more submissive one in the relationship. His well-behaved manner is endearing to the Conner family, who think of him as part of the family and jokingly refer to him as being more welcome in the family than Darlene. He moves in with the Conners after Roseanne, herself a victim of child abuse, sees how abusive David's mother is to him. David and Darlene break up three times over the course of the show, each time for a longer period than the last, but they always end up back together. Eventually, David impregnates Darlene, leading the two to decide to get married. In the series finale, Roseanne reveals that David had actually been dating Becky in "real life," and that Roseanne simply wrote his relationship as being with Darlene because she felt it made more sense.
Played by Glenn Quinn. Mark dates and later elopes with Becky, much to the Conners' consternation. Despite Mark's tough-guy image and rebel persona, he is rarely seen to engage in criminal activity. Roseanne initially has a strong dislike for Mark because of his condescending attitude toward her. Dan initially dislikes Mark as well; Mark's choice to ride a British Triumph motorcycle rather than an American Harley-Davidson causes particular tension. However, Dan soon comes to respect Mark's work ethic and hires him as a mechanic both at his bike shop and at his truck-inspection office. Mark's personality changes drastically over the course of his time on the series: he starts off as a rebellious delinquent but ultimately proves himself to be a caring and responsible (although dull-witted) husband and brother. Roseanne and the rest of the family eventually grow fond of Mark, though they still get amusement out of insulting him. He has a younger brother, David, who dates (and later marries) Darlene. He also has two much younger sisters, Lisa and Nikki, who appear briefly in the season 5 episode "No Place Like Home for the Holidays." In the series finale, Roseanne reveals that Mark had actually been dating Darlene in "real life," and that Roseanne simply wrote his relationship as being with Becky because she felt it made more sense.
Played by Sandra Bernhard. Nancy is part owner of the Lanford Lunch Box. She is married to Arnie but comes out as a lesbian after he leaves her, later admitting to being bisexual. She is frequently seen dating women; her first girlfriend, Marla, is played by Morgan Fairchild. Nancy is never ashamed of her promiscuity, nor does she ever show any self-consciousness about her unusual behavior. In fact, she is one of the most self-confident characters in the series, often even more so than Roseanne. Her tendency toward self-absorption seems to be quelled only while dating a woman or being around Jackie. Nancy turns out to be a loyal and good friend to both Roseanne and Jackie throughout the series, although she doesn't hesitate to reprimand them for their selfishness and cruelty when they treat their mother, Bev, so harshly that Bev ends up crying about it for days afterward, with Nancy supporting her.
Played by Martin Mull. Leon, originally Roseanne's boss at Rodbell's Luncheonette and later her business partner in the Lanford Lunch Box, is portrayed as a foil to Roseanne; although they have a contentious relationship at times, it is apparent that Leon is still a good friend of Roseanne's, eventually becoming an extended family member of sorts. A gay man, he is occasionally seen dating men and having romantic troubles; he later marries his life partner Scott (Fred Willard) in a very public ceremony. In the series finale, they visit the Conner residence bearing gifts for Darlene and David's new baby, Harris, proclaiming themselves the infant's Aunt Scott and Aunt Leon. Near the end of the episode, they announce their plans to adopt a little girl. Leon's role in the series expands significantly in the eighth and ninth seasons, as he is featured in more episodes. He is especially upset when Roseanne wins the lottery, but Roseanne and Jackie end up relinquishing control of the restaurant to Leon and Nancy. Leon is a Republican and holds George Bush in high regard. He is also shown throughout the series' run to be a fan of Liza Minnelli, as well as of Broadway musicals.
Played by Natalie West. Crystal is the neurotic but kind and good-hearted childhood friend of Roseanne and Jackie who later becomes Dan's stepmother, despite being roughly Dan's age and half Ed's age. A self-described "doormat" when it comes to men, she speaks with a Southern accent despite having grown up in Lanford, because her father was from Arkansas. Crystal refers to most of her young life as a great tragedy, having been kicked out by her mother at age sixteen and forced to live with an aunt. She first married in May of the year she graduated from high school (at age eighteen), but her first husband, Sonny, died just a year later after falling into the pillar of a new bridge as the cement was being poured (and so remained part of the bridge). She soon found herself married to another man, whom she later divorced. Crystal grieved Sonny's death for several years and couldn't look past it. Crystal also bore a son through her marriage to Sonny, named Lonnie. Crystal worked at the Wellman Plastics factory with Roseanne and Jackie and quit in order to start a successful cosmetics sales career. Having married twice before, Crystal then marries Dan's father, Ed (against Dan's strong disapproval), and bears two children with him, Little Ed and Angela. Their marriage is perceived to be happy, with Ed's absences creating most of the conflict. A regular cast member for the first four seasons, Crystal appears as an occasional recurring character afterward.
Played by Ned Beatty. Ed is Dan's father, a charming traveling salesman who always brings presents for the grandchildren. Dan has a troubled past with his father, but Ed is well liked by everyone else. Despite the strained relationship with his son, Ed is well-meaning and likable. Ed never makes any blatant attempt to anger Dan on purpose, but he often baits his son covertly (e.g., when Ed begins to date Crystal and Dan questions him on it, Ed makes a rude crack about Dan being "interested" in Crystal). It usually doesn't take long for Dan to become annoyed by his father's presence. Ed is portrayed as irresponsible and neglectful of his first family, though it is later revealed that he justified his handling of Dan's upbringing: because he was often away on sales trips during Dan's adolescence, he reconciled his frequent commitment of Dan's mother to a mental institution with his desire to give Dan at least one stable parent. He wants to learn from his actions and be better. He does love his son despite their troubled relationship, and also loves the rest of his family. He marries and has two children with Crystal, even though she is considerably younger than he is.
Played by Cole and Morgan Roberts. Born in 1995, Jerry is the family's baby, born to Roseanne and Dan when they are in their forties. His name is a homage to Jerry Garcia of the Grateful Dead. Roseanne's pregnancy in the seventh season was written to coincide with her real-life pregnancy, made possible via fertility drugs (which Roseanne explains at the end of an episode in which she does not appear because of her pregnancy). As Roseanne explains at the end of the season 8 Halloween episode, her character became pregnant three months before the actress did. Roseanne's labor was to occur during a Grateful Dead concert; however, because of Jerry Garcia's death in 1995, that was changed. Before Jerry's birth, it is understood that Roseanne's baby will be a girl. During the end credits following the birth episode, Roseanne explains to the audience that she wanted the baby's gender to be the same as her real-life child's.
Played by Michael O'Keefe. Fred is a mechanic who works at the garage with Dan and is introduced to Jackie, leading to a one-night stand, an accidental pregnancy, and a subsequent marriage. It takes a lot of encouragement for Jackie to warm up and accept that Fred is the father of her child and therefore a part of their life. Fred at first assumes that Jackie is just putting him off lightly and initially doesn't understand that Jackie has been through a very traumatic experience, in addition to her dysfunctional childhood. Fred, being very conventional, is shocked by the Conners' and Harrises' unconventional ways and cannot understand the inner workings of Jackie's family, which contributes to the dissolution of the marriage. Despite their attempts to make it work, Jackie and Fred break up when they discover how incompatible they are.
Andy is Jackie and Fred's son, born in 1994 in the episode "Labor Day." Jackie breastfeeds Andy at the altar while marrying Fred ("Altar Egos"), and Andy is the ring bearer at Darlene and David's wedding ("The Wedding"). Jackie likes to dress him up in outfits Fred deems feminine. DJ makes it a point to spend time with Andy because Andy doesn't have any siblings. Andy and Jerry are like brothers and take naps together in the same crib, just as their mothers, Jackie and Roseanne, did.
Played by Estelle Parsons. Beverly is the mother of Roseanne and Jackie, the wife of Al, and the daughter of Nana Mary; she has a half-sister named Sonya. She is overbearing and shrill and is avoided by all members of her family. She nags them in her shockingly whiny voice, often with good intentions but coming off the wrong way. The family (especially Jackie) tries to avoid spending any time with Beverly, as she is quick to inadvertently criticize how people live their lives. After the two go back and forth playing tricks on each other in one Halloween episode, Roseanne ultimately gets the upper hand by having a fake phone conversation with Bev in front of Dan in which she agrees to let her mother stay for three weeks. Beverly is very traditional and conservative, as opposed to her daughters' more liberal and feminist philosophies. She proves herself generous with the wealth she receives from her husband's alimony, constantly giving financial gifts to the family to bail them out. She even provides the seed money for the Lanford Lunch Box and insists on staying on as a partner, but is later forced out because Roseanne and Jackie do not want to work with her. As revenge, she sells her share of the restaurant to Roseanne's ex-boss and rival, Leon Carp, effectively making him their new partner. During the show's final season, she comes out as a lesbian (one of Roseanne's fictional twists on her family, along with winning the lottery). In the finale, Roseanne states that her mother is not a lesbian but that her sister is; she just thought it would be interesting to put a radical twist on the character of her mother, who had lived her life according to her husband's rules, and because she wished her mother had a better sense of herself as a woman. Bev's relationship with her own mother is very similar to the one her daughters have with her.
Played by Shelley Winters. Nana Mary is Beverly's mother and the grandmother of Roseanne and Jackie; she first appears in season three at a family barbecue. She has another daughter named Sonya. She makes several appearances from season three onward, mostly during family occasions. She is a brash but caring, free-spirited, outspoken, and lovable retiree who gambles with her grandchildren and great-grandchildren. Unlike Bev, she is popular with the family. She also disagrees with her daughter in certain situations, such as siding with Jackie when Bev tries to get her to marry Fred, and revealing that she had two abortions, upsetting Bev, who is against the idea. She outsmarts and drives Bev crazy, much to the amusement of Roseanne and Jackie, who usually endure the same torment from their own mother. Her character often provides comic relief for the family, as well as offering a counterpoint between Roseanne and Jackie's relationship with their mother and Bev's relationship with Nana Mary. She was promiscuous in her younger days and claims to have dated Pablo Picasso and Louis Armstrong. Mary had Bev with another man before marrying her now-deceased husband, Marvin. She tells Bev she was very young when Bev was born and doesn't know who the father is, avoiding the subject whenever Bev brings it up. She is a big fan of a local radio call-in show that revolves around sex, and states, "If I don't call, they worry." Nana Mary's last appearance is in mid-season nine, in which she finally has a heart-to-heart with her daughter, closing the story on their relationship. Despite her absence, Mary goes on to appear at other family occasions, including the birth of her great-great-granddaughter.
Played by Tom Arnold. Arnie is the overweight, hot-tempered but jovial friend of Dan. Originally, he was written as a relative stranger to both Roseanne and Jackie, although this was later retconned and he was subsequently written as having gone to high school with them. He frequently cheats on the women he dates and is very ill-mannered. However, Arnie always tries to be a good friend to Dan. He marries Nancy but leaves her, claiming to have been abducted by aliens (later played upon in the fourth-season finale's end-credit sketch, where he is seen conversing with aliens on a spaceship). Before he and Nancy are engaged, he has a one-night stand with a drunk Jackie. Arnie is often seen wearing a yellow University of Iowa sweatshirt (Tom Arnold attended the University of Iowa in real life). He is last seen in season 5, where he unsuccessfully tries to win Nancy back, even after finding out that she is a lesbian. Tom Arnold appears in the ending credits of a later episode, but playing Jackie Thomas (his role on The Jackie Thomas Show), and none of the characters seem to recognize him.
All characters below appear in the first season and part of the second season before being written out altogether. Clooney would make one more appearance in a season four episode.
All characters below appear in the show's second season only, when Roseanne takes a job as a receptionist and shampoo girl at Art's Beauty Salon.
|
how did the sons of liberty impact the american revolution | Sons of Liberty - wikipedia
Prominent members pictured in the article include Paul Revere, James Swan, Alexander McDougall, Benjamin Rush, Charles Thomson, Joseph Warren, Marinus Willett, Oliver Wolcott, Christopher Gadsden, and Haym Salomon.
The Sons of Liberty was a secret organization that was created in the Thirteen American Colonies to advance the rights of the European colonists and to fight taxation by the British government. It played a major role in most colonies in battling the Stamp Act in 1765. The group officially disbanded after the Stamp Act was repealed. However, the name was applied to other local separatist groups during the years preceding the American Revolution.
In popular thought, the Sons of Liberty was a formal underground organization with recognized members and leaders. More likely, the name was an underground term for any men resisting new Crown taxes and laws. The well-known label allowed organizers to issue anonymous summonses to a Liberty Tree, "Liberty Pole," or other public meeting place. Furthermore, a unifying name helped to promote inter-colonial efforts against the actions of Parliament and the Crown. Their motto became "No taxation without representation."
In 1765, the British government needed money to pay for the 10,000 officers and soldiers stationed in the colonies, and intended that the colonists living there should contribute. The British passed a series of taxes aimed at the colonists, and many of the colonists refused to pay certain taxes; they argued that they should not be held accountable for taxes which were decided upon without any form of their consent through a representative. This became commonly known as "No Taxation without Representation." Parliament insisted on its right to rule the colonies despite the fact that the colonists had no representative in Parliament. The most incendiary tax was the Stamp Act of 1765, which caused a firestorm of opposition through legislative resolutions (starting in the colony of Virginia), public demonstrations, threats, and occasional violence.
The organization spread month by month after independent starts in several different colonies. In August 1765, the group was founded in Boston, Massachusetts. By November 6, a committee was set up in New York to correspond with other colonies. In December, an alliance was formed between groups in New York and Connecticut. January bore witness to a correspondence link between Boston and New York City, and by March, Providence had initiated connections with New York, New Hampshire, and Newport, Rhode Island. March also marked the emergence of Sons of Liberty organizations in New Jersey, Maryland, and Virginia.
In Boston, another example of the group's violence was its treatment of local stamp distributor Andrew Oliver. The Sons burned his effigy in the streets, and when he did not resign, they escalated to burning down his office building. Even after he resigned, they nearly destroyed the house of his close associate, Lieutenant Governor Thomas Hutchinson. It is believed that the Sons of Liberty did this to excite the lower classes and get them actively involved in rebelling against the authorities. Their actions made many stamp distributors resign in fear.
Early in the American Revolution, the former Sons of Liberty generally joined more formal groups, such as the Committee of Safety.
The Sons of Liberty popularized the use of tarring and feathering to punish and humiliate offending government officials, starting in 1767. This method was also used against British Loyalists during the American Revolution. The punishment had long been used by sailors to punish their mates.
In December 1773, a new group calling itself the Sons of Liberty issued and distributed a declaration in New York City called the "Association of the Sons of Liberty in New York," which formally stated that they were opposed to the Tea Act, that anyone who assisted in the execution of the act was "an enemy to the liberties of America," and that "whoever shall transgress any of these resolutions, we will not deal with, or employ, or have any connection with him."
After the end of the American Revolutionary War, Isaac Sears, Marinus Willett, and John Lamb revived the Sons of Liberty in New York City. In March 1784, they rallied an enormous crowd that called for the expulsion of any remaining Loyalists from the state starting May 1. The Sons of Liberty gained enough seats in the New York assembly elections of December 1784 to pass a set of punitive laws against Loyalists. In violation of the Treaty of Paris (1783), they called for the confiscation of the property of Loyalists. Alexander Hamilton defended the Loyalists, citing the supremacy of the treaty.
In 1767, the Sons of Liberty adopted a flag called the rebellious stripes flag, with nine vertical stripes, four white and five red. A flag having 13 horizontal red and white stripes was used by Commodore Esek Hopkins (Commander-in-Chief of the Continental Navy) and by American merchant ships during the war. This flag was also associated with the Sons of Liberty. Red and white were common colors of the flags, although other color combinations were used, such as green and white or yellow and white.
At various times, small secret organizations took the name "sons of liberty." They generally left very few records.
The name was also used during the American Civil War. By 1864, the Copperhead group the Knights of the Golden Circle had set up an offshoot called the Order of the Sons of Liberty. Both came under federal prosecution for treason in 1864, especially in Indiana.
A radical wing of the Zionist movement launched a boycott in the U.S. against British films in 1948, in response to British policies in Palestine. It called itself the "Sons of Liberty."
The patriotic spirit of the Sons of Liberty was invoked by Walt Disney Pictures in its 1957 film adaptation of Esther Forbes' novel Johnny Tremain. In the movie, the Sons of Liberty sing a rousing song titled "The Liberty Tree." The song elevates the Liberty Tree to a national icon in a manner similar to the way George M. Cohan's "You're a Grand Old Flag" revitalized respect for the American flag in the early twentieth century.
In the 1995 alternative history novel The Two Georges, the Sons of Liberty are depicted as a nativist terrorist organisation whose aim is to make the North American Union independent of the British Empire; their attempts include the theft of the titular portrait for a fifty-million-pound ransom and two assassination attempts on King-Emperor Charles III. The flag used by the Sons of Liberty and the Independence Party alike is a variation of the North American Jack and Stripes, with the Union Jack in the canton replaced by a bald eagle.
The Sons of Liberty are referred to in the 2001 video game Metal Gear Solid 2: Sons of Liberty. It refers to them in the title, and a group within the game calls itself the Sons of Liberty and models itself after them.
In 2015, a three-part miniseries of the same name aired on the History Channel.
The Sons of Liberty are referred to in the 2015 Broadway show Hamilton. In the song "Yorktown (The World Turned Upside Down)," the character Hercules Mulligan sings, "I am runnin' with the Sons of Liberty and I am lovin' it."
In an episode of the Claymation children's TV show Gumby entitled "Son of Liberty" (first aired in 1966), Gumby becomes a member of the Sons of Liberty.
|
who is entitled to sit in the house of lords | House of Lords - wikipedia
Composition: Lords Spiritual, and Lords Temporal (HM Government, confidence and supply, HM Most Loyal Opposition, Crossbench, and other groups).
The House of Lords of the United Kingdom, also known as the House of Peers, is the upper house of the Parliament of the United Kingdom. Like the House of Commons, it meets in the Palace of Westminster. Officially, the full name of the house is: The Right Honourable the Lords Spiritual and Temporal of the United Kingdom of Great Britain and Northern Ireland in Parliament assembled.
Unlike the elected House of Commons, all members of the House of Lords (excluding 90 hereditary peers elected among themselves and two peers who are ex officio members) are appointed. The membership of the House of Lords is drawn from the peerage and is made up of Lords Spiritual and Lords Temporal. The Lords Spiritual are 26 bishops in the established Church of England. Of the Lords Temporal, the majority are life peers who are appointed by the monarch on the advice of the Prime Minister, or on the advice of the House of Lords Appointments Commission. However, they also include some hereditary peers including four dukes. Membership was once an entitlement of all hereditary peers, other than those in the peerage of Ireland, but under the House of Lords Act 1999, the right to membership was restricted to 92 hereditary peers. Very few of these are female since most hereditary peerages can only be inherited by men.
While the House of Commons has a defined 650 - seat membership, the number of members in the House of Lords is not fixed. There are currently 798 sitting Lords. The House of Lords is the only upper house of any bicameral parliament to be larger than its respective lower house.
The House of Lords scrutinises bills that have been approved by the House of Commons. It regularly reviews and amends Bills from the Commons. While it is unable to prevent Bills passing into law, except in certain limited circumstances, it can delay Bills and force the Commons to reconsider their decisions. In this capacity, the House of Lords acts as a check on the House of Commons that is independent from the electoral process. Bills can be introduced into either the House of Lords or the House of Commons. While members of the Lords may also take on roles as government ministers, high - ranking officials such as cabinet ministers are usually drawn from the Commons. The House of Lords has its own support services, separate from the Commons, including the House of Lords Library.
The Queen's Speech is delivered in the House of Lords during the State Opening of Parliament. In addition to its role as the upper house, until the establishment of the Supreme Court in 2009 the House of Lords, through the Law Lords, acted as the final court of appeal in the British judicial system. The House also has a Church of England role, in that Church Measures must be tabled within the House by the Lords Spiritual.
Today's Parliament of the United Kingdom largely descends, in practice, from the Parliament of England, though the Treaty of Union of 1706 and the Acts of Union that ratified the Treaty in 1707 created a new Parliament of Great Britain to replace the Parliament of England and the Parliament of Scotland. This new parliament was, in effect, the continuation of the Parliament of England with the addition of 45 MPs and 16 Peers to represent Scotland.
The Parliament of England developed from the Magnum Concilium, the "Great Council" that advised the King during medieval times. This royal council came to be composed of ecclesiastics, noblemen, and representatives of the counties of England and Wales (and afterwards, representatives of the boroughs as well). The first English Parliament is often considered to be the "Model Parliament" (held in 1295), which included archbishops, bishops, abbots, earls, barons, and representatives of the shires and boroughs.
The power of Parliament grew slowly, fluctuating as the strength of the monarchy grew or declined. For example, during much of the reign of Edward II (1307–1327), the nobility was supreme, the Crown weak, and the shire and borough representatives entirely powerless. In 1569, the authority of Parliament was for the first time recognised not simply by custom or royal charter, but by an authoritative statute, passed by Parliament itself.
Further developments occurred during the reign of Edward II's successor, Edward III. It was during this King's reign that Parliament clearly separated into two distinct chambers: the House of Commons (consisting of the shire and borough representatives) and the House of Lords (consisting of the bishops, abbots, and peers). The authority of Parliament continued to grow, and during the early fifteenth century both Houses exercised powers to an extent not seen before. The Lords were far more powerful than the Commons because of the great influence of the landowners and the prelates of the realm.
The power of the nobility suffered a decline during the civil wars of the late fifteenth century, known as the Wars of the Roses. Much of the nobility was killed on the battlefield or executed for participation in the war, and many aristocratic estates were lost to the Crown. Moreover, feudalism was dying, and the feudal armies controlled by the barons became obsolete. Henry VII (1485–1509) clearly established the supremacy of the monarch, symbolised by the "Crown Imperial". The domination of the Sovereign continued to grow during the reigns of the Tudor monarchs in the 16th century. The Crown was at the height of its power during the reign of Henry VIII (1509–1547).
The House of Lords remained more powerful than the House of Commons, but the Lower House continued to grow in influence, reaching a zenith in relation to the House of Lords during the middle 17th century. Conflicts between the King and the Parliament (for the most part, the House of Commons) ultimately led to the English Civil War during the 1640s. In 1649, after the defeat and execution of King Charles I, the Commonwealth of England was declared, but the nation was effectively under the overall control of Oliver Cromwell, Lord Protector of England, Scotland and Ireland.
The House of Lords was reduced to a largely powerless body, with Cromwell and his supporters in the Commons dominating the Government. On 19 March 1649, the House of Lords was abolished by an Act of Parliament, which declared that "The Commons of England [find] by too long experience that the House of Lords is useless and dangerous to the people of England." The House of Lords did not assemble again until the Convention Parliament met in 1660 and the monarchy was restored. It returned to its former position as the more powerful chamber of Parliament, a position it would occupy until the 19th century.
The 19th century was marked by several changes to the House of Lords. The House, once a body of only about 50 members, had been greatly enlarged by the liberality of George III and his successors in creating peerages. The individual influence of a Lord of Parliament was thus diminished.
Moreover, the power of the House as a whole experienced a decrease, whilst that of the House of Commons grew. Particularly notable in the development of the Lower House's superiority was the Reform Bill of 1832. The electoral system of the House of Commons was not, at the time, democratic: property qualifications greatly restricted the size of the electorate, and the boundaries of many constituencies had not been changed for centuries.
Entire cities such as Manchester had no representation at all in the House of Commons, but the 11 voters of Old Sarum retained their ancient right to elect two members of parliament. A small borough was susceptible to bribery, and was often under the control of a patron, whose nominee was guaranteed to win an election. Some aristocrats were patrons of numerous "pocket boroughs", and therefore controlled a considerable part of the membership of the House of Commons.
When the House of Commons passed a Reform Bill to correct some of these anomalies in 1831, the House of Lords rejected the proposal. The popular cause of reform, however, was not abandoned by the ministry, despite a second rejection of the bill in 1832. Prime Minister Earl Grey advised the King to overwhelm opposition to the bill in the House of Lords by creating about 80 new pro-Reform peers. William IV originally balked at the proposal, which effectively threatened the opposition of the House of Lords, but at length relented.
Before the new peers were created, however, the Lords who opposed the bill admitted defeat and abstained from the vote, allowing the passage of the bill. The crisis damaged the political influence of the House of Lords, but did not altogether end it. A vital reform was effected by the House itself in 1868, when it changed its standing orders to prevent peers from voting without taking the trouble to attend; proxy voting was abolished. Over the course of the century the power of the Upper House experienced further erosion, and the Commons gradually became the stronger House of Parliament.
The status of the House of Lords returned to the forefront of debate after the election of a Liberal Government in 1906. In 1909, the Chancellor of the Exchequer, David Lloyd George, introduced into the House of Commons the "People's Budget", which proposed a land tax targeting wealthy landowners. The popular measure, however, was defeated in the heavily Conservative House of Lords.
Having made the powers of the House of Lords a primary campaign issue, the Liberals were narrowly re-elected in January 1910. Prime Minister H.H. Asquith then proposed that the powers of the House of Lords be severely curtailed. After a further general election in December 1910, and with an undertaking by King George V to create sufficient new Liberal peers to overcome the Lords' opposition to the measure if necessary, the Asquith Government secured the passage of a bill to curtail the powers of the House of Lords.
The Parliament Act 1911 effectively abolished the power of the House of Lords to reject legislation, or to amend in a way unacceptable to the House of Commons: most bills could be delayed for no more than three parliamentary sessions or two calendar years. It was not meant to be a permanent solution; more comprehensive reforms were planned. Neither party, however, pursued the matter with much enthusiasm, and the House of Lords remained primarily hereditary. In 1949, the Parliament Act reduced the delaying power of the House of Lords further to two sessions or one year.
In 1958, the predominantly hereditary nature of the House of Lords was changed by the Life Peerages Act 1958, which authorised the creation of life baronies, with no numerical limits. The number of Life Peers then gradually increased, though not at a constant rate.
For most of the twentieth century the Labour Party had a commitment, based on the party's historic opposition to class privilege, to abolish the House of Lords, or at least to expel its hereditary element. In 1968, the Labour Government of Harold Wilson attempted to reform the House of Lords by introducing a system under which hereditary peers would be allowed to remain in the House and take part in debate, but would be unable to vote. This plan, however, was defeated in the House of Commons by a coalition of traditionalist Conservatives (such as Enoch Powell) and Labour members who continued to advocate the outright abolition of the Upper House (such as Michael Foot).
When Michael Foot attained the leadership of the Labour Party in 1980, abolition of the House of Lords became a part of the party's agenda; under his successor, Neil Kinnock, however, a reformed Upper House was proposed instead. In the meantime, the creation of hereditary peerages (except for members of the Royal Family) had largely ceased, with the exception of three creations during the administration of the Conservative Margaret Thatcher in the 1980s.
Whilst some hereditary peers were at best apathetic, the Labour Party's clear commitments were not lost on Baron Sudeley, who for decades was considered an expert on the House of Lords. In December 1979 the Conservative Monday Club published his extensive paper entitled Lords Reform – Why tamper with the House of Lords?, and in July 1980 The Monarchist carried another article by Lord Sudeley entitled Why Reform or Abolish the House of Lords?. In 1990 he wrote a further booklet for the Monday Club entitled The Preservation of the House of Lords.
The Labour Party included in its 1997 general election Manifesto a commitment to remove the hereditary peerage from the House of Lords. Their subsequent election victory in 1997 under Tony Blair finally heralded the demise of the traditional House of Lords. The Labour Government introduced legislation to expel all hereditary peers from the Upper House as a first step in Lords reform. As a part of a compromise, however, it agreed to permit 92 hereditary peers to remain until the reforms were complete. Thus all but 92 hereditary peers were expelled under the House of Lords Act 1999 (see below for its provisions), making the House of Lords predominantly an appointed house.
Since 1999, however, no further reform has taken place. The Wakeham Commission proposed introducing a 20% elected element to the Lords, but this plan was widely criticised. A Joint Committee was established in 2001 to resolve the issue, but it reached no conclusion and instead gave Parliament seven options to choose from (fully appointed, 20% elected, 40% elected, 50% elected, 60% elected, 80% elected, and fully elected). In a confusing series of votes in February 2003, all of these options were defeated, although the 80% elected option fell by just three votes in the Commons. Socialist MPs favouring outright abolition voted against all the options.
In 2005, a cross-party group of senior MPs (Kenneth Clarke, Paul Tyler, Tony Wright, Sir George Young and Robin Cook) published a report proposing that 70% of members of the House of Lords should be elected, each member for a single long term, by the single transferable vote system. Most of the remainder were to be appointed by a Commission to ensure a mix of "skills, knowledge and experience". This proposal was not implemented either. A cross-party campaign initiative called "Elect the Lords" was set up to make the case for a predominantly elected Second Chamber in the run-up to the 2005 general election.
At the 2005 election, the Labour Party proposed further reform of the Lords, but without specific details. The Conservative Party, which had prior to 1997 opposed any tampering with the House of Lords, favoured an 80% elected Second Chamber, while the Liberal Democrats called for a fully elected Senate. During 2006, a cross-party committee discussed Lords reform, with the aim of reaching a consensus; its findings were published in early 2007.
On 7 March 2007, members of the House of Commons voted ten times on a variety of alternative compositions for the upper chamber. Outright abolition, a wholly appointed house, a 20% elected house, a 40% elected house, a 50% elected house and a 60% elected house were all defeated in turn. Finally the vote for an 80% elected chamber was won by 305 votes to 267, and the vote for a wholly elected chamber was won by an even greater margin: 337 to 224. Significantly, this last vote represented an overall majority of MPs.
Furthermore, examination of the names of MPs voting at each division shows that, of the 305 who voted for the 80% elected option, 211 went on to vote for the 100% elected option. Given that this vote took place after the vote on the 80% option, whose result was already known, this showed a clear preference for a fully elected upper house among those who voted for the only other option that passed. But this was nevertheless only an indicative vote, and many political and legislative hurdles remained to be overcome for supporters of an elected second chamber. The House of Lords, soon after, rejected this proposal and voted for an entirely appointed House of Lords.
In July 2008, Jack Straw, the Secretary of State for Justice and Lord Chancellor, introduced a white paper to the House of Commons proposing to replace the House of Lords with an 80–100% elected chamber, with one third being elected at each general election for a term of approximately 12–15 years. The white paper states that, as the peerage would be totally separated from membership of the upper house, the name "House of Lords" would no longer be appropriate. It goes on to explain that there is cross-party consensus for the new chamber to be titled the "Senate of the United Kingdom"; however, to ensure the debate remains on the role of the upper house rather than its title, the white paper is neutral on the title of the new house.
On 30 November 2009, a Code of Conduct for Members of the House of Lords was agreed by the House; certain amendments were agreed on 30 March 2010 and on 12 June 2014. The scandal over expenses in the Commons had been at its height only six months before, and the Labour leadership under Janet Royall determined that something sympathetic should be done.
In her article "Is the House of Lords already reformed?", Meg Russell states three essential features of a legitimate House of Lords. The first is that it must have adequate powers over legislation to make the government think twice before making a decision. The House of Lords, she argues, currently has enough power to make it relevant: during Tony Blair's first year, he was defeated thirty-eight times in the Lords. Secondly, as to the composition of the Lords, Russell suggests that it must be distinct from the Commons, otherwise it would render the Lords useless. The third feature is the perceived legitimacy of the Lords. She writes, "In general legitimacy comes with election."
If the Lords have a distinct and elected composition, this would probably come about through fixed term proportional representation. If this happens, then the perceived legitimacy of the Lords could arguably outweigh the legitimacy of the Commons. This would especially be the case if the House of Lords had been elected more recently than the House of Commons as it could be said to reflect the will of the people better than the Commons.
In this scenario, there may well come a time when the Lords twice reject a Bill from the Commons and it is forced through. This would in turn trigger questions about the amount of power the Lords should have, and there would be pressure for it to increase. This hypothetical process is known as the "circumnavigation of power theory". It implies that it would never be in any government's interest to legitimise the Lords, as they would be forfeiting their own power.
The Conservative–Liberal Democrat coalition agreed, following the 2010 general election, to clearly outline a provision for a wholly or mainly elected second chamber, elected by a proportional representation system. These proposals sparked a debate on 29 June 2010. As an interim measure, appointment of new peers would reflect the shares of the vote secured by the political parties in the last general election.
Detailed proposals for Lords reform, including a draft House of Lords Reform Bill, were published on 17 May 2011. These included a 300-member hybrid house, of which 80% would be elected and a further 20% appointed, with reserved places for some Church of England bishops. Under the proposals, members would also serve single non-renewable terms of 15 years. Former MPs would be allowed to stand for election to the Upper House, but members of the Upper House would not be immediately allowed to become MPs.
The detailed proposals were considered by a Joint Committee on House of Lords Reform made up of both MPs and Peers, which issued its final report on 23 April 2012.
Deputy Prime Minister Nick Clegg introduced the House of Lords Reform Bill 2012 on 27 June 2012 which built on proposals published on 17 May 2011. However, this Bill was abandoned by the Government on 6 August 2012 following opposition from within the Conservative Party.
A private member's bill to introduce some reforms was introduced by Dan Byles in 2013 and, as the House of Lords Reform Act 2014, received Royal Assent in 2014.
The House of Lords (Expulsion and Suspension) Act 2015 authorised the House to expel or suspend members.
The Lords Spiritual (Women) Act 2015 makes provision to preferentially admit bishops of the Church of England who are women to the Lords Spiritual in the 10 years following its commencement.
In 2015, Rachel Treweek, Bishop of Gloucester, became the first woman to sit as a Lord Spiritual in the House of Lords.
The size of the House of Lords has varied greatly throughout its history. From about 50 members in the early 1700s, it increased to a record size of 1,330 in October 1999, before Lords reform reduced it to 669 by March 2000.
In April 2011, a cross-party group of former leading politicians, including many senior members of the House of Lords, called on the Prime Minister David Cameron to stop creating new peers. He had created 117 new peers since becoming prime minister in May 2010, a faster rate of elevation than any PM in British history. The expansion occurred while his government had tried (in vain) to reduce the size of the House of Commons by 50 members, from 650 to 600.
In August 2014, despite there being a seating capacity of only around 230 to 400 on the benches in the Lords chamber, the House had 774 active members (plus 54 who were not entitled to attend or vote, having been suspended or granted leave of absence). This made the House of Lords the largest parliamentary chamber in any democracy. In August 2014, former Speaker of the House of Commons Baroness Boothroyd requested that "older peers should retire gracefully '' to ease the overcrowding in the House of Lords. She also criticised successive prime ministers for filling the second chamber with "lobby fodder '' in an attempt to help their policies become law. She made her remarks days before a new batch of peers were due to be created.
In August 2015, following the creation of a further 45 peers in the Dissolution Honours, the total number of eligible members of the Lords increased to 826. In a report entitled Does size matter? the BBC said: "Increasingly, yes. Critics argue the House of Lords is the second largest legislature after the Chinese National People 's Congress and dwarfs Upper Houses in other bi-cameral democracies such as the United States (100 senators), France (348 senators), Australia (76 senators) and India (250 members). The Lords is also larger than the Supreme People 's Assembly of North Korea (687 members). (...) Peers grumble that there is not enough room to accommodate all of their colleagues in the Chamber, where there are only about 400 seats, and say they are constantly jostling for space -- particularly during high - profile sittings '', but added, "On the other hand, defenders of the Lords say that it does a vital job scrutinising legislation, a lot of which has come its way from the Commons in recent years ''.
The House of Lords does not control the term of the Prime Minister or of the Government. Only the Lower House may force the Prime Minister to resign or call elections by passing a motion of no - confidence or by withdrawing supply. Thus, the House of Lords ' oversight of the government is limited.
Most Cabinet ministers are from the House of Commons rather than the House of Lords. In particular, all Prime Ministers since 1902 have been members of the Lower House. (Alec Douglas - Home, who became Prime Minister in 1963 whilst still an Earl, disclaimed his peerage and was elected to the Commons soon after his term began.) In recent history, it has been very rare for major cabinet positions (except Lord Chancellor and Leader of the House of Lords) to have been filled by peers.
Exceptions include Lord Carrington, who was the Foreign Secretary between 1979 and 1982, Lord Young of Graffham (Minister without Portfolio, then Secretary of State for Employment and then Secretary of State for Trade and Industry from 1984 to 1989), Lady Amos, who served as Secretary of State for International Development and Lord Mandelson, who served as First Secretary of State, Secretary of State for Business, Innovation and Skills and President of the Board of Trade. George Robertson was briefly a peer whilst serving as Secretary of State for Defence before resigning to take up the post of Secretary General of NATO. From 1999 to 2010 the Attorney General for England and Wales was a Member of the House of Lords; the most recent was Baroness Scotland of Asthal.
The House of Lords remains a source for junior ministers and members of government. Like the House of Commons, the Lords also has a Government Chief Whip as well as several Junior Whips. Where a government department is not represented by a minister in the Lords or one is not available, government whips will act as spokesmen for them.
Legislation, with the exception of money bills, may be introduced in either House.
The House of Lords debates legislation, and has power to amend or reject bills. However, the power of the Lords to reject a bill passed by the House of Commons is severely restricted by the Parliament Acts. Under those Acts, certain types of bills may be presented for the Royal Assent without the consent of the House of Lords (i.e. the Commons can override the Lords ' veto). The House of Lords can not delay a money bill (a bill that, in the view of the Speaker of the House of Commons, solely concerns national taxation or public funds) for more than one month.
Other public bills can not be delayed by the House of Lords for more than two parliamentary sessions, or one calendar year. These provisions, however, only apply to public bills that originate in the House of Commons, and can not have the effect of extending a parliamentary term beyond five years. A further restriction is a constitutional convention known as the Salisbury Convention, which means that the House of Lords does not oppose legislation promised in the Government 's election manifesto.
By a custom that prevailed even before the Parliament Acts, the House of Lords is further restrained insofar as financial bills are concerned. The House of Lords may neither originate a bill concerning taxation or Supply (supply of treasury or exchequer funds), nor amend a bill so as to insert a taxation or Supply - related provision. (The House of Commons, however, often waives its privileges and allows the Upper House to make amendments with financial implications.) Moreover, the Upper House may not amend any Supply Bill. The House of Lords formerly maintained the absolute power to reject a bill relating to revenue or Supply, but this power was curtailed by the Parliament Acts, as aforementioned.
Historically, the House of Lords held several judicial functions. Most notably, until 2009 the House of Lords served as the court of last resort for most instances of UK law. Since 1 October 2009 this role has been held by the Supreme Court of the United Kingdom.
The Lords ' judicial functions originated from the ancient role of the Curia Regis as a body that addressed the petitions of the King 's subjects. The functions were exercised not by the whole House, but by a committee of "Law Lords ''. The bulk of the House 's judicial business was conducted by the twelve Lords of Appeal in Ordinary, who were specifically appointed for this purpose under the Appellate Jurisdiction Act 1876.
The judicial functions could also be exercised by Lords of Appeal (other members of the House who happened to have held high judicial office). No Lord of Appeal in Ordinary or Lord of Appeal could sit judicially beyond the age of seventy - five. The judicial business of the Lords was supervised by the Senior Lord of Appeal in Ordinary and their deputy, the Second Senior Lord of Appeal in Ordinary.
The jurisdiction of the House of Lords extended, in civil and in criminal cases, to appeals from the courts of England and Wales, and of Northern Ireland. From Scotland, appeals were possible only in civil cases; Scotland 's High Court of Justiciary is the highest court in criminal matters. The House of Lords was not the United Kingdom 's only court of last resort; in some cases, the Judicial Committee of the Privy Council performs such a function. The jurisdiction of the Privy Council in the United Kingdom, however, is relatively restricted; it encompasses appeals from ecclesiastical courts, disputes under the House of Commons Disqualification Act 1975, and a few other minor matters. Issues related to devolution were transferred from the Privy Council to the Supreme Court in 2009.
The twelve Law Lords did not all hear every case; rather, after World War II cases were heard by panels known as Appellate Committees, each of which normally consisted of five members (selected by the Senior Lord). An Appellate Committee hearing an important case could consist of more than five members. Though Appellate Committees met in separate committee rooms, judgement was given in the Lords Chamber itself. No further appeal lay from the House of Lords, although the House of Lords could refer a "preliminary question '' to the European Court of Justice in cases involving an element of European Union law, and a case could be brought at the European Court of Human Rights if the House of Lords did not provide a satisfactory remedy in cases where the European Convention on Human Rights was relevant.
A distinct judicial function -- one in which the whole House used to participate -- is that of trying impeachments. Impeachments were brought by the House of Commons, and tried in the House of Lords; a conviction required only a majority of the Lords voting. Impeachments, however, are to all intents and purposes obsolete; the last impeachment was that of Henry Dundas, 1st Viscount Melville, in 1806.
Similarly, the House of Lords was once the court that tried peers charged with high treason or felony. The House would be presided over not by the Lord Chancellor, but by the Lord High Steward, an official especially appointed for the occasion of the trial. If Parliament was not in session, then peers could be tried in a separate court, known as the Lord High Steward's Court. Only peers, their wives, and their widows (unless remarried) were entitled to trials in the House of Lords or the Lord High Steward's Court; the Lords Spiritual were tried in Ecclesiastical Courts. In 1948, the right of peers to be tried in such special courts was abolished; now, they are tried in the regular courts. The last such trial in the House was of Edward Russell, 26th Baron de Clifford, in 1935. An illustrative dramatisation, set circa 1928, of the trial of a peer (the fictional Duke of Denver) on a charge of murder (a felony) is portrayed in the 1972 BBC Television adaptation of Dorothy L. Sayers' Lord Peter Wimsey mystery Clouds of Witness.
The Constitutional Reform Act 2005 resulted in the creation of a separate Supreme Court of the United Kingdom, to which the judicial function of the House of Lords, and some of the judicial functions of the Judicial Committee of the Privy Council, were transferred. In addition, the office of Lord Chancellor was reformed by the act, removing his ability to act as both a government minister and a judge. This was motivated in part by concerns about the historical admixture of legislative, judicial, and executive power. The new Supreme Court is located at Middlesex Guildhall.
Members of the House of Lords who sit by virtue of their ecclesiastical offices are known as Lords Spiritual. Formerly, the Lords Spiritual were the majority in the English House of Lords, comprising the church's archbishops, (diocesan) bishops, abbots, and those priors who were entitled to wear a mitre. After the English Reformation's highpoint in 1539, only the archbishops and bishops continued to attend, as the Dissolution of the Monasteries had just suppressed the positions of abbot and prior. In 1642, during the few sittings of the Lords convened during the period of civil war and interregnum, the Lords Spiritual were excluded altogether, but they returned under the Clergy Act 1661.
The number of Lords Spiritual was further restricted by the Bishopric of Manchester Act 1847, and by later acts. The Lords Spiritual can now number no more than 26; these are the Archbishop of Canterbury, the Archbishop of York, the Bishop of London, the Bishop of Durham, the Bishop of Winchester (who sit by right regardless of seniority) and the 21 longest - serving bishops from other dioceses in the Church of England (excluding the dioceses of Sodor and Man and Gibraltar in Europe, as these lie entirely outside the United Kingdom). Following a change to the law in 2014 to allow women to be ordained bishops, the Lords Spiritual (Women) Act 2015 was passed, which provides that whenever a vacancy arises among the Lords Spiritual during the ten years following the Act coming into force, the vacancy has to be filled by a woman, if one is eligible. This does not apply to the five bishops who sit by right.
The current Lords Spiritual represent only the Church of England. Bishops of the Church of Scotland traditionally sat in the Parliament of Scotland but were finally excluded in 1689 (after a number of previous exclusions) when the Church of Scotland became permanently presbyterian. There are no longer bishops in the Church of Scotland in the traditional sense of the word, and that Church has never sent members to sit in the Westminster House of Lords. The Church of Ireland did obtain representation in the House of Lords after the union of Ireland and Great Britain in 1801.
Of the Church of Ireland's ecclesiastics, four (one archbishop and three bishops) were to sit at any one time, with the members rotating at the end of every parliamentary session (which normally lasted approximately one year). The Church of Ireland, however, was disestablished in 1871, and thereafter ceased to be represented by Lords Spiritual. Bishops of Welsh sees in the Church of England originally sat in the House of Lords (after 1847, only if their seniority within the Church entitled them to a seat), but the Church in Wales ceased to be a part of the Church of England in 1920 and was simultaneously disestablished in Wales. Accordingly, bishops of the Church in Wales were no longer eligible to be appointed to the House as bishops of the Church of England, but those already appointed remained.
Other ecclesiastics have sat in the House of Lords as Lords Temporal in recent times: Chief Rabbi Immanuel Jakobovits was appointed to the House of Lords (with the consent of the Queen, who acted on the advice of Prime Minister Margaret Thatcher), as was his successor Chief Rabbi Jonathan Sacks. Baroness Neuberger is the Senior Rabbi to the West London Synagogue. In recognition of his work at reconciliation and in the peace process in Northern Ireland, the Archbishop of Armagh (the senior Anglican bishop in Northern Ireland), Lord Eames was appointed to the Lords by John Major. Other clergymen appointed include the Reverend Donald Soper, the Reverend Timothy Beaumont, and some Scottish clerics.
There have been no Roman Catholic clergymen appointed, though it was rumoured that Cardinal Basil Hume and his successor Cormac Murphy-O'Connor were offered peerages by James Callaghan and Margaret Thatcher, and by Tony Blair, respectively, but declined. Hume later accepted the Order of Merit, a personal appointment of the Queen, shortly before his death. Murphy-O'Connor said he had his maiden speech ready, but Roman Catholics who have received Holy Orders are prohibited by Canon Law from holding major offices connected with any government other than the Holy See.
Former Archbishops of Canterbury, who revert to the status of bishop but are no longer diocesans, are invariably given life peerages and sit as Lords Temporal.
By custom, at least one of the bishops reads prayers in each legislative day (a role taken by the chaplain in the Commons). They often speak in debates; in 2004 Rowan Williams, the Archbishop of Canterbury, opened a debate on sentencing legislation. Measures (proposed laws of the Church of England) must be put before the Lords, and the Lords Spiritual have a role in ensuring that this takes place.
Since the Dissolution of the Monasteries, the Lords Temporal have been the most numerous group in the House of Lords. Unlike the Lords Spiritual, they may be publicly partisan, aligning themselves with one or another of the political parties that dominate the House of Commons. Publicly non-partisan Lords are called crossbenchers. Originally, the Lords Temporal included several hundred hereditary peers (that is, those whose peerages may be inherited), who ranked variously as dukes, marquesses, earls, viscounts, and barons (as well as Scottish Lords of Parliament). Such hereditary dignities can be created by the Crown; in modern times this is done on the advice of the Prime Minister of the day (except in the case of members of the Royal Family).
Holders of Scottish and Irish peerages were not always permitted to sit in the Lords. When Scotland united with England to form Great Britain in 1707, it was provided that the Scottish hereditary peers would only be able to elect 16 representative peers to sit in the House of Lords; the term of a representative was to extend until the next general election. A similar provision was enacted when Ireland merged with Great Britain in 1801 to form the United Kingdom; the Irish peers were allowed to elect 28 representatives, who were to retain office for life. Elections for Irish representatives ended in 1922, when most of Ireland became an independent state; elections for Scottish representatives ended with the passage of the Peerage Act 1963, under which all Scottish peers obtained seats in the Upper House.
In 1999, the Labour government brought forward the House of Lords Act removing the right of several hundred hereditary peers to sit in the House. The Act provided, as a measure intended to be temporary, that 92 people would continue to sit in the Lords by virtue of hereditary peerages, and this is still in effect.
Of the 92, two remain in the House of Lords because they hold royal offices connected with Parliament: the Earl Marshal and the Lord Great Chamberlain. Of the remaining ninety peers sitting in the Lords by virtue of a hereditary peerage, 15 are elected by the whole House and 75 are chosen by fellow hereditary peers in the House of Lords, grouped by party. (If a hereditary peerage holder is given a life peerage, he or she becomes a member of the House of Lords without a need for a by - election.) The exclusion of other hereditary peers removed the Prince of Wales (who is also Earl of Chester) and all other Royal Peers, including the Duke of Edinburgh, Duke of York, Earl of Wessex, Duke of Gloucester and Duke of Kent.
The number of peers to be chosen by a political group reflects the proportion of hereditary peers that belonged to that group (see current composition below) in 1999. When an elected hereditary peer dies, a by - election is held, with a variant of the Alternative Vote system being used. If the recently deceased hereditary peer had been elected by the whole House, then so is his or her replacement; a hereditary peer elected by a specific political group (including the non-aligned crossbenchers) is replaced by a vote of the hereditary peers already elected to the Lords belonging to that political group (whether elected by that group or by the whole house).
Until 2009, the Lords Temporal also included the Lords of Appeal in Ordinary, a group of individuals appointed to the House of Lords so that they could exercise its judicial functions. Lords of Appeal in Ordinary, more commonly known as Law Lords, were first appointed under the Appellate Jurisdiction Act 1876. They were selected by the Prime Minister of the day, but were formally appointed by the Sovereign. A Lord of Appeal in Ordinary had to retire at the age of 70, or, if his or her term was extended by the government, at the age of 75; after reaching such an age, the Law Lord could not hear any further cases in the House of Lords.
The number of Lords of Appeal in Ordinary (excluding those who were no longer able to hear cases because of age restrictions) was limited to twelve, but could be changed by statutory instrument. By a convention of the House, Lords of Appeal in Ordinary did not take part in debates on new legislation, so as to maintain judicial independence. Lords of Appeal in Ordinary held their seats in the House of Lords for life, remaining as members even after reaching the judicial retirement age of 70 or 75. Former Lord Chancellors and holders of other high judicial office could also sit as Law Lords under the Appellate Jurisdiction Act, although in practice this right was only rarely exercised.
Under the Constitutional Reform Act 2005, the Lords of Appeal in Ordinary when the Act came into effect in 2009 became judges of the new Supreme Court of the United Kingdom and were then barred from sitting or voting in the House of Lords until they had retired as judges. One of the main justifications for the new Supreme Court was to establish a separation of powers between the judiciary and the legislature. It is therefore unlikely that future appointees to the Supreme Court of the United Kingdom will be made Lords of Appeal in Ordinary.
The largest group of Lords Temporal, and indeed of the whole House, are life peers. Life peerages rank only as barons or baronesses, and are created under the Life Peerages Act 1958. Like all other peers, life peers are created by the Sovereign, who acts on the advice of the Prime Minister or the House of Lords Appointments Commission. By convention, however, the Prime Minister allows leaders of other parties to nominate some life peers, so as to maintain a political balance in the House of Lords. Moreover, some non-party life peers (the number being determined by the Prime Minister) are nominated by the independent House of Lords Appointments Commission.
In 2000, the government announced it would set up an Independent Appointments Commission, under Lord Stevenson of Coddenham, to select fifteen so-called "people's peers" for life peerages. However, when the choices were announced in April 2001, from a list of 3,000 applicants, they were criticised in the media because all of those selected were distinguished in their field, and none were "ordinary people" as some had originally hoped.
Several different qualifications apply for membership of the House of Lords. No person may sit in the House of Lords if under the age of 21. Furthermore, only British, Irish and Commonwealth citizens may sit in the House of Lords. The nationality restrictions were previously more stringent: under the Act of Settlement 1701, and prior to the British Nationality Act 1948, only natural - born subjects qualified.
Additionally, some bankruptcy - related restrictions apply to members of the Upper House. A person may not sit in the House of Lords if he or she is the subject of a Bankruptcy Restrictions Order (applicable in England and Wales only), or if he or she is adjudged bankrupt (in Northern Ireland), or if his or her estate is sequestered (in Scotland). A final restriction bars an individual convicted of high treason from sitting in the House of Lords until completing his or her full term of imprisonment. An exception applies, however, if the individual convicted of high treason receives a full pardon. Note that an individual serving a prison sentence for an offence other than high treason is not automatically disqualified.
Women were excluded from the House of Lords until the Life Peerages Act 1958, passed to address the declining number of active members, made possible the creation of peerages for life. Women were immediately eligible and four were among the first life peers appointed. However, hereditary peeresses continued to be excluded until the passage of the Peerage Act 1963. Since the passage of the House of Lords Act 1999, hereditary peeresses remain eligible for election to the Upper House; there is one (Countess of Mar) among the 90 hereditary peers who continue to sit.
The Honours (Prevention of Abuses) Act 1925 made it illegal for a peerage, or other honour, to be bought or sold. Nonetheless, there have been repeated allegations that life peerages (and thus membership of the House of Lords) have been made available to major political donors in exchange for donations. The most prominent case, the 2006 Cash for Honours scandal, saw a police investigation, with no charges being brought. A 2015 study found that of 303 people nominated for peerages in the period 2005 -- 14, a total of 211 were former senior figures within politics (including former MPs), or were non-political appointments. Of the remaining 92 political appointments from outside public life, 27 had made significant donations to political parties. The authors concluded firstly that nominees from outside public life were much more likely to have made large gifts than peers nominated after prior political or public service. They also found that significant donors to parties were far more likely to be nominated for peerages than other party members.
Traditionally there was no mechanism by which members could resign or be removed from the House of Lords (compare the situation as regards resignation from the House of Commons). The Peerage Act 1963 permitted a person to disclaim their newly inherited peerage (within certain time limits); this meant that such a person could effectively renounce their membership of the Lords. This might be done in order to remain or become qualified to sit in the House of Commons, as in the case of Tony Benn (formerly the second Viscount Stansgate), who had campaigned for such a change.
The House of Lords Reform Act 2014 made provision for members' resignation from the House, removal for non-attendance, and automatic expulsion upon conviction for a serious criminal offence (if resulting in a jail sentence of at least one year). Since June 2015, under the House of Lords (Expulsion and Suspension) Act 2015, the House's Standing Orders may provide for the expulsion or suspension of a member upon a resolution of the House.
Traditionally the House of Lords did not elect its own speaker, unlike the House of Commons; rather, the ex officio presiding officer was the Lord Chancellor. With the passage of the Constitutional Reform Act 2005, the post of Lord Speaker was created, a position to which a peer is elected by the House and subsequently appointed by the Crown. The first Lord Speaker, elected on 4 May 2006, was Baroness Hayman, a former Labour peer. As the Speaker is expected to be an impartial presiding officer, Baroness Hayman resigned from the Labour Party. In 2011, Baroness D'Souza was elected as the second Lord Speaker, replacing Baroness Hayman in September 2011. Baroness D'Souza was in turn succeeded by Lord Fowler, the incumbent Lord Speaker, in September 2016.
This reform of the post of Lord Chancellor was made due to the perceived constitutional anomalies inherent in the role. The Lord Chancellor was not only the Speaker of the House of Lords, but also a member of the Cabinet; his or her department, formerly the Lord Chancellor 's Department, is now called the Ministry of Justice. The Lord Chancellor is no longer the head of the judiciary of England and Wales. Hitherto, the Lord Chancellor was part of all three branches of government: the legislative, the executive, and the judicial.
The overlap of the legislative and executive roles is a characteristic of the Westminster system, as the entire cabinet consists of members of the House of Commons or the House of Lords; however, in June 2003, the Blair Government announced its intention to abolish the post of Lord Chancellor because of the office 's mixed executive and judicial responsibilities. The abolition of the office was rejected by the House of Lords, and the Constitutional Reform Act 2005 was thus amended to preserve the office of Lord Chancellor. The Act no longer guarantees that the office holder of Lord Chancellor is the presiding officer of the House of Lords, and therefore allows the House of Lords to elect a speaker of their own.
The Lord Speaker may be replaced as presiding officer by one of his or her deputies. The Chairman of Committees, the Principal Deputy Chairman of Committees, and several Chairmen are all deputies to the Lord Speaker, and are all appointed by the House of Lords itself at the beginning of each session. By custom, the Crown appoints each Chairman, Principal Deputy Chairman and Deputy Chairman to the additional office of Deputy Speaker of the House of Lords. There was previously no legal requirement that the Lord Chancellor or a Deputy Speaker be a member of the House of Lords (though the same has long been customary).
Whilst presiding over the House of Lords, the Lord Chancellor traditionally wore ceremonial black and gold robes. Robes of black and gold are now worn by the Lord Chancellor and Secretary of State for Justice in the House of Commons, on ceremonial occasions. This is no longer a requirement for the Lord Speaker except for State occasions outside of the chamber. The Speaker or Deputy Speaker sits on the Woolsack, a large red seat stuffed with wool, at the front of the Lords Chamber.
When the House of Lords resolves itself into committee (see below), the Chairman of Committees or a Deputy Chairman of Committees presides, not from the Woolsack, but from a chair at the Table of the House. The presiding officer has little power compared to the Speaker of the House of Commons. He or she only acts as the mouthpiece of the House, performing duties such as announcing the results of votes. This is because, unlike in the House of Commons where all statements are directed to "Mr / Madam Speaker '', in the House of Lords they are directed to "My Lords ''; i.e., the entire body of the House.
The Lord Speaker or Deputy Speaker can not determine which members may speak, or discipline members for violating the rules of the House; these measures may be taken only by the House itself. Unlike the politically neutral Speaker of the House of Commons, the Lord Chancellor and Deputy Speakers originally remained members of their respective parties, and were permitted to participate in debate; however, this is no longer true of the new role of Lord Speaker.
Another officer of the body is the Leader of the House of Lords, a peer selected by the Prime Minister. The Leader of the House is responsible for steering Government bills through the House of Lords, and is a member of the Cabinet. The Leader also advises the House on proper procedure when necessary, but such advice is merely informal, rather than official and binding. A Deputy Leader is also appointed by the Prime Minister, and takes the place of an absent or unavailable leader.
The Clerk of the Parliaments is the chief clerk and officer of the House of Lords (but is not a member of the House itself). The Clerk, who is appointed by the Crown, advises the presiding officer on the rules of the House, signs orders and official communications, endorses bills, and is the keeper of the official records of both Houses of Parliament. Moreover, the Clerk of the Parliaments is responsible for arranging by - elections of hereditary peers when necessary. The deputies of the Clerk of the Parliaments (the Clerk Assistant and the Reading Clerk) are appointed by the Lord Speaker, subject to the House 's approval.
The Gentleman Usher of the Black Rod is also an officer of the House; he takes his title from the symbol of his office, a black rod. Black Rod (as the Gentleman Usher is normally known) is responsible for ceremonial arrangements, is in charge of the House 's doorkeepers, and may (upon the order of the House) take action to end disorder or disturbance in the Chamber. Black Rod also holds the office of Serjeant - at - Arms of the House of Lords, and in this capacity attends upon the Lord Speaker. The Gentleman Usher of the Black Rod 's duties may be delegated to the Yeoman Usher of the Black Rod or to the Assistant Serjeant - at - Arms.
The House of Lords and the House of Commons assemble in the Palace of Westminster. The Lords Chamber is lavishly decorated, in contrast with the more modestly furnished Commons Chamber. Benches in the Lords Chamber are coloured red. The Woolsack is at the front of the Chamber; the Government sit on benches to the right of the Woolsack, while members of the Opposition sit on the left. Crossbenchers sit on the benches immediately opposite the Woolsack.
The Lords Chamber is the site of many formal ceremonies, the most famous of which is the State Opening of Parliament, held at the beginning of each new parliamentary session. During the State Opening, the Sovereign, seated on the Throne in the Lords Chamber and in the presence of both Houses of Parliament, delivers a speech outlining the Government 's agenda for the upcoming parliamentary session.
In the House of Lords, members need not seek the recognition of the presiding officer before speaking, as is done in the House of Commons. If two or more Lords simultaneously rise to speak, the House decides which one is to be heard by acclamation, or, if necessary, by voting on a motion. Often, however, the Leader of the House will suggest an order, which is thereafter generally followed. Speeches in the House of Lords are addressed to the House as a whole ("My Lords '') rather than to the presiding officer alone (as is the custom in the Lower House). Members may not refer to each other in the second person (as "you ''), but rather use third person forms such as "the noble Duke '', "the noble Earl '', "the noble Lord '', "my noble friend '', "The most Reverend Primate '', etc.
Each member may make no more than one speech on a motion, except that the mover of the motion may make one speech at the beginning of the debate and another at the end. Speeches are not subject to any time limits in the House; however, the House may put an end to a speech by approving a motion "that the noble Lord be no longer heard ''. It is also possible for the House to end the debate entirely, by approving a motion "that the Question be now put ''. This procedure is known as Closure, and is extremely rare.
Once all speeches on a motion have concluded, or Closure invoked, the motion may be put to a vote. The House first votes by voice vote; the Lord Speaker or Deputy Speaker puts the question, and the Lords respond either "content '' (in favour of the motion) or "not content '' (against the motion). The presiding officer then announces the result of the voice vote, but if his assessment is challenged by any Lord, a recorded vote known as a division follows.
Members of the House enter one of two lobbies (the content lobby or the not - content lobby) on either side of the Chamber, where their names are recorded by clerks. At each lobby are two Tellers (themselves members of the House) who count the votes of the Lords. The Lord Speaker may not take part in the vote. Once the division concludes, the Tellers provide the results thereof to the presiding officer, who then announces them to the House.
If there is an equality of votes, the motion is decided according to the following principles: legislation may proceed in its present form, unless there is a majority in favour of amending or rejecting it; any other motion is rejected, unless there is a majority in favour of approving it. The quorum of the House of Lords is just three members for a general or procedural vote, and 30 members for a vote on legislation. If fewer than three or 30 members (as appropriate) are present, the division is invalid.
By contrast with the House of Commons, the House of Lords has not until recently had an established procedure for putting sanctions on its members. When a cash for influence scandal was referred to the Committee of Privileges in January 2009, the Leader of the House of Lords also asked the Privileges Committee to report on what sanctions the House had against its members. After seeking advice from the Attorney General for England and Wales and the former Lord Chancellor Lord Mackay of Clashfern, the committee decided that the House "possessed an inherent power '' to suspend errant members, although not to withhold a writ of summons nor to expel a member permanently. When the House subsequently suspended Lord Truscott and Lord Taylor of Blackburn for their role in the scandal, they were the first to meet this fate since 1642.
Recent changes have expanded the disciplinary powers of the House. Section 3 of the House of Lords Reform Act 2014 now provides that any member of the House of Lords convicted of a crime and sentenced to imprisonment for more than one year loses their seat. The House of Lords (Expulsion and Suspension) Act 2015 allows the House to set up procedures to suspend, and to expel, its members.
There are two motions which have grown up through custom and practice and which govern questionable conduct within the House. They are brought into play by a member standing up, possibly intervening on another member, and moving the motion without notice. When the debate is getting excessively heated, it is open to a member to move "that the Standing Order on Asperity of Speech be read by the Clerk ''. The motion can be debated, but if agreed by the House, the Clerk of the Parliaments will read out Standing Order 33 which provides "That all personal, sharp, or taxing speeches be forborn ''. The Journals of the House of Lords record only four instances on which the House has ordered the Standing Order to be read since the procedure was invented in 1871.
For more serious problems with an individual Lord, the option is available to move "That the noble Lord be no longer heard ''. This motion also is debatable, and the debate which ensues has sometimes offered a chance for the member whose conduct has brought it about to come to order so that the motion can be withdrawn. If the motion is passed, its effect is to prevent the member from continuing their speech on the motion then under debate. The Journals identify eleven occasions on which this motion has been moved since 1884; four were eventually withdrawn, one was voted down, and six were passed.
In 1958, to counter criticism that some peers appeared only for major decisions in the House, thereby swaying particular votes, the Standing Orders of the House of Lords were enhanced. Peers who did not wish to attend sittings regularly, or who were prevented from doing so by ill health, age or other reasons, could now request leave of absence. A peer granted leave is expected not to attend the House's sittings until the leave expires or is terminated, which must be announced at least a month before the peer's return.
Members of the House of Lords can, since 2010, opt to receive a £300 per day attendance allowance, plus limited travel expenses. Peers can elect to receive a reduced attendance allowance of £150 per day instead. Prior to 2010, peers from outside London could claim an overnight allowance of £174.
Unlike in the House of Commons, when the term committee is used to describe a stage of a bill, this committee does not take the form of a public bill committee, but what is described as Committee of the Whole House. It is made up of all Members of the House of Lords allowing any Member to contribute to debates if he or she chooses to do so and allows for more flexible rules of procedure. It is presided over by the Chairman of Committees.
The term committee is also used to describe Grand Committee, where the same rules of procedure apply as in the main chamber, except that no divisions may take place. For this reason, business that is discussed in Grand Committee is usually uncontroversial and likely to be agreed unanimously.
Public bills may also be committed to pre-legislative committees. A pre-legislative Committee is specifically constituted for a particular bill. These committees are established in advance of the bill being laid before either the House of Lords or the House of Commons and can take evidence from the public. Such committees are rare and do not replace any of the usual stages of a bill, including committee stage.
The House of Lords also has 15 Select Committees. Typically, these are sessional committees, meaning that their members are appointed by the House at the beginning of each session, and continue to serve until the next parliamentary session begins. In practice, these are often permanent committees, which are re-established during every session. These committees are typically empowered to make reports to the House "from time to time '', that is, whenever they wish. Other committees are ad - hoc committees, which are set up to investigate a specific issue. When they are set up by a motion in the House, the motion will set a deadline by which the Committee must report. After this date, the Committee will cease to exist unless it is granted an extension. One example of this is the Committee on Public Service and Demographic Change. The House of Lords may appoint a chairman for a committee; if it does not do so, the Chairman of Committees or a Deputy Chairman of Committees may preside instead. Most of the Select Committees are also granted the power to co-opt members, such as the European Union Committee. The primary function of Select Committees is to scrutinise and investigate Government activities; to fulfil these aims, they are permitted to hold hearings and collect evidence. Bills may be referred to Select Committees, but are more often sent to the Committee of the Whole House and Grand Committees.
The committee system of the House of Lords also includes several Domestic Committees, which supervise or consider the House 's procedures and administration. One of the Domestic Committees is the Committee of Selection, which is responsible for assigning members to many of the House 's other committees.
There are currently 798 sitting members of the House of Lords. An additional 27 Lords are ineligible from participation, including eight peers who are constitutionally disqualified as members of the Judiciary.
The House of Lords Act 1999 allocated 75 of the 92 hereditary peers to the parties based on the proportion of hereditary peers that belonged to that party in 1999:
Of the initial 42 hereditary peers elected as Conservatives, one, Lord Willoughby de Broke, now sits as a member of UKIP.
Fifteen hereditary peers are elected by the whole House, and the remaining hereditary peers are the two royal office - holders, the Earl Marshal and the Lord Great Chamberlain, both of whom are currently on leave of absence.
A report in 2007 stated that many members of the Lords (particularly the life peers) do not attend regularly; the average daily attendance was around 408.
While the number of hereditary peers is limited to 92, and that of Lords spiritual to 26, there is no maximum limit to the number of life peers who may be members of the House of Lords at any time.
what episode does reid get caught with drugs | Spencer Reid - Wikipedia
Spencer Reid is a fictional character from the CBS crime drama Criminal Minds, portrayed by Matthew Gray Gubler. Reid is a genius with an IQ of 187 who can read 20,000 words per minute and has an eidetic memory (meaning that he can remember an exceedingly large amount of information in extraordinary detail). He is the youngest member of the FBI's Behavioral Analysis Unit (BAU), has three BAs and three PhDs, and specializes in statistics and geographic profiling.
Reid was born in 1981 and is a genius and autodidact who graduated from a Las Vegas public high school at age 12. He has an IQ of 187, an eidetic memory, and can read 20,000 words per minute (an average American adult reads prose text at 200-300 words per minute). He holds B.A.s in Psychology, Sociology, and Philosophy, as well as Ph.D.s in Chemistry, Engineering, and Mathematics from Caltech.
Reid is 23 in the pilot episode, having joined the unit when he was 22. His fellow team members almost always introduce him as Dr. Reid. Hotch reveals in the first season that Gideon insists on introducing him as Dr. Reid because Gideon fears that, because of his age, Reid will not be taken seriously as an FBI agent. This was a genuine concern, both in-universe and for audience acceptance, since in real life the minimum age to become an FBI Special Agent is 23, with at least three more years to obtain Supervisory Special Agent status, and appointments to the BAU do not usually occur until after at least eight to ten years in the FBI. While filming the pilot, the show's FBI consultant informed Matthew Gray Gubler that there was nothing realistic about his character.
Before Gubler was cast in the role, the character was envisioned as more like Data from Star Trek. However, the producers liked Gubler 's softer interpretation, despite telling the actor he was wrong for the part. After several callbacks, he was hired.
During October 2012, series creator Jeff Davis tweeted that Reid was originally envisioned to be bisexual, but the network shut the idea down by the fourth episode when Reid develops a crush on his colleague, Jennifer "JJ '' Jareau.
Reid is socially awkward, a trait commonly associated with Asperger's syndrome, and has a hard time dealing with his emotions. He often fixates on things (prompting Morgan and other team members to tell him to be quiet), and at times misses social cues (for example, unknowingly changing the subject of a conversation). The Unknown Subject ("UnSub") in "Broken Mirror" notes this, and Gubler stated in an interview during the show's second season: "(Reid)'s an eccentric genius, with hints of schizophrenia and minor autism, Asperger's syndrome. Reid is 24 years old with three Ph.D.s and one cannot usually achieve that without some form of autism." Writer Sharon Lee Watson stated in a Twitter chat that Reid's Asperger traits make the character more lovable.
Gubler has commented on the differences between Reid and similarly odd character Penelope Garcia: "She represents everything he 's not, she 's very tech oriented and I would like to imagine he is more like 1920s smart, books and reading etc ''. Kirsten Vangsness agreed, adding that Garcia is more extroverted and available emotionally, whereas Reid struggles with his emotions. Reid is a technophobe, using neither email nor the new iPads. Gubler tweeted that Reid is also germaphobic. In general, Reid dislikes shaking hands, and shows adverse reactions when touched by strangers. It is speculated the character may also have slight obsessive compulsive disorder, particularly from a scene in "Out of the Light '' where Derek Morgan slightly moves an item in the home of an UnSub afflicted with OCD, and Reid immediately places it back to its previous spot.
Reid is a good map-reader, and therefore does the geographic profiling and map-related activities for the team. He also has a talent with words, and is the team's go-to linguistic profiler, as well as their unofficial discourse analyst. He is rarely seen behind the wheel (one time when Morgan hands him the keys, JJ and Emily exchange horrified expressions), but in "Lo-Fi" he is seen getting into the driver's side of a vehicle and even driving it in one scene, as well as driving to Gideon's cabin in "In Name and Blood" and in "Nelson's Sparrow". However, he is usually seen as a backseat passenger during car scenes, and he commutes to work using the Metro, and presumably the VRE.
Spencer Walter Reid was born in Las Vegas to William Reid (Taylor Nichols), a lawyer, and Diana Reid (Jane Lynch), a former professor of 15th century literature. Diana also has paranoid schizophrenia, and went off her medication during her pregnancy. Reid and his mother have a very close relationship despite her condition.
At age four, Spencer was approached by a man, Gary Michaels, while playing chess at a local park. Although Spencer was unharmed, Diana insisted the family move because she believed her son was in danger. Shortly thereafter, Spencer 's six - year - old neighbor, Riley Jenkins, was sexually abused and murdered. Diana told Lou Jenkins, Riley 's father, about the park incident. Diana then followed Jenkins and witnessed him beat Michaels to death with a baseball bat, getting blood on her clothes in the process. In order to protect his wife, William burned Diana 's clothes, which Spencer inadvertently witnessed. Jenkins avoided arrest because Michaels "disappeared '', and as he had a criminal history as a sexual predator, the police did n't look very hard into the case. Years later, Reid starts having nightmares about the incident, initially leading him to believe his own father was Riley 's killer. Rossi and Morgan help him investigate. After undergoing hypnosis to recover his memories, Reid mistakenly believes he saw his father burning Riley 's clothes, not Diana 's. He pursues his father as a suspect, even after it becomes clear that Michaels is the more likely perpetrator. Michaels ' body is found, and DNA confirms his killer was Lou Jenkins, who is arrested. While Reid is interviewing Jenkins, demanding to know how his father was involved, his parents interrupt and confess to their son the whole story.
At ten years old, Reid 's father abandoned the family. The Michaels incident had already started the rift, and as Diana 's mental state continued to deteriorate due to her paranoid schizophrenia, William left, refusing to take Spencer with him. He moved ten miles away, and never contacted his son. Reid finds out his father 's address from Lou Jenkins seventeen years later, as well as the fact that his father never changed jobs. William later states that the reason he never returned was because he was too ashamed, and felt too much time had passed for him to re-enter Reid 's life, although he did keep electronic tabs on his son. When his father was leaving, young Spencer tried to convince him to stay by using a statistic that children of parents who remain together receive more education. This angered William: "We 're not statistics. '' Reid states that one way he gets back at his father is to collect more educational degrees.
Due to his young age and genius IQ, Reid was a victim of severe bullying in high school. In "Elephant 's Memory '', he recounts one instance where he was stripped naked and tied to a goalpost in front of other students, remaining there for hours. In "L.D.S.K. '', Hotchner is forced to kick Reid in order to allow him access to a gun in order to shoot a suspect. When Hotch says he is sorry if he hurt him, Reid points out that he was a child prodigy in a Las Vegas school and tells Hotch, "you kick like a nine - year - old girl ''. Reid 's social standing as a child increased when he started winning games as the coach of his high school 's basketball team, using statistics to break down the opposing teams ' shooting strategies.
At age twelve, Reid graduated from high school. He attended Caltech, where he rode his bike to classes. He finished his undergraduate degree at sixteen, and received his first doctorate (in Mathematics) the following year. It has also been stated that he attended MIT, but episode writer Breen Frazier admitted the MIT line was a mistake, although it has not yet been corrected onscreen. Yale University was Reid 's "safety school ''. Between the ages of 17 and 21, he completed two more doctorates (Chemistry and Engineering), and two more Bachelor 's degrees (psychology and sociology).
When Reid was eighteen, he realized his mother 's condition had deteriorated to the point where she could no longer take care of herself and had her committed involuntarily to a psychiatric institution, Bennington Sanitarium. Diana still resides in the same institution, and Reid says that he sends her letters every day, in part because of the guilt he feels for not visiting her. He worries about the fact that his mother 's illness can be passed on genetically; telling Morgan: "I know what it 's like to be afraid of your own mind ''.
Currently, Reid resides in an apartment in the District of Columbia, possibly near the Van Ness - UDC Metro stop.
Reid joined the FBI at the age of either 21 or 22, depending upon what age he entered the FBI Academy. While there was "no psychological exam or test the FBI could put in front of him he could not ace inside of an hour", he did struggle with the more physical aspects of training, and ultimately received waivers for those requirements. Even after a year in the field, Reid still struggles to pass his gun qualifications. He is often left behind during takedowns, has never given chase, and jokes that it is Morgan's job to kick down the doors. This does not bother the team: while he has shown an ability to physically disarm unsubs, his true talent is psychologically disarming them, as well as his ability to solve cases behind the scenes.
Profiling is the only profession Reid considered, and he was groomed specifically for the BAU. Upon graduation from the Academy, he was placed in the BAU at age 22, and given the title Supervisory Special Agent. His first case in the field was the Blue Ridge Strangler.
Gideon is Reid 's closest confidante on the team during the first two seasons, and often serves as a mentor to Reid. Gideon 's departure affects Reid deeply. He tries playing all possible chess moves in order to understand.
Reid is close to JJ, Morgan, and Emily Prentiss. JJ asks him to be godfather to her newborn son Henry, and she is the only one on the team who calls him "Spence". It is implied in "Plain Sight" that Reid may have a slight crush on JJ, and Gideon even prods him to ask her out after giving him Washington Redskins football tickets for his birthday, but nothing comes of it, and they continue with their brother-sister relationship. However, Reid is very protective of her, and often blames himself if she is injured, even if there was nothing he could have done to prevent it. In "Closing Time", after she arrests an unsub but gets hurt in the process, Reid is seen counting her injuries as she sits in the ambulance and telling the paramedic that she should go for a CAT scan.

Reid also shares a brotherly friendship with Derek Morgan. In season seven, he is comfortable enough to start a joke war with him, something he would probably never do with anyone else, and he occasionally confides his secrets to Morgan. It is suggested in the episode "Epilogue" that Reid told Derek details about what Tobias Hankel did to him, when he makes a remark about seeing the afterlife before Tobias saved him; Morgan looks surprised and says, "You never told me that."

In the episode "Elephant's Memory", when approached by a fully armed Owen Savage, the unsub with whom Reid identifies, Reid gives Prentiss his gun and trusts her to back him up and not shoot at Owen while he tries to talk him down. Although not shown, it is implied that Reid and Prentiss spend a great deal of time together outside of work, including riding the train home together when they return from cases. Prentiss is the only one who has beaten Reid at poker, even correcting his statistic about her particular poker move. Reid and Prentiss are held hostage by a cult led by Benjamin Cyrus (portrayed by Luke Perry); though he is not injured, Reid struggles with guilt over "allowing" Prentiss's beating at the hands of Cyrus in "Minimal Loss". Reid later becomes close to Alex Blake, in whose forensic linguistics class he guest lectures. Blake serves as a maternal figure within the BAU.
Reid contracts anthrax during an outbreak in Maryland, and is later shot in the leg protecting a doctor whose life is being threatened.
During "Corazon '', Reid begins to suffer from severe headaches and hallucinations. He goes to see a doctor in order to find out the source of his headaches, but the doctor says there is no physical cause for his headaches, and they may be psychosomatic. Reid refuses to believe this, afraid that he may be suffering from the same illness as his mother. It is not mentioned again until "Coda '', when he is seen once again wearing sunglasses and is carrying a book on migraines. In the same episode, Reid bonds with a young autistic boy, Sammy Sparks. A cut line from the episode has Rossi stating that Sammy and Reid are two of the most fascinating minds he 's ever encountered. In the episode "Valhalla '', Reid tells Prentiss about his headaches. By then, Reid has gone to several doctors, but no one has been able to diagnose what is wrong with him. He tells Prentiss that he has not told any of the team members because he is afraid that they will "make him feel like a baby. ''
In "Lauren '', it is Reid and Garcia who react most strongly to the news of Emily 's death. Reid 's reaction is to run out of the room, and he ends up sobbing into JJ 's shoulder telling her that he "never got a chance to say goodbye. '' In season seven, when Emily returns and Reid discovers Hotch and JJ faked her death, he is upset, especially with JJ. He tells her he feels betrayed because he came to her house "for 10 weeks in a row, crying over losing a friend '' and "not once did you have the decency to tell me the truth. '' She does n't say anything. He asks J.J. "what if '' he had started using Dilaudid again after Emily "died. ''
In "True Genius '', Reid doubts his reasons for being in the BAU and wonders if he should be doing more with his ' genius '. This is caused by the unsub sending him taunting messages and challenging Reid to find him. In this episode, he also reveals that the team missed his 30th birthday. At the end of the episode, the team throw him a mini birthday party.
After being kidnapped by a serial killer with multiple personalities, Tobias Hankel (James Van Der Beek), Reid is tortured and drugged over the course of two days in "Revelations ''. This leads him to develop an addiction to the narcotic painkiller Dilaudid. While the BAU team members have their suspicions about Spencer 's addiction, none of them confront him about it. An old friend of Reid 's in New Orleans is also aware that Reid suffers from ' problems ' in "Jones ''.
Reid gets clean and attends a support group meeting for addicts in law enforcement in "Elephant 's Memory '', at which he admits struggling with cravings as well as with traumatic memories, including a young adult suspect 's shooting death in his presence. When he contracts anthrax in "Amplification '', he strictly refuses to take any narcotic painkillers in an attempt to remain clean. Memories of his torture under Hankel later allow him to empathize with other victims.
Throughout the show, Reid shows a lack of interest when interacting with women. The only three exceptions are Lila Archer, a young actress he is assigned to protect; Austin (Courtney Ford), a bartender he "woos '' with magic tricks while showing her a sketch of a potential suspect; and Maeve, a geneticist he first meets through correspondence, then later weekly phone calls. In "Memoriam '', a prostitute hits on Reid in a Las Vegas casino, but he is oblivious to her intentions.
In "Somebody 's Watching '', with the team on a case to protect a TV starlet, Reid saves Lila Archer (Amber Heard) from being harmed by a serial killer. Reid and Lila kindle a short - lived romance, beginning when Lila pulls Reid, fully clothed, into her pool for a kiss. At the end of the episode, they go their separate ways, and Lila is not seen in further episodes.
In another episode, Reid and Morgan are in a nightclub trying to find a serial murderer who picks up women in nightclubs. Reid is having trouble talking to the women in these clubs, especially since he is spouting facts about club - related deaths, but Morgan helps him out. Reid starts a conversation with the female bartender and proceeds to do a magic trick in which he appears to jab a pen through the eye of a police sketch, then pulls it out, leaving the paper unscathed. She expresses interest in him, and he gives her his business card in case she hears something about the killer. Later, she sees the killer with another potential victim and intentionally spills her drink on the woman, pulling her away. The killer seems to disappear, and while the bartender goes outside to phone Reid, the killer grabs her. The team responds quickly and saves her before she is harmed. At the end of the episode, she and Reid are talking over the phone, and he opens a package at his desk that contains the card that he gave her -- with a lipstick kiss on the back.
In "God Complex '', Reid begins calling a mystery woman on a pay phone and they talk about his progress with his headaches and sleep deprivation. It is revealed that she is in danger and does n't want someone to know about her and Reid. While on a case in New Mexico, Alex Blake drops Reid off at a phone booth, unaware that he is going to call the mystery woman. Blake later returns and questions his motives. The two agents have a heated discussion, and he tells her the mystery woman is a geneticist he contacted about his headaches during season six, whom he believes can help on the case. Thanks to her, they are able to find the unsub and save his latest victim. At the end of the episode, Reid thanks her for her help and tells her that he and the BAU can help her in her situation. However, she refuses because she does n't want him to hurt Reid. She ends the phone call by telling him that she loves him. Left shocked and speechless, he starts walking to his left but then turns around and walks to his right. Sometime after "The Lesson '', Reid continues calling his mystery woman, and it is revealed that her name is Maeve (portrayed by Beth Riesgraf). She tells Reid that her stalker might be gone and because of this, she wants to meet him. During a case in Arizona, Blake confronts him about "phone booth girl ''. Reid tells Blake that he 's nervous to meet her because he already believes she 's the most beautiful girl in the world, and he is afraid that she wo n't like him because of his looks. Blake encourages him to meet her. After Reid gets back from the case, they plan to meet at a fancy restaurant until Reid sees a man gazing over at him. Thinking him to be Maeve 's stalker, Reid calls her to cancel while she is right outside. Spencer realizes that the man is not the stalker, and Maeve has already left. The hostess gives him a bag that she left for him. It turns out to be the very same book he was going to give to her by Sir Arthur Conan Doyle. Inside, she has written a quote by Thomas Merton; "Love is our true destiny. We do not find the meaning of life by ourselves alone. We find it with another. ''
In "Zugzwang '', Reid discovers that Maeve has been kidnapped by her stalker. In the investigation, he meets Maeve 's former fiance, the man he believed to be her stalker. Reid becomes more and more distressed by the situation and discovers that her kidnapper is not her ex-fiance, but the man 's girlfriend. Reid searches excessively to find her, and even offers to take her place. He discovers that the stalker wants attention from him and to be seen as an equal. He gets a clue from the stalker leading him to her location, where he tricks the unsub into believing that he is in love with her. Reid finally meets Maeve face - to - face during the situation and is able to briefly subdue the stalker, only to have her hold Maeve at gunpoint. He once again offers to take Maeve 's place, but the unsub kills herself and Maeve in one shot.
Reid spends two weeks alone in his apartment after Maeve 's death. The team constantly tries to help him, but he refuses to answer the door. While Reid remains at home, the team travels to another case. They call him for help a few times before he joins them in person. Once the case is complete, Reid asks Morgan, Penelope, and JJ to help clean up his apartment. He picks up The Narrative of John Smith (given to him by Maeve) with the Thomas Merton quote and places it on the bookshelf.
In the months following, Reid throws himself into his work because he is unable to sleep; he has a recurring dream in which Maeve asks him to dance with her, but he forces himself to wake up before he answers. By the end of "Alchemy '', Reid is able to complete the dream by accepting Maeve 's request to dance with her. In "The Inspiration '', Reid admits that, had Maeve not died, he might have had kids. In the season nine finale, it is revealed that he still carries a copy of The Narrative of John Smith in his bag.
|
who sang star spangled banner at all star game | The Star - Spangled Banner - wikipedia
"The Star - Spangled Banner '' is the national anthem of the United States of America. The lyrics come from "Defence of Fort M'Henry '', a poem written on September 14, 1814, by the 35 - year - old lawyer and amateur poet Francis Scott Key after witnessing the bombardment of Fort McHenry by British ships of the Royal Navy in Baltimore Harbor during the Battle of Baltimore in the War of 1812. Key was inspired by the large American flag, the Star - Spangled Banner, flying triumphantly above the fort during the American victory.
The poem was set to the tune of a popular British song written by John Stafford Smith for the Anacreontic Society, a men 's social club in London. "To Anacreon in Heaven '' (or "The Anacreontic Song ''), with various lyrics, was already popular in the United States. Set to Key 's poem and renamed "The Star - Spangled Banner '', it soon became a well - known American patriotic song. With a range of one octave and one fifth (a semitone more than an octave and a half), it is known for being difficult to sing. Although the poem has four stanzas, only the first is commonly sung today.
"The Star - Spangled Banner '' was recognized for official use by the United States Navy in 1889, and by U.S. President Woodrow Wilson in 1916, and was made the national anthem by a congressional resolution on March 3, 1931 (46 Stat. 1508, codified at 36 U.S.C. § 301), which was signed by President Herbert Hoover.
Before 1931, other songs served as the hymns of American officialdom. "Hail, Columbia '' served this purpose at official functions for most of the 19th century. "My Country, ' Tis of Thee '', whose melody is identical to "God Save the Queen '', the British national anthem, also served as a de facto anthem. Following the War of 1812 and subsequent American wars, other songs emerged to compete for popularity at public events, among them "The Star - Spangled Banner '', as well as "America the Beautiful ''.
On September 3, 1814, following the Burning of Washington and the Raid on Alexandria, Francis Scott Key and John Stuart Skinner set sail from Baltimore aboard the ship HMS Minden, flying a flag of truce on a mission approved by President James Madison. Their objective was to secure an exchange of prisoners, one of whom was Dr. William Beanes, the elderly and popular town physician of Upper Marlboro and a friend of Key 's who had been captured in his home. Beanes was accused of aiding the arrest of British soldiers. Key and Skinner boarded the British flagship HMS Tonnant on September 7 and spoke with Major General Robert Ross and Vice Admiral Alexander Cochrane over dinner while the two officers discussed war plans. At first, Ross and Cochrane refused to release Beanes, but relented after Key and Skinner showed them letters written by wounded British prisoners praising Beanes and other Americans for their kind treatment.
Because Key and Skinner had heard details of the plans for the attack on Baltimore, they were held captive until after the battle, first aboard HMS Surprise and later back on HMS Minden. After the bombardment, certain British gunboats attempted to slip past the fort and effect a landing in a cove to the west of it, but they were turned away by fire from nearby Fort Covington, the city 's last line of defense.
During the rainy night, Key had witnessed the bombardment and observed that the fort 's smaller "storm flag '' continued to fly, but once the shell and Congreve rocket barrage had stopped, he would not know how the battle had turned out until dawn. On the morning of September 14, the storm flag had been lowered and the larger flag had been raised.
During the bombardment, HMS Terror and HMS Meteor provided some of the "bombs bursting in air ''.
Key was inspired by the American victory and the sight of the large American flag flying triumphantly above the fort. This flag, with fifteen stars and fifteen stripes, had been made by Mary Young Pickersgill together with other workers in her home on Baltimore 's Pratt Street. The flag later came to be known as the Star - Spangled Banner and is today on display in the National Museum of American History, a treasure of the Smithsonian Institution. It was restored in 1914 by Amelia Fowler, and again in 1998 as part of an ongoing conservation program.
Aboard the ship the next day, Key wrote a poem on the back of a letter he had kept in his pocket. At twilight on September 16, he and Skinner were released in Baltimore. He completed the poem at the Indian Queen Hotel, where he was staying, and titled it "Defence of Fort M'Henry ''.
Much of the idea of the poem, including the flag imagery and some of the wording, is derived from an earlier song by Key, also set to the tune of "The Anacreontic Song ''. The song, known as "When the Warrior Returns '', was written in honor of Stephen Decatur and Charles Stewart on their return from the First Barbary War.
Absent elaboration by Francis Scott Key prior to his death in 1843, some have speculated in modern times about the meaning of phrases or verses. According to British historian Robin Blackburn, the words "the hireling and slave '' allude to the thousands of ex-slaves in the British ranks organised as the Corps of Colonial Marines, who had been liberated by the British and demanded to be placed in the battle line "where they might expect to meet their former masters. '' Nevertheless, Mark Clague, a professor of musicology at the University of Michigan, argues that the "middle two verses of Key 's lyric vilify the British enemy in the War of 1812 '' and that the song "in no way glorifies or celebrates slavery. '' Clague writes that "For Key... the British mercenaries were scoundrels and the Colonial Marines were traitors who threatened to spark a national insurrection. '' The harshly anti-British nature of Verse 3 led to its omission in sheet music in World War I, when Britain and the U.S. were allies. Responding to the assertion of writer Jon Schwarz of The Intercept that the song is a "celebration of slavery, '' Clague said: "The reference to slaves is about the use, and in some sense the manipulation, of black Americans to fight for the British, with the promise of freedom. The American forces included African - Americans as well as whites. The term ' freemen, ' whose heroism is celebrated in the fourth stanza, would have encompassed both. ''
Others suggest that "Key may have intended the phrase as a reference to the British Navy 's practice of impressment (kidnapping sailors and forcing them to fight in defense of the crown), or as a semi-metaphorical slap at the British invading force as a whole (which included a large number of mercenaries). ''
Key gave the poem to his brother - in - law Judge Joseph H. Nicholson who saw that the words fit the popular melody "The Anacreontic Song '', by English composer John Stafford Smith. This was the official song of the Anacreontic Society, an 18th - century gentlemen 's club of amateur musicians in London. Nicholson took the poem to a printer in Baltimore, who anonymously made the first known broadside printing on September 17; of these, two known copies survive.
On September 20, both the Baltimore Patriot and The American printed the song, with the note "Tune: Anacreon in Heaven ''. The song quickly became popular, with seventeen newspapers from Georgia to New Hampshire printing it. Soon after, Thomas Carr of the Carr Music Store in Baltimore published the words and music together under the title "The Star Spangled Banner '', although it was originally called "Defence of Fort M'Henry ''. Thomas Carr 's arrangement introduced the raised fourth which became the standard deviation from "The Anacreontic Song ''. The song 's popularity increased, and its first public performance took place in October, when Baltimore actor Ferdinand Durang sang it at Captain McCauley 's tavern. Washington Irving, then editor of the Analectic Magazine in Philadelphia, reprinted the song in November 1814.
By the early 20th century, there were various versions of the song in popular use. Seeking a singular, standard version, President Woodrow Wilson tasked the U.S. Bureau of Education with providing that official version. In response, the Bureau enlisted the help of five musicians to agree upon an arrangement. Those musicians were Walter Damrosch, Will Earhart, Arnold J. Gantvoort, Oscar Sonneck and John Philip Sousa. The standardized version that was voted upon by these five musicians premiered at Carnegie Hall on December 5, 1917, in a program that included Edward Elgar 's Carillon and Gabriel Pierné 's The Children 's Crusade. The concert was put on by the Oratorio Society of New York and conducted by Walter Damrosch. An official handwritten version of the final votes of these five men has been found and shows all five men 's votes tallied, measure by measure.
The Italian opera composer Giacomo Puccini used an extract of the melody in writing the aria "Dovunque al mondo... '' in 1904 for his work Madama Butterfly.
The song gained popularity throughout the 19th century and bands played it during public events, such as July 4th celebrations.
A plaque displayed at Fort Meade, South Dakota, claims that the idea of making "The Star Spangled Banner '' the national anthem began on their parade ground in 1892. Colonel Caleb Carlton, Post Commander, established the tradition that the song be played "at retreat and at the close of parades and concerts. '' Carlton explained the custom to Governor Sheldon of South Dakota who "promised me that he would try to have the custom established among the state militia. '' Carlton wrote that after a similar discussion, Secretary of War, Daniel E. Lamont issued an order that it "be played at every Army post every evening at retreat. ''
In 1889, the US Navy officially adopted "The Star - Spangled Banner ''. In 1916, President Woodrow Wilson ordered that "The Star - Spangled Banner '' be played at military and other appropriate occasions. The playing of the song two years later during the seventh - inning stretch of Game One of the 1918 World Series, and thereafter during each game of the series, is often cited as the first instance that the anthem was played at a baseball game, though evidence shows that the "Star - Spangled Banner '' was performed as early as 1897 at opening day ceremonies in Philadelphia and then more regularly at the Polo Grounds in New York City beginning in 1898. In any case, the tradition of performing the national anthem before every baseball game began in World War II.
On April 10, 1918, John Charles Linthicum, U.S. Congressman from Maryland, introduced a bill to officially recognize "The Star - Spangled Banner '' as the national anthem. The bill did not pass. On April 15, 1929, Linthicum introduced the bill again, his sixth time doing so. On November 3, 1929, Robert Ripley drew a panel in his syndicated cartoon, Ripley 's Believe it or Not!, saying "Believe It or Not, America has no national anthem ''.
In 1930, Veterans of Foreign Wars started a petition for the United States to officially recognize "The Star - Spangled Banner '' as the national anthem. Five million people signed the petition. The petition was presented to the United States House Committee on the Judiciary on January 31, 1930. On the same day, Elsie Jorss - Reilley and Grace Evelyn Boudlin sang the song to the Committee to refute the perception that it was too high pitched for a typical person to sing. The Committee voted in favor of sending the bill to the House floor for a vote. The House of Representatives passed the bill later that year. The Senate passed the bill on March 3, 1931. President Herbert Hoover signed the bill on March 4, 1931, officially adopting "The Star - Spangled Banner '' as the national anthem of the United States of America. As currently codified, the United States Code states that "(t) he composition consisting of the words and music known as the Star - Spangled Banner is the national anthem. ''
O say can you see, by the dawn 's early light,
What so proudly we hailed at the twilight 's last gleaming,
Whose broad stripes and bright stars through the perilous fight,
O'er the ramparts we watched, were so gallantly streaming?
And the rockets ' red glare, the bombs bursting in air,
Gave proof through the night that our flag was still there;
O say does that star - spangled banner yet wave
O'er the land of the free and the home of the brave?

On the shore dimly seen through the mists of the deep,
Where the foe 's haughty host in dread silence reposes,
What is that which the breeze, o'er the towering steep,
As it fitfully blows, half conceals, half discloses?
Now it catches the gleam of the morning 's first beam,
In full glory reflected now shines in the stream:
' Tis the star - spangled banner, O long may it wave
O'er the land of the free and the home of the brave.

And where is that band who so vauntingly swore
That the havoc of war and the battle 's confusion,
A home and a country, should leave us no more?
Their blood has washed out their foul footsteps ' pollution.
No refuge could save the hireling and slave
From the terror of flight, or the gloom of the grave:
And the star - spangled banner in triumph doth wave,
O'er the land of the free and the home of the brave.

O thus be it ever, when freemen shall stand
Between their loved homes and the war 's desolation.
Blest with vict'ry and peace, may the Heav'n rescued land
Praise the Power that hath made and preserved us a nation!
Then conquer we must, when our cause it is just,
And this be our motto: ' In God is our trust. '
And the star - spangled banner in triumph shall wave
O'er the land of the free and the home of the brave!
In indignation over the start of the American Civil War, Oliver Wendell Holmes, Sr. added a fifth stanza to the song in 1861, which appeared in songbooks of the era.
When our land is illumined with Liberty 's smile,
If a foe from within strike a blow at her glory,
Down, down with the traitor that dares to defile
The flag of her stars and the page of her story!
By the millions unchained who our birthright have gained,
We will keep her bright blazon forever unstained!
And the Star - Spangled Banner in triumph shall wave
While the land of the free is the home of the brave.
In a version hand - written by Francis Scott Key in 1840, the third line reads "Whose bright stars and broad stripes, through the clouds of the fight ''.
The song is notoriously difficult for nonprofessionals to sing because of its wide range -- a 12th. Humorist Richard Armour referred to the song 's difficulty in his book It All Started With Columbus.
In an attempt to take Baltimore, the British attacked Fort McHenry, which protected the harbor. Bombs were soon bursting in air, rockets were glaring, and all in all it was a moment of great historical interest. During the bombardment, a young lawyer named Francis Off Key (sic) wrote "The Star - Spangled Banner '', and when, by the dawn 's early light, the British heard it sung, they fled in terror.
Professional and amateur singers have been known to forget the words, which is one reason the song is sometimes pre-recorded and lip - synced. Other times the issue is avoided by having the performer (s) play the anthem instrumentally instead of singing it. The pre-recording of the anthem has become standard practice at some ballparks, such as Boston 's Fenway Park, according to the SABR publication The Fenway Project.
"The Star - Spangled Banner '' is traditionally played at the beginning of public sports events and orchestral concerts in the United States, as well as other public gatherings. The National Hockey League and Major League Soccer both require venues in both the U.S. and Canada to perform both the Canadian and American national anthems at games that involve teams from both countries (with the "away '' anthem being performed first). It is also usual for both American and Canadian anthems (done in the same way as the NHL and MLS) to be played at Major League Baseball and National Basketball Association games involving the Toronto Blue Jays and the Toronto Raptors (respectively), the only Canadian teams in those two major U.S. sports leagues, and in All Star Games on the MLB, NBA, and NHL. The Buffalo Sabres of the NHL, which play in a city on the Canada -- US border and have a substantial Canadian fan base, play both anthems before all home games regardless of where the visiting team is based.
Two especially unusual performances of the song took place in the immediate aftermath of the United States September 11 attacks. On September 12, 2001, the Queen broke with tradition and allowed the Band of the Coldstream Guards to perform the anthem at Buckingham Palace, London, at the ceremonial Changing of the Guard, as a gesture of support for Britain 's ally. The following day at a St. Paul 's Cathedral memorial service, the Queen joined in the singing of the anthem, an unprecedented occurrence.
The 200th anniversary of the "Star - Spangled Banner '' occurred in 2014, with various special events held throughout the United States. A particularly significant celebration occurred during the week of September 10 -- 16 in and around Baltimore, Maryland. Highlights included the playing of a new arrangement of the anthem by John Williams and the participation of President Obama on Defender 's Day, September 12, 2014, at Fort McHenry. In addition, the anthem bicentennial included a youth music celebration, including the presentation of the winning composition in the National Anthem Bicentennial Youth Challenge, written by Noah Altshuler.
The first popular music performance of the anthem heard by the mainstream U.S. was by Puerto Rican singer and guitarist José Feliciano. He created a nationwide uproar when he strummed a slow, blues - style rendition of the song at Tiger Stadium in Detroit before game five of the 1968 World Series, between Detroit and St. Louis. This rendition started contemporary "Star - Spangled Banner '' controversies. The response from many in the Vietnam War - era U.S. was generally negative. Despite the controversy, Feliciano 's performance opened the door for the countless interpretations of the "Star - Spangled Banner '' heard in the years since. One week after Feliciano 's performance, the anthem was in the news again when American athletes Tommie Smith and John Carlos lifted controversial raised fists at the 1968 Olympics while the "Star - Spangled Banner '' played at a medal ceremony.
Marvin Gaye gave a soul - influenced performance at the 1983 NBA All - Star Game, and Whitney Houston gave a soulful rendition before Super Bowl XXV in 1991, which was released as a single that charted at number 20 in 1991 and number 6 in 2001 (along with José Feliciano 's version, the only recordings of the anthem to appear on the Billboard Hot 100). In 1993, Kiss did an instrumental rock version as the closing track on their album, Alive III. Another famous instrumental interpretation is Jimi Hendrix 's version, which was a set - list staple from autumn 1968 until his death in September 1970, including a famous rendition at the Woodstock music festival in 1969. Incorporating sonic effects to emphasize the "rockets ' red glare '' and "bombs bursting in air '', it became a late - 1960s emblem. Roseanne Barr gave a controversial performance of the anthem at a San Diego Padres baseball game at Jack Murphy Stadium on July 25, 1990. The comedian belted out a screechy rendition of the song, and afterward she mimicked ballplayers by spitting and grabbing her crotch as if adjusting a protective cup. The performance offended some, including the sitting U.S. President, George H.W. Bush. Sufjan Stevens has frequently performed the "Star - Spangled Banner '' in live sets, replacing the optimism at the end of the first verse with a new coda that alludes to the divisive state of the nation today. David Lee Roth both referenced parts of the anthem and played part of a hard rock rendition of the anthem on his song "Yankee Rose '' on his 1986 solo album, Eat ' Em and Smile. Steven Tyler also caused some controversy in 2001 (at the Indianapolis 500, for which he later issued a public apology) and again in 2012 (at the AFC Championship Game) with a cappella renditions of the song with changed lyrics. A version of Aerosmith 's Joe Perry and Brad Whitford playing part of the song can be heard at the end of their version of "Train Kept A-Rollin ' '' on the Rockin ' the Joint album. The band Boston gave an instrumental rock rendition of the anthem on their Greatest Hits album. The band Crush 40 made a version of the song as the opening track of the album Thrill of the Feel (2000).
In March 2005, a government - sponsored program, the National Anthem Project, was launched after a Harris Interactive poll showed many adults knew neither the lyrics nor the history of the anthem.
Several films have their titles taken from the song 's lyrics. These include two films titled Dawn 's Early Light (2000 and 2005); two made - for - TV features titled By Dawn 's Early Light (1990 and 2000); two films titled So Proudly We Hail (1943 and 1990); a feature (1977) and a short (2005) titled Twilight 's Last Gleaming; and four films titled Home of the Brave (1949, 1986, 2004, and 2006). A 1936 short titled "The Song of a Nation '' from Warner Brothers shows a version of the origin of the song.
When the National Anthem was first recognized by law in 1931, there was no prescription as to behavior during its playing. On June 22, 1942, the law was revised, indicating that those in uniform should salute during its playing, while others should simply stand at attention, men removing their hats. (The same code also required that women should place their hands over their hearts when the flag is displayed during the playing of the Anthem, but not if the flag was not present.) On December 23, 1942, the law was again revised, instructing men and women to stand at attention and face in the direction of the music when it was played. That revision also directed men and women to place their hands over their hearts only if the flag was displayed. Those in uniform were required to salute. On July 7, 1976, the law was simplified: men and women were instructed to stand with their hands over their hearts, men removing their hats, irrespective of whether or not the flag was displayed, and those in uniform saluting. On August 12, 1998, the law was rewritten, keeping the same instructions but differentiating between "those in uniform '' and "members of the Armed Forces and veterans '', who were both instructed to salute during the playing whether or not the flag was displayed. Because of the changes in law over the years and confusion between the instructions for the Pledge of Allegiance and those for the National Anthem, throughout most of the 20th century many people simply stood at attention or with their hands folded in front of them during the playing of the Anthem, and when reciting the Pledge they would hold their hand (or hat) over their heart. After 9 / 11, the custom of placing the hand over the heart during the playing of the Anthem became nearly universal.
Since 1998, federal law (viz., the United States Code 36 U.S.C. § 301) states that during a rendition of the national anthem, when the flag is displayed, all present including those in uniform should stand at attention; individuals not in military service should face the flag with the right hand over the heart; members of the Armed Forces and veterans who are present and not in uniform may render the military salute; military service persons not in uniform should remove their headdress with their right hand and hold the headdress at the left shoulder, the hand being over the heart; and members of the Armed Forces and veterans who are in uniform should give the military salute at the first note of the anthem and maintain that position until the last note. The law further provides that when the flag is not displayed, all present should face toward the music and act in the same manner they would if the flag were displayed. Military law requires all vehicles on the installation to stop when the song is played and all individuals outside to stand at attention, face the direction of the music and either salute, in uniform, or place the right hand over the heart, if out of uniform. The law was amended in 2008 and has since allowed military veterans to salute out of uniform as well.
The text of 36 U.S.C. § 301 is suggestive and not regulatory in nature. Failure to follow the suggestions is not a violation of law. This behavioral requirement for the national anthem is subject to the same First Amendment controversies that surround the Pledge of Allegiance. For example, Jehovah 's Witnesses do not sing the national anthem, though they are taught that standing is an "ethical decision '' that individual believers must make based on their "conscience. ''
The 1968 Olympics Black Power salute was a political demonstration conducted by African - American athletes Tommie Smith and John Carlos during their medal ceremony at the 1968 Summer Olympics in the Olympic Stadium in Mexico City. After having won gold and bronze medals respectively in the 200 meter running event, they turned on the podium to face their flags, and to hear the American national anthem, "The Star - Spangled Banner ''. Each athlete raised a black - gloved fist, and kept them raised until the anthem had finished. In addition, Smith, Carlos, and Australian silver medalist Peter Norman all wore human rights badges on their jackets. In his autobiography, Silent Gesture, Smith stated that the gesture was not a "Black Power '' salute, but a "human rights salute ''. The event is regarded as one of the most overtly political statements in the history of the modern Olympic Games.
Politically motivated protests of the national anthem began in the National Football League (NFL) after San Francisco 49ers quarterback (QB) Colin Kaepernick sat during the anthem, as opposed to the tradition of standing, before his team 's third preseason game of 2016. Kaepernick also sat during the first two preseason games, but he went unnoticed.
As a result of immigration to the United States and the incorporation of non-English speaking people into the country, the lyrics of the song have been translated into other languages. In 1861, it was translated into German. The Library of Congress also has record of a Spanish - language version from 1919. It has since been translated into Hebrew and Yiddish by Jewish immigrants, Latin American Spanish (with one version popularized during immigration reform protests in 2006), French by Acadians of Louisiana, Samoan, and Irish. The third verse of the anthem has also been translated into Latin.
With regard to the indigenous languages of North America, there are versions in Navajo and Cherokee.
|
what are the countries in south america continent | South America - Wikipedia
South America is a continent located in the western hemisphere, mostly in the southern hemisphere, with a relatively small portion in the northern hemisphere. It may also be considered a subcontinent of the Americas, which is the model used in nations that speak Romance languages. The reference to South America instead of other regions (like Latin America or the Southern Cone) has increased in the last decades due to changing geopolitical dynamics (in particular, the rise of Brazil).
It is bordered on the west by the Pacific Ocean and on the north and east by the Atlantic Ocean; North America and the Caribbean Sea lie to the northwest. It includes twelve sovereign states (Argentina, Bolivia, Brazil, Chile, Colombia, Ecuador, Guyana, Paraguay, Peru, Suriname, Uruguay, and Venezuela), a part of France (French Guiana), and a non-sovereign area (the Falkland Islands, a British Overseas Territory though this is disputed by Argentina). In addition to this, the ABC islands of the Kingdom of the Netherlands, Trinidad and Tobago, and Panama may also be considered part of South America.
South America has an area of 17,840,000 square kilometers (6,890,000 sq mi). Its population as of 2016 has been estimated at more than 420 million. South America ranks fourth in area (after Asia, Africa, and North America) and fifth in population (after Asia, Africa, Europe, and North America). Brazil is by far the most populous South American country, with more than half of the continent 's population, followed by Colombia, Argentina, Venezuela and Peru. In recent decades Brazil has also accounted for half of the region 's GDP and has become the leading regional power.
Most of the population lives near the continent 's western or eastern coasts while the interior and the far south are sparsely populated. The geography of western South America is dominated by the Andes mountains; in contrast, the eastern part contains both highland regions and large lowlands where rivers such as the Amazon, Orinoco, and Paraná flow. Most of the continent lies in the tropics.
The continent 's cultural and ethnic outlook has its origin with the interaction of indigenous peoples with European conquerors and immigrants and, more locally, with African slaves. Given a long history of colonialism, the overwhelming majority of South Americans speak Portuguese or Spanish, and societies and states commonly reflect Western traditions.
South America occupies the southern portion of the Americas. The continent is generally delimited on the northwest by the Darién watershed along the Colombia -- Panama border, although some may consider the border instead to be the Panama Canal. Geopolitically and geographically all of Panama -- including the segment east of the Panama Canal in the isthmus -- is typically included in North America alone and among the countries of Central America. Almost all of mainland South America sits on the South American Plate.
South America is home to the world 's highest uninterrupted waterfall, Angel Falls in Venezuela; the highest single drop waterfall Kaieteur Falls in Guyana; the largest river (by volume), the Amazon River; the longest mountain range, the Andes (whose highest mountain is Aconcagua at 6,962 m (22,841 ft)); the driest non-polar place on earth, the Atacama Desert; the largest rainforest, the Amazon Rainforest; the highest capital city, La Paz, Bolivia; the highest commercially navigable lake in the world, Lake Titicaca; and, excluding research stations in Antarctica, the world 's southernmost permanently inhabited community, Puerto Toro, Chile.
South America 's major mineral resources are gold, silver, copper, iron ore, tin, and petroleum. These resources have brought high income to its countries, especially in times of war or of rapid economic growth by industrialized countries elsewhere. However, the concentration on producing one major export commodity has often hindered the development of diversified economies. The fluctuation in the price of commodities in the international markets has historically led to major highs and lows in the economies of South American states, often causing extreme political instability. This has led to efforts to diversify production and move away from economies dedicated to a single major export.
South America is one of the most biodiverse continents on earth. South America is home to many interesting and unique species of animals including the llama, anaconda, piranha, jaguar, vicuña, and tapir. The Amazon rainforests possess high biodiversity, containing a major proportion of the Earth 's species.
Brazil is the largest country in South America, encompassing around half of the continent 's land area and population. The remaining countries and territories are divided among three regions: The Andean States, the Guianas and the Southern Cone.
Gallery: Torres del Paine in Chile; Ausangate mountain in Peru; Porto de Galinhas beach in Brazil; the Amazon rainforest, the richest and most megadiverse forest in the world; Angel Falls in Venezuela, the highest waterfall in the world; Quebrada de Cafayate in Argentina; the Atacama Desert (Chile), the driest desert in the world; Pirineus State Park in Brazil; Serra da Capivara National Park in Brazil; Monte Roraima, between Brazil, Venezuela and Guyana.
Traditionally, South America also includes some of the nearby islands. Aruba, Bonaire, Curaçao, Trinidad, Tobago, and the federal dependencies of Venezuela sit on the northerly South American continental shelf and are often considered part of the continent. Geopolitically, the island states and overseas territories of the Caribbean are generally grouped as a part or subregion of North America, since they rest on the more distant Caribbean Plate, even though San Andres and Providencia are politically part of Colombia and Aves Island is controlled by Venezuela.
Other islands that are included with South America are the Galápagos Islands that belong to Ecuador and Easter Island (in Oceania but belonging to Chile), Robinson Crusoe Island, Chiloé (both Chilean) and Tierra del Fuego (split between Chile and Argentina). In the Atlantic, Brazil owns Fernando de Noronha, Trindade and Martim Vaz, and the Saint Peter and Saint Paul Archipelago, while the Falkland Islands are governed by the United Kingdom, whose sovereignty over the islands is disputed by Argentina. South Georgia and the South Sandwich Islands may be associated with either South America or Antarctica.
Gallery: the Falkland Islands, an overseas territory of the United Kingdom; Easter Island (pictured: Rano Raraku volcano); Fernando de Noronha, a Brazilian archipelago; Trindade and Martim Vaz, a volcanic archipelago of Brazil; the Galápagos Islands, off the coast of Ecuador.
The distribution of average temperatures in the region shows a consistent regularity south of the 30th parallel, where the isotherms increasingly tend to coincide with the parallels of latitude.
In temperate latitudes, winters are milder and summers warmer than in North America. Because the most extensive part of the continent lies in the equatorial zone, the region has more areas of equatorial plains than any other.
Average annual temperatures in the Amazon basin oscillate around 27 ° C, with small thermal ranges and high rainfall. Between Lake Maracaibo and the mouth of the Orinoco an equatorial climate of the Congolese type predominates, which also covers parts of Brazilian territory.
The east - central Brazilian plateau has a humid and warm tropical climate. The northern and eastern parts of the Argentine pampas have a humid subtropical climate with dry winters and humid summers of the Chinese type, while the western and eastern ranges have a subtropical climate of the Dinaric type. At the highest points of the Andean region, climates are colder than those at the highest points of the Norwegian fjords. On the Andean plateaus a warm climate prevails, although tempered by the altitude, while the coastal strip has an equatorial climate of the Guinean type. From this point to the north of the Chilean coast there appear, successively, a Mediterranean oceanic climate, a temperate climate of the Breton type and, in Tierra del Fuego, a cold climate of the Siberian type.
The distribution of rainfall is related to the regime of winds and air masses. In most of the tropical region east of the Andes, winds blowing from the northeast, east and southeast carry moisture from the Atlantic, causing abundant rainfall. In the Orinoco llanos and on the Guianas plateau, precipitation ranges from moderate to high. The Pacific coast of Colombia and northern Ecuador are rainy regions. The Atacama Desert, along this stretch of coast, is one of the driest regions in the world. The central and southern parts of Chile are subject to cyclones, and most of Argentine Patagonia is desert. In the pampas of Argentina, Uruguay and southern Brazil rainfall is moderate, with rains well distributed throughout the year. The moderately dry conditions of the Chaco contrast with the intense rainfall of the eastern region of Paraguay. On the semiarid coast of the Brazilian Northeast, the rains are linked to a monsoon regime.
Important factors in the determination of climates are ocean currents, such as the Humboldt and Falkland currents. The South Atlantic equatorial current strikes the coast of the Northeast and there divides in two: the Brazil Current and a coastal current that flows northwest towards the Antilles, where it turns northeast, thus forming the most important and famous ocean current in the world, the Gulf Stream.
South America is believed to have been joined with Africa from the late Paleozoic Era to the early Mesozoic Era, until the supercontinent Pangaea began to rift and break apart about 225 million years ago. Therefore, South America and Africa share similar fossils and rock layers.
South America is thought to have been first inhabited by humans when people were crossing the Bering Land Bridge (now the Bering Strait) at least 15,000 years ago from the territory that is present - day Russia. They migrated south through North America, and eventually reached South America through the Isthmus of Panama.
The first evidence of human presence in South America dates back to about 9000 BC, when squashes, chili peppers and beans began to be cultivated for food in the highlands of the Amazon Basin. Pottery evidence further suggests that manioc, which remains a staple food today, was being cultivated as early as 2000 BC.
By 2000 BC, many agrarian communities had been settled throughout the Andes and the surrounding regions. Fishing became a widespread practice along the coast, helping establish fish as a primary source of food. Irrigation systems were also developed at this time, which aided in the rise of an agrarian society.
South American cultures began domesticating llamas, vicuñas, guanacos, and alpacas in the highlands of the Andes circa 3500 BC. Besides their use as sources of meat and wool, these animals were used for transportation of goods.
The rise of plant growing and the subsequent appearance of permanent human settlements allowed for the multiple and overlapping beginnings of civilizations in South America.
One of the earliest known South American civilizations was at Norte Chico, on the central Peruvian coast. Though a pre-ceramic culture, the monumental architecture of Norte Chico is contemporaneous with the pyramids of Ancient Egypt. The Norte Chico governing class established a trade network and developed agriculture; it was followed by Chavín by 900 BC, according to some estimates and archaeological finds. Artifacts were found at a site called Chavín de Huantar in modern Peru at an elevation of 3,177 meters. Chavín civilization spanned 900 BC to 300 BC.
On the central coast of Peru, around the beginning of the 1st millennium AD, the Moche (100 BC -- 700 AD, on the northern coast of Peru), Paracas and Nazca (400 BC -- 800 AD, Peru) cultures flourished as centralized states with permanent militias, improving agriculture through irrigation and developing new styles of ceramic art. On the Altiplano, Tiahuanaco or Tiwanaku (100 BC -- 1200 AD, Bolivia) managed a large commercial network based on religion.
Around the 7th century, both Tiahuanaco and the Wari or Huari Empire (600 -- 1200, central and northern Peru) expanded their influence across the entire Andean region, imposing Huari urbanism and Tiahuanaco religious iconography.
The Muisca were the main indigenous civilization in what is now Colombia. They established the Muisca Confederation of many clans, or cacicazgos, that had a free trade network among themselves. They were goldsmiths and farmers.
Other important Pre-Columbian cultures include: the Cañaris (in south central Ecuador), Chimú Empire (1300 -- 1470, Peruvian northern coast), Chachapoyas, and the Aymaran kingdoms (1000 -- 1450, Western Bolivia and southern Peru).
Holding their capital at the great city of Cusco, the Inca civilization dominated the Andes region from 1438 to 1533. Known in Quechua as Tawantin suyu, or "the land of the four regions '', the Inca Empire was highly distinct and developed. Inca rule extended to nearly a hundred linguistic or ethnic communities, some 9 to 14 million people connected by a 25,000 - kilometer road system. Cities were built with precise, unmatched stonework, constructed over many levels of mountain terrain. Terrace farming was a useful form of agriculture.
The Mapuche in Central and Southern Chile resisted the European and Chilean settlers, waging the Arauco War for more than 300 years.
In 1494, Portugal and Spain, the two great maritime European powers of that time, on the expectation of new lands being discovered in the west, signed the Treaty of Tordesillas, by which they agreed, with the support of the Pope, that all the land outside Europe should be an exclusive duopoly between the two countries.
The treaty established an imaginary line along a north - south meridian 370 leagues west of the Cape Verde Islands, roughly 46 ° 37 ' W. In terms of the treaty, all land to the west of the line (known to comprise most of the South American soil) would belong to Spain, and all land to the east, to Portugal. As accurate measurements of longitude were impossible at that time, the line was not strictly enforced, resulting in a Portuguese expansion of Brazil across the meridian.
Beginning in the 1530s, the people and natural resources of South America were repeatedly exploited by foreign conquistadors, first from Spain and later from Portugal. These competing colonial nations claimed the land and resources as their own and divided it in colonies.
European infectious diseases (smallpox, influenza, measles, and typhus) -- to which the native populations had no immune resistance -- caused large - scale depopulation of the native population under Spanish control. Systems of forced labor, such as the haciendas and mining industry 's mit'a also contributed to the depopulation. After this, African slaves, who had developed immunities to these diseases, were quickly brought in to replace them.
The Spaniards were committed to converting their native subjects to Christianity and were quick to purge any native cultural practices that hindered this end; however, many initial attempts at this were only partially successful, as native groups simply blended Catholicism with their established beliefs and practices. Furthermore, the Spaniards spread their language to the same degree as their religion, although the Roman Catholic Church 's evangelization in Quechua, Aymara, and Guaraní actually contributed to the continuous use of these native languages, albeit only in the oral form.
Eventually, the natives and the Spaniards interbred, forming a mestizo class. At the beginning, many mestizos of the Andean region were offspring of Amerindian mothers and Spanish fathers. After independence, most mestizos had native fathers and European or mestizo mothers.
Many native artworks were considered pagan idols and destroyed by Spanish explorers; this included many gold and silver sculptures and other artifacts found in South America, which were melted down before their transport to Spain or Portugal. Spaniards and Portuguese brought the western European architectural style to the continent, and helped to improve infrastructure like bridges, roads, and the sewer systems of the cities they discovered or conquered. They also significantly increased economic and trade relations, not just between the old and new world but between the different South American regions and peoples. Finally, with the expansion of the Portuguese and Spanish languages, many cultures that were previously separated became united as Latin American.
Guyana was first a Dutch, and then a British colony, though there was a brief period during the Napoleonic Wars when it was colonized by the French. The country was once partitioned into three parts, each being controlled by one of the colonial powers until the country was finally taken over fully by the British.
Indigenous peoples of the Americas in various European colonies were forced to work in European plantations and mines, along with African slaves who were introduced in the following centuries. The colonists were heavily dependent on indigenous labor during the initial phases of European settlement to maintain the subsistence economy, and natives were often captured by expeditions. The importation of African slaves began midway through the 16th century, but the enslavement of indigenous peoples continued well into the 17th and 18th centuries. The Atlantic slave trade brought African slaves primarily to South American colonies, beginning with the Portuguese in 1502. The main destinations of this phase were the Caribbean colonies and Brazil, as European nations built up economically slave - dependent colonies in the New World. Nearly 40 % of all African slaves trafficked to the Americas went to Brazil. An estimated 4.9 million slaves from Africa came to Brazil during the period from 1501 to 1866.
While the Portuguese, English and French settlers enslaved mainly African blacks, the Spaniards relied heavily on the natives. In 1750 Portugal abolished native slavery in the colonies because it considered the natives unfit for labour and began to import even more African slaves. Slaves were brought to the mainland on so - called slave ships, under subhuman conditions and ill - treatment, and those who survived were sold into the slave markets.
After independence, all South American countries maintained slavery for some time. The first South American country to abolish slavery was Chile in 1823, followed by Uruguay in 1830, Bolivia in 1831, Colombia and Ecuador in 1851, Argentina in 1853, Peru and Venezuela in 1854, and Paraguay in 1869; in 1888 Brazil became the last South American nation, and the last country in the western world, to abolish slavery.
The European Peninsular War (1807 -- 1814), a theater of the Napoleonic Wars, changed the political situation of both the Spanish and Portuguese colonies. First, Napoleon invaded Portugal, but the House of Braganza avoided capture by escaping to Brazil. Napoleon also captured King Ferdinand VII of Spain, and appointed his own brother instead. This appointment provoked severe popular resistance, which created Juntas to rule in the name of the captured king.
Many cities in the Spanish colonies, however, considered themselves equally authorized to appoint local Juntas like those of Spain. This began the Spanish American wars of independence between the patriots, who promoted such autonomy, and the royalists, who supported Spanish authority over the Americas. The Juntas, in both Spain and the Americas, promoted the ideas of the Enlightenment. Five years after the beginning of the war, Ferdinand VII returned to the throne and began the Absolutist Restoration as the royalists got the upper hand in the conflict.
The independence of South America was secured by Simón Bolívar (Venezuela) and José de San Martín (Argentina), the two most important Libertadores. Bolívar led a great uprising in the north, then led his army southward towards Lima, the capital of the Viceroyalty of Peru. Meanwhile, San Martín led an army across the Andes Mountains, along with Chilean expatriates, and liberated Chile. He organized a fleet to reach Peru by sea, and sought the military support of various rebels from the Viceroyalty of Peru. The two armies finally met in Guayaquil, Ecuador, where they cornered the Royal Army of the Spanish Crown and forced its surrender.
In the Portuguese Kingdom of Brazil, Dom Pedro I (also Pedro IV of Portugal), son of the Portuguese King Dom João VI, proclaimed the independent Kingdom of Brazil in 1822, which later became the Empire of Brazil. Despite the Portuguese loyalties of garrisons in Bahia, Cisplatina and Pará, independence was diplomatically accepted by the crown in Portugal in 1825, on condition of a high compensation paid by Brazil, mediated by the United Kingdom.
The newly independent nations began a process of fragmentation, with several civil and international wars. However, it was not as strong as in Central America. Some countries created from provinces of larger countries stayed as such up to modern day (such as Paraguay or Uruguay), while others were reconquered and reincorporated into their former countries (such as the Republic of Entre Ríos and the Riograndense Republic).
The first separatist attempt was in 1820, by a caudillo in the Argentine province of Entre Ríos. In spite of the "Republic '' in its title, General Ramírez, its caudillo, never really intended to declare an independent Entre Ríos. Rather, he was making a political statement in opposition to the monarchist and centralist ideas that back then permeated Buenos Aires politics. The "country '' was reincorporated into the United Provinces in 1821.
In 1825 the Cisplatine Province declared its independence from the Empire of Brazil, which led to the Cisplatine War between the imperial forces and the Argentines of the United Provinces of the Río de la Plata for control of the region. Three years later, the United Kingdom intervened, brokering a stalemate and creating from the former Cisplatina a new independent country: the Oriental Republic of Uruguay, the only separatist province that maintained its independence.
Later, in 1836, while Brazil was experiencing the chaos of the regency, Rio Grande do Sul proclaimed its independence, motivated by a tax crisis. This was the longest and bloodiest separatist conflict in South America. With the anticipated coronation of Pedro II to the throne of Brazil, the country was able to stabilize and fight the separatists, whom the province of Santa Catarina had joined in 1839. The conflict came to an end with the total defeat of both the Riograndense Republic and the Juliana Republic and their reincorporation as provinces in 1845.
The Peru -- Bolivian Confederation, a short - lived union of Peru and Bolivia, was blocked by Chile in the War of the Confederation (1836 -- 1839) and again during the War of the Pacific (1879 -- 1883). Paraguay was virtually destroyed by Argentina and Brazil in the Paraguayan War.
South American history in the early 19th century was built almost exclusively on wars. Despite having just emerged from the Spanish American wars of independence and the Brazilian War of Independence, the new nations quickly began to suffer from internal conflicts and wars among themselves.
In 1825 the proclamation of independence of Cisplatina led to the Cisplatine War between the historical rivals the Empire of Brazil and the United Provinces of the Río de la Plata, Argentina's predecessor. The result was a stalemate, settled through British mediation with the independence of Uruguay. Soon after, another Brazilian province proclaimed its independence, leading to the Ragamuffin War, which Brazil won.
Between 1836 and 1839 the War of the Confederation broke out between the short-lived Peru-Bolivian Confederation and Chile, with the support of the Argentine Confederation. The war was fought mostly on what is now Peruvian territory and ended with the Confederation's defeat and dissolution and the annexation of many territories by Argentina.
Meanwhile, the Argentine Civil Wars had plagued Argentina since its independence. The conflict was mainly between those who defended the centralization of power in Buenos Aires and those who defended a confederation. During this period it could be said that "there were two Argentinas": the Argentine Confederation and the Argentine Republic. At the same time, political instability in Uruguay led to the Uruguayan Civil War among the country's main political factions. All this instability in the Platine region interfered with the goals of other countries such as Brazil, which was soon forced to take sides. In 1851 the Brazilian Empire, supporting the centralizing Unitarians, and the Uruguayan government invaded Argentina and deposed the caudillo Juan Manuel de Rosas, who had ruled the confederation with an iron hand. Although the Platine War did not put an end to the political chaos and civil war in Argentina, it brought temporary peace to Uruguay, where the Colorados won, supported by the Brazilian Empire, the British Empire, the French Empire and the Unitarian Party of Argentina.
Peace lasted only a short time: in 1864 the Uruguayan factions faced each other again in the Uruguayan War. The Blancos supported by Paraguay started to attack Brazilian and Argentine farmers near the borders. The Empire made an initial attempt to settle the dispute between Blancos and Colorados without success. In 1864, after a Brazilian ultimatum was refused, the imperial government declared that Brazil 's military would begin reprisals. Brazil declined to acknowledge a formal state of war, and, for most of its duration, the Uruguayan -- Brazilian armed conflict was an undeclared war which led to the deposition of the Blancos and the rise of the pro-Brazilian Colorados to power again. This angered the Paraguayan government, which even before the end of the war invaded Brazil, beginning the biggest and deadliest war in both South American and Latin American histories: the Paraguayan War.
The Paraguayan War began when the Paraguayan dictator Francisco Solano López ordered the invasion of the Brazilian provinces of Mato Grosso and Rio Grande do Sul. His attempt to cross Argentinian territory without Argentinian approval led the pro-Brazilian Argentine government into the war. The pro-Brazilian Uruguayan government showed its support by sending troops. In 1865 the three countries signed the Treaty of the Triple Alliance against Paraguay. At the beginning of the war, the Paraguayans took the lead with several victories, until the Triple Alliance organized to repel the invaders and fight effectively. This was the second total war experience in the world after the American Civil War. It was deemed the greatest war effort in the history of all participating countries, taking almost 6 years and ending with the complete devastation of Paraguay. The country lost 40 % of its territory to Brazil and Argentina and lost 60 % of its population, including 90 % of the men. The dictator Lopez was killed in battle and a new government was instituted in alliance with Brazil, which maintained occupation forces in the country until 1876.
The last South American war in the 19th century was the War of the Pacific with Bolivia and Peru on one side and Chile on the other. In 1879 the war began with Chilean troops occupying Bolivian ports, followed by Bolivia declaring war on Chile which activated an alliance treaty with Peru. The Bolivians were completely defeated in 1880 and Lima was occupied in 1881. The peace was signed with Peru in 1883 while a truce was signed with Bolivia in 1884. Chile annexed territories of both countries leaving Bolivia with no path to the sea.
In the new century, as wars became less violent and less frequent, Brazil entered into a small conflict with Bolivia over possession of Acre, which was acquired by Brazil in 1902. In 1917 Brazil declared war on the Central Powers and joined the Allied side in World War I, sending a small fleet to the Mediterranean Sea and some troops to be integrated with the British and French forces. Brazil was the only South American country that fought in WWI. Later, in 1932, Colombia and Peru entered a short armed conflict over territory in the Amazon. In the same year Paraguay declared war on Bolivia for possession of the Chaco, in a conflict that ended three years later with Paraguay's victory. Between 1941 and 1942 Peru and Ecuador fought over territories claimed by both; they were annexed by Peru, cutting off Ecuador's frontier with Brazil.
Also in this period the first naval battle of World War II was fought off the continent, in the River Plate, between British forces and the German pocket battleship Admiral Graf Spee. German submarines later made numerous attacks on Brazilian shipping along the coast, causing Brazil to declare war on the Axis powers in 1942; it was the only South American country to fight in this war (and in both World Wars). Brazil sent naval and air forces to combat German and Italian submarines off the continent and throughout the South Atlantic, in addition to sending an expeditionary force to fight in the Italian Campaign.
The last war to be fought on South American soil was the Falklands War, between Argentina and the United Kingdom, over possession of the islands of the same name. Argentina was defeated in 1982.
Wars became less frequent in the 20th century, with Bolivia - Paraguay and Peru - Ecuador fighting the last inter-state wars. Early in the 20th century, the three wealthiest South American countries engaged in a vastly expensive naval arms race which was catalyzed by the introduction of a new warship type, the "dreadnought ''. At one point, the Argentine government was spending a fifth of its entire yearly budget for just two dreadnoughts, a price that did not include later in - service costs, which for the Brazilian dreadnoughts was sixty percent of the initial purchase.
The continent became a battlefield of the Cold War in the late 20th century. Some democratically elected governments of Argentina, Brazil, Chile, Uruguay and Paraguay were overthrown or displaced by military dictatorships in the 1960s and 1970s. To curtail opposition, these governments detained tens of thousands of political prisoners, many of whom were tortured and/or killed through inter-state collaboration. Economically, they began a transition to neoliberal economic policies. They placed their own actions within the US Cold War doctrine of "National Security" against internal subversion. Throughout the 1980s and 1990s, Peru suffered from an internal conflict.
Argentina and Britain fought the Falklands War in 1982.
Colombia has had an ongoing, though diminished internal conflict, which started in 1964 with the creation of Marxist guerrillas (FARC - EP) and then involved several illegal armed groups of leftist - leaning ideology as well as the private armies of powerful drug lords. Many of these are now defunct, and only a small portion of the ELN remains, along with the stronger, though also greatly reduced, FARC. These leftist groups smuggle narcotics out of Colombia to fund their operations, while also using kidnapping, bombings, land mines and assassinations as weapons against both elected and non-elected citizens.
Revolutionary movements and right - wing military dictatorships became common after World War II, but since the 1980s, a wave of democratization passed through the continent, and democratic rule is widespread now. Nonetheless, allegations of corruption are still very common, and several countries have developed crises which have forced the resignation of their governments, although, on most occasions, regular civilian succession has continued.
International indebtedness turned into a severe problem in the late 1980s, and some countries, despite having strong democracies, have not yet developed political institutions capable of handling such crises without resorting to unorthodox economic policies, as most recently illustrated by Argentina 's default in the early 21st century. The last twenty years have seen an increased push towards regional integration, with the creation of uniquely South American institutions such as the Andean Community, Mercosur and Unasur. Notably, starting with the election of Hugo Chávez in Venezuela in 1998, the region experienced what has been termed a pink tide -- the election of several leftist and center - left administrations to most countries of the area, except for the Guianas and Colombia.
Historically, the Hispanic countries were founded as republican dictatorships led by caudillos. Brazil was the only exception, being a constitutional monarchy for its first 67 years of independence, until a coup d'état proclaimed a republic. In the late 19th century, the most democratic countries were Brazil, Chile, Argentina and Uruguay.
In the interwar period, nationalism grew stronger on the continent, influenced by countries like Nazi Germany and Fascist Italy. A series of authoritarian regimes emerged in South American countries with sympathies that brought them closer to the Axis Powers, such as Vargas's Brazil or Perón's Argentina. In the late 20th century, during the Cold War, many countries became military dictatorships under American tutelage in attempts to avoid the influence of the Soviet Union. After the fall of these authoritarian regimes, the countries became democratic republics.
During the first decade of the 21st century, South American governments have drifted to the political left, with leftist leaders being elected in Chile, Uruguay, Brazil, Argentina, Ecuador, Bolivia, Paraguay, Peru and Venezuela. Most South American countries are making increasing use of protectionist policies, helping local development.
All South American countries are presidential republics with the exceptions of Peru, which is a semi-presidential republic, and Suriname, a parliamentary republic. French Guiana is a French overseas department, while the Falkland Islands and South Georgia and the South Sandwich Islands are British colonies. It is currently the only inhabited continent in the world without monarchies; the Empire of Brazil existed during the 19th century and there was an unsuccessful attempt to establish a Kingdom of Araucanía and Patagonia in southern Argentina and Chile. Also in the twentieth century, Suriname was established as a constituent kingdom of the Kingdom of the Netherlands and Guyana remained as a Commonwealth Realm for 4 years after its independence.
Recently, an intergovernmental entity has been formed which aims to merge the two existing customs unions: Mercosur and the Andean Community, thus forming the third - largest trade bloc in the world. This new political organization, known as Union of South American Nations, seeks to establish free movement of people, economic development, a common defense policy and the elimination of tariffs.
South America has over 420 million inhabitants and a population growth rate of about 0.6% per year. There are several areas of sparse population, such as tropical forests, the Atacama Desert and the icy portions of Patagonia. On the other hand, the continent presents regions of high population density, such as the great urban centers. The population is formed by descendants of Europeans (mainly Spaniards, Portuguese and Italians), Africans and indigenous peoples. There is a high percentage of mestizos that varies greatly in composition by place. There is also a minor population of Asians, especially in Brazil. The two main languages are by far Spanish and Portuguese, followed by French, English and Dutch in smaller numbers. Economically, Brazil, Argentina and Chile are the wealthiest and most developed nations on the continent.
Spanish and Portuguese are the most spoken languages in South America, with approximately 200 million speakers each. Spanish is the official language of most countries, along with other native languages in some countries. Portuguese is the official language of Brazil. Dutch is the official language of Suriname; English is the official language of Guyana, although there are at least twelve other languages spoken in the country, including Portuguese, Chinese, Hindustani and several native languages. English is also spoken in the Falkland Islands. French is the official language of French Guiana and the second language in Amapá, Brazil.
Indigenous languages of South America include Quechua in Peru, Bolivia, Ecuador, Argentina, Chile and Colombia; Wayuunaiki in northern Colombia (La Guajira) and northwestern Venezuela (Zulia); Guaraní in Paraguay and, to a much lesser extent, in Bolivia; Aymara in Bolivia, Peru, and less often in Chile; and Mapudungun is spoken in certain pockets of southern Chile and, more rarely, Argentina. At least three South American indigenous languages (Quechua, Aymara, and Guarani) are recognized along with Spanish as national languages.
Other languages found in South America include Hindustani and Javanese in Suriname; Italian in Argentina, Brazil, Uruguay, Venezuela and Chile; and German in certain pockets of Argentina, Brazil, and Chile. German is also spoken in many regions of the southern states of Brazil, Riograndenser Hunsrückisch being the most widely spoken German dialect in the country; among other Germanic dialects, a Brazilian form of East Pomeranian is also well represented and is experiencing a revival. Welsh remains spoken and written in the historic towns of Trelew and Rawson in the Argentine Patagonia. There are also small clusters of Japanese - speakers in Brazil, Colombia and Peru. Arabic speakers, often of Lebanese, Syrian, or Palestinian descent, can be found in Arab communities in Argentina, Colombia, Brazil, Venezuela and in Paraguay.
An estimated 90 % of South Americans are Christians (82 % Roman Catholic, 8 % other Christian denominations mainly traditional Protestants and Evangelicals but also Orthodox), accounting for ca. 19 % of Christians worldwide.
Crypto - Jews or Marranos, conversos, and Anusim were an important part of colonial life in Latin America.
Both Buenos Aires, Argentina and São Paulo, Brazil figure among the largest Jewish populations by urban area.
Japanese Buddhism and Shinto - derived Japanese New Religions are common in Brazil and Peru. Korean Confucianism is especially found in Brazil while Chinese Buddhism and Chinese Confucianism have spread throughout the continent.
Kardecist Spiritism can be found in several countries.
Genetic admixture occurs at very high levels in South America. In Argentina, the European influence accounts for 65 % -- 79 % of the genetic background, Amerindian for 17 % -- 31 % and sub-Saharan African for 2 % -- 4 %. In Colombia, the sub-Saharan African genetic background varied from 1 % to 89 %, while the European genetic background varied from 20 % to 79 %, depending on the region. In Peru, European ancestries ranged from 1 % to 31 %, while the African contribution was only 1 % to 3 %. The Genographic Project determined the average Peruvian from Lima had about 28 % European ancestry, 68 % Native American, 2 % Asian ancestry and 2 % sub-Saharan African.
Descendants of indigenous peoples, such as the Quechua and Aymara, or the Urarina of Amazonia, make up the majority of the population in Bolivia (56%) and, per some sources, in Peru (44%). In Ecuador, Amerindians are a large minority comprising two-fifths of the population. The Amerindian population is also a significant element in most of the other former colonies.
People who identify as being primarily or totally of European descent, or whose phenotype corresponds to such a group, are a majority in Argentina and Uruguay and about half of the population of Chile (52.7%) and Brazil (48.43%). In Venezuela, according to the national census, 42% of the population is primarily of native Spanish, Italian and Portuguese descent. In Colombia, people who identify as being of European descent are about 37%. In Peru, European descendants are the third largest group (15%).
Mestizos (mixed European and Amerindian) are the largest ethnic group in Paraguay, Venezuela, Colombia and Ecuador and the second group in Peru.
South America is also home to one of the largest populations of African descent outside Africa. This group is significantly present in Brazil, Colombia, Guyana, Suriname, French Guiana, Venezuela and Ecuador.
Brazil, followed by Peru, has the largest Japanese, Korean and Chinese communities in South America. East Indians form the largest ethnic group in Guyana and Suriname.
In many places indigenous people still practice a traditional lifestyle based on subsistence agriculture or as hunter - gatherers. There are still some uncontacted tribes residing in the Amazon Rainforest.
The most populous country in South America is Brazil with 207.7 million persons. The second largest country is Colombia with a population of 48,653,419. Argentina is the third most populous country with 43,847,430.
While Brazil, Argentina, and Colombia maintain the largest populations, large city populations are not restricted to those nations. The largest cities in South America by far are São Paulo, Bogotá, and Lima; they are the only cities on the continent to exceed eight million inhabitants, and among the few in the Americas to do so. Next in size are Rio de Janeiro, Santiago, Caracas, Buenos Aires and Salvador.
Five of the top ten metropolitan areas are located in Brazil. These metropolitan areas all have populations above 4 million and include the São Paulo, Rio de Janeiro and Belo Horizonte metropolitan areas. While the majority of the largest metropolitan areas are within Brazil, Argentina hosts the second largest metropolitan area by population in South America: the Buenos Aires metropolitan region has more than 13 million inhabitants.
South America has also witnessed the growth of megapolitan areas. In Brazil four megaregions exist, including the Expanded Metropolitan Complex of São Paulo, with more than 32 million inhabitants; the others are Greater Rio, Greater Belo Horizonte and Greater Porto Alegre. Colombia also has four megaregions, which comprise 72% of its population, followed by Venezuela, Argentina and Peru, which are also home to megaregions.
The ten largest South American metropolitan areas by population, as of 2015, are ranked according to national census figures from each country.
South America relies less on the export of both manufactured goods and natural resources than the world average; merchandise exports from the continent were 16 % of GDP on an exchange rate basis, compared to 25 % for the world as a whole. Brazil (the seventh largest economy in the world and the largest in South America) leads in terms of merchandise exports at $251 billion, followed by Venezuela at $93 billion, Chile at $86 billion, and Argentina at $84 billion.
Since 1930, the continent has experienced remarkable growth and diversification in most economic sectors. Most agricultural and livestock products are destined for the domestic market and local consumption. However, the export of agricultural products is essential for the balance of trade in most countries.
The main agrarian crops are export crops, such as soy and wheat. The production of staple foods such as vegetables, corn or beans is large, but focused on domestic consumption. Livestock raising for meat exports is important in Argentina, Paraguay, Uruguay and Colombia. In tropical regions the most important crops are coffee, cocoa and bananas, mainly in Brazil, Colombia and Ecuador. Traditionally, the countries producing sugar for export are Peru, Guyana and Suriname, and in Brazil sugar cane is also used to make ethanol. Cotton is grown on the coast of Peru and in the northeast and south of Brazil. Fifty percent of the South American surface is covered by forests, but timber industries are small and directed to domestic markets. In recent years, however, transnational companies have been settling in the Amazon to exploit valuable hardwoods destined for export. The Pacific coastal waters of South America are the most important for commercial fishing. The anchovy catch reaches thousands of tons, and tuna is also abundant (Peru is a major exporter). The crustacean catch is also notable, particularly in northeastern Brazil and Chile.
Only Brazil and Argentina are part of the G20 (industrial countries), while only Brazil is part of the G8 + 5 (the most powerful and influential nations in the world). In the tourism sector, a series of negotiations began in 2005 to promote tourism and increase air connections within the region. Punta del Este, Florianópolis and Mar del Plata are among the most important resorts in South America.
The most industrialized countries in South America are Brazil, Argentina, Chile, Colombia, Venezuela and Uruguay, in that order. These countries alone account for more than 75 percent of the region's economy and add up to a GDP of more than US$3.0 trillion. Industry began to drive the region's economies from the 1930s, when the Great Depression in the United States and other countries boosted industrial production on the continent. From that period the region moved away from its agricultural base and began to achieve high rates of economic growth, which were sustained until the early 1990s, when they slowed due to political instability, economic crises and neoliberal policies.
Since the end of the economic crisis in Brazil and Argentina that lasted from 1998 to 2002, which led to economic recession, rising unemployment and falling incomes, the industrial and service sectors have been recovering rapidly. Chile, Argentina and Brazil have recovered fastest, growing at an average of 5% per year. All of South America has been recovering since this period, showing good signs of economic stability, with controlled inflation and exchange rates, continuous growth, and a decrease in social inequality and unemployment, factors that favor industry.
The main industries include electronics, textiles, food processing, automobiles, metallurgy, aviation, shipbuilding, clothing, beverages, steel, tobacco, timber and chemicals. Exports reach almost US$400 billion annually, with Brazil accounting for half of this.
The economic gap between the rich and poor in most South American nations is larger than on most other continents. The richest 10 % receive over 40 % of the nation 's income in Bolivia, Brazil, Chile, Colombia, and Paraguay, while the poorest 20 % receive 3 % or less in Bolivia, Brazil, and Colombia. This wide gap can be seen in many large South American cities where makeshift shacks and slums lie in the vicinity of skyscrapers and upper - class luxury apartments; nearly one in nine South Americans live on less than $2 per day (on a purchasing power parity basis).
Tourism has increasingly become a significant source of income for many South American countries. Historical relics, architectural and natural wonders, a diverse range of foods and culture, vibrant and colorful cities, and stunning landscapes attract millions of tourists every year to South America. Some of the most visited places in the region are Iguazu Falls, Recife, Olinda, Machu Picchu, the Amazon rainforest, Rio de Janeiro, São Luís, Salvador, Fortaleza, Maceió, Buenos Aires, Florianópolis, San Ignacio Miní, Isla Margarita, Natal, Lima, São Paulo, Angel Falls, Brasília, Nazca Lines, Cuzco, Belo Horizonte, Lake Titicaca, Salar de Uyuni, Jesuit Missions of Chiquitos, Los Roques archipelago, Gran Sabana, Patagonia, Tayrona National Natural Park, Santa Marta, Bogotá, Medellín, Cartagena, Perito Moreno Glacier and the Galápagos Islands.
Brazil hosted the 2016 Summer Olympics.
South Americans are culturally influenced by their indigenous peoples, the historic connection with the Iberian Peninsula and Africa, and waves of immigrants from around the globe.
South American nations have a rich variety of music. Some of the most famous genres include vallenato and cumbia from Colombia, pasillo from Colombia and Ecuador, samba, bossa nova and música sertaneja from Brazil, and tango from Argentina and Uruguay. Also well known is the non-commercial folk genre of the Nueva Canción movement, which was founded in Argentina and Chile and quickly spread to the rest of Latin America. People on the Peruvian coast created the fine guitar and cajón duos or trios in the most mestizo (mixed) of South American rhythms, such as the Marinera (from Lima), the Tondero (from Piura), the popular 19th-century Creole Valse or Peruvian Valse, the soulful Arequipan Yaraví, and the early 20th-century Paraguayan Guarania. In the late 20th century, Spanish-language rock emerged, created by young musicians influenced by British pop and American rock. Brazil has a Portuguese-language pop rock industry as well as a great variety of other music genres.
The literature of South America has attracted considerable critical and popular acclaim, especially with the Latin American Boom of the 1960s and 1970s, and the rise of authors such as Mario Vargas Llosa, Gabriel García Márquez in novels and Jorge Luis Borges and Pablo Neruda in other genres. The Brazilians Machado de Assis and João Guimarães Rosa are widely regarded as the greatest Brazilian writers.
Because of South America's broad ethnic mix, South American cuisine has African, South American Indian, Asian, and European influences. Bahia, Brazil, is especially well known for its West African-influenced cuisine. Argentines, Chileans, Uruguayans, Brazilians, Bolivians, and Venezuelans regularly consume wine. People in Argentina, Paraguay, Uruguay, southern Chile, Bolivia and Brazil drink mate, an infusion brewed from the yerba mate herb. The Paraguayan version, tereré, differs from other forms of mate in that it is served cold. Pisco is a liquor distilled from grapes in Peru and Chile. Peruvian cuisine mixes elements from Chinese, Japanese, Spanish, Italian, African, Arab, Andean, and Amazonic food.
The Ecuadorian artist Oswaldo Guayasamín (1919-1999) represented in his paintings the feelings of the peoples of Latin America, highlighting social injustices in various parts of the world. The Colombian Fernando Botero (born 1932), still active, is one of the greatest exponents of painting and sculpture and has developed a recognizable style of his own. For his part, the Venezuelan Carlos Cruz-Diez has contributed significantly to contemporary art, with works present around the world.
Several emerging South American artists are now recognized by international art critics: the Chilean painter Guillermo Lorca; the Ecuadorian sculptor Teddy Cobeña, recipient of an international sculpture award in France; and the Argentine artist Adrián Villar Rojas, winner of the Zurich Museum Art Award, among many others.
A wide range of sports are played on the continent of South America, with football (also known as soccer) being the most popular overall, while baseball is the most popular in Venezuela.
Other sports include basketball, cycling, polo, volleyball, futsal, motorsports, rugby (mostly in Argentina and Uruguay), handball, tennis, golf, field hockey and boxing.
South America hosted its first Olympic Games in Rio de Janeiro, Brazil in 2016 and will host the Youth Olympic Games in Buenos Aires, Argentina in 2018.
South America shares with Europe supremacy over the sport of football as all winners in FIFA World Cup history and all winning teams in the FIFA Club World Cup have come from these two continents. Brazil holds the record at the FIFA World Cup with five titles in total. Argentina and Uruguay have two titles each. So far four South American nations have hosted the tournament including the first edition in Uruguay (1930). The other three were Brazil (1950, 2014), Chile (1962), and Argentina (1978).
South America is home to the longest running international football tournament, the Copa América, which has been contested regularly since 1916. Uruguay has won the Copa América a record 15 times, surpassing hosts Argentina in 2011 (the two had previously been level on 14 titles each).
Also held in South America is a multi-sport event, the South American Games, which takes place every four years. The first edition was held in La Paz in 1978 and the most recent took place in Santiago in 2014.
Due to the diversity of topography and rainfall conditions, the region's water resources vary enormously in different areas. In the Andes, navigation possibilities are limited, except on the Magdalena River, Lake Titicaca and the lakes of the southern regions of Chile and Argentina. Irrigation is an important factor for agriculture from northwestern Peru to Patagonia. Less than 10% of the known electrical potential of the Andes had been used by the mid-1960s.
The Brazilian Highlands have a much higher hydroelectric potential than the Andean region, and their possibilities for exploitation are greater owing to several large rivers with high banks and large drops in elevation that form huge waterfalls, such as those of Paulo Afonso, Iguaçu and others. The Amazon River system has about 13,000 km of waterways, but its possibilities for hydroelectric use are still unknown.
Most of the continent 's energy is generated through hydroelectric power plants, but there is also an important share of thermoelectric and wind energy. Brazil and Argentina are the only South American countries that generate nuclear power, each with two nuclear power plants. In 1991 these countries signed a peaceful nuclear cooperation agreement.
South American transportation systems are still deficient, with low densities of roads and railways. The region has about 1,700,000 km of highways and 100,000 km of railways, which are concentrated in the coastal strip, while the interior still lacks adequate transport links.
Only two railroads are continental: the Transandina, which connects Buenos Aires, in Argentina, to Valparaíso, in Chile, and the Brazil-Bolivia Railroad, which connects the port of Santos in Brazil with the city of Santa Cruz de la Sierra, in Bolivia. In addition, there is the Pan-American Highway, which crosses the Andean countries from north to south, although some stretches are unfinished.
Two areas of greater density stand out in the railway sector: the Platine network, which has developed around the Platine region and belongs largely to Argentina, with more than 45,000 km of track; and the Southeast Brazil network, which mainly serves the states of São Paulo, Rio de Janeiro and Minas Gerais. Brazil and Argentina also stand out in the road sector. In addition to the modern roads that extend through northern Argentina and the southeast and south of Brazil, a vast road complex aims to link Brasília, the federal capital, to the South, Southeast, Northeast and Northern regions of Brazil.
The Port of Callao is the main port of Peru. South America has one of the largest networks of navigable inland waterways in the world, represented mainly by the Amazon, Platine, São Francisco and Orinoco basins; Brazil has about 54,000 km of navigable waterways, while Argentina has 6,500 km and Venezuela 1,200 km.
The two main merchant fleets also belong to Brazil and Argentina, followed by those of Chile, Venezuela, Peru and Colombia. The largest ports by commercial traffic are those of Buenos Aires, Santos, Rio de Janeiro, Bahía Blanca, Rosario, Valparaíso, Recife, Salvador, Montevideo, Paranaguá, Rio Grande, Fortaleza, Belém and Maracaibo.
In South America commercial aviation has great room for expansion: the continent has one of the densest air traffic routes in the world, Rio de Janeiro to São Paulo, and large airports such as Congonhas, São Paulo-Guarulhos International and Viracopos (São Paulo), Rio de Janeiro International and Santos Dumont (Rio de Janeiro), Ezeiza (Buenos Aires), Confins International Airport (Belo Horizonte), Curitiba International Airport (Curitiba), Brasília, Caracas, Montevideo, Lima, Bogotá, Recife, Salvador, Salgado Filho International Airport (Porto Alegre), Fortaleza, Manaus and Belém.
The main public transport in major cities is the bus. Many cities also have metro and subway systems. The Santiago subway is the largest network in South America, at 103 km, while the São Paulo subway carries the most passengers, with more than 4.6 million per day, and was voted the best in the Americas. The continent's first railroad was installed in Rio de Janeiro in 1854; today the city has a vast and diversified system of metropolitan trains, integrated with buses and the subway. The city also recently inaugurated a light rail system called the VLT, with small, low-speed electric trams, while São Paulo inaugurated its monorail, the first in South America. In Brazil, an express bus system called Bus Rapid Transit (BRT), which operates in several cities, has also been developed.
Continent model: In some parts of the world South America is viewed as a subcontinent of the Americas (a single continent in these areas), for example in Latin America, Latin Europe, and Iran. In most of the countries with English as an official language, however, it is considered a continent; see Americas (terminology).
|
how many episodes are in a season of pokemon | List of Pokémon episodes (seasons 1 -- 13) - wikipedia
This is a list of episodes of the Pokémon animated series (ポケットモンスター Poketto Monsutā, Pocket Monsters). The division between seasons of Pokémon is based on the openings of each episode in the English version and may not reflect the actual production season. The English episode numbers are based on their first airing either in syndication, on the WB Television Network, Cartoon Network, or on Disney XD. Subsequent episodes of the English version follow the original Japanese order, except where banned episodes are shown.
|
who was the english prime minister in 1957 | Anthony Eden - wikipedia
Robert Anthony Eden, 1st Earl of Avon, KG, MC, PC (12 June 1897 -- 14 January 1977), was a British Conservative politician who served three periods as Foreign Secretary and then a relatively brief term as Prime Minister of the United Kingdom from 1955 to 1957.
Achieving rapid promotion as a young Member of Parliament, he became Foreign Secretary aged 38, before resigning in protest at Neville Chamberlain 's appeasement policy towards Mussolini 's Italy. He again held that position for most of the Second World War, and a third time in the early 1950s. Having been deputy to Winston Churchill for almost 15 years, he succeeded him as the Leader of the Conservative Party and Prime Minister in April 1955, and a month later won a general election.
Eden 's worldwide reputation as an opponent of appeasement, a "man of peace '', and a skilled diplomat was overshadowed in 1956 when the United States refused to support the Anglo - French military response to the Suez Crisis, which critics across party lines regarded as an historic setback for British foreign policy, signalling the end of British predominance in the Middle East. Most historians argue that he made a series of blunders, especially not realising the depth of American opposition to military action. Two months after ordering an end to the Suez operation, he resigned as Prime Minister on grounds of ill health and because he was widely suspected of having misled the House of Commons over the degree of collusion with France and Israel.
Eden is generally ranked among the least successful British prime ministers of the 20th century, although two broadly sympathetic biographies (in 1986 and 2003) have gone some way to redressing the balance of opinion. Biographer D.R. Thorpe described the Suez Crisis as "a truly tragic end to his premiership, and one that came to assume a disproportionate importance in any assessment of his career. ''
Eden was born at Windlestone Hall, County Durham, on 12 June 1897. He was born into a very conservative family of landed gentry. He was a younger son of Sir William Eden, 7th and 5th Baronet, a former colonel and local magistrate from an old titled family. Sir William, an eccentric and often foul - tempered man, was a talented watercolourist and collector of Impressionists.
Eden 's mother, Sybil Frances Grey, was a member of the famous Grey family of Northumberland (see below). She had wanted to marry Francis Knollys, later an important Royal adviser. Although she was a popular figure locally, she had a strained relationship with her children, and her profligacy ruined the family fortunes. Eden 's older brother Tim had to sell Windlestone in 1936. Rab Butler would later quip that Eden -- a handsome but ill - tempered man -- was "half mad baronet, half beautiful woman ''.
Eden 's great - grandfather was William Iremonger, who commanded the 2nd Regiment of Foot during the Peninsular War and fought under Wellington (as he became) at Vimiero. He was also descended from Governor Sir Robert Eden, 1st Baronet, of Maryland, the Calvert Family of Maryland, the Schaffalitzky de Muckadell family of Denmark, and Bie family of Norway. Eden was once amused to learn that one of his ancestors had, like Churchill 's ancestor the Duke of Marlborough, been the lover of Barbara Castlemaine.
There was speculation for many years that Eden 's biological father was the politician and man of letters George Wyndham, but this is considered impossible as Wyndham was in South Africa at the time of Eden 's conception. His mother was rumoured to have had an affair with Wyndham. Eden had an elder brother, John, who was killed in action in 1914, and a younger brother, Nicholas, who was killed when the battlecruiser HMS Indefatigable blew up and sank at the Battle of Jutland in 1916.
Eden was educated at two independent schools. The first was Sandroyd School in Cobham from 1907 to 1910, where he excelled in languages. He then started at Eton College in January 1911. There he won a Divinity prize and excelled at cricket, rugby and rowing, winning House colours in the last.
Eden learned French and German on continental holidays and as a child is said to have spoken French better than English. Although Eden was able to converse with Hitler in German in February 1934 and with the Chinese premier Chou En - lai in French at Geneva in 1954, he preferred to have interpreters to translate at formal meetings out of a sense of professionalism.
Although Eden later claimed to have had no interest in politics until the early 1920s, his teenage letters and diaries show him to have been obsessed with the subject. He was a strong, partisan Conservative, rejoicing in the defeat of Charles Masterman at a by - election (May 1913) and once astonishing his mother on a train journey by telling her the MP and the size of his majority for each constituency through which they passed. By 1914 he was a member of the Eton Society ("Pop '').
During the Great War, Eden 's older brother, Lieutenant John Eden, was killed in action on 17 October 1914, at the age of 26, while serving with the 12th (Prince of Wales 's Royal) Lancers. He is buried in Larch Wood (Railway Cutting) Commonwealth War Graves Commission Cemetery in Belgium. His uncle Robin was later shot down and captured whilst serving with the Royal Flying Corps.
Volunteering for service in the British Army, as did many others of his generation, Eden served with the 21st (Yeoman Rifles) Battalion of the King's Royal Rifle Corps (KRRC), a Kitchener's Army unit initially recruited mainly from County Durham country labourers, who were increasingly replaced by Londoners after losses at the Somme in mid-1916. He was commissioned as a temporary second lieutenant on 2 November 1915 (antedated to 29 September 1915). His battalion transferred to the Western Front on 4 May 1916 as part of the 41st Division. On 31 May 1916, Eden's younger brother, Midshipman William Nicholas Eden, was killed in action, aged 16, on board HMS Indefatigable during the Battle of Jutland. He is commemorated on the Plymouth Naval Memorial. His brother-in-law, Lord Brooke, was wounded during the war.
One summer night in 1916, near Ploegsteert, Eden had to lead a small raid into an enemy trench to kill or capture enemy soldiers, so as to identify the enemy units opposite. He and his men were pinned down in No Man 's Land under enemy fire, his sergeant seriously wounded in the leg. Eden sent one man back to British lines to fetch another man and a stretcher, then he and three others carried the wounded sergeant back with, as he later put it in his memoirs, a "chilly feeling down our spines '', unsure whether the Germans had not seen them in the dark or were chivalrously declining to fire. He omitted to mention that he had been awarded the Military Cross (MC) for the incident, something of which he had made little mention in his political career. On 18 September 1916, after the Battle of Flers - Courcelette (part of the Battle of the Somme), he wrote to his mother "I have seen things lately that I am not likely to forget ''. On 3 October, he was appointed an adjutant, with the rank of temporary lieutenant for the duration of that appointment. At the age of just 19, he was the youngest adjutant on the Western Front.
Eden 's MC was gazetted in the 1917 Birthday Honours list. His battalion fought at Messines Ridge in June 1917. On 1 July 1917, Eden was confirmed as a temporary lieutenant, relinquishing his appointment as adjutant three days later. His battalion fought in the first few days of Third Battle of Ypres (31 July -- 4 August). Between 20 and 23 September 1917 his battalion spent a few days on coastal defence on the Franco - Belgian border.
On 19 November, he was transferred to the General Staff as a General Staff Officer Grade 3 (GSO3), with the temporary rank of captain. Serving at Second Army HQ, he missed out on service in Italy: the 41st Division was in Italy from mid-November 1917, after the disastrous Italian defeat at the Battle of Caporetto, until 8 March 1918, when it returned to the Western Front, the main theatre of war, as a major German offensive was clearly imminent; Eden's former battalion was then disbanded to help alleviate the British Army's acute manpower shortage. Although David Lloyd George, then the British Prime Minister, was one of the few politicians of whom Eden reported front-line soldiers speaking highly, he wrote to his sister (23 December 1917) in disgust at his "wait and see twaddle" in declining to extend conscription to Ireland.
In March 1918, during the German Spring Offensive, he was stationed near La Fere on the Oise, opposite Adolf Hitler, as he learned at a conference in 1935. At one point, when brigade HQ was bombed by German aircraft, his companion told him "There now, you have had your first taste of the next war. '' On 26 May 1918 he was appointed brigade major of the 198th Infantry Brigade, part of the 66th Division. At the age of twenty, Eden was the youngest brigade major in the British Army.
He considered standing for Parliament at the end of the war, but the general election was called too early for this to be possible. After the Armistice with Germany, he spent the winter of 1918 -- 19 in the Ardennes with his brigade and on 28 March 1919 he transferred to be brigade major of the 99th Infantry Brigade. Eden contemplated applying for a commission in the Regular Army, but they were very hard to come by, with the army contracting so rapidly. He initially shrugged off his mother 's suggestion of studying at Oxford. He also rejected the thought of becoming a barrister; his preferred career alternatives at this stage were standing for Parliament for Bishop Auckland, the Civil Service in East Africa, or the Foreign Office. He was demobilised on 13 June 1919. He retained the rank of captain.
Eden had dabbled in the study of Turkish with a family friend. After the war, he studied Oriental Languages (Persian and Arabic) at Christ Church, Oxford, starting in October 1919. Persian was his main, and Arabic his secondary, language. He studied under Richard Paset Dewhurst and David Samuel Margoliouth.
At Oxford, Eden took no part in student politics, and his main leisure interest at the time was art. Eden was in the Oxford University Dramatic Society and President of the Asiatic Society. Along with Lord David Cecil and R.E. Gathorne - Hardy he founded the Uffizi Society, of which he later became President. Possibly under the influence of his father he gave a paper on Cézanne, whose work was then not yet widely appreciated. Eden was already collecting paintings.
In July 1920, whilst still an undergraduate, Eden was recalled to military service as a lieutenant in the 6th Battalion of the Durham Light Infantry. In the spring of 1921, once again as a temporary captain, he commanded local defence forces at Spennymoor as serious industrial unrest seemed possible. He again relinquished his commission on 8 July. He graduated from Oxford in June 1922 with a Double First. He continued to serve as an officer in the Territorial Army until May 1923.
Captain Eden, as he was still known, was selected to contest Spennymoor, as a Conservative. At first he had hoped to win (with some Liberal support as the Conservatives were still supporting Lloyd George 's coalition government) but by the time of the November 1922 general election it was clear that the surge in the Labour vote made this unlikely. His main sponsor was the Marquess of Londonderry, a local coalowner. The seat went from Liberal to Labour.
Eden 's father had died on 20 February 1915. As a younger son, he had inherited capital of £ 7,675 and in 1922 he had a private income of £ 706 after tax (approximately £ 375,000 and £ 35,000 at 2014 prices).
Eden read the writings of Lord Curzon and was hoping to emulate him by entering politics with a view to specialising in foreign affairs. Eden married Beatrice Beckett in the autumn of 1923, and after a two - day honeymoon in Essex, he was selected to fight Warwick and Leamington for a by - election in November 1923. His Labour opponent, Daisy Greville Countess of Warwick, was by coincidence his sister Elfrida 's mother - in - law and also mother to his wife 's step - mother, Marjorie Blanche Eve Beckett née Greville. On 16 November 1923, during the by - election campaign, Parliament was dissolved for the December 1923 general election. He was elected to Parliament at the age of twenty - six.
The first Labour Government, under Ramsay MacDonald, took office in January 1924. Eden 's maiden speech (19 February 1924) was a controversial attack on Labour 's defence policy and was heckled, and thereafter he was careful to speak only after deep preparation. He later reprinted the speech in a collection called Foreign Affairs (1939) to give an impression that he had been a consistent advocate of air strength. Eden admired H.H. Asquith, then in his final year in the Commons, for his lucidity and brevity. On 1 April 1924 he spoke urging Anglo - Turkish friendship and ratification of the Treaty of Lausanne, which had been signed in July 1923.
The Conservatives returned to power at the 1924 General Election. In January 1925 Eden, disappointed not to have been offered a position, went on a tour of the Middle East, meeting Emir Feisal of Iraq. Feisal reminded him of the "Czar of Russia & (I) suspect that his fate may be similar '' (a similar fate did indeed befall the Iraqi Royal Family in 1958). He inspected the oil refinery at Abadan, which he likened to "a Swansea on a small scale ''.
He was appointed Parliamentary Private Secretary (PPS) to Godfrey Locker - Lampson, Under - Secretary at the Home Office (17 February 1925) (serving under Home Secretary William Joynson Hicks). In July 1925 he went on a second trip to Canada, Australia and India. He wrote articles for The Yorkshire Post (controlled by his father - in - law Sir Gervase Beckett) under the pseudonym "Backbencher ''. In September 1925 he represented the Yorkshire Post at the Imperial Conference at Melbourne.
Eden continued to be PPS to Locker - Lampson when the latter was appointed Under - Secretary at the Foreign Office in December 1925. He distinguished himself with a speech on the Middle East (21 December 1925), calling for the readjustment of Iraqi frontiers in favour of Turkey, but also for a continued British mandate rather than "scuttle ''. Eden ended his speech by calling for Anglo - Turkish friendship. On 23 March 1926 he spoke urging the League of Nations to admit Germany, which would happen the following year. In July 1926 he became PPS to the Foreign Secretary Sir Austen Chamberlain.
Besides supplementing his parliamentary income (around £ 300 a year at that time) by writing and journalism, in 1926 he published a book about his travels, Places in the Sun, highly critical of the detrimental effect of socialism on Australia, and to which Stanley Baldwin wrote a foreword.
In November 1928, with Austen Chamberlain away on a voyage to recover his health, Eden had to speak for the government in a debate on a recent Anglo - French naval agreement, replying to Ramsay MacDonald (then Leader of the Opposition). According to Austen Chamberlain, he would have been promoted to his first ministerial job, Under - Secretary at the Foreign Office, if the Conservatives had won the 1929 election.
The 1929 General Election was the only time Eden received less than 50 % of the vote at Warwick. After the Conservative defeat he joined a progressive group of younger politicians consisting of Oliver Stanley, William Ormsby - Gore and the future Speaker W.S. "Shakes '' Morrison. Another member was Noel Skelton, who before his death coined the phrase "property - owning democracy '', which Eden was later to popularise as a Conservative party aspiration. Eden advocated co-partnership in industry between managers and workers, whom he wanted to be given shares.
In opposition between 1929 and 1931 Eden worked as a City broker for Harry Lucas (a firm eventually absorbed into S.G. Warburg & Co.).
In August 1931 Eden held his first ministerial office as Under - Secretary for Foreign Affairs in Prime Minister Ramsay MacDonald 's National Government. Initially the office of Foreign Secretary was held by Lord Reading (in the House of Lords), although Sir John Simon held the job from November 1931.
Like many of his generation who had served in World War I, Eden was strongly anti-war, and strove to work through the League of Nations to preserve European peace. The government proposed measures, superseding the postwar Versailles Treaty, that would allow Germany to rearm (albeit replacing her small professional army with a short - service militia) and reduce French armaments. Winston Churchill criticised the policy sharply in the House of Commons on 23 March 1933, opposing "undue '' French disarmament as this might require Britain to take action to enforce peace under the 1925 Locarno Treaty. Eden, replying for the government, dismissed Churchill 's speech as exaggerated and unconstructive, commenting that land disarmament had yet to make the same progress as naval disarmament at the Washington and London treaties, and arguing that French disarmament was needed in order to "secure for Europe that period of appeasement which is needed ''. Eden 's speech was met with approval by the House of Commons. Neville Chamberlain commented shortly afterwards: "That young man is coming along rapidly; not only can he make a good speech but he has a good head and what advice he gives is listened to by the Cabinet '' Eden later wrote that in the early 1930s the word "appeasement '' was still used in its correct sense (from the Oxford English Dictionary) of seeking to settle strife. Only later in the decade did it come to acquire a pejorative meaning of acceding to bullying demands.
In December 1933 he was appointed Lord Privy Seal, a position that was combined with the newly created office of Minister for League of Nations Affairs. While Lord Privy Seal, Eden was sworn of the Privy Council in the 1934 Birthday Honours. He entered the Cabinet for the first time in June 1935 when Stanley Baldwin formed his third administration. Eden later came to recognise that peace could not be maintained by appeasement of Nazi Germany and fascist Italy. He privately opposed the policy of the Foreign Secretary, Sir Samuel Hoare, of trying to appease Italy during its invasion of Abyssinia (Ethiopia) in 1935. When Hoare resigned after the failure of the Hoare - Laval Pact, Eden succeeded him as Foreign Secretary. When Eden had his first audience with King George V, the King is said to have remarked, "No more coals to Newcastle, no more Hoares to Paris. ''
At this stage in his career, Eden was considered as something of a leader of fashion. He regularly wore a Homburg hat, which became known in Britain as an "Anthony Eden ''.
Eden became Foreign Secretary at a time when Britain was having to adjust its foreign policy to face the rise of the fascist powers. He supported the policy of non-interference in the Spanish Civil War through conferences like the Nyon Conference and supported prime minister Neville Chamberlain in his efforts to preserve peace through reasonable concessions to Germany. The Italian - Ethiopian War was brewing, and Eden tried in vain to persuade Mussolini to submit the dispute to the League of Nations. The Italian dictator scoffed at Eden publicly as "the best dressed fool in Europe. '' Eden did not protest when Britain and France failed to oppose Hitler 's reoccupation of the Rhineland in 1936. When the French requested a meeting with a view to some kind of military action in response to Hitler 's occupation, Eden in a statement firmly ruled out any military assistance to France.
His resignation in February 1938 was largely attributed to growing dissatisfaction with Chamberlain's policy of appeasement. That is, however, disputed by new research: the question was not whether there should be negotiations with Italy, but only when they should start and how far they should be carried. Similarly, at no point during his period as Foreign Secretary did he register dissatisfaction with the appeasement policy directed towards Nazi Germany. He became a Conservative dissenter, leading a group that the Conservative whip David Margesson called the "Glamour Boys", while Winston Churchill, a leading anti-appeaser, led a similar group called "The Old Guard".
Although Churchill claimed to have lost sleep the night of Eden's resignation, they were not allies and did not see eye to eye until Churchill became Prime Minister. Churchill maintained, and described in detail, that Eden had resigned over Chamberlain's affront to Roosevelt, who had offered earlier in February to mediate the growing dispute in Europe. There was much speculation that Eden would become a rallying point for all the disparate opponents of Neville Chamberlain, but his standing among politicians declined heavily as he maintained a low profile and avoided confrontation, though he opposed the Munich Agreement and abstained in the vote on it in the House of Commons. However, he remained popular in the country at large, and in later years was often wrongly supposed to have resigned as Foreign Secretary in protest at the Munich Agreement.
In a 1967 interview, Eden explained his decision to resign: "It was not over protocol, Chamberlain 's communicating with Mussolini without telling me. I never cared a goddamn, a tuppence about protocol. The reason for my resignation was that we had an agreement with Mussolini about the Mediterranean and Spain, which he was violating by sending troops to Spain, and Chamberlain wanted to have another agreement. I thought Mussolini should honour the first one before we negotiated for the second. I was trying to fight a delaying action for Britain, and I could not go along with Chamberlain 's policy. ''
During the last months of peace in 1939, Eden joined the Territorial Army with the rank of major, in the London Rangers motorized battalion of the King 's Royal Rifle Corps and was at annual camp with them in Beaulieu, Hampshire, when he heard news of the Molotov - Ribbentrop Pact.
On the outbreak of war (3 September 1939) Eden, unlike most Territorials, did not mobilise for active service. Instead, he returned to Chamberlain 's government as Secretary of State for Dominion Affairs, but was not in the War Cabinet. As a result, he was not a candidate for the Premiership when Chamberlain resigned in May 1940 after the Narvik Debate and Churchill became Prime Minister. Churchill appointed Eden Secretary of State for War.
At the end of 1940 Eden returned to the Foreign Office, and in this role became a member of the executive committee of the Political Warfare Executive in 1941. Although he was one of Churchill 's closest confidants, his role in wartime was restricted because Churchill himself conducted the most important negotiations with Franklin D. Roosevelt and Joseph Stalin; Eden nonetheless served loyally as Churchill 's lieutenant. In December 1941, he travelled by ship to Russia, where he met Stalin and surveyed the battlefields upon which the Russians had successfully defended Moscow from the German Army attack in Operation Barbarossa.
Nevertheless, he was in charge of handling most of the relations between Britain and Free French leader de Gaulle during the last years of the war. Eden was often critical of the emphasis Churchill put on the Special Relationship with the United States and was often disappointed by American treatment of their British allies.
In 1942 Eden was given the additional role of Leader of the House of Commons. He was considered for various other major jobs during and after the war, including Commander - in - Chief Middle East in 1942 (this would have been a very unusual appointment, as Eden was a civilian; General Harold Alexander was in fact appointed), Viceroy of India in 1943 (General Archibald Wavell was appointed to this job), and Secretary - General of the newly formed United Nations Organisation in 1945. In 1943, with the revelation of the Katyn Massacre, Eden refused to help the Polish Government in Exile.
In early 1943 Eden blocked a request from the Bulgarian authorities to aid with deporting part of the Jewish population from newly acquired Bulgarian territories to British - controlled Palestine. After his refusal, some of those people were transported to concentration camps in Nazi - occupied Poland.
In 1944 Eden went to Moscow to negotiate with the Soviet Union at the Tolstoy Conference. Eden also opposed the Morgenthau Plan to deindustrialise Germany. After the Stalag Luft III murders, he vowed in the House of Commons to bring the perpetrators of the crime to "exemplary justice '', leading to a successful manhunt after the war by the Royal Air Force Special Investigation Branch.
Eden 's eldest son, Pilot Officer Simon Gascoigne Eden, went missing in action and was later declared dead, while serving as a navigator with the RAF in Burma, in June 1945. There was a close bond between Eden and Simon, and Simon 's death was a great personal shock to his father. Mrs. Eden reportedly reacted to her son 's loss differently, and this led to a breakdown in the marriage. De Gaulle wrote him a personal letter of condolence in French.
In 1945 Halvdan Koht mentioned him among seven candidates whom he considered qualified for the Nobel Peace Prize; however, Koht did not explicitly nominate any of them, and the person he actually nominated was Cordell Hull.
After the Labour Party won the 1945 election, Eden went into opposition as Deputy Leader of the Conservative Party. Many felt that Churchill should have retired and allowed Eden to become party leader, but Churchill refused to consider this. As early as the spring of 1946, Eden openly asked Churchill to retire in his favour. He was in any case depressed during this period by the break - up of his first marriage and the death of his eldest son. Churchill was in many ways only "part - time Leader of the Opposition '', given his many journeys abroad and his literary work, and left the day - to - day work largely to Eden. Eden was widely regarded as lacking a feel for party politics and contact with the common man. In these opposition years, however, he developed some knowledge of domestic affairs and created the idea of a "property - owning democracy '', which Margaret Thatcher 's government attempted to achieve decades later. His domestic agenda is generally considered centre - left.
In 1951 the Conservatives returned to office and Eden became Foreign Secretary for a third time, though not "Deputy Prime Minister '' (Churchill gave him this title in the first list of ministers submitted to the King, but the King forbade it on the grounds that this "office '' is unknown to the Constitution). Churchill was largely a figurehead in this government, and Eden had effective control of British foreign policy for the second time, as the Empire declined and the Cold War grew more intense.
Eden 's biographer Richard Lamb said that Eden bullied Churchill into going back on commitments to European unity made in opposition. The truth appears to be more complex. Britain was still a world power, or at least trying to be, in 1945 -- 55, with the concept of sovereignty not as discredited as on the continent. The USA encouraged moves towards European federalism as it wanted to withdraw US troops and get the Germans rearmed under supervision. Eden was less Atlanticist than Churchill and had little time for European federalism. He wanted firm alliances with France and other Western European powers to contain Germany. Half of British trade at that time was with the sterling area, and only a quarter with Western Europe. Despite later talk of "lost opportunities '', even Macmillan, who had been an active member of the "European Movement '' after the war, acknowledged in February 1952 that Britain 's relationship with the USA and the Commonwealth would prevent her from joining a federal Europe at that time. Eden was also irritated by Churchill 's hankering for a summit meeting with the USSR, during the period in 1953 after Stalin 's death and whilst Eden was seriously ill from a botched bile duct operation.
Despite the ending of the British Raj in India, British interest in the Middle East remained strong: Britain had treaty relations with Jordan and Iraq and was the protecting power for Kuwait and the Trucial States, the colonial power in Aden, and the occupying power in the Suez Canal. Many right - wing Conservative MPs, organised in the so - called Suez Group, sought to retain this imperial role, though economic pressures made maintenance of it increasingly difficult. Britain did seek to maintain its huge military base in the Suez Canal zone and, in the face of Egyptian resentment, further develop its alliance with Iraq, and the hope was that the Americans would assist Britain, possibly through finance. While the Americans did co-operate with the British in overthrowing the Mosaddegh government in Iran, after it had nationalised British oil interests, the Americans developed their own relations in the region, taking a positive view of the Egyptian Free Officers and developing friendly relations with Saudi Arabia. Britain was eventually forced to withdraw from the canal zone and the Baghdad Pact security treaty was not supported by the United States, leaving Eden vulnerable to the charge of having failed to maintain British prestige.
Eden had grave misgivings about American foreign policy under Secretary of State John Foster Dulles and President Dwight D. Eisenhower. Eisenhower was concerned, as early as March 1953, at the escalating costs of defence and the increase of state power which this would bring. Eden was irked by Dulles 's policy of "brinkmanship '', or display of muscle, in relations with the Communist world. The success of the 1954 Geneva Conference on Indo - China ranks as the outstanding achievement of his third term in the Foreign Office, although he was critical of the United States decision not to sign the accord. During the summer and fall of 1954, the Anglo - Egyptian agreement to withdraw all British forces from Egypt was also negotiated and ratified.
There were concerns that, if the European Defence Community (EDC) was not ratified as the Americans wanted, the US Republican Administration might withdraw into defending only the Western Hemisphere (although recent documentary evidence confirms that the US intended to withdraw troops from Europe anyway if the EDC was ratified). After the French Assembly rejected the EDC in September 1954, Eden tried to come up with a viable alternative. Between 11 and 17 September he visited every major West European capital to negotiate West Germany becoming a sovereign state and entering the Brussels pact prior to entering NATO. Paul - Henri Spaak said he "saved the Atlantic alliance ''.
In 1954 he was appointed to the Order of the Garter and became Sir Anthony Eden.
In April 1955 Churchill finally retired, and Eden succeeded him as Prime Minister. He was a very popular figure as a result of his long wartime service and his famous good looks and charm. His famous words "Peace comes first, always '' added to his already substantial popularity.
On taking office, he immediately called a general election for 26 May 1955, at which he increased the Conservative majority from seventeen to sixty, an increase in majority that broke a ninety - year record for any UK government. The 1955 general election was the last in which the Conservatives won the majority share of the votes in Scotland. However, Eden had never held a domestic portfolio and had little experience in economic matters. He left these areas to his lieutenants such as Rab Butler, and concentrated largely on foreign policy, forming a close relationship with US President Dwight Eisenhower. Eden 's attempts to maintain overall control of the Foreign Office drew widespread criticism.
Eden has the distinction of being the British prime minister to oversee the lowest unemployment figures of the post-World War II era, with unemployment standing at just over 215,000 -- barely one per cent of the workforce -- in July 1955.
The alliance with the US proved not to be universal, however, when in July 1956 Gamal Abdel Nasser, President of Egypt, unexpectedly nationalised (seized) the Suez Canal following the withdrawal of Anglo - American funding for the Aswan Dam. Eden believed the nationalisation was in violation of the Anglo - Egyptian Agreement that Nasser had signed with the British and French governments on 19 October 1954. This view was shared by Labour leader Hugh Gaitskell and Liberal leader Jo Grimond. In 1956 the Suez Canal was of vital importance, since over two - thirds of the oil supplies of Western Europe (60 million tons annually) passed through it, carried by 15,000 ships a year, one - third of them British; three - quarters of all Canal shipping belonged to NATO countries. Britain 's total oil reserve at the time of the nationalisation was enough for only six weeks. The Soviet Union was certain to veto any sanctions against Nasser at the United Nations. Britain and a conference of other nations met in London following the nationalisation in an attempt to resolve the crisis through diplomatic means. However, the Eighteen Nations Proposals, including an offer of Egyptian representation on the board of the Suez Canal Company and a share of profits, were rejected by Nasser. Eden feared that Nasser intended to form an Arab Alliance that would threaten to cut off oil supplies to Europe and, in conjunction with France, decided he should be removed from power.
Eden, drawing on his experience in the 1930s, saw Nasser as another Mussolini, considering both men aggressive nationalist socialists determined to invade other countries. Others believed that Nasser was acting from legitimate patriotic concerns, and the nationalisation was judged by the Foreign Office to be deliberately provocative but not illegal. The Attorney General, Sir Reginald Manningham - Buller, was not officially asked for his opinion, but made known through the Lord Chancellor his view that the government 's contemplated armed strike against Egypt would be unlawful.
Anthony Nutting recalled that Eden told him, "What 's all this nonsense about isolating Nasser or ' neutralising ' him as you call it? I want him destroyed, ca n't you understand? I want him murdered, and if you and the Foreign Office do n't agree, then you 'd better come to the cabinet and explain why. '' When Nutting pointed out that they had no alternative government to replace Nasser, Eden apparently replied, "I do n't give a damn if there 's anarchy and chaos in Egypt. '' At a private meeting at Downing Street on 16 October 1956 Eden showed several ministers a plan, submitted two days earlier by the French. Israel would invade Egypt, Britain and France would give an ultimatum telling both sides to stop and, when one refused, send in forces to enforce the ultimatum, separate the two sides -- and occupy the Canal and get rid of Nasser. When Nutting suggested the Americans should be consulted Eden replied, "I will not bring the Americans into this... Dulles has done enough damage as it is. This has nothing to do with the Americans. We and the French must decide what to do and we alone. '' Eden openly admitted his view of the crisis was shaped by his experiences in the two world wars, writing, "We are all marked to some extent by the stamp of our generation, mine is that of the assassination in Sarajevo and all that flowed from it. It is impossible to read the record now and not feel that we had a responsibility for always being a lap behind... Always a lap behind, a fatal lap. ''
There was no question of an immediate military response to the crisis -- Cyprus had no deep - water harbours, which meant that Malta, several days ' sailing from Egypt, would have to be the main concentration point for an invasion fleet if the Libyan government would not permit a land invasion from its territory. Eden initially considered using British forces in the Kingdom of Libya to regain the Canal, but then decided this risked inflaming Arab opinion. Unlike the French prime minister Guy Mollet, who saw regaining the Canal as the primary objective, Eden believed the real need was to remove Nasser from office. He hoped that if the Egyptian army was swiftly and humiliatingly defeated by the Anglo - French forces the Egyptian people would rise up against Nasser. Eden told Field Marshal Sir Bernard Montgomery that the overall aim of the mission was simply, "To knock Nasser off his perch. '' In the absence of a popular uprising Eden and Mollet would say that Egyptian forces were incapable of defending their country and therefore Anglo - French forces would have to return to guard the Suez Canal.
Eden believed that if Nasser were seen to get away with seizing the Canal then Egypt and other Arab countries might move closer to the Soviet Union. At that time, the Middle East accounted for 80 -- 90 percent of Western Europe 's oil supply. If Nasser were seen to get away with it, then other Middle East countries might be encouraged to nationalise their oil. The invasion, he contended at the time, and again in a 1967 interview, was aimed at maintaining the sanctity of international agreements and at preventing future unilateral denunciation of treaties. Eden was energetic during the crisis in using the media, including the BBC, to incite public opinion to support his views of the need to overthrow Nasser. In September 1956 a plan was drawn up to reduce the flow of water in the Nile by using dams in an attempt to damage Nasser 's position. However, the plan was abandoned because it would take months to implement, and due to fears that it could affect other countries such as Uganda and Kenya.
On 25 September 1956, the Chancellor of the Exchequer Harold Macmillan met informally with President Eisenhower at the White House; he misread Eisenhower 's determination to avoid war and told Eden that the Americans would not in any way oppose the attempt to topple Nasser. Though Eden had known Eisenhower for years and had many direct contacts with him during the crisis, he also misread the situation. The Americans saw themselves as the champion of decolonization and refused to support any move that could be seen as imperialism or colonialism. Eisenhower felt the crisis had to be handled peacefully; he told Eden that American public opinion would not support a military solution. Eden and other leading British officials incorrectly believed that Nasser 's support for Palestinian terrorists against Israel, as well as his attempts to destabilise pro-western regimes in Iraq and other Arab states, would deter the US from interfering with the operation. Eisenhower specifically warned that the Americans, and the world, "would be outraged '' unless all peaceful routes had been exhausted, and even then "the eventual price might become far too heavy ''. At the root of the problem was the fact that Eden felt that Britain was still an independent world power. His lack of sympathy for British integration into Europe, manifested in his scepticism about the fledgling European Economic Community (EEC), was another aspect of his belief in Britain 's independent role in world affairs.
Israel invaded the Sinai peninsula at the end of October 1956. Britain and France moved in ostensibly to separate the two sides and bring peace, but in fact to regain control of the canal and overthrow Nasser. The United States immediately and strongly opposed the invasion. The United Nations denounced the invasion, the Soviets were bellicose, and only New Zealand, Australia, West Germany and South Africa spoke out for Britain 's position.
The Suez Canal was of lesser economic importance to the USA, which acquired 15 percent of its oil through that route. Eisenhower wanted to broker international peace in "fragile '' regions. He did not see Nasser as a serious threat to the West, but he was concerned that the Soviets, who were well known to want a permanent warm water base for their Black Sea fleet in the Mediterranean, might side with Egypt. Eisenhower feared a pro-Soviet backlash amongst the Arab nations if, as seemed likely, Egypt suffered a humiliating defeat at the hands of the British, French and Israelis.
Eden, who faced domestic pressure from his party both to take action and to halt the decline of British influence in the Middle East, had ignored Britain 's financial dependence on the US in the wake of the Second World War, and had assumed the US would automatically endorse whatever action its closest ally took. At the ' Law not War ' rally in Trafalgar Square on 4 November 1956, Eden was ridiculed by Aneurin Bevan: ' Sir Anthony Eden has been pretending that he is now invading Egypt to strengthen the United Nations. Every burglar of course could say the same thing; he could argue that he was entering the house to train the police. So, if Sir Anthony Eden is sincere in what he is saying, and he may be, then he is too stupid to be a prime minister '. Public opinion was mixed; some historians think that the majority of public opinion in the UK was on Eden 's side. Eden was forced to bow to American diplomatic and financial pressure, and to protests at home, by calling a ceasefire when Anglo - French forces had captured only 23 miles of the Canal. With the US threatening to withdraw financial support from sterling, the Cabinet divided, and the Chancellor of the Exchequer Harold Macmillan threatening to resign unless an immediate ceasefire was called, Eden was under immense pressure. He considered defying the calls until the commander on the ground told him it could take up to six days for the Anglo - French troops to secure the entire Canal zone. Therefore, a ceasefire was called at quarter past midnight on 7 November.
In his 1987 book Spycatcher Peter Wright said that, following the imposed ending to the military operation, Eden reactivated the assassination option for a second time. By this time virtually all MI6 agents in Egypt had been rounded up by Nasser, and a new operation, using renegade Egyptian officers, was drawn up. It failed principally because the cache of weapons which had been hidden on the outskirts of Cairo was found to be defective.
Suez damaged Eden 's reputation for statesmanship, in many eyes, and led to a breakdown in his health. He went on vacation to Jamaica in November 1956, at a time when he was still determined to soldier on as Prime Minister. His health, however, did not improve, and during his absence from London his Chancellor Harold Macmillan and Rab Butler worked to manoeuvre him out of office. On the morning of the ceasefire Eisenhower agreed to meet with Eden to publicly resolve their differences, but this offer was later withdrawn after Secretary of State Dulles advised that it could inflame the Middle Eastern situation further.
The Observer newspaper accused Eden of lying to Parliament over the Suez Crisis, while MPs from all parties criticised his calling a ceasefire before the Canal was taken. Churchill, while publicly supportive of Eden 's actions, privately criticised his successor for not seeing the military operation through to its conclusion. Eden easily survived a vote of confidence in the House of Commons on 8 November.
While Eden was on holiday in Jamaica, other members of the government discussed on 20 November how to counter charges that the UK and France had worked in collusion with Israel to seize the Canal, but decided there was very little evidence in the public domain.
On his return from Jamaica on 14 December, Eden still hoped to continue as Prime Minister. He had lost his traditional base of support on the Tory left and amongst moderate opinion nationally, but appears to have hoped to rebuild a new base of support amongst the Tory Right. However, his political position had eroded during his absence. He wished to make a statement attacking Nasser as a puppet of the Soviets, attacking the United Nations and speaking of the "lessons of the 1930s '', but was prevented from doing so by Macmillan, Butler and Lord Salisbury.
On his return to the House of Commons (17 December), he slipped into the Chamber largely unacknowledged by his own party. One Conservative MP rose to wave his Order Paper, only to have to sit down in embarrassment whilst Labour MPs laughed. On 18 December he addressed the 1922 committee (Conservative backbenchers), declaring "as long as I live, I shall never apologise for what we did '', but was unable to answer a question about the validity of the Tripartite Declaration of 1950 (which he had in fact reaffirmed in April 1955, two days before becoming Prime Minister). In his final statement to the House of Commons as Prime Minister (20 December 1956) he performed well in a difficult debate, but told MPs that "there was not foreknowledge that Israel would attack Egypt ''. Victor Rothwell writes that the knowledge of his having misled the House of Commons in this way must have hung over him thereafter, as was the concern that the US Administration might demand that Britain pay reparations to Egypt. Papers released in January 1987 showed the entire Cabinet had been informed of the plan on 23 October 1956.
Eden suffered another fever at Chequers over Christmas, but was still talking of going on an official trip to the USSR in April 1957, wanting a full inquiry into the Crabb affair and badgering Lord Hailsham (First Lord of the Admiralty) about the £ 6m being spent on oil storage at Malta.
Eden resigned on 9 January 1957, after his doctors warned him his life was at stake if he continued in office. John Charmley writes "Ill - health... provide (d) a dignified reason for an action (i.e.. resignation) which would, in any event, have been necessary. '' Rothwell writes that "mystery persists '' over exactly how Eden was persuaded to resign, although the limited evidence suggests that Butler, who was expected to succeed him as Prime Minister, was at the centre of the intrigue. Rothwell writes that Eden 's fevers were "nasty but brief and not life - threatening '' and that there may have been "manipulation of medical evidence '' to make Eden 's health seem "even worse '' than it was. Macmillan wrote in his diary that "nature had provided a real health reason '' when a "diplomatic illness '' might otherwise have had to be invented. David Carlton (1981) even suggested that the Palace might have been involved, a suggestion discussed by Rothwell. As early as spring 1954 Eden had been indifferent to cultivating good relations with the new Queen. Eden is known to have favoured a Japanese or Scandinavian style monarchy (i.e. with no involvement in politics whatsoever) and in January 1956 he had insisted that Nikita Khrushchev and Nikolai Bulganin spend only the minimum amount of time in talks with the Queen. Evidence also exists that the Palace were concerned at not being kept fully informed during the Suez Crisis. In the 1960s Clarissa Eden was observed to speak of the Queen "in an extremely hostile and belittling way '', and in an interview in 1976 Eden commented that he "would not claim she was pro-Suez ''.
Although the media expected Butler would get the nod as Eden 's successor, a survey of the Cabinet taken for the Queen showed Macmillan was the nearly unanimous choice, and he became Prime Minister on 10 January 1957. Shortly afterwards Eden and his wife left England for a holiday in New Zealand.
A.J.P. Taylor wrote in the 1970s: "Eden... destroyed (his reputation as a peacemaker) and led Great Britain to one of the greatest humiliations in her history... (he) seemed to take on a new personality. He acted impatiently and on impulse. Previously flexible he now relied on dogma, denouncing Nasser as a second Hitler. Though he claimed to be upholding international law, he in fact disregarded the United Nations Organisation which he had helped to create... The outcome was pathetic rather than tragic ''.
Thorpe has summarised Eden 's central role in the Suez Crisis of 1956:
Eden 's policy had four main aims: first, to secure the Suez Canal; second and consequentially, to ensure continuity of oil supplies; third, to remove Nasser; and fourth, to keep the Russians out of the Middle East. The immediate consequence of the crisis was that the Suez Canal was blocked, oil supplies were interrupted, Nasser 's position as the leader of Arab nationalism was strengthened, and the way was left open for Russian intrusion into the Middle East.
Michael Foot pushed for a special inquiry along the lines of the Parliamentary Inquiry into the Attack on the Dardanelles in the First World War, although Harold Wilson (Labour Prime Minister 1964 -- 70 and 1974 -- 76) regarded the matter as a can of worms best left unopened. This talk ceased after the defeat of the Arab armies by Israel in the Six Day War of 1967, after which Eden received a lot of fanmail telling him that he had been right, and his reputation, not least in Israel and the United States, soared. In 1986 Eden 's official biographer Robert Rhodes James re-evaluated sympathetically Eden 's stance over Suez and in 1990, following the Iraqi invasion of Kuwait, James asked: "Who can now claim that Eden was wrong? ''. Such arguments turn mostly on whether, as a matter of policy, the Suez operation was fundamentally flawed or whether, as such "revisionists '' thought, the lack of American support conveyed the impression that the West was divided and weak. Anthony Nutting, who resigned as a Foreign Office Minister over Suez, expressed the former view in 1967, the year of the Arab - Israeli Six - Day War, when he wrote that "we had sown the wind of bitterness and we were to reap the whirlwind of revenge and rebellion ''. Conversely, Jonathan Pearson argues in Sir Anthony Eden and the Suez Crisis: Reluctant Gamble (2002) that Eden was more reluctant and less bellicose than most historians have judged. D.R. Thorpe, another of Eden 's biographers, writes that Suez was "a truly tragic end to his premiership, and one that came to assume a disproportionate importance in any assessment of his careers ''; he suggests that had the Suez venture succeeded, "there would almost certainly have been no Middle East war in 1967, and probably no Yom Kippur War in 1973 also ''.
Guy Millard, one of Eden 's Private Secretaries, who thirty years later, in a radio interview, spoke publicly for the first time on the crisis, made an insider 's judgement about Eden: "It was his mistake of course and a tragic and disastrous mistake for him. I think he overestimated the importance of Nasser, Egypt, the Canal, even of the Middle East itself. '' While British actions in 1956 are routinely described as "imperialistic '', the motivation was in fact economic. Eden was a liberal supporter of nationalist ambitions, such as over Sudanese independence. His 1954 Suez Canal Base Agreement (withdrawing British troops from Suez in return for certain guarantees) was sold to the Conservative Party against Churchill 's wishes.
Rothwell believes that Eden should have cancelled the Suez Invasion plans in mid-October, when the Anglo - French negotiations at the United Nations were making some headway, and that in 1956 the Arab countries threw away a chance to make peace with Israel on her existing borders.
British Government cabinet papers from September 1956, during Eden 's term as Prime Minister, have shown that French Prime Minister Guy Mollet approached the British Government suggesting the idea of an economic and political union between France and Great Britain. This was a similar offer, in reverse, to that made by Churchill (drawing on a plan devised by Leo Amery) in June 1940.
Guy Mollet 's offer was referred to by Sir John Colville, Churchill 's former private secretary, in his collected diaries, The Fringes of Power (1985); Colville had gleaned the information in 1957 from Air Chief Marshal Sir William Dickson during a flight (and, according to Colville, after several whiskies and soda). Mollet 's request for Union with Britain was rejected by Eden, but the additional possibility of France joining the Commonwealth of Nations was considered, although similarly rejected. Colville noted, in respect of Suez, that Eden and his Foreign Secretary Selwyn Lloyd "felt still more beholden to the French on account of this offer ''.
Eden resigned from the House of Commons in March 1957. He retained much of his personal popularity in Britain and soon regretted his retirement, and contemplated standing again. Several Conservative MPs were reportedly willing to give up their seats for him, although the party hierarchy were less keen. He finally gave up such hopes in late 1960 after an exhausting speaking tour of Yorkshire. Macmillan initially offered to recommend him for a viscountcy, which Eden assumed to be a calculated insult, and he was granted an earldom (which was then the traditional rank for a former Prime Minister) after reminding Macmillan that he had already been offered one by the Queen herself. He entered the House of Lords as the Earl of Avon in 1961.
In retirement Eden lived in ' Rose Bower ' by the banks of the River Ebble in Broad Chalke, Wiltshire. Starting in 1961 he bred a herd of sixty Herefordshire cattle (one of which was called "Churchill '') until a further decline in his health forced him to sell them in 1975.
In July 1962 Eden made front - page news by commenting that "Mr Selwyn Lloyd has been horribly treated '' when the latter was dismissed as Chancellor in the reshuffle known as the "Night of the Long Knives ''. In August 1962, at a dinner party, he had a "slanging match '' with Nigel Birch, who as Secretary of State for Air had not wholeheartedly supported the Suez Invasion. In 1963 Eden initially favoured Hailsham for the Conservative leadership but then supported Home as a compromise candidate.
From 1945 to 1973, Eden was Chancellor of the University of Birmingham. In a television interview in 1966 he called on the United States to halt its bombing of North Vietnam to concentrate on developing a peace plan "that might conceivably be acceptable to Hanoi. '' The bombing of North Vietnam, he argued, would never settle the conflict in South Vietnam. "On the contrary, '' he declared, "bombing creates a sort of David and Goliath complex in any country that has to suffer -- as we had to, and as I suspect the Germans had to, in the last war. '' Eden sat for extensive interviews for the famed multi-part Thames Television production, The World at War, which was first broadcast in 1973. He also featured frequently in Marcel Ophüls ' 1969 documentary Le chagrin et la pitié, discussing the occupation of France in a wider geopolitical context. He spoke impeccable, if accented, French.
Eden 's occasional articles and his early 1970s television appearance were an exception to an almost total retirement. He seldom appeared in public, unlike other former Prime Ministers, e.g. James Callaghan who commented frequently on current affairs. He was even accidentally omitted from a list of Conservative Prime Ministers by Margaret Thatcher when she became Conservative Leader in 1975, although she later went out of her way to establish relations with Eden and, later, his widow. In retirement he was highly critical of regimes such as Sukarno 's Indonesia which confiscated assets belonging to their former colonial rulers, and appears to have reverted somewhat to the right - wing views which he had espoused in the 1920s.
In retirement Eden corresponded with Selwyn Lloyd, coordinating the release of information and with which writers they would agree to speak and when. Rumours that Britain had colluded with France and Israel appeared, albeit in garbled form, as early as 1957. By the 1970s they had agreed that Lloyd would only tell his version of the story after Eden 's death (in the event, Lloyd would outlive Eden by a year, struggling with terminal illness to complete his own memoirs).
In retirement Eden was particularly bitter that Eisenhower had initially indicated British and French troops should be allowed to remain around Port Said, only for the US ambassador Henry Cabot Lodge, Jr to press for an immediate withdrawal at the UN, thereby rendering the operation a complete failure. Eden felt the Eisenhower administration 's unexpected opposition was hypocritical in light of the 1953 Iranian coup d'état and the 1954 Guatemalan coup d'état.
Eden published three volumes of political memoirs, in which he denied that there had been any collusion with France and Israel. Like Churchill, Eden relied heavily on the ghost - writing of young researchers, whose drafts he would sometimes toss angrily into the flowerbeds outside his study. One of them was the young David Dilks.
In his view, American Secretary of State John Foster Dulles, whom he particularly disliked, was responsible for the ill fate of the Suez adventure. In an October press conference, barely three weeks before the fighting began, Dulles had coupled the Suez Canal issue with colonialism, and his statement infuriated Eden and much of the UK as well. "The dispute over Nasser 's seizure of the canal, '' wrote Eden, "had, of course, nothing to do with colonialism, but was concerned with international rights. '' He added that "if the United States had to defend her treaty rights in the Panama Canal, she would not regard such action as colonialism. '' Eden 's own lack of candour further diminished his standing, and a principal concern of his later years was trying to rebuild the reputation that Suez had so severely damaged; he sometimes took legal action to protect his viewpoint.
Eden faulted the United States for forcing him to withdraw, but he took credit for United Nations action in patrolling the Israeli - Egyptian borders. Eden said of the invasion, "Peace at any price has never averted war. We must not repeat the mistakes of the pre-war years, by behaving as though the enemies of peace and order are armed with only good intentions. '' Recalling the incident in a 1967 interview, he declared, "I am still unrepentant about Suez. People never look at what would have happened if we had done nothing. There is a parallel with the 1930s. If you allow people to break agreements with impunity, the appetite grows to feed on such things. I do n't see what other we ought to have done. One can not dodge. It is hard to act rather than dodge. '' In his 1967 interview (which he stipulated would not be used until after his death), Eden acknowledged secret dealings with the French and "intimations '' of the Israeli attack. He insisted, however, that "the joint enterprise and the preparations for it were justified in the light of the wrongs it (the Anglo - French invasion) was designed to prevent. '' "I have no apologies to offer, '' Eden declared.
At the time of his retirement, Eden had been short of money, although he was paid a £ 100,000 advance for his memoirs by The Times, with any profit over this amount to be split between himself and the newspaper. By 1970, they had brought him £ 185,000 (around £ 3,000,000 at 2014 prices), leaving him a wealthy man for the first time in his life. Towards the end of his life, he published a personal memoir of his early life, Another World (1976).
On 5 November 1923, shortly before his election to Parliament, he married Beatrice Beckett, then aged only 18. They had three sons: Simon Gascoigne (1924 -- 1945), Robert, who died fifteen minutes after being born in October 1928, and Nicholas (1930 -- 1985).
The marriage was not a success, with both parties apparently conducting affairs. By the mid-1930s his diaries seldom mention Beatrice. The marriage finally broke up under the strain of the loss of their son Simon, who was killed in action with the RAF in Burma in 1945. His plane was reported "missing in action '' on 23 June and found on 16 July; Eden did not want the news to be public until after the election result on 26 July, to avoid claims of "making political capital '' from it.
Between 1946 and 1950, whilst separated from his wife, Eden conducted an open affair with Dorothy, Countess Beatty, the wife of David, Earl Beatty.
Eden was the great - great - grandnephew of the author Emily Eden and, in 1947, wrote an introduction to her novel The Semi-Attached Couple (1860).
In 1950, Eden and Beatrice were finally divorced, and in 1952, he married Churchill 's niece Clarissa Spencer - Churchill, a nominal Roman Catholic who was fiercely criticised by Catholic writer Evelyn Waugh for marrying a divorced man. Eden 's second marriage was much more successful than his first had been.
Eden had an ulcer, exacerbated by overwork, as early as the 1920s. His life was changed by a medical mishap: during an operation on 12 April 1953 to remove gallstones, his bile duct was damaged, leaving Eden susceptible to recurrent infections, biliary obstruction, and liver failure. The physician consulted at the time was the royal physician, Sir Horace Evans, 1st Baron Evans. Three surgeons were recommended, and Eden chose the one who had previously performed his appendectomy, John Basil Hume, a surgeon at St Bartholomew 's Hospital. Eden suffered from cholangitis, an abdominal infection which became so agonising that he was admitted to hospital in 1956 with a temperature reaching 106 ° F (41 ° C). He required major surgery on three or four occasions to alleviate the problem.
He was also prescribed Benzedrine, the wonder drug of the 1950s. Regarded then as a harmless stimulant, it belongs to the family of drugs called amphetamines, and at that time they were prescribed and used in a very casual way. Among the side effects of Benzedrine are insomnia, restlessness, and mood swings, all of which Eden suffered during the Suez Crisis; indeed, earlier in his premiership he complained of being kept awake at night by the sound of motor scooters. Eden 's drug use is now commonly agreed to have been a part of the reason for his bad judgment while Prime Minister. The Thorpe biography, however, denied Eden 's abuse of Benzedrine, stating that the allegations were "untrue, as is made clear by Eden 's medical records at Birmingham University, not yet (at the time) available for research ''.
The resignation document written by Eden for release to the Cabinet on 9 January 1957 admitted his dependence on stimulants but not that they affected his judgement during the Suez crisis in the autumn of 1956. "... I have been obliged to increase the drugs (taken after the "bad abdominal operations '') considerably and also increase the stimulants necessary to counteract the drugs. This has finally had an adverse effect on my precarious inside, '' he wrote. However, in his book The Suez Affair (1966), historian Hugh Thomas, quoted by David Owen, CH, PC, FRCP, claimed that Eden had revealed to a colleague that he was "practically living on Benzedrine '' at the time.
In December 1976, Eden felt well enough to travel with his wife to the United States to spend Christmas and New Year with Averell and Pamela Harriman, but after reaching the States his health rapidly deteriorated. Prime Minister James Callaghan arranged for an RAF plane that was already in America to divert to Miami, to fly Eden home.
Eden died from liver cancer in Salisbury on 14 January 1977, aged 79. He was survived by Clarissa.
He was buried in St Mary 's churchyard at Alvediston, just three miles upstream from ' Rose Bower ', at the source of the River Ebble. Eden 's papers are housed at the University of Birmingham Special Collections.
At his death, Eden was the last surviving member of Churchill 's War Cabinet. Eden 's surviving son, Nicholas Eden, 2nd Earl of Avon (1930 -- 1985), known as Viscount Eden from 1961 to 1977, was also a politician and a minister in the Thatcher government until his premature death from AIDS at the age of 54.
Eden, who was well - mannered, well - groomed, and good - looking, always made a particularly cultured appearance. This gave him huge popular support throughout his political life, but some contemporaries felt he was merely a superficial person lacking any deeper convictions.
That view was reinforced by his very pragmatic approach to politics. Sir Oswald Mosley, for example, said he never understood why Eden was so strongly pushed by the Tory party, as he felt that Eden 's abilities were very much inferior to those of Harold Macmillan and Oliver Stanley. In 1947, Dick Crossman called Eden "that peculiarly British type, the idealist without conviction ''.
US Secretary of State Dean Acheson regarded Eden as a quite old - fashioned amateur in politics typical of the British Establishment. In contrast, Soviet Leader Nikita Khrushchev commented that until his Suez adventure Eden had been "in the top world class ''.
Eden was heavily influenced by Stanley Baldwin when he first entered Parliament. After earlier combative beginnings, he cultivated a low - key speaking style which relied heavily on rational argument and consensus - building rather than rhetoric and party point - scoring, and which was often highly effective in the House of Commons. However, he was not always an effective public speaker, and his parliamentary performances sometimes disappointed many of his followers, e.g., after Eden 's resignation from Chamberlain 's government. Churchill once even commented on one of Eden 's speeches that the latter had used every cliché except "God is love ''. This was deliberate: Eden often struck out original phrases from speech drafts and replaced them with clichés.
Eden 's inability to express himself clearly is often attributed to shyness and lack of self - confidence. Eden is known to have been much more direct in meeting with his secretaries and advisers than in Cabinet meetings and public speeches, and sometimes tended to become enraged and behave "like a child '', only to regain his temper within a few minutes. Many who worked for him remarked that he was "two men '', one charming, erudite, and hard - working, the other petty and prone to temper tantrums during which he would insult his subordinates.
As Prime Minister, Eden was notorious for telephoning ministers and newspaper editors from 6 am onwards. Rothwell writes that even before Suez, the telephone had become "a drug '' and that "During the Suez Crisis Eden 's telephone mania exceeded all bounds ''.
Eden was notoriously "unclubbable '' and offended Churchill by declining to join The Other Club. He also declined honorary membership in the Athenaeum. However, he maintained friendly relations with Opposition MPs; for example, George Thomas received a kind two - page letter from Eden on learning that his stepfather had died. Eden was a Trustee of the National Gallery (in succession to MacDonald) between 1935 and 1949. He also had a deep knowledge of Persian poetry and of Shakespeare and would bond with anybody who could display similar knowledge.
Rothwell writes that although Eden was capable of acting with ruthlessness, for instance over the repatriation of the Cossacks in 1945, his main concern, whether in that matter or over the Soviet reluctance to accept a democratic Poland in October 1944, was to avoid being seen as "an appeaser ''. Like many people, Eden persuaded himself that his past actions were more consistent than they had in fact been.
Recent biographies put more emphasis on Eden 's achievements in foreign policy and perceive him to have held deep convictions regarding world peace and security as well as a strong social conscience. Rhodes James applies to Eden Churchill 's famous verdict on Lord Curzon (in Great Contemporaries): "The morning had been golden; the noontime was bronze; and the evening lead. But all was solid, and each was polished until it shone after its fashion ''.
|
who wrote the song girl crush by little big town | Girl Crush - wikipedia
"Girl Crush '' is a song written by Lori McKenna, Hillary Lindsey and Liz Rose, and performed by American country music group Little Big Town. It was released on December 15, 2014 as the second single from their sixth studio album, Pain Killer.
As CMT 's Alison Bonaguro writes, "It 's more about wanting to taste her lips in order to taste him and to drown in her perfume, to have her long blond hair -- and so on -- all in an effort to get him back ''. Bonaguro also adds, "On the other hand, it could also be about a girl obsessing over the girl who is with the man that she wants but has never had. That mystery is unclear, which only adds to the allure of this song ''.
Lori McKenna stated that when she presented the idea to co-writer Liz Rose, Rose disliked it at first but changed her mind after hearing the first verse that Hillary Lindsey had written. Group members Kimberly Schlapman and Karen Fairchild heard the song and asked that it be saved for them.
"Girl Crush '' is in the key of C major and a 6 / 8 time signature, with an approximate tempo of 58 dotted quarter notes per minute and a primary chord pattern of C - Em - F-G. Karen Fairchild 's lead vocals range from G - A.
"Girl Crush '' received mostly positive reviews. CMT 's Alison Bonaguro writes, "If anyone knows how to write a country song that 's never been written before, it 's these three '': Lori McKenna, Hillary Lindsey, and Liz Rose. Billy Dukes of Taste of Country states, "Karen Fairchild 's lead on Little Big Town 's new single ' Girl Crush ' may be the best vocal performance of the year. Her anguish drives through you like a steam engine, and afterward she 's nothing but a puff of white smoke barely holding on to existence. '' Dukes adds, "listen to the song on repeat and find yourself exhausted... in the best possible way '' and "mass homogeny on the radio makes it seem impossible to create a sound or write a song that 's truly unique. It 's as if all the good ideas have been used up. Little Big Town prove this is not the case on each album they release. ''
"Girl Crush '' debuted at number 48 on the Billboard Hot Country Songs chart for the week ending November 8, 2014, before it was released as a single. It reached number one and remained there for thirteen consecutive weeks until it was knocked off by "Kick the Dust Up '' by Luke Bryan. It debuted at number 55 on Billboard 's Country Airplay chart for the week ending December 27, 2014. "Girl Crush '' sold 10,000 copies in its debut week of November 8, 2014. The song reached more than two million in sales by early February 2016. As of April 2017, "Girl Crush '' had sold 2,398,000 copies in the United States.
"Girl Crush '' is Little Big Town 's highest charting single on Billboard 's all - genre Hot 100 chart, having peaked at number 18 for the week ending May 9, 2015 chart, besting the number 22 peak of "Pontoon '' in 2012. It is also their highest showing on the Canadian Hot 100, besting the number 39 position of "Pontoon '' in 2012. It is also their longest - running number one single, 13 weeks atop Hot Country Songs. It broke the record set by The Browns ' 1959 hit "The Three Bells '' as the longest - running number one on the Billboard Hot Country Songs chart by a group of three or more members.
Some radio stations were reported to have pulled "Girl Crush '' from their playlists, in response to concerns from listeners who interpreted the song to be about lesbianism. In response, Fairchild said, "That 's just shocking to me, the close - mindedness of that, when that 's just not what the song was about... But what if it were? It 's just a greater issue of listening to a song for what it is. '' In addition, the label created a short commercial in which the band discusses the song and its actual meaning. Billboard consulted several radio program directors on its panel and found only one who detailed a specific complaint from a listener; the magazine concluded that the controversy surrounding the song was mostly fabricated.
The music video is in black - and - white. It was directed by Karla and Matthew Welch and premiered in April 2015. The music video was filmed in Los Angeles, California and produced by Meritocracy Inc. It was nominated for a 2016 Academy of Country Music Award for Video of the Year.
|
who owns treasure island hotel in las vegas | Treasure Island hotel and casino - Wikipedia
Treasure Island Hotel & Casino (also known as "TI '') is a hotel and casino located on the Las Vegas Strip in Paradise, Nevada, USA, with 2,664 rooms and 220 suites. It is connected by tram to The Mirage and by pedestrian bridge to the Fashion Show Mall shopping center. It is owned and operated by Phil Ruffin.
The hotel received the AAA Four Diamond rating each year from 1999 through 2013.
Treasure Island was opened by Mirage Resorts in 1993 under the direction of Steve Wynn and Atlandia Design (a Mirage Resorts subsidiary) at a cost of US $450 million. The initial plans called for a tower addition to The Mirage, but later evolved into a full - fledged separate hotel casino resort. Treasure Island originally intended to attract families with whimsical pirate features and icons such as the skull - and - crossbones strip marquee, a large video arcade, and staged pirate battles nightly in "Buccaneer Bay '' in front of the casino entrance on the Strip.
The resort was designed by architects Joel Bergman and Jon Jerde in collaboration with Steve Wynn, along with Roger Thomas, who designed the interior of Treasure Island Hotel and Casino.
In 2003, the hotel largely abandoned its pirate theme for a more contemporary resort choosing to provide primarily adult amenities and services. The original video arcade and kid - friendly pool areas were replaced with a party bar, hot tub, and nightclub. The famous skull - and - crossbones sign at the Strip entrance was replaced by a dual - purpose "TI '' marquee displaying the hotel logo and serving as a large LCD video screen. The exterior color of the hotel was also changed from a light orange to a darker maroon color.
On December 15, 2008, MGM Mirage announced the resort would be sold for US $775 million to Phil Ruffin, former owner of the New Frontier Hotel and Casino. Ruffin took full ownership of the hotel and casino resort on Friday, March 20, 2009.
On October 21, 2013, the Sirens of TI pirate battle show closed to make way for a new multi-level shopping and entertainment center, which opened in April 2015 with a 24 - hour CVS as the anchor tenant, along with the Marvel Avengers S.T.A.T.I.O.N. exhibit, which opened May 26, 2016.
On June 18, 2016, Michael Steven Sandford attempted to assassinate presidential candidate Donald Trump during a political rally held at Treasure Island.
The resort is home to Cirque du Soleil 's Mystère, which introduced the entertainment style of Franco Dragone. The show opened in 1993 as the original Cirque du Soleil production in Las Vegas. Mystère has been voted nine times as the best production show in the city by the Las Vegas Review Journal reader 's poll. With the sale of TI, it is the only hotel on the strip to host a Cirque du Soleil show that is not affiliated with MGM Resorts International.
Treasure Island opened with the free "Buccaneer Bay '' show in a large man - made lake fronting the resort along the Las Vegas Strip. Presented several times nightly with a large cast of stunt performers, the show depicted the landing and subsequent sacking of a Caribbean village by pirates, serving to attract gamblers from the Strip into the casino after each show, in the same fashion as its predecessor, the Wynn - conceived volcano fronting The Mirage casino. Notable special effects included a full - scale, manned British Royal Navy sailing ship that sailed nearly the full width of the property, a gas - fired "powder magazine '' explosion, pyrotechnics, and the sinking of the sailing ship "Britannia '' to the bottom along with its captain.
In 2003, "Buccaneer Bay '' was replaced with "Sirens ' Cove '' and the new show, "The Sirens of TI '' utilizing many of the technical elements of its predecessor. The live, free show was intended to appeal more to adults by including singing, dancing, audio - visual effects, bare - chested pirates and attractive women in the large outdoor show produced by Kenny Ortega.
The Sirens of TI was closed on October 21, 2013. The closure was initially intended to be temporary, but in November, it was made permanent, to the dismay of the show 's actors. The reason cited by Treasure Island was the construction of new retail space nearby.
|
liquids flow from higher level to lower level | Liquid logistics - wikipedia
Liquid logistics is a special category of logistics that relates to liquid products, and is used extensively in the "supply chain for liquids '' discipline.
Standard logistics techniques are generally used for discrete or unit products. Liquid products have logistics characteristics that distinguish them from discrete products. Some of the major characteristics of liquid products that impact their logistics handling are:
Each of these points represents a differentiation of liquid logistics from logistics techniques used for discrete items. When properly planned for and handled these points of differentiation may lead to business advantages for companies that produce, process, move, or use liquid products.
|
what are the names of the hunger games series | The Hunger Games (film series) - wikipedia
The Hunger Games film series consists of four science fiction dystopian adventure films based on The Hunger Games trilogy of novels, by the American author Suzanne Collins. Distributed by Lionsgate and produced by Nina Jacobson and Jon Kilik, it stars Jennifer Lawrence as Katniss Everdeen, Josh Hutcherson as Peeta Mellark, Liam Hemsworth as Gale Hawthorne, Woody Harrelson as Haymitch Abernathy, Elizabeth Banks as Effie Trinket, Stanley Tucci as Caesar Flickerman, and Donald Sutherland as President Snow. Gary Ross directed the first film, while Francis Lawrence directed the next three films.
The first three films set records at the box office. The Hunger Games (2012) set records for the opening day and the biggest opening weekend for a non-sequel film. The Hunger Games: Catching Fire (2013) set the record for biggest opening weekend in the month of November. The Hunger Games: Mockingjay -- Part 1 (2014) had the largest opening day and weekend of 2014. The films, including The Hunger Games: Mockingjay -- Part 2 (2015), received a positive reception from critics, with praise aimed at its themes and messages, as well as Jennifer Lawrence 's portrayal of the main protagonist, Katniss Everdeen.
The Hunger Games is the 18th highest - grossing film franchise of all time, having grossed over US $2.9 billion worldwide.
Following the release of Suzanne Collins ' novel The Hunger Games, on September 14, 2008, Hollywood film studios began looking to adapt the book into film. In March 2009, Color Force, an independent studio founded by producer Nina Jacobson, bought the film rights to the book. Jacobson then sought out production company Lionsgate to help her produce the film. Collins was also attached to adapt the novel; she began the first draft after completing the third novel in the series, Mockingjay (2010). The search for a director began in 2010 with three directors in the running: David Slade, Sam Mendes, and Gary Ross. Ross was ultimately chosen to direct. Once Collins had finished the script, Ross went through it with her and screenwriter Billy Ray.
In October 2010, scripts were sent to the actors, and casting occurred between March and May 2011. The first role cast was of the protagonist, Katniss Everdeen. As many as 30 actresses were in talks to play the part, with Jennifer Lawrence, Hailee Steinfeld, Abigail Breslin, and Chloë Grace Moretz being mentioned most. The role was given to Lawrence.
The roles of Peeta Mellark, Katniss ' fellow tribute, and Gale Hawthorne, her best friend, began casting later that month. Top contenders for Peeta included Josh Hutcherson, Alexander Ludwig (later cast as Cato), Hunter Parrish, Evan Peters, and Lucas Till. Contenders for Gale included Robbie Amell, Liam Hemsworth, David Henrie, and Drew Roy. On April 4, it was reported that Hemsworth had been cast as Gale, and Hutcherson had been cast as Peeta.
Filming for the franchise began on May 23, 2011 and finished on June 20, 2014.
Suzanne Collins and Louise Rosner acted as executive producers on the first two films. Other executive producers of the first film include Robin Bissell and Shantal Feghali. Co-producers are Diana Alvarez, Martin Cohen, Louis Phillips, Bryan Unkeless, and Aldric La'auli Porter. Color Force and Lionsgate collaborated on all four films. It was announced on November 1, 2012 that the studio had decided to split the final book, Mockingjay (2010), into two films: The Hunger Games: Mockingjay -- Part 1 (2014) and The Hunger Games: Mockingjay -- Part 2 (2015), much like Harry Potter and the Deathly Hallows -- Part 1 (2010) and 2 (2011), and The Twilight Saga: Breaking Dawn -- Part 1 (2011) and 2 (2012).
Gary Ross directed the first film (The Hunger Games), but despite initial statements otherwise, Lionsgate announced on April 10, 2012 that Ross would not return to direct the sequel. On April 19, 2012, it was confirmed that Francis Lawrence would direct the sequel instead, and on November 1, 2012, it was confirmed that he would return and direct the final two films in the series, based on the novel Mockingjay.
Suzanne Collins began adapting the first book to film after she finished writing Mockingjay. Collins had experience in writing screenplays after writing Clifford 's Puppy Days and other children 's television shows. When Gary Ross was announced as director for the film in 2010, he began to work with Collins and veteran writer Billy Ray to bring the novel to life. The script was long and resulted in a film running two hours and 20 minutes.
After Francis Lawrence took over as director, he brought in Simon Beaufoy and Michael Arndt to write the script for Catching Fire.
The final two films of the series were written by Danny Strong and Peter Craig.
Once the three leads were cast, casting shifted to the other tributes. Jack Quaid was cast as Marvel, Leven Rambin as Glimmer, Amandla Stenberg as Rue, and Dayo Okeniyi as Thresh. Alexander Ludwig (who auditioned for Peeta) was cast as Cato, Isabelle Fuhrman (who auditioned for Katniss) as Clove, and Jacqueline Emerson as Foxface. Following the casting of tributes, the adult cast began to come together. Elizabeth Banks was cast as Effie Trinket, the District 12 escort. Woody Harrelson was cast as Haymitch Abernathy, District 12 's mentor. Lenny Kravitz was cast as Cinna, Katniss ' stylist. Wes Bentley was cast as game maker Seneca Crane. Stanley Tucci was cast as Caesar Flickerman, Panem 's celebrity host. Donald Sutherland was cast as Coriolanus Snow, Panem 's President. Willow Shields was cast as Primrose Everdeen, Katniss ' younger sister.
In July 2012, the cast for the second film was announced. Jena Malone would play Johanna Mason. Philip Seymour Hoffman would play Plutarch Heavensbee, Sam Claflin would play Finnick Odair. It was later announced that Jeffrey Wright was cast as Beetee, Alan Ritchson as Gloss, Lynn Cohen as Mags, and Amanda Plummer as Wiress.
In August and September 2013, it was revealed that Stef Dawson would play Annie Cresta, Natalie Dormer would play Cressida, Evan Ross would play Messalla, and Julianne Moore would play President Alma Coin in the final two films.
Principal photography for The Hunger Games began on May 24, 2011 and concluded on September 15, 2011. The entirety of filming for the first movie took place in North Carolina, including the following cities: Asheville, Barnardsville, Black Mountain, Cedar Mountain, Charlotte, Concord, Hildebran and Shelby. All of the Games scenes were filmed on location. All of the Capitol scenes were filmed in a studio in Shelby and Charlotte, North Carolina.
Principal photography for The Hunger Games: Catching Fire began on September 10, 2012 in Atlanta, Georgia and concluded in April 2013. In November 2012, production moved to Hawaii to film the arena scenes. Filming took a Christmas break before filming resumed for two weeks in mid-January. In March 2013, the film went back to Hawaii for re-shoots. Atlanta was used for all the Capitol scenes, Hawaii for the arena scenes, and Oakland, New Jersey for District 12 scenes.
Principal photography for The Hunger Games: Mockingjay began on September 23, 2013 and concluded on June 20, 2014. The majority of filming for the Mockingjay films was filmed in soundstages in a studio in Atlanta, until April 18, 2014. Production then moved to Paris, France, with filming beginning there on May 5, 2014.
Philip Seymour Hoffman, who portrays Plutarch Heavensbee, was found dead on February 2, 2014. At the time of his death, he had completed filming his scenes for The Hunger Games: Mockingjay -- Part 1 and had a week left of shooting for Part 2. Lionsgate released a statement stating that, since the majority of Hoffman 's scenes were completed, the release date for Part 2 would not be affected.
Every year, in the ruins of what was once North America, the Capitol of the nation of Panem forces each of its 12 districts to send a teenage boy and girl, between the ages of 12 and 18, to compete in the Hunger Games: a nationally televised event in which ' tributes ' fight each other within an arena, until one survivor remains. When Primrose Everdeen is ' reaped ', her older sister Katniss Everdeen volunteers in her place to enter the games and is forced to rely upon her sharp instincts when she 's pitted against highly trained tributes.
Along with fellow District 12 victor Peeta Mellark, Katniss Everdeen returns home safely after winning the 74th Annual Hunger Games. Winning means that they must leave their loved ones behind and embark on a Victory Tour throughout the districts. Along the way Katniss senses a rebellion simmering - one that she and Peeta may have sparked - but the Capitol is still in control as President Snow prepares the 75th Hunger Games - the Quarter Quell - that could change Panem forever.
Katniss Everdeen finds herself in District 13 after she destroys the games forever. Under the leadership of President Alma Coin and the advice of her trusted friends, Katniss spreads her wings as she fights to save Peeta, along with other victors and a nation moved by her courage.
Realizing the stakes are no longer just for survival, Katniss Everdeen teams up with her closest friends and allies, including Peeta, Gale, and Finnick, for the ultimate mission. Together, they leave District 13 to liberate the citizens of war - torn Panem and assassinate President Snow.
All the Hunger Games films finished first at the North American box office during both their opening and second weekend. In North America, The Hunger Games film series is the second highest - grossing film series based on young adult books, after the Harry Potter series, earning over $1.4 billion. Worldwide, it is the third highest - grossing film series based on young - adult books after the film series of Harry Potter and The Twilight Saga, respectively, having grossed over $2.9 billion. In North America, it is the eighth highest - grossing film franchise of all time. Worldwide, it is the 15th or 16th highest - grossing film franchise of all time.
All The Hunger Games films received a "Fresh '' rating on the review aggregator website Rotten Tomatoes, with the first two films receiving a "Certified Fresh '' rating.
|
what modern country was home of the yellow river valley civilization | River valley civilization - wikipedia
A river civilization or river culture is an agricultural nation or civilization situated beside and drawing sustenance from a river. A river gives the inhabitants a reliable source of water for drinking and agriculture. Additional benefits include fishing, fertile soil due to annual flooding, and ease of transportation. The first great civilizations all grew up in river valleys.
Civilization first began around 3500 BC along the Tigris and Euphrates rivers in the Middle East; the name given to that civilization, Mesopotamia, means "land between the rivers ''. The Nile valley in Egypt had been home to agricultural settlements as early as 5500 BC, but the growth of Egypt as a civilization began around 3100 BC. A third civilization grew up along the Indus River around 2600 BC, in parts of what are now India and Pakistan. The fourth great river civilization emerged around 1700 BC along the Yellow River in China, also known as the Huang - He River Civilization.
Civilizations tended to grow up in river valleys for a number of reasons. The most obvious is access to a usually reliable source of water for agriculture and human needs. Plentiful water, and the enrichment of the soil due to annual floods, made it possible to grow excess crops beyond what was needed to sustain an agricultural village. This allowed for some members of the community to engage in non-agricultural activities such as construction of buildings and cities (the root of the word "civilization ''), metal working, trade, and social organization. Boats on the river provided an easy and efficient way to transport people and goods, allowing for the development of trade and facilitating central control of outlying areas.
Mesopotamia was the earliest river valley civilization, starting to form around 3500 BC. The civilization was created after regular trading started relationships between multiple cities and states around the Tigris and Euphrates Rivers. Mesopotamian cities became self - run civil governments. One of the cities within this civilization, Uruk, was the first literate society in history. Eventually, they all joined together to draw irrigation from the two rivers in order to make their dry land fertile for agricultural growth. The increase in successful farming in this civilization allowed population growth throughout the cities and states within Mesopotamia.
Egypt also created irrigation systems, more intricate than previous ones, using its local river, the Nile. The Egyptians would rotate legumes with cereals, which would stop salt buildup from the fresh water and enhance the fertility of their fields. The Nile also allowed easier travel within the civilization, eventually resulting in the creation of two kingdoms in the north and south areas of the river until both were unified into one society by 3000 BC.
Much of the history of the Indus valley civilization is unknown. Discovered in the 1920s, Harappan society remains a mystery because the Harappan system of writing has not yet been deciphered. It was larger than either Egypt or Mesopotamia. Historians have found no evidence of violence or a ruling class; there are no distinctive burial sites and there is not a lot of evidence to suggest a formal military. However, historians believe that the lack of knowledge about the ruling class and the military is mainly due to the inability to read Harappan writing.
The Yellow River (Huang Ho) area became settled around 4000 BC. Many tribes settled along the river, the sixth longest in the world, which was distinguished by its heavy load of yellow silt and its periodic devastating floods. A major impetus for the tribes to unite into a single kingdom by around 1500 BC was the desire to find a solution to the frequent deadly floods. The Yellow River is often called "The Cradle of Chinese Civilization ''.
|
what is the main economic activity in st elizabeth jamaica | Saint Elizabeth parish - Wikipedia
St. Elizabeth, one of Jamaica 's largest parishes, is located in the southwest of the island, in the county of Cornwall. Its capital, Black River, is located at the mouth of the Black River, the widest on the island.
Saint Elizabeth originally included most of the south - west part of the island, but in 1703 Westmoreland was taken from it and in 1814 a part of Manchester. The parish was named after the wife of Sir Thomas Modyford, the first English Governor of Jamaica.
There are archeological traces of Taíno / Arawak existence in the parish, as well as of 17th - century colonial Spanish settlements. After 1655, when the English settled on the island, they concentrated on developing large sugar cane plantations with enslaved African workers. Today, buildings with ' Spanish wall ' construction (masonry of limestone sand and stone between wooden frames) can still be seen in some areas.
St Elizabeth became a prosperous parish and Black River an important seaport. In addition to shipping sugar and molasses, Black River became the centre of the logging trade. Large quantities of logwood were exported to Europe to make a Prussian - blue dye, which was very popular in the 18th and 19th centuries.
St Elizabeth was the first parish to have electric power, where it was first introduced in a house called Waterloo in Black River in 1893.
The parish is located latitude 18 ° 15'N, and longitude 77 ° 56'W; to the west of Manchester, the east of Westmoreland, and to the south of St. James and Trelawny. It covers an area of 1212.4 km2, making it Jamaica 's second - largest parish, behind Saint Ann 's 1212.6 km2. The parish is divided into four electoral districts (constituencies), that is North - East, North - West, South - East and South - West.
The northern and northeastern parts of the parish are mountainous. There are three mountain ranges -- the Nassau Mountains to the north - east, the Lacovia Mountains to the west of the Nassau Mountains, and the Santa Cruz Mountains which, running south, divide the wide plain to end in a precipitous drop of 1,600 feet (490 m) at Lovers ' Leap. The central and southern sections form an extensive plain divided by the Santa Cruz Mountains. A large part of the lowlands is covered by morass, but it still provides grazing land for horses and mules.
The main river in the parish is the Black River, and measuring 53.4 kilometres (33.2 mi), it is one of the longest rivers in Jamaica. It is navigable for about 40 kilometres (25 mi), and is supported by many tributaries including Y.S., Broad, Grass and Horse Savannah. The river has its source in the mountains of Manchester where it rises and flows west as the border between Manchester and Trelawny then goes underground. It reappears briefly in several surrounding towns, but reemerges near Balaclava and tumbles down gorges to the plain known as the Savannah, through the Great Morass and to the sea at Black River, the capital of the parish.
The geology of the parish is primarily alluvial plains to the south, and karstic limestone to the north. The karstic zones are known to contain over 130 caves (Jamaica Cave Register as of 2007 - from Fincham and JCO). These include Mexico Cave and Wallingford River Cave, near Balaclava, which are two associated sections of a major underground river that has its source in south Trelawny, as well as Yardley Chase Caves near the foot of Lovers ' Leap, and Peru Cave, near Goshen, which has stalactites and stalagmites. Mineral deposits include bauxite, antimony, white limestone, clay, peat and silica sand which is used to manufacture glass.
The parish had an estimated population of 148,000 in 2001, 4000 of whom live in the capital town. The distinct feature of this parish is that numerous ethnic groups can be found here; St Elizabeth probably has the greatest ethnic mixture in Jamaica. St. Elizabeth provides the best testimony of the Jamaican motto -- "Out of many, one people ''. The Meskito (corrupted to ' Mosquito ') Indians, brought to Jamaica to help capture the Maroons, were allowed to settle in southern St. Elizabeth in return for their assistance and were given land grants in this parish. This parish has also attracted Dutch, Spanish, Indian, Maroon, mulatto, English, and European inhabitants from the 17th century onwards, with the result that many observers feel that it has more people of mixed - race ancestry than can be found in any other part of the island.
In the 19th century Irish, Spanish, Portuguese, Scottish, Germans, Chinese, and East Indians migrated to Saint Elizabeth. There are pockets of ethnic concentrations in the parish, including Mulatto and Creole, notably found in the southeast.
The parish has been a major producer of bauxite since the 1960s. Port Kaiser, near a town called Alligator Pond, has a leading deep - water pier for bauxite export. The Alpart alumina refinery was constructed in the 1960s at Nain and produces nearly 2 million tonnes of alumina annually for export. The replacement cost of building the refinery is approximately $2 billion.
There are other alumina refineries close to the nearby town of Mandeville.
Apart from bauxite mining, the parish also produces a large quantity of Jamaica 's sugar; there are two sugar factories in the parish. Fishing is a major industry in the parish, as is tomato canning; a plant is at Bull Savannah. The parish also cultivates crops such as cassava, corn, peas, beans, pimento, ginger, tobacco, tomato, rice, sweet potatoes and coffee. As a result of the fertile soil that provides grazing fields, pastoralism is possible. Livestock include goats, sheep, hogs, cattle, and horses.
Since the 1990s, the parish has become a significant tourist destination, with most visitors going to the Treasure Beach area. The Appleton rum distillery, near the rough Cockpit Country in the north of the parish, is also a tourist destination. The Cockpit area was the site of Maroon settlements through much of the 18th century. Ecological tourism along the Black and YS rivers, and in the Great Morass, has been developed in recent years.
The parish has 12 high schools and 75 primary level institutions as well as 167 early childhood institutions. Notable institutions include:
The Social Development Commission 's national grid of communities has sixty one communities in St. Elizabeth broken down into 465 districts. The communities which include major towns are:
Flagaman
St. Elizabeth has approximately 44 caves, including:
Coordinates: 18 ° 03 ′ N 77 ° 47 ′ W / 18.050 ° N 77.783 ° W / 18.050; - 77.783
|
what is the receiver on a bolt action rifle | Bolt action - wikipedia
Bolt action is a type of repeater firearm action in which the handling of cartridges into and out of the weapon 's barrel chamber is operated by manually manipulating the bolt handle, which is most commonly placed on the right - hand side of the weapon (as most users are right - handed). As the handle is operated, the bolt is unlocked and pulled back, opening the breech; the spent cartridge case is extracted and ejected; the firing pin within the bolt is cocked (either on opening or closing of the bolt, depending on the gun design) and engages the sear; then, as the bolt is pushed forward, a new cartridge (if available) is loaded into the chamber, and finally the breech is closed tight by the bolt locking against the receiver.
Bolt - action firearms are most often rifles, but there are some bolt - action variants of shotguns and a few handguns as well. Examples of this system date as far back as the early 19th century, notably in the Dreyse needle gun. From the late 19th century, all the way through both World Wars, the bolt - action rifle was the standard infantry firearm for most of the world 's military forces. In modern military and law enforcement use, the bolt action has been mostly replaced by semi-automatic and selective - fire firearms, though the bolt - action design remains dominant in dedicated sniper rifles due to inherently better precision, and are still very popular for civilian hunting and target shooting.
Compared to other manually operated firearm actions such as lever - action and pump - action, bolt action offers an excellent balance of strength (allowing powerful cartridge chamberings), ruggedness, reliability and accuracy, all with light weight and much lower cost than self - loading firearms. Bolt action firearms can also be disassembled and re-assembled for maintenance and repair much faster due to fewer moving parts. The major disadvantage is a slightly lower rate of fire than other types of manual repeating firearms, and a far lower practical rate of fire than semi-automatic weapons, though this is not a very important factor in many types of hunting, target shooting and other precision - based shooting applications.
The first bolt - action rifle was produced in 1824 by Johann Nikolaus von Dreyse, following work on breechloading rifles that dated to the 18th century. Von Dreyse would perfect his Nadelgewehr (Needle Rifle) by 1836, and it was adopted by the Prussian Army in 1841. It however was not the first bolt - action weapon to see combat as it was not fielded until 1864. In 1857 the United States purchased 900 Greene rifles (an under - hammer, percussion - capped, single - shot bolt action that utilized paper cartridges and an oval - bore rifling system), which saw service at the Battle of Antietam in 1862 during the American Civil War, but this weapon was ultimately considered too complicated for issue to soldiers and was supplanted by the Springfield Model 1861, a conventional muzzle - loading rifle. During the American Civil War, the bolt - action Palmer carbine was patented in 1863, and by 1865, 1000 were purchased for use as cavalry weapons. The French Army adopted its first bolt - action rifle, the Chassepot rifle, in 1866 and followed with the metallic - cartridge bolt - action Gras rifle in 1874.
European armies continued to develop bolt - action rifles through the latter half of the nineteenth century, first adopting tubular magazines as on the Kropatschek rifle and the Lebel rifle, a magazine system pioneered by the Winchester rifle of 1866. The first bolt - action repeating rifle was the Vetterli rifle of 1867 and the first bolt - action repeating rifle to use centerfire cartridges was the weapon designed by the Viennese gunsmith Ferdinand Fruwirth in 1871. Ultimately the military turned to bolt - action rifles using a box magazine; the first of its kind was the M1885 Remington -- Lee, but the first to be generally adopted was the British 1888 Lee -- Metford. World War I marked the height of the bolt - action rifle 's use, with all of the nations in that war fielding troops armed with various bolt - action designs.
During the buildup prior to World War II, the military bolt - action rifle began to be superseded by the semi-automatic rifle and later assault rifles, though bolt - action rifles remained the primary weapon of most of the combatants for the duration of the war; and many American units, especially USMC, used bolt - action ' 03 Springfields until sufficient M1 Garands were available. The bolt action is still common today among sniper rifles, as the design has potential for superior accuracy, reliability, lesser weight, and the ability to control loading over the faster rate of fire that alternatives allow. There are, however, many semi-automatic sniper rifle designs, especially in the designated marksman role.
Today, bolt - action rifles are chiefly used as hunting rifles. These rifles can be used to hunt anything from vermin to deer to large game, especially big game taken on a safari, as they are adequate to deliver a single lethal shot from a safe distance.
Bolt - action shotguns are considered a rarity among modern firearms, but were formerly a commonly used action for. 410 entry - level shotguns, as well as for low - cost 12 gauge shotguns. The M26 Modular Accessory Shotgun System (MASS) is the most advanced and recent example of a bolt - action shotgun, albeit one designed to be attached to an M16 rifle or M4 carbine using an underbarrel mount (although with the standalone kit, the MASS can become a standalone weapon). Mossberg 12 gauge bolt - action shotguns were briefly popular in Australia after the 1997 changes to firearms laws, but the shotguns themselves were awkward to operate and only had a three - round magazine, thus offering no practical and real advantages over a conventional double - barrel shotgun.
Some pistols utilize a bolt action, although this is uncommon, and such examples are typically specialized target handguns.
Most bolt - action designs use a turn - bolt design, which involves the shooter making an upward "turn '' followed by a rearward "pull '' of the handle to unlock the bolt and open the breech, cock the firing pin and begin cycling the next cartridge. There are three major turn - bolt action designs: the Mauser system, the Lee -- Enfield system, and the Mosin -- Nagant system. All three differ in the way the bolt fits into the receiver, how the bolt rotates as it is being operated, the number of locking lugs holding the bolt in place as the gun is fired, and whether the action is cocked on the opening of the bolt (as in the Mauser system) or the closing of the bolt (as in the Lee -- Enfield system). The vast majority of bolt - action rifles utilize one of these three systems, with other designs seeing only limited use.
The Mauser bolt action system was introduced in the Gewehr 98 designed by Paul Mauser, and is the most common bolt - action system in the world, being in use in nearly all modern hunting rifles and the majority of military bolt - action rifles until the middle of the 20th century. The Mauser system is stronger than that of the Lee -- Enfield due to two locking lugs just behind the bolt head which make it better able to handle higher pressure cartridges (i.e. magnum cartridges). The 8 × 68mm S and 9.3 × 64mm Brenneke magnum rifle cartridge "families '' were designed for the Mauser M 98 bolt action. A novel safety feature was the introduction of a third locking lug present at the rear of the bolt that normally did not lock the bolt, since it would introduce asymmetrical locking forces. The Mauser system features "cock on opening '', meaning the upward rotation of the bolt when the rifle is opened cocks the action. A drawback of the Mauser M 98 system is that it can not be cheaply mass - produced very easily. Many Mauser M 98 inspired derivatives feature technical alterations, such as omitting the third safety locking lug, to simplify production.
The controlled - feed Mauser M 98 bolt - action system 's simple, strong, safe, and well - thought - out design inspired other military and hunting / sporting rifle designs that became available during the 20th century, including the:
Versions of the Mauser action designed prior to the Gewehr 98 's introduction, such as that of the Swedish Mauser rifles and carbines, lack the third locking lug and feature a "cock on closing '' operation.
The Lee -- Enfield bolt - action system was introduced in 1889 with the Lee -- Metford and later Lee -- Enfield rifles (the bolt system is named after the designer James Paris Lee and the barrel rifling after the Royal Small Arms Factory at the London Borough of Enfield), and is a "cock on closing '' action in which the forward thrust of the bolt cocks the action. Since the Lee -- Enfield 's locking lugs are at the rear of the bolt, repeated firing over time can lead to receiver "stretch '' and excessive headspace; accordingly, the Lee -- Enfield bolt system features a removable bolthead, which allows the rifle 's headspace to be adjusted by simply removing the bolthead and replacing it with one of a different length as required. In the years leading up to WWII, the Lee -- Enfield bolt system was used in numerous commercial sporting and hunting rifles manufactured by such firms in the UK as BSA, LSA, and Parker -- Hale, as well as by SAF Lithgow in Australia. Vast numbers of ex-military SMLE Mk III rifles were sporterised post-WWII to create cheap, effective hunting rifles, and the Lee -- Enfield bolt system is used in the M10 and No 4 Mk IV rifles manufactured by Australian International Arms.
The Mosin -- Nagant action, created in 1891 and named after the designers Sergei Mosin and Léon Nagant, differs significantly from the Mauser and Lee -- Enfield bolt action designs. The Mosin -- Nagant design has a separate bolthead which rotates with the bolt and the bearing lugs, in contrast to the Mauser system where the bolthead is a non-removable part of the bolt. The Mosin -- Nagant is also unlike the Lee -- Enfield system where the bolthead remains stationary and the bolt body itself rotates. The Mosin -- Nagant bolt is a somewhat complicated affair, but is extremely rugged and durable; it, like the Mauser, uses a "cock on open '' system. Although this bolt system has been rarely used in commercial sporting rifles (the Vostok brand target rifles being the most recognized) and never outside of Russia, large numbers of military surplus Mosin -- Nagant rifles have been sporterized for use as hunting rifles in the years since WWII.
The Vetterli rifle was the first bolt action repeating rifle introduced by an army. It was used by the Swiss army from 1869 to circa 1890. Modified Vetterlis were also used by the Italian Army. Another notable design is the Norwegian Krag -- Jørgensen, which was used by Norway, Denmark, and briefly the United States. It is unusual among bolt - action rifles in that it is loaded through a gate on the right side of the receiver, and thus can be reloaded without opening the bolt. The Norwegian and Danish versions of the Krag have two locking lugs, while the American version has only one. In all versions, the bolt handle itself serves as an emergency locking lug. The Krag 's major disadvantage compared to other bolt - action designs is that it is usually loaded by hand, one round at a time, although a box - like device was made that could drop five rounds into the magazine, all at once. This made it slower to reload than other designs which used stripper or en - bloc clips. Another historically important bolt - action system was the Gras system, used on the French Mle 1874 Gras rifle and the Mle 1886 Lebel rifle, which was first to introduce ammunition loaded with nitrocellulose - based smokeless powder.
In addition to the turn - bolt action systems, other designs have been devised but failed to achieve the ubiquity of the turn - bolt Mauser, Lee -- Enfield and Mosin -- Nagant designs. Some of the most notable of these are the Canadian Ross rifle, the Swiss K31 and Austro - Hungarian Mannlicher M1895 designs. All three are straight - pull bolt actions, but are entirely unrelated designs.
In the Mauser - style turn - bolt action, the bolt handle must be rotated counter-clockwise, drawn rearward, pushed forward, and finally rotated clockwise back into lock. In a straight - pull action, the bolt can be cycled without rotating the handle, hence producing a reduced range of motion by the shooter from four movements to two, with the goal of increasing the rifle 's rate of fire. The Ross and Schmidt -- Rubin rifles load via stripper clips, albeit of an unusual paperboard and steel design in the Schmidt -- Rubin rifle, while the Mannlicher M1895 uses en - bloc clips. The Schmidt -- Rubin series, which culminated in the K31, are also known for being among the most accurate military rifles ever made. Yet another variant of the straight - pull bolt action, of which the M1895 Lee Navy is an example, is a camming action in which pulling the bolt handle causes the bolt to rock, freeing a stud from the receiver and unlocking the bolt.
More recently the German Blaser company has introduced a new straight - pull action where locking is achieved by a series of concentric "claws '' that protrude / retract from the bolthead, a design that is referred to as Radialbundverschluss ("radial connection '').
In the sport of biathlon, because semiautomatic guns are illegal for race use, straight - pull actions are quite common, and are used almost exclusively on the world cup along with the Lateral Toggle action. The first company to make the straight pull action for. 22 caliber was J.G. Anschütz; the action is specifically the straight pull ball bearing lock action, which features spring - loaded ball bearings on the side of the bolt which lock into a groove inside the bolt 's housing. With the new design came a new dry - fire method; instead of the bolt being turned up slightly, the action is locked back to catch the firing pin. The two companies who have made the lateral toggle are Finn biathlon, as well as Izhmash. Finn was the first to make this type of action, however, due to the large swing of the arm as well as the stiffness of the bolt, these rifles fell out of favour and have been discontinued. Izhmash improved on the lateral swing with their Biathlon 7 - 3 and 7 - 4 series rifles, which have some use on world cups, but are largely thought of as inaccurate as well as having the inconvenience of having to remove the shooter 's hand from the grip.
Typically, the bolt consists of a tube of metal inside of which the firing mechanism is housed, and which has at the front or rear of the tube several metal knobs, or "lugs '', which serve to lock the bolt in place. The operation can be done via a rotating bolt, a lever, cam - action, locking piece, or a number of systems. Straight - pull designs have seen a great deal of use, though manual turn - bolt designs are what is most commonly thought of in reference to a bolt - action design due to the type 's ubiquity. As a result, the bolt - action term is often reserved for more modern types of rotating bolt - designs when talking about a specific weapon 's type of action. However, both straight - pull and rotating bolt rifles are types of bolt - action rifles. Lever - action and pump - action weapons must still operate the bolt, but they are usually grouped separately from bolt - actions that are operated by a handle directly attached to a rotating bolt. Early bolt - action designs, such as the Dreyse needle gun and the Mauser Model 1871, locked by dropping the bolt handle or bolt guide rib into a notch in the receiver; this method is still used in. 22 rimfire rifles. The most common locking method is a rotating bolt with two lugs on the bolt head, which was used by the Lebel Model 1886 rifle, Model 1888 Commission Rifle, Mauser M 98, Mosin -- Nagant and most bolt - action rifles. The Lee -- Enfield has a lug and guide rib, which lock on the rear end of the bolt into the receiver.
Most bolt - action firearms are fed by an internal magazine loaded by hand, by en bloc, or stripper clips, though a number of designs have had a detachable magazine or independent magazine, or even no magazine at all, thus requiring that each round be independently loaded. Generally, the magazine capacity is limited to between two and ten rounds, as it can permit the magazine to be flush with the bottom of the rifle, reduce the weight, or prevent mud and dirt from entering. A number of bolt - actions have a tube magazine, such as along the length of the barrel. In weapons other than large rifles, such as pistols and cannons, there were some manually operated breech loading weapons. However, the Dreyse Needle fire rifle was the first breech - loader to use a rotating bolt design. Johann Nicholas von Dreyse 's rifle of 1838 was accepted into service by Prussia in 1841, which was in turn developed into the Prussian Model 1849. The design was a single - shot breech loader, and had the now familiar arm sticking out from the side of the bolt, to turn and open the chamber. The entire reloading sequence was a more complex procedure than later designs, however, as the firing pin had to be independently primed and activated, and the lever was only used to move the bolt.
Bolt - action firearms can theoretically achieve higher muzzle velocity and therefore have more accuracy than semi-automatic rifles because of the way the barrel is sealed. In a semi-automatic rifle, some of the energy from the charge is directed towards ejecting the spent shell and loading a new cartridge into the chamber. In a bolt action, the shooter performs this action by manually operating the bolt, allowing the chamber to be better sealed during firing, so that much more of the energy from the expanding gas can be directed forward. However, numerous other factors related to design and ammunition affect reliability and accuracy, and well designed modern semi-automatic rifles can be exceptionally accurate. Because of the combination of relatively light weight, reliability, high potential accuracy and lower cost, the bolt action is still the design of choice for many hunters, target shooters and marksmen.
The bolt action 's locking lugs are normally at the front of the breech (some designs have additional "safety lugs '' at the rear), and this increases potential accuracy relative to a design which locks the breech at the rear, such as a lever action. Also, a bolt action 's only moving parts when firing are the pin and spring. Since it has fewer moving parts and a short lock time, it has less of a chance of being thrown off target and / or malfunctioning.
Because the spent cartridge is removed by manual action rather than automatically ejected, it can help a marksman remain hidden. Because the cartridge is not visibly flung into the air and onto the ground, a bolt action may be less likely to reveal a shooter 's position. Also, the cartridge can be removed when most prudent, allowing the shooter to remain still until reloading is tactically feasible. Bolt actions are also easier to operate from a prone position than other manually repeating mechanisms and work well with box magazines, which are easier to fill and maintain than tubular magazines.
The integral strength of the design means very powerful magnum cartridges can be chambered without significantly increasing the size or weight of the weapon. For example, some of the most powerful elephant guns are in the same weight range (7 -- 10 lbs.) as a typical deer rifle, while delivering several times the kinetic energy to the target. The recoil of these weapons, however, is correspondingly severe. One well known example is bolt - action rifles designed for the. 223 Remington, which can usually safely fire the more powerful 5.56 × 45mm NATO, while auto - loaders might malfunction. By contrast, the operating mechanism of a semi-automatic weapon must increase in mass and weight as the cartridge it fires increases in power. This means that semi-automatic rifles firing magnum cartridges tend to be relatively heavy and impractical for many types of hunting.
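As a rough, illustrative check on that kinetic - energy comparison, muzzle energy follows the standard formula below; the bullet masses and velocities used are assumed round figures for a typical deer cartridge and a heavy dangerous - game load, not published ballistics for any particular product.

E = \tfrac{1}{2} m v^{2}

E_{\text{deer}} \approx \tfrac{1}{2}(0.010\ \mathrm{kg})(800\ \mathrm{m/s})^{2} = 3.2\ \mathrm{kJ}

E_{\text{heavy}} \approx \tfrac{1}{2}(0.060\ \mathrm{kg})(600\ \mathrm{m/s})^{2} = 10.8\ \mathrm{kJ}

On these assumed figures the heavier load delivers roughly three times the muzzle energy, while the rifles firing both can remain in the same 7 -- 10 lb weight range because a manually operated bolt does not have to grow heavier as cartridge power increases.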
The bolt action requires four distinct movements and is therefore generally slower than other major manual repeating mechanisms, such as lever and pump action, which generally require two movements, although straight - pull bolt actions also require only two distinct movements. In addition, the trigger hand must leave the gun and regrip the weapon after each shot, usually resulting in the shooter having to realign his sight and reacquire the target for every shot. It is also not ambidextrous, and left - handed models tend to be more expensive.
On used bolt - action firearms, especially, the headspace should be checked with headspace gauges prior to shooting to ensure it is correct, and to prevent over-stressing chambers and cartridge brass. Some bolt - action rifles, such as the Lee -- Enfield, have a series of different length bolts available to extend the service life of the rifle, for taking up any wear of the bolt and chamber occurring from long years of service. In the case of the No. 4 Lee -- Enfield bolt, the bolt heads themselves are replaceable separate from the bolt and are marked 0, 1, 2, or 3, with each bolt head in sequence being nominally 0.003 '' longer than the bolt head numbered one less, for easily taking up any action stretching that may have occurred. It is possible to replace such a bolt head without tools by disassembling the bolt from the action, unscrewing the bolt head, and replacing the bolt head with the next higher number bolt head, for restoring a safe headspace.
The interrelated mechanics of safe trigger function, correct headspace, and equal bearing of the locking lugs requires that the bolt and action assembly are factory "fitted ''. This is usually shown by the rifle serial number, applied to both bolt and action, indicating that they are a matched pair. Accidental or deliberate swapping of bolts between similar rifles is not unusual, but is potentially dangerous. Any rifle with mismatched action / bolt serial numbers should be considered unsafe to fire until checked and so marked by a competent gunsmith or armourer.
Furthermore, there are many subtle issues involving the provenance of a rifle and its ammunition. Many calibres have dual civilian / military uses but are not completely identical -- e.g. the. 308 Winchester / 7.62 mm NATO and. 223 Remington / 5.56 mm NATO have very slight differences in chamber sizes. Military ammunition often has thicker brass and harder primers. Over the major wars, literally millions of surplus rifles were converted to civilian use (sporterized); many may be unsafe with modern ammunition -- caution is required with any ex-military bolt action.
|
when did india participate in fifa world cup | History of the India national football team - wikipedia
The history of the India national football team dates back to the 1930s. They have never played in the World Cup; although they qualified in 1950, they later withdrew, and they have not appeared in the tournament since. India have never won the Asian Championship, but managed their best ever finish by reaching the final of the 1964 AFC Asian Cup. They have only made three appearances since.
Indian teams started touring Australia, Japan, Indonesia and Thailand in the late 1930s. Soon after the success of several Indian football clubs, the All India Football Federation (AIFF) was formed in 1937. The 1948 London Olympics was India 's first major international tournament, where a predominately barefooted Indian team lost 2 -- 1 to France, failing to convert two penalties. The Indian team was greeted and appreciated by the crowd for their sporting manner.
India qualified by default for the 1950 FIFA World Cup finals as a result of the withdrawal of all of their scheduled opponents. But the governing body AIFF decided against going to the World Cup, being unable to understand the importance of the event at that time. Reasons cited by the AIFF included the cost of travel (despite the fact that FIFA agreed to bear a major part of the travel expenses), lack of practice time, team selection issues and valuing the Olympics over the FIFA World Cup.
Although FIFA imposed a rule banning barefoot play following the 1948 Olympics, where India had played barefoot, the popular belief that the Indian team refused to play because they were not allowed to play barefoot is not entirely true; according to the then Indian captain Sailen Manna, it was just a story to cover up the disastrous decision of the AIFF. The team has never since come close to qualifying for the World Cup.
The period from 1951 to 1964 is considered the golden era in Indian football. Under the tutelage of legendary Syed Abdul Rahim India became the best team in Asia. The Indian team started the 1950s with their triumph in the 1951 Asian Games which they hosted. India beat both Indonesia and Afghanistan 3 -- 0 to reach the final where they beat Iran 1 -- 0. In 1952, India continued their form by winning the Colombo Quadrangular Cup held in Sri Lanka.
Later that year they went on to participate in the 1952 Olympics, but lost 10 -- 1 to Yugoslavia. As they had four years earlier, many of the team played without boots. After this result, the AIFF immediately made it mandatory to wear boots.
India also won three further editions of the Quadrangular Cup Colombo Cup, which were held in Burma, Calcutta and Dhaka in 1953, 1954 and 1955 respectively. India then went on to finish second in the 1954 Asian Games held in Manila.
At the 1956 Olympic Games they finished fourth, which is regarded as one of the finest achievements in Indian football. India first met hosts Australia, winning 4 -- 2 with Neville D'Souza becoming the first Asian to score a hat - trick in the Olympics and also making India the first Asian team to reach the Olympic semi-finals. They lost 4 -- 1 to Yugoslavia, and lost the third place play - off match 3 -- 0 to Bulgaria.
India later participated in the 1958 Asian Games in Tokyo where they finished fourth, and the Merdeka Cup 1959 in Malaysia finishing second.
India started off 1960 with Asian Cup qualifiers in which they failed to qualify. India went on to win the 1962 Asian Games where they beat South Korea 2 -- 1 in the final, and two years later finished second in the Asian Cup which was held in round - robin format. India played in the Merdeka Cup in 1964, 1965 and 1966 where they finished 2nd, 3rd and 3rd respectively.
India later played in the Asian Games in 1966 in Bangkok but were eliminated in first round. India took third place in the 1970 Asian Games, beating Japan 1 -- 0 in the third place, play - off but have since qualified for other major tournaments, other than as host, only once after that.
Failure in a series of qualification tournaments meant that the next time India reach a quarter - final stage was as host in the 1982 Asian Games.
In 1984 India qualified for the Asian Cup again, but failed to make any impact. India won gold medals in the SAF Games of both 1984 (in Dhaka) and 1987 (Calcutta). They won the inaugural SAARC Cup in 1993 in Lahore, and finished runner - up in Colombo two years later. By 1997 the competition had been renamed as the SAFF Cup, and India won it in both 1997 and 1999 edition, when they hosted it in Goa.
Although India failed to qualify for the 2004 Asian Cup, the senior team shone in a silver medal - winning performance in the inaugural Afro Asian Games, with victories over Rwanda and Zimbabwe (then 85 places ahead of India in the world rankings) along the way, losing the final by just 1 -- 0 to Uzbekistan.
As a result, Indian football has steadily earned greater recognition and respect, both within the country and abroad. In November 2003, India coach Stephen Constantine was named AFC Manager of the Month.
India could not do much when they lost to Pakistan and Bangladesh in the 2003 SAFF Cup, and defeats in the 2006 World Cup qualifiers meant Stephen Constantine was sacked. The LG Cup win in Vietnam under Constantine was one of the few bright spots in the early part of the 2000s. It was India 's first victory in a football tournament outside the subcontinent after 1974. India defeated hosts Vietnam 3 -- 2 in the final despite trailing 2 -- 0 after 30 minutes.
In 2005 Syed Nayeemuddin was appointed as India coach but he was sacked the following year after heavy defeats in 2007 AFC Asian Cup qualifiers. Bob Houghton was later appointed coach of team in 2006.
His appointment saw general progress in India 's performances, crowned by victory in the 2007 Nehru Cup in August 2007. Houghton led India to the 2008 AFC Challenge Cup title as they beat Tajikistan 4 -- 1 in August 2008. Winning the AFC Challenge Cup eventually qualified them for the AFC Asian Cup for the first time since 1984. He also guided the Indian team to its second consecutive Nehru Cup trophy by winning the 2009 Nehru Cup.
In 2011, India started off their campaign by participating in 2011 AFC Asian Cup for which they qualified after 24 years. They were placed in strong Group C along with South Korea, Australia and Bahrain. India lost all three matches but did manage to perform well in patches. Goalkeeper Subrata Pal won a lot of accolades for his performances.
India played its first match in 2012 AFC Challenge Cup qualification on March 21, winning 3 -- 0 against Chinese Taipei, with Jewel Raja Shaikh, Sunil Chhetri and Jeje Lalpekhlua scoring the goals. On March 23 they faced Pakistan. India came from behind and defeated Pakistan 3 -- 1 with Jeje Lalpekhlua scoring 2 goals and Steven Dias scoring one. On March 25 they faced Turkmenistan in their last 2012 AFC Challenge Cup qualifying game, and India drew the game 1 -- 1. The result meant that they finished on top of Group B and qualified for the 2012 AFC Challenge Cup. The Indian senior football team defeated Qatar 2 -- 1 in an international friendly before the start of the World Cup qualifier against the UAE (United Arab Emirates). India went on to lose the qualifying encounter by 5 - 2 on aggregate over two legs, having contentiously suffered two red cards and two converted penalties in the first 23 minutes of the opening leg, which the UAE won by 3 - 0. Afterwards, the Indian national team went on a friendly tour to the Caribbean Islands, which turned out to be very unsuccessful.
India then participated in the 2011 SAFF Championship in December 2011 and came out as champions after beating Afghanistan 4 -- 0. This was followed by India playing a testimonial match for Baichung Bhutia against Bayern Munich on 10 January 2012, in which India were beaten 4 -- 0 by the German team.
|
what happened to ziva in ncis season 11 | NCIS (season 11) - wikipedia
The eleventh season of the police procedural drama NCIS premiered on September 24, 2013, in the same time slot as the previous seasons, Tuesdays at 8 pm. Special Agent Ziva David (de Pablo) departs during the season, with her final appearance being in "Past, Present and Future ''. The episode "Crescent City (Part I) '', which aired on March 25, 2014, serves as the first of a two - part backdoor pilot of a second spin off from NCIS called NCIS: New Orleans, based in New Orleans.
On February 1, 2013, CBS renewed NCIS for this season. On the same date, Mark Harmon extended his contract on the show with a new "multiyear deal '' with CBS. It was announced on July 10, 2013 that Cote de Pablo, who plays Ziva David, had chosen not to return for the eleventh season as a regular; she would appear in enough episodes to close out her character 's storylines. Because of de Pablo 's exit, showrunner Gary Glasberg had to change his planned storyline for season 11. "Someone asked me if I was planning for this, but I really was n't, so basically the minute that this became real, I had to throw out a lot of what I was planning to do and start from scratch ''. Glasberg has stated that there will be rotating characters coming to fill Ziva 's role.
The theme for the season is "unlocking demons '', both figurative and literal according to Glasberg. A "pretty interesting adversary '' about the theme will be introduced, and "that will carry through the season ''.
Colin Hanks returned for the premiere episode as Defense Department investigator Richard Parsons, a character introduced at the end of the tenth season, while Marina Sirtis returned in the second episode as Mossad Director Orli Elbaz. Joe Spano also reprised his role as Senior FBI Agent Tobias C. Fornell in the first two episodes. Muse Watson (as Mike Franks) appeared in the fourth episode. Ralph Waite (as Gibbs ' father Jackson Gibbs) and Robert Wagner (as Tony 's father Anthony DiNozzo, Sr.) are also confirmed to return. Diane Neal reprises her role CGIS Special Agent Abigail Borin in episode six "Oil & Water ''.
The second episode includes a character named Sarah Porter, played by Leslie Hope, who is the new Secretary of the Navy, while Margo Harshman has been cast in a potentially recurring role as Timothy McGee 's girlfriend, Delilah Fielding. The third episode introduces retiring NCIS Special Agent Vera Strickland (Roma Maffia) who has known Gibbs for many years.
On August 13, 2013, "casting intel '' on a new female character named Bishop was published, with filming scheduled for mid-October. Bishop is described as a "twentysomething female; bright, educated, athletic, attractive, fresh - faced, focused and somewhat socially awkward. She has a mysterious mixture of analytic brilliance, fierce determination and idealism. She 's traveled extensively, but only feels comfortable at home. '' Emily Wickersham was cast to play the character, named NSA Analyst Ellie Bishop. Wickersham was promoted to the main cast two weeks prior to her debut appearance. Her first appearance is in episode nine, "Gut Check ''.
Scott Bakula was considered for an undisclosed recurring role this season. He was later cast as Dwayne Pride in NCIS: New Orleans.
|
what are the nine regions of ancient greece | Geographic regions of Greece - wikipedia
The traditional geographic regions of Greece (Greek: γεωγραφικά διαμερίσματα, literally "geographic departments '') are the country 's main historical - geographic regions, and were also official administrative regional subdivisions of Greece until the 1987 administrative reform. Despite their replacement as first - level administrative units by the newly defined administrative regions (Greek: περιφέρειες), the nine traditional geographic divisions -- six on the mainland and three island groups -- are still widely referred to in unofficial contexts and in daily discourse.
As of 2011, the official administrative divisions of Greece consist of 13 regions (Greek: περιφέρειες) -- nine on the mainland and four island groups -- which are further subdivided into 74 regional units and 325 municipalities. Formerly, there were also 54 prefectures or prefectural - level administrations.
The regions shown on the map but not in the list are geographic regions, but they are not major.
|
how many rivers were formed from the river of eden | Pennines - wikipedia
The Pennines / ˈpɛnaɪnz /, also known as the Pennine Chain or Pennine Hills, are a range of mountains and hills in England separating North West England from Yorkshire and North East England.
Often described as the "backbone of England '', the Pennine Hills form a more - or-less continuous range stretching northwards from the Peak District in the northern Midlands, through the South Pennines, Yorkshire Dales and North Pennines up to the Tyne Gap, which separates the range from the Cheviot Hills. Some definitions of the Pennines also include the Cheviot Hills while excluding the southern Peak District. South of the Aire Gap is a western spur into east Lancashire, comprising the Rossendale Fells, West Pennine Moors and the Bowland Fells in North Lancashire. The Howgill Fells in Cumbria are sometimes considered to be a Pennine spur to the west of the range. The Pennines are an important water catchment area with numerous reservoirs in the head streams of the river valleys.
The region is widely considered to be one of the most scenic areas of the United Kingdom. The North Pennines and Nidderdale are designated Areas of Outstanding Natural Beauty (AONB) within the range, as are Bowland and Pendle Hill. Parts of the Pennines are incorporated into the Peak District National Park and the Yorkshire Dales National Park. Britain 's oldest long - distance footpath, the Pennine Way, runs along most of the Pennine Chain and is 268 miles (429 km) long.
Various etymologies have been proposed treating "Pennine '' as though it were a native Brittonic / Modern Welsh name related to pen - ("head ''). In fact, it did not become a common name until the 18th century and almost certainly derives from modern comparisons with the Apennine Mountains, which run down the middle of Italy in a similar fashion.
Following an 1853 article by Arthur Hussey, it has become a common belief that the name derives from a passage in The Description of Britain (Latin: De Situ Britanniæ), an infamous historical forgery concocted by Charles Bertram in the 1740s and accepted as genuine until the 1840s. In 2004, George Redmonds reassessed this, finding that numerous respected writers passed over the origin of the mountains' name in silence even in works dedicated to the topological etymology of Derbyshire and Lancashire. He found that the derivation from Bertram's forgery was widely believed and considered uncomfortable. In fact, he found repeated comparisons with the Apennines going back at least as early as Camden, many of whose placenames and ideas Bertram incorporated into his work. Bertram was responsible (at most) for popularizing the name against other contenders such as Daniel Defoe's "English Andes". His own form of the name was the "Pennine Alps" (Alpes Peninos), which today is used for a western section of the continental Alps. Those mountains derive their name from the Latin Alpes Pœninæ, named for the St Bernard Pass, whose name has been variously derived from the Carthaginians, a local god, and Celtic peninus. This was also the pass used in the invasions of Italy by the Gallic Boii and Lingones in 390 BC. The etymology of the Apennines themselves -- whose name first referred to their northern extremity and then later spread southward -- is also disputed but is usually taken to derive from some form of Celtic pen or ben ("mountain, head").
Various towns and geographical features within the Pennines retain Celtic names, including Penrith, the fell Pen - y - ghent, the River Eden, and the area of Cumbria. More commonly, local names result from later Anglo - Saxon and Norse settlements. In Yorkshire and Cumbria, many words of Norse origin, not commonly used in standard English, are part of everyday speech: for example, gill / ghyll (narrow steep valley), beck (brook or stream), fell (hill), and dale (valley).
The northern Pennine range is bordered by the foothills of the Lake District, and uplands of the Howgill Fells and Tyne Gap. The Forest of Bowland is a western spur while the Howgill Fells are sometimes considered to be part of the Pennines. The Pennines are fringed by extensive lowlands including the Eden Valley, West Lancashire Coastal Plain, Cheshire Plain, Vale of York, and the Midland Plains.
Most of the Pennine landscape is characterised by upland areas of high moorland indented by more fertile river valleys, although the landscape varies in different areas. The Peak District consists of hilly plateaus, uplands and valleys, divided into the Dark Peak with moorlands and gritstone edges, and the White Peak with limestone gorges. The moorlands of the Dark Peak extend into the South Pennines, a hilly landscape with narrow valleys between the Peak District, Forest of Bowland and Yorkshire Dales. Bowland is dominated by a central upland landform of deeply incised gritstone fells covered with tracts of heather - covered peat moorland, blanket bog and steep - sided wooded valleys linking the upland and lowland landscapes. The landscape becomes higher and more mountainous in the Yorkshire Dales and North Pennines. The Yorkshire Dales are characterised by moorlands, river valleys, hills, fells and mountains while the North Pennines consist of high upland plateaus, moorlands, fells, edges and valleys with most of the area containing flat topped hills while the higher peaks are in the western half.
Although the Pennines mostly cover the area between the Tyne Gap and the Peak District, the presence of the Pennine Way affects the northern and southern extents of the defined area. The Cheviot Hills, separated by the Tyne Gap and the Whin Sill, along which run the A69 and Hadrian 's Wall, are not part of the Pennines but, perhaps because the Pennine Way crosses them, they are often treated as such. As a result, the northern end of the Pennines may be considered to be either at the Tyne Gap or the Cheviot Hills across the Anglo - Scottish border. Conversely the southern end of the Pennines is commonly said to be in the High Peak of Derbyshire at Edale, the start of the Pennine Way. However, the range and its foothills continue as far south as Stoke - on - Trent in northern Staffordshire and Derby in southern Derbyshire.
Rising less than 3,000 feet (900 m), the Pennines are often referred to as fells, with the majority of mountainous terrain in the north. The highest is Cross Fell in eastern Cumbria, at 2,930 feet (893 m), while other principal peaks at the North Pennines include Great Dun Fell 2,782 ft (848 m), Mickle Fell 2,585 ft (788 m), and Burnhope Seat 2,451 ft (747 m). Principal peaks at the Yorkshire Dales include Whernside 2,415 ft (736 m), Ingleborough 2,372 ft (723 m), High Seat 2,328 ft (710 m), Wild Boar Fell 2,324 ft (708 m) and Pen - y - ghent 2,274 ft (693 m). Principal peaks at the Forest of Bowland include Ward 's Stone 1,841 ft (561 m), Fair Snape Fell 1,710 ft (521 m), and Hawthornthwaite Fell 1,572 ft (479 m). Although terrain is lower towards the south, principal peaks at the South Pennines and Peak District include Kinder Scout 2,087 ft (636 m), Bleaklow 2,077 ft (633 m), Black Chew Head 1,778 ft (542 m), Rombalds Moor 1,319 ft (402 m) and Winter Hill 1,496 ft (456 m).
For much of their length the Pennines form the main watershed in northern England, dividing east and west. The rivers Eden, Ribble, Dane and tributaries of the Mersey (including the Irwell, Tame and Goyt) flow westwards towards the Irish Sea. On the eastern side of the watershed, the rivers Tyne, Tees, Wear, Swale, Ure, Nidd, Wharfe, Aire, Calder and Don rise in the region and flow eastwards to the North Sea. The River Trent, however, rises on the western side of the Pennines before flowing around the southern end of the range and up the eastern side; together with its tributaries (principally the Dove and Derwent) it thus drains both east and west sides of the southern end to the North Sea.
The Pennine climate is generally temperate like that of the rest of England, but the hills have more precipitation, stronger winds and colder weather than the surrounding areas. Some areas could be described as Oceanic climate verging on Subarctic climate and a small area in Teesdale is classified as subarctic. More snow falls on the Pennines than on surrounding lowland areas due to the elevation and distance from the coast; unlike lowland areas of England, the Pennines can have quite severe winters.
The northwest is amongst the wettest regions of England and much of the rain falls on the Pennines. The eastern side is drier than the west -- the rain shadow shields northeast England from rainfall that would otherwise fall there.
Precipitation is important for the area 's biodiversity and human population. Many towns and cities are located along rivers flowing from the hills and in northwest England the lack of natural aquifers is compensated for by reservoirs.
Water has carved out gorges, caves and limestone landscapes in the Yorkshire Dales and Peak District. In some areas precipitation has contributed to poor soils, resulting in part in moorland landscapes that characterize much of the range. In other areas where the soil has not been degraded, it has resulted in lush vegetation.
For the purpose of growing plants, the Pennines are in hardiness zones 7 and 8, as defined by the USDA. Zone 8 is common throughout most of the UK, and zone 7 is the UK 's coldest hardiness zone. The Pennines, Scottish Highlands, Southern Uplands and Snowdonia are the only areas of the UK in zone 7.
The Pennines have been carved from a series of geological structures whose overall form is a broad anticline whose axis extends in a north -- south direction. The North Pennines are coincident with the Alston Block, whilst the Yorkshire Dales are coincident with the Askrigg Block. In the south the Peak District is essentially a flat - topped dome. Each of these structures consists of Carboniferous limestone overlain with Millstone Grit. The limestone is exposed at the surface in the North Pennines and Yorkshire Dales to the north of the range and the Peak District to the south. In the Yorkshire Dales and the White Peak the limestone exposure has led to the formation of large cave systems and watercourses.
In the Dales the caves or potholes are known as "pots '' in the Yorkshire dialect. They include some of the largest caves in England at Gaping Gill, more than 350 ft (107 m) deep and Rowten Pot, 365 ft (111 m) deep. Titan in the Peak District, the deepest shaft known in Britain, is connected to Peak Cavern in Castleton, Derbyshire, the largest cave entrance in the country. Erosion of the limestone has led to some unusual geological formations, such as the limestone pavements at Malham Cove. Between the northern and southern areas of exposed limestone (between Skipton and the Dark Peak) lies a narrow belt of exposed gritstone. Here the shales and sandstones of the Millstone Grit form high hills occupied by a moorland of bracken, peat, heather and coarse grasses; the higher ground is uncultivable and barely fit for pasture.
The area contains many Bronze Age settlements, and evidence of Neolithic settlement (including many stone circles or henges, such as Long Meg and Her Daughters).
The Pennines were controlled by the tribal federation of the Brigantes, made up of mainly small tribes who inhabited the area and cooperated on defence and external affairs. The Brigantes evolved an early form of kingdom. During Roman times, the Brigantes were dominated by the Romans who exploited the Pennines for their natural resources including the wild animals found there.
The Pennines were a major obstacle for Anglo - Saxon expansion westwards, although it appears the Anglo - Saxons travelled through the valleys. During the Dark Ages the Pennines were controlled by Celtic and Anglo - Saxon kingdoms. It is believed that the north Pennines were under the control of the kingdom of Rheged.
During Norse times the Pennines were settled by Viking Danes in the east and Norwegian Vikings in the west. The Vikings influenced place names, culture and genetics. When England was unified the Pennines were incorporated into it. Their mixture of Celtic, Anglo - Saxon and Viking heritage resembled much of the rest of northern England and its culture developed alongside its lowland neighbours in northwest and northeast England. The Pennines were not a distinct political polity, but were divided between neighbouring counties in northeast and northwest England; a major part was in the West Riding of Yorkshire.
The Pennine region is sparsely populated by English standards. Larger population centres adjoin the southern Pennine range, such as Chesterfield, Halifax, Huddersfield, Macclesfield, Oldham and Rochdale, but most of the northern Pennine range is thinly populated. The cities of Bradford, Derby, Leeds, Manchester, Sheffield, Stoke - on - Trent and Wakefield lie in the foothills and lowlands fringing the range.
The main economic activities in the Pennines include sheep farming, quarrying, finance and tourism. In the Peak District, tourism is the major local employment for park residents (24 %), with manufacturing industries (19 %) and quarrying (12 %) also being important while 12 % are employed in agriculture. Limestone is the most important mineral quarried, mainly for roads and cement, while other extracted materials include shale for cement and gritstone for building stone. The springs at Buxton and Ashbourne are exploited to produce bottled mineral water and there are approximately 2,700 farms in the National Park. The South Pennines are predominantly industrial, with the main industries including textiles, quarrying and mining, while other economic activities within the South Pennines include tourism and farming.
Although the Forest of Bowland is mostly rural, the main economic activities in the area include farming and tourism. In the Yorkshire Dales, tourism accounts for £ 350 million of expenditure every year while employment is mostly dominated by farming, accommodation and food sectors. There are also significant challenges for managing tourism, farming and other developments within the National Park. The main economic activities in the North Pennines include tourism, farming, timber and small - scale quarrying, due to the rural landscape.
Gaps that allow west -- east communication across the Pennines include the Tyne Gap between the Pennines and the Cheviots, through which the A69 road and Tyne Valley railway link Carlisle and Newcastle upon Tyne. The A66 road, its summit at 1,450 feet (440 m), follows the course of a Roman road from Scotch Corner to Penrith through the Stainmore Gap between the Eden Valley in Cumbria and Teesdale in County Durham. The Aire Gap links Lancashire and Yorkshire via the valleys of the Aire and Ribble. Other high - level roads include Buttertubs Pass, named from limestone potholes near its 1,729 - foot (527 m) summit, between Hawes in Wensleydale and Swaledale and the A684 road from Sedbergh to Hawes via Garsdale Head which reaches 1,100 feet (340 m).
Further south the A58 road traverses the Calder Valley between West Yorkshire and Greater Manchester reaching 1,282 feet (391 m) between Littleborough and Ripponden, while the A646 road along the Calder Valley between Burnley and Halifax reaches 764 feet (233 m) following valley floors. In the South Pennines the A628 Woodhead road links the M67 motorway in Greater Manchester with the M1 motorway in South Yorkshire and Holme Moss is crossed by the A6024 road, whose highest point is near Holme Moss transmitting station between Longdendale and Holmfirth.
The Pennines are traversed by the M62 motorway, the highest motorway in England at 1,221 feet (372 m) on Windy Hill near Junction 23.
Three trans-Pennine canals built during the Industrial Revolution cross the range: the Leeds and Liverpool Canal, the Rochdale Canal, and the Huddersfield Narrow Canal.
The first of the earlier twin tunnels (Woodhead 1 and 2) was completed by the Sheffield, Ashton - Under - Lyne and Manchester Railway in 1845, engineered by Charles Vignoles and Joseph Locke. At the time of its completion in 1845, Woodhead 1 was one of the world 's longest railway tunnels at a length of 3 miles 13 yards (4,840 m); it was the first of several trans - Pennine tunnels including the Standedge and Totley tunnels, which are only slightly longer.
The first two tunnels were replaced by Woodhead 3, which was longer than the other two at 3 miles 66 yards (4860m). It was bored purposely for the overhead electrification of the route and completed in 1953. The tunnel was opened by the then transport minister Alan Lennox - Boyd on 3 June 1954. It was designed by Sir William Halcrow & Partners. The line was closed in 1981.
The London and North Western Railway acquired the Huddersfield and Manchester Railway in 1847 and built a single - line tunnel parallel to the canal tunnel at Standedge with a length of 3 miles, 57 yards (4803 m). Today rail services along the Huddersfield line between Huddersfield and Victoria and Piccadilly stations in Manchester are operated by TransPennine Express and Northern. Between 1869 and 1876 the Midland Railway built the Settle - Carlisle Line through remote, scenic regions of the Pennines from near Settle to Carlisle passing Appleby - in - Westmorland and a number of other settlements, some a distance from their stations. The line has survived, despite difficult times and is operated by Northern Rail.
The Trans Pennine Trail, a long - distance route for cyclists, horse riders and walkers, runs west -- east alongside rivers and canals, along disused railway tracks and through historic towns and cities from Southport to Hornsea (207 miles / 333 km). It crosses the north -- south Pennine Way (268 miles / 431 km) at Crowden - in - Longdendale.
Considerable areas of Pennine landscape are protected as UK national parks and Areas of Outstanding Natural Beauty (AONBs). Areas of Outstanding Natural Beauty are afforded much the same protection as national parks. The two national parks within the Pennines are the Yorkshire Dales National Park and the Peak District National Park.
The North Pennines AONB just north of the Yorkshire Dales rivals the national park in size and includes some of the Pennines ' highest peaks and some of its most isolated and sparsely populated areas while Nidderdale is an AONB east of the Yorkshire Dales National Park, and the Bowland Fells, including Pendle Hill, is an AONB west of the Yorkshire Dales.
The language used in pre-Roman and Roman times was British. During the Early Middle Ages, the Cumbric language developed. Little evidence of Cumbric remains, so it is difficult to ascertain whether or not it was distinct from Old Welsh. The extent of the region in which Cumbric was spoken is also unknown.
During Anglo - Saxon times the area was settled by Anglian peoples of Mercia and Northumbria, rather than the Saxon people of Southern England. Celtic speech remained in most areas of the Pennines longer than it did in the surrounding areas of England. Eventually, the Celtic tongue of the Pennines was replaced by early English as Anglo - Saxons and Vikings settled the area and assimilated the Celts.
In Norse times, Viking settlers brought their languages of Old Norse, Old Danish (mainly in the Yorkshire Dales and parts of the Peak District) and Old Norwegian (mainly in the western Pennines). With the eventual consolidation of England by the Saxon kingdom of Wessex, the pure Norse speech died out in England, though it survived in the Pennines longer than in most areas. However, the fusion of Norse and Old English was important in the formation of Middle English and hence Modern English, and many individual words of Norse descent remain in use in local dialects, such as that of Yorkshire, and in local place names.
The folklore and customs are mostly based on Celtic, Anglo - Saxon and Viking customs and folklore. Many customs and stories have their origin in Christianised pagan traditions. In the Peak District, a notable custom is well dressing, which has its origin in pagan traditions that became Christianised.
Flora in the Pennines is adapted to moorland and subarctic landscapes and climates. The flora found there can be found in other areas of moorland in Northern Europe and some species are also found in areas of tundra.
In the Pennine millstone grit areas above an altitude of 900 feet (270 m) the topsoil is so acidic, pH 2 to 4, that it can grow only bracken, heather, sphagnum, and coarse grasses such as cottongrass, purple moor grass and heath rush.
As the Ice age glacial sheets retreated c. 11,500 BC trees returned and archaeological palynology can identify their species. The first trees to settle were willow, birch and juniper, followed later by alder and pine. By 6500 BC temperatures were warmer and woodlands covered 90 % of the dales with mostly pine, elm, lime and oak. On the limestone soils the oak was slower to colonize and pine and birch predominated. Around 3000 BC a noticeable decline in tree pollen indicates that neolithic farmers were clearing woodland to increase grazing for domestic livestock, and studies at Linton Mires and Eshton Tarn find an increase in grassland species.
On poorly drained impermeable areas of millstone grit, shale or clays the topsoil gets waterlogged in winter and spring. Here tree suppression combined with the heavier rainfall results in blanket bog up to 7 ft (2 m) thick. The erosion of peat still exposes stumps of ancient trees.
"In digging it away they frequently find vast fir trees, perfectly sound, and some oaks... ''
Fauna in the Pennines is similar to the rest of England and Wales, but the area hosts some specialised species. Deer are found throughout the Pennines and some species of animals that are rare elsewhere in England can be found here. Arctic hares, which were common in Britain during the Ice Age and retreated to the cooler, more tundra - like uplands once the climate warmed up, were introduced to the Dark Peak area of the Peak District in the 19th century.
Large areas of heather moorland in the Pennines are managed for driven shooting of wild red grouse. The related and declining black grouse is still found in northern parts of the Pennines. Other birds whose English breeding strongholds are in the Pennines include golden plover, snipe, curlew, dunlin, merlin, short - eared owl, ring ouzel and twite, though many of these are at the southern limit of their distributions and are more common in Scotland.
|
what is residual sum of squares in excel | Residual sum of squares - wikipedia
In statistics, the residual sum of squares (RSS), also known as the sum of squared residuals (SSR) or the sum of squared errors of prediction (SSE), is the sum of the squares of residuals (deviations predicted from actual empirical values of data). It is a measure of the discrepancy between the data and an estimation model. A small RSS indicates a tight fit of the model to the data. It is used as an optimality criterion in parameter selection and model selection.
In general, total sum of squares = explained sum of squares + residual sum of squares. For a proof of this in the multivariate ordinary least squares (OLS) case, see partitioning in the general OLS model.
In a model with a single explanatory variable, RSS is given by:

$$ \mathrm{RSS} = \sum_{i=1}^{n} \left( y_i - f(x_i) \right)^2 $$

where $y_i$ is the $i$-th value of the variable to be predicted, $x_i$ is the $i$-th value of the explanatory variable, and $f(x_i)$ is the predicted value of $y_i$ (also termed $\hat{y}_i$). In a standard linear simple regression model, $y_i = a + b x_i + \varepsilon_i$, where $a$ and $b$ are coefficients, $y$ and $x$ are the regressand and the regressor, respectively, and $\varepsilon$ is the error term. The sum of squares of residuals is the sum of squares of estimates of $\varepsilon_i$; that is

$$ \mathrm{RSS} = \sum_{i=1}^{n} \hat{\varepsilon}_i^{\,2} = \sum_{i=1}^{n} \left( y_i - (\alpha + \beta x_i) \right)^2 $$

where $\alpha$ is the estimated value of the constant term $a$ and $\beta$ is the estimated value of the slope coefficient $b$.
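To make the single-variable definition concrete, here is a minimal Python sketch that evaluates this sum for a fitted line. It is only an illustration: the function name rss_simple, the sample data, and the fitted coefficients alpha and beta are assumptions for the example, not values taken from the article.

```python
# Minimal sketch: residual sum of squares for a simple linear model,
# RSS = sum_i (y_i - (alpha + beta * x_i))^2.
# The data and coefficients below are illustrative only.

def rss_simple(x, y, alpha, beta):
    """RSS for the fitted line y_hat = alpha + beta * x."""
    return sum((yi - (alpha + beta * xi)) ** 2 for xi, yi in zip(x, y))

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]

# Hypothetical fitted coefficients for the example.
print(rss_simple(x, y, alpha=0.1, beta=1.97))
```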
The general regression model with $n$ observations and $k$ explanators, the first of which is a constant unit vector whose coefficient is the regression intercept, is

$$ y = X \beta + e $$

where $y$ is an $n \times 1$ vector of dependent variable observations, each column of the $n \times k$ matrix $X$ is a vector of observations on one of the $k$ explanators, $\beta$ is a $k \times 1$ vector of true coefficients, and $e$ is an $n \times 1$ vector of the true underlying errors. The ordinary least squares estimator for $\beta$ is

$$ \hat{\beta} = (X^{T} X)^{-1} X^{T} y . $$

The residual vector is $\hat{e} = y - X\hat{\beta} = y - X (X^{T} X)^{-1} X^{T} y$, so the residual sum of squares is

$$ \mathrm{RSS} = \hat{e}^{T} \hat{e} $$

(equivalent to the square of the norm of the residual vector); in full:

$$ \mathrm{RSS} = y^{T} y - y^{T} X (X^{T} X)^{-1} X^{T} y = y^{T} \left( I - X (X^{T} X)^{-1} X^{T} \right) y = y^{T} (I - H) y $$

where $H$ is the hat matrix, or the projection matrix in linear regression.
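The matrix identities above can be checked numerically. The following Python sketch, which assumes NumPy and uses synthetic data, computes the OLS estimator, the residual vector, and the RSS both from the residuals and via the hat matrix; it is a hedged illustration rather than a reference implementation.

```python
# Sketch of the matrix formulation:
#   beta_hat = (X^T X)^{-1} X^T y
#   e_hat    = y - X beta_hat
#   RSS      = e_hat^T e_hat = y^T (I - H) y,  with H = X (X^T X)^{-1} X^T.
# The synthetic data below is illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3                      # 50 observations, 3 explanators (first is the intercept)
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
true_beta = np.array([1.0, 2.0, -0.5])
y = X @ true_beta + rng.normal(scale=0.3, size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # avoids forming an explicit inverse
e_hat = y - X @ beta_hat
rss_from_residuals = e_hat @ e_hat

H = X @ np.linalg.inv(X.T @ X) @ X.T           # hat (projection) matrix
rss_from_hat_matrix = y @ (np.eye(n) - H) @ y

print(rss_from_residuals, rss_from_hat_matrix)  # the two agree up to rounding error
```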
The least-squares regression line is given by

$$ y = a x + b , $$

where $b = \bar{y} - a \bar{x}$ and $a = \frac{S_{xy}}{S_{xx}}$, with $S_{xy} = \sum_{i=1}^{n} (\bar{x} - x_i)(\bar{y} - y_i)$ and $S_{xx} = \sum_{i=1}^{n} (\bar{x} - x_i)^2$.

Therefore,

$$ \mathrm{RSS} = S_{yy} - \frac{S_{xy}^{2}}{S_{xx}} , $$

where $S_{yy} = \sum_{i=1}^{n} (\bar{y} - y_i)^2$.

The Pearson product-moment correlation is given by $r = \frac{S_{xy}}{\sqrt{S_{xx} S_{yy}}}$; therefore, $\mathrm{RSS} = S_{yy} (1 - r^{2})$.
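As a quick numerical check of the identity RSS = S_yy (1 − r²), here is a short Python sketch with illustrative data; the variable names mirror the S_xx, S_xy and S_yy sums defined above, and the data values are assumptions for the example.

```python
# Check that, for the least-squares line, the directly computed RSS
# equals S_yy * (1 - r^2). Data values are illustrative only.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([1.2, 1.9, 3.2, 3.8, 5.1, 5.9])

x_bar, y_bar = x.mean(), y.mean()
S_xx = np.sum((x_bar - x) ** 2)
S_yy = np.sum((y_bar - y) ** 2)
S_xy = np.sum((x_bar - x) * (y_bar - y))

a = S_xy / S_xx            # slope of the least-squares line
b = y_bar - a * x_bar      # intercept
rss_direct = np.sum((y - (b + a * x)) ** 2)

r = S_xy / np.sqrt(S_xx * S_yy)    # Pearson product-moment correlation
rss_identity = S_yy * (1 - r ** 2)

print(rss_direct, rss_identity)    # agree up to floating-point rounding
```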
|
who became vice president as a result of the 1796 election | United states presidential election, 1796 - wikipedia
President before the election: George Washington (nonpartisan); elected president: John Adams (Federalist).
The United States presidential election of 1796 was the third quadrennial presidential election. It was held from Friday, November 4 to Wednesday, December 7, 1796. It was the first contested American presidential election, the first presidential election in which political parties played a dominant role, and the only presidential election in which a president and vice president were elected from opposing tickets. Incumbent Vice President John Adams of the Federalist Party defeated former Secretary of State Thomas Jefferson of the Democratic - Republican Party.
With incumbent President George Washington having refused a third term in office, the 1796 election became the first U.S. presidential election in which political parties competed for the presidency. The Federalists coalesced behind Adams and the Democratic - Republicans supported Jefferson, but each party ran multiple candidates. Under the electoral rules in place prior to the 1804 ratification of the Twelfth Amendment, the members of the Electoral College each cast two votes, with no distinction made between electoral votes for president and electoral votes for vice president. In order to be elected president, the winning candidate had to win the votes of a majority of electors; should no individual win a majority, the House of Representatives would hold a contingent election.
The campaign was an acrimonious one, with Federalists attempting to identify the Democratic - Republicans with the violence of the French Revolution and the Democratic - Republicans accusing the Federalists of favoring monarchism and aristocracy. Republicans sought to associate Adams with the policies developed by fellow Federalist Alexander Hamilton during the Washington administration, which they declaimed were too much in favor of Great Britain and a centralized national government. In foreign policy, Republicans denounced the Federalists over the Jay Treaty, which had established a temporary peace with Great Britain. Federalists attacked Jefferson 's moral character, alleging he was an atheist and that he had been a coward during the American Revolutionary War. Adams supporters also accused Jefferson of being too pro-France; the accusation was underscored when the French ambassador embarrassed the Republicans by publicly backing Jefferson and attacking the Federalists right before the election. Despite the vituperation between their respective camps, neither Adams nor Jefferson actively campaigned for the presidency.
Adams was elected president with 71 electoral votes, one more than was needed for a majority. Adams won by sweeping the electoral votes of New England and winning votes from several other states, especially the states of the Mid-Atlantic region. Jefferson received 68 electoral votes and was elected vice president. Former Governor Thomas Pinckney of South Carolina, a Federalist, finished with 59 electoral votes, while Senator Aaron Burr, a Democratic - Republican from New York, won 30 electoral votes. The remaining 48 electoral votes were dispersed among nine other candidates. Reflecting the evolving nature of both parties, several electors cast one vote for a Federalist candidate and one vote for a Democratic - Republican candidate. The election marked the formation of the First Party System, and established a rivalry between Federalist New England and Democratic - Republican South, with the middle states holding the balance of power.
With the retirement of Washington after two terms, both parties sought the presidency for the first time. Before the ratification of the 12th Amendment in 1804, each elector was to vote for two persons, but was not able to indicate which vote was for president and which was for vice president. Instead, the recipient of the most electoral votes would become president and the runner - up vice president. As a result, both parties ran multiple candidates for president, in hopes of keeping one of their opponents from being the runner - up. These candidates were the equivalent of modern - day running mates, but under the law they were all candidates for president. Thus, both Adams and Jefferson were technically opposed by several members of their own parties. The plan was for one of the electors to cast a vote for the main party nominee (Adams or Jefferson) and a candidate besides the primary running mate, thus ensuring that the main nominee would have one more vote than his running mate.
The Federalists ' nominee was John Adams of Massachusetts, the incumbent vice president and a leading voice during the Revolutionary period. Most Federalist leaders viewed Adams, who had twice been elected vice president, as the natural heir to Washington. Adams 's main running mate was Thomas Pinckney, a former governor of South Carolina who had negotiated the Treaty of San Lorenzo with Spain. Pinckney agreed to run after the first choice of many party leaders, former Governor Patrick Henry of Virginia, declined to enter the race. Alexander Hamilton, who competed with Adams for leadership of the party, worked behind the scenes to elect Pinckney over Adams by convincing Jefferson electors from South Carolina to cast their second votes for Pinckney. Hamilton did prefer Adams to Jefferson, and he urged Federalist electors to cast their votes for Adams and Pinckney.
John Adams, Vice President
Thomas Pinckney, Former Governor of South Carolina
Oliver Ellsworth, U.S. Chief Justice, from Connecticut
John Jay, Governor of New York
James Iredell, Associate Justice of the U.S. Supreme Court, from North Carolina
Samuel Johnston, Former U.S. Senator from North Carolina
Charles Cotesworth Pinckney, U.S. Minister to France from South Carolina
The Democratic - Republicans united behind former Secretary of State Thomas Jefferson, who had co-founded the party with James Madison and others in opposition to the policies of Hamilton. Congressional Democratic - Republicans met in an attempt to unite behind one vice presidential nominee. With Jefferson 's popularity strongest in the South, many party leaders wanted a Northern candidate to serve as Jefferson 's running mate. Popular choices included Senator Pierce Butler of South Carolina and three New Yorkers: Senator Aaron Burr, Chancellor Robert R. Livingston, and former Governor George Clinton, the party 's 1792 candidate for vice president. A group of Democratic - Republican leaders met in June 1796 and agreed to support Jefferson for president and Burr for vice president.
Thomas Jefferson, Former U.S. Secretary of State from Virginia
Aaron Burr, U.S. Senator from New York
Samuel Adams, Governor of Massachusetts
George Clinton, Former Governor of New York
Tennessee was admitted into the United States after the 1792 election, increasing the Electoral College to 138 electors.
Under the system in place prior to the 1804 ratification of the Twelfth Amendment, electors were to cast votes for two persons. Both votes were for president; the runner-up in the presidential race was elected vice president. If no candidate won votes from a majority of the Electoral College, the House of Representatives would hold a contingent election to select the winner. Each party intended to manipulate the results by having some of their electors cast one vote for the intended presidential candidate and one vote for somebody besides the intended vice-presidential candidate, leaving their vice-presidential candidate a few votes shy of their presidential candidate. However, all electoral votes were cast on the same day, and communications between states were extremely slow at that time, making it very difficult to coordinate which electors were to manipulate their vote for vice president. Additionally, there were rumors that southern electors pledged to Jefferson were coerced by Hamilton to give their second vote to Pinckney in hope of electing him president instead of Adams.
Campaigning centered in the swing states of New York and Pennsylvania. Adams and Jefferson won a combined 139 electoral votes from the 138 members of the Electoral College. With the exception of Pennsylvania, the Federalists swept every state north of the Mason - Dixon line. The Democratic - Republicans won the votes of most Southern electors, but the electors of Maryland and Delaware gave a majority of their votes to Federalist candidates, and South Carolina split its vote between candidates of the two parties.
Nationwide, most electors voted for Adams and a second Federalist or for Jefferson and a second Democratic - Republican, but there were several exceptions to this rule. One elector in Maryland voted for both Adams and Jefferson, and two electors cast votes for Washington, who had not campaigned and was not formally affiliated with either party. Pinckney won the second votes from a majority of the electors who voted for Adams, but 21 electors from New England and Maryland cast their second votes for other candidates, including Chief Justice Oliver Ellsworth. Those who voted for Jefferson were significantly less united in their second choice, though Burr won a plurality of the Jefferson electors. All eight electors in Pinckney 's home state of South Carolina, as well as at least one elector in Pennsylvania, cast their ballots for Jefferson and Pinckney. In North Carolina, Jefferson won 11 votes, but the remaining 13 votes were spread among six different candidates from both parties. In Virginia, most electors voted for Jefferson and Governor Samuel Adams of Massachusetts.
The end result was that Adams received 71 electoral votes, one more than required to be elected president. Jefferson received 68 votes, nine more than Pinckney, and was elected vice president. Burr finished in a distant fourth place with 30 votes. Nine other individuals received the remaining 48 electoral votes. If Pinckney had won the second votes of all of the New England electors who voted for Adams, he would have been elected president over Adams and Jefferson.
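The arithmetic of the pre-Twelfth Amendment tally described above can be illustrated with a short Python sketch. It is a simplification for illustration only: the nine minor candidates are grouped into a single entry, and the details of a contingent election in the House are not modeled; the vote totals are those given in the text.

```python
# Illustrative tally of the 1796 electoral vote under the original rules:
# each of the 138 electors cast two votes, the candidate with a majority of
# electors became president, and the runner-up became vice president.

electoral_votes = {
    "John Adams": 71,
    "Thomas Jefferson": 68,
    "Thomas Pinckney": 59,
    "Aaron Burr": 30,
    "nine other candidates (combined)": 48,   # simplification: grouped together
}

total_electors = 138
assert sum(electoral_votes.values()) == 2 * total_electors  # two votes per elector

majority_needed = total_electors // 2 + 1    # 70: a majority of the 138 electors
ranked = sorted(electoral_votes.items(), key=lambda kv: kv[1], reverse=True)
president, runner_up = ranked[0], ranked[1]

if president[1] >= majority_needed:
    print(f"President: {president[0]} ({president[1]} votes; threshold {majority_needed})")
    print(f"Vice president: {runner_up[0]} ({runner_up[1]} votes)")
else:
    print("No majority: the House of Representatives would hold a contingent election.")
```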
Votes for Federalist electors have been assigned to John Adams and votes for Democratic - Republican electors have been assigned to Thomas Jefferson. Only 9 of the 16 states used any form of popular vote. Those states that did choose electors by popular vote had widely varying restrictions on suffrage via property requirements.
The following four years would be the only time that the president and vice-president were from different parties (John Quincy Adams and John C. Calhoun would later be elected president and vice-president as political opponents, but they were both Democratic - Republican party candidates; Andrew Johnson, Abraham Lincoln 's second vice-president, was a Democrat, but Lincoln ran on a combined National Union Party ticket in 1864, not as a strict Republican). Jefferson would leverage his position as vice-president to attack President Adams 's policies, and this would help him reach the White House in the following election.
This election would provide part of the impetus for the Twelfth Amendment to the United States Constitution. On January 6, 1797, Representative William L. Smith of South Carolina presented a resolution on the floor of the House of Representatives for an amendment to the Constitution by which the presidential electors would designate which candidate would be president and which would be vice-president. However, no action was taken on his proposal, setting the stage for the deadlocked election of 1800.
The Constitution, in Article II, Section 1, provided that the state legislatures should decide the manner in which their Electors were chosen. Different state legislatures chose different methods:
|
how much does it cost to renew cuban passport | Cuban passport - wikipedia
Cuban passports are issued to citizens of Cuba to facilitate international travel. They are valid for 6 years from the date of issuance, but have to be extended every 2 years.
The cost of issuing this passport is about US $400 (CUC 400), plus about US $200 for each two-year extension for holders living in the United States.
Until January 14, 2013, the Cuban government required all Cuban citizens, as well as foreigners living in Cuba such as foreign students, who wished to leave the country to obtain an exit permit (Spanish: Permiso de Salida). The abolition of the controversial requirement led to long lines at passport offices filled with citizens desiring to legally travel abroad; however, the lines were partly attributed to the fact that the cost of obtaining a passport was going to double the next day to the equivalent of US $100 (CUC 100), the equivalent of five months of the average state salary. Now the passport is the only document required to leave the country, apart from a visa from the destination country. Previously the cost of a passport, exit permit, and associated paperwork added up to around US $300 (CUC 300), the equivalent of 15 months of the average state salary.
Passports of many countries contain a message, nominally from the official who is in charge of passport issuance, addressed to authorities of other countries. The message identifies the bearer as a citizen of the issuing country, requests that he or she be allowed to enter and pass through the other country, and requests further that, when necessary, he or she be given help consistent with international norms. In Cuban passports, the message appears in Spanish, French and English.
In addition to colored fibers in all common pages, Cuban passports feature a UV - reaction - based mark of the Cuban flag and the words República de Cuba (Spanish for Republic of Cuba) on the front endpaper.
As of 1 January 2017, Cuban citizens had visa - free or visa on arrival access to 60 countries and territories, ranking the Cuban passport 76th in terms of travel freedom according to the Henley visa restrictions index.
|
who is the actor that plays iron man | Robert Downey Jr. - Wikipedia
Robert John Downey Jr. (born April 4, 1965) is an American actor and singer. His career has included critical and popular success in his youth, followed by a period of substance abuse and legal difficulties, and a resurgence of commercial success in middle age. For three consecutive years from 2012 to 2015, Downey topped the Forbes list of Hollywood 's highest - paid actors, making an estimated $80 million in earnings between June 2014 and June 2015.
Making his acting debut at the age of five, appearing in his father 's film Pound (1970), Downey appeared in roles associated with the Brat Pack, such as the teen sci - fi comedy Weird Science (1985) and the drama Less Than Zero (1987). He starred as the title character in the 1992 film Chaplin, for which he earned a nomination for the Academy Award for Best Actor and he won the BAFTA Award for Best Actor in a Leading Role. After being released in 2000 from the California Substance Abuse Treatment Facility and State Prison where he was incarcerated on drug charges, Downey joined the cast of the TV series Ally McBeal playing Calista Flockhart 's love interest. For that he earned a Golden Globe Award. His character was terminated when Downey was fired after two drug arrests in late 2000 and early 2001. After his last stay in a court - ordered drug treatment program, Downey achieved sobriety.
Downey's career prospects improved when he featured in the black comedy crime film Kiss Kiss Bang Bang (2005), the mystery thriller Zodiac (2007), and the satirical action comedy Tropic Thunder (2008); for the latter he was nominated for an Academy Award for Best Supporting Actor. In 2008, Downey began portraying the Marvel Comics superhero Iron Man in the Marvel Cinematic Universe, appearing in several films as either the lead role, a member of an ensemble cast, or in a cameo. Each of these films, with the exception of The Incredible Hulk, has grossed over $500 million at the box office worldwide; four of these -- The Avengers, Avengers: Age of Ultron, Iron Man 3 and Captain America: Civil War -- earned over $1 billion, while Avengers: Infinity War earned over $2 billion.
Downey has also played the title character in Guy Ritchie 's Sherlock Holmes (2009), which earned him his second Golden Globe win, and its sequel (2011), both of which have earned over $500 million at the box office worldwide.
As of 2018, the U.S. domestic box - office grosses of Downey 's films total over US $4.9 billion, with worldwide grosses surpassing $11.6 billion, making Downey the third highest - grossing U.S. domestic box - office star of all time.
Downey was born in Manhattan, New York on April 4, 1965, the younger of two children. His father, Robert Downey Sr., is an actor and filmmaker, while his mother, Elsie Ann (née Ford), was an actress who appeared in Downey Sr. 's films. Downey 's father is of half Lithuanian Jewish, one - quarter Hungarian Jewish, and one - quarter Irish descent, while Downey 's mother had Scottish, German, and Swiss ancestry. Downey and his older sister Allyson grew up in Greenwich Village.
As a child, Downey was "surrounded by drugs. '' His father, a drug addict, allowed Downey to use marijuana at age six, an incident which his father said he now regrets. Downey later stated that drug use became an emotional bond between him and his father: "When my dad and I would do drugs together, it was like him trying to express his love for me in the only way he knew how. '' Eventually, Downey began spending every night abusing alcohol and "making a thousand phone calls in pursuit of drugs. ''
During his childhood, Downey had minor roles in his father 's films. He made his acting debut at the age of five, playing a sick puppy in the absurdist comedy Pound (1970), and then at seven appeared in the surrealist Greaser 's Palace (1972). At the age of 10, he was living in England and studied classical ballet as part of a larger curriculum. He attended the Stagedoor Manor Performing Arts Training Center in upstate New York as a teenager. When his parents divorced in 1978, Downey moved to California with his father, but in 1982, he dropped out of Santa Monica High School, and moved back to New York to pursue an acting career full - time.
Downey and Kiefer Sutherland, who shared the screen in the 1988 drama 1969, were roommates for three years when he first moved to Hollywood to pursue his career in acting.
Downey began building upon theater roles, including in the short - lived off - Broadway musical American Passion at the Joyce Theater in 1983, produced by Norman Lear. In 1985, he was part of the new, younger cast hired for Saturday Night Live, but following a year of poor ratings and criticism of the new cast 's comedic talents, he and most of the new crew were dropped and replaced. Rolling Stone magazine named Downey the worst SNL cast member in its entire run, stating that the "Downey Fail sums up everything that makes SNL great. '' That same year, Downey had a dramatic acting breakthrough when he played James Spader 's sidekick in Tuff Turf and then a bully in John Hughes 's Weird Science. He was considered for the role of Duckie in John Hughes 's film Pretty in Pink (1986), but his first lead role was with Molly Ringwald in The Pick - up Artist (1987). Because of these and other coming - of - age films Downey did during the 1980s, he is sometimes named as a member of the Brat Pack.
In 1987, Downey played Julian Wells, a drug - addicted rich boy whose life rapidly spirals out of his control, in the film version of the Bret Easton Ellis novel Less Than Zero. His performance, described by Janet Maslin in The New York Times as "desperately moving '', was widely praised, though Downey has said that for him "the role was like the ghost of Christmas Future '' since his drug habit resulted in his becoming an "exaggeration of the character '' in real life. Zero drove Downey into films with bigger budgets and names, such as Chances Are (1989) with Cybill Shepherd and Ryan O'Neal, Air America (1990) with Mel Gibson, and Soapdish (1991) with Sally Field, Kevin Kline, and Whoopi Goldberg.
In 1992, he starred as Charlie Chaplin in Chaplin, a role for which he prepared extensively, learning how to play the violin as well as tennis left-handed. He had a personal coach to help him imitate Chaplin's posture and way of carrying himself. The role garnered Downey an Academy Award nomination for Best Actor at the 65th Academy Awards ceremony; he lost to Al Pacino, who won for Scent of a Woman.
In 1993, he appeared in the films Heart and Souls with Alfre Woodard and Kyra Sedgwick and Short Cuts with Matthew Modine and Julianne Moore, along with a documentary that he wrote about the 1992 presidential campaigns titled The Last Party (1993). He starred in the 1994 films Only You, with Marisa Tomei, and Natural Born Killers, with Woody Harrelson. He subsequently appeared in Restoration (1995), Richard III (1995), Two Girls and a Guy (1998), as Special Agent John Royce in U.S. Marshals (1998), and in Black and White (1999).
From 1996 through 2001, Downey was arrested numerous times on charges related to drugs including cocaine, heroin, and marijuana and went through drug treatment programs unsuccessfully, explaining in 1999 to a judge: "It 's like I have a shotgun in my mouth, and I 've got my finger on the trigger, and I like the taste of the gun metal. '' He explained his relapses by claiming to have been addicted to drugs since the age of eight, due to the fact that his father, also an addict previously, had been giving them to him.
In April 1996, Downey was arrested for possession of heroin, cocaine, and an unloaded .357 Magnum handgun while he was speeding down Sunset Boulevard. A month later, while on parole, he trespassed into a neighbor's home while under the influence of a controlled substance, and fell asleep in one of the beds. He received three years of probation and was ordered to undergo compulsory drug testing. In 1997, he missed one of the court-ordered drug tests, and had to spend six months in the Los Angeles County jail.
After Downey missed another required drug test in 1999, he was arrested once more. Despite Downey 's lawyer, John Stewart Holden, assembling the same team of lawyers that successfully defended O.J. Simpson during his criminal trial for murder, Downey was sentenced to a three - year prison term at the California Substance Abuse Treatment Facility and State Prison in Corcoran, California. At the time of the 1999 arrest, all of Downey 's film projects had wrapped and were close to release. He had also been hired for voicing the devil on the NBC animated television series God, the Devil and Bob, but was fired when he failed to show up for rehearsals.
After spending nearly a year in the California Substance Abuse Treatment Facility and State Prison, Downey, on condition of posting a $5,000 bail, was unexpectedly freed when a judge ruled that his collective time in incarceration facilities (spawned from the initial 1996 arrests) had qualified him for early release. A week after his 2000 release, Downey joined the cast of the hit television series Ally McBeal, playing the new love interest of Calista Flockhart 's title character. His performance was praised and the following year he was nominated for an Emmy Award in the Outstanding Supporting Actor in a Comedy Series category and won a Golden Globe for Best Supporting Actor in a mini-series or television film. He also appeared as a writer and singer on Vonda Shepard 's Ally McBeal: For Once in My Life album, and he sang with Sting a duet of "Every Breath You Take '' in an episode of the series. Despite the apparent success, Downey claimed that his performance on the series was overrated and said, "It was my lowest point in terms of addictions. At that stage, I did n't give a fuck whether I ever acted again. '' In January 2001, Downey was scheduled to play the role of Hamlet in a Los Angeles stage production directed by Mel Gibson.
Before the end of his first season on Ally McBeal, over the Thanksgiving 2000 holiday, Downey was arrested when his room at Merv Griffin 's Hotel and Givenchy Spa in Palm Springs, California was searched by the police, who were responding to an anonymous 911 call. Downey was under the influence of a controlled substance and in possession of cocaine and Valium. Despite the fact that, if convicted, he would have faced a prison sentence of up to four years and eight months, he signed on to appear in at least eight more Ally McBeal episodes.
In April 2001, while he was on parole, a Los Angeles police officer found him wandering barefooted in Culver City, just outside Los Angeles. He was arrested for suspicion of being under the influence of drugs, but was released a few hours later, even though tests showed he had cocaine in his system. After this last arrest, producer David E. Kelley and other Ally McBeal executives ordered last - minute rewrites and reshoots and fired Downey from the show, despite the fact that Downey 's character had resuscitated Ally McBeal 's ratings. The Culver City arrest also cost him a role in the high - profile film America 's Sweethearts, and the subsequent incarceration prompted Mel Gibson to shut down his planned stage production of Hamlet, as well. In July 2001, Downey pleaded no contest to the Palm Springs charges, avoiding jail time. Instead, he was sent into drug rehabilitation and received three years of probation, benefiting from the California Proposition 36, which had been passed the year before with the aim of helping nonviolent drug offenders overcome their addictions instead of sending them to jail.
The book Conversations with Woody Allen reports that director Woody Allen wanted to cast Downey and Winona Ryder in his film Melinda and Melinda in 2005, but was unable to do so, because he could not get insurance on them, stating, "We could n't get bonded. The completion bonding companies would not bond the picture unless we could insure them. We were heartbroken because I had worked with Winona before (on Celebrity) and thought she was perfect for this and wanted to work with her again. And I had always wanted to work with Bob Downey and always thought he was a huge talent. ''
In a December 18, 2000 article for People magazine entitled "Bad to Worse '', Downey 's stepmother Rosemary told author Alex Tresnlowski, that Downey had been diagnosed with bipolar disorder "a few years ago '' and added that his bipolar disorder was "the reason he has a hard time staying sober. What has n't been tried is medication and intensive psychotherapy ''. In the same article, Dr. Manijeh Nikakhtar, a Los Angeles psychiatrist and co-author of Addiction or Self - Medication: The Truth (ISBN 978 - 1883819576), says she received a letter from Downey in 1999, during his time at Corcoran II, asking for advice on his condition. She discovered that "no one had done a complete (psychiatric) evaluation (on him)... I asked him flat out if he thought he was bipolar, and he said, ' Oh yeah. There are times I spend a lot of money and I 'm hyperactive, and there are other times I 'm down. ' '' In an article for the March 2007 issue of Esquire, Downey told author Scott Raab that he wanted to address "this whole thing about the bipolar '' after receiving a phone call from "the Bipolar Association '' asking him about being bipolar. When Downey denied he had ever said he was bipolar, the caller quoted the People article, to which Downey replied, "' No! Dr. Malibusian said (I said I was bipolar)... ', and they go, ' Well, it 's been written, so we 're going to quote it. ' '' Downey flatly denied being "depressed or manic '' and that previous attempts to diagnose him with any kind of psychiatric or mood disorder have always been skewed because "the guy I was seeing did n't know I was smokin ' crack in his bathroom. You ca n't make a diagnosis until somebody 's sober. ''
After five years of substance abuse, arrests, rehab, and relapse, Downey was ready to work toward a full recovery from drugs, and a return to his career. In discussing his failed attempts to control his own addictive behavior in the past, Downey told Oprah Winfrey in November 2004 that "when someone says, ' I really wonder if maybe I should go to rehab? ' Well, uh, you 're a wreck, you just lost your job, and your wife left you. Uh, you might want to give it a shot. '' He added that after his last arrest in April 2001, when he knew he would likely be facing another stint in prison or another form of incarceration such as court - ordered rehab, "I said, ' You know what? I do n't think I can continue doing this. ' And I reached out for help, and I ran with it. You can reach out for help in kind of a half - assed way and you 'll get it and you wo n't take advantage of it. It 's not that difficult to overcome these seemingly ghastly problems... what 's hard is to decide to do it. ''
Downey got his first post-rehabilitation acting job in August 2001, lip - syncing in the video for Elton John 's single "I Want Love ''. Video director Sam Taylor - Wood shot 16 takes of the video and used the last one because, according to John, Downey looked completely relaxed, and, "The way he underplays it is fantastic ''.
Downey was able to return to the big screen after Mel Gibson, who had been a close friend to Downey since both had co-starred in Air America, paid Downey 's insurance bond for the 2003 film The Singing Detective (directed by his Back To School co-star Keith Gordon). Gibson 's gamble paved the way for Downey 's comeback and Downey returned to mainstream films in the mid-2000s with Gothika, for which producer Joel Silver withheld 40 % of his salary until after production wrapped as insurance against his addictive behavior. Similar clauses have become standard in his contracts since. Silver, who was getting closer to Downey as he dated his assistant Susan Levin, also got the actor the leading role in the comedy thriller Kiss Kiss Bang Bang, the directorial debut of screenwriter Shane Black.
After Gothika, Downey was cast in a number of leading and supporting roles, including well - received work in a number of semi-independent films: A Guide to Recognizing Your Saints, Good Night, and Good Luck, Richard Linklater 's dystopian, rotoscoped A Scanner Darkly (in which Downey plays the role of a drug addict), and Steven Shainberg 's fictional biographical film of Diane Arbus, Fur, where Downey 's character represented the two biggest influences on Arbus 's professional life, Lisette Model and Marvin Israel. Downey also received great notice for his roles in more mainstream fare such as Kiss Kiss Bang Bang and Disney 's poorly received The Shaggy Dog.
On November 23, 2004, Downey released his debut musical album, The Futurist, on Sony Classical, for which he designed the cover art and designed the track listing label on the CD with his son Indio. The album received mixed reviews, but Downey stated in 2006 that he probably will not do another album, as he felt that the energy he put into doing the album was not compensated.
In 2006, Downey returned to television when he guest - starred on Family Guy in the episode "The Fat Guy Strangler ''. Downey had previously telephoned the show 's production staff, and asked if he could produce or assist in an episode creation, as his son Indio is a fan of the show. The producers of the show accepted the offer and created the character of Patrick Pewterschmidt, Lois Griffin 's long lost, mentally disturbed brother, for Downey.
Downey signed on with publishers HarperCollins to write a memoir, which, in 2006, was already being billed as a "candid look at the highs and lows of his life and career ''. In 2008, however, Downey returned his advance to the publishers and canceled the book without further comment.
In 2007, Downey appeared in David Fincher 's mystery thriller Zodiac, which was based on a true story. He played the role of San Francisco Chronicle journalist Paul Avery, who was reporting the Zodiac Killer case.
Despite all of the critical success Downey had experienced throughout his career, he had never appeared in a "blockbuster '' film. That changed in 2008, when he starred in two critically and commercially successful films, Iron Man and Tropic Thunder. In the article Ben Stiller wrote for Downey 's entry in the 2008 edition of The Time 100, Stiller offered an observation on Downey 's commercially successful summer at the box office:
Yes, Downey is Iron Man, but he really is Actor Man... In the realm where box office is irrelevant and talent is king, the realm that actually means something, he has always ruled, and finally this summer he gets to have his cake and let us eat him up all the way to the multiplex, where his mastery is in full effect.
In 2007, Downey was cast as the title character in the film Iron Man, with director Jon Favreau explaining the choice by stating: "Downey was n't the most obvious choice, but he understood what makes the character tick. He found a lot of his own life experience in ' Tony Stark '. '' Favreau insisted on having Downey as he repeatedly claimed that Downey would be to Iron Man what Johnny Depp is to the Pirates of the Caribbean series: a lead actor who could both elevate the quality of the film and increase the public 's interest in it. For the role Downey had to gain more than 20 pounds of muscle in five months to look like he "had the power to forge iron ''.
Iron Man was globally released between April 30 and May 3, 2008, grossing over $585 million worldwide and receiving rave reviews which cited Downey 's performance as a highlight of the film. By October 2008, Downey had agreed to appear as Iron Man in two Iron Man sequels, as part of the Iron Man franchise, as well as The Avengers, featuring the superhero team that Stark joins, based on Marvel 's comic book series The Avengers. He first reprised the role in a small appearance as Iron Man 's alter ego Tony Stark in the 2008 film The Incredible Hulk, as a part of Marvel Studios ' depicting the same Marvel Universe on film by providing continuity among the movies.
After Iron Man, Downey appeared alongside Ben Stiller and Jack Black in the Stiller - directed Tropic Thunder. The three actors play a Hollywood archetype -- with Downey playing self - absorbed multi-Oscar - winning Australian method actor Kirk Lazarus -- as they star in an extremely expensive Vietnam - era film called Tropic Thunder. Lazarus undergoes a "controversial skin pigmentation procedure '' in order to take on the role of African - American platoon sergeant Lincoln Osiris, which required Downey to wear dark makeup and a wig. Both Stiller and Downey feared Downey 's portrayal of the character could become controversial:
Stiller says that he and Downey always stayed focused on the fact that they were skewering insufferable actors, not African Americans. "I was trying to push it as far as you can within reality '', Stiller explains. "I had no idea how people would respond to it ''. Stiller screened a rough cut of the film (in March 2008) and it scored high with African Americans. He was relieved at the reaction. "It seems people really embrace it '', he said.
When asked by Harry Smith on CBS 's The Early Show who his model was for Lazarus, Downey laughed before responding, "Sadly, my sorry - ass self ''.
Released in the United States on August 13, 2008, Tropic Thunder received good reviews with 83 % of reviews positive and an average normalized score of 71, according to the review aggregator websites Rotten Tomatoes and Metacritic, respectively. It earned US $26 million in its North American opening weekend and retained the number one position for its first three weekends of release. The film grossed $180 million in theaters before its release on home video on November 18, 2008. Downey was nominated for the Academy Award for Best Supporting Actor for his portrayal of Lazarus.
The Soloist, a film Downey had finished in mid-2008, opened in late April 2009. Paramount Pictures had delayed it from a November 2008 release due to the studio 's tight end - of - year release schedule. Critics who saw the film in 2008 mentioned it as a possible Academy Award candidate, but Downey 's nomination for the 2008 release year came instead for his role in Tropic Thunder.
The first role Downey accepted after Iron Man was the title character in Guy Ritchie 's Sherlock Holmes. Warner Bros. released it on December 25, 2009. The film set several box office records in the United States for a Christmas Day release, beating the previous record holder, 2008 's Marley & Me, by nearly $10 M, and finished second to Avatar in a record - setting Christmas weekend box office. Sherlock Holmes ended up being the 8th highest - grossing film of 2009. When Downey won the Golden Globe for Best Actor in a Motion Picture Musical or Comedy from the Hollywood Foreign Press Association for his role as Sherlock Holmes, he noted in his acceptance speech that he had prepared no remarks because "Susan Downey (his wife and Sherlock Holmes producer) told me that Matt Damon (nominated for his role in The Informant!) was going to win so do n't bother preparing a speech ''.
Downey returned as Tony Stark in the first of two planned sequels to Iron Man, Iron Man 2, which was released in May 2010. Iron Man 2 grossed over $623 M worldwide, becoming the 7th highest - grossing film of 2010.
Downey 's other commercial film release of 2010 was the comedy road film, Due Date. The movie, co-starring Zach Galifianakis, was released in November 2010 and grossed over $211 M worldwide, making it the 36th highest - grossing movie of 2010. Downey 's sole 2011 film credit was the sequel to the 2009 version of Sherlock Holmes, Sherlock Holmes: A Game of Shadows, which opened worldwide on December 16, 2011.
In 2012, Downey reprised the role of Tony Stark in The Avengers. The film received positive reviews and was highly successful at the box office, becoming the third highest - grossing film of all time both in the United States and worldwide. His film, the David Dobkin - directed dramedy The Judge, a project co-produced by his production company Team Downey, was the opening film at the Toronto International Film Festival in 2014. Downey played Tony Stark again in Iron Man 3 (2013), Avengers: Age of Ultron (2015), Captain America: Civil War (2016), Spider - Man: Homecoming (2017), and Avengers: Infinity War (2018).
Downey is scheduled to star in an upcoming Pinocchio film, The Voyage of Doctor Dolittle, and an untitled Avengers film.
Downey has sung on several soundtracks for his films, including for Chaplin, Too Much Sun, Two Girls and a Guy, Friends and Lovers, The Singing Detective, and Kiss Kiss Bang Bang. In 2001, he appeared in the music video for Elton John 's song, "I Want Love. '' He released a CD in 2004 called The Futurist, and while promoting his film Tropic Thunder, he and his co-stars Ben Stiller and Jack Black were back - up singers "The Pips '' to Gladys Knight singing "Midnight Train to Georgia ''.
Downey 's most commercially successful recording venture to date (combining sales and radio airplay) has been his remake of the 1973 Joni Mitchell Christmas song "River '', which was included on the Ally McBeal tie - in album Ally McBeal: A Very Ally Christmas, released in 2000; Downey 's character Larry Paul performs the song in the Ally McBeal episode "Tis the Season ''.
On June 14, 2010, Downey and his wife Susan opened their own production company called Team Downey. Their first project was The Judge.
Downey started dating actress Sarah Jessica Parker after meeting her on the set of Firstborn. The couple later separated due to his drug addiction.
He married actress / singer Deborah Falconer on May 29, 1992, after a 42 - day courtship. Their son, Indio Falconer Downey, was born in September 1993. The strain on their marriage from Downey 's repeated trips to rehab and jail finally reached a breaking point; in 2001, in the midst of Downey 's last arrest and sentencing to an extended stay in rehab, Falconer left Downey and took their son with her. Downey and Falconer finalized their divorce on April 26, 2004.
In 2003, Downey met producer Susan Levin, an Executive Vice President of Production at Joel Silver 's film company, Silver Pictures, on the set of Gothika. Downey and Susan quietly struck up a romance during production, though Susan turned down his romantic advances twice. Despite Susan 's worries that the romance would not last after the completion of shooting because "he 's an actor; I have a real job '', the couple 's relationship continued after production wrapped on Gothika, and Downey proposed to Susan on the night before her thirtieth birthday. The couple were married in August 2005 in a Jewish ceremony at Amagansett, New York. A tattoo on one of his biceps reads "Suzie Q '' in tribute to her. Their first child, a son, was born in February 2012; their second child, a daughter, was born in November 2014.
Downey has been a close friend of Mel Gibson since they starred in Air America. Downey defended Gibson during the controversy surrounding The Passion of the Christ, and said "nobody 's perfect '' in reference to Gibson 's DUI. Gibson said of Downey: "He was one of the first people to call and offer the hand of friendship. He just said, ' Hey, welcome to the club. Let 's go see what we can do to work on ourselves. ' '' In October 2011, when Downey was honored for his life 's work at the 25th American Cinematheque Awards, he chose Gibson to present the award, and used his air time to say a few kind words about Gibson and explain why he had chosen him.
Downey maintains that he has been drug - free since July 2003, and has credited his wife with helping him overcome his drug and alcohol habits, along with his family, therapy, meditation, twelve - step recovery programs, yoga, and the practice of Wing Chun kung fu, the martial art he learned from Eric Oram, who is also a fight consultant in several of Downey 's movies. Oram was Downey 's personal fight coordinator in Avengers: Age of Ultron and Captain America: Civil War. In December 2015, Downey received a full and unconditional pardon from Governor of California Jerry Brown for his prior drug convictions. Oram wrote a letter in support of Downey 's pardon to Governor Brown.
Downey has described his religious beliefs as "Jewish - Buddhist '', and he is reported to have consulted astrologers. In the past, Downey has been interested in Christianity and the Hare Krishna movement.
In a 2008 interview, Downey stated that his time in prison changed his political point of view somewhat, saying, "I have a really interesting political point of view, and it 's not always something I say too loud at dinner tables here, but you ca n't go from a $2,000 - a-night suite at La Mirage to a penitentiary and really understand it and come out a liberal. You ca n't. I would n't wish that experience on anyone else, but it was very, very, very educational for me and has informed my proclivities and politics ever since. '' However, when asked about the quote in a 2015 interview to promote Avengers: Age of Ultron, he denied that his previous statement reflected any longstanding beliefs on his part, and stated, "I would n't say that I 'm a Republican or a liberal or a Democrat. ''
Downey serves on the board of the Anti-Recidivism Coalition.
In 2016, Downey appeared in an anti-Trump commercial with other celebrities encouraging people to register to vote in the 2016 election.
|
when did the first and second world war start and end | World War I - wikipedia
World War I (often abbreviated to WWI or WW1), also known as the First World War, the Great War, or the War to End All Wars, was a global war originating in Europe that lasted from 28 July 1914 to 11 November 1918. More than 70 million military personnel, including 60 million Europeans, were mobilised in one of the largest wars in history. Over nine million combatants and seven million civilians died as a result of the war (including the victims of a number of genocides), a casualty rate exacerbated by the belligerents ' technological and industrial sophistication, and the tactical stalemate caused by gruelling trench warfare. It was one of the deadliest conflicts in history and precipitated major political change, including the Revolutions of 1917 -- 1923 in many of the nations involved. Unresolved rivalries at the end of the conflict contributed to the start of the Second World War twenty - one years later.
The war drew in all the world 's economic great powers, assembled in two opposing alliances: the Allies (based on the Triple Entente of the Russian Empire, the French Third Republic, and the United Kingdom of Great Britain and Ireland) versus the Central Powers of Germany and Austria - Hungary. Although Italy was a member of the Triple Alliance alongside Germany and Austria - Hungary, it did not join the Central Powers, as Austria - Hungary had taken the offensive against the terms of the alliance. These alliances were reorganised and expanded as more nations entered the war: Italy, Japan and the United States joined the Allies, while the Ottoman Empire and Bulgaria joined the Central Powers.
The trigger for the war was the assassination of Archduke Franz Ferdinand of Austria, heir to the throne of Austria - Hungary, by Yugoslav nationalist Gavrilo Princip in Sarajevo on 28 June 1914. This set off a diplomatic crisis when Austria - Hungary delivered an ultimatum to the Kingdom of Serbia, and entangled international alliances formed over the previous decades were invoked. Within weeks the major powers were at war, and the conflict soon spread around the world.
Russia was the first to order a partial mobilization of its armies on 24 -- 25 July, and when on 28 July Austria - Hungary declared war on Serbia, Russia declared general mobilization on 30 July. Germany presented an ultimatum to Russia to demobilise, and when this was refused, declared war on Russia on 1 August. Being outnumbered on the Eastern Front, Russia urged its Triple Entente ally France to open up a second front in the west.
Japan entered the war on the side of the Allies on 23 August 1914, seizing the opportunity of Germany 's distraction with the European War to expand its sphere of influence in China and the Pacific.
Over forty years earlier in 1870, the Franco - Prussian War had ended the Second French Empire and France had ceded the provinces of Alsace - Lorraine to a unified Germany. Bitterness over that defeat and the determination to retake Alsace - Lorraine made the acceptance of Russia 's plea for help an easy choice, so France began full mobilisation on 1 August and, on 3 August, Germany declared war on France. The border between France and Germany was heavily fortified on both sides so, according to the Schlieffen Plan, Germany then invaded neutral Belgium and Luxembourg before moving towards France from the north, leading the United Kingdom to declare war on Germany on 4 August due to their violation of Belgian neutrality.
After the German march on Paris was halted in the Battle of the Marne, what became known as the Western Front settled into a battle of attrition, with a trench line that changed little until 1917. On the Eastern Front, the Russian army led a successful campaign against the Austro - Hungarians, but the Germans stopped its invasion of East Prussia in the battles of Tannenberg and the Masurian Lakes. In November 1914, the Ottoman Empire joined the Central Powers, opening fronts in the Caucasus, Mesopotamia, and the Sinai Peninsula. In 1915, Italy joined the Allies and Bulgaria joined the Central Powers. Romania joined the Allies in 1916. After the sinking of seven U.S. merchant ships by German submarines, and the revelation that the Germans were trying to get Mexico to make war on the United States, the U.S. declared war on Germany on 6 April 1917.
The Russian government collapsed in March 1917 with the February Revolution, and the October Revolution, followed by a further military defeat, brought the Russians to terms with the Central Powers via the Treaty of Brest - Litovsk, which granted the Germans a significant victory. After the stunning German Spring Offensive along the Western Front in the spring of 1918, the Allies rallied and drove back the Germans in the successful Hundred Days Offensive. On 4 November 1918, the Austro - Hungarian empire agreed to the Armistice of Villa Giusti, and Germany, which had its own trouble with revolutionaries, agreed to an armistice on 11 November 1918, ending the war in victory for the Allies.
By the end of the war or soon after, the German Empire, Russian Empire, Austro - Hungarian Empire and the Ottoman Empire ceased to exist. National borders were redrawn, with nine independent nations restored or created, and Germany 's colonies were parceled out among the victors. During the Paris Peace Conference of 1919, the Big Four powers (Britain, France, the United States and Italy) imposed their terms in a series of treaties. The League of Nations was formed with the aim of preventing any repetition of such a conflict. This effort failed, and economic depression, renewed nationalism, weakened successor states, and feelings of humiliation (particularly in Germany) eventually contributed to the start of World War II.
From the time of its start until the approach of World War II, the First World War was called simply the World War or the Great War and thereafter the First World War or World War I. At the time, it was also sometimes called "the war to end war '' or "the war to end all wars '' due to its then - unparalleled scale and devastation.
In Canada, Maclean 's magazine in October 1914 wrote, "Some wars name themselves. This is the Great War. '' During the interwar period (1918 -- 1939), the war was most often called the World War and the Great War in English - speaking countries.
The term "First World War '' was first used in September 1914 by the German biologist and philosopher Ernst Haeckel, who claimed that "there is no doubt that the course and character of the feared ' European War '... will become the first world war in the full sense of the word, '' citing a wire service report in The Indianapolis Star on 20 September 1914. After the onset of the Second World War in 1939, the terms World War I or the First World War became standard, with British and Canadian historians favouring the First World War, and Americans World War I.
In the introduction to his book, Waterloo in 100 Objects, historian Gareth Glover states: "This opening statement will cause some bewilderment to many who have grown up with the appellation of the Great War firmly applied to the 1914 -- 18 First World War. But to anyone living before 1918, the title of the Great War was applied to the Revolutionary and Napoleonic wars in which Britain fought France almost continuously for twenty - two years from 1793 to 1815. '' In 1911, the historian John Holland Rose published a book titled William Pitt and the Great War.
During the 19th century, the major European powers went to great lengths to maintain a balance of power throughout Europe, resulting in the existence of a complex network of political and military alliances throughout the continent by 1900. These began in 1815, with the Holy Alliance between Prussia, Russia, and Austria. When Germany was united in 1871, Prussia became part of the new German nation. Soon after, in October 1873, German Chancellor Otto von Bismarck negotiated the League of the Three Emperors (German: Dreikaiserbund) between the monarchs of Austria - Hungary, Russia and Germany. This agreement failed because Austria - Hungary and Russia could not agree over Balkan policy, leaving Germany and Austria - Hungary in an alliance formed in 1879, called the Dual Alliance. This was seen as a method of countering Russian influence in the Balkans as the Ottoman Empire continued to weaken. This alliance expanded in 1882 to include Italy, in what became the Triple Alliance.
Bismarck had especially worked to hold Russia at Germany 's side in an effort to avoid a two - front war with France and Russia. When Wilhelm II ascended to the throne as German Emperor (Kaiser), Bismarck was compelled to retire and his system of alliances was gradually de-emphasised. For example, the Kaiser refused, in 1890, to renew the Reinsurance Treaty with Russia. Two years later, the Franco - Russian Alliance was signed to counteract the force of the Triple Alliance. In 1904, Britain signed a series of agreements with France, the Entente Cordiale, and in 1907, Britain and Russia signed the Anglo - Russian Convention. While these agreements did not formally ally Britain with France or Russia, they made British entry into any future conflict involving France or Russia a possibility, and the system of interlocking bilateral agreements became known as the Triple Entente.
German industrial and economic power had grown greatly after unification and the foundation of the Empire in 1871 following the Franco - Prussian War. From the mid-1890s on, the government of Wilhelm II used this base to devote significant economic resources for building up the Kaiserliche Marine (Imperial German Navy), established by Admiral Alfred von Tirpitz, in rivalry with the British Royal Navy for world naval supremacy. As a result, each nation strove to out - build the other in capital ships. With the launch of HMS Dreadnought in 1906, the British Empire expanded on its significant advantage over its German rival. The arms race between Britain and Germany eventually extended to the rest of Europe, with all the major powers devoting their industrial base to producing the equipment and weapons necessary for a pan-European conflict. Between 1908 and 1913, the military spending of the European powers increased by 50 %.
Austria - Hungary precipitated the Bosnian crisis of 1908 -- 1909 by officially annexing the former Ottoman territory of Bosnia and Herzegovina, which it had occupied since 1878. This angered the Kingdom of Serbia and its patron, the Pan-Slavic and Orthodox Russian Empire. Russian political manoeuvring in the region destabilised peace accords that were already fracturing in the Balkans, which came to be known as the "powder keg of Europe. '' In 1912 and 1913, the First Balkan War was fought between the Balkan League and the fracturing Ottoman Empire. The resulting Treaty of London further shrank the Ottoman Empire, creating an independent Albanian state while enlarging the territorial holdings of Bulgaria, Serbia, Montenegro, and Greece. When Bulgaria attacked Serbia and Greece on 16 June 1913, it lost most of Macedonia to Serbia and Greece, and Southern Dobruja to Romania in the 33 - day Second Balkan War, further destabilising the region. The Great Powers were able to keep these Balkan conflicts contained, but the next one would spread throughout Europe and beyond.
On 28 June 1914, Austrian Archduke Franz Ferdinand visited the Bosnian capital, Sarajevo. A group of six assassins (Cvjetko Popović, Gavrilo Princip, Muhamed Mehmedbašić, Nedeljko Čabrinović, Trifko Grabež, Vaso Čubrilović) from the Yugoslavist group Mlada Bosna, supplied by the Serbian Black Hand, had gathered on the street where the Archduke 's motorcade would pass, with the intention of assassinating him. Čabrinović threw a grenade at the car, but missed. Some nearby were injured by the blast, but Ferdinand 's convoy carried on. The other assassins failed to act as the cars drove past them.
About an hour later, when Ferdinand was returning from a visit to Sarajevo Hospital to see those wounded in the assassination attempt, the convoy took a wrong turn into a street where, by coincidence, Princip stood. With a pistol, Princip shot and killed Ferdinand and his wife Sophie. The reaction among the people in Austria was mild, almost indifferent. As historian Zbyněk Zeman later wrote, "the event almost failed to make any impression whatsoever. On Sunday and Monday (28 and 29 June), the crowds in Vienna listened to music and drank wine, as if nothing had happened. '' Nevertheless, the political impact of the murder of the heir to the throne was significant and has been described as a "9 / 11 effect '', a terrorist event charged with historic meaning, transforming the political chemistry in Vienna. Although the two were not personally close, Emperor Franz Joseph was profoundly shocked and upset.
The Austro - Hungarian authorities encouraged the subsequent anti-Serb riots in Sarajevo, in which Bosnian Croats and Bosniaks killed two Bosnian Serbs and damaged numerous Serb - owned buildings. Violent actions against ethnic Serbs were also organized outside Sarajevo, in other cities in Austro - Hungarian - controlled Bosnia and Herzegovina, Croatia and Slovenia. Austro - Hungarian authorities in Bosnia and Herzegovina imprisoned and extradited approximately 5,500 prominent Serbs, 700 to 2,200 of whom died in prison. A further 460 Serbs were sentenced to death. A predominantly Bosniak special militia known as the Schutzkorps was established and carried out the persecution of Serbs.
The assassination led to a month of diplomatic manoeuvring between Austria - Hungary, Germany, Russia, France and Britain, called the July Crisis. Believing correctly that Serbian officials (especially the officers of the Black Hand) were involved in the plot to murder the Archduke, and wanting to finally end Serbian interference in Bosnia, Austria - Hungary delivered to Serbia on 23 July the July Ultimatum, a series of ten demands made intentionally unacceptable in an effort to provoke a war with Serbia. Serbia decreed general mobilization on the 25th. Serbia accepted all of the terms of the ultimatum except for article six, which demanded that Austrian delegates be allowed in Serbia to participate in the investigation into the assassination. Following this, Austria broke off diplomatic relations with Serbia and, the next day, ordered a partial mobilization. Finally, on 28 July 1914, Austria - Hungary declared war on Serbia.
On 29 July, Russia, in support of Serbia, declared partial mobilization against Austria - Hungary. On the 30th, Russia ordered general mobilization. German Chancellor Bethmann - Hollweg waited until the 31st for an appropriate response, when Germany declared a "state of danger of war ''. Kaiser Wilhelm II asked his cousin, Tsar Nicholas II, to suspend the Russian general mobilization. When he refused, Germany issued an ultimatum demanding that its mobilization be stopped and that Russia commit not to support Serbia. Another ultimatum was sent to France, asking it not to support Russia if Russia came to the defence of Serbia. On 1 August, after the Russian response, Germany mobilized and declared war on Russia. This also led to the general mobilization in Austria - Hungary on 4 August.
The German government issued demands to France that it remain neutral as they had to decide which deployment plan to implement, it being difficult if not impossible to change the deployment whilst it was underway. The modified German Schlieffen Plan, Aufmarsch II West, would deploy 80 % of the army in the west, and Aufmarsch I Ost and Aufmarsch II Ost would deploy 60 % in the west and 40 % in the east as this was the maximum that the East Prussian railway infrastructure could carry. The French did not respond, but sent a mixed message by ordering their troops to withdraw 10 km (6 mi) from the border to avoid any incidents, and at the same time ordered the mobilisation of her reserves. Germany responded by mobilising its own reserves and implementing Aufmarsch II West. On 1 August Wilhelm ordered General Moltke to "march the whole of the... army to the East '' after he had been wrongly informed that the British would remain neutral as long as France was not attacked. The General convinced the Kaiser that improvising the redeployment of a million men was unthinkable and that making it possible for the French to attack the Germans "in the rear '' might prove disastrous. Yet Wilhelm insisted that the German army should not march into Luxembourg until he received a telegram sent by his cousin George V, who made it clear that there had been a misunderstanding. Eventually the Kaiser told Moltke, "Now you can do what you want. '' Germany attacked Luxembourg on 2 August, and on 3 August declared war on France. On 4 August, after Belgium refused to permit German troops to cross its borders into France, Germany declared war on Belgium as well. Britain declared war on Germany at 19: 00 UTC on 4 August 1914 (effective from 11 pm), following an "unsatisfactory reply '' to the British ultimatum that Belgium must be kept neutral.
The strategy of the Central Powers suffered from miscommunication. Germany had promised to support Austria - Hungary 's invasion of Serbia, but interpretations of what this meant differed. Previously tested deployment plans had been replaced early in 1914, but those had never been tested in exercises. Austro - Hungarian leaders believed Germany would cover its northern flank against Russia. Germany, however, envisioned Austria - Hungary directing most of its troops against Russia, while Germany dealt with France. This confusion forced the Austro - Hungarian Army to divide its forces between the Russian and Serbian fronts.
Austria invaded and fought the Serbian army at the Battle of Cer and Battle of Kolubara beginning on 12 August. Over the next two weeks, Austrian attacks were thrown back with heavy losses, which marked the first major Allied victories of the war and dashed Austro - Hungarian hopes of a swift victory. As a result, Austria had to keep sizeable forces on the Serbian front, weakening its efforts against Russia. Serbia 's defeat of the Austro - Hungarian invasion of 1914 has been called one of the major upset victories of the twentieth century.
At the outbreak of World War I, 80 % of the German army was deployed as seven field armies in the west according to the plan Aufmarsch II West. However, they were then assigned to execute the retired deployment plan Aufmarsch I West, also known as the Schlieffen Plan. This would march German armies through northern Belgium and into France, in an attempt to encircle the French army and then breach the ' second defensive area ' of the fortresses of Verdun and Paris and the Marne river.
Aufmarsch I West was one of four deployment plans available to the German General Staff in 1914. Each plan favoured certain operations, but did not specify exactly how those operations were to be carried out, leaving commanding officers to carry them out on their own initiative and with minimal oversight. Aufmarsch I West, designed for a one - front war with France, had been retired once it became clear it was irrelevant to the wars Germany could expect to face; both Russia and Britain were expected to help France, and there was no possibility of Italian or Austro - Hungarian troops being available for operations against France. But despite its unsuitability, and the availability of more sensible and decisive options, it retained a certain allure due to its offensive nature and the pessimism of pre-war thinking, which expected offensive operations to be short - lived, costly in casualties, and unlikely to be decisive. Accordingly, the Aufmarsch II West deployment was changed for the offensive of 1914, despite its unrealistic goals and the insufficient forces Germany had available for decisive success. Moltke took Schlieffen 's plan and modified the deployment of forces on the western front by reducing the right wing, the one to advance through Belgium, from 85 % to 70 %. In the end, the Schlieffen plan was so radically modified by Moltke that it could more properly be called the Moltke Plan.
The plan called for the right flank of the German advance to bypass the French armies concentrated on the Franco - German border, defeat the French forces closer to Luxembourg and Belgium and move south to Paris. Initially the Germans were successful, particularly in the Battle of the Frontiers (14 -- 24 August). By 12 September, the French, with assistance from the British Expeditionary Force (BEF), halted the German advance east of Paris at the First Battle of the Marne (5 -- 12 September) and pushed the German forces back some 50 km (31 mi). The French offensive into southern Alsace, launched on 20 August with the Battle of Mulhouse, had limited success.
In the east, Russia invaded with two armies. In response, Germany rapidly moved the 8th Field Army from its previous role as reserve for the invasion of France to East Prussia by rail across the German Empire. This army, led by general Paul von Hindenburg, defeated Russia in a series of battles collectively known as the First Battle of Tannenberg (17 August -- 2 September). While the Russian invasion failed, it caused the diversion of German troops to the east, allowing the Allied victory at the First Battle of the Marne. This meant Germany failed to achieve its objective of avoiding a long, two - front war. However, the German army had fought its way into a good defensive position inside France and effectively halved France 's supply of coal. It had also killed or permanently crippled 230,000 more French and British troops than it itself had lost. Despite this, communications problems and questionable command decisions cost Germany the chance of a more decisive outcome.
New Zealand occupied German Samoa (later Western Samoa) on 30 August 1914. On 11 September, the Australian Naval and Military Expeditionary Force landed on the island of Neu Pommern (later New Britain), which formed part of German New Guinea. On 28 October, the German cruiser SMS Emden sank the Russian cruiser Zhemchug in the Battle of Penang. Japan seized Germany 's Micronesian colonies and, after the Siege of Tsingtao, the German coaling port of Qingdao on the Chinese Shandong peninsula. As Vienna refused to withdraw the Austro - Hungarian cruiser SMS Kaiserin Elisabeth from Tsingtao, Japan declared war not only on Germany, but also on Austria - Hungary; the ship participated in the defence of Tsingtao where it was sunk in November 1914. Within a few months, the Allied forces had seized all the German territories in the Pacific; only isolated commerce raiders and a few holdouts in New Guinea remained.
Some of the first clashes of the war involved British, French, and German colonial forces in Africa. On 6 -- 7 August, French and British troops invaded the German protectorates of Togoland and Kamerun. On 10 August, German forces in South - West Africa attacked South Africa; sporadic and fierce fighting continued for the rest of the war. The German colonial forces in German East Africa, led by Colonel Paul von Lettow - Vorbeck, fought a guerrilla campaign during World War I and only surrendered two weeks after the armistice took effect in Europe.
Germany attempted to use Indian nationalism and pan-Islamism to its advantage, instigating uprisings in India, and sending a mission that urged Afghanistan to join the war on the side of Central powers. However, contrary to British fears of a revolt in India, the outbreak of the war saw an unprecedented outpouring of loyalty and goodwill towards Britain. Indian political leaders from the Indian National Congress and other groups were eager to support the British war effort, since they believed that strong support for the war effort would further the cause of Indian Home Rule. The Indian Army in fact outnumbered the British Army at the beginning of the war; about 1.3 million Indian soldiers and labourers served in Europe, Africa, and the Middle East, while the central government and the princely states sent large supplies of food, money, and ammunition. In all, 140,000 men served on the Western Front and nearly 700,000 in the Middle East. Casualties of Indian soldiers totalled 47,746 killed and 65,126 wounded during World War I. The suffering engendered by the war, as well as the failure of the British government to grant self - government to India after the end of hostilities, bred disillusionment and fuelled the campaign for full independence that would be led by Mohandas K. Gandhi and others.
Military tactics developed before World War I failed to keep pace with advances in technology and had become obsolete. These advances had allowed the creation of strong defensive systems, which out - of - date military tactics could not break through for most of the war. Barbed wire was a significant hindrance to massed infantry advances, while artillery, vastly more lethal than in the 1870s, coupled with machine guns, made crossing open ground extremely difficult. Commanders on both sides failed to develop tactics for breaching entrenched positions without heavy casualties. In time, however, technology began to produce new offensive weapons, such as gas warfare and the tank.
Just after the First Battle of the Marne (5 -- 12 September 1914), Entente and German forces repeatedly attempted manoeuvring to the north in an effort to outflank each other: this series of manoeuvres became known as the "Race to the Sea ''. When these outflanking efforts failed, the opposing forces soon found themselves facing an uninterrupted line of entrenched positions from Lorraine to Belgium 's coast. Britain and France sought to take the offensive, while Germany defended the occupied territories. Consequently, German trenches were much better constructed than those of the enemy; Anglo - French trenches were intended only to be "temporary '' before the allied forces broke through the German defences.
Both sides tried to break the stalemate using scientific and technological advances. On 22 April 1915, at the Second Battle of Ypres, the Germans (violating the Hague Convention) used chlorine gas for the first time on the Western Front. Several types of gas soon became widely used by both sides, and though it never proved a decisive, battle - winning weapon, poison gas became one of the most - feared and best - remembered horrors of the war. Tanks were developed by Britain and France and were first used in combat by the British during the Battle of Flers -- Courcelette (part of the Battle of the Somme) on 15 September 1916, with only partial success. However, their effectiveness would grow as the war progressed; the Allies built tanks in large numbers, whilst the Germans employed only a few of their own design, supplemented by captured Allied tanks.
Neither side proved able to deliver a decisive blow for the next two years. Throughout 1915 -- 17, the British Empire and France suffered more casualties than Germany, because of both the strategic and tactical stances chosen by the sides. Strategically, while the Germans only mounted one major offensive, the Allies made several attempts to break through the German lines.
In February 1916 the Germans attacked the French defensive positions at Verdun. Lasting until December 1916, the battle saw initial German gains, before French counter-attacks returned matters to near their starting point. Casualties were greater for the French, but the Germans bled heavily as well, with anywhere from 700,000 to 975,000 casualties suffered between the two combatants. Verdun became a symbol of French determination and self - sacrifice.
The Battle of the Somme was an Anglo - French offensive of July to November 1916. The opening of this offensive (1 July 1916) saw the British Army endure the bloodiest day in its history, suffering 57,470 casualties, including 19,240 dead, on the first day alone. The entire Somme offensive cost the British Army some 420,000 casualties. The French suffered another estimated 200,000 casualties and the Germans an estimated 500,000.
Protracted action at Verdun throughout 1916, combined with the bloodletting at the Somme, brought the exhausted French army to the brink of collapse. Futile attempts using frontal assault came at a high price for both the British and the French and led to the widespread French Army Mutinies, after the failure of the costly Nivelle Offensive of April -- May 1917. The concurrent British Battle of Arras was more limited in scope, and more successful, although ultimately of little strategic value. A smaller part of the Arras offensive, the capture of Vimy Ridge by the Canadian Corps, became highly significant to that country: the idea that Canada 's national identity was born out of the battle is an opinion widely held in military and general histories of Canada.
The last large - scale offensive of this period was a British attack (with French support) at Passchendaele (July -- November 1917). This offensive opened with great promise for the Allies, before bogging down in the October mud. Casualties, though disputed, were roughly equal, at some 200,000 -- 400,000 per side.
These years of trench warfare in the West saw no major exchanges of territory and, as a result, are often thought of as static and unchanging. However, throughout this period, British, French, and German tactics constantly evolved to meet new battlefield challenges.
At the start of the war, the German Empire had cruisers scattered across the globe, some of which were subsequently used to attack Allied merchant shipping. The British Royal Navy systematically hunted them down, though not without some embarrassment from its inability to protect Allied shipping. Before the beginning of the war, it was widely understood that Britain held the position of strongest, most influential navy in the world. The publishing of the book The Influence of Sea Power upon History by Alfred Thayer Mahan was intended to encourage the United States to increase their naval power. Instead, this book made it to Germany and inspired its readers to try to over-power the British Royal Navy. For example, the German detached light cruiser SMS Emden, part of the East - Asia squadron stationed at Qingdao, seized or destroyed 15 merchantmen, as well as sinking a Russian cruiser and a French destroyer. However, most of the German East - Asia squadron -- consisting of the armoured cruisers SMS Scharnhorst and Gneisenau, light cruisers Nürnberg and Leipzig and two transport ships -- did not have orders to raid shipping and was instead underway to Germany when it met British warships. The German flotilla and Dresden sank two armoured cruisers at the Battle of Coronel, but was virtually destroyed at the Battle of the Falkland Islands in December 1914, with only Dresden and a few auxiliaries escaping, but after the Battle of Más a Tierra these too had been destroyed or interned.
Soon after the outbreak of hostilities, Britain began a naval blockade of Germany. The strategy proved effective, cutting off vital military and civilian supplies, although this blockade violated accepted international law codified by several international agreements of the past two centuries. Britain mined international waters to prevent any ships from entering entire sections of ocean, causing danger to even neutral ships. Since there was limited response to this tactic of the British, Germany expected a similar response to its unrestricted submarine warfare.
The Battle of Jutland (German: Skagerrakschlacht, or "Battle of the Skagerrak '') developed into the largest naval battle of the war. It was the only full - scale clash of battleships during the war, and one of the largest in history. The Kaiserliche Marine 's High Seas Fleet, commanded by Vice Admiral Reinhard Scheer, fought the Royal Navy 's Grand Fleet, led by Admiral Sir John Jellicoe. The engagement was a stand - off: the Germans were outmanoeuvred by the larger British fleet, but managed to escape and inflicted more damage on the British fleet than they received. Strategically, however, the British asserted their control of the sea, and the bulk of the German surface fleet remained confined to port for the duration of the war.
German U-boats attempted to cut the supply lines between North America and Britain. The nature of submarine warfare meant that attacks often came without warning, giving the crews of the merchant ships little hope of survival. The United States launched a protest, and Germany changed its rules of engagement. After the sinking of the passenger ship RMS Lusitania in 1915, Germany promised not to target passenger liners, while Britain armed its merchant ships, placing them beyond the protection of the "cruiser rules '', which demanded warning and movement of crews to "a place of safety '' (a standard that lifeboats did not meet). Finally, in early 1917, Germany adopted a policy of unrestricted submarine warfare, realising that the Americans would eventually enter the war. Germany sought to strangle Allied sea lanes before the United States could transport a large army overseas, but after initial successes eventually failed to do so.
The U-boat threat lessened in 1917, when merchant ships began travelling in convoys, escorted by destroyers. This tactic made it difficult for U-boats to find targets, which significantly lessened losses; after the hydrophone and depth charges were introduced, accompanying destroyers could attack a submerged submarine with some hope of success. Convoys slowed the flow of supplies, since ships had to wait as convoys were assembled. The solution to the delays was an extensive program of building new freighters. Troopships were too fast for the submarines and did not travel the North Atlantic in convoys. The U-boats had sunk more than 5,000 Allied ships, at a cost of 199 submarines. World War I also saw the first use of aircraft carriers in combat, with HMS Furious launching Sopwith Camels in a successful raid against the Zeppelin hangars at Tondern in July 1918, as well as blimps for antisubmarine patrol.
Faced with Russia, Austria - Hungary could spare only one - third of its army to attack Serbia. After suffering heavy losses, the Austrians briefly occupied the Serbian capital, Belgrade. A Serbian counter-attack in the Battle of Kolubara succeeded in driving them from the country by the end of 1914. For the first ten months of 1915, Austria - Hungary used most of its military reserves to fight Italy. German and Austro - Hungarian diplomats, however, scored a coup by persuading Bulgaria to join the attack on Serbia. The Austro - Hungarian provinces of Slovenia, Croatia and Bosnia provided troops for Austria - Hungary, in the fight with Serbia, Russia and Italy. Montenegro allied itself with Serbia.
Bulgaria declared war on Serbia on 12 October and joined the attack by Mackensen 's Austro - Hungarian army of 250,000 that was already underway. Serbia was conquered in a little more than a month, as the Central Powers, now including Bulgaria, sent in 600,000 troops in total. The Serbian army, fighting on two fronts and facing certain defeat, retreated into northern Albania. The Serbs suffered defeat in the Battle of Kosovo. Montenegro covered the Serbian retreat towards the Adriatic coast in the Battle of Mojkovac on 6 -- 7 January 1916, but ultimately the Austrians also conquered Montenegro. The surviving Serbian soldiers were evacuated by ship to Greece. After the conquest, Serbia was divided between Austria - Hungary and Bulgaria.
In late 1915, a Franco - British force landed at Salonica in Greece, to offer assistance and to pressure its government to declare war against the Central Powers. However, the pro-German King Constantine I dismissed the pro-Allied government of Eleftherios Venizelos before the Allied expeditionary force arrived. The friction between the King of Greece and the Allies continued to accumulate with the National Schism, which effectively divided Greece between regions still loyal to the king and the new provisional government of Venizelos in Salonica. After intense negotiations and an armed confrontation in Athens between Allied and royalist forces (an incident known as Noemvriana), the King of Greece resigned and his second son Alexander took his place; Greece then officially joined the war on the side of the Allies.
In the beginning, the Macedonian Front was mostly static. French and Serbian forces retook limited areas of Macedonia by recapturing Bitola on 19 November 1916 following the costly Monastir Offensive, which brought stabilization of the front.
Serbian and French troops finally made a breakthrough in September 1918, after most of the German and Austro - Hungarian troops had been withdrawn. The Bulgarians were defeated at the Battle of Dobro Pole and by 25 September 1918 British and French troops had crossed the border into Bulgaria proper as the Bulgarian army collapsed. Bulgaria capitulated four days later, on 29 September 1918. The German high command responded by despatching troops to hold the line, but these forces were far too weak to reestablish a front.
The disappearance of the Macedonian Front meant that the road to Budapest and Vienna was now opened to Allied forces. Hindenburg and Ludendorff concluded that the strategic and operational balance had now shifted decidedly against the Central Powers and, a day after the Bulgarian collapse, insisted on an immediate peace settlement.
The Ottomans threatened Russia 's Caucasian territories and Britain 's communications with India via the Suez Canal. As the conflict progressed, the Ottoman Empire took advantage of the European powers ' preoccupation with the war and conducted large - scale ethnic cleansing of the indigenous Armenian, Greek, and Assyrian Christian populations, known as the Armenian Genocide, Greek Genocide, and Assyrian Genocide.
The British and French opened overseas fronts with the Gallipoli (1915) and Mesopotamian campaigns (1914). In Gallipoli, the Ottoman Empire successfully repelled the British, French, and Australian and New Zealand Army Corps (ANZACs). In Mesopotamia, by contrast, after the defeat of the British defenders in the Siege of Kut by the Ottomans (1915 -- 16), British Imperial forces reorganised and captured Baghdad in March 1917. The British were aided in Mesopotamia by local Arab and Assyrian tribesmen, while the Ottomans employed local Kurdish and Turcoman tribes.
Further to the west, the Suez Canal was defended from Ottoman attacks in 1915 and 1916; in August, a German and Ottoman force was defeated at the Battle of Romani by the ANZAC Mounted Division and the 52nd (Lowland) Infantry Division. Following this victory, an Egyptian Expeditionary Force advanced across the Sinai Peninsula, pushing Ottoman forces back in the Battle of Magdhaba in December and the Battle of Rafa on the border between the Egyptian Sinai and Ottoman Palestine in January 1917.
Russian armies generally saw success in the Caucasus. Enver Pasha, supreme commander of the Ottoman armed forces, was ambitious and dreamed of re-conquering central Asia and areas that had been lost to Russia previously. He was, however, a poor commander. He launched an offensive against the Russians in the Caucasus in December 1914 with 100,000 troops, insisting on a frontal attack against mountainous Russian positions in winter. He lost 86 % of his force at the Battle of Sarikamish.
The Ottoman Empire, with German support, invaded Persia (modern Iran) in December 1914 in an effort to cut off British and Russian access to petroleum reservoirs around Baku near the Caspian Sea. Persia, ostensibly neutral, had long been under the spheres of British and Russian influence. The Ottomans and Germans were aided by Kurdish and Azeri forces, together with a large number of major Iranian tribes, such as the Qashqai, Tangistanis, Luristanis, and Khamseh, while the Russians and British had the support of Armenian and Assyrian forces. The Persian Campaign was to last until 1918 and end in failure for the Ottomans and their allies. However the Russian withdrawal from the war in 1917 led to Armenian and Assyrian forces, who had hitherto inflicted a series of defeats upon the forces of the Ottomans and their allies, being cut off from supply lines, outnumbered, outgunned and isolated, forcing them to fight and flee towards British lines in northern Mesopotamia.
General Yudenich, the Russian commander from 1915 to 1916, drove the Turks out of most of the southern Caucasus with a string of victories. In 1917, Russian Grand Duke Nicholas assumed command of the Caucasus front. Nicholas planned a railway from Russian Georgia to the conquered territories, so that fresh supplies could be brought up for a new offensive in 1917. However, in March 1917 (February in the pre-revolutionary Russian calendar), the Czar abdicated in the course of the February Revolution and the Russian Caucasus Army began to fall apart.
The Arab Revolt, instigated by the Arab bureau of the British Foreign Office, started in June 1916 with the Battle of Mecca, led by Sherif Hussein of Mecca, and ended with the Ottoman surrender of Damascus. Fakhri Pasha, the Ottoman commander of Medina, resisted for more than two and a half years during the Siege of Medina before surrendering.
The Senussi tribe, along the border of Italian Libya and British Egypt, incited and armed by the Turks, waged a small - scale guerrilla war against Allied troops. The British were forced to dispatch 12,000 troops to oppose them in the Senussi Campaign. Their rebellion was finally crushed in mid-1916.
Total Allied casualties on the Ottoman fronts amounted to 650,000 men. Total Ottoman casualties were 725,000 (325,000 dead and 400,000 wounded).
Italy had been allied with the German and Austro - Hungarian Empires since 1882 as part of the Triple Alliance. However, the nation had its own designs on Austrian territory in Trentino, the Austrian Littoral, Fiume (Rijeka) and Dalmatia. Rome had a secret 1902 pact with France, effectively nullifying its part in the Triple Alliance. At the start of hostilities, Italy refused to commit troops, arguing that the Triple Alliance was defensive and that Austria - Hungary was an aggressor. The Austro - Hungarian government began negotiations to secure Italian neutrality, offering the French colony of Tunisia in return. The Allies made a counter-offer in which Italy would receive the Southern Tyrol, Austrian Littoral and territory on the Dalmatian coast after the defeat of Austria - Hungary. This was formalised by the Treaty of London. Further encouraged by the Allied invasion of Turkey in April 1915, Italy joined the Triple Entente and declared war on Austria - Hungary on 23 May. Fifteen months later, Italy declared war on Germany.
The Italians had numerical superiority but this advantage was lost, not only because of the difficult terrain in which the fighting took place, but also because of the strategies and tactics employed. Field Marshal Luigi Cadorna, a staunch proponent of the frontal assault, had dreams of breaking into the Slovenian plateau, taking Ljubljana and threatening Vienna.
On the Trentino front, the Austro - Hungarians took advantage of the mountainous terrain, which favoured the defender. After an initial strategic retreat, the front remained largely unchanged, while Austrian Kaiserschützen and Standschützen engaged Italian Alpini in bitter hand - to - hand combat throughout the summer. The Austro - Hungarians counterattacked in the Altopiano of Asiago, towards Verona and Padua, in the spring of 1916 (Strafexpedition), but made little progress.
Beginning in 1915, the Italians under Cadorna mounted eleven offensives on the Isonzo front along the Isonzo (Soča) River, northeast of Trieste. All eleven offensives were repelled by the Austro - Hungarians, who held the higher ground. In the summer of 1916, after the Battle of Doberdò, the Italians captured the town of Gorizia. After this minor victory, the front remained static for over a year, despite several Italian offensives, centred on the Banjšice and Karst Plateau east of Gorizia.
The Central Powers launched a crushing offensive on 26 October 1917, spearheaded by the Germans. They achieved a victory at Caporetto (Kobarid). The Italian Army was routed and retreated more than 100 kilometres (62 mi) to reorganise, stabilising the front at the Piave River. Since the Italian Army had suffered heavy losses in the Battle of Caporetto, the Italian Government called to arms the so - called ' 99 Boys (Ragazzi del ' 99): that is, all males born in 1899 or earlier, who were thus 18 years old or older. In 1918, the Austro - Hungarians failed to break through in a series of battles on the Piave and were finally decisively defeated in the Battle of Vittorio Veneto in October of that year. On 1 November, the Italian Navy destroyed much of the Austro - Hungarian fleet stationed in Pula, preventing it from being handed over to the new State of Slovenes, Croats and Serbs. On 3 November, the Italians invaded Trieste from the sea. On the same day, the Armistice of Villa Giusti was signed. By mid-November 1918, the Italian military occupied the entire former Austrian Littoral and had seized control of the portion of Dalmatia that had been guaranteed to Italy by the London Pact. By the end of hostilities in November 1918, Admiral Enrico Millo declared himself Italy 's Governor of Dalmatia. The armistice with Austria - Hungary came into effect on 4 November 1918.
Romania had been allied with the Central Powers since 1883. When the war began, however, it declared its neutrality, arguing that because Austria - Hungary had itself declared war on Serbia, Romania was under no obligation to join the war. When the Entente Powers promised Romania Transylvania and Banat, large territories of eastern Hungary, in exchange for Romania 's declaring war on the Central Powers, the Romanian government renounced its neutrality. On 27 August 1916, the Romanian Army launched an attack against Austria - Hungary, with limited Russian support. The Romanian offensive was initially successful against the Austro - Hungarian troops in Transylvania, but a counterattack by the forces of the Central Powers drove them back. As a result of the Battle of Bucharest, the Central Powers occupied Bucharest on 6 December 1916. Fighting in Moldova continued in 1917, resulting in a costly stalemate for the Central Powers. Russian withdrawal from the war in late 1917 as a result of the October Revolution meant that Romania was forced to sign an armistice with the Central Powers on 9 December 1917.
In January 1918, Romanian forces established control over Bessarabia as the Russian Army abandoned the province. Although a treaty was signed by the Romanian and the Bolshevik Russian governments following talks between 5 and 9 March 1918 on the withdrawal of Romanian forces from Bessarabia within two months, on 27 March 1918 Romania attached Bessarabia to its territory, formally based on a resolution passed by the local assembly of that territory on its unification with Romania.
Romania officially made peace with the Central Powers by signing the Treaty of Bucharest on 7 May 1918. Under that treaty, Romania was obliged to end the war with the Central Powers and make small territorial concessions to Austria - Hungary, ceding control of some passes in the Carpathian Mountains, and to grant oil concessions to Germany. In exchange, the Central Powers recognised the sovereignty of Romania over Bessarabia. The treaty was renounced in October 1918 by the Alexandru Marghiloman government, and Romania nominally re-entered the war on 10 November 1918. The next day, the Treaty of Bucharest was nullified by the terms of the Armistice of Compiègne. Total Romanian deaths from 1914 to 1918, military and civilian, within contemporary borders, were estimated at 748,000.
While the Western Front had reached stalemate, the war continued in Eastern Europe. Initial Russian plans called for simultaneous invasions of Austrian Galicia and East Prussia. Although Russia 's initial advance into Galicia was largely successful, it was driven back from East Prussia by Hindenburg and Ludendorff at the battles of Tannenberg and the Masurian Lakes in August and September 1914. Russia 's less developed industrial base and ineffective military leadership were instrumental in the events that unfolded. By the spring of 1915, the Russians had retreated to Galicia, and, in May, the Central Powers achieved a remarkable breakthrough on Poland 's southern frontiers. On 5 August, they captured Warsaw and forced the Russians to withdraw from Poland.
Despite Russia 's success with the June 1916 Brusilov Offensive in eastern Galicia, dissatisfaction with the Russian government 's conduct of the war grew. The offensive 's success was undermined by the reluctance of other generals to commit their forces to support the victory. Allied and Russian forces were revived only temporarily by Romania 's entry into the war on 27 August. German forces came to the aid of embattled Austro - Hungarian units in Transylvania while a German - Bulgarian force attacked from the south, and Bucharest was retaken by the Central Powers on 6 December. Meanwhile, unrest grew in Russia, as the Tsar remained at the front. Empress Alexandra 's increasingly incompetent rule drew protests and resulted in the murder of her favourite, Rasputin, at the end of 1916.
In March 1917, demonstrations in Petrograd culminated in the abdication of Tsar Nicholas II and the appointment of a weak Provisional Government, which shared power with the Petrograd Soviet socialists. This arrangement led to confusion and chaos both at the front and at home. The army became increasingly ineffective.
Following the Tsar 's abdication, Vladimir Lenin was ushered by train from Switzerland into Russia on 16 April 1917 with German facilitation. Discontent and the weaknesses of the Provisional Government led to a rise in the popularity of the Bolshevik Party, led by Lenin, which demanded an immediate end to the war. The October Revolution (November 1917 in the Gregorian calendar) was followed in December by an armistice and negotiations with Germany. At first, the Bolsheviks refused the German terms, but when German troops began marching across Ukraine unopposed, the new government acceded to the Treaty of Brest - Litovsk on 3 March 1918. The treaty ceded vast territories, including Finland, the Baltic provinces, parts of Poland and Ukraine to the Central Powers. Despite this enormous apparent German success, the manpower required for German occupation of former Russian territory may have contributed to the failure of the Spring Offensive and secured relatively little food or other materiel for the Central Powers war effort.
With the adoption of the Treaty of Brest - Litovsk, the Entente no longer existed. The Allied powers led a small - scale invasion of Russia, partly to stop Germany from exploiting Russian resources, and to a lesser extent, to support the "Whites '' (as opposed to the "Reds '') in the Russian Civil War. Allied troops landed in Arkhangelsk as part of the North Russia Intervention and in Vladivostok as part of the Siberian Intervention.
The Czechoslovak Legion fought with the Entente; its goal was to win support for the independence of Czechoslovakia. The Legion was established in Russia in September 1914, in France in December 1917 (including volunteers from America), and in Italy in April 1918. Czechoslovak Legion troops defeated the Austro - Hungarian army at the Ukrainian village of Zborov in July 1917. After this success, the number of Czechoslovak legionaries increased, as did Czechoslovak military power. In the Battle of Bakhmach, the Legion defeated the Germans and forced them to make a truce.
In Russia, they were heavily involved in the Russian Civil War, siding with the Whites against the Bolsheviks, at times controlling most of the Trans - Siberian railway and conquering all the major cities of Siberia. The presence of the Czechoslovak Legion near Yekaterinburg appears to have been one of the motivations for the Bolshevik execution of the Tsar and his family in July 1918. Legionaries arrived less than a week afterwards and captured the city. Because Russia 's European ports were not safe, the corps was evacuated by a long detour via the port of Vladivostok. The last transport was the American ship Heffron in September 1920.
In December 1916, after ten brutal months of the Battle of Verdun and a successful offensive against Romania, the Germans attempted to negotiate a peace with the Allies. Soon after, the U.S. President, Woodrow Wilson, attempted to intervene as a peacemaker, asking in a note for both sides to state their demands. Lloyd George 's War Cabinet considered the German offer to be a ploy to create divisions amongst the Allies. After initial outrage and much deliberation, they took Wilson 's note as a separate effort, signalling that the United States was on the verge of entering the war against Germany following the "submarine outrages ''. While the Allies debated a response to Wilson 's offer, the Germans chose to rebuff it in favour of "a direct exchange of views ''. Learning of the German response, the Allied governments were free to make clear demands in their response of 14 January. They sought restoration of damages, the evacuation of occupied territories, reparations for France, Russia and Romania, and a recognition of the principle of nationalities. This included the liberation of Italians, Slavs, Romanians, Czecho - Slovaks, and the creation of a "free and united Poland ''. On the question of security, the Allies sought guarantees that would prevent or limit future wars, complete with sanctions, as a condition of any peace settlement. The negotiations failed and the Entente powers rejected the German offer on the grounds that Germany had not put forward any specific proposals.
Events of 1917 proved decisive in ending the war, although their effects were not fully felt until 1918.
The British naval blockade began to have a serious impact on Germany. In response, in February 1917, the German General Staff convinced Chancellor Theobald von Bethmann - Hollweg to declare unrestricted submarine warfare, with the goal of starving Britain out of the war. German planners estimated that unrestricted submarine warfare would cost Britain a monthly shipping loss of 600,000 tons. The General Staff acknowledged that the policy would almost certainly bring the United States into the conflict, but calculated that British shipping losses would be so high that they would be forced to sue for peace after 5 to 6 months, before American intervention could make an impact. In reality, tonnage sunk rose above 500,000 tons per month from February to July. It peaked at 860,000 tons in April. After July, the newly re-introduced convoy system became effective in reducing the U-boat threat. Britain was safe from starvation, while German industrial output fell and the United States joined the war far earlier than Germany had anticipated.
On 3 May 1917, during the Nivelle Offensive, the French 2nd Colonial Division, veterans of the Battle of Verdun, refused orders, arriving drunk and without their weapons. Their officers lacked the means to punish an entire division, and harsh measures were not immediately implemented. The French Army Mutinies eventually spread to a further 54 French divisions and saw 20,000 men desert. However, appeals to patriotism and duty, as well as mass arrests and trials, encouraged the soldiers to return to defend their trenches, although the French soldiers refused to participate in further offensive action. Robert Nivelle was removed from command by 15 May, replaced by General Philippe Pétain, who suspended bloody large - scale attacks.
The victory of the Central Powers at the Battle of Caporetto led the Allies to convene the Rapallo Conference at which they formed the Supreme War Council to coordinate planning. Previously, British and French armies had operated under separate commands.
In December, the Central Powers signed an armistice with Russia, thus freeing large numbers of German troops for use in the west. With German reinforcements and new American troops pouring in, the outcome was to be decided on the Western Front. The Central Powers knew that they could not win a protracted war, but they held high hopes for success based on a final quick offensive. Furthermore, both sides became increasingly fearful of social unrest and revolution in Europe. Thus, both sides urgently sought a decisive victory.
In 1917, Emperor Charles I of Austria secretly attempted separate peace negotiations with Clemenceau, through his wife 's brother Sixtus in Belgium as an intermediary, without the knowledge of Germany. Italy opposed the proposals. When the negotiations failed, his attempt was revealed to Germany, resulting in a diplomatic catastrophe.
In March and April 1917, at the First and Second Battles of Gaza, German and Ottoman forces stopped the advance of the Egyptian Expeditionary Force, which had begun in August 1916 at the Battle of Romani. At the end of October, the Sinai and Palestine Campaign resumed, when General Edmund Allenby 's XX Corps, XXI Corps and Desert Mounted Corps won the Battle of Beersheba. Two Ottoman armies were defeated a few weeks later at the Battle of Mughar Ridge and, early in December, Jerusalem was captured following another Ottoman defeat at the Battle of Jerusalem. About this time, Friedrich Freiherr Kress von Kressenstein was relieved of his duties as the Eighth Army 's commander, replaced by Djevad Pasha, and a few months later the commander of the Ottoman Army in Palestine, Erich von Falkenhayn, was replaced by Otto Liman von Sanders.
In early 1918, the front line was extended and the Jordan Valley was occupied, following the First Transjordan and the Second Transjordan attacks by British Empire forces in March and April 1918. In March, most of the Egyptian Expeditionary Force 's British infantry and Yeomanry cavalry were sent to the Western Front as a consequence of the Spring Offensive. They were replaced by Indian Army units. During several months of reorganisation and training over the summer, a number of attacks were carried out on sections of the Ottoman front line. These pushed the front line north to more advantageous positions for the Entente in preparation for an attack and to acclimatise the newly arrived Indian Army infantry. It was not until the middle of September that the integrated force was ready for large - scale operations.
The reorganised Egyptian Expeditionary Force, with an additional mounted division, broke Ottoman forces at the Battle of Megiddo in September 1918. In two days the British and Indian infantry, supported by a creeping barrage, broke the Ottoman front line and captured the headquarters of the Ottoman Eighth Army at Tulkarm, the continuous trench lines at Tabsor and Arara, and the Ottoman Seventh Army headquarters at Nablus. The Desert Mounted Corps rode through the break in the front line created by the infantry and, during virtually continuous operations by Australian Light Horse, British mounted Yeomanry, Indian Lancers and New Zealand Mounted Rifle brigades in the Jezreel Valley, captured Nazareth, Afulah, Beisan and Jenin, along with Haifa on the Mediterranean coast and Daraa east of the Jordan River on the Hejaz railway. Samakh and Tiberias on the Sea of Galilee were captured on the way northwards to Damascus. Meanwhile, Chaytor 's Force of Australian light horse, New Zealand mounted rifles, Indian, British West Indies and Jewish infantry captured the crossings of the Jordan River, Es Salt, Amman and, at Ziza, most of the Ottoman Fourth Army. The Armistice of Mudros, signed at the end of October, ended hostilities with the Ottoman Empire while fighting was continuing north of Aleppo.
On or shortly before 15 August 1917, Pope Benedict XV made a peace proposal calling for a negotiated end to the war; the proposal was not taken up by either side.
At the outbreak of the war, the United States pursued a policy of non-intervention, avoiding conflict while trying to broker a peace. When the German U-boat U-20 sank the British liner RMS Lusitania on 7 May 1915 with 128 Americans among the dead, President Woodrow Wilson insisted that "America is too proud to fight '' but demanded an end to attacks on passenger ships. Germany complied. Wilson unsuccessfully tried to mediate a settlement. However, he also repeatedly warned that the United States would not tolerate unrestricted submarine warfare, in violation of international law. Former president Theodore Roosevelt denounced German acts as "piracy ''. Wilson was narrowly re-elected in 1916 after campaigning with the slogan "he kept us out of war ''.
In January 1917, Germany resumed unrestricted submarine warfare, realizing it would mean American entry. The German Foreign Minister, in the Zimmermann Telegram, invited Mexico to join the war as Germany 's ally against the United States. In return, the Germans would finance Mexico 's war and help it recover the territories of Texas, New Mexico, and Arizona. The United Kingdom intercepted the message and presented it to the U.S. embassy in the U.K. From there it made its way to President Wilson who released the Zimmermann note to the public, and Americans saw it as casus belli. Wilson called on anti-war elements to end all wars, by winning this one and eliminating militarism from the globe. He argued that the war was so important that the U.S. had to have a voice in the peace conference. After the sinking of seven U.S. merchant ships by submarines and the publication of the Zimmermann telegram, Wilson called for war on Germany, which the U.S. Congress declared on 6 April 1917.
The United States was never formally a member of the Allies but became a self - styled "Associated Power ''. The United States had a small army, but, after the passage of the Selective Service Act, it drafted 2.8 million men, and, by summer 1918, was sending 10,000 fresh soldiers to France every day. In 1917, the U.S. Congress granted U.S. citizenship to Puerto Ricans to allow them to be drafted to participate in World War I, as part of the Jones -- Shafroth Act. German General Staff assumptions that it would be able to defeat the British and French forces before American troops reinforced them were proven incorrect.
The United States Navy sent a battleship group to Scapa Flow to join with the British Grand Fleet, destroyers to Queenstown, Ireland, and submarines to help guard convoys. Several regiments of U.S. Marines were also dispatched to France. The British and French wanted American units used to reinforce their troops already on the battle lines and not waste scarce shipping on bringing over supplies. General John J. Pershing, American Expeditionary Forces (AEF) commander, refused to break up American units to be used as filler material. As an exception, he did allow African - American combat regiments to be used in French divisions. The Harlem Hellfighters fought as part of the French 16th Division, and earned a unit Croix de Guerre for their actions at Château - Thierry, Belleau Wood, and Sechault. AEF doctrine called for the use of frontal assaults, which had long since been discarded by British Empire and French commanders due to the large loss of life that resulted.
Ludendorff drew up plans (codenamed Operation Michael) for the 1918 offensive on the Western Front. The Spring Offensive sought to divide the British and French forces with a series of feints and advances. The German leadership hoped to end the war before significant U.S. forces arrived. The operation commenced on 21 March 1918, with an attack on British forces near Saint - Quentin. German forces achieved an unprecedented advance of 60 kilometres (37 mi).
British and French trenches were penetrated using novel infiltration tactics, also named Hutier tactics, after General Oskar von Hutier, by specially trained units called stormtroopers. Previously, attacks had been characterised by long artillery bombardments and massed assaults. However, in the Spring Offensive of 1918, Ludendorff used artillery only briefly and infiltrated small groups of infantry at weak points. They attacked command and logistics areas and bypassed points of serious resistance. More heavily armed infantry then destroyed these isolated positions. This German success relied greatly on the element of surprise.
The front moved to within 120 kilometres (75 mi) of Paris. Three heavy Krupp railway guns fired 183 shells on the capital, causing many Parisians to flee. The initial offensive was so successful that Kaiser Wilhelm II declared 24 March a national holiday. Many Germans thought victory was near. After heavy fighting, however, the offensive was halted. Lacking tanks or motorised artillery, the Germans were unable to consolidate their gains. The problems of re-supply were also exacerbated by increasing distances that now stretched over terrain that was shell - torn and often impassable to traffic.
General Foch pressed to use the arriving American troops as individual replacements, whereas Pershing sought to field American units as an independent force. These units were assigned to the depleted French and British Empire commands on 28 March. At the Doullens Conference on 26 March 1918, General Foch was given authority to coordinate the Allied armies on the Western Front, and he was subsequently appointed supreme commander of the Allied forces. Haig, Petain, and Pershing retained tactical control of their respective armies; Foch assumed a coordinating rather than a directing role, and the British, French, and U.S. commands operated largely independently.
Following Operation Michael, Germany launched Operation Georgette against the northern English Channel ports. The Allies halted the drive after limited territorial gains by Germany. The German Army to the south then conducted Operations Blücher and Yorck, pushing broadly towards Paris. Germany launched Operation Marne (Second Battle of the Marne) on 15 July, in an attempt to encircle Reims. The resulting counter-attack, which started the Hundred Days Offensive, marked the first successful Allied offensive of the war.
By 20 July, the Germans had retreated across the Marne to their starting lines, having achieved little, and the German Army never regained the initiative. German casualties between March and April 1918 were 270,000, including many highly trained storm troopers.
Meanwhile, Germany was falling apart at home. Anti-war marches became frequent and morale in the army fell. Industrial output had fallen to half its 1913 level.
In the late spring of 1918, three new states were formed in the South Caucasus: the First Republic of Armenia, the Azerbaijan Democratic Republic, and the Democratic Republic of Georgia, which declared their independence from the Russian Empire. Two other minor entities were established, the Centrocaspian Dictatorship and South West Caucasian Republic (the former was liquidated by Azerbaijan in the autumn of 1918 and the latter by a joint Armenian - British task force in early 1919). With the withdrawal of the Russian armies from the Caucasus front in the winter of 1917 -- 18, the three major republics braced for an imminent Ottoman advance, which commenced in the early months of 1918. Solidarity was briefly maintained when the Transcaucasian Federative Republic was created in the spring of 1918, but this collapsed in May, when the Georgians asked for and received protection from Germany and the Azerbaijanis concluded a treaty with the Ottoman Empire that was more akin to a military alliance. Armenia was left to fend for itself and struggled for five months against the threat of a full - fledged occupation by the Ottoman Turks before defeating them at the Battle of Sardarabad.
The Allied counteroffensive, known as the Hundred Days Offensive, began on 8 August 1918, with the Battle of Amiens. The battle involved over 400 tanks and 120,000 British, Dominion, and French troops, and by the end of its first day a gap 24 kilometres (15 mi) long had been created in the German lines. The defenders displayed a marked collapse in morale, causing Ludendorff to refer to this day as the "Black Day of the German army ''. After an advance as far as 23 kilometres (14 mi), German resistance stiffened, and the battle was concluded on 12 August.
Rather than continuing the Amiens battle past the point of initial success, as had been done so many times in the past, the Allies shifted attention elsewhere. Allied leaders had now realised that to continue an attack after resistance had hardened was a waste of lives, and it was better to turn a line than to try to roll over it. They began to undertake attacks in quick order to take advantage of successful advances on the flanks, then broke them off when each attack lost its initial impetus.
British and Dominion forces launched the next phase of the campaign with the Battle of Albert on 21 August. The assault was widened by French and then further British forces in the following days. During the last week of August the Allied pressure along a 110 - kilometre (68 mi) front against the enemy was heavy and unrelenting. From German accounts, "Each day was spent in bloody fighting against an ever and again on - storming enemy, and nights passed without sleep in retirements to new lines. ''
Faced with these advances, on 2 September the German Supreme Army Command issued orders to withdraw to the Hindenburg Line in the south. This ceded without a fight the salient seized the previous April. According to Ludendorff, "We had to admit the necessity... to withdraw the entire front from the Scarpe to the Vesle. ''
September saw the Allies advance to the Hindenburg Line in the north and centre. The Germans continued to fight strong rear - guard actions and launched numerous counterattacks on lost positions, but only a few succeeded, and those only temporarily. Contested towns, villages, heights, and trenches in the screening positions and outposts of the Hindenburg Line continued to fall to the Allies, with the BEF alone taking 30,441 prisoners in the last week of September. On 24 September an assault by both the British and French came within 3 kilometres (2 mi) of St. Quentin. The Germans had now retreated to positions along or behind the Hindenburg Line.
In nearly four weeks of fighting beginning on 8 August, over 100,000 German prisoners were taken. After "the Black Day of the German Army '', the German High Command realised that the war was lost and made attempts to reach a satisfactory end. The day after that battle, Ludendorff said: "We can not win the war any more, but we must not lose it either. '' On 11 August he offered his resignation to the Kaiser, who refused it, replying, "I see that we must strike a balance. We have nearly reached the limit of our powers of resistance. The war must be ended. '' On 13 August, at Spa, Hindenburg, Ludendorff, the Chancellor, and Foreign Minister Hintze agreed that the war could not be ended militarily and, on the following day, the German Crown Council decided that victory in the field was now most improbable. Austria and Hungary warned that they could only continue the war until December, and Ludendorff recommended immediate peace negotiations. Prince Rupprecht warned Prince Max of Baden: "Our military situation has deteriorated so rapidly that I no longer believe we can hold out over the winter; it is even possible that a catastrophe will come earlier. '' On 10 September Hindenburg urged peace moves to Emperor Charles of Austria, and Germany appealed to the Netherlands for mediation. On 14 September Austria sent a note to all belligerents and neutrals suggesting a meeting for peace talks on neutral soil, and on 15 September Germany made a peace offer to Belgium. Both peace offers were rejected, and on 24 September Supreme Army Command informed the leaders in Berlin that armistice talks were inevitable.
The final assault on the Hindenburg Line began with the Meuse - Argonne Offensive, launched by French and American troops on 26 September. The following week, cooperating French and American units broke through in Champagne at the Battle of Blanc Mont Ridge, forcing the Germans off the commanding heights, and closing towards the Belgian frontier. On 8 October the line was pierced again by British and Dominion troops at the Battle of Cambrai. The German army had to shorten its front and use the Dutch frontier as an anchor to fight rear - guard actions as it fell back towards Germany.
When Bulgaria signed a separate armistice on 29 September, Ludendorff, having been under great stress for months, suffered something similar to a breakdown. It was evident that Germany could no longer mount a successful defence.
News of Germany 's impending military defeat spread throughout the German armed forces. The threat of mutiny was rife. Admiral Reinhard Scheer and Ludendorff decided to launch a last attempt to restore the "valour '' of the German Navy. Knowing the government of Prince Maximilian of Baden would veto any such action, Ludendorff decided not to inform him. Nonetheless, word of the impending assault reached sailors at Kiel. Many, refusing to be part of a naval offensive, which they believed to be suicidal, rebelled and were arrested. Ludendorff took the blame; the Kaiser dismissed him on 26 October. The collapse of the Balkans meant that Germany was about to lose its main supplies of oil and food. Its reserves had been used up, even as U.S. troops kept arriving at the rate of 10,000 per day. The Americans supplied more than 80 % of Allied oil during the war, and there was no shortage.
With the military faltering and with widespread loss of confidence in the Kaiser, Germany moved towards surrender. Prince Maximilian of Baden took charge of a new government as Chancellor of Germany to negotiate with the Allies. Negotiations with President Wilson began immediately, in the hope that he would offer better terms than the British and French. Wilson demanded a constitutional monarchy and parliamentary control over the German military. There was no resistance when the Social Democrat Philipp Scheidemann on 9 November declared Germany to be a republic. The Kaiser, kings and other hereditary rulers all were removed from power and Wilhelm fled to exile in the Netherlands. Imperial Germany was dead; a new Germany had been born as the Weimar Republic.
The collapse of the Central Powers came swiftly. Bulgaria was the first to sign an armistice, on 29 September 1918 at Saloniki. On 30 October, the Ottoman Empire capitulated, signing the Armistice of Mudros.
On 24 October, the Italians began a push that rapidly recovered territory lost after the Battle of Caporetto. This culminated in the Battle of Vittorio Veneto, which marked the end of the Austro - Hungarian Army as an effective fighting force. The offensive also triggered the disintegration of the Austro - Hungarian Empire. During the last week of October, declarations of independence were made in Budapest, Prague, and Zagreb. On 29 October, the imperial authorities asked Italy for an armistice, but the Italians continued advancing, reaching Trento, Udine, and Trieste. On 3 November, Austria - Hungary sent a flag of truce to ask for an armistice (Armistice of Villa Giusti). The terms, arranged by telegraph with the Allied Authorities in Paris, were communicated to the Austrian commander and accepted. The Armistice with Austria was signed in the Villa Giusti, near Padua, on 3 November. Austria and Hungary signed separate armistices following the overthrow of the Habsburg Monarchy. In the following days the Italian Army occupied Innsbruck and all Tyrol with 20,000 to 22,000 soldiers.
On 11 November, at 5: 00 am, an armistice with Germany was signed in a railroad carriage at Compiègne. At 11 am on 11 November 1918 -- "the eleventh hour of the eleventh day of the eleventh month '' -- a ceasefire came into effect. During the six hours between the signing of the armistice and its taking effect, opposing armies on the Western Front began to withdraw from their positions, but fighting continued along many areas of the front, as commanders wanted to capture territory before the war ended.
The occupation of the Rhineland took place following the Armistice. The occupying armies consisted of American, Belgian, British and French forces.
In November 1918, the Allies had ample supplies of men and materiel to invade Germany. Yet at the time of the armistice, no Allied force had crossed the German frontier; the Western Front was still some 720 kilometres (450 mi) from Berlin; and the Kaiser 's armies had retreated from the battlefield in good order. These factors enabled Hindenburg and other senior German leaders to spread the story that their armies had not really been defeated. This resulted in the stab - in - the - back legend, which attributed Germany 's defeat not to its inability to continue fighting (even though up to a million soldiers were suffering from the 1918 flu pandemic and unfit to fight), but to the public 's failure to respond to its "patriotic calling '' and the supposed intentional sabotage of the war effort, particularly by Jews, Socialists, and Bolsheviks.
The Allies had much more potential wealth they could spend on the war. One estimate (using 1913 U.S. dollars) is that the Allies spent $58 billion on the war and the Central Powers only $25 billion. Among the Allies, the UK spent $21 billion and the U.S. $17 billion; among the Central Powers Germany spent $20 billion.
In the aftermath of the war, four empires disappeared: the German, Austro - Hungarian, Ottoman, and Russian. Numerous nations regained their former independence, and new ones were created. Four dynasties, together with their ancillary aristocracies, fell as a result of the war: the Romanovs, the Hohenzollerns, the Habsburgs, and the Ottomans. Belgium and Serbia were badly damaged, as was France, with 1.4 million soldiers dead, not counting other casualties. Germany and Russia were similarly affected.
A formal state of war between the two sides persisted for another seven months, until the signing of the Treaty of Versailles with Germany on 28 June 1919. The United States Senate did not ratify the treaty despite public support for it, and did not formally end its involvement in the war until the Knox -- Porter Resolution was signed on 2 July 1921 by President Warren G. Harding. For the United Kingdom and the British Empire, the state of war ceased under the provisions of the Termination of the Present War (Definition) Act 1918 with respect to:
After the Treaty of Versailles, treaties with Austria, Hungary, Bulgaria, and the Ottoman Empire were signed. However, the negotiation of the latter treaty with the Ottoman Empire was followed by strife, and a final peace treaty between the Allied Powers and the country that would shortly become the Republic of Turkey was not signed until 24 July 1923, at Lausanne.
Some war memorials date the end of the war as being when the Versailles Treaty was signed in 1919, which was when many of the troops serving abroad finally returned home; by contrast, most commemorations of the war 's end concentrate on the armistice of 11 November 1918. Legally, the formal peace treaties were not complete until the last, the Treaty of Lausanne, was signed. Under its terms, the Allied forces left Constantinople on 23 August 1923.
After the war, the Paris Peace Conference imposed a series of peace treaties on the Central Powers officially ending the war. The 1919 Treaty of Versailles dealt with Germany and, building on Wilson 's 14th point, brought into being the League of Nations on 28 June 1919.
The Central Powers had to acknowledge responsibility for "all the loss and damage to which the Allied and Associated Governments and their nationals have been subjected as a consequence of the war imposed upon them by '' their aggression. In the Treaty of Versailles, this statement was Article 231. This article became known as the War Guilt clause, and the majority of Germans felt humiliated and resentful. Overall, the Germans felt they had been unjustly dealt with by what they called the "diktat of Versailles ''. German historian Hagen Schulze said the Treaty placed Germany "under legal sanctions, deprived of military power, economically ruined, and politically humiliated. '' Belgian historian Laurence Van Ypersele emphasizes the central role played by memory of the war and the Versailles Treaty in German politics in the 1920s and 1930s:
Active denial of war guilt in Germany and German resentment at both reparations and continued Allied occupation of the Rhineland made widespread revision of the meaning and memory of the war problematic. The legend of the "stab in the back '' and the wish to revise the "Versailles diktat '', and the belief in an international threat aimed at the elimination of the German nation persisted at the heart of German politics. Even a man of peace such as (Gustav) Stresemann publicly rejected German guilt. As for the Nazis, they waved the banners of domestic treason and international conspiracy in an attempt to galvanize the German nation into a spirit of revenge. Like Fascist Italy, Nazi Germany sought to redirect the memory of the war to the benefit of its own policies.
Meanwhile, new nations liberated from German rule viewed the treaty as recognition of wrongs committed against small nations by much larger aggressive neighbors. The Peace Conference required all the defeated powers to pay reparations for all the damage done to civilians. However, owing to economic difficulties and Germany being the only defeated power with an intact economy, the burden fell largely on Germany.
Austria - Hungary was partitioned into several successor states, including Austria, Hungary, Czechoslovakia, and Yugoslavia, largely but not entirely along ethnic lines. Transylvania was shifted from Hungary to Greater Romania. The details were contained in the Treaty of Saint - Germain and the Treaty of Trianon. As a result of the Treaty of Trianon, 3.3 million Hungarians came under foreign rule. Although the Hungarians made up 54 % of the population of the pre-war Kingdom of Hungary, only 32 % of its territory was left to Hungary. Between 1920 and 1924, 354,000 Hungarians fled former Hungarian territories attached to Romania, Czechoslovakia, and Yugoslavia.
The Russian Empire, which had withdrawn from the war in 1917 after the October Revolution, lost much of its western frontier as the newly independent nations of Estonia, Finland, Latvia, Lithuania, and Poland were carved from it. Romania took control of Bessarabia in April 1918.
The Ottoman Empire disintegrated, with much of its Levant territory awarded to various Allied powers as protectorates. The Turkish core in Anatolia was reorganised as the Republic of Turkey. The Ottoman Empire was to be partitioned by the Treaty of Sèvres of 1920. This treaty was never ratified by the Sultan and was rejected by the Turkish National Movement, leading to the victorious Turkish War of Independence and the much less stringent 1923 Treaty of Lausanne.
Poland reemerged as an independent country, after more than a century. The Kingdom of Serbia and its dynasty, as a "minor Entente nation '' and the country with the most casualties per capita, became the backbone of a new multinational state, the Kingdom of Serbs, Croats and Slovenes, later renamed Yugoslavia. Czechoslovakia, combining the Kingdom of Bohemia with parts of the Kingdom of Hungary, became a new nation. Russia became the Soviet Union and lost Finland, Estonia, Lithuania, and Latvia, which became independent countries. The Ottoman Empire was soon replaced by Turkey and several other countries in the Middle East.
In the British Empire, the war unleashed new forms of nationalism. In Australia and New Zealand the Battle of Gallipoli became known as those nations ' "Baptism of Fire ''. It was the first major war in which the newly established countries fought, and it was one of the first times that Australian troops fought as Australians, not just subjects of the British Crown. Anzac Day, commemorating the Australian and New Zealand Army Corps, celebrates this defining moment.
After the Battle of Vimy Ridge, where the Canadian divisions fought together for the first time as a single corps, Canadians began to refer to their country as a nation "forged from fire ''. Having succeeded on the same battleground where the "mother countries '' had previously faltered, they were for the first time respected internationally for their own accomplishments. Canada entered the war as a Dominion of the British Empire and remained so, although it emerged with a greater measure of independence. When Britain declared war in 1914, the dominions were automatically at war; at the conclusion, Canada, Australia, New Zealand, and South Africa were individual signatories of the Treaty of Versailles.
The establishment of the modern state of Israel and the roots of the continuing Israeli -- Palestinian conflict are partially found in the unstable power dynamics of the Middle East that resulted from World War I. Before the end of the war, the Ottoman Empire had maintained a modest level of peace and stability throughout the Middle East. With the fall of the Ottoman government, power vacuums developed and conflicting claims to land and nationhood began to emerge. The political boundaries drawn by the victors of World War I were quickly imposed, sometimes after only cursory consultation with the local population. These continue to be problematic in the 21st - century struggles for national identity. While the dissolution of the Ottoman Empire at the end of World War I was pivotal in contributing to the modern political situation of the Middle East, including the Arab - Israeli conflict, the end of Ottoman rule also spawned lesser known disputes over water and other natural resources.
The war had profound consequences on the health of soldiers. Of the 60 million European military personnel who were mobilized from 1914 to 1918, 8 million were killed, 7 million were permanently disabled, and 15 million were seriously injured. Germany lost 15.1 % of its active male population, Austria - Hungary lost 17.1 %, and France lost 10.5 %. In Germany, civilian deaths were 474,000 higher than in peacetime, due in large part to food shortages and malnutrition that weakened resistance to disease. By the end of the war, starvation caused by famine had killed approximately 100,000 people in Lebanon. Between 5 and 10 million people died in the Russian famine of 1921. By 1922, there were between 4.5 million and 7 million homeless children in Russia as a result of nearly a decade of devastation from World War I, the Russian Civil War, and the subsequent famine of 1920 -- 1922. Numerous anti-Soviet Russians fled the country after the Revolution; by the 1930s, the northern Chinese city of Harbin had 100,000 Russians. Thousands more emigrated to France, England, and the United States.
The Australian prime minister, Billy Hughes, wrote to the British prime minister, Lloyd George, "You have assured us that you can not get better terms. I much regret it, and hope even now that some way may be found of securing agreement for demanding reparation commensurate with the tremendous sacrifices made by the British Empire and her Allies. '' Australia received ₤ 5,571,720 war reparations, but the direct cost of the war to Australia had been ₤ 376,993,052, and, by the mid-1930s, repatriation pensions, war gratuities, interest and sinking fund charges were ₤ 831,280,947. Of about 416,000 Australians who served, about 60,000 were killed and another 152,000 were wounded.
Diseases flourished in the chaotic wartime conditions. In 1914 alone, louse - borne epidemic typhus killed 200,000 in Serbia. From 1918 to 1922, Russia had about 25 million infections and 3 million deaths from epidemic typhus. In 1923, 13 million Russians contracted malaria, a sharp increase from the pre-war years. In addition, a major influenza epidemic spread around the world. Overall, the 1918 flu pandemic killed at least 50 million people.
Lobbying by Chaim Weizmann and fear that American Jews would encourage the United States to support Germany culminated in the British government 's Balfour Declaration of 1917, endorsing creation of a Jewish homeland in Palestine. A total of more than 1,172,000 Jewish soldiers served in the Allied and Central Power forces in World War I, including 275,000 in Austria - Hungary and 450,000 in Tsarist Russia.
The social disruption and widespread violence of the Russian Revolution of 1917 and the ensuing Russian Civil War sparked more than 2,000 pogroms in the former Russian Empire, mostly in Ukraine. An estimated 60,000 -- 200,000 civilian Jews were killed in the atrocities.
In the aftermath of World War I, Greece fought against Turkish nationalists led by Mustafa Kemal, a war that eventually resulted in a massive population exchange between the two countries under the Treaty of Lausanne. According to various sources, several hundred thousand Greeks died during this period, which was tied in with the Greek Genocide.
World War I began as a clash of 20th - century technology and 19th - century tactics, with the inevitably large ensuing casualties. By the end of 1917, however, the major armies, now numbering millions of men, had modernised and were making use of telephone, wireless communication, armoured cars, tanks, and aircraft. Infantry formations were reorganised, so that 100 - man companies were no longer the main unit of manoeuvre; instead, squads of 10 or so men, under the command of a junior NCO, were favoured.
Artillery also underwent a revolution. In 1914, cannons were positioned in the front line and fired directly at their targets. By 1917, indirect fire with guns (as well as mortars and even machine guns) was commonplace, using new techniques for spotting and ranging, notably aircraft and the often overlooked field telephone. Counter-battery missions became commonplace, also, and sound detection was used to locate enemy batteries.
Germany was far ahead of the Allies in using heavy indirect fire. The German Army employed 150 mm (6 in) and 210 mm (8 in) howitzers in 1914, when typical French and British guns were only 75 mm (3 in) and 105 mm (4 in). The British had a 6 - inch (152 mm) howitzer, but it was so heavy it had to be hauled to the field in pieces and assembled. The Germans also fielded Austrian 305 mm (12 in) and 420 mm (17 in) guns and, even at the beginning of the war, had inventories of various calibers of Minenwerfer, which were ideally suited for trench warfare.
On 27 June 1917, the Germans used the biggest gun in the world at that time, the Batterie Pommern, nicknamed "Lange Max ''. This Krupp gun was able to fire 750 kg shells from Koekelare to Dunkirk, about 50 km away.
Much of the combat involved trench warfare, in which hundreds often died for each metre gained. Many of the deadliest battles in history occurred during World War I. Such battles include Ypres, the Marne, Cambrai, the Somme, Verdun, and Gallipoli. The Germans employed the Haber process of nitrogen fixation to provide their forces with a constant supply of gunpowder despite the British naval blockade. Artillery was responsible for the largest number of casualties and consumed vast quantities of explosives. The large number of head wounds caused by exploding shells and fragmentation forced the combatant nations to develop the modern steel helmet, led by the French, who introduced the Adrian helmet in 1915. It was quickly followed by the Brodie helmet, worn by British Imperial and US troops, and in 1916 by the distinctive German Stahlhelm, a design, with improvements, still in use today.
Gas! GAS! Quick, boys! -- An ecstasy of fumbling,
Fitting the clumsy helmets just in time;
But someone still was yelling out and stumbling,
And flound'ring like a man in fire or lime...
Dim, through the misty panes and thick green light,
As under a green sea, I saw him drowning.

-- Wilfred Owen, "Dulce et Decorum est '', 1917
The widespread use of chemical warfare was a distinguishing feature of the conflict. Gases used included chlorine, mustard gas and phosgene. Relatively few war casualties were caused by gas, as effective countermeasures to gas attacks were quickly created, such as gas masks. The use of chemical warfare and small - scale strategic bombing were both outlawed by the Hague Conventions of 1899 and 1907, and both proved to be of limited effectiveness, though they captured the public imagination.
The most powerful land - based weapons were railway guns, weighing dozens of tons apiece. The German ones were nicknamed Big Berthas, even though the namesake was not a railway gun. Germany developed the Paris Gun, able to bombard Paris from over 100 kilometres (62 mi), though shells were relatively light at 94 kilograms (210 lb).
Trenches, machine guns, air reconnaissance, barbed wire, and modern artillery with fragmentation shells helped bring the battle lines of World War I to a stalemate. The British and the French sought a solution with the creation of the tank and mechanised warfare. The first British tanks were used during the Battle of the Somme on 15 September 1916. Mechanical reliability was an issue, but the experiment proved its worth. Within a year, the British were fielding tanks by the hundreds, and they showed their potential during the Battle of Cambrai in November 1917, by breaking the Hindenburg Line, while combined arms teams captured 8,000 enemy soldiers and 100 guns. Meanwhile, the French introduced the first tanks with a rotating turret, the Renault FT, which became a decisive tool of victory. The conflict also saw the introduction of light automatic weapons and submachine guns, such as the Lewis Gun, the Browning automatic rifle, and the Bergmann MP18.
Another new weapon, the flamethrower, was first used by the German army and later adopted by other forces. Although not of high tactical value, the flamethrower was a powerful, demoralising weapon that caused terror on the battlefield.
Trench railways evolved to supply the enormous quantities of food, water, and ammunition required to support large numbers of soldiers in areas where conventional transportation systems had been destroyed. Internal combustion engines and improved traction systems for automobiles and trucks / lorries eventually rendered trench railways obsolete.
On the Western Front neither side made impressive gains in the first three years of the war with attacks at Verdun, the Somme, Passchendaele, and Cambrai -- the exception was Nivelle 's Offensive, in which the German defence gave ground while mauling the attackers so badly that there were mutinies in the French Army. In 1918 the Germans smashed through the defence lines in three great attacks: Michael, on the Lys, and on the Aisne, which displayed the power of their new tactics. The Allies struck back at Soissons, which showed the Germans that they must return to the defensive, and at Amiens; tanks played a prominent role in both of these assaults, as they had the year before at Cambrai.
The territorial gains in the East were larger. The Germans did well at the First Masurian Lakes, driving the invaders from East Prussia, and at Riga, which led to the Russians suing for peace. The Austro - Hungarians and Germans joined for a great success at Gorlice -- Tarnów, which drove the Russians out of Poland. In a series of attacks along with the Bulgarians, they occupied Serbia, Albania, Montenegro and most of Romania. The Allies ' successes came later: in Palestine (the beginning of the end for the Ottomans), in Macedonia (which drove the Bulgarians out of the war), and at Vittorio Veneto (the final blow for the Austro - Hungarians).
The area occupied in the East by the Central Powers on 11 November 1918 was 1,042,600 km², roughly the size of Colombia.
Germany deployed U-boats (submarines) after the war began. Alternating between restricted and unrestricted submarine warfare in the Atlantic, the Kaiserliche Marine employed them to deprive the British Isles of vital supplies. The deaths of British merchant sailors and the seeming invulnerability of U-boats led to the development of depth charges (1916), hydrophones (passive sonar, 1917), blimps, hunter - killer submarines (HMS R - 1, 1917), forward - throwing anti-submarine weapons, and dipping hydrophones (the latter two both abandoned in 1918). To extend their operations, the Germans proposed supply submarines (1916). Most of these would be forgotten in the interwar period until World War II revived the need.
Fixed - wing aircraft were first used militarily by the Italians in Libya on 23 October 1911 during the Italo - Turkish War for reconnaissance, soon followed by the dropping of grenades and aerial photography the next year. By 1914, their military utility was obvious. They were initially used for reconnaissance and ground attack. To shoot down enemy planes, anti-aircraft guns and fighter aircraft were developed. Strategic bombers were created, principally by the Germans and British, though the former used Zeppelins as well. Towards the end of the conflict, aircraft carriers were used for the first time, with HMS Furious launching Sopwith Camels in a raid to destroy the Zeppelin hangars at Tondern in 1918.
Manned observation balloons, floating high above the trenches, were used as stationary reconnaissance platforms, reporting enemy movements and directing artillery. Balloons commonly had a crew of two, equipped with parachutes, so that if there was an enemy air attack the crew could parachute to safety. At the time, parachutes were too heavy to be used by pilots of aircraft (with their marginal power output), and smaller versions were not developed until the end of the war; they were also opposed by the British leadership, who feared they might promote cowardice.
Recognised for their value as observation platforms, balloons were important targets for enemy aircraft. To defend them against air attack, they were heavily protected by antiaircraft guns and patrolled by friendly aircraft; to attack them, unusual weapons such as air - to - air rockets were tried. Thus, the reconnaissance value of blimps and balloons contributed to the development of air - to - air combat between all types of aircraft, and to the trench stalemate, because it was impossible to move large numbers of troops undetected. The Germans conducted air raids on England during 1915 and 1916 with airships, hoping to damage British morale and cause aircraft to be diverted from the front lines, and indeed the resulting panic led to the diversion of several squadrons of fighters from France.
On 19 August 1915, the German submarine U-27 was sunk by the British Q - ship HMS Baralong. All German survivors were summarily executed by Baralong 's crew on the orders of Lieutenant Godfrey Herbert, the captain of the ship. The shooting was reported to the media by American citizens who were on board the Nicosia, a British freighter loaded with war supplies, which was stopped by U-27 just minutes before the incident.
On 24 September, Baralong destroyed U-41, which was in the process of sinking the cargo ship Urbino. According to Karl Goetz, the submarine 's commander, Baralong continued to fly the U.S. flag after firing on U-41 and then rammed the lifeboat -- carrying the German survivors -- sinking it.
The Canadian hospital ship HMHS Llandovery Castle was torpedoed by the German submarine SM U-86 on 27 June 1918 in violation of international law. Only 24 of the 258 medical personnel, patients, and crew survived. Survivors reported that the U-boat surfaced and ran down the lifeboats, machine - gunning survivors in the water. The U-boat captain, Helmut Patzig, was charged with war crimes in Germany following the war, but escaped prosecution by going to the Free City of Danzig, beyond the jurisdiction of German courts.
The first successful use of poison gas as a weapon of warfare occurred during the Second Battle of Ypres (22 April -- 25 May 1915). Gas was soon used by all major belligerents throughout the war. It is estimated that the use of chemical weapons employed by both sides throughout the war had inflicted 1.3 million casualties. For example, the British had over 180,000 chemical weapons casualties during the war, and up to one - third of American casualties were caused by them. The Russian Army reportedly suffered roughly 500,000 chemical weapon casualties in World War I. The use of chemical weapons in warfare was in direct violation of the 1899 Hague Declaration Concerning Asphyxiating Gases and the 1907 Hague Convention on Land Warfare, which prohibited their use.
The effect of poison gas was not limited to combatants. Civilians were at risk from the gases as winds blew the poison gases through their towns, and they rarely received warnings or alerts of potential danger. In addition to absent warning systems, civilians often did not have access to effective gas masks. An estimated 100,000 -- 260,000 civilian casualties were caused by chemical weapons during the conflict and tens of thousands more (along with military personnel) died from scarring of the lungs, skin damage, and cerebral damage in the years after the conflict ended. Many commanders on both sides knew such weapons would cause major harm to civilians but nonetheless continued to use them. British Field Marshal Sir Douglas Haig wrote in his diary, "My officers and I were aware that such weapons would cause harm to women and children living in nearby towns, as strong winds were common in the battlefront. However, because the weapon was to be directed against the enemy, none of us were overly concerned at all. ''
The ethnic cleansing of the Ottoman Empire 's Armenian population, including mass deportations and executions, during the final years of the Ottoman Empire is considered genocide. The Ottomans carried out organized and systematic massacres of the Armenian population at the beginning of the war and portrayed deliberately provoked acts of Armenian resistance as rebellions to justify further extermination. In early 1915, a number of Armenians volunteered to join the Russian forces and the Ottoman government used this as a pretext to issue the Tehcir Law (Law on Deportation), which authorized the deportation of Armenians from the Empire 's eastern provinces to Syria between 1915 and 1918. The Armenians were intentionally marched to death and a number were attacked by Ottoman brigands. While an exact number of deaths is unknown, the International Association of Genocide Scholars estimates 1.5 million. The government of Turkey has consistently denied the genocide, arguing that those who died were victims of inter-ethnic fighting, famine, or disease during World War I; these claims are rejected by most historians. Other ethnic groups were similarly attacked by the Ottoman Empire during this period, including Assyrians and Greeks, and some scholars consider those events to be part of the same policy of extermination.
Many pogroms accompanied the Russian Revolution of 1917 and the ensuing Russian Civil War. An estimated 60,000 -- 200,000 civilian Jews were killed in the atrocities throughout the former Russian Empire (mostly within the Pale of Settlement, in present - day Ukraine).
The German invaders treated any resistance -- such as sabotaging rail lines -- as illegal and immoral, and shot the offenders and burned buildings in retaliation. In addition, they tended to suspect that most civilians were potential francs - tireurs (guerrillas) and, accordingly, took and sometimes killed hostages from among the civilian population. The German army executed over 6,500 French and Belgian civilians between August and November 1914, usually in near - random large - scale shootings of civilians ordered by junior German officers. The German Army destroyed 15,000 -- 20,000 buildings -- most famously the university library at Louvain -- and generated a wave of refugees of over a million people. Over half the German regiments in Belgium were involved in major incidents. Thousands of workers were shipped to Germany to work in factories. British propaganda dramatizing the Rape of Belgium attracted much attention in the United States, while Berlin said it was both lawful and necessary because of the threat of franc - tireurs like those in France in 1870. The British and French magnified the reports and disseminated them at home and in the United States, where they played a major role in dissolving support for Germany.
The British soldiers of the war were initially volunteers but increasingly were conscripted into service. Surviving veterans, returning home, often found they could discuss their experiences only amongst themselves. Grouping together, they formed "veterans ' associations '' or "Legions ''. A small number of personal accounts of American veterans have been collected by the Library of Congress Veterans History Project.
About eight million men surrendered and were held in POW camps during the war. All nations pledged to follow the Hague Conventions on fair treatment of prisoners of war, and the survival rate for POWs was generally much higher than that of combatants at the front. Individual surrenders were uncommon; large units usually surrendered en masse. At the Siege of Maubeuge about 40,000 French soldiers surrendered; at the Battle of Galicia the Russians took about 100,000 to 120,000 Austrian captives; at the Brusilov Offensive about 325,000 to 417,000 Germans and Austrians surrendered to the Russians; and at the Battle of Tannenberg 92,000 Russians surrendered. When the besieged garrison of Kaunas surrendered in 1915, some 20,000 Russians became prisoners; at the battle near Przasnysz (February -- March 1915) 14,000 Germans surrendered to the Russians; and at the First Battle of the Marne about 12,000 Germans surrendered to the Allies. Prisoners accounted for 25 -- 31 % of Russian losses (those captured, wounded, or killed), 32 % of Austro - Hungarian losses, 26 % of Italian, 12 % of French, 9 % of German, and 7 % of British. Prisoners from the Allied armies totalled about 1.4 million (not including Russia, which lost 2.5 -- 3.5 million men as prisoners). From the Central Powers about 3.3 million men became prisoners; most of them surrendered to the Russians. Germany held 2.5 million prisoners; Russia held 2.2 -- 2.9 million; Britain and France held about 720,000, most of them captured just before the Armistice; and the United States held 48,000. The most dangerous moment was the act of surrender itself, when helpless soldiers were sometimes gunned down. Once prisoners reached a camp, conditions were, in general, satisfactory (and much better than in World War II), thanks in part to the efforts of the International Red Cross and inspections by neutral nations. Conditions were terrible in Russia, however: starvation was common for prisoners and civilians alike; about 15 -- 20 % of the prisoners held in Russia died, while about 8 % of Russians held by the Central Powers died. In Germany, food was scarce, but only 5 % of prisoners died.
The Ottoman Empire often treated POWs poorly. Some 11,800 British Empire soldiers, most of them Indians, became prisoners after the Siege of Kut in Mesopotamia in April 1916; 4,250 died in captivity. Although many were in a poor condition when captured, Ottoman officers forced them to march 1,100 kilometres (684 mi) to Anatolia. A survivor said: "We were driven along like beasts; to drop out was to die. '' The survivors were then forced to build a railway through the Taurus Mountains.
In Russia, when the prisoners from the Czech Legion of the Austro - Hungarian army were released in 1917, they re-armed themselves and briefly became a military and diplomatic force during the Russian Civil War.
While the Allied prisoners of the Central Powers were quickly sent home at the end of active hostilities, the same treatment was not granted to Central Power prisoners of the Allies and Russia, many of whom served as forced labor, e.g., in France until 1920. They were released only after many approaches by the Red Cross to the Allied Supreme Council. German prisoners were still being held in Russia as late as 1924.
Military and civilian observers from every major power closely followed the course of the war. Many were able to report on events from a perspective somewhat akin to modern "embedded '' positions within the opposing land and naval forces.
In the Balkans, Yugoslav nationalists such as the leader, Ante Trumbić, strongly supported the war, desiring the freedom of Yugoslavs from Austria - Hungary and other foreign powers and the creation of an independent Yugoslavia. The Yugoslav Committee was formed in Paris on 30 April 1915 but shortly moved its office to London; Trumbić led the Committee. In April 1918, the Rome Congress of Oppressed Nationalities met, including Czechoslovak, Italian, Polish, Transylvanian, and Yugoslav representatives who urged the Allies to support national self - determination for the peoples residing within Austria - Hungary.
In the Middle East, Arab nationalism soared in Ottoman territories in response to the rise of Turkish nationalism during the war, with Arab nationalist leaders advocating the creation of a pan-Arab state. In 1916, the Arab Revolt began in Ottoman - controlled territories of the Middle East in an effort to achieve independence.
In East Africa, Iyasu V of Ethiopia was supporting the Dervish state, which was at war with the British in the Somaliland Campaign. Von Syburg, the German envoy in Addis Ababa, said, "now the time has come for Ethiopia to regain the coast of the Red Sea driving the Italians home, to restore the Empire to its ancient size. '' The Ethiopian Empire was on the verge of entering World War I on the side of the Central Powers before Iyasu 's overthrow, brought about by Allied pressure on the Ethiopian aristocracy.
A number of socialist parties initially supported the war when it began in August 1914. But European socialists split on national lines, with the concept of class conflict held by radical socialists such as Marxists and syndicalists being overborne by their patriotic support for the war. Once the war began, Austrian, British, French, German, and Russian socialists followed the rising nationalist current by supporting their countries ' intervention in the war.
Italian nationalism was stirred by the outbreak of the war and was initially strongly supported by a variety of political factions. One of the most prominent and popular Italian nationalist supporters of the war was Gabriele d'Annunzio, who promoted Italian irredentism and helped sway the Italian public to support intervention in the war. The Italian Liberal Party, under the leadership of Paolo Boselli, promoted intervention in the war on the side of the Allies and used the Dante Alighieri Society to promote Italian nationalism. Italian socialists were divided on whether to support the war or oppose it; some were militant supporters of the war, including Benito Mussolini and Leonida Bissolati. However, the Italian Socialist Party decided to oppose the war after anti-militarist protestors were killed, resulting in a general strike called Red Week. The Italian Socialist Party purged itself of pro-war nationalist members, including Mussolini. Mussolini, a syndicalist who supported the war on grounds of irredentist claims on Italian - populated regions of Austria - Hungary, formed the pro-interventionist Il Popolo d'Italia and the Fasci Rivoluzionario d'Azione Internazionalista ("Revolutionary Fasci for International Action '') in October 1914 that later developed into the Fasci di Combattimento in 1919, the origin of fascism. Mussolini 's nationalism enabled him to raise funds from Ansaldo (an armaments firm) and other companies to create Il Popolo d'Italia to convince socialists and revolutionaries to support the war.
Once war was declared, many socialists and trade unions backed their governments. Among the exceptions were the Bolsheviks, the Socialist Party of America, and the Italian Socialist Party, and people like Karl Liebknecht, Rosa Luxemburg, and their followers in Germany.
Benedict XV, elected to the papacy less than three months into World War I, made the war and its consequences the main focus of his early pontificate. In stark contrast to his predecessor, five days after his election he spoke of his determination to do what he could to bring peace. His first encyclical, Ad beatissimi Apostolorum, given 1 November 1914, was concerned with this subject. Benedict XV found his abilities and unique position as a religious emissary of peace ignored by the belligerent powers. The 1915 Treaty of London between Italy and the Triple Entente included secret provisions whereby the Allies agreed with Italy to ignore papal peace moves towards the Central Powers. Consequently, the publication of Benedict 's proposed seven - point Peace Note of August 1917 was roundly ignored by all parties except Austria - Hungary.
In Britain, in 1914, the Public Schools Officers ' Training Corps annual camp was held at Tidworth Pennings, near Salisbury Plain. Head of the British Army, Lord Kitchener, was to review the cadets, but the imminence of the war prevented him. General Horace Smith - Dorrien was sent instead. He surprised the two - or - three thousand cadets by declaring (in the words of Donald Christopher Smith, a Bermudian cadet who was present),
that war should be avoided at almost any cost, that war would solve nothing, that the whole of Europe and more besides would be reduced to ruin, and that the loss of life would be so large that whole populations would be decimated. In our ignorance I, and many of us, felt almost ashamed of a British General who uttered such depressing and unpatriotic sentiments, but during the next four years, those of us who survived the holocaust -- probably not more than one - quarter of us -- learned how right the General 's prognosis was and how courageous he had been to utter it.
Voicing these sentiments did not hinder Smith - Dorrien 's career, or prevent him from doing his duty in World War I to the best of his abilities.
Many countries jailed those who spoke out against the conflict. These included Eugene Debs in the United States and Bertrand Russell in Britain. In the US, the Espionage Act of 1917 and Sedition Act of 1918 made it a federal crime to oppose military recruitment or make any statements deemed "disloyal ''. Publications that were at all critical of the government were removed from circulation by postal censors, and many people served long prison sentences for statements of fact deemed unpatriotic.
A number of nationalists opposed intervention, particularly within states that the nationalists were hostile to. Although the vast majority of Irish people consented to participate in the war in 1914 and 1915, a minority of advanced Irish nationalists staunchly opposed taking part. The war began amid the Home Rule crisis in Ireland that had resurfaced in 1912 and, by July 1914, there was a serious possibility of an outbreak of civil war in Ireland. Irish nationalists and Marxists attempted to pursue Irish independence, culminating in the Easter Rising of 1916, with Germany sending 20,000 rifles to Ireland to stir unrest in Britain. The UK government placed Ireland under martial law in response to the Easter Rising; although, once the immediate threat of revolution had dissipated, the authorities did try to make concessions to nationalist feeling. However, opposition to involvement in the war increased in Ireland, resulting in the Conscription Crisis of 1918.
Other opposition came from conscientious objectors -- some socialist, some religious -- who refused to fight. In Britain, 16,000 people asked for conscientious objector status. Some of them, most notably prominent peace activist Stephen Henry Hobhouse, refused both military and alternative service. Many suffered years of prison, including solitary confinement and bread and water diets. Even after the war, in Britain many job advertisements were marked "No conscientious objectors need apply ''.
The Central Asian Revolt started in the summer of 1916, when the Russian Empire government ended its exemption of Muslims from military service.
In 1917, a series of French Army Mutinies led to dozens of soldiers being executed and many more imprisoned.
In Milan, in May 1917, Bolshevik revolutionaries organised and engaged in rioting calling for an end to the war, and managed to close down factories and stop public transportation. The Italian army was forced to enter Milan with tanks and machine guns to face Bolsheviks and anarchists, who fought violently until 23 May when the army gained control of the city. Almost 50 people (including three Italian soldiers) were killed and over 800 people arrested.
In September 1917, Russian soldiers in France began questioning why they were fighting for the French at all and mutinied. In Russia, opposition to the war led to soldiers also establishing their own revolutionary committees, which helped foment the October Revolution of 1917, with the call going up for "bread, land, and peace ''. The Bolsheviks agreed to a peace treaty with Germany, the peace of Brest - Litovsk, despite its harsh conditions.
In northern Germany, the end of October 1918 saw the beginning of the German Revolution of 1918 -- 1919. Units of the German Navy refused to set sail for a last, large - scale operation in a war they saw as good as lost; this initiated the uprising. The sailors ' revolt, which then ensued in the naval ports of Wilhelmshaven and Kiel, spread across the whole country within days and led to the proclamation of a republic on 9 November 1918 and shortly thereafter to the abdication of Kaiser Wilhelm II.
Conscription was common in most European countries. However, it was controversial in English - speaking countries. It was especially unpopular among minority ethnic groups -- especially the Irish Catholics in Ireland and Australia, and the French Catholics in Canada.
In Canada the issue produced a major political crisis that permanently alienated the Francophones. It opened a political gap between French Canadians, who believed their true loyalty was to Canada and not to the British Empire, and members of the Anglophone majority, who saw the war as a duty to their British heritage.
In Australia, a sustained pro-conscription campaign by Billy Hughes, the Prime Minister, caused a split in the Australian Labor Party, so Hughes formed the Nationalist Party of Australia in 1917 to pursue the matter. Farmers, the labour movement, the Catholic Church, and the Irish Catholics successfully opposed Hughes ' push, which was rejected in two plebiscites.
In Britain, conscription resulted in the calling up of nearly every physically fit man -- six million of the ten million eligible. Of these, about 750,000 lost their lives. Most of the dead were young unmarried men; however, 160,000 wives lost husbands and 300,000 children lost fathers. Conscription during the First World War began when the British government passed the Military Service Act in 1916. The act specified that single men aged 18 to 40 years old were liable to be called up for military service unless they were widowed with children or ministers of a religion. There was a system of Military Service Tribunals to adjudicate upon claims for exemption upon the grounds of performing civilian work of national importance, domestic hardship, health, and conscientious objection. The law went through several changes before the war ended. Married men were exempt in the original Act, although this was changed in June 1916. The age limit was also eventually raised to 51 years old. Recognition of work of national importance also diminished, and in the last year of the war there was some support for the conscription of clergy. Conscription lasted until mid-1919. Due to the political situation in Ireland, conscription was never applied there; it applied only in England, Scotland and Wales.
In the United States, conscription began in 1917 and was generally well received, with a few pockets of opposition in isolated rural areas. The administration decided to rely primarily on conscription, rather than voluntary enlistment, to raise military manpower after only 73,000 men volunteered in the first six weeks of the war, against an initial target of one million. In 1917, 10 million men were registered. This was deemed inadequate, so age ranges were widened and exemptions reduced; by the end of 1918, 24 million men had been registered and nearly 3 million had been inducted into the military services. The draft was universal and included blacks on the same terms as whites, although they served in different units. In all, 367,710 black Americans were drafted (13.0 % of the total), compared to 2,442,586 whites (86.9 %).
Forms of resistance ranged from peaceful protest to violent demonstrations and from humble letter - writing campaigns asking for mercy to radical newspapers demanding reform. The most common tactics were dodging and desertion, and many communities sheltered and defended their draft dodgers as political heroes. Many socialists were jailed for "obstructing the recruitment or enlistment service ''. The most famous was Eugene Debs, head of the Socialist Party of America, who ran for president in 1920 from his prison cell. In 1917 a number of radicals and anarchists challenged the new draft law in federal court, arguing that it was a direct violation of the Thirteenth Amendment 's prohibition against slavery and involuntary servitude. The Supreme Court unanimously upheld the constitutionality of the draft act in the Selective Draft Law Cases on January 7, 1918.
Like all of the armies of mainland Europe, Austria - Hungary relied on conscription to fill its ranks. Officer recruitment, however, was voluntary. The effect of this at the start of the war was that well over a quarter of the rank and file were Slavs, while more than 75 % of the officers were ethnic Germans. This was much resented. The army has been described as being "run on colonial lines '' and the Slav soldiers as "disaffected ''. Thus conscription contributed greatly to Austria 's disastrous performance on the battlefield.
The non-military diplomatic and propaganda interactions among the nations were designed to build support for the cause, or to undermine support for the enemy. For the most part, wartime diplomacy focused on four issues: propaganda campaigns; defining and redefining the war goals, which became harsher as the war went on; luring neutral nations (Italy, the Ottoman Empire, Bulgaria, Romania) into the coalition by offering slices of enemy territory; and encouragement by the Allies of nationalistic minority movements inside the Central Powers, especially among Czechs, Poles, and Arabs. In addition, there were multiple peace proposals coming from neutrals, or from one side or the other; none of them progressed very far.
... "Strange, friend, '' I said, "Here is no cause to mourn. '' "None, '' said the other, "Save the undone years ''...
The War was an unprecedented triumph for natural science. (Francis) Bacon had promised that knowledge would be power, and power it was: power to destroy the bodies and souls of men more rapidly than had ever been done by human agency before. This triumph paved the way to other triumphs: improvements in transport, in sanitation, in surgery, medicine, and psychiatry, in commerce and industry, and, above all, in preparations for the next war.
The first tentative efforts to comprehend the meaning and consequences of modern warfare began during the initial phases of the war, and this process continued throughout and after the end of hostilities, and is still underway, more than a century later.
Historian Heather Jones argues that the historiography has been reinvigorated by the cultural turn in recent years. Scholars have raised entirely new questions regarding military occupation, radicalization of politics, race, and the male body. Furthermore, new research has revised our understanding of five major topics that historians have long debated. These are: Why the war began, why the Allies won, whether generals were responsible for high casualty rates, how the soldiers endured the horrors of trench warfare, and to what extent the civilian homefront accepted and endorsed the war effort.
Memorials were erected in thousands of villages and towns. Close to battlefields, those buried in improvised burial grounds were gradually moved to formal graveyards under the care of organisations such as the Commonwealth War Graves Commission, the American Battle Monuments Commission, the German War Graves Commission, and Le Souvenir français. Many of these graveyards also have central monuments to the missing or unidentified dead, such as the Menin Gate memorial and the Thiepval Memorial to the Missing of the Somme.
In 1915 John McCrae, a Canadian army doctor, wrote the poem In Flanders Fields as a salute to those who perished in the Great War. Published in Punch on 8 December 1915, it is still recited today, especially on Remembrance Day and Memorial Day.
The National World War I Museum and Memorial in Kansas City, Missouri, is dedicated to all Americans who served in World War I. The Liberty Memorial was dedicated on 1 November 1921, when the supreme Allied commanders spoke to a crowd of more than 100,000 people.
The UK Government has budgeted substantial resources for the commemoration of the war during the period 2014 to 2018. The lead body is the Imperial War Museum. On 3 August 2014, French President François Hollande and German President Joachim Gauck together marked the centenary of Germany 's declaration of war on France by laying the first stone of a memorial at Vieil Armand, known in German as Hartmannswillerkopf, for French and German soldiers killed in the war.
World War I had a lasting impact on social memory. It was seen by many in Britain as signalling the end of an era of stability stretching back to the Victorian period, and across Europe many regarded it as a watershed. Historian Samuel Hynes explained:
A generation of innocent young men, their heads full of high abstractions like Honour, Glory and England, went off to war to make the world safe for democracy. They were slaughtered in stupid battles planned by stupid generals. Those who survived were shocked, disillusioned and embittered by their war experiences, and saw that their real enemies were not the Germans, but the old men at home who had lied to them. They rejected the values of the society that had sent them to war, and in doing so separated their own generation from the past and from their cultural inheritance.
This has become the most common perception of World War I, perpetuated by the art, cinema, poems, and stories published subsequently. Films such as All Quiet on the Western Front, Paths of Glory and King & Country have perpetuated the idea, while war - time films including Camrades, Poppies of Flanders, and Shoulder Arms indicate that contemporary views of the war were, overall, far more positive. Likewise, the art of Paul Nash, John Nash, Christopher Nevinson, and Henry Tonks in Britain painted a negative view of the conflict in keeping with the growing perception, while popular war - time artists such as Muirhead Bone painted more serene and pleasant interpretations that were subsequently rejected as inaccurate. Several historians, such as John Terraine, Niall Ferguson and Gary Sheffield, have challenged these interpretations as partial and polemical:
These beliefs did not become widely shared because they offered the only accurate interpretation of wartime events. In every respect, the war was much more complicated than they suggest. In recent years, historians have argued persuasively against almost every popular cliché of World War I. It has been pointed out that, although the losses were devastating, their greatest impact was socially and geographically limited. The many emotions other than horror experienced by soldiers in and out of the front line, including comradeship, boredom, and even enjoyment, have been recognised. The war is not now seen as a ' fight about nothing ', but as a war of ideals, a struggle between aggressive militarism and more or less liberal democracy. It has been acknowledged that British generals were often capable men facing difficult challenges, and that it was under their command that the British army played a major part in the defeat of the Germans in 1918: a great forgotten victory.
Though these views have been discounted as "myths '', they remain common. They have also shifted with contemporary influences: in the 1950s, following the contrasting Second World War, the war was widely perceived as "aimless '', while accounts written during the class conflicts of the 1960s emphasised conflict within the ranks. Additions to the record that run contrary to these perceptions are usually rejected.
The social trauma caused by unprecedented rates of casualties manifested itself in different ways, which have been the subject of subsequent historical debate.
The optimism of la belle époque was destroyed, and those who had fought in the war were referred to as the Lost Generation. For years afterwards, people mourned the dead, the missing, and the many disabled. Many soldiers returned with severe trauma, suffering from shell shock (also called neurasthenia, a condition related to posttraumatic stress disorder). Many more returned home with few after - effects; however, their silence about the war contributed to the conflict 's growing mythological status. Though many participants did not share in the experiences of combat or spend any significant time at the front, or had positive memories of their service, the images of suffering and trauma became the widely shared perception. Such historians as Dan Todman, Paul Fussell, and Samuel Hynes have all published works since the 1990s arguing that these common perceptions of the war are factually incorrect.
The rise of Nazism and Fascism included a revival of the nationalist spirit and a rejection of many post-war changes. Similarly, the popularity of the stab - in - the - back legend (German: Dolchstoßlegende) was a testament to the psychological state of defeated Germany and was a rejection of responsibility for the conflict. This conspiracy theory of betrayal became common, and the German populace came to see themselves as victims. The widespread acceptance of the "stab - in - the - back '' theory delegitimized the Weimar government and destabilized the system, opening it to extremes of right and left.
Communist and fascist movements around Europe drew strength from this theory and enjoyed a new level of popularity. These feelings were most pronounced in areas directly or harshly affected by the war. Adolf Hitler was able to gain popularity by exploiting German discontent with the still controversial Treaty of Versailles. World War II was in part a continuation of the power struggle never fully resolved by World War I. Furthermore, it was common for Germans in the 1930s to justify acts of aggression as responses to perceived injustices imposed by the victors of World War I. American historian William Rubinstein wrote that:
The ' Age of Totalitarianism ' included nearly all of the infamous examples of genocide in modern history, headed by the Jewish Holocaust, but also comprising the mass murders and purges of the Communist world, other mass killings carried out by Nazi Germany and its allies, and also the Armenian Genocide of 1915. All these slaughters, it is argued here, had a common origin, the collapse of the elite structure and normal modes of government of much of central, eastern and southern Europe as a result of World War I, without which surely neither Communism nor Fascism would have existed except in the minds of unknown agitators and crackpots.
One of the most dramatic effects of the war was the expansion of governmental powers and responsibilities in Britain, France, the United States, and the Dominions of the British Empire. To harness all the power of their societies, governments created new ministries and powers. New taxes were levied and laws enacted, all designed to bolster the war effort; many have lasted to this day. Similarly, the war strained the abilities of some formerly large and bureaucratised governments, such as in Austria - Hungary and Germany.
Gross domestic product (GDP) increased for three Allies (Britain, Italy, and the United States), but decreased in France and Russia, in neutral Netherlands, and in the three main Central Powers. The shrinkage in GDP in Austria, Russia, France, and the Ottoman Empire ranged between 30 % and 40 %. In Austria, for example, most pigs were slaughtered, so at war 's end there was no meat.
In all nations, the government 's share of GDP increased, surpassing 50 % in both Germany and France and nearly reaching that level in Britain. To pay for purchases in the United States, Britain cashed in its extensive investments in American railroads and then began borrowing heavily from Wall Street. President Wilson was on the verge of cutting off the loans in late 1916, but after the United States entered the war in April 1917 he allowed a great increase in U.S. government lending to the Allies. After 1919, the U.S. demanded repayment of these loans. The repayments were, in part, funded by German reparations that, in turn, were supported by American loans to Germany. This circular system collapsed in 1931 and some loans were never repaid. Britain still owed the United States $4.4 billion of World War I debt in 1934; the last installment was finally paid in 2015.
Macro - and micro-economic consequences devolved from the war. Families were altered by the departure of many men. With the death or absence of the primary wage earner, women were forced into the workforce in unprecedented numbers. At the same time, industry needed to replace the lost labourers sent to war. This aided the struggle for voting rights for women.
World War I further compounded the gender imbalance, adding to the phenomenon of surplus women. The deaths of nearly one million men during the war in Britain increased the gender gap by about a million: from 670,000 to 1,700,000. The number of unmarried women seeking economic means grew dramatically. In addition, demobilisation and economic decline following the war caused high unemployment. The war increased female employment; however, the return of demobilised men displaced many from the workforce, as did the closure of many of the wartime factories.
In Britain, rationing was finally imposed in early 1918, limited to meat, sugar, and fats (butter and margarine), but not bread. The new system worked smoothly. From 1914 to 1918, trade union membership doubled, from a little over four million to a little over eight million.
Britain turned to her colonies for help in obtaining essential war materials whose supply from traditional sources had become difficult. Geologists such as Albert Ernest Kitson were called on to find new resources of precious minerals in the African colonies. Kitson discovered important new deposits of manganese, used in munitions production, in the Gold Coast.
Article 231 of the Treaty of Versailles (the so - called "war guilt '' clause) stated that Germany accepted responsibility for "all the loss and damage to which the Allied and Associated Governments and their nationals have been subjected as a consequence of the war imposed upon them by the aggression of Germany and her allies. '' It was worded as such to lay a legal basis for reparations, and a similar clause was inserted in the treaties with Austria and Hungary; however, neither of them interpreted it as an admission of war guilt. In 1921, the total reparation sum was placed at 132 billion gold marks. However, "Allied experts knew that Germany could not pay '' this sum. The total sum was divided into three categories, with the third "deliberately designed to be chimerical '' and its "primary function was to mislead public opinion... into believing the total sum was being maintained ''. Thus, 50 billion gold marks (12.5 billion dollars) "represented the actual Allied assessment of German capacity to pay '' and "therefore... represented the total German reparations '' figure that had to be paid.
This figure could be paid in cash or in kind (coal, timber, chemical dyes, etc.). In addition, some of the territory lost -- via the Treaty of Versailles -- was credited towards the reparation figure, as were other acts such as helping to restore the Library of Louvain. In 1929 the Great Depression began, causing political chaos throughout the world. In 1932 the payment of reparations was suspended by the international community, by which point Germany had paid only the equivalent of 20.598 billion gold marks in reparations. With the rise of Adolf Hitler, all bonds and loans that had been issued and taken out during the 1920s and early 1930s were cancelled. David Andelman notes "refusing to pay does n't make an agreement null and void. The bonds, the agreement, still exist. '' Thus, following the Second World War, at the London Conference in 1953, Germany agreed to resume payment on the money borrowed. On 3 October 2010, Germany made the final payment on these bonds.
The war contributed to the evolution of the wristwatch from women 's jewelry to a practical everyday item, replacing the pocketwatch, which required a free hand to operate. Military funding of advancements in radio contributed to the postwar popularity of the medium.
in the united kingdom who picks the prime minister | Prime Minister of the United Kingdom - Wikipedia
The Prime Minister of the United Kingdom is the head of Her Majesty 's Government in the United Kingdom. The Prime Minister (informally abbreviated to PM) and Cabinet (consisting of all the most senior ministers, most of whom are government department heads) are collectively accountable for their policies and actions to the Monarch, to Parliament, to their political party and ultimately to the electorate. The office is one of the Great Offices of State. The current holder of the office, Theresa May, leader of the Conservative Party, was appointed by the Queen on 13 July 2016.
The office is not established by any statute or constitutional document but exists only by long - established convention, which stipulates that the monarch must appoint as Prime Minister the person most likely to command the confidence of the House of Commons; this individual is typically the leader of the political party or coalition of parties that holds the largest number of seats in that chamber. The position of Prime Minister was not created; it evolved slowly and erratically over three hundred years due to numerous acts of Parliament, political developments, and accidents of history. The office is therefore best understood from a historical perspective. The origins of the position are found in constitutional changes that occurred during the Revolutionary Settlement (1688 -- 1720) and the resulting shift of political power from the Sovereign to Parliament. Although the Sovereign was not stripped of the ancient prerogative powers and legally remained the head of government, politically it gradually became necessary for him or her to govern through a Prime Minister who could command a majority in Parliament.
By the 1830s the Westminster system of government (or cabinet government) had emerged; the Prime Minister had become primus inter pares or the first among equals in the Cabinet and the head of government in the United Kingdom. The political position of Prime Minister was enhanced by the development of modern political parties, the introduction of mass communication (inexpensive newspapers, radio, television and the internet), and photography. By the start of the 20th century the modern premiership had emerged; the office had become the pre-eminent position in the constitutional hierarchy vis - à - vis the Sovereign, Parliament and Cabinet.
Prior to 1902, the Prime Minister sometimes came from the House of Lords, provided that his government could form a majority in the Commons. However, as the power of the aristocracy waned during the 19th century, the convention developed that the Prime Minister should always sit in the lower house. As leader of the House of Commons, the Prime Minister gained further authority from the Parliament Act of 1911, which marginalised the influence of the House of Lords in the law - making process.
The Prime Minister is ex officio also First Lord of the Treasury and Minister for the Civil Service. Certain privileges, such as residency of 10 Downing Street, are accorded to Prime Ministers by virtue of their position as First Lord of the Treasury.
The Prime Minister is the head of Her Majesty 's Government in the United Kingdom. As the "Head of Her Majesty 's Government '' the modern Prime Minister leads the Cabinet (the Executive). In addition, the Prime Minister leads a major political party and generally commands a majority in the House of Commons (the lower House of the legislature). As such, the incumbent wields both significant legislative and executive powers. Under the British system, there is a unity of powers rather than separation. In the House of Commons, the Prime Minister guides the law - making process with the goal of enacting the legislative agenda of their political party. In an executive capacity, the Prime Minister appoints (and may dismiss) all other Cabinet members and ministers, and co-ordinates the policies and activities of all government departments, and the staff of the Civil Service. The Prime Minister also acts as the public "face '' and "voice '' of Her Majesty 's Government, both at home and abroad. Solely upon the advice of the Prime Minister, the Sovereign exercises many statutory and prerogative powers, including high judicial, political, official and Church of England ecclesiastical appointments; the conferral of peerages and some knighthoods, decorations and other important honours.
The British system of government is based on an uncodified constitution, meaning that it is not set out in any single document. The British constitution consists of many documents and, most importantly for the evolution of the Office of the Prime Minister, it is based on customs known as constitutional conventions that became accepted practice. In 1928, former Prime Minister H.H. Asquith described this characteristic of the British constitution in his memoirs:
In this country we live... under an unwritten Constitution. It is true that we have on the Statute - book great instruments like Magna Carta, the Petition of Right, and the Bill of Rights which define and secure many of our rights and privileges; but the great bulk of our constitutional liberties and... our constitutional practices do not derive their validity and sanction from any Bill which has received the formal assent of the King, Lords and Commons. They rest on usage, custom, convention, often of slow growth in their early stages, not always uniform, but which in the course of time received universal observance and respect.
The relationships between the Prime Minister and the Sovereign, Parliament and Cabinet are defined largely by these unwritten conventions of the constitution. Many of the Prime Minister 's executive and legislative powers are actually royal prerogatives which are still formally vested in the Sovereign, who remains the head of state. Despite its growing dominance in the constitutional hierarchy, the Premiership was given little formal recognition until the 20th century; the legal fiction was maintained that the Sovereign still governed directly. The position was first mentioned in statute only in 1917, in the schedule of the Chequers Estate Act. Increasingly during the 20th century, the office and role of Prime Minister featured in statute law and official documents; however, the Prime Minister 's powers and relationships with other institutions still largely continue to derive from ancient royal prerogatives and historic and modern constitutional conventions. Prime Ministers continue to hold the position of First Lord of the Treasury and, since November 1968, that of Minister for the Civil Service, the latter giving them authority over the civil service.
Under this arrangement, Britain might appear to have two executives: the Prime Minister and the Sovereign. The concept of "the Crown '' resolves this paradox. The Crown symbolises the state 's authority to govern: to make laws and execute them, impose taxes and collect them, declare war and make peace. Before the "Glorious Revolution '' of 1688, the Sovereign exclusively wielded the powers of the Crown; afterwards, Parliament gradually forced monarchs to assume a neutral political position. Parliament has effectively dispersed the powers of the Crown, entrusting its authority to responsible ministers (the Prime Minister and Cabinet), accountable for their policies and actions to Parliament, in particular the elected House of Commons.
Although many of the Sovereign 's prerogative powers are still legally intact, constitutional conventions have removed the monarch from day - to - day governance, with ministers exercising the royal prerogatives, leaving the monarch in practice with three constitutional rights: to be kept informed, to advise, and to warn.
Because the Premiership was not intentionally created, there is no exact date when its evolution began. A meaningful starting point, however, is 1688 -- 9 when James II fled England and the Parliament of England confirmed William and Mary as joint constitutional monarchs, enacting legislation that limited their authority and that of their successors: the Bill of Rights (1689), the Mutiny Bill (1689), the Triennial Bill (1694), the Treason Act (1696) and the Act of Settlement (1701). Known collectively as the Revolutionary Settlement, these acts transformed the constitution, shifting the balance of power from the Sovereign to Parliament. They also provided the basis for the evolution of the office of Prime Minister, which did not exist at that time.
The Revolutionary Settlement gave the Commons control over finances and legislation and changed the relationship between the Executive and the Legislature. For want of money, Sovereigns had to summon Parliament annually and could no longer dissolve or prorogue it without its advice and consent. Parliament became a permanent feature of political life. The veto fell into disuse because Sovereigns feared that if they denied legislation, Parliament would deny them money. No Sovereign has denied royal assent since Queen Anne vetoed the Scottish Militia Bill in 1708.
Treasury officials and other department heads were drawn into Parliament serving as liaisons between it and the Sovereign. Ministers had to present the government 's policies, and negotiate with Members to gain the support of the majority; they had to explain the government 's financial needs, suggest ways of meeting them and give an account of how money had been spent. The Sovereign 's representatives attended Commons sessions so regularly that they were given reserved seats at the front, known as the Treasury Bench. This is the beginning of "unity of powers '': the Sovereign 's Ministers (the Executive) became leading members of Parliament (the Legislature). Today the Prime Minister (First Lord of the Treasury), the Chancellor of the Exchequer (responsible for The Budget) and other senior members of the Cabinet sit on the Treasury bench and present policies in much the same way Ministers did late in the 17th century.
After the Revolution, there was a constant threat that non-government members of Parliament would ruin the country 's finances by proposing ill - considered money bills. Vying for control to avoid chaos, the Crown 's Ministers gained an advantage in 1706, when the Commons informally declared, "That this House will receive no petition for any sum of money relating to public Service, but what is recommended from the Crown. '' On 11 June 1713, this non-binding rule became Standing Order 66: that "the Commons would not vote money for any purpose, except on a motion of a Minister of the Crown. '' Standing Order 66 remains in effect today (though renumbered as no. 48), essentially unchanged for three hundred years.
Empowering Ministers with sole financial initiative had an immediate and lasting impact. Apart from achieving its intended purpose -- to stabilise the budgetary process -- it gave the Crown a leadership role in the Commons; and, the Lord Treasurer assumed a leading position among Ministers.
The power of financial initiative was not, however, absolute. Only Ministers might initiate money bills, but Parliament now reviewed and consented to them. Standing Order 66 therefore represents the beginnings of Ministerial responsibility and accountability.
The term "Prime Minister '' appears at this time as an unofficial title for the leader of the government, usually the Head of the Treasury. Jonathan Swift, for example, wrote in 1713 about "those who are now commonly called Prime Minister among us '', referring to Sidney Godolphin, 1st Earl of Godolphin and Robert Harley, Queen Anne 's Lord Treasurers and chief ministers. Since 1721, every head of the Sovereign 's government -- with one exception in the 18th century (William Pitt the Elder) and one in the 19th (Lord Salisbury) -- has been First Lord of the Treasury.
Political parties first appeared during the Exclusion Crisis of 1678 -- 1681. The Whigs, who believed in limited monarchy, wanted to exclude James Stuart from succeeding to the throne because he was a Catholic. The Tories, who believed in the "Divine Right of Kings '', defended James ' hereditary claim.
Political parties were not well organised or disciplined in the 17th century. They were more like factions with "members '' drifting in and out, collaborating temporarily on issues when it was to their advantage, then disbanding when it was not. A major deterrent to the development of opposing parties was the idea that there could only be one "King 's Party '' and to oppose it would be disloyal or even treasonous. This idea lingered throughout the 18th century. Nevertheless it became possible at the end of the 17th century to identify Parliaments and Ministries as being either "Whig '' or "Tory '' in composition.
The modern Prime Minister is also the leader of the Cabinet. A convention of the constitution, the modern Cabinet is a group of ministers who formulate policies. As the political heads of government departments Cabinet Ministers ensure that policies are carried out by permanent civil servants. Although the modern Prime Minister selects Ministers, appointment still rests with the Sovereign. With the Prime Minister as its leader, the Cabinet forms the executive branch of government.
The term "Cabinet '' first appears after the Revolutionary Settlement to describe those ministers who conferred privately with the Sovereign. The growth of the Cabinet met with widespread complaint and opposition because its meetings were often held in secret and it excluded the ancient Privy Council (of which the Cabinet is formally a committee) from the Sovereign 's circle of advisers, reducing it to an honorary body. The early Cabinet, like that of today, included the Treasurer and other department heads who sat on the Treasury bench. However, it might also include individuals who were not members of Parliament such as household officers (e.g. the Master of the Horse) and members of the royal family. The exclusion of non-members of Parliament from the Cabinet was essential to the development of ministerial accountability and responsibility.
Both William and Anne appointed and dismissed Cabinet members, attended meetings, made decisions, and followed up on actions. Relieving the Sovereign of these responsibilities and gaining control over the Cabinet 's composition was an essential part of evolution of the Premiership. This process began after the Hanoverian Succession. Although George I (1714 -- 1727) attended Cabinet meetings at first, after 1717 he withdrew because he did not speak fluent English and was bored with the discussions. George II (1727 -- 1760) occasionally presided at Cabinet meetings but his grandson, George III (1760 -- 1820), is known to have attended only two during his 60 - year reign. Thus, the convention that Sovereigns do not attend Cabinet meetings was established primarily through royal indifference to the everyday tasks of governance. The Prime Minister became responsible for calling meetings, presiding, taking notes, and reporting to the Sovereign. These simple executive tasks naturally gave the Prime Minister ascendancy over his Cabinet colleagues.
Although the first three Hanoverians rarely attended Cabinet meetings they insisted on their prerogatives to appoint and dismiss ministers and to direct policy even if from outside the Cabinet. It was not until late in the 18th century that Prime Ministers gained control over Cabinet composition (see section Emergence of Cabinet Government below).
British governments (or Ministries) are generally formed by one party. The Prime Minister and Cabinet are usually all members of the same political party, almost always the one that has a majority of seats in the House of Commons. Coalition governments (a ministry that consists of representatives from two or more parties) and minority governments (a one - party ministry formed by a party that does not command a majority in the Commons) are relatively rare. "One party government '', as this system is sometimes called, has been the general rule for almost three hundred years.
Early in his reign, William III (1689 -- 1702) preferred "Mixed Ministries '' (or coalitions) consisting of both Tories and Whigs. William thought this composition would dilute the power of any one party and also give him the benefit of differing points of view. However, this approach did not work well because the members could not agree on a leader or on policies, and often worked at odds with each other.
In 1697, William formed a homogeneous Whig ministry. Known as the Junto, this government is often cited as the first true Cabinet because its members were all Whigs, reflecting the majority composition of the Commons.
Anne (1702 -- 1714) followed this pattern but preferred Tory Cabinets. This approach worked well as long as Parliament was also predominantly Tory. However, in 1708, when the Whigs obtained a majority, Anne did not call on them to form a government, refusing to accept the idea that politicians could force themselves on her merely because their party had a majority. She never parted with an entire Ministry or accepted an entirely new one regardless of the results of an election. Anne preferred to retain a minority government rather than be dictated to by Parliament. Consequently, her chief ministers Sidney Godolphin, 1st Earl of Godolphin and Robert Harley, who were called "Prime Minister '' by some, had difficulty executing policy in the face of a hostile Parliament.
William 's and Anne 's experiments with the political composition of the Cabinet illustrated the strengths of one party government and the weaknesses of coalition and minority governments. Nevertheless, it was not until the 1830s that the constitutional convention was established that the Sovereign must select the Prime Minister (and Cabinet) from the party whose views reflect those of the majority in Parliament. Since then, most ministries have reflected this one party rule.
Despite the "one party '' convention, Prime Ministers may still be called upon to lead either minority or coalition governments. A minority government may be formed as a result of a "hung parliament '' in which no single party commands a majority in the House of Commons after a general election or the death, resignation or defection of existing members. By convention the serving Prime Minister is given the first opportunity to reach agreements that will allow them to survive a vote of confidence in the House and continue to govern. The last minority government was led by Labour Prime Minister Harold Wilson for eight months after the February 1974 general election produced a hung parliament. In the October 1974 general election, the Labour Party gained 18 seats, giving Wilson a majority of three.
A hung parliament may also lead to the formation of a coalition government in which two or more parties negotiate a joint programme to command a majority in the Commons. Coalitions have also been formed during times of national crisis such as war. Under such circumstances, the parties agree to temporarily set aside their political differences and to unite to face the national crisis. Coalitions are rare: since 1721, there have been fewer than a dozen.
When the general election of 2010 produced a hung parliament, the Conservative and Liberal Democrat parties agreed to form the Cameron -- Clegg coalition, the first coalition in seventy years. The previous coalition in the UK was led by Conservative Prime Minister Winston Churchill during most of the Second World War, from May 1940 to May 1945. Clement Attlee, the leader of the Labour Party, served as deputy Prime Minister. After the Tories won an outright majority in the general election of 2015, the nation returned to one party government.
The Premiership is still largely a convention of the constitution; its legal authority is derived primarily from the fact that the Prime Minister is also First Lord of the Treasury. The connection of these two offices -- one a convention, the other a legal office -- began with the Hanoverian Succession in 1714.
When George I succeeded to the British throne in 1714, his German ministers advised him to leave the office of Lord High Treasurer vacant because those who had held it in recent years had grown overly powerful, in effect, replacing the Sovereign as head of the government. They also feared that a Lord High Treasurer would undermine their own influence with the new King. They therefore suggested that he place the office in "commission '', meaning that a committee of five ministers would perform its functions together. Theoretically, this dilution of authority would prevent any one of them from presuming to be the head of the government. The King agreed and created the Treasury Commission consisting of the First Lord of the Treasury, the Second Lord, and three Junior Lords.
No one has been appointed Lord High Treasurer since 1714; it has remained in commission for three hundred years. The Treasury Commission ceased to meet late in the 18th century but has survived, albeit with very different functions: the First Lord of the Treasury is now the Prime Minister, the Second Lord is the Chancellor of the Exchequer (and actually in charge of the Treasury), and the Junior Lords are government Whips maintaining party discipline in the House of Commons; they no longer have any duties related to the Treasury, though when subordinate legislation requires the consent of the Treasury it is still two of the Junior Lords who sign on its behalf.
Since the office evolved rather than being instantly created, it may not be totally clear - cut who was the first Prime Minister. However, this appellation is traditionally given to Sir Robert Walpole, who became First Lord of the Treasury in 1721.
In 1720, the South Sea Company, created to trade in cotton, agricultural goods and slaves, collapsed, causing the financial ruin of thousands of investors and heavy losses for many others, including members of the royal family. King George I called on Robert Walpole, well known for his political and financial acumen, to handle the emergency. With considerable skill and some luck, Walpole acted quickly to restore public credit and confidence, and led the country out of the crisis. A year later, the King appointed him First Lord of the Treasury, Chancellor of the Exchequer, and Leader of the House of Commons -- making him the most powerful minister in the government. Ruthless, crude, and hard - working, he had a "sagacious business sense '' and was a superb manager of men. At the head of affairs for the next two decades, Walpole stabilised the nation 's finances, kept it at peace, made it prosperous, and secured the Hanoverian Succession.
Walpole demonstrated for the first time how a chief minister -- a Prime Minister -- could be the actual Head of the Government under the new constitutional framework. First, recognising that the Sovereign could no longer govern directly but was still the nominal head of the government, he insisted that he was nothing more than the "King 's Servant ''. Second, recognising that power had shifted to the Commons, he conducted the nation 's business there and made it dominant over the Lords in all matters. Third, recognising that the Cabinet had become the executive and must be united, he dominated the other members and demanded their complete support for his policies. Fourth, recognising that political parties were the source of ministerial strength, he led the Whig party and maintained discipline. In the Commons, he insisted on the support of all Whig members, especially those who held office. Finally, he set an example for future Prime Ministers by resigning his offices in 1742 after a vote of confidence, which he won by just 3 votes. The slimness of this majority undermined his power, even though he still retained the confidence of the Sovereign.
For all his contributions, Walpole was not a Prime Minister in the modern sense. The King -- not Parliament -- chose him; and the King -- not Walpole -- chose the Cabinet. Walpole set an example, not a precedent, and few followed his example. For over 40 years after Walpole 's fall in 1742, there was widespread ambivalence about the position. In some cases, the Prime Minister was a figurehead with power being wielded by other individuals; in others there was a reversion to the "chief minister '' model of earlier times in which the Sovereign actually governed. At other times, there appeared to be two Prime Ministers. During Britain 's participation in the Seven Years ' War, for example, the powers of government were divided equally between the Duke of Newcastle and William Pitt, 1st Earl of Chatham, leading to them both alternatively being described as Prime Minister. Furthermore, many thought that the title "Prime Minister '' usurped the Sovereign 's constitutional position as "head of the government '' and that it was an affront to other ministers because they were all appointed by and equally responsible to the Sovereign.
For these reasons there was a reluctance to use the title. Although Walpole is now called the "first '' Prime Minister, the title was not commonly used during his tenure. Walpole himself denied it. In 1741, during the attack that led to Walpole 's downfall, Samuel Sandys declared that "According to our Constitution we can have no sole and prime minister. '' In his defence, Walpole said "I unequivocally deny that I am sole or Prime Minister and that to my influence and direction all the affairs of government must be attributed. '' George Grenville, Prime Minister in the 1760s, said it was "an odious title '' and never used it. Lord North, the reluctant head of the King 's Government during the American War of Independence, "would never suffer himself to be called Prime Minister, because it was an office unknown to the Constitution. ''
Denials of the Premiership 's legal existence continued throughout the 19th century. In 1806, for example, one member of the Commons said, "the Constitution abhors the idea of a prime minister ''. In 1829, Lord Lansdowne said, "nothing could be more mischievous or unconstitutional than to recognise by act of parliament the existence of such an office. ''
By the turn of the 20th century the Premiership had become, by convention, the most important position in the constitutional hierarchy. Yet there were no legal documents describing its powers or acknowledging its existence. The first official recognition given to the office had only been in the Treaty of Berlin in 1878, when Disraeli signed as "First Lord of the Treasury and Prime Minister of her Britannic Majesty ''. Incumbents had no statutory authority in their own right. As late as 1904, Arthur Balfour explained the status of his office in a speech at Haddington: "The Prime Minister has no salary as Prime Minister. He has no statutory duties as Prime Minister, his name occurs in no Acts of Parliament, and though holding the most important place in the constitutional hierarchy, he has no place which is recognised by the laws of his country. This is a strange paradox. ''
In 1905 the position was given some official recognition when the "Prime Minister '' was named in the order of precedence, outranked, among non-royals, only by the Archbishops of Canterbury and York, the Moderator of the General Assembly of the Church of Scotland and the Lord Chancellor.
The first Act of Parliament to mention the Premiership -- albeit in a schedule -- was the Chequers Estate Act on 20 December 1917. This law conferred the Chequers Estate, owned by Sir Arthur and Lady Lee, as a gift to the Crown for use as a country home for future Prime Ministers.
Unequivocal legal recognition was given in the Ministers of the Crown Act 1937, which made provision for payment of a salary to the person who is both "the First Lord of the Treasury and Prime Minister ''. Explicitly recognising two hundred years ' of ambivalence, the Act states that it intended "To give statutory recognition to the existence of the position of Prime Minister, and to the historic link between the Premiership and the office of First Lord of the Treasury, by providing in respect to that position and office a salary of... '' The Act made a distinction between the "position '' (Prime Minister) and the "office '' (First Lord of the Treasury), emphasising the unique political character of the former. Nevertheless, the brass plate on the door of the Prime Minister 's home, 10 Downing Street, still bears the title of "First Lord of the Treasury '', as it has since the 18th century as it is officially the home of the First Lord and not the Prime Minister.
Despite the reluctance to legally recognise the Premiership, ambivalence toward it waned in the 1780s. During the first 20 years of his reign, George III (1760 -- 1820) tried to be his own "prime minister '' by controlling policy from outside the Cabinet, appointing and dismissing ministers, meeting privately with individual ministers, and giving them instructions. These practices caused confusion and dissension in Cabinet meetings; King George 's experiment in personal rule was generally a failure. After the failure of Lord North 's ministry (1770 -- 1782) in March 1782 due to Britain 's defeat in the American Revolutionary War and the ensuing vote of no confidence by Parliament, the Marquess of Rockingham reasserted the Prime Minister 's control over the Cabinet. Rockingham assumed the Premiership "on the distinct understanding that measures were to be changed as well as men; and that the measures for which the new ministry required the royal consent were the measures which they, while in opposition, had advocated. '' He and his Cabinet were united in their policies and would stand or fall together; they also refused to accept anyone in the Cabinet who did not agree. King George threatened to abdicate but in the end reluctantly agreed out of necessity: he had to have a government.
From this time, there was a growing acceptance of the position of Prime Minister and the title was more commonly used, if only unofficially. Associated initially with the Whigs, the Tories started to accept it. Lord North, for example, who had said the office was "unknown to the constitution '', reversed himself in 1783 when he said, "In this country some one man or some body of men like a Cabinet should govern the whole and direct every measure. '' In 1803, William Pitt the Younger, also a Tory, suggested to a friend that "this person generally called the first minister '' was an absolute necessity for a government to function, and expressed his belief that this person should be the minister in charge of the finances.
The Tories ' wholesale conversion started when Pitt was confirmed as Prime Minister in the election of 1784. For the next 17 years until 1801 (and again from 1804 to 1806), Pitt, the Tory, was Prime Minister in the same sense that Walpole, the Whig, had been earlier.
Their conversion was reinforced after 1810. In that year, George III, who had suffered periodically from mental instability (due to a blood disorder now known as porphyria), became permanently insane and spent the remaining 10 years of his life unable to discharge his duties. The Prince Regent was prevented from using the full powers of Kingship. The Regent became George IV in 1820, but during his 10 - year reign was indolent and frivolous. Consequently, for 20 years the throne was virtually vacant and Tory Cabinets led by Tory Prime Ministers filled the void, governing virtually on their own.
The Tories were in power for almost 50 years, except for a Whig ministry from 1806 to 1807. Lord Liverpool was Prime Minister for 15 years; he and Pitt held the position for 34 years. Under their long, consistent leadership, Cabinet government became a convention of the constitution. Although subtle issues remained to be settled, the Cabinet system of government is essentially the same today as it was in 1830.
Under this form of government, called the Westminster system, the Sovereign is head of state and titular head of Her Majesty 's Government. She selects as her Prime Minister the person who is able to command a working majority in the House of Commons, and invites him or her to form a government. As the actual Head of Government, the Prime Minister selects his Cabinet, choosing its members from among those in Parliament who agree or generally agree with his intended policies. He then recommends them to the Sovereign who confirms his selections by formally appointing them to their offices. Led by the Prime Minister, the Cabinet is collectively responsible for whatever the government does. The Sovereign does not confer with members privately about policy, nor attend Cabinet meetings. With respect to actual governance, the monarch has only three constitutional rights: to be kept informed, to advise, and to warn. In practice this means that the Sovereign reviews state papers and meets regularly with the Prime Minister, usually weekly, when she may advise and warn him or her regarding the proposed decisions and actions of Her Government.
The modern British system includes not only a government formed by the majority party (or coalition of parties) in the House of Commons but also an organised and open opposition formed by those who are not members of the governing party. Called Her Majesty 's Most Loyal Opposition, they occupy the benches to the Speaker 's left. Seated in the front, directly across from the ministers on the Treasury Bench, the leaders of the opposition form a "Shadow Government '', complete with a salaried "Shadow Prime Minister '', the Leader of the Opposition, ready to assume office if the government falls or loses the next election.
Opposing the King 's government was considered disloyal, even treasonous, at the end of the 17th century. During the 18th century this idea waned and finally disappeared as the two party system developed. The expression "His Majesty 's Opposition '' was coined by John Hobhouse, 1st Baron Broughton. In 1826, Broughton, a Whig, announced in the Commons that he opposed the report of a Bill. As a joke, he said, "It was said to be very hard on His Majesty 's ministers to raise objections to this proposition. For my part, I think it is much more hard on His Majesty 's Opposition to compel them to take this course. '' The phrase caught on and has been used ever since. Sometimes rendered as the "Loyal Opposition '', it acknowledges the legitimate existence of the two party system, and describes an important constitutional concept: opposing the government is not treason; reasonable men can honestly oppose its policies and still be loyal to the Sovereign and the nation.
Informally recognized for over a century as a convention of the constitution, the position of Leader of the Opposition was given statutory recognition in 1937 by the Ministers of the Crown Act.
British Prime Ministers have never been elected directly by the public. A Prime Minister need not be a party leader; David Lloyd George was not a party leader while serving as Prime Minister during World War I, and neither was Ramsay MacDonald from 1931 to 1935. Prime Ministers have taken office because they were members of either the Commons or Lords, and either inherited a majority in the Commons or won more seats than the opposition in a general election.
Since 1722, most Prime Ministers have been members of the Commons; since 1902, all have had a seat there. Like other members, they are elected initially to represent only a constituency. Former Prime Minister Tony Blair, for example, represented Sedgefield in County Durham from 1983 to 2007. He became Prime Minister because in 1994 he was elected Labour Party leader and then led the party to victory in the 1997 general election, winning 418 seats compared to 165 for the Conservatives and gaining a majority in the House of Commons.
Neither the Sovereign nor the House of Lords had any meaningful influence over who was elected to the Commons in 1997 or in deciding whether or not Blair would become Prime Minister. Their detachment from the electoral process and the selection of the Prime Minister has been a convention of the constitution for almost 200 years.
Prior to the 19th century, however, they had significant influence, using to their advantage the fact that most citizens were disenfranchised and seats in the Commons were allocated disproportionately. Through patronage, corruption and bribery, the Crown and Lords "owned '' about 30 % of the seats (called "pocket '' or "rotten boroughs '') giving them a significant influence in the Commons and in the selection of the Prime Minister.
In 1830, Charles Grey, the 2nd Earl Grey and a life-long Whig, became Prime Minister and was determined to reform the electoral system. For two years, he and his Cabinet fought to pass what has come to be known as the Great Reform Bill of 1832. The greatness of the Great Reform Bill lay less in substance than in symbolism. As John Bright, a liberal statesman of the next generation, said, "It was not a good Bill, but it was a great Bill when it passed." Substantively, it increased the franchise by 65% to 717,000, with the middle class receiving most of the new votes. The representation of 56 rotten boroughs was eliminated completely, together with half the representation of 30 others; the freed-up seats were distributed to boroughs created for previously disenfranchised areas. However, many rotten boroughs remained, and the Bill still excluded millions of working-class men and all women.
Symbolically, however, the Reform Act exceeded expectations. It is now ranked with Magna Carta and the Bill of Rights as one of the most important documents of the British constitutional tradition.
First, the Act removed the Sovereign from the election process and the choice of Prime Minister. Slowly evolving for 100 years, this convention was confirmed two years after the passage of the Act. In 1834, King William IV dismissed Melbourne as Premier, but was forced to recall him when Robert Peel, the King 's choice, could not form a working majority. Since then, no Sovereign has tried to impose a Prime Minister on Parliament.
Second, the Bill reduced the Lords ' power by eliminating many of their pocket boroughs and creating new boroughs in which they had no influence. Weakened, they were unable to prevent the passage of more comprehensive electoral reforms in 1867, 1884, 1918 and 1928 when universal equal suffrage was established.
Ultimately, this erosion of power led to the Parliament Act of 1911, which marginalised the Lords ' role in the legislative process and gave further weight to the convention that had developed over the previous century that a Prime Minister can not sit in the House of Lords. The last to do so was Robert Gascoyne - Cecil, 3rd Marquess of Salisbury, from 1895 to 1902. Throughout the 19th century, governments led from the Lords had often suffered difficulties governing alongside ministers who sat in the Commons.
Grey set an example and a precedent for his successors. He was primus inter pares (first among equals), as Bagehot said in 1867 of the Prime Minister 's status. Using his Whig victory as a mandate for reform, Grey was unrelenting in the pursuit of this goal, using every Parliamentary device to achieve it. Although respectful toward the King, he made it clear that his constitutional duty was to acquiesce to the will of the people and Parliament.
The Loyal Opposition acquiesced too. Some disgruntled Tories claimed they would repeal the Bill once they regained a majority. But in 1834, Robert Peel, the new Conservative leader, put an end to this threat when he stated in his Tamworth Manifesto that the Bill was "a final and irrevocable settlement of a great constitutional question which no friend to the peace and welfare of this country would attempt to disturb ''.
The Premiership was a reclusive office prior to 1832. The incumbent worked with his Cabinet and other government officials; he occasionally met with the Sovereign, and attended Parliament when it was in session during the spring and summer. He never went out on the stump to campaign, even during elections; he rarely spoke directly to ordinary voters about policies and issues.
After the passage of the Great Reform Bill, the nature of the position changed: Prime Ministers had to go out among the people. The Bill increased the electorate to 717,000. Subsequent legislation (and population growth) raised it to 2 million in 1867, 5.5 million in 1884 and 21.4 million in 1918. As the franchise increased, power shifted to the people and Prime Ministers assumed more responsibilities with respect to party leadership. It naturally fell on them to motivate and organise their followers, explain party policies, and deliver its "message ''. Successful leaders had to have a new set of skills: to give a good speech, present a favourable image, and interact with a crowd. They became the "voice '', the "face '' and the "image '' of the party and ministry.
Robert Peel, often called the "model Prime Minister '', was the first to recognise this new role. After the successful Conservative campaign of 1841, J.W. Croker said in a letter to Peel, "The elections are wonderful, and the curiosity is that all turns on the name of Sir Robert Peel. It 's the first time that I remember in our history that the people have chosen the first Minister for the Sovereign. Mr. Pitt 's case in ' 84 is the nearest analogy; but then the people only confirmed the Sovereign 's choice; here every Conservative candidate professed himself in plain words to be Sir Robert Peel 's man, and on that ground was elected. ''
Benjamin Disraeli and William Ewart Gladstone developed this new role further by projecting "images '' of themselves to the public. Known by their nicknames "Dizzy '' and the "Grand Old Man '', their colourful, sometimes bitter, personal and political rivalry over the issues of their time -- Imperialism vs. Anti-Imperialism, expansion of the franchise, labour reform, and Irish Home Rule -- spanned almost twenty years until Disraeli 's death in 1881. Documented by the penny press, photographs and political cartoons, their rivalry linked specific personalities with the Premiership in the public mind and further enhanced its status.
Each created a different public image of himself and his party. Disraeli, who expanded the Empire to protect British interests abroad, cultivated the image of himself (and the Conservative Party) as "Imperialist '', making grand gestures such as conferring the title "Empress of India '' on Queen Victoria in 1876. Gladstone, who saw little value in the Empire, proposed an anti-Imperialist policy (later called "Little England ''), and cultivated the image of himself (and the Liberal Party) as "man of the people '' by circulating pictures of himself cutting down great oak trees with an axe as a hobby.
Gladstone went beyond image by appealing directly to the people. In his Midlothian campaign -- so called because he stood as a candidate for that county -- Gladstone spoke in fields, halls and railway stations to hundreds, sometimes thousands, of students, farmers, labourers and middle class workers. Although not the first leader to speak directly to voters -- both he and Disraeli had spoken directly to party loyalists before on special occasions -- he was the first to canvass an entire constituency, delivering his message to anyone who would listen, encouraging his supporters and trying to convert his opponents. Publicised nationwide, Gladstone 's message became that of the party. Noting its significance, Lord Shaftesbury said, "It is a new thing and a very serious thing to see the Prime Minister on the stump. ''
Campaigning directly to the people became commonplace. Several 20th century Prime Ministers, such as David Lloyd George and Winston Churchill, were famous for their oratorical skills. After the introduction of radio, motion pictures, television, and the internet, many used these technologies to project their public image and address the nation. Stanley Baldwin, a master of the radio broadcast in the 1920s and 1930s, reached a national audience in his talks filled with homely advice and simple expressions of national pride. Churchill also used the radio to great effect, inspiring, reassuring and informing the people with his speeches during the Second World War. Two recent Prime Ministers, Margaret Thatcher and Tony Blair (who both spent a decade or more as Prime Minister), achieved celebrity status like rock stars, but have been criticised for their more 'presidential' style of leadership. According to Anthony King, "The props in Blair's theatre of celebrity included... his guitar, his casual clothes... footballs bounced skilfully off the top of his head... carefully choreographed speeches and performances at Labour Party conferences."
In addition to being the leader of a great political party and the head of Her Majesty 's Government, the modern Prime Minister directs the law - making process, enacting into law his or her party 's programme. For example, Tony Blair, whose Labour party was elected in 1997 partly on a promise to enact a British Bill of Rights and to create devolved governments for Scotland and Wales, subsequently stewarded through Parliament the Human Rights Act (1998), the Scotland Act (1998) and the Government of Wales Act (1998).
From its appearance in the fourteenth century, Parliament has been a bicameral legislature consisting of the Commons and the Lords. Members of the Commons are elected; those in the Lords are not. Most Lords are "Lords Temporal", with titles such as Duke, Marquess, Earl and Viscount. The remainder are Lords Spiritual (prelates of the Anglican Church).
For most of the history of the Upper House, Lords Temporal were landowners who held their estates, titles and seats as an hereditary right passed down from one generation to the next -- in some cases for centuries. In 1910, for example, there were nineteen whose title was created before 1500.
Until 1911, Prime Ministers had to guide legislation through the Commons and the Lords and obtain majority approval in both houses for it to become law. This was not always easy, because political differences often separated the chambers. Representing the landed aristocracy, the Lords Temporal were generally Tories (later Conservatives) who wanted to maintain the status quo and resisted progressive measures such as extending the franchise. The party affiliation of members of the Commons was less predictable. During the 18th century its makeup varied because the Lords had considerable control over elections: sometimes Whigs dominated it, sometimes Tories. After the passage of the Great Reform Bill in 1832, the Commons gradually became more progressive, a tendency that increased with the passage of each subsequent expansion of the franchise.
In 1906, the Liberal party, led by Sir Henry Campbell - Bannerman, won an overwhelming victory on a platform that promised social reforms for the working class. With 379 seats compared to the Conservatives ' 132, the Liberals could confidently expect to pass their legislative programme through the Commons. At the same time, however, the Conservative Party had a huge majority in the Lords; it could easily veto any legislation passed by the Commons that was against their interests.
For five years, the Commons and the Lords fought over one bill after another. The Liberals pushed through parts of their programme, but the Conservatives vetoed or modified others. When the Lords vetoed the "People 's Budget '' in 1909, the controversy moved almost inevitably toward a constitutional crisis.
In 1910, Prime Minister H.H. Asquith introduced a bill "for regulating the relations between the Houses of Parliament '' which would eliminate the Lords ' veto power over legislation. Passed by the Commons, the Lords rejected it. In a general election fought on this issue, the Liberals were weakened but still had a comfortable majority. At Asquith 's request, King George V then threatened to create a sufficient number of new Liberal Peers to ensure the bill 's passage. Rather than accept a permanent Liberal majority, the Conservative Lords yielded, and the bill became law.
The Parliament Act 1911 established the supremacy of the Commons. It provided that the Lords could not delay for more than one month any bill certified by the Speaker of the Commons as a money bill. Furthermore, the Act provided that any bill rejected by the Lords would nevertheless become law if passed by the Commons in three successive sessions provided that two years had elapsed since its original passage. The Lords could still delay or suspend the enactment of legislation but could no longer veto it. Subsequently, the Lords' "suspending" power was reduced to one year by the Parliament Act 1949.
Indirectly, the Act enhanced the already dominant position of Prime Minister in the constitutional hierarchy. Although the Lords are still involved in the legislative process and the Prime Minister must still guide legislation through both Houses, the Lords no longer have the power to veto or even delay enactment of legislation passed by the Commons. Provided that he controls the Cabinet, maintains party discipline, and commands a majority in the Commons, the Prime Minister is assured of putting through his legislative agenda.
The classic view of Cabinet government was laid out by Walter Bagehot in The English Constitution (1867), in which he described the Prime Minister as primus inter pares ("first among equals"). This view was questioned by Richard Crossman in The Myths of Cabinet Government (1972) and by Tony Benn. Both were members of the Labour governments of the 1960s and thought that the position of the Prime Minister had acquired so much power that "Prime Ministerial government" was a more apt description. Crossman attributed the increase in the Prime Minister's power to the power of centralised political parties, the development of a unified civil service, and the growth of the Prime Minister's private office and Cabinet secretariat. Graham Allen (a Government Whip during Tony Blair's first government) made the case in The Last Prime Minister: Being Honest About the UK Presidency (2003) that the office of Prime Minister has in fact acquired presidential powers, as did the political scientist Michael Foley in The British Presidency (2000).
In Tony Blair 's government, many sources such as former ministers have suggested that decision - making was controlled by him and Gordon Brown, and the Cabinet was no longer used for decision - making. Former ministers such as Clare Short and Chris Smith have criticised the lack of decision - making power in Cabinet. When she resigned, Short denounced "the centralisation of power into the hands of the Prime Minister and an increasingly small number of advisers ''. The Butler Review of 2004 condemned Blair 's style of "sofa government ''.
However, the power that a Prime Minister has over his or her Cabinet colleagues is directly proportional to the support he or she commands within the party, which is often related to whether the party considers the leader an electoral asset or a liability. Also, when a party is divided into factions, a Prime Minister may be forced to include other powerful party members in the Cabinet for the sake of party cohesion. The Prime Minister's personal power is also curtailed if the party is in a power-sharing arrangement or a formal coalition with another party (as happened in the coalition government of 2010 to 2015).
When commissioned by the Sovereign, a potential Prime Minister 's first requisite is to "form a Government '' -- to create a cabinet of ministers that has the support of the House of Commons, of which they are expected to be a member. The Prime Minister then formally kisses the hands of the Sovereign, whose royal prerogative powers are thereafter exercised solely on the advice of the Prime Minister and Her Majesty 's Government ("HMG ''). The Prime Minister has weekly audiences with the Sovereign, whose rights are constitutionally limited: "to warn, to encourage, and to be consulted ''; the extent of the Sovereign 's ability to influence the nature of the Prime Ministerial advice is unknown, but presumably varies depending upon the personal relationship between the Sovereign and the Prime Minister of the day.
The Prime Minister appoints all other Cabinet members (who then become active Privy Counsellors) and ministers, while consulting senior ministers on the choice of their junior ministers; there is no Parliamentary or other control or process over these powers. At any time, the PM may obtain the appointment, dismissal or nominal resignation of any other minister; the PM may resign, either purely personally or with the whole government. The Prime Minister generally co-ordinates the policies and activities of the Cabinet and Government departments, acting as the main public "face" of Her Majesty's Government.
Although the Commander-in-Chief of the British Armed Forces is legally the Sovereign, under constitutional practice the Prime Minister can declare war and, through the Secretary of State for Defence (whom the PM may appoint and dismiss, or even appoint himself or herself to the position) as chair of the Defence Council, holds the power over the deployment and disposition of British forces. The Prime Minister can authorise, but not directly order, the use of Britain's nuclear weapons; the Prime Minister is hence a Commander-in-Chief in all but name.
The Prime Minister makes all the most senior Crown appointments, and most others are made by Ministers over whom the PM has the power of appointment and dismissal. Privy Counsellors, Ambassadors and High Commissioners, senior civil servants, senior military officers, members of important committees and commissions, and other officials are selected, and in most cases may be removed, by the Prime Minister. The PM also formally advises the Sovereign on the appointment of Archbishops and Bishops of the Church of England, but the PM 's discretion is limited by the existence of the Crown Nominations Commission. The appointment of senior judges, while constitutionally still on the advice of the Prime Minister, is now made on the basis of recommendations from independent bodies.
Peerages, knighthoods, and most other honours are bestowed by the Sovereign only on the advice of the Prime Minister. The only important British honours over which the Prime Minister does not have control are the Order of the Garter, the Order of the Thistle, the Order of Merit, the Royal Victorian Order, and the Venerable Order of Saint John, which are all within the "personal gift '' of the Sovereign.
The Prime Minister appoints officials known as the "Government Whips", who negotiate for the support of MPs and discipline dissenters. Party discipline is strong since electors generally vote for individuals on the basis of their party affiliation. Members of Parliament may be expelled from their party for failing to support the Government on important issues, and although this will not mean they must resign as MPs, it will usually make re-election difficult. Members of Parliament who hold ministerial office or political privileges can expect removal for failing to support the Prime Minister. Restraints imposed by the Commons grow weaker when the Government's party enjoys a large majority in that House, or among the electorate. In most circumstances, however, the Prime Minister can secure the Commons' support for almost any bill by internal party negotiations, with little regard to Opposition MPs.
However, even a government with a healthy majority can on occasion find itself unable to pass legislation. For example, on 9 November 2005, Tony Blair 's Government was defeated over plans which would have allowed police to detain terror suspects for up to 90 days without charge, and on 31 January 2006, was defeated over certain aspects of proposals to outlaw religious hatred. On other occasions, the Government alters its proposals to avoid defeat in the Commons, as Tony Blair 's Government did in February 2006 over education reforms.
Formerly, a Prime Minister whose government lost a Commons vote would be regarded as fatally weakened, and the whole government would resign, usually precipitating a general election. In modern practice, when the Government party has an absolute majority in the House, only loss of supply and the express vote "that this House has no confidence in Her Majesty 's Government '' are treated as having this effect; dissenters on a minor issue within the majority party are unlikely to force an election with the probable loss of their seats and salaries.
Likewise, a Prime Minister is no longer just "first amongst equals '' in HM Government; although theoretically the Cabinet might still outvote the PM, in practice the PM progressively entrenches his or her position by retaining only personal supporters in the Cabinet. In occasional reshuffles, the Prime Minister can sideline and simply drop from Cabinet the Members who have fallen out of favour: they remain Privy Counsellors, but the Prime Minister decides which of them are summoned to meetings. The Prime Minister is responsible for producing and enforcing the Ministerial Code.
By tradition, before a new Prime Minister can occupy 10 Downing Street, they are required to announce to the country and the world that they have "kissed hands '' with the reigning monarch, and have thus become Prime Minister. This is usually done by saying words to the effect of:
Her Majesty the Queen (His Majesty the King) has asked me to form a government and I have accepted.
Throughout the United Kingdom, the Prime Minister outranks all other dignitaries except members of the Royal Family, the Lord Chancellor, and senior ecclesiastical figures.
In 2010 the Prime Minister received £ 142,500 including a salary of £ 65,737 as a member of parliament. Until 2006, the Lord Chancellor was the highest paid member of the government, ahead of the Prime Minister. This reflected the Lord Chancellor 's position at the head of the judicial pay scale. The Constitutional Reform Act 2005 eliminated the Lord Chancellor 's judicial functions and also reduced the office 's salary to below that of the Prime Minister.
The Prime Minister is customarily a member of the Privy Council and thus entitled to the appellation "The Right Honourable ''. Membership of the Council is retained for life. It is a constitutional convention that only a Privy Counsellor can be appointed Prime Minister. Most potential candidates have already attained this status. The only case when a non-Privy Counsellor was the natural appointment was Ramsay MacDonald in 1924. The issue was resolved by appointing him to the Council immediately prior to his appointment as Prime Minister.
According to the now defunct Department for Constitutional Affairs, the Prime Minister is made a Privy Counsellor as a result of taking office and should be addressed by the official title prefixed by "The Right Honourable '' and not by a personal name. Although this form of address is employed on formal occasions, it is rarely used by the media. As "Prime Minister '' is a position, not a title, the incumbent should be referred to as "the Prime Minister ''. The title "Prime Minister '' (e.g. "Prime Minister James Smith '') is technically incorrect but is sometimes used erroneously outside the United Kingdom, and has more recently become acceptable within it. Within the UK, the expression "Prime Minister Smith '' is never used, although it, too, is sometimes used by foreign dignitaries and news sources.
10 Downing Street, in London, has been the official place of residence of the Prime Minister since 1732; they are entitled to use its staff and facilities, including extensive offices. Chequers, a country house in Buckinghamshire, gifted to the government in 1917, may be used as a country retreat for the Prime Minister.
There are four living former British Prime Ministers: John Major, Tony Blair, Gordon Brown and David Cameron.
Upon retirement, it is customary for the Sovereign to grant a Prime Minister some honour or dignity. The honour bestowed is commonly, but not invariably, membership of the United Kingdom 's most senior order of chivalry, the Order of the Garter. The practice of creating a retired Prime Minister a Knight (or, in the case of Margaret Thatcher, a Lady) of the Garter (KG and LG respectively) has been fairly prevalent since the mid-nineteenth century. Upon the retirement of a Prime Minister who is Scottish, it is likely that the primarily Scottish honour of Knight of the Thistle (KT) will be used instead of the Order of the Garter, which is generally regarded as an English honour.
Historically it has also been common for Prime Ministers to be granted a peerage upon retirement from the Commons, which elevates the individual to the House of Lords. Formerly, the peerage bestowed was usually an earldom, with Churchill offered a dukedom.
From the 1960s onward, life peerages were preferred, although in 1984 Harold Macmillan was created Earl of Stockton. Sir Alec Douglas - Home, Harold Wilson, James Callaghan and Margaret Thatcher accepted life peerages, although Douglas - Home had previously disclaimed his hereditary title as Earl of Home. Edward Heath did not accept a peerage of any kind and nor have any of the Prime Ministers to retire since 1990; although Heath and Major were later appointed as Knights of the Garter.
The most recent former Prime Minister to die was Margaret Thatcher (served 1979 -- 1990) on 8 April 2013, aged 87.
|
where do the traverse city beach bums play | Traverse City Beach Bums - wikipedia
The Traverse City Beach Bums are a professional baseball team based in the Traverse City, Michigan, suburb of Blair Township, in the United States. The Beach Bums are a member of the East Division of the Frontier League, which is not affiliated with Major League Baseball. Since their establishment in 2006, the Beach Bums have played their home games at Wuerfel Park.
The "Beach Bums '' name refers to the residents and visitors who come to Michigan 's most popular resort town, to spend time on the beach. Traverse City lies on the Grand Traverse Bay, a branch of Lake Michigan. The team 's colors of navy blue and gold represent the region 's bays and bright summer sunshine.
Wuerfel Park averages well over 200,000 fans per season for Beach Bums home games, ranking near the top of attendance lists in independent baseball. In their first three seasons, the Beach Bums have drawn over 600,000 fans with a mix of loyal, local fans, tourists, and regional baseball fans.
John and Leslye Wuerfel are the managing members of the Beach Bums. Their son, Jason, is the Beach Bums ' Vice President and Director of Baseball Operations. Jason Wuerfel lettered four times with the University of Michigan varsity baseball team, and also won the 2003 Wolverine Award for Spirit and Leadership. He played professionally with the Elmira Pioneers, the Mid-Missouri Mavericks, and the Ohio Valley Redcoats. Jason also owns and operates the Beach Bums Baseball Academy, an organization that instructs young baseball players in the Traverse City area.
In November 2008, the Beach Bums named two new coaches with Gregg Langbehn being hired as the manager and Roger Mason accepting the position of pitching coach. Langbehn had previously spent 10 seasons as a coach and manager in the Houston Astros minor league system, while Mason, a Northern Michigan native, spent nine seasons as a pitcher in major league baseball, including a World Series appearance with the 1993 Philadelphia Phillies.
The Beach Bums are Traverse City 's first professional baseball team since 1915. Predecessors include the semi-professional Traverse City Hustlers of the 1890s, and the professional Traverse City Resorters, from 1910 to 1915.
Following the 2004 season, the Frontier League granted a franchise for Traverse City, Michigan. However, the league was not sure whether to consider the team for expansion or relocation. In 2005, the Richmond Roosters were purchased by the Wuerfels and moved to Traverse City.
The Beach Bums played their first home game at Wuerfel Park on May 24, 2006, against the Kalamazoo Kings, with a sell - out crowd of 5,825.
Frontier League Championship Series: Defeated the Chillicothe Paints 3 - 0.
Frontier League Championship Series: Defeated the Washington Wild Things 3 - 1
Frontier League Championship Series: Lost vs. River City Rascals 1 - 3
Frontier League Division Series: Won vs. Normal 2 - 0
Frontier League Championship Series: Won vs. River City 3 - 0
Roster (updated May 12, 2017): pitchers, a utility player, catchers, infielders, outfielders, the manager and coaches.
|
what is it called when a dead body moves | Cadaveric spasm - wikipedia
Cadaveric spasm, also known as postmortem spasm, instantaneous rigor, cataleptic rigidity, or instantaneous rigidity, is a rare form of muscular stiffening that occurs at the moment of death and persists into the period of rigor mortis. Cadaveric spasm can be distinguished from rigor mortis by its stronger stiffening of the muscles, which cannot be easily undone as rigor mortis can. The cause is unknown, but it is usually associated with violent deaths occurring under conditions of intense physical exertion and emotion.
Cadaveric spasm may affect all muscles in the body, but typically only groups, such as the forearms, or hands. Cadaveric spasm is seen in cases of drowning victims when grass, weeds, roots or other materials are clutched, and provides evidence of life at the time of entry into the water. Cadaveric spasm often crystallizes the last activity one did before death and is therefore significant in forensic investigations, e.g. holding onto a knife tightly.
ATP is required to reuptake calcium into the sarcomere 's sarcoplasmic reticulum (SR). When a muscle is relaxed, the myosin heads are returned to their "high energy '' position, ready and waiting for a binding site on the actin filament to become available. Because there is no ATP available, previously released calcium ions can not return to the SR. These leftover calcium ions move around inside the sarcomere and may eventually find their way to a binding site on the thin filament 's regulatory protein. Since the myosin head is already ready to bind, no additional ATP expenditure is required and the sarcomere contracts.
When this process occurs on a larger scale, the stiffening associated with rigor mortis can occur. Cadaveric spasm mainly occurs in deaths involving high ATP use, such as intense physical exertion. Sometimes, cadaveric spasms can be associated with erotic asphyxiation resulting in death.
Cadaveric spasm has been posed as an explanation for President Kennedy 's reaction to the fatal head shot in his 1963 assassination, to indicate why his head moved backward after the shot.
When the body of Kurt Cobain was discovered, his left hand tightly clutched the barrel of the shotgun that killed him, indicating he had been alive and holding the weapon before his death, rather than having been killed by another person and the scene then arranged to suggest suicide.
Matthias Pfaffli and Dau Wyler, Professors of Legal Medicine at the University of Bern, Switzerland, posed five requirements for a death to be observed and classified as involving a cadaveric spasm.
Because of the improbability that all of these requirements may be examined in one subject, cadaveric spasms are unlikely to be consistently documented and therefore proved existent.
Little or no pathophysiological or scientific basis exists to support the validity of cadaveric spasms. Chemically, the phenomenon cannot be explained as being analogous to "true" rigor mortis. Therefore, a variety of other factors have been examined in an effort to account for the reported cases of supposed instantaneous rigor mortis. In a study reported in The International Journal of Legal Medicine, there was no consistent evidence of cadaveric spasms even in deaths of the same type. Out of 65 sharp-force suicides, only two victims still held their weapon post mortem; this low incidence suggests that genuine cadaveric spasm was not exhibited. Gravity may be a large factor in the trapping of limbs and other objects under the body at the time of death, and in the observed placement of limbs afterwards. In fatalities involving cranial or neural injury, nerve damage in the brain may inhibit the ability to release a weapon from the hand. The flexion of agonist and antagonist muscles in conjunction may additionally contribute to the observed fixation of an object or weapon.
|
where is the gold cup 2017 being held | 2017 CONCACAF Gold Cup - wikipedia
The 2017 CONCACAF Gold Cup was the 14th edition of the CONCACAF Gold Cup, the biennial international men 's football championship of the North, Central American and Caribbean region organized by CONCACAF, and 24th CONCACAF regional championship overall. The tournament was played between July 7 -- 26, 2017 in the United States.
The United States won their sixth title with their 2 -- 1 victory over Jamaica in the final.
As the winners of this tournament, the United States will play against the winners of the 2019 CONCACAF Gold Cup in the 2019 CONCACAF Cup, a one - match play - off to determine CONCACAF 's qualifier for the 2021 FIFA Confederations Cup. If the United States win the 2019 Gold Cup, they will automatically qualify for the Confederations Cup.
A total of 12 teams qualified for the tournament. Three berths were allocated to North America, four to Central America, four to the Caribbean, and one to the winners of the play - off between the two fifth - placed teams of the Caribbean zone and the Central American zone.
Notes: This was Curaçao's first appearance since the dissolution of the Netherlands Antilles; as its direct successor (with regard to membership in football associations), Curaçao inherited the former nation's FIFA membership and competitive record. French Guiana and Martinique are not FIFA members, and so did not have a FIFA ranking.
The venues were announced on December 19, 2016. Levi 's Stadium was announced as the venue of the final on February 1, 2017.
The United States and Mexico were announced as the seeded teams of Groups B and C respectively on December 19, 2016. Honduras, the winners of the 2017 Copa Centroamericana title were announced as being the seeded team in Group A on February 14, 2017.
The groups and match schedule were revealed on March 7, 2017, 10: 00 PST (UTC − 8), at Levi 's Stadium in Santa Clara, California. At the time of the announcement, 11 of the 12 qualified teams were known, with the identity of the CFU -- UNCAF play - off winners not yet known.
The 12 national teams involved in the tournament were required to register a squad of 23 players; only players in these squads were eligible to take part in the tournament.
A provisional list of 40 players per national team was submitted to CONCACAF by June 2, 2017. The final list of 23 players per national team was submitted to CONCACAF by June 27, 2017. Three players per national team had to be goalkeepers.
National teams that reached the quarter - final stage were able to swap up to six players in the final squad with six players from the provisional list within 24 hours of their final group stage game.
The match officials, which included 17 referees and 25 assistant referees, were announced on June 23, 2017.
The top two teams from each group and the two best third - placed teams qualified for the quarter - finals.
All match times listed are in EDT (UTC − 4). If the venue is located in a different time zone, the local time is also given.
The ranking of each team in each group was determined as follows:
French Guiana v Canada
Honduras v Costa Rica
Costa Rica v Canada
Honduras v French Guiana
Costa Rica v French Guiana
Canada v Honduras
United States v Panama
Martinique v Nicaragua
Panama v Nicaragua
United States v Martinique
Panama v Martinique
Nicaragua v United States
Curaçao v Jamaica
Mexico v El Salvador
El Salvador v Curaçao
Mexico v Jamaica
Jamaica v El Salvador
Curaçao v Mexico
The two best third-placed teams advanced to the knockout stage and played group winners from other groups in the quarter-finals.
In the quarter - finals and semi-finals, if a match was tied after 90 minutes, extra time would not have been played and the match would be decided by a penalty shoot - out. In the final, if the match was tied after 90 minutes, extra time would have been played, where each team would have been allowed to make a fourth substitution. If still tied after extra time, the match would have been decided by a penalty shoot - out. Unlike the previous edition of the competition, there was no third place play - off.
Costa Rica v Panama
United States v El Salvador
Jamaica v Canada
Mexico v Honduras
Costa Rica v United States
Mexico v Jamaica
United States v Jamaica
55 goals were scored in 25 matches, for an average of 2.2 goals per match.
The following awards were given at the conclusion of the tournament.
The technical study group selected the tournament 's best XI.
"The Arena '' and "Do n't Let This Feeling Fade '' by American violinist Lindsey Stirling served as the official songs of the tournament. The latter features Rivers Cuomo of the band Weezer and rapper Lecrae.
"Bia Beraghsim '' by Persian - Swedish singer Mahan Moin served as the official anthem of the tournament
"Levántate '' by Puerto Rican singer Gale served as the official Spanish - language song of the tournament.
"Thunder '' and "Whatever It Takes '' by American rock band Imagine Dragons also served as official anthems of the tournament.
|
who played black widow in avengers infinity war | Black Widow (Natasha Romanova) - Wikipedia
Natalia Alianovna Romanova (Russian: Наталья Альяновна "Наташа" Романова; alias: Natasha Romanoff; Russian: Наташа Романофф), colloquially known as Black Widow (Russian: Чёрная Вдова; transliterated Chyornaya Vdova), is a fictional superhero appearing in American comic books published by Marvel Comics. Created by editor and plotter Stan Lee, scripter Don Rico, and artist Don Heck, the character debuted in Tales of Suspense # 52 (April 1964). The character was introduced as a Russian spy, an antagonist of the superhero Iron Man. She later defected to the United States, becoming an agent of the fictional spy agency S.H.I.E.L.D., and a member of the superhero team the Avengers.
Scarlett Johansson portrayed the character in the films Iron Man 2 (2010), The Avengers (2012), Captain America: The Winter Soldier (2014), Avengers: Age of Ultron (2015), Captain America: Civil War (2016), Avengers: Infinity War (2018) and the Untitled Avengers film (2019) as a part of the Marvel Cinematic Universe franchise.
The Black Widow 's first appearances were as a recurring, non-costumed, Russian - spy antagonist in the feature "Iron Man '', beginning in Tales of Suspense # 52 (April 1964). Five issues later, she recruits the besotted costumed archer and later superhero Hawkeye to her cause. Her government later supplies her with her first Black Widow costume and high - tech weaponry, but she eventually defects to the United States after appearing, temporarily brainwashed against the U.S., in the superhero - team series The Avengers # 29 (July 1966). The Widow later becomes a recurring ally of the team before officially becoming its sixteenth member many years later.
The Black Widow was visually updated in 1970: The Amazing Spider - Man # 86 (July 1970) reintroduced her with shoulder - length red hair (instead of her former short black hair), a skintight black costume, and wristbands which fired spider threads. This would become the appearance most commonly associated with the character.
In short order, The Black Widow starred in her own series in Amazing Adventures # 1 -- 8 (Aug. 1970 -- Sept. 1971), sharing that split book with the feature Inhumans. The Black Widow feature was dropped after only eight issues (the Inhumans feature followed soon, ending with issue 10).
Immediately after her initial solo feature ended, the Black Widow co-starred in Daredevil # 81 -- 124 (Nov. 1971 -- Aug. 1975), of which # 93 - 108 were cover titled Daredevil and the Black Widow. Daredevil writer Gerry Conway recounted, "It was my idea to team up Daredevil and the Black Widow, mainly because I was a fan of Natasha, and thought she and Daredevil would have interesting chemistry. '' Succeeding writers, however, felt that Daredevil worked better as a solo hero, and gradually wrote the Black Widow out of the series. She was immediately recast into the super-team series The Champions as the leader of the titular superhero group, which ran for 17 issues (Oct. 1975 -- Jan. 1978).
Throughout the 1980s and 1990s, the Black Widow appeared frequently as both an Avengers member and a freelance agent of S.H.I.E.L.D. She starred in a serialized feature within the omnibus comic - book series Marvel Fanfare # 10 -- 13 (Aug. 1983 -- March 1984), written by George Pérez and Ralph Macchio, with art by penciller Perez. These stories were later collected in the oversized one - shot Black Widow: Web of Intrigue # 1 (June 1999).
The Widow guest - starred in issues of Solo Avengers, Force Works, Iron Man, Marvel Team - Up, and other comics. She had made frequent guest appearances in Daredevil since the late 1970s.
She starred in a three - issue arc, "The Fire Next Time '', by writer Scott Lobdell and penciller Randy Green, in Journey into Mystery # 517 -- 519 (Feb. -- April 1998).
A new ongoing Black Widow comic title debuted in April 2010. The first story arc was written by Marjorie Liu with art by Daniel Acuna. Beginning with issue # 6 (Sept. 2010), the title was written by Duane Swierczynski, with artwork by Manuel Garcia and Lorenzo Ruggiero.
Black Widow appeared as a regular character throughout the 2010 -- 2013 Secret Avengers series, from issue # 1 (July 2010) through its final issue # 37 (March 2013).
Black Widow appears in the 2013 Secret Avengers series by Nick Spencer and Luke Ross.
Black Widow appears in a relaunched ongoing series by writer Nathan Edmondson and artist Phil Noto. The first issue debuted in January 2014.
In October 2015, it was announced that Mark Waid and Chris Samnee would be launching a new Black Widow series for 2016 as part of Marvel 's post-Secret Wars relaunch. The first issue was released in March 2016.
Aside from the arcs in Marvel Fanfare and Journey into Mystery, the Black Widow has starred in four limited series and four graphic novels.
The three - issue Black Widow (June - Aug. 1999), under the Marvel Knights imprint, starred Romanova and fully introduced her appointed successor, Captain Yelena Belova, who had briefly appeared in an issue of the 1999 series Inhumans. The writer for the story arc, "The Itsy - Bitsy Spider '' was Devin K. Grayson while J.G. Jones was the artist. The next three - issue, Marvel Knights mini-series, also titled Black Widow (Jan. - March 2001) featured both Black Widows in the story arc "Breakdown '', by writers Devin Grayson and Greg Rucka with painted art by Scott Hampton.
Romanova next starred in another solo miniseries titled Black Widow: Homecoming (Nov. 2004 - April 2005), also under the Marvel Knights imprint and written by science fiction novelist Richard K. Morgan, with art initially by Bill Sienkiewicz and later by Sienkiewicz over Goran Parlov layouts. A six - issue sequel, Black Widow: The Things They Say About Her (Nov. 2005 -- April 2006; officially Black Widow 2: The Things They Say About Her in the series ' postal indicia), by writer Morgan, penciller Sean Phillips, and inker Sienkiewicz, picks up immediately where the previous miniseries left off, continuing the story using many of the same characters.
She starred in the solo graphic novel Black Widow: The Coldest War (April 1990), and co-starred in three more: Punisher / Black Widow: Spinning Doomsday 's Web (Dec. 1992); Daredevil / Black Widow: Abattoir (July 1993); and Fury / Black Widow: Death Duty (June 1995), also co-starring Marvel UK 's Night Raven.
Black Widow is also featured in the short story Love Is Blindness in I Heart Marvel: Marvel Ai (2006) # 1 (April 2006), where she instigates a humorous fight with Elektra over Daredevil 's affections. The comic is stylized to look like Japanese animation and uses images, not words, inside the speech and thought bubbles to convey what the characters are saying / thinking.
In 2010, the year in which the character, called only Natasha Romanoff, made her film debut in Iron Man 2, the Black Widow received two separate miniseries. Black Widow and the Marvel Girls was an all - ages, four - issue series that chronicled her adventures with various women of the Marvel Universe, including Storm, She - Hulk, the Enchantress, and Spider - Woman. It was written by Paul Tobin, with art by Salvador Espin and Takeshi Miyazawa. The second four - issue miniseries, Black Widow: Deadly Origin, was written by Paul Cornell, and featured art by Tom Raney and John Paul Leon.
Natasha was born in Stalingrad (now Volgograd), Russia. The first and best - known Black Widow is a Russian agent trained as a spy, martial artist, and sniper, and outfitted with an arsenal of high - tech weaponry, including a pair of wrist - mounted energy weapons dubbed her "Widow 's Bite ''. She wears no costume during her first few appearances but simply evening wear and a veil. Romanova eventually defects to the U.S. for reasons that include her love for the reluctant - criminal turned superhero archer, Hawkeye.
The first hints about Natasha Romanova 's childhood come from Ivan Petrovich, who is introduced as her middle - aged chauffeur and confidant in the Black Widow 's 1970s Amazing Adventures feature. The man tells Matt Murdock how he had been given custody of little Natasha by a woman just before her death during the Battle of Stalingrad in autumn 1942. He had consequently felt committed to raise the orphan as a surrogate father, and she had eventually trained as a Soviet spy, eager to help her homeland. In another flashback, set on the fictional island of Madripoor in 1941, Petrovich helps Captain America and the mutant Logan, who would later become the Canadian super-agent and costumed hero Wolverine, to rescue Natasha from the Nazis.
A revised, retconned origin establishes her as being raised from very early childhood by the U.S.S.R. 's "Black Widow Ops '' program, rather than solely by Ivan Petrovitch. Petrovitch had taken her to Department X, with other young female orphans, where she was brainwashed, and trained in combat and espionage at the covert "Red Room '' facility. There, she is biotechnologically and psycho - technologically enhanced -- an accounting that provides a rationale for her unusually long and youthful lifespan. During that time she had some training under Winter Soldier, and the pair even had a short romance. Each Black Widow is deployed with false memories to help ensure her loyalty. Romanova eventually discovers this, including the fact that she had never, as she had believed, been a ballerina. She further discovers that the Red Room is still active as "2R ''.
The KGB arranged for Natasha to marry the renowned Soviet test pilot Alexei Shostakov. However, when the Soviet government decided to make Alexei into its new operative, the Red Guardian, he was told that he could have no further contact with his wife. Natasha was told that he had died, and she was trained as a secret agent separately.
Romanova grew up to serve as a femme fatale. She was assigned to assist Boris Turgenov in the assassination of Professor Anton Vanko, who had defected from the Soviet Union, in what served as her first mission in the United States. Natasha and Turgenov infiltrated Stark Industries as part of the plan. She attempted to manipulate information from American defense contractor Tony Stark and inevitably confronted his superhero alter ego, Iron Man. The pair then battled Iron Man, with Turgenov stealing and wearing the Crimson Dynamo suit. Vanko sacrificed himself to save Iron Man, killing Turgenov in the process with an unstable experimental laser light pistol. Romanova later met the criminal archer Hawkeye, set him against Iron Man, and subsequently helped Hawkeye battle Iron Man.
Natasha once more attempted to get Hawkeye to help her destroy Iron Man. The pair almost succeeded, but when Black Widow was injured, Hawkeye retreated to get her to safety. During this period, Romanova was attempting to defect from the Soviet Union and began falling in love with Hawkeye, weakening her loyalty to her country. When her employers learned the truth, the KGB had her gunned down and hospitalized, an attack that convinced Hawkeye to go straight and seek membership in the Avengers.
The Red Room kidnaps and brainwashes her again, and with the Swordsman and the first Power Man, she battles the Avengers. She eventually breaks free from her psychological conditioning (with the help of Hawkeye), and does successfully defect, having further adventures with Spider - Man, with Hawkeye and with Daredevil. She ultimately joins the Avengers as a costumed heroine herself.
Later still, she begins freelancing as an agent of the international espionage group S.H.I.E.L.D. She is sent on a secret S.H.I.E.L.D. mission to China by Nick Fury. There, with the Avengers, she battles Col. Ling, Gen. Brushov, and her ex-husband the Red Guardian. For a time, as writer Les Daniels noted in a contemporaneous study in 1971,
... her left - wing upbringing was put to better use, and she has lately taken to fighting realistic oppressor - of - the - people types. She helps young Puerto Ricans clean up police corruption and saves young hippies from organized crime... (The splash page of Amazing Adventures # 3 (Nov. 1970)) reflects the recent trend toward involving fantastic characters in contemporary social problems, a move which has gained widespread publicity for Marvel and its competitor, DC.
During her romantic involvement with Matt Murdock in San Francisco, she operates as an independent superhero alongside Murdock 's alter ego, Daredevil. There she tries unsuccessfully to find a new career for herself as a fashion designer. Eventually, her relationship with Murdock stagnates, and after briefly working with the Avengers she finally breaks up with Murdock, fearing that playing "sidekick '' is sublimating her identity. During a HYDRA attempt to take over S.H.I.E.L.D., she is tortured to such an extent that she regresses back to an old cover identity of schoolteacher Nancy Rushman, but she is recovered by Spider - Man in time to help Nick Fury and Shang - Chi work out what had happened and restore her memory, with "Nancy '' developing an attraction to Spider - Man before her memory is restored during the final fight against Madam Viper, Boomerang and the Silver Samurai. She later returns to Matt Murdock 's life to find he is romantically involved with another woman, Heather Glenn, prompting her to leave New York. Natasha ultimately realizes that Matt still only thinks of her in platonic terms, and elects to restrain herself from any advances.
After their breakup, the Widow moves to Los Angeles and becomes leader of the newly created and short - lived super team known as The Champions, consisting of her, Ghost Rider (Johnny Blaze), Hercules (with whom she has a brief romance), and former X-Men Angel and Iceman.
Her friends often call her "Natasha '', the informal version of "Natalia ''. She has sometimes chosen the last - name alias "Romanoff '' -- evidently as a private joke on those who are not aware that Russian family names use different endings for males and females. She has been hinted to be a descendant of the deposed House of Romanov and a relation to Nicholas II of Russia.
Natasha crosses Daredevil 's (Matt Murdock) path again when he attempts to slay an infant he believes to be the Anti-Christ while under the influence of mind - altering drugs. After Daredevil 's one - time love, Karen Page, dies protecting the child, Natasha reconciles with Murdock, revealing she still loves him, but noting that he is too full of anger to commit to a relationship with her.
Natasha is challenged by Yelena Belova, a graduate from the training program through which Natasha herself was taught the espionage trade, who is the first to ever surpass Natasha 's marks and considers herself the rightful successor to the "Black Widow '' mantle. Natasha refers to her as "little one '' and "rooskaya (meaning "Russian ''), and encourages her to discover her individuality rather than live in blind service, asking her "why be Black Widow, when you can be Yelena Belova? '' After several confrontations, Natasha subjects Yelena to intense psychological manipulation and suffering in order to teach her the reality of the espionage business, and an angry but disillusioned Yelena eventually returns home and temporarily quits being a spy. Although Matt Murdock is appalled by the cruelty of Natasha 's treatment of Yelena, Nick Fury describes the action as Natasha 's attempt at saving Yelena 's life. After bringing the Avengers and the Thunderbolts together to overcome Count Nefaria, Natasha supported Daredevil 's short - lived efforts to form a new super-team to capture the Punisher, originally believed to be Nick Fury 's murderer. Despite recruitment endeavors, however, this vigilante group folded shortly after she and her teammate Dagger fought an army of renegade S.H.I.E.L.D. androids; ironically, she soon afterward worked with both Daredevil and Punisher against the European crime syndicate managed by the Brothers Grace. Months later, her pursuit of war criminal Anatoly Krylenko led to a clash with Hawkeye, whose pessimism regarding heroic activities now rivaled her own.
Shortly after the Scarlet Witch 's insanity seemingly killed Hawkeye, and again disbanded the Avengers, Natasha, weary of espionage and adventure, travelled to Arizona but was targeted. Natasha discovers that other women had been trained in the Black Widow Program, and all are now being hunted down and killed by the North Institute on behalf of the corporation Gynacon. Natasha 's investigations led her back to Russia, where she was appalled to learn the previously unimagined extent of her past manipulation, and she discovered the Widows were being hunted because Gynacon, having purchased Russian biotechnology from Red Room 's successor agency 2R, wanted all prior users of the technology dead. Natasha finds and kills the mastermind of the Black Widow murders: Ian McMasters, Gynacon 's aging CEO, who intended to use part of their genetic structure to create a new chemical weapon. After killing McMasters, she clashed with operatives of multiple governments to help Sally Anne Carter, a girl Natasha had befriended in her investigations, whom she rescued with help from Daredevil and Yelena Belova. She soon returned the favor for Daredevil by reluctantly working with Elektra Natchios to protect his new wife, Milla Donovan, from the FBI and others, although Yelena proved beyond help when she agreed to be transformed into the new Super-Adaptoid by A.I.M. and HYDRA.
During the Superhero Civil War, Natasha becomes a supporter of the Superhuman Registration Act and a member of the taskforce led by Iron Man. Afterward, the registered Natasha joins the reconstituted Avengers. S.H.I.E.L.D. director Nick Fury is presumed killed, and deputy director Maria Hill incapacitated, so Natasha assumes temporary command of S.H.I.E.L.D. as the highest - ranking agent present.
Later, Tony Stark assigns Natasha to convey the late Captain America 's shield to a secure location, but she is intercepted by her former lover, Bucky Barnes, the Winter Soldier, who steals the shield. Natasha and the Falcon then rescue Barnes from the Red Skull 's minions, and bring him to the S.H.I.E.L.D. Helicarrier, where Stark convinces Bucky to become the new Captain America. Afterward, Natasha accompanies Bucky as his partner for a brief time until she is called back by S.H.I.E.L.D. She later rejoins him and the Falcon for the final confrontation with the Red Skull, helping to rescue Sharon Carter, and she and Bucky restart their relationship. She later plays an important role in the capture of Hercules; however, out of respect for the Greek god, she lets him go. Soon Natasha, along with the rest of the Avengers, gets involved in the Skrull invasion. Afterwards, she stays on as Bucky 's partner and assists former director Maria Hill in delivering a special form of data to Bucky.
Norman Osborn discovered Yelena Belova breaking into an abandoned S.H.I.E.L.D. facility, and offered her the position of field leader of the new Thunderbolts. On her first mission, she and Ant - Man take control of Air Force One with the Goblin, Doc Samson, and the new President aboard. It was suggested she faked her apparent death (as the Adaptoid) but it is never explained how.
A conversation with the Ghost implies that Yelena is working as a mole for someone else and that she may even be someone else disguised as Yelena. She is later seen talking privately through a comm - link to Nick Fury.
Osborn orders Yelena to lead the current Thunderbolts to kill former Thunderbolt, Songbird. Fury orders "Yelena '' to rescue and retrieve Songbird, for the information she might possess about Osborn and his operations. Yelena finds Songbird, and reveals to her that she was really Natasha Romanova in disguise. She tries delivering Songbird to Fury, but the Thunderbolts have also followed them. The trio are captured as Osborn reveals he had been impersonating Fury in messages all along to set Natasha up in order to strengthen the Thunderbolts and lead him to Fury. She and Songbird are brought to be executed but manage to escape when Ant - Man, Headsmen and Paladin turn on the rest of the Thunderbolts and let them go.
At the start of the Heroic Age, Natasha is recruited by Steve Rogers into a new black - ops wing of the Avengers, dubbed the Secret Avengers. She travels to Dubai with her new teammate, Valkyrie, where they steal a dangerous artifact which the Beast then studies, noting that it seems like a distant cousin of the Serpent Crown. In the story "Coppelia '', she encounters a teenage clone of herself, code named "Tiny Dancer '', whom she rescues from an arms dealer.
During the Fear Itself storyline, Black Widow and Peregrine are sent on a mission to free hostages being held in a Marseille cathedral by Rapido. He and a group of mercenaries are trying to exploit the panic over the events in Paris to steal France 's nuclear arsenal.
During the Ends of the Earth storyline, involving one of Doctor Octopus ' schemes, Natasha is one of only three heroes left standing after the Sinister Six defeat the Avengers, joining Silver Sable and Spider - Man to track the Six (albeit because she was closest to Sable 's cloaked ship after the Avengers were defeated rather than for her prowess). She is later contacted by the Titanium Man to warn her and her allies about Doctor Octopus ' attempt to rally other villains against Spider - Man. She is knocked out along with Hawkeye by Iron Man during a battle against the Avengers when they were temporarily under Octavius ' remote control.
During the incursion event between Earth - 616 and Earth - 1610, Natasha is involved in the final battle between the Marvel Universe 's superheroes and the Ultimate Universe 's Children of Tomorrow. She pilots a ship holding a handpicked few to restart humanity after the universe ends, copiloted by Jessica Drew. Her ship is shot down during the battle though, and she is killed in the ensuing explosion.
As the evacuation of Earth - 616 begins, with Earth - 1610 about to come crashing down as part of the "Last Days '' storyline, Black Widow is seen standing atop a building with Captain America, who gives her a list of people to save and bring aboard the lifeboat. When she tells Sam she ca n't save them all, he explains that it is Natasha 's job to help save as many people as possible before Earth as they know it is destroyed. As she leaves, her mind transitions to Cold War Russia, where a young Natasha (here called Natalia) speaks with two Russian functionaries in the infamous "Red Room ''. She is given her first mission: travel to Cuba and locate a family called the Comienzas, who are at risk from Raúl Castro 's regime and who may have information of vital importance to Russia. She is told to rendezvous with another agent, her classmate Marina, and befriend the family under the guise of a Russian businesswoman. Natasha assures them of her competency and leaves. When one of the officers questions her youth, the other assures him, "she 's a killer. She will not disappoint. '' Natasha meets Marina in Cuba and the two friends catch up before meeting with the Comienzas that night at a local bar. Using her talent for deception, she casually and politely convinces the husband and wife that she 's seeking inside information to help her import various goods into the country. The Comienzas explain they ca n't reveal said information, prompting Natasha to later tell Marina that the family might need "a little push ''. Before long, she begins terrorizing the family into desperation. First, she plants an American flag on their doorstep to mimic someone accusing them of defecting to the United States. Later, after meeting with one of the Russian officers from the Red Room to report her progress, she detonates a car bomb outside their home, since the first attempt did n't make them "nearly desperate enough ''. Following the car bomb explosion, Natasha declares the family is indeed desperate enough to approach for information. Before letting Natasha go, the officer announces she has one additional task before her mission is over: Marina has become too enamored with her civilian guise and is now a security risk. Natasha will have to eliminate her.
Back in the present, Black Widow is saving as many people as she can when she flashes back to Havana once more. Natasha and her Red Room partner Marina are trying to help a family defect. Natasha 's orders are simple: kill the parents and make it public. When Natasha asks if she should kill the child too, her boss looks horrified at her willingness and tells her no. Having no problem following orders, she sets up a meeting and, using a sniper rifle, takes out the pair without blinking. She then shoots Marina 's boyfriend, then Marina herself, and finally Marina 's cat. Returning to the present, Black Widow is still saving people from the incursion when the trigger for her flashback is revealed: a man she saved is holding his cat. This dark, heartless side of the Black Widow shows why she is trying so hard to do good today.
During the Secret Empire storyline, Black Widow appears as a member of the underground resistance at a time when most of the United States has been taken over by Hydra and by Captain America, who has been brainwashed by Red Skull 's clone, using the powers of the sentient Cosmic Cube Kobik, into believing that he is a Hydra sleeper agent. While Hawkeye assembles a strike force of Hercules and Quicksilver to find the Cosmic Cube fragments, Black Widow sets off to kill Rogers herself, reasoning that even if Rick 's theory is true, the man Rogers was would rather die than be used in this manner. She finds herself followed by the Champions as she establishes her version of the Red Room. While preparing to shoot Captain America with a sniper rifle, she rushes to prevent Miles Morales from killing him as predicted by Ulysses, and is struck by his shield, breaking her neck and killing her. Despite the return of the real Steve Rogers and the downfall of Hydra, Natasha 's death, along with the other casualties, is not undone.
However, while observing a dictator who recently rose to power due to his support of Hydra, Bucky witnesses the man being assassinated in such a manner that he believes only Natasha could have pulled off the kill, and believes he sees the Black Widow depart from her chosen vantage point.
It was later discovered that Black Widow had been cloned by the Black Widow Ops Program following her death. Its member Ursa Major bribed Epsilon Red to let him add her current memories while secretly disposing of the bad programming. The Black Widow Ops Program tasked the clone with taking out the remnants of Hydra and S.H.I.E.L.D. She revealed herself to Winter Soldier and Hawkeye while also killing Orphan Maker. To keep them from interfering, the Black Widow clone locked Winter Soldier and Hawkeye in a safe room within the Red Room. The clone rose through the ranks of the Red Room while secretly persuading the recruits to turn against their masters. When Winter Soldier and Hawkeye arrived at the Red Room, she dropped her cover and began to kill her superiors, liberate the recruits, and destroy all the clones and Epsilon Red. When the authorities arrived, Black Widow left the Red Room, leaving a note telling Hawkeye to stop following her and asking Winter Soldier to join her in ending the Red Room.
During the "Infinity Countdown '' storyline, the Black Widow clone had traced a dead drop signal left by a somehow - revived Wolverine in Madripoor. She found that Wolverine secretly left the Space Infinity Gem in her care.
The Black Widow is a world class athlete, gymnast, acrobat, aerialist capable of numerous complex maneuvers and feats, expert martial artist (including Jiu jitsu, Aikido, Boxing, Judo, Karate, Savate, Ninjutsu, various styles of Kung Fu and Kenpo), marksman and weapons specialist as well as having extensive espionage training. She is also an accomplished ballerina.
The Black Widow has been enhanced by biotechnology that makes her body resistant to aging and disease and allows her to heal at an above - human rate, as well as by psychological conditioning that suppresses her memory of true events, as opposed to implanted ones, unless counteracted by specially designed system suppressant drugs.
The white blood cells in her body are efficient enough to fight off virtually any microbe or other foreign body, keeping her healthy and immune to most, if not all, infections, diseases and disorders.
Her agility is greater than that of an Olympic gold medalist. She can coordinate her body with balance, flexibility, and dexterity easily.
Romanova has a gifted intellect. She displays an uncanny affinity for psychological manipulation and can mask her real emotions perfectly. Like Steve Rogers, she possesses the ability to quickly process multiple information streams (such as threat assessment) and rapidly respond to changing tactical situations.
Romanova is an expert tactician and a highly effective strategist and field commander. She has led the Avengers and, on one occasion, even S.H.I.E.L.D.
The Black Widow uses a variety of equipment invented by Soviet scientists and technicians, with later improvements by S.H.I.E.L.D. scientists and technicians. She usually wears distinctively shaped bracelets which fire the Widow 's Bite electro - static energy blasts that can deliver charges up to 30,000 volts, as well as "Widow 's Line '' grappling hooks, tear gas pellets, and a new element introduced during her ongoing series during the "Kiss or Kill '' arc called the "Widow 's Kiss '' -- an aerosol instant knock - out gas she has modified. She wears a belt of metallic discs; some are disc - charges containing plastic explosives, while others have been shown to be compartments for housing other equipment. Her costume consists of synthetic stretch fabric equipped with micro-suction cups on fingers and feet, enabling her to adhere to walls and ceilings. In the 2006 "Homecoming '' mini-series, she was seen using knives, unarmed combat, and various firearms, but she has since begun using her bracelets again. While in disguise as Yelena Belova, when infiltrating the then Osborn - sanctioned Thunderbolts during "Dark Reign '', she used a specialized multi-lens goggle / head - carapace that demonstrated various technical abilities enhancing vision and communication. Later, she has used a modified gun based on her Widow 's Bite wrist cartridge, during her adventures alongside the new Captain America.
The Black Widow was ranked as the 176th greatest comic book character in Wizard magazine. IGN also ranked her as the 74th greatest comic book character stating that wherever conspiracy and treachery are afoot, you can expect the Black Widow to appear to save the day, and as # 42 on their list of the "Top 50 Avengers ''. She was ranked 31st in Comics Buyer 's Guide 's "100 Sexiest Women in Comics '' list.
|
why abhijnana shakuntala is considered as the best drama of kalidasa | Shakuntala (play) - wikipedia
Shakuntala, also known as The Recognition of Shakuntala, The Sign of Shakuntala, and many other variants (Devanagari: अभिज्ञानशाकुन्तलम् -- Abhijñānashākuntala), is a Sanskrit play by the ancient Indian poet Kālidāsa, dramatizing the story of Shakuntala told in the epic Mahabharata. It is considered to be the best of Kālidāsa 's works. Its date is uncertain, but Kālidāsa is often placed in the period between the 1st century BCE and 4th century CE.
Shakuntala elaborates upon an episode mentioned in the Mahabharata, with minor changes made (by Kālidāsa) to the plot.
Manuscripts differ on what its exact title is. Usual variants are Abhijñānaśakuntalā, Abhijñānaśākuntala, Abhijñānaśakuntalam and the "grammatically indefensible '' Abhijñānaśākuntalam. The Sanskrit title means pertaining to the recognition of Shakuntala, so a literal translation could be Of Shakuntala who is recognized. The title is sometimes translated as The token - for - recognition of Shakuntala or The Sign of Shakuntala. Titles of the play in published translations include Sacontalá or The Fatal Ring and Śakoontalá or The Lost Ring.
The protagonist is Shakuntala, daughter of the sage Vishwamitra and the apsara Menaka. Abandoned at birth by her parents, Shakuntala is reared in the secluded hermitage of the sage Kanva, and grows up a comely but innocent maiden.
While Kanva and the other elders of the hermitage are away on a pilgrimage, Dushyanta, king of Hastinapura, comes hunting in the forest and chances upon the hermitage. He is captivated by Shakuntala, courts her in royal style, and marries her. He then has to leave to take care of affairs in the capital. She is given a ring by the king, to be presented to him when she appears in his court. She can then claim her place as queen.
The anger - prone sage Durvasa arrives when Shakuntala is lost in her fantasies, so that when she fails to attend to him, he curses her by bewitching Dushyanta into forgetting her existence. The only cure is for Shakuntala to show him the signet ring that he gave her.
She later travels to meet him, and has to cross a river. The ring is lost when it slips off her finger as she playfully dips her hand in the water. On arrival, the king refuses to acknowledge her. Shakuntala is abandoned by her companions, who return to the hermitage.
Fortunately, the ring is discovered by a fisherman in the belly of a fish, and Dushyanta realises his mistake - too late. The newly wise Dushyanta defeats an army of Asuras, and is rewarded by Indra with a journey through heaven. Returned to Earth years later, Dushyanta finds Shakuntala and their son by chance, and recognizes them.
In other versions, especially the one found in the Mahabharata, Shakuntala is not reunited until her son Bharata is born, and found by the king playing with lion cubs. Dushyanta enquires about his parents to young Bharata and finds out that Bharata is indeed his son. Bharata is an ancestor of the lineages of the Kauravas and Pandavas, who fought the epic war of the Mahabharata. It is after this Bharata that India was given the name "Bharatavarsha '', the ' Land of Bharata '.
By the 18th century, Western poets were beginning to get acquainted with works of Indian literature and philosophy. Shakuntala was the first Indian drama to be translated into a Western language, by Sir William Jones in 1789. In the next 100 years, there were at least 46 translations in twelve European languages.
Sacontalá or The Fatal Ring, Sir William Jones ' translation of Kālidāsa 's play, was first published in Calcutta, followed by European republications in 1790, 1792 and 1796. A German and a French version of Jones ' translation were published in 1791 and 1803 respectively. Goethe published an epigram about Shakuntala in 1791, and in his Faust he adopted a theatrical convention from the prologue of Kālidāsa 's play. Karl Wilhelm Friedrich Schlegel 's plan to translate Shakuntala in German never materialised, but he did however publish a translation of the Mahabharata version of Shakuntala 's story in 1808. Goethe 's epigram goes like this:
Wilt thou the blossoms of spring and the fruits that are later in season,
Wilt thou have charms and delights,
Wilt thou have strength and support,
Wilt thou with one short word encompass the earth and the heaven,
All is said if I name only, Shakuntala, thee.
When Leopold Schefer became a student of Antonio Salieri in September 1816, he had been working on an opera about Shakuntala for at least a decade, a project which he did however never complete. Franz Schubert, who had been a student of Salieri until at least December of the same year, started composing his Sakuntala opera, D 701, in October 1820. Johann Philipp Neumann based the libretto for this opera on Kālidāsa 's play, which he probably knew through one or more of the three German translations that had been published by that time. Schubert abandoned the work in April 1821 at the latest. A short extract of the unfinished score was published in 1829. Also Václav Tomášek left an incomplete Sakuntala opera.
Kālidāsa 's Shakuntala was the model for the libretto of Karl von Perfall 's first opera, which premièred in 1853. In 1853 Monier Monier - Williams published the Sanskrit text of the play. Two years later he published an English translation of the play, under the title: Śakoontalá or The Lost Ring. A ballet version of Kālidāsa 's play, Sacountalâ, on a libretto by Théophile Gautier and with music by Ernest Reyer, was first performed in Paris in 1858. A plot summary of the play was printed in the score edition of Karl Goldmark 's Overture to Sakuntala, Op. 13 (1865). Sigismund Bachrich composed a Sakuntala ballet in 1884. Felix Weingartner 's opera Sakuntala, with a libretto based on Kālidāsa 's play, premièred the same year. Also Philipp Scharwenka 's Sakuntala, a choral work on a text by Carl Wittkowsky, was published in 1884.
The play has also been translated into Bengali and Tamil.
Felix Woyrsch 's incidental music for Kālidāsa 's play, composed around 1886, is lost. Ignacy Jan Paderewski is said to have composed a Shakuntala opera, on a libretto by Catulle Mendès, in the first decade of the 20th century; however, the work is no longer listed as extant in overviews of the composer 's or librettist 's oeuvre. Arthur W. Ryder published a new English translation of Shakuntala in 1912. Two years later he collaborated on an English performance version of the play.
Italian Franco Alfano composed an opera, named La leggenda di Sakùntala (The legend of Sakùntala) in its first version (1921) and simply Sakùntala in its second version (1952).
A Chinese translation also exists.
Fritz Racek 's completion of Schubert 's Sakontala was performed in Vienna in 1971. Another completion of the opera, by Karl Aage Rasmussen, was published in 2005 and recorded in 2006. A scenic performance of this version was premièred in 2010.
Norwegian electronic musician Amethystium wrote a song called "Garden of Sakuntala '' which can be found on the CD Aphelion. According to Philip Lutgendorf, the narrative of the movie Ram Teri Ganga Maili recapitulates the story of Shakuntala.
In Koodiyattam, the only surviving ancient Sanskrit theatre tradition, performances of Kālidāsa 's plays are rare. However, legendary Kutiyattam artist and Natyashastra scholar Nātyāchārya Vidūshakaratnam Padma Shri Guru Māni Mādhava Chākyār has choreographed a Koodiyattam production of The Recognition of Sakuntala.
A production directed by Tarek Iskander was mounted for a run at London 's Union Theatre in January and February 2009. The play is also appearing on a Toronto stage for the first time as part of the Harbourfront World Stage program. An adaptation by the Magis Theatre Company featuring the music of Indian - American composer Rudresh Mahanthappa had its premiere at La MaMa E.T.C. in New York February 11 -- 28, 2010.
|
why did the british government send the cabinet mission to india | 1946 Cabinet Mission to India - wikipedia
The United Kingdom Cabinet Mission of 1946 came to India to discuss the transfer of power from the British government to the Indian leadership, with the aim of preserving India 's unity and granting it independence. Formulated at the initiative of Clement Attlee, the Prime Minister of the United Kingdom, the mission consisted of Lord Pethick - Lawrence, the Secretary of State for India, Sir Stafford Cripps, President of the Board of Trade, and A.V. Alexander, the First Lord of the Admiralty. Lord Wavell, the Viceroy of India, did not participate in every step but was present.
The British wanted to keep India and its Army united so as to keep it in their system of ' imperial defence ' even after granting it independence. To preserve India 's unity, the British formulated the Cabinet Mission Plan. The Cabinet Mission 's role was to hold preparatory discussions with the elected representatives of British India and the Indian states so as to secure agreement on the method of framing the constitution, to set up a constitution - making body, and to set up an Executive Council with the support of the main Indian parties.
The Mission held talks with the representatives of the Indian National Congress and the All - India Muslim League, the two largest political parties in the Constituent Assembly of India. The two parties planned to determine a power - sharing arrangement between Hindus and Muslims to prevent a communal dispute. The Congress, under Gandhi and Nehru, wanted to obtain a strong central government, with more powers than state governments. The All India Muslim League, under Jinnah, initially wanted to keep India united but with political safeguards provided to Muslims like parity in the legislatures because of the wide belief of Muslims that the British Raj was simply going to be turned into a Hindu Raj once the British departed, and since the Muslim League regarded itself as the sole spokesman party of Indian Muslims, it was incumbent upon it to take the matter up with the Crown. However, in the 1940 Lahore session of the Muslim League, Jinnah endorsed partition via the two nation theory of a West and East Pakistan, painting recent developments (1937 provincial election, refusal of Congress to recognise the League as the sole mouthpiece of Muslims in India, etc.) as a sign of assured enmity from Hindus, culminating in the Lahore Resolution, also known as the Pakistan Resolution. After initial dialogue, the Mission proposed its plan over the composition of the new government on 16 May 1946. In its proposals, the creation of a separate Muslim Pakistan was rejected.
An interim Government at the Centre representing all communities would be installed on the basis of parity between the representatives of the Hindus and the Muslims.
The plan of 16 May 1946 had a united India, in line with Congress and Muslim League aspirations, but that was where the consensus between the two parties ended since Congress abhorred the idea of having the groupings of Muslim - majority provinces and that of Hindu - majority provinces with the intention of balancing one another at the central legislature. The Muslim League could not accept any changes to this plan since they wanted to keep the safeguards of British Indian laws to prevent absolute rule of Hindus over Muslims.
Reaching an impasse, the British proposed a second plan on 16 June 1946 to arrange for India to be divided into Hindu - majority India and a Muslim - majority India that would later be renamed Pakistan since Congress had vehemently rejected ' parity ' at the centre. A list of princely states of India, which would be permitted to accede to the dominion or attain independence, was also drawn up.
The Cabinet Mission arrived in Karachi on 23 March 1946 and in Delhi on 24 March 1946, and left on 29 June. The announcement of the Plan on 16 May 1946 had been preceded by the Simla Conference in the first week of May.
The approval of the plans determined the composition of the new government. The Congress Working Committee officially did not accept either plan. The resolution of the committee dated 24 May 1946 concluded that
The Working Committee consider that the connected problems involved in the establishment of a Provisional Government and a Constituent Assembly should be viewed together... In absence of a full picture, the Committee are unable to give a final opinion at this stage.
And the resolution of 25 June 1946, in response to the June plan concluded
In the formation of a Provisional or other governance, Congressmen can never give up the national character of Congress, or accept an artificial and unjust parity, or agree to a veto of a communal group. The Committee are unable to accept the proposals for formation of an Interim Government as contained in the statement of June 16. The Committee have, however, decided that the Congress should join the proposed Constituent Assembly with a view to framing the Constitution of a free, united and democratic India.
The Viceroy began organising the transfer of power to a Congress - League coalition. But in a "provocative speech '' on 10 July 1946 Nehru was quoted in the press as saying "We are not bound by a single thing except that we have decided to go into the Constituent Assembly ''. By this Nehru effectively "torpedoed '' any hope for a united India. Having been "duped in such a way '', Jinnah withdrew the Muslim League 's acceptance of the Cabinet Mission Plan on 17 July.
Thus Congress leaders entered the Viceroy 's Executive Council, or the Interim Government of India. Nehru became its head, vice-president in title but possessing the executive authority. Patel became the home member, responsible for internal security and government agencies. Congress - led governments were formed in most provinces, including the NWFP and, in Punjab, a coalition with the Shiromani Akali Dal and the Unionist Muslim League. The League led governments in Bengal and Sind. The Constituent Assembly was instructed to begin work to write a new constitution for India.
Jinnah and the League condemned the new government, and vowed to agitate for Pakistan by any means possible. Disorder arose in Punjab and Bengal, including the cities of Delhi, Bombay and Calcutta. On the League - organized Direct Action Day, over 5,000 people were killed across India, and Hindu, Sikh and Muslim mobs began clashing routinely. Wavell stalled the Central government 's efforts to stop the disorder, and the provinces were instructed to leave this to the governors, who did not undertake any major action. To end the disorder and rising bloodshed, Wavell encouraged Nehru to ask the League to enter the government. While Patel and most Congress leaders were opposed to conceding to a party that was organising disorder, Nehru conceded in hope of preserving communal peace.
League leaders entered the council under the leadership of Liaquat Ali Khan, the future first Prime Minister of Pakistan, who became the finance minister, but the council did not function in harmony, as separate meetings were not held by League ministers, and both parties vetoed the major initiatives proposed by the other, highlighting their ideological differences and political antagonism. At the arrival of the new (and proclaimed as the last) viceroy, Lord Mountbatten of Burma, in early 1947, Congress leaders expressed the view that the coalition was unworkable. That led to the eventual proposal and acceptance of the partition of India. The rejection of the Cabinet Mission Plan led to a resurgence of confrontational politics, beginning with the Muslim League 's Direct Action Day and the subsequent killings in Noakhali and Bihar. The apportioning of responsibility among the League, the Congress and the British colonial administration for the breakdown continues to be a topic of fierce disagreement.
|
which would have been considered a push factor of immigration apex | History of the Poles in the United States - wikipedia
The history of Poles in the United States dates to the American Colonial era. Poles have lived in present - day United States territories for over 400 years -- since 1608. There are 10 million Americans of Polish descent in the U.S. today, making it the largest diaspora of Poles in the world. Polish Americans have always been the largest group of Slavic origin in the United States.
Historians divide Polish American immigration into three "waves '', the largest from 1870 to 1914, a second after World War II, and a third after Poland 's independence in 1989. Most Polish Americans are descended from the first wave, when millions of Poles fled Polish districts of Germany, Russia, and Austria. This group is often called the za chlebem (for bread) immigrants because most were peasants in Poland who did not own land and lacked basic subsistence. The Austrian Poles were from Galicia, unarguably the most destitute region in Europe at the time. Up to a third of the Poles returned to Poland after living in the United States for a few years, but the majority stayed. Substantial research and sociological works such as The Polish Peasant in Europe and America found that many Polish immigrants shared a common objective of someday owning land in the U.S. or back in Poland. Anti-Slavic legislation cut Polish immigration from 1921 to World War II, but immigration opened up again after the war and included many displaced persons from the Holocaust. A third wave, much smaller, came in 1989 when Poland was freed from Communist rule.
Immigrants in all three waves were attracted by the high wages and ample job opportunities for unskilled manual labor in the United States, and were driven to jobs in American mining, meatpacking, construction, steelwork, and heavy industry -- in many cases dominating these fields until the mid-20th century. Over 90 % of Poles arrived and settled in communities with other Polish immigrants. These communities are called Polonia and the largest such community historically was in Chicago, Illinois. A key feature of Polish life in the Old World had been religion, and in the United States, Catholicism often became an integral part of Polish identity. In the United States, Polish immigrants created communities centered on Catholic religious services, and built hundreds of churches and parish schools in the 20th century.
The Polish today are well assimilated into American society. Average incomes have increased from well below average to above average today, and Poles continue to expand into white - collar professional and managerial roles. Poles are still well represented in blue collar construction and industrial trades, and many live in or near urban cities. They are well dispersed throughout the United States, intermarry at high levels, and have a very low rate of language fluency (less than 5 % can speak Polish).
Polish and American sources cite Polish pitch - makers as settlers in Walter Raleigh 's failed Roanoke Colony in 1585. Historian Józef Retinger stated that Raleigh 's purpose in bringing the Poles was to reduce the English dependency on timber and pitch from Poland.
The first Polish immigrants came to the Jamestown colony in 1608, twelve years before the Pilgrims arrived in Massachusetts. These early settlers were brought as skilled artisans by the English soldier -- adventurer Captain John Smith, and included a glass blower, a pitch and tar maker, a soap maker and a timberman. Historian John Radzilowski stated that these Poles were experts in pitch and tar making at the time and were recruited to develop a key naval stores industry. He estimated that "two dozen Poles '' at most were in the colony by 1620. In 1947, a purported historical diary attributed to one of these settlers came to light, though its authenticity has been questioned. Nonetheless, the Polish colonists led a strike in 1619 to protest their disenfranchisement in the New World; they had been excluded from voting rights by the first - ever legislative body. Their strike was the first labor protest in the New World.
The date of their arrival, October 1, 1608, is a commemorative holiday for Polish - Americans. Polish American Heritage Month is based on this month, and October 1 is commemorated annually in Polonia organizations. 2008 was considered the 400th anniversary of Polish settlement in the United States, and 2019 is looked upon as the 400th anniversary of the Jamestown strike, which is considered a fight for civil liberties, more specifically voting rights, and for equal recognition regardless of ethnicity.
Protestant Poles left Poland for America seeking greater religious freedom. This was not due to the Counter-Reformation in Poland; in Poland, the Jesuits spread Catholicism chiefly by promoting religious education among the youth. After the Swedish Deluge, Polish Brethren, who were seen as Swedish sympathizers, were told to convert or leave the country. The Polish Brethren were banished by law from Poland in 1658, and faced physical fights, seizure of property, and court fines for preaching their religion. Polish exiles originally sought refuge in England, but lacking support, sought peace in America. The majority of exiled Poles arrived in New Sweden, although some had gone to New Amsterdam and the English Virginia colony. There is no evidence of Polish immigration to Catholic Spanish or French territories in North America in the 17th Century, which historian Frank Mocha suggests is a signal that early Poles were Protestants and wanted to live with Protestants in America. These Poles were generally well educated and aristocratic. One known immigrant, pioneer Anthony Sadowski, had come from an area populated by Moravian Brethren and Arians in the Sandomierz Voivodeship of the Polish -- Lithuanian Commonwealth, consistent with a religious exodus. Research has confirmed that one of his first actions upon arrival was visiting a Polish Protestant colony in New Jersey, and his uncle, Stanislaw Sadowski, converted to Calvinism before fleeing Poland. Protestants (and other non-Catholics) regained their rights and religious freedoms in Poland in 1768, ending pressure to leave Poland on religious grounds.
Later Polish immigrants included Jakub Sadowski, who, in 1770, settled in New York with his sons -- the first Europeans to penetrate as far as Kentucky. It is said that Sandusky, Ohio, was named after him. At the time, the Polish -- Lithuanian Commonwealth was failing and being gradually stripped of its independence through military partitions by foreign powers; a number of Polish patriots, among them Kazimierz Pułaski and Tadeusz Kościuszko, left for America to fight in the American Revolutionary War.
Kazimierz Pułaski, having led the losing side of a civil war, escaped a death sentence by leaving for America. There, he served as Brigadier - general in the Continental Army and commanded its cavalry. He saved General George Washington 's army at the Battle of Brandywine and died leading a cavalry charge at the Siege of Savannah, aged 31. Pułaski later became known as the "father of American cavalry ''. He is also commemorated in Casimir Pulaski Day and the Pulaski Day Parade.
Kościuszko was a professional military officer who served in the Continental Army from 1776; he was instrumental in the victory at the Battle of Saratoga and in fortifying West Point. After returning to Poland, he led the failed Polish insurrection against Russia which ended with the Partition of Poland in 1795. Pułaski and Kościuszko both have statues in Washington, D.C.
After the Revolution, Americans who commented generally held positive views of the Polish people. Polish music such as mazurkas and krakowiaks were popular in the U.S. during the antebellum period. However, after the Civil War (1861 -- 65) the image turned negative and Poles appeared as crude and uneducated people who were not good fits for America socially or culturally.
The first emigrants from Poland were Silesians from the Prussian partition of Poland. They settled in Texas in 1854, creating an agricultural community that carried their native traditions, customs, and language. The land they chose was bare, unpopulated countryside, and they erected the homes, churches, and municipal accommodations as a private community. The first home built by a Pole is the John Gawlik House, constructed 1858. The building still stands, and displays a high - pitched roof common in Eastern European architecture. The Poles in Texas built brick houses with thatched roofs until the 1900s. That region in Texas is subject to less than 1 inch of snow per year, and meteorological studies show that level of insulation is unwarranted. The Polish Texans modified their homes from their European models, building shaded verandas to escape the subtropical temperatures. They often added porches to their verandas, particularly on the southward windy side. According to oral histories recorded from descendants, the verandas were used for "almost all daily activities from preparing meals to dressing animal hides. '' Panna Maria, Texas, was often called a Polish colony because of its ethnic and cultural isolation from Texas, and remains an unincorporated community in Texas. The geographically isolated area continues to maintain its heritage but the population mostly moved to nearby Karnes City and Falls City.
Leopold Moczygemba, a Polish priest, founded Panna Maria by writing letters back to Poland encouraging them to emigrate to Texas, a place with free land, fertile soils, and golden mountains. About 200 - 300 Poles took the trip and nearly mutinied when they encountered the desolate fields and rattlesnakes of Texas. Moczygemba and his brothers served as leaders during the town 's development. The settlers and their children all spoke Silesian. Resurrectionist priests led church services and religious education for children. Letters sent back to Poland demonstrate a feeling of profound new experience in America. Hunting and fishing were favorite pastimes among the settlers, who were thrilled by the freedoms of shooting wild game in the countryside. The farmers used labor - intensive agricultural techniques that maximized crop yields of corn and cotton; they sold excess cotton to nearby communities and created profitable businesses selling crops and livestock. Polish leaders and Polish historical figures settled in the community, including Matthew Pilarcyk, a Polish soldier sent to Mexico in the 1860s to fight for the Austrian Emperor Maximilian. Some records recall that he fled the Army in 1867 during the fall of the empire, escaped a firing squad and traversed the Rio Grande to enter Panna Maria, where he had heard Poles were living. When he arrived, he married a local woman and joined the community as a political leader. The community was nearly massacred following the Civil War, where the government of Texas was dismantled and gangs of cowboys and former Confederate nativists harassed and shot at Poles in Panna Maria. The Poles in Panna Maria had Union sympathies and were the subject of discrimination by the local Southerners. In 1867, a showdown between a troupe of armed cowboys and the Polish community neared a deadly confrontation; Polish priests requested the Union Army to protect them, and a stationed Army helped keep them safe, registered to vote in elections, and free from religious intolerance. The language used by these settlers was carried down to their descendants over 150 years, and the Texas Silesian dialect still exists. Cemeteries contain inscriptions written in Polish or Polish - and - English. The Silesians held a millennial celebration for the Christianization of Poland in 966, and were presented a mosaic of the Black Madonna of Częstochowa by President Lyndon B. Johnson.
Poles settled a farming community in Parisville, Michigan, in 1857. Historians debate whether the community was established earlier, and claims that the community originated in 1848 still exist. The community was started by five or six Polish families who came from Poland by ship in the 1850s, and lived in Detroit, Michigan in 1855 before deciding to initiate a farming community in Parisville, where they created prosperous farms, and raised cattle and horses. The lands were originally dark black swamps, and the settlers succeeded in draining the land for use as fruit orchards. As per the Swamplands Act of 1850, the lands were legally conferred to pioneering settlers who could make use of these territories. Individual Polish farmers and their families took advantage of this new law, and other immigrants settled disparate areas in interior Michigan independently. The Parisville community was surrounded by Native American Indians who continued to live in tepees during this time. The Poles and the Indians enjoyed good relations and historical anecdotes of gift - giving and resource sharing are documented. Polish farmers were dispersed throughout Michigan, and by 1903 roughly 50,000 Poles were said to live in Detroit.
Portage County, Wisconsin
The Kashubian settlement in Portage County, Wisconsin (not to be confused with the city of Portage, Wisconsin) is the oldest in the United States. The first Kashubian to settle there was Michael Koziczkowski, formerly of Gdansk, who arrived in Stevens Point late in 1857. A son, Michael Junior, was born to Koziczkowski and his wife Franciszka on September 6, 1858 in Portage County. One of the first Kashubian settlements was the aptly named Polonia, Wisconsin. Within five years, more than two dozen Kashubian families joined the Koziczkowskis. Since the Portage County Kashubian community was largely agricultural, it was spread out over Sharon, Stockton, and Hull townships. After the end of the Civil War, many more immigrants from throughout occupied Poland settled in Portage County, this time including the city of Stevens Point.
Winona's first known Kashubian immigrants, the family of Jozef and Franciszka von Bronk, reached Winona in 1859. Starting in 1862, some Winona Kashubians began to settle in the farming hamlet of Pine Creek, across the Mississippi River in Trempealeau County, Wisconsin. To this day, Winona and Pine Creek (Dodge Township) remain two parts of the same community. Winona has never been a purely Kashubian settlement, as were the settlements in Wilno, Renfrew County, Ontario, and the various hamlets of Portage County, Wisconsin; even so, it was known as early as 1899 as the Kashubian Capital of America, largely because the Winona Kashubians rapidly achieved a social, economic, and political cohesion unequaled in other Kashubian settlements. Engineer Dan Przybylski started manufacturing trenchers in the city and invented a single-cylinder hydraulic extension crane. The Polish Museum of Winona was established in 1977 and is housed in the building of a late-19th-century lumber company.
Many of Poland's political elites went into hiding from the Russians following the unsuccessful uprising of 1830 -- 1831. Hundreds of military officers, nobles, and aristocrats took refuge in Austria, but the Emperor of Austria was under pressure to surrender them to Russia for execution. He had previously committed to keeping them safe from the Russians, but he wanted to avoid war. The U.S. Congress and President Andrew Jackson agreed to take several hundred Polish refugees. They arrived on several small ships, the largest single arrival being 235 refugees, including August Antoni Jakubowski. Jakubowski later wrote his memoirs in English, documenting his time as a Polish exile in America. He recalled that the refugees originally wanted to go to France, but the French government refused to receive them, and, under obligation to the Austrian authorities, they came to America.
Jackson wrote to the Secretary of the Treasury to secure 36 sections of land within Illinois or Michigan for a Polish settlement. In 1834, a rural territory near the Rock River in Illinois was surveyed by the U.S. government. The Polish emigres formed a group, the Polish Committee, to plead for aid in settling in the U.S. Despite three applications to Congress by the Polish Committee, no acts were passed and no lands were ever officially appropriated for settlement. Polish immigrant Charles Kraitsir blamed former Secretary of the Treasury Albert Gallatin, who, he said, intercepted letters addressed to the Polish Committee and made statements on its behalf without its input. Kraitsir alleged that funds donated by American citizens to the Poles' cause were diverted by Gallatin. The plans were abandoned when American pioneers squatted on the settlement lands, making the Polish settlement effort politically unfeasible, and no land was ever officially handed over to the Polish emigres.
Many of the Polish exiles settled permanently in the United States. One of them, Felix Wierzbicki, a physician, soldier, and veteran of the November Uprising, published in 1849 the first English-language book printed in California, California as it is, and as it may be. The book is a description of the culture, peoples, and climate of the area at that time. According to the Library of Congress, it was a valuable guide to California for prospective settlers that includes a survey of agriculture, hints on gold mining, a guide to San Francisco, and a chapter on California's Hispanic residents and Native American tribes.
Polish political exiles founded organizations in America; the first association of Poles in America, Towarzystwo Polakow w Ameryce (Association of Poles in America), was founded March 20, 1842. The association's motto was "To die for Poland". Some Polish intellectuals identified so strongly with Polish nationalism that they warned repeatedly against assimilation into American culture. It was the duty of Poles to someday return and liberate the homeland, they argued to newly arrived Poles in America. The Polish National Alliance (PNA) newspaper, Zgoda, warned in 1900, "The Pole is not free to Americanize" because Poland's religion, language, and nationality had been "partially torn away by the enemies". In other words, "The Pole is not free to Americanize because wherever he is -- he has a mission to fulfill." The poet Teofila Samolinska, known as the "mother of the Polish National Alliance," tried to bridge the gap between the political exiles of the 1860s and the waves of peasants arriving late in the century. She wrote:
Here one is free to fight for the Fatherland; Here the cruelty of tyrants will not reach us, Here the scars inflicted on us will fade.
Many of the exiles in America were actively political and saw their mission as creating a new Poland in the United States. Some rejected the term "exile" and considered themselves "pilgrims", following the Polish messianism of Adam Mickiewicz. The political exiles created nationalist clubs and spread news about the oppression in partitioned Poland. A Polish Central Committee founded in New York in 1863 attempted to rally American public opinion for Polish independence and raised funds to support the revolutionaries. American public opinion was not swayed by the small group, in large part because the Civil War was ongoing and there was little appetite for a foreign cause. Russia, being strongly pro-Union, was also considered an ally by many Northerners, and Poland's uprising was mistaken by some Americans for just another secessionist movement.
Later Polish immigrants referred to this group, who arrived in the United States before 1870, as the stara emigracja (old emigration), distinguishing them from the nowa emigracja (new emigration) who came between 1870 and 1920.
Polish Americans fought on both sides of the American Civil War. The majority were Union soldiers, owing to geography and ideological sympathies with the abolitionists. An estimated 5,000 Polish Americans served for the Union and 1,000 for the Confederacy. By coincidence, the first soldiers killed in the American Civil War were both Polish: Captain Constantin Blandowski, a Union battalion commander in Missouri who died in the Camp Jackson Affair, and Thaddeus Strawinski, an 18-year-old Confederate who was accidentally shot at Fort Moultrie on Sullivan's Island. Two Polish immigrants achieved leadership positions in the Union Army, Colonels Joseph Kargé and Włodzimierz Krzyżanowski. Kargé commanded the 2nd New Jersey Volunteer Cavalry Regiment, which defeated Nathan Bedford Forrest's Confederates in battle. Krzyżanowski first commanded the mostly immigrant 58th New York Volunteer Infantry Regiment, nicknamed the Polish Legion, in which Poles and other immigrants fought in both the Eastern and Western Theaters of the war; from 1862 to 1864 he commanded an infantry brigade that included the 58th.
In 1863 -- 1864, the Imperial Russian Army suppressed the January Uprising, a large-scale insurrection in the Russian partition of the former territories of the Polish -- Lithuanian Commonwealth. Many Polish resistance fighters fled the country, and Confederate agents tried, unsuccessfully, to encourage them to immigrate and join the military of the Confederate States of America.
After the collapse of the Confederacy, Polish labor was sought to work Southern farms in place of the labor of freed slaves. Several immigration societies were founded in Texas, largely by private planters, and in 1871 Texas funded European immigration through direct state aid (the Texas Bureau of Immigration). The Waverly Emigration Society, formed in 1867 in Walker County, Texas, by several planters, dispatched Meyer Levy, a Polish Jew, to Poland to recruit roughly 150 Poles to pick cotton. He sailed to Poland and brought back farm laborers, who arrived in New Waverly, Texas, in May 1867. Under the agreement the Poles had with the plantation owners, the farmers would be paid $90 (equivalent to $1,576 in 2017), $100 ($1,751), and $110 ($1,926) per year over three years of labor, while the owners provided a "comfortable cabin" and food. The Poles repaid the planters for their ship tickets to America, often in installments. By 1900, after years of working on Southerners' farms, Poles had "bought almost all the farmland" in New Waverly and were expanding their land ownership into the surrounding areas. New Waverly served as a mother colony for future Polish immigrants to the United States, as many arriving Poles lived and worked there before moving on to other Polonias in the U.S.

Polish farmers in east Texas often worked alongside southern blacks and competed directly with them for agricultural jobs. Blacks frequently picked up a few words of Polish, and Poles picked up some of the black English dialect in these areas during the late 19th century. R.L. Daniels, writing on "Polanders" in Texas in Lippincott's Monthly Magazine in 1888, praised their industriousness and hard work ethic. He cited instances of Polish farmers calling their landlords massa, denoting a subordinate position on a level with slavery, and recounted that when he asked a woman why she left Poland, she replied, "Mudder haf much childs and 'nough not to eat all". Daniels found the Poles to be efficient farmers who planted corn and cotton so close to their homes as not to leave even elbow room around the nearby buildings. Texas blacks, who according to Daniels referred to Poles as "dem white niggahs" and held them in "undisguised contempt", were apparently stunned by their high literacy rates.
Polish immigrants came in large numbers to Baltimore, Maryland, following the Civil War and created an ethnic community in Fells Point. They worked on farms in Maryland, and many became migrant farming families. Oyster companies on the Gulf of Mexico sent recruiters to enlist Polish farmers for work in the oyster industry. Jobs were advertised with illustrations of a green, tropical environment, and wages in 1909 were promised at 15 cents per hour (equivalent to $4.09 in 2017) for men and 12.5 cents per hour ($3.40) for women. Polish farmers from Baltimore, Maryland, and the southern United States commonly came to Louisiana and Mississippi during the winter months. Those who came were provided very small, cramped living quarters, and only one worker per family was given a permanent job canning oysters, paid 12 cents per hour ($3.27) for men and 8 cents per hour ($2.18) for women. Companies put the rest to work shucking oysters at 5 cents ($1.36) per measure; according to one worker, a measure should have weighed about 4.5 lb (2.0 kg) but usually weighed more than 7 -- 8 lb (3.2 -- 3.6 kg). Jobs were segregated by gender: women and children worked in the oyster house while men and boys fished on the boats.
"Men depart by boat to the water where they stay one to two weeks. Because oysters are scarce, the net yields at best fifteen percent of the expected catch when pulled up to the deck. The rest are shells and slime. This work is hard beyond words. A person not used to cranking up the net gives up from exhaustion.
If fog appears during the catch, the oysters open up and most of them die when the sun starts shining. In such cases it becomes the worker's loss."
Polish foremen were used to manage and supervise the workers; many immigrants did not speak English and were wholly dependent on their foreman to communicate with the company. Photographer Lewis Hine spoke with one foreman who recruited Poles from Baltimore and who said, "I tell you, I have to lie to employees. They're never satisfied. Hard work to get them." The foremen were allowed to beat their workers and in some cases functioned as pimps. Nesterowicz found that some foremen convinced attractive women to sleep with their American bosses in exchange for higher-paying positions. The moral degradation and exploitation on the oyster farms led a local Polish priest, Father Helinski, to ask Polish organizations to dissuade any more Poles from entering the business.
Polish Americans were represented in the American temperance movement, and the first wave of immigrants was affected by prohibition. A leading Pole in the temperance movement in the United States was Colonel John Sobieski, a lineal descendant of Polish King John III Sobieski, who served in the Union Army during the American Civil War. In 1879, he married the prominent abolitionist and prohibitionist Lydia Gertrude Lemen, an American from Salem, Illinois. Through his wife's affiliation, he became a leading member of the Polish branch of the Women's Christian Temperance Union and preached against alcohol at prohibition camps in Ohio, Wisconsin, and Illinois. Sobieski and the predominantly Protestant temperance groups never made great inroads into the Polish community. Polish Catholic immigrants frequently heard lectures and received literature from the Catholic Church against alcohol, but Polish immigrants were distrustful of the Irish-dominated American Catholic Church, and the temperance movement never resonated with them in great numbers. A visit by Archbishop John Ireland to the PNA in St. Paul in 1887 failed to draw them to the Catholic Total Abstinence Union of America. The Polish-language press in the U.S. covered the topic of abstinence only occasionally. It was not until 1900 that the PNA introduced sanctions for alcoholics among its membership, and abstinence generally remained unpopular among American Poles. In New Britain, Connecticut, Father Lucian Bojnowski started an abstinence association that offended a local Polish club, and he received a death threat in response. In 1911, Father Walter Kwiatkowski founded a newspaper called Abystynent (The Abstainer) promoting local abstinence societies; the newspaper did not last long, and the Polish abstinence groups never united. The Polish National Catholic Church never created official policies on abstinence from alcohol, nor did it treat the issue as a priority any differently than the Roman Catholic Church.
Polish immigrants were drawn to saloons, where drinking was a popular social activity. Saloons allowed Poles to relieve the stresses of difficult physical labor and served as places to buy steamship tickets and as meeting grounds for mutual aid societies and political groups. Among Polish immigrants, keeping a saloon was a favorite entrepreneurial opportunity, second only to owning a grocery store. After alcohol was prohibited in the United States in 1920, American Poles continued to drink and to run bootlegging operations. Contemporary Polish-language newspapers decried pervasive alcoholism among Polish American families, in which mothers would brew liquor and beer at home for their husbands (and sometimes children). Although small in both numbers and scope, Polish involvement in organized crime and mafia-related alcohol distribution networks also existed in the U.S.
The largest wave of Polish immigration to America occurred in the years between the American Civil War and World War I. Mass Polish immigration began from Prussia in 1870, following the Franco-Prussian War, when Prussia retaliated against Polish support for France with intensified Germanization. This wave of immigrants is referred to as the za chlebem (for bread) immigrants because they were primarily peasants facing starvation and poverty in occupied Poland. A study by the U.S. Immigration Commission found that in 1911, 98.8% of Polish immigrants to the United States said they would be joining relatives or friends, leading to the conclusion that letters sent back home played a major role in promoting immigration. The immigrants arrived first from the German partition, and then from the Russian and Austrian partitions. U.S. restrictions on European immigration during the 1920s and the general chaos of World War I cut off immigration significantly until World War II. Estimates for this large wave of Polish immigrants, from roughly the 1870s to the 1920s, are about 1.5 million. Many Polish immigrants also arrived at the port of Baltimore. The actual number of ethnically Polish arrivals at that time is difficult to estimate because of the prolonged occupation of Poland by neighboring states and the total loss of its international status; similar circumstances developed in the following decades, during the Nazi German occupation of Poland in World War II and later, in the communist period, under Soviet military and political dominance with redrawn national borders. During the partitions of the Polish -- Lithuanian Commonwealth (1795 -- 1918), the Polish nation was forced to define itself as a disjointed and oppressed minority within three neighboring empires, in the Austrian, Prussian, and Russian Partitions. The Polish diaspora in the United States, however, was founded on a unified national culture and society, and consequently it assumed the place and moral role of a "fourth province" of Poland.
Poland was largely an agrarian society throughout the Middle Ages and into the 19th century. Polish farmers were mostly peasants, ruled by a Polish nobility that owned their land and restricted their political and economic freedoms. Peasants were barred from trading and typically had to sell their livestock to the nobility, who in turn functioned as middlemen in economic life. Commercial farming did not exist, and frequent peasant uprisings were suppressed harshly, both by the nobility and by the foreign powers occupying Poland. A number of agricultural reforms were introduced to Poland in the mid-19th century, first in German Poland and later in the eastern parts of the country. The agricultural technologies originated in Britain and were carried eastward by traders and merchants; the most developed regions of Poland gained these techniques first, and the areas that adopted them boomed. The introduction of a four-crop rotation system tripled the output of Poland's farmlands and created a surplus of agricultural labor. Before this, Polish peasants had continued the medieval practice of three-field rotation, losing one year of productive growing time to replenish soil nutrients. Instead of leaving a field fallow, or without any plants for a season, the introduction of turnips and especially red clover allowed Polish fields to restore nutrients through green manure. Red clover was especially popular because it also served as grazing land for cattle, giving the added benefit of more robust livestock raising in Poland.
Between 1870 and 1914, more than 3.6 million people departed from Polish territories, of whom 2.6 million arrived in the U.S. Serfdom was abolished in Prussia in 1808, in Austria-Hungary in 1848, and in the Russian Empire in 1861. In the late 19th century, the beginnings of industrialization, commercial agriculture, and a population boom that exhausted the available land transformed Polish peasant farmers into migrant laborers. Racial discrimination and unemployment drove them to emigrate.
The first group of Poles to emigrate to the United States in this wave came from German-occupied Poland. The German territories advanced their agricultural technologies from 1849, creating a surplus of agricultural labor, first in Silesia and then in the eastern Prussian territories. The rise in agricultural yields had the unintended effect of boosting the Polish population, as infant mortality and starvation decreased and the Polish birth rate rose. In 1886, Otto von Bismarck gave a speech to the lower house of the Prussian Parliament defending his policies of anti-Polonism and warning of the ominous position of Silesia, with over 1 million Poles who could fight Germany "within twenty-four hour notice". Citing the November Uprising of 1830 -- 31, Bismarck introduced measures to limit the freedoms of press and political representation that Poles had enjoyed within the Empire. Bismarck forced the deportation of an estimated 30,000 -- 40,000 Poles from German territory in 1885, with a five-year ban on any Polish immigration back into Germany. Many Poles returned in 1890, when the ban was lifted, but others left for the United States during this time. Bismarck's anti-Catholic Kulturkampf policies aimed at Polish Catholics increased political unrest and disrupted Polish life, also spurring emigration. Around 152,000 Poles left for the United States during the Kulturkampf.
The Russian partition of Poland experienced considerable industrialization, particularly in the textile capital of Łódź, then known as the Manchester of Imperial Russia. Russia's policies were welcoming toward foreign immigration, whereas German Poland was unambiguously anti-immigrant. Polish laborers were encouraged to migrate for work in the iron foundries of Piotrków Trybunalski, and migrants were highly desired in Siberian towns. Russia also established a Peasant Bank to promote land ownership among its peasant population, and many Poles were given employment opportunities that pulled them from rural areas into industrial Russian cities. Of the three partitions, the Russian one contained the most middle-class Polish workers, and the overall number of industrial workers increased from 80,000 to 150,000 between 1864 and 1890. Łódź experienced a booming economy, as the Russian Empire consumed about 70% of its textile production.
Poles under Russian occupation experienced increasingly abusive Russification in the mid-19th century. From 1864 onward, all education was mandated to be in Russian, and private education in Polish was illegal. Polish newspapers, periodicals, books, and theater plays were permitted but frequently censored by the authorities. All high school students were required to pass national exams in Russian; young men who failed these exams were conscripted into the Russian Army. In 1890, Russia introduced tariffs to protect the Russian textile industry, which began a period of economic decline and neglect toward Poland. The decline of Russia's economy after the Russo-Japanese War and the 1905 Russian Revolution further pushed Polish emigration. Polish nationalists at first discouraged emigration, and in many respects they were succeeding, creating secret Polish-language schools so children could learn Polish and leading insurrectionist activity against the Russian occupiers. However, when emigrants in the United States began sending money back to their poor relatives in Russia and Galicia, opposition to emigration subsided. Polish National Party leader Roman Dmowski saw emigration in a positive light, as an "improvement of the fortunes of the masses who are leaving Europe." At its peak in 1912 -- 1913, annual emigration to the U.S. from the Polish provinces of the Russian Empire exceeded 112,345, including large numbers of Jews, Lithuanians, and Belarusians.
Among the most famous immigrants from partitioned Poland at this time was the opera singer Marcella Sembrich, who had performed in Poland before moving to the United States. Sharing her experience with the Kansas City Journal, she described the social discrimination she had faced in what was then the Kingdom of Poland, a puppet state of Russia:
"... children who speak Polish on the streets of Vilna are punished and performances of any kind in the Polish language are forbidden. Polish is not allowed anywhere, and the police are still as strict as ever in trying to prevent its use. The first night I sang at Vilna I was wild to sing in Polish. I spoke to the manager about it and he implored me on his knees not to think of such a thing. But I was determined to do it if I could, so at the end of the performance, when the audience kept demanding encores, I prepared for it by singing a song in Russian. Then I sang one of Chopin 's songs in Polish.
When I finished there was a moment of absolute stillness. Then came such an outburst as I have never seen in my life. I seized my husband 's arm and stood waiting to see...
Polish children in Austrian Galicia were largely uneducated; by 1900, 52 percent of all male and 59 percent of all female Galicians over six years of age were illiterate. Austrian Poles began emigrating to the United States in 1880. The Austrian government tightened emigration controls in the late 1800s, as many young Polish men were eager to escape the government's mandatory conscription, and peasants were dissatisfied with the lack of upward opportunity and the instability of heavy, labor-intensive agricultural work. The Galician government wanted to tie peasants to contracts and legal obligations to the land they worked and tried to enforce legislation to keep them on the land. Polish peasant revolts in 1902 and 1903 changed the Austrian government's policies, and emigration from Galicia increased tremendously in the 1900 -- 1910 period.
Galician Poles experienced some of the most difficult conditions in the partitioned homeland. When serfdom was abolished in 1848, the Austrian government continued to drive a wedge between Polish peasants and their Polish landlords to deter them from a more ambitious Polish uprising. Galicia was isolated from the west geographically by the Vistula River and politically by the partitioning powers, cutting Galician Poles off from the commercial agriculture of western Poland. They continued to use outdated agricultural techniques, such as burning manure for fuel instead of using it as fertilizer, and the antiquated medieval three-field crop rotation system, which had long been replaced in western Poland by the use of clover as a fodder crop. Galician Poles resented the government for its apathy in handling disease; a typhus epidemic claimed 400,000 lives between 1847 and 1849, and cholera killed over 100,000 in the 1850s. Galicia also suffered a potato blight between 1847 and 1849, similar to Ireland's famine of the same years, but relief never arrived because of political and geographical isolation. A railroad system connecting Poland began reaching western Galicia between 1860 and 1900, and railroad tickets cost roughly half a farmhand's salary at the time. Polish peasants were no longer the property of their landlords, but they remained tied to their plots of land for subsistence and were financially indebted to the landlords and government taxmen. The plight of the Galician Poles was termed the "Galician misery", as many were deeply frustrated and depressed by their situations.
Austrian Poles experienced an enormous rise in religiosity during the late 19th century. From 1875 to 1914, the number of Polish nuns increased sixfold in Galicia; over the same period, the increase in German Poland was less marked, and in Russian Poland the number decreased. Historian William Galush noted that many nuns came from the peasant class and that young women who chose marriage instead faced the prospect of hard farm work. As a result of Poland's rapid population growth, peasants in Galicia had to work harder on smaller farms than the ones they had grown up on.
Polish immigrants were highly sought by American employers for low-level positions. In steel mills and tin mills, it was observed that foremen, even when given the choice to hire workers of their own ethnic background, still preferred to hire Poles. Steel work was undesirable to other immigrant groups, as it lasted 12 hours a day, 7 days a week, self-selecting for the most industrious and hardworking people. Polish immigrants passed job openings along chains of friends and relatives, and it was very common for a Polish friend with good English to negotiate wage rates for newer immigrants. Polish Americans favored steel areas and mining camps with a high demand for manual labor; favorite destinations included Chicago, Detroit, Milwaukee, Cleveland, Buffalo, New York, and Pittsburgh, as well as smaller industrial cities and mining towns. Relatively few went to New England or to farming areas, and almost none went to the South. Poles came to dominate certain fields of work: in 1920, 33.1% of all U.S. coal-mine operatives and 25.2% of all blast furnace laborers were Polish. Polish immigrants were channeled into low-status positions within U.S. companies; the same steel companies that recruited Poles for work in blast furnaces recruited Irish immigrants for work with finished metal.
Polish immigrants took low-paying jobs at blast furnaces in high numbers. As with many jobs Poles took in America, demand fluctuated, hours were long, and the supply of expendable labor was high. Industrialist Amasa Stone actively sought out Polish immigrants to work in his steel mill in Ohio and personally traveled to Poland in the 1870s to advertise laborer opportunities. He advertised jobs in Gdansk, promising laborers a salary of $7.25 a week (the average wage at his mill was $11.75 for Americans) and free passage to the United States. Hundreds of Poles took those jobs, and as a result of his recruiting the Polish population of Cleveland grew from 2,848 to 8,592 between 1880 and 1890. In 1910, 88% of workers labored an 84-hour weekly schedule (7 days, 12 hours per day). Day and night shifts rotated every two weeks, requiring men to perform 18- or 24-hour straight shifts. Movements to end the 7-day week were pushed by management, but many workers did not oppose the practice and saw it as a necessary evil. The United States Steel Corporation slowly eliminated its 7-day work weeks, down from 30% of workers in 1910 to 15% in 1912. Many Polish American families in Chicago grew up effectively fatherless, and the long hours spent at the blast furnaces averaged only 17.16 cents per hour (equivalent to $4.35 in 2017), below the poverty line at the time in Chicago. Workers at the blast furnaces had little time for self-improvement, leisure, or social activities. When the 7-day week was done away with, some workers saw the free day as a waste of time because their children were in school and their friends were at work, so they spent the day drinking at saloons. Many plants found that large numbers of workers quit when Sunday was taken off their schedules, citing the day off as a reason.
West Virginia experienced an influx of immigrant coal miners during the early 20th century, increasing the number of Poles in the state to almost 15,000 by 1930. Poles were the third-largest immigrant group in West Virginia, after the Italians and the Hungarians, who also joined the mining industry in large numbers. Poles often worked alongside other Slavic immigrants, and work safety signs recorded in the mines in the 1930s were commonly posted in Polish, Lithuanian, Czech, and Hungarian. Poles predominated in certain communities, forming the largest ethnic group by 1908 in towns including Raleigh in Raleigh County, Scotts Run in Monongalia County, and Whipple and Carlisle in Fayette County.

Pennsylvania attracted the greatest number of Polish miners. Polish immigration to Luzerne County was substantial from the end of the Civil War onward, and employment in the mining industry there increased from 35,000 in 1870 to over 180,000 in 1914. According to historian Brian McCook, over 80% of Poles in northern Pennsylvania were laborers in the coal mines prior to World War I. Northern Pennsylvania contains over 99% of America's anthracite coal, which was favored for home heating during the colder months; demand for the coal was seasonal and left many workers unemployed for three to four months each summer. Poles joined ethnic and Catholic insurance programs with fellow workers, pooling funds for medical and disability insurance. In 1903 a Polish-language newspaper, Gornik, later Gornik Pensylwanski (Pennsylvanian Miner), was started in Wilkes-Barre to share local industry news. A Pennsylvania state investigating committee in 1897 found the workers' salaries to be severely low, stating it was "utterly impossible for any moderate sized family to more than exist, let alone enjoy the comforts which every American workingman desires and deserves." In Pennsylvania, miners averaged $521.41 ($14,202) per year, and historians have calculated that $460 ($12,529) would allow basic survival in northern Pennsylvania. In 1904 Frank Julian Warne claimed that a Slavic miner could earn a monthly salary of $30 ($817) and still send a $20 ($545) remittance to Poland each month. He found that Slavic miners lived together, as many as 14 unmarried men to an apartment, buying food collectively; they required only $4 ($108.95) a month for living expenses and paid $5 to $12 ($136.19 -- $326.84) each in rent. In 1915, Coal Age magazine estimated that $10 million ($272 million) was sent back to Poland annually by Polish miners. Warne accused the Slavs of depressing wages and effectively "attacking and retarding communal advancement" by the United Mine Workers. Miners had to purchase their own working supplies, and company management required that equipment and blasting powder be bought from the company store, at prices more than 30% above retail. Warne argued that Slavs did not feel the financial burden of increasingly expensive supplies because of their lower standard of living, which weakened their support for United Mine Workers strikes. The United Mine Workers pushed for laws to limit Polish competition; the Pennsylvania Legislature passed a law in 1897 mandating that a worker serve as a laborer for at least two years and pass an examination in English to receive a promotion. Polish miners nevertheless joined the United Mine Workers and took part in strikes at the turn of the twentieth century, bridging earlier nativist concerns.
Descendants of the Polish miners still live in the northern industrial areas of West Virginia, and many have dispersed across the U.S. Polish immigrants were favored for mining, where hundreds died each year, because they "played their part with a devotion, amenability, and steadiness not excelled by men of the old immigration." Theodore Roosevelt by Jennifer Armstrong, a novel set in 1901 and written from the perspective of a young Polish American in a coal mining family, reflects the poor conditions and labor struggles affecting the miners. A Coal Miner's Bride: the Diary of Anetka Kaminska by Susan Campbell Bartoletti is written from the perspective of a 13-year-old Polish girl who is brought to the U.S. to marry a coal miner in Pennsylvania. In a 1909 novel by Stanisław Osada, Z pennsylwańskiego piekła (From a Pennsylvania Hell), a Polish miner is seduced and subverted by an Irish-American girl who tears him from his immigrant community and possesses him in a lustful relationship. Historian Karen Majewski identifies this novel as one which depicts an Americanized Pole, "seduced and demoralized by this country's materialism and lack of regulation."
Meatpacking in the Midwestern United States was dominated by Polish immigrants from the late 19th century until World War II.
Meatpacking was a major industry in Chicago by the 1880s. Although some had joined earlier, a large number of Poles entered Chicago's packing plants in 1886, and through networking and successive generations, Poles came to predominate in the profession. Historian Dominic Pacyga attributes the influx of Polish workers in 1886 to the failed strike by the mainly German and Irish workers that year. The union was further weakened by yellow-dog contracts forced on returning workers and by the supply of cheap Polish labor.
Job security in the Chicago plants was rare. Since livestock supplies were seasonal, particularly for cattle, management laid off its unskilled workers in the killing department each year. Workers, including Poles, sometimes paid management kickbacks to secure employment. The meatpacking industry sped up its production process tremendously in the late 19th century, yet wages fell: "In 1884 five cattle splitters in a gang would process 800 head of cattle in ten hours, or 16 cattle per man per hour at an hourly wage of 45 cents. By 1894, four splitters were getting out 1,200 cattle in ten hours, or 30 cattle per man per hour. This was an increase of nearly 100 percent in 10 years, yet the wage rate fell to 40 cents per hour."
In 1895 government inspectors found a child working at a dangerous machine. The child told inspectors that his father had been injured at the machine and would lose his job if his son did not work. Illinois labor inspectors needed Polish translators to collect evidence because some child workers in 1896 were unable to answer questions such as "What is your name?" and "Where do you live?" in English. Reports also found that parents falsified birth records to bypass laws prohibiting work for children under 14 years old. In interviews with the children themselves, inspectors found that work commonly started at age 10 or 11. School records certifying that children could read and write by age 16 were easily obtained from Catholic parish schools after confirmation. Because of vigorous state prosecution of factories, the number of children under 16 working in urban Illinois fell from 8,543 in 1900 to 4,264 in 1914.
How different is the treatment of the same newcomer who wants to work on a farm. The native, indigenous person is more modest in his own life. He desires and knows well from his personal experience that beginnings are difficult. When a newcomer lives at first in a quickly - built shack and sleeps on a few boards put together, it is taken as a natural stage, nothing by which to be disgusted. When the same American sees how our peasant takes a plow into his hands, how he gets horses to move, how row after row of soil is beautifully plowed, instead of contempt, he feels respect toward our men.
Poles arriving in America frequently had years of experience working in agriculture and gained a reputation as skilled farmers in the United States. Many Polish immigrants traveled to the Northern United States intending to work in industrial trades, but stereotypes casting them as "farm people", together with economic necessity, often predetermined their careers and kept them in agricultural roles. Polish immigrants to Massachusetts and Connecticut came seeking jobs in New England's mills, but the local American population in the Connecticut River Valley was actively seeking those jobs, which effectively left agricultural opportunities open to the Poles. In New England, Poles took up land that had been abandoned by Yankee farmers. They achieved even higher crop yields than the local Americans because of their labor-intensive efforts and their willingness to try land previously dismissed as worthless. Poles succeeded rapidly; in Northampton in 1905, Poles were 4.9% of the population and owned 5.2% of the farmland, and by 1930 they made up 7.1% of the town and owned 89.2% of the farmland. The Polish farmers' success was due to their large families, in which children helped with the farming, and to their long hours of work; many spent hours clearing abandoned land after a full day's work. Louis Adamic, in A Nation of Nations, wrote that Poles "restored hundreds of thousands of apparently hopeless acres to productivity". Lenders viewed Polish immigrants as low credit risks because of their thrift, work ethic, and honesty. Polish immigrants were said to embody "immigrant Puritanism", demonstrating economic puritanism better than the original New Englanders. Author Elizabeth Stearns Tyler found in 1909 that Polish children attending American schools performed on par with or better than the American-born, yet most went back to farming after high school, continuing a self-fulfilling prophecy.
Poles were seen as industrious, hardworking, and productive, while paradoxically lacking in ambition. They had created stable and successful ethnic farming communities but did not venture into broader professions. Polish Americans eschewed intellectualism and pursued money through hard work and thrift; they gained a reputation for "chasing the dollar", but were honest and reliable in their pursuits.
Several novels set in early 20th-century New England dramatize the dynamic between a dying, shrinking Yankee population and young Polish immigrants. Polish characters typically came from large families, embodied hard work, and commonly learned English and entered into relationships with women in the New England towns. A 1913 novel, The Invaders, which referred to Poles as "beasts" and animal-like, contains a love story between a native New Englander and a Polish immigrant man. The story of amalgamation between a first-generation Polish immigrant and a white native woman is seen as a form of limited acceptance. A 1916 story, Our Naputski Neighbors, similarly depicts a lowly Polish immigrant family in New England that succeeds over its American neighbors; in the story, the younger generation changes their names and marries into a native Yankee family. The story demonstrates a clichéd attitude of social and cultural inferiority that the Poles carry with them, but one that could supposedly be remedied through hygiene, education, learning English, and romantic attachments. In the 1931 story Heirs by Cornelia James Cannon, Poles are recognized as occupying a higher economic position than the protagonist Marilla. Poles who are Americanized through learning English are given higher-status jobs, but Marilla and her husband retain a place of importance in teaching them English; as she says in one scene, "You can't Americanize without Americans!" In another scene, Marilla sees two young Polish children cutting firewood and teaches them to appreciate the trees as naturalists rather than as fuel. The protagonist's view is somewhat condescending and elitist, although historian Stanislaus Blejwas found that this tone of superiority moderated in later novels featuring Polish American characters.
Very few Poles opened shops, restaurants, stores, or other entrepreneurial ventures. The Galician and Russian Poles came to the United States with the fewest resources and the least education and took up hard labor in which they remained for their entire careers. Historian Bukowczyk found that even the German Poles, who arrived with "significant resources and advantages", were tepid in their entrepreneurial risk-taking. Among first- and second-generation Poles who did enter business, grocery stores and saloons were the most popular ventures.
Bukowczyk argued that the Poles' contentment with steady paychecks was a detriment to their families and future generations. As other immigrant groups, including Jews, Italians, and Greeks, slowly climbed the "ladders of success" through small businesses, Poles were locked in economically by less aggressive, less challenging careers.
The immigrants of the late 19th and early 20th century wave were very different from those who had arrived earlier. By and large, those who arrived in the early 19th century were nobility and political exiles; those in the later wave were largely poor, uneducated, and willing to settle for manual labor. Pseudoscientific studies were conducted on Polish immigrants in the early 20th century, most notably by Carl Brigham. In A Study of American Intelligence, which relied heavily on English-language aptitude tests administered by the U.S. military, Brigham concluded that Poles had inferior intelligence and that their population would dilute the superior "Nordic" American stock. His data was similarly damning toward blacks, Italians, Jews, and other Slavs. A United States Congress Joint Immigration Commission study on Polish Americans cited similar studies and said Poles were undesirable immigrants because of their "inherently unstable personalities". In a 1913 historical text on Poland, Nevin Winter wrote that "an extremeness in temperament is a characteristic of the Slav", presenting this as an inborn and unchangeable personality trait of Poles as well as Russians. Future U.S. President Woodrow Wilson, in his 1902 History of the American People, called Poles, Hungarians, and Italians "men of the meaner sort" who possessed "neither skill nor energy nor any initiative of quick intelligence", and he called these groups less preferable than Chinese immigrants. Wilson later apologized and met publicly with Polish American leaders. The 1916 book The Passing of the Great Race similarly drew on intelligence studies of immigrants such as the Poles to argue that American civilization was in decline and that society as a whole would suffer from a steady increase in inferior intelligence.
Polish (and Italian) immigrants demonstrated high fecundity in the United States, and a U.S. Congress report in 1911 noted Poles as having the single highest birth rate. The 1911 Dillingham Commission included a section devoted to the fecundity of immigrant women, using data from the 1900 Census. According to its findings, there were 40 births per 1,000 Polish people, whereas the non-Polish birth rate was closer to 14 per 1,000. Historians debate the accuracy and sample group of this data, as many Polish immigrants arrived young and of childbearing age, whereas other ethnic groups had longer, more sustained histories of immigration to the United States, meaning multiple generations were present. The reported birth rate was very high for Poles, and by 1910 the number of children born to Polish immigrants was larger than the number of arriving Polish immigrants. In Polish communities such as rural Minnesota, nearly three-fourths of all Polish women had at least 5 children. The Polish American baby boom lasted from 1906 to 1915 and then fell off dramatically, as many of the immigrant mothers passed out of their prime childbearing years; this was the highest birth rate documented for American Poles in the United States. During the 1920s and 1930s, this generation of Polish Americans came of age, developing ethnic fraternal organizations, baseball leagues, summer camps, scouting groups, and other youth activities. In large parts of Minnesota and Michigan, over half the population was under sixteen years old. Polish youths formed nearly 150 street gangs in Chicago in the 1920s, and in Detroit and Chicago they made up the single largest group of inmates in juvenile prisons.
Polish men in particular were romanticized in the early 20th century as objects of raw sexual energy. Many first-wave Polish immigrants were single men, or married men who had left their wives behind to seek their fortune in the United States; some were "birds of passage" who intended to return to Poland and their families with substantial savings. They built a reputation in the United States for hard work, physical strength, and vigorous energy. The 1896 novel Yekl: A Tale of the New York Ghetto describes the life of Jake, who left his wife and children behind in Poland and began an affair in the United States before his wife joined him in New York. Central to the 1931 romance novel American Beauty is a theme of attractive Polish men; in one instance, main character Temmie Oakes says, "... You saw the sinews rippling beneath the cheap stuff of their sweaty shirts. Far, far too heady a draught for the indigestion of this timorous New England remnant of a dying people. For the remaining native men were stringy of withers, lean shanked, of vinegar blood, and hard wrung." Historian John Radzilowski notes that the theme of vivacious young immigrants replacing dying old white ethnic populations was common in American writing until the 1960s and 70s.
Immigration from Poland was processed primarily at Ellis Island, New York, although some people entered through Castle Garden and, to a lesser extent, Baltimore. Ellis Island developed an infamous reputation among Polish immigrants and their children. An American reporter in the 1920s found that Polish immigrants were treated as "third class" and were subject to humiliation, profanity, and brutality at Ellis Island. The Cleveland Polish-language daily Wiadomości Codzienne (Polish Daily News) reported that officers at Ellis Island demanded that women strip from the waist up in public view. The immigration of paupers was forbidden by the U.S. Congress beginning with the Immigration Act of 1882; a newsman at Castle Garden found that of a single ship of arriving passengers, 265 were "Poles and Slavonians", and 60 were detained as "destitute and likely to become public charges." Polish Americans were disgusted by the Immigration Act of 1924, which pegged immigration quotas to the 1890 census, a time when there was no Polish nation. A Polish American newspaper asked, "... If the Americans wish to have more Germans and fewer Slavs, why don't they admit that publicly!?" It went on to point out that Germany had been America's enemy in the recent World War, whereas Poles had been patriotic and loyal in the U.S. armed services. Polish Americans were unconvinced that the immigration cuts of the 1920s were for the "protection" of American workers, and Polish-language newspapers reflected their distrust and their suspicion of racial undertones behind the immigration legislation.
Official records of the number of Polish immigrants to the United States are highly inconsistent. An estimate of over 2 million Polish immigrants is generally cited; figures as high as 4 million have been reported, which could be possible if non-Polish emigrants from the Polish territories are counted in the total. U.S. immigration agents categorized Polish immigrants by nation of origin, usually Austria, Prussia, or Russia, since there was then no Polish state; between 1898 and 1919, immigrants were also allowed to write or state their "race or people" to an agent. Documents report that 1.6 million immigrants arriving between 1821 and 1924 self-reported as being of the "Polish race". This is considered an undercount caused by misinterpretation of the question. Ellis Island officials checked immigrants for weapons and criminal inclinations; an 1894 news article describes Ellis Island inspectors citing daggers found on several Polish immigrants as a reason for intensified inspection techniques. Immigration officials at Ellis Island questioned immigrants about their settlement plans and found that the majority entered the United States with deliberate plans to work on farms and in factories, generally in communities with other Poles. One such Polish settlement was Mille Lacs County, Minnesota, where Polish immigrants settled to do agricultural work.
The clothing industry in New York City was staffed by many immigrants from Eastern and Southern Europe. Historian Witold Kula found that many Jewish immigrants, and to a much lesser extent Italians, were recorded on arrival in the United States as having backgrounds as tailors even when they did not. Kula identified several letters written by Jewish immigrants to their families in Poland indicating that they were only just learning the trade, even though their papers stated that it was their native profession. The new immigrants generally did not speak English, nor did the immigration agents speak Polish, Yiddish, or Italian. Kula suggests that immigration agents were influenced by the demands of the workforce and essentially staffed the industries based on their expectations of each ethnic group. By 1912, the needle trades were the largest employer of Polish Jews in the United States, and 85% of needle trade employees were Eastern European Jews.
Immigration restrictions on white immigrant women, including Poles, were increased considerably in 1903, 1907, and 1910. Public fears of prostitution and sex trafficking from eastern Europe led to the Mann Act of 1910, also referred to as the White-Slave Traffic Act. Eastern European women were rigorously screened for sexually immoral behavior, though few European immigrants were actually deported; at its height in 1911, only 253 of over 300,000 European women were deported for "prostitution." In The Qualities of a Citizen, Martha Gardner found a "sweeping intent of immigration laws and policies directed at eradicating prostitution by European immigrant women" in the early 20th century, which contrasted with the "incriminating and even dismissive treatment of Asian and Mexican prostitutes." This view was expressed in contemporary governmental reports, including the Dillingham Commission, which discussed a theme of "white sexual slavery" among eastern European women:
Her earnings may be large -- ten times as much in this country as in eastern Europe. She may at times earn in one day from two to four times as much as her washerwoman can earn in a week, but of these earnings she generally gets practically nothing; if she is docile and beautiful and makes herself a favorite with the madam, she may occasionally be allowed to ride in the parks handsomely dressed; she may wear jewelry to attract a customer; but of her earnings the madam will take one - half; she must pay twice as much for board as she would pay elsewhere; she pays three or four times the regular price for clothes that are furnished her; and when these tolls have been taken by the madam, little or nothing is left. She is usually kept heavily in debt in order that she may not escape; and besides that, her exploiters keep the books and often cheat her out of her rightful dues, even under the system of extortion which she recognizes. Frequently she is not allowed to leave the house except in company with those who will watch her; she is deprived of all street clothing; she is forced to receive any visitor who chooses her to gratify his desires, however vile or unnatural; she often contracts loathsome and dangerous diseases and lives hopelessly on, looking forward to an early death.
The American public felt a deep connection to the issue of white slavery and placed a heavy moral responsibility on immigration inspectors to weed out European prostitutes. In a 1914 report, the Commissioner General of Immigration gave a case in point in which a young girl from Poland nearly landed an American man a federal sentence for criminal trafficking after telling immigration officials an "appalling revelation of importation for immoral purposes"; she later repudiated her story. According to Gardner, the level of protection and the moral standard afforded to European women differed sharply from the government's view of Chinese and Japanese immigrants in the 1870s, virtually all of whom were viewed as "sexual degenerates".
Polish immigration was increasing rapidly in the early 20th century until restrictive quota legislation drastically cut it. The quota for Polish immigrants was shrunk severely, Poles were restricted from coming to the United States for decades, and the immigration laws were not relaxed until after World War II.
The Poles were the last to come in large numbers before World War I and the Quota Act which choked off immigration. Consequently they were subjected to far more than their share of prejudice and discrimination bred usually not by malice, but by fear -- chiefly economic insecurity of the minorities already settled in the areas to which they came. Since other groups did not succeed them in large numbers, they remained for longer than the usual period at the lowest level occupationally and residentially, since others did not "push them up. ''
According to James S. Pula, "the drastic reduction in Polish immigration served not only to cut off the external source of immigrants used to perpetuate the urban ethnic communities, but also cut off direct access to cultural renewal from Poland." He added, "increasingly, Polonia's image of Poland became fixed, delimited by the indistinct images of the nineteenth century agricultural villages their ancestors left rather than the developing modern nation that Poland was moving toward during the interwar period." Family members who traveled to Poland to visit relatives risked not being allowed back into the United States if they were not citizens. Polonia leader Rev. Lucyan Bójnowski wrote in the 1920s, "In a few decades, unless immigration from Poland is upheld, Polish American life will disappear, and we shall be like a branch cut off from its trunk."
Polish immigrants to the United States were typically poor peasants who had not been significantly involved in Polish civic life, politics, or education. Poland had not been independent since 1795, and peasants historically had little trust in or concern for the state, which was dominated by the Polish nobility. Most 18th- and 19th-century Polish peasants were largely apathetic toward nationalist movements and saw little importance or promise in joining them. Peasants had great reservations about identifying with the szlachta and were reluctant to support national figures. When Kosciuszko returned to liberate Poland after the success and admiration he had gained in the American Revolution, he succeeded in attracting only a handful of supporters; "not even his appearance in peasant attire and his proclamation of individual liberty of the peasants, provided they pay their former landlord their debt and taxes, was able to marshal the masses of burgesses and peasants in the struggle for Polish independence." Joseph Swastek speculated that "an attitude of apprehensive distrust of civil authority" was conditioned by the "political and cultural bondage" of peasants within the 18th- and 19th-century partitioned territories.
Helena Lopata argued that Polish nationalism grew among Polish Americans during World War I but fell sharply afterward. Polish immigrants to the United States knew little about Poland beyond their local villages. During World War I, appeals for donations invoked the safety of loved ones back home as well as promises of high status in Poland for those who returned. Lopata found that after World War I, many Polish Americans continued to receive requests for aid from Poland, and feelings of anger over all the years they had delayed bettering their own situation were common. Return immigrants who had dreamed of using their American savings to buy status symbols in Poland (farmland, houses, etc.) were still treated as peasants there, creating resentment toward the motherland.
Polish Americans generally joined local Catholic parishes, where they were encouraged to send their children to parochial schools. Polish-born nuns often served as teachers. In 1932 about 300,000 Polish Americans were enrolled in over 600 Polish grade schools in the United States. Very few of the Polish Americans who graduated from grade school pursued high school or college at that time; high school was not required, and enrollment across the United States was far lower then. In 1911, only 38 men and 6 women of Polish descent studied at institutions of higher learning.
Polish Americans took to Catholic schools in great numbers. In Chicago, 36,000 students (60 percent of the Polish population) attended Polish parochial schools in 1920. Nearly every Polish parish in the American Catholic Church had a school, whereas only about one in ten Italian parishes did. Even as late as 1960, about 60% of Polish American students attended Catholic schools.
Many of the Polish American priests in the early 20th century were members of the Resurrectionist Congregation and diverged somewhat from the mainstream American Catholic Church in theology as well as in language. Polish American priests created several of their own seminaries and universities, founding St. Stanislaus College in 1890.
Milwaukee was one of the most important Polish centers, with 58,000 immigrants by 1902 and 90,000 by 1920. Most came from Germany and became blue-collar workers in the industrial districts of Milwaukee's south side. They supported numerous civic and cultural organizations and 14 newspapers and magazines. The first Polish Catholic parochial school opened in 1868 at the parish of St. Stanislaus, so that the children no longer had to attend Protestant-oriented public schools or German-language Catholic schools. The Germans controlled the Catholic Church in Milwaukee and encouraged Polish-speaking priests and Polish-oriented schools. Starting in 1896, Michał Kruszka campaigned to introduce Polish-language curricula into Milwaukee public schools; his efforts were denounced as anti-religious and thwarted by Catholic and Polish leaders. By the early 20th century, 19 parishes were operating schools, with the School Sisters of Notre Dame, and to a lesser extent the Sisters of Saint Joseph, providing the teaching force. The Polish community rejected proposals to teach Polish in the city's public schools, fearing it would undermine their parochial schools.

The Americanization movement of World War I made English the dominant language. In the 1920s, morning lessons were taught in Polish, covering the Bible, catechism, Church history, the Polish language, and the history of Poland; all other courses were taught in English in the afternoon. Efforts to create a Polish high school were unsuccessful until a small one opened in 1934; students who went on to secondary education attended heavily Polish public high schools. By 1940, teachers, students, and parents preferred English, though elderly priests still taught religion classes in Polish as late as the 1940s. The last traces of Polish culture remained in traditional Christmas carols, which are still sung. Enrollments fell during the Great Depression, as parents and teachers were less interested in the Polish language and were hard-pressed to pay tuition. With the return of prosperity in World War II, enrollments increased again, peaking around 1960. After 1960, the nuns mostly left the sisterhood and were replaced by lay teachers. Increasingly, the original families moved to the suburbs, and the schools came to serve black and Hispanic children; some schools have been closed or consolidated with historically German-language parochial schools.
Polish-language use in the United States peaked in the 1920s: a record number of respondents reported Polish as their native language in the 1920 U.S. Census, and the figure has been dropping ever since as a result of assimilation. According to the 2000 United States Census, 667,000 Americans aged 5 and older reported speaking Polish at home, about 1.4% of those who speak a language other than English and 0.25% of the U.S. population.
Polish Americans established their own Catholic churches and parishes in the United States. A general pattern emerged whereby laymen who settled in a city united with other Poles to collect funds and develop representative leaders; when the community grew substantial, they petitioned the local bishop for permission to build a church and for his commitment to supply a priest. In many instances Polish immigrants erected their own churches first and then asked for a priest. Roman Catholic churches built in the Polish cathedral style follow a design that includes high ornamentation, decorative columns and buttresses, and many visual depictions of the Virgin Mary and Jesus. When a church was to be built, Poles funded its construction with absolute devotion: some members mortgaged their homes to fund parishes, others made loans their church was never able to repay, and in St. Stanislaus Kostka parish in Chicago, Poles who lived in abject poverty with large families still donated large portions of their paychecks. Polish parishioners attached great meaning to the successful completion of their churches. Father Wacław Kruszka of Wisconsin told his parishioners, "The house of God must be beautiful if it is to be for the praise of God", infusing spiritual motivation into his sermons. Perceived mishandling of church funds was not tolerated; stories of fistfights and physical assaults on priests suspected of cheating their parishes were well documented in American newspapers.
Poles (and Italians) were angry with the Americanization and especially "Irishization '' of the Catholic Church in America.
Parishes in Poland were generally out of the parishioners' hands: Catholicism had existed there for hundreds of years, and local nobles (and taxes) were the main financiers of churches. This contrasted with the United States, where the creation of churches relied on immigrants from largely peasant backgrounds. Polish parishes in the United States were generally funded by members of Polish fraternal organizations, the PNA and the Polish Roman Catholic Union of America (PRCUA) being the two largest. Members paid dues to belong to these groups, which were mutual aid organizations providing financial assistance in times of need but also giving money to churches. Church committeemen were often leaders in the Polish fraternal societies as well. Parishioners who did not pay membership fees were still able to attend mass but were viewed as freeloaders for not paying pew rent. The committeemen who ran and handled funds for the fraternal organizations agreed to let Catholic bishops appoint priests and claim property rights to their churches, but wanted to keep their power over church decisions. Galush noted that through the election of church committeemen and direct payment of church expenses, parishioners had grown accustomed to a democratic leadership style, and suggests that this created the ongoing struggle with clergy who expected more authority. In one example, Bishop Ignatius Frederick Horstmann of the Roman Catholic Diocese of Cleveland ordered a Polish American priest, Hipolyte Orlowski, to appoint church committeemen instead of holding elections. Orlowski ignored the order; Horstmann criticized him and wrote "an irate letter" asking, "Why do the Poles always cause trouble in this regard?" Polish Catholics generally did not differ from the wider Church on theology. Polish customs carried into American churches include the Pasterka (a midnight mass celebrated between December 24 and 25), the gorzkie żale (bitter lamentations devotion), and święconka (the blessing of Easter foods).
Many Polish Americans were devout Catholics and pressed the Church to hold services in Polish and to include Poles among the priesthood and the bishops. Polish Americans grew deeply frustrated by their lack of representation in church leadership; many loyal parishioners were offended that they could not participate in church decision-making or finances. Polish parishioners who had collectively donated millions of dollars to construct and maintain churches and parishes in the United States were concerned that these properties were now legally owned by German and Irish clergy. Polish-German relations within parishes were tense during the 19th century: at the St. Boniface parish of Chicago, Rev. James Marshall spoke English and German for years, but when he began conducting mass in Polish, German parishioners confronted him and forced his resignation. The greatest confrontation occurred in Scranton, Pennsylvania, where a large Polish population had settled to work in coal mines and factories in the 1890s. They saved money from small paychecks to build a new church in the Roman Catholic parish, and were offended when the church sent an Irish bishop, Monsignor O'Hara, to lead services. Polish parishioners asked repeatedly to take part in church affairs; they were turned down and the bishop rebuked their "disobedience". Fights broke out in front of the church, and several parishioners were arrested by the local police on criminal charges for their civil disobedience. The mayor of the city was also Irish, and Poles strongly disagreed with his decisions in determining the severity of the arrests. Reportedly, Rev. Francis Hodur, a Catholic priest serving a few miles away, heard the stories from Polish parishioners and said, "Let all those who are dissatisfied and feel wronged in this affair set about organizing and building a new church, which shall remain in possession of the people themselves. After that, we shall decide what further steps are necessary." Parishioners followed his advice, purchased land, and began building a new church; when they asked Bishop O'Hara to bless the building and appoint a pastor, he refused unless the title to the property was written out in his name. O'Hara invoked the Council of Baltimore, saying that laypeople had no right to create and own their own church without ceding it to the Roman Catholic diocese. Hodur disagreed and began leading church services on March 14, 1897. He was excommunicated from the Roman Catholic Church on October 22, 1898, for refusing to cede ownership of the church property and for insubordination.
Francis Hodur's Polish church grew as neighboring Polish families defected from the Roman Catholic Church. Polish parishioners were hesitant to leave at first, but the organization of the Polish National Union in America in 1908 created mutual insurance benefits and aided in securing burial space for the deceased. The Polish National Catholic Church expanded beyond a regional church in Pennsylvania when Poles in Buffalo defected in 1914. Lithuanians in Pennsylvania united to form their own Lithuanian National Catholic Church and joined with the Polish National Church in 1914; the Lithuanian and Slovak National churches (1925) have since affiliated with the larger Polish National Catholic Church. The PNCC took no initiative in seeking out other ethnic breakaway Catholic churches during its history; these churches often sought out the PNCC as a model and asked to be affiliated. In 1922, four Italian parishes in New Jersey defected from the Roman Catholic Church and asked Hodur to support them in fellowship; Hodur blessed one of their buildings, and another Italian congregation in the Bronx, New York united with the PNCC before its closure. The PNCC has been sympathetic to the property rights and self-determination of laypeople in the church; in the PNCC's St. Stanislaus church, a stained glass window of Abraham Lincoln exists and Lincoln's birthday is a church holiday. Lincoln is honored by the PNCC for his role as a lawyer defending Irish Catholics who refused to surrender their church property to the Catholic church. The PNCC grew into a national body and spread to Polish communities across the United States during the 20th century, mainly around Chicago and the Northeast, and developed an active mission in Poland following World War I.
Leon Czolgosz, a Polish American born in Alpena, Michigan, changed American history in 1901 by assassinating U.S. President William McKinley. Though Czolgosz was a native-born citizen, the American public displayed high anti-Polish and anti-immigrant sentiment after the attack. McKinley, who survived the shooting for several days, called Czolgosz a "common murderer" and made no mention of his background. Different Slavic groups debated his ethnic origins in the days and weeks that followed, and Hungarian Americans also took pains to distance themselves from him. Police who arrested him reported that Czolgosz identified himself as a Pole. The Polish American community in Buffalo was deeply ashamed and angry at the negative publicity Czolgosz created, both for their community and for the Pan-American Exposition, and canceled a Polish American parade following the attack. Polish Americans burned effigies of Czolgosz in Chicago, and Polish American leaders publicly repudiated him.
On September 11, 1901, the Milwaukee Sentinel published an editorial noting that Czolgosz was an anarchist acting alone, without any ties to the Polish people:
Czolgosz is not a Pole. He is an American citizen, born, bred and educated in this country. His Polish name and extraction have nothing whatever to do with his crime, or with the motives which impelled him to it. The apparent notion, therefore, of Polish - Americans that it is incumbent on them to show in some special and distinctive way their abhorrence of Czolgosz and his deed, while creditable to them as a sentiment, is not founded in reason. Responsibility for Czolgosz ' crime is a question not of race but of doctrine. Anarchism knows no country, no fatherland. It is a cancer eating into the breast of society at large.
As a result of the assassination, Polish Americans were "racially profiled" and American nativism against Poles grew. Several Polish immigrants were arrested for questioning in the police investigation, but police concluded that Czolgosz had acted alone. A later anonymous copycat threat sent to the police in Boston was investigated, and neighbors named as the culprit a Polish radical said to be a "native of the same town as the assassin" (Żnin); no crime was ever connected to the threatening letter. Theodore Roosevelt took the office of President of the United States in McKinley's place. Radical groups and anarchists were suppressed nationally, and federal legislation was passed to prevent future assassinations: an attempted assassination of the President was made a capital offense, and, even though Czolgosz had been born in the United States, the Immigration Act of 1903 was passed to bar immigrants with subversive tendencies from entering the country.
Polish immigrants were the lowest-paid white ethnic group in the United States. A study of immigrants before World War I found that in Brooklyn, New York, the average annual income was $721; the average for Norwegians residing there was $1,142, for the English $1,015, for the Czechs $773, but for the Poles only $595. A study by Richard J. Jensen at the University of Illinois found that, despite the pervasive narrative of anti-Irish discrimination in the U.S., "No Irish Need Apply" (NINA) signage was in reality very rare, and first-generation Irish immigrants earned about average pay during the 1880s and certainly above average by the turn of the century. Despite the absence of explicit ethnic discrimination in job advertisements, immigrant Poles scored higher on indexes of job segregation than the Irish in both the 1880s and the 1930s.
However, by the 1960s, Polish Americans had an above average annual income, even though relatively few were executives or professionals. Kantowicz argues that:
Polish workers appear to have opted for monetary success in unionized jobs and for economic and psychological security in home - owning rather than for more prestigious occupations in business and the professions.
Anti-Polish sentiment in the early 20th century relegated Polish immigrants to a very low status in American society. Other white ethnic groups such as the Irish and Germans had by this time assimilated to the English language and gained powerful positions in the Catholic Church and in government, and Poles were regarded with disdain. Poles had little political or religious voice in the United States until 1908, when the first American bishop of Polish descent, Most Rev. Paul Rhode, was appointed in Chicago, Illinois. His appointment was the result of growing pressure placed on the Archdiocese of Chicago by Polish Americans eager to have a bishop of their own background; the Pope acquiesced only after Chicago Archbishop James Edward Quigley lobbied on behalf of his Polish parishioners in Rome. Poles were viewed as powerful workers, suited by their uncommonly good physical health, endurance, and stubborn character to heavy work from dawn to dusk. The majority of Polish immigrants were young men in superior physical health, feeding the stereotype, and the lack of a significant immigration of intelligentsia perpetuated this perception in the United States. Historian Adam Urbanski, drawing on The Immigrant Press and its Control, observed: "Loneliness in an unfamiliar environment turns the wanderers' thoughts and affections back upon his native land. The strangeness of his new surroundings emphasizes his kinship with those he has lost." Polish immigrants viewed themselves as common workers and carried an inferiority complex, seeing themselves as outsiders who wanted only peace and security within their own Polish communities; many found comfort in the economic opportunities and religious freedoms that made living in the United States a less strange experience. When Poles moved into non-Polish communities, the natives moved out, forcing immigrants to live in the United States in separate communities, often near other eastern European ethnics.
World War I motivated Polish Americans to contribute to the cause of defeating the Germans, freeing their homeland, and fighting for their new home. Polish Americans vigorously supported the war effort, with large numbers volunteering for or drafted into the United States Army, working in war-related industries, and buying war bonds. A common theme was to fight for America and for the restoration of Poland as a unified, independent nation. Polish Americans were personally affected by the war: they heard reports of Poles being used as soldiers by both the Allied and Central Powers, and Polish newspapers confirmed fatalities for many families. Communication with families in Poland was very difficult, and immigration was halted.
After the war, The Literary Digest estimated that the U.S. Army had had 220,000 Poles in its ranks and reported that Polish names made up 10 percent of the casualty lists, while Poles amounted to only 4 percent of the country's population. Of the first 100,000 volunteers to enlist in the U.S. armed services during World War I, over 40% were Polish American.
In 1917 France decided to set up a Polish Army to fight on the Western Front under French command; Canada was given responsibility for recruiting and training. It was known as the Blue Army because of its uniforms. France lobbied for the Polish Army idea, pressuring Washington to allow recruiting in Polonia, and in 1917 the U.S. finally agreed, sanctioning the recruitment of men who were ineligible for the draft. This included recent Polish immigrants who did not meet the five-year residency requirement for citizenship, as well as Poles born in Germany or Austria who were considered enemy aliens and therefore ineligible for drafting into the United States Army. The Blue Army reached nearly 22,000 men from the U.S. and over 45,000 from Europe (mostly POWs) out of a planned 100,000, and entered battle in the summer of 1918. When the war ended, the Blue Army under General Józef Haller de Hallenburg was moved to Poland, where it helped establish the new state. Most veterans who had come from the U.S. returned there in the 1920s, but they never received recognition as veterans from either the U.S. or the Polish government.
Polish pianist Ignacy Paderewski came to the U.S. and asked immigrants for help. He raised awareness of the plight and suffering in Poland before and after World War I. Paderewski used his name recognition to promote the sale of dolls to benefit Poland. The dolls, dressed in traditional Polish garb, had "Halka and Jan '' as main characters. Sales provided enough money for the Polish refugees in Paris who designed the dolls to survive, and extra profits were used to purchase and distribute food to the poor in Poland.
President Woodrow Wilson designated January 1, 1916, as Polish Relief Day; contributions to the Red Cross given that day were used for relief in Poland, and Polish Americans frequently pledged a working day's pay to the cause. American Poles purchased over $67 million in Liberty Loans during World War I to help finance the war.
By 1917 there were over 7000 Polish organizations in the United States, with a membership - often overlapping - of about 800,000 people. The most prominent were the Polish Roman Catholic Union founded in 1873, the PNA (1880) and the gymnastic Polish Falcons (1887). Women also established separate organizations.
The PNA was formed in 1880 to mobilize support among Polish Americans for the liberation of Poland, and it discouraged Americanization before World War I. Until 1945 it was locked in rivalry with the Polish Roman Catholic Union; it then focused more on fraternal roles such as social activities for its membership. By the 1980s it concentrated on its insurance program, with 300,000 members and assets of over $176 million.
The first Polish politicians now sought major offices. In 1918 a Republican was elected to Congress from Milwaukee, and another was elected to Congress in 1924 as a Republican from Detroit. In the 1930s the Polish vote became a significant factor in the larger industrial cities and switched heavily to the Democratic Party. Charles Rozmarek, the PNA president from 1939 to 1969, built a political machine from the Chicago membership and played a role in Chicago Democratic politics.
Following World War I, the reborn Polish state began the process of economic recovery, and some Poles tried to return. Since all the ills of life in Poland could be blamed on foreign occupation, the migrants did not resent the Polish upper classes, and their relation with the mother country was generally more positive than among migrants from other European countries. It is estimated that 30% of the Polish emigrants from lands occupied by the Russian Empire returned home; the return rate for non-Jews was closer to 50-60%. More than two-thirds of emigrants from Polish Galicia (freed from Austrian occupation) also returned.
American nativism countered the immigration and assimilation of Poles into the United States. In 1923, Carl Brigham dismissed the Poles as inferior in intelligence. He even defended his assertions against popular support for Kościuszko and Pulaski, well - known Polish heroes from the American Revolution, stating, "careless thinkers (...) select one or two striking examples of ability from a particular group, and (believe) that they have overthrown an argument based on the total distribution of ability. '' Orators "can not alter the distribution of the intelligence of the Polish immigrant. All countries send men of exceptional ability to America, but the point is that some send fewer than others. ''
Polish communities in the United States were targeted by nativist groups and sympathizers during the 1920s. In White Deer, Texas, where Poles were virtually the only ethnic minority, Polish children had near-daily fights with other schoolchildren, and southerners imitated their parents in calling them "Polocks and damn Catholics". The Ku Klux Klan in particular rose in numbers and political activity during the 1920s, leading parades, protests, and violence in Polish American neighborhoods. On May 18, 1921, about 500 white-robed, torch-bearing members from Houston took a train to Brenham, Texas and marched carrying signs such as "Speak English or quit talking on Brenham's streets". Physical attacks on German Americans were more common than on Poles, who were not as politically active in Brenham. Following the parade, residents would not come to the town or leave their homes to go to church, fearing violence. To defuse the situation, a meeting of Anglo, German, and Slavic leaders at a local courthouse produced rules requiring funeral services, church sermons, and business transactions to be conducted in English only for the next few months. At the time, Brenham was popularly known as the "Capital of Texas Polonia" because of its large Polish population. The KKK led a similar anti-foreigner event in Lilly, Pennsylvania in 1924, which had a significant number of Poles; the novel The Masked Family by Robert Jeschonek is based on the historical experience of Polish Americans in Lilly during this affair. The Klan infiltrated the local police of southern Illinois during the 1920s, and search warrants were freely given to Klan groups who were deputized as prohibition officers. In one instance in 1924, S. Glenn Young and 15 Klansmen raided a Polish wedding in Pittsburg, Illinois, violently pushed everyone against the walls, drank the couple's wine, stole their silver dollars, and stomped on the wedding cake. The Polish couple had informed Mayor Arlie Sinks and police chief Mun Owens beforehand that they were holding a wedding and wanted protection; they did not know that Sinks and Owens were themselves Klansmen.
Polish Americans were active in strikes and trade union organizations during the early 20th century. Many worked in industrial cities and in organized trades and contributed to historic labor struggles in large numbers, and political leaders emerged from the Polish community. Leo Krzycki, a Socialist leader known as a "torrential orator", was hired by trade unions such as the Congress of Industrial Organizations to educate and agitate American workers in both English and Polish from the 1910s to the 1930s. Krzycki was an organizer for the Amalgamated Clothing Workers of America. He motivated worker strikes in the Chicago-Gary steel strike of 1919 and the Chicago packing-house workers' strike of 1921. Krzycki was often called on for his effectiveness in mobilizing Americans of Polish descent, and was heavily inspired by Eugene Debs and the Industrial Workers of the World. He was associated with the sit-down strike at the Goodyear Tire and Rubber Company in Akron, Ohio in 1936, the first twenty-four-hour sit-down, and was one of the main speakers at the protest that later became known as the Memorial Day massacre of 1937. Polish Americans made up 85% of the Detroit Cigar Workers union in 1937, during the longest sit-down strike in U.S. history.
The Great Depression hurt Polish American communities across the country as heavy industry and mining sharply cut employment. Even during the prosperous 1920s, the predominantly Polish Hamtramck neighborhood had suffered from an economic slowdown in Detroit's manufacturing sector. The neighborhood was in disrepair, with poor public sanitation, high poverty, rampant tuberculosis, and overcrowding, and at the height of the Depression in 1932 nearly 50% of all Polish Americans were unemployed. Those who continued to work in the nearby Dodge Main plant, where a majority of workers were Polish, faced intolerable conditions and poor wages and were ordered to speed up production beyond reasonable levels. As the industrial trades Polish Americans worked in became less financially stable, an influx of Blacks and poor southern Whites into Detroit and Hamtramck intensified competition for low-paying jobs. Corporations benefited from the interracial strife and routinely hired Blacks as strikebreakers against the predominantly Polish American trade unions. The Ford Motor Company used Black strikebreakers in 1939 and 1940 to counter strikes by the United Auto Workers, which had a predominantly Polish American membership; the mainly Polish UAW membership and pro-Ford Black loyalists fought at the gates of the plant, often in violent clashes. Tensions with Blacks in Detroit were heightened by the construction of a federally funded housing project, the Sojourner Truth houses, near the Polish community in 1942. Polish Americans lobbied against the project, but their political sway was ineffective, and racial tensions finally exploded in the race riot of 1943.
Polish Americans were strong supporters of Roosevelt and the Allies against Nazi Germany. They worked in war factories, tended victory gardens, and purchased large numbers of war bonds. Of a total of 5 million self-identified Polish Americans, 900,000 to 1,000,000 (20% of their entire population in the U.S.) joined the U.S. armed services. Americans of Polish descent served in all military ranks and divisions and were among the first to volunteer for the war effort. Polish Americans were enthusiastic enlistees in 1941: they composed 4% of the American population at the time but over 8% of the U.S. military during World War II. Matt Urban was among the most decorated war heroes, and Francis Gabreski won accolades for his victories in aerial combat, later being named the "greatest living ace." During the war, General Władysław Sikorski attempted to recruit Polish Americans into a segregated battalion; the crowds of men he spoke to in Buffalo, Chicago, and Detroit were largely second and third generation and did not join in large numbers: only 700 Poles from North America and 900 from South America joined the Polish Army. Historians have identified Sikorski's tone toward the Polish American diaspora as problematic because he repeatedly told audiences he did not want their money but only their young men for the military, and he said Polonia was "turning its back" on Poland by not joining the cause.
During the latter part of World War II, Polish Americans developed a strong interest in the political struggle over Poland. Generally, Polish American leaders took the position that Polish Prime Minister Władysław Sikorski should make deals and negotiate with the Soviet Union. Maksymilian Węgrzynek, editor of the New York Nowy Świat, was fiercely anti-Soviet and founded the National Committee of Americans of Polish Descent (KNAPP) in 1942 to oppose Soviet occupation of Poland. His newspaper became an outlet for exiled Polish leaders to voice their distrust and fears of a disintegrating Polish government under Sikorski; one such leader was Ignacy Matuszewski, who opposed any negotiation with the Soviets without safeguards honoring Polish territorial claims. The majority of American Poles were in line with the anti-Soviet views of Węgrzynek.
Three important pro-Soviet Polish Americans were Leo Krzycki, Rev. Stanislaw Orlemanski, and Oskar R. Lange. They were deeply resented by Polish Americans in New York and Chicago, but found a strong following in Detroit, Michigan. Orlemanski founded the Kosciuszko League in Detroit in 1943 to promote American-Soviet friendship; his organization was made up entirely of Polish Americans and was created with the goal of expanding throughout Polonia. Lange had great influence among Detroit Poles, arguing that Poland could return to its "democratic" roots by ceding territories east of the Curzon Line to the Belarusians and Ukrainians and distributing farmland to the peasants. His viewpoints aligned closely with later American and Soviet agreements, whereby Poland gained western territories from Germany. In 1943, Lange, Orlemanski, and U.S. Senator James Tunnell wrote a book outlining their foreign policy aims with respect to Poland, titled We Will Join Hands with Russia. Soviet newspapers including Pravda featured supportive articles approving of the work of the Detroit Poles and singled out Krzycki, Orlemanski, and Lange as heroic leaders. On January 18, 1944, Soviet diplomat Vyacheslav Molotov met with American ambassador Harriman, saying Poland needed a regime change and that Krzycki, Orlemanski, and Lange would be excellent candidates for leadership in Poland. Stalin promoted the idea and asked that Orlemanski and Lange quickly be given passports and allowed to visit Russia. President Roosevelt agreed to process the passports quickly and later agreed to many of the political points they made, but advised Stalin that the visit be kept secret. Lange visited Russia, meeting personally with Stalin as well as with the Polish nationalist government. Lange later returned to the United States, where he pushed Polish Americans to accept that Poland would cede the lands east of the Curzon Line and that a communist regime change in Poland was inevitable.
American Poles had a reinvigorated interest in Poland during and after World War II. Polish American newspapers, both anti- and pro-Soviet in persuasion, wrote articles supporting Poland's acquisition of the lands up to the Oder-Neisse line from Germany at the close of the war. The borders of Poland were in flux after the war, since Nazi occupying forces had been withdrawn and Poland's claims did not have German recognition. Polish Americans were apprehensive about the U.S. commitment to securing the western territories for Poland; the Potsdam Agreement specifically stated that Poland's borders would be "provisional" until an agreement with Germany was signed. At the close of the war, America occupied West Germany, and relations with the Eastern bloc became increasingly difficult because of Soviet domination. Polish Americans feared that America's occupation of, and close relations with, West Germany would mean a distancing from Poland. West Germany received many German refugees who had escaped Communist hostility in Poland, and their stories of persecution did not help Polish-German relations. The Polish American Congress (PAC) was established in 1944 to ensure that Polish Americans (6 million at the time) had a political voice in support of Poland following World War II. The PAC traveled to Paris in 1946 to stop the United States Secretary of State, James F. Byrnes, from making further agreements with Germany. Byrnes and Soviet Minister of Foreign Affairs Vyacheslav Molotov were both making speeches expressing support for an economically and politically unified Germany, and both invoked the "provisional" nature of the Oder-Neisse line in their talks. Polish Americans were outraged when Byrnes stated in Germany that German public opinion should be taken into account in territorial claims. The Polish newspaper Głos Ludu printed a cartoon of Byrnes in front of an American flag with swastikas and black heads instead of stars, criticizing his support of Germany as a "sell-out". Even pro-Soviet Polish Americans called those lands "Recovered Territories", suggesting wide and popular support among American Poles. The PAC remained distrustful of the United States government during the Truman administration and afterwards. In 1950, after East Germany and Poland signed an agreement recognizing the Oder-Neisse line as officially Polish territory, the U.S. Commissioner in Germany, John J. McCloy, issued a statement saying that a final resolution of the border would require another peace conference.
A wave of Polish immigrants came to the United States following World War II. They differed from the first wave in that they did not want to, and often could not, return to Poland. They assimilated rather quickly, learned English, and moved into the American middle class with less of the discrimination faced by the first wave. This group of immigrants also had a strong Polish identity: Poland had forged a strong national and cultural identity during the 1920s and 1930s after it gained independence, and the immigrants carried much of this cultural influence to the United States. Poles in the second wave were much more likely to seek white-collar and professional positions, took pride in Poland's cultural and historical achievements, and did not accept the low status American Poles had taken in previous generations. The backgrounds of these immigrants varied widely. Some 5 to 6 million Poles had lived in territories annexed by the Soviet Union during World War II; many were aristocrats, students, university graduates, and middle-class citizens who were systematically categorized by the Soviet police. Polish military officers were killed at Katyn, and civilians were deported to remote territories in Central Asia or sent to Nazi concentration camps. During the war, Poles attempted to seek refuge in the United States, and some were allowed in; afterward, many escaped Soviet oppression by fleeing to sympathetic Western nations such as the United Kingdom, France, and the United States.
A small, steady immigration from Poland has taken place since 1939. Political refugees arrived after the war, and in the 1980s about 34,000 refugees fleeing Communism in Poland arrived, along with 29,000 regular immigrants. Most of the newcomers were well-educated professionals, artists, or political activists, and they typically did not settle in the long-established Polish neighborhoods.
In 1945 the Red Army took control and Poland became a Communist-controlled satellite of the Soviet Union; it broke free, with American support, in 1989. Many Polish Americans viewed Roosevelt's agreements with Stalin as backhanded tactics, and feelings of betrayal ran high in the Polish community. After the war, some higher-status Poles, outraged by Roosevelt's acceptance of Stalin's control over Poland, shifted their votes in the 1946 congressional elections to conservative Republicans who opposed the Yalta agreement and U.S. policy in Eastern Europe. Working-class Polish Americans, however, remained loyal to the Democratic Party even in the face of a Republican landslide that year, and into the 1960s Polonia as a whole continued to vote solidly for the liberal New Deal coalition and for local Democratic party organization candidates.
The first Polish American candidate on a national ticket was Senator Edmund S. Muskie, nominated by the Democrats for vice president in 1968. He was a prominent but unsuccessful candidate for the Democratic presidential nomination in 1972 and later served as Secretary of State. The first Polish American appointee to the Cabinet was John Gronouski, chosen by John F. Kennedy as Postmaster General (1963-65).
By 1967 there were nine Polish Americans in Congress, including four from the Chicago area. The three best known were Democrats who specialized in foreign policy, taxes, and environmental issues. Clement J. Zablocki of Milwaukee served 1949-83 and chaired the House Foreign Affairs Committee from 1977 until his death in 1983; although liberal on domestic issues, he was a hawk on the Vietnam War. Dan Rostenkowski served 1959-95 and became chair of the powerful House Ways and Means Committee, which writes the tax laws; his father was an influential alderman and party leader from the center of Polonia on the Northwest Side of Chicago. Even more influential has been John Dingell of Detroit, first elected to Congress in 1955 and still serving (with the second-longest tenure on record). A liberal Democrat known for hard-hitting investigations, Dingell has been a major voice on economic, energy, medical, and environmental issues. His father, John D. Dingell, Sr., held the same seat in Congress from 1933 to 1955 and was the son of Marie and Joseph A. Dzieglewicz, Polish immigrants.
Historian Karen Aroian has identified a bump in Polish immigration in the 1960s and 1970s as the "Third Wave". Poland liberalized during the Gierek era, when emigration rules were loosened, and U.S. immigration policy remained relatively favorable to Poles. Interviews with immigrants from this wave found that they were consistently shocked by how important materialism and careerism were in the United States. Compared to Poland as they had experienced it, the United States had a very meager social welfare system, and neighbors did not practice the neighborly system of favors and bartering common in Poland. Polish immigrants also saw a major difference in the variety of consumer goods in America, whereas in Poland shopping for consumer goods was less a luxury and more a means of survival. Aroian acknowledges that her findings may have been skewed by the relatively recent immigrant status of her subjects, since every immigrant faces some setbacks in social standing when entering a new country.
Polish Americans settled and created a thriving community on Detroit's east side; the name "Poletown" was first used to describe the community in 1872, when it already had a high number of Polish residents and businesses. Historically, Poles took great pride in their communities: in a 1912 survey of Chicago, 71% of Polish homes were in good repair, compared with 26% of homes in the black section and 54% in the ethnically mixed stockyards district. Polish neighborhoods were consistently low in FBI crime statistics, particularly in Pennsylvania, despite being economically depressed during much of the 20th century. Polish Americans were highly reluctant to move to the suburbs as other white ethnics were fleeing Detroit. Poles had invested millions of dollars in their churches and parochial schools, and World War I relief drives had drained their savings (the Polish National Fund alone received $5,187,000 by 1920); additional savings went to family and friends in Poland, to whom many immigrants and their children sent money.

During the 1960s, the black population of Detroit increased by 98,000 while 386,000 whites left the city. Polish Americans and blacks entering the urban communities often lived next door to each other, at times in close confrontation. In Chicago and other northern cities, historian Joseph Parot observed real estate agents pressing white couples to move to the suburbs while encouraging blacks to move into Polish ethnic communities; Parot found that housing patterns in multiple cities commonly used white ethnics such as Poles and Italians as "buffer zones" between black and white areas. Poles who stayed in the cities generally lost ties with their children, who moved away to start new families, and faced an increase in crime and racial tension with the growing black population. In the mid-1960s, the few Polish American protests against the disintegration of their ethnic communities were portrayed in the media as "racist". Poles were not cooperative with government incursions into their neighborhoods; in Pittsburgh's Model Cities Program, tax money paid by the residents was used to tear down blocks of a Polish community to build low-income housing for blacks and Hispanics. In the predominantly Polish Catholic parish of St. Thaddeus, parishioners were demoralized by orders from the Archdiocese of Detroit mandating that a percentage of proceeds from church events go to low-income black parishes. Roman Gribbs, the Polish American mayor of Detroit from 1970 to 1974, when the city was roughly half white and half black, believes the major exodus of whites happened when children attending public school faced increased crime and physical danger. Detroit became known as the murder capital of America during the 1970s, and Polish American residents suffered several murders. In 1975, the Detroit Polish community was outraged by the killing of Marian Pyszko, a World War II freedom fighter and six-year concentration camp survivor, who was killed by three African American youths avenging the accidental shooting of their friend. The man who shot their friend was sentenced to three years for reckless use of a firearm, but the three youths who killed Pyszko were acquitted of all charges.
The jurors argued that the riot involved far more people than the three boys (roughly 700 people took part in the Livernois-Fenkell riot where Pyszko was attacked) and that there was insufficient evidence to convict them. The Polish community was disgusted by the lack of justice it faced in Detroit, and enmity toward blacks grew during the 1960s and 1970s. Many Polish Americans were forced out by the construction of freeways, public housing, and industrial complexes; more than 25% of Hamtramck's population was displaced by the building of Interstate 75. Poles saw their communities disintegrate as forces such as blockbusting caused their longtime friends and neighbors to join the white flight. The quality of life for those who stayed decreased rapidly, as did the sense of community:
Having lived here since her exodus from Poland at age fourteen, my grandmother is bombarded daily with phone calls from high-pressure realtors who tell her she better hurry and sell before "they" all move in and the house becomes worthless. The pitch has succeeded all too well with others and occasionally she admits that "maybe it would be better"... I become angry at those who flee because of fear, bigotry or ignorance. It seems people keep pushing farther and farther out of the city all the while saying it isn't worth their help. I became angry at those who remain and have lost the hope that is so vital for a neighborhood's survival. Many talk of getting out, of biding their time, while ignoring the garbage strewn in the alley behind their houses. Have we become so service oriented that we won't pick up an old tire laying in the street because it's "the city's job: it's not my property?"
As late as 1970, Hamtramck and Warren, Michigan, were heavily Polish. These communities (and their counterparts in Polish areas of Chicago) rapidly changed into naturally occurring retirement communities as young families and single adults fled, leaving the elderly alone. Many elderly Polish Americans suffered a loss of control over their daily lives, losing the assistance of their children and seeing the community they relied on for help and services shrink. Many withdrew from public life into private consumption and activities to occupy their time, and depression, isolation, and loneliness increased among many of Detroit's Poles. The Hamtramck neighborhood had been inhabited chiefly by Polish immigrants and their children until most moved to Warren, north of Detroit. Homes left behind were old and expensive to maintain; many fell into disrepair and neglect, litter accumulated, and children's playgrounds were deserted.
In the late 1960s and 1970s, Americans of Polish descent felt their social status reach a new low. Polish Americans were portrayed as bigoted and racist toward Blacks during the 1960s, as an increasing number of southern Blacks came into conflict with Poles in cities such as Detroit and Chicago. In Detroit in particular, Polish Americans were among the last white ethnic groups to remain in the city as its demographics changed into a Black enclave. Poles resented Black newcomers to their urban communities, and resented white liberals who called them racist for their attempts to remain in Polish-majority communities. Poles in Chicago fought against blockbusting by real estate agents who ruined the market value of their homes while changing their communities into low-income, high-crime areas. Poles in Chicago opposed the open housing efforts of Martin Luther King, Jr., who encouraged black integration into Polish urban communities; his policies and the resulting integration efforts led to violent riots between Poles and Blacks in 1966 and 1967, particularly in Detroit. In 1968, a local president of the Chicago Polish Homeowners' Association raised a flag from half-mast to full-mast on the day of King's death, nearly sparking a riot. Polish homeowners in Hamtramck were dealt a legal blow in 1971 when a Michigan federal court ruled against urban renewal efforts that had effectively decreased the community's black population. The experience created a rift between Polish Americans and political liberalism; Poles were labeled as racist by white liberals who had already fled to the suburbs and had no connection to the violence and urban strife facing Polish American communities. Poles were similarly disgusted by the affirmative action programs institutionalized in their workplaces and schools, and felt they were being unfairly blamed for historical slavery and the economic and political disenfranchisement of blacks in America. Race relations between whites and blacks had been poor in many cities, but while the progress of the Civil Rights movement made anti-Black discrimination highly unacceptable, anti-Polish discrimination did not gain the same legal safeguards. Highly offensive jokes commonly replaced the word "black" or "nigger" with "Polack"; historian Bukowczyk, for example, heard a student in Detroit tell such a "joke".
When he asked the student why she had told the Polish joke, she said it was originally a black joke, but the word "nigger" was replaced by "Polack" because she did not want to be "prejudiced".
Polish jokes were everywhere in the 1960s and 1970s. In the late 1960s a book of Polish jokes was published and copyrighted, and commercial goods, greeting cards, and merchandise followed, profiting at the expense of Poles. Polish stereotyping was deeply pervasive in America, and assimilation, upward mobility, higher education, and even intermarriage did not solve the problem. In 1985, Bukowczyk recalled meeting a college student from largely Polish Detroit, Michigan, who lived in a home where her Irish-American mother would sometimes call her Polish-American father a "dumb Polack." Polish Americans were ashamed of their identities, and thousands changed their names to fit into American society. The American media spread an image of the Polish male as a "jock": typically large, strong, and athletically tough, but lacking in intelligence.
Thomas Tarapacki theorized that the prominence and high visibility of Polish Americans in sports during the postwar era contributed to the Polish jokes of the 1960s and 1970s. Although Poles were succeeding in all types of sports, including tennis and golf, they came to dominate football rosters in large numbers beginning in the 1930s and 1940s. Blue-collar, working-class Americans repeatedly saw their favorite team rosters filled with Polish names and began to closely identify the two. Poles were in many regards proud of Polish American successes in American sports, and a Hall of Fame was constructed to celebrate them. By the 1960s, however, Tarapacki argues, Polish Americans struggled to combat the "jock" image because there had been no comparable national recognition of their successes in fields other than athletics.
Polish Americans often downplayed their ethnicity and changed their names to fit into American society. During the late 19th and early 20th centuries, name changes were commonly made by immigration agents at Ellis Island; an example is the family of Edmund Muskie, whose Polish surname was Marciszewski. During the 1960s and 1970s, an unprecedented number of Poles voluntarily chose to Anglicize their own names. In Detroit alone, over 3,000 of the area's 300,000 Polish Americans changed their names every year during the 1960s. Americans made no effort to respect or learn the pronunciation of Polish last names, and Poles who reached positions of public visibility were told to Anglicize their names. Many people, according to linguist John M. Lipski, "are convinced that all Polish names end in -ski and contain difficult consonant clusters." Although "very little is known about the psychological parameters," Lipski speculates about the reasons for mispronunciation; for example, he found that English speakers consistently mispronounced his own two-syllable surname, Lipski, because, he speculates, an emotion-based "inherent ethnolinguistic 'filtering mechanism' rejects" a simple two-syllable sequence when there is an expectation that all Polish names are "unpronounceable." In areas with no significant Slavic populations, such as Houston, Texas, Lipski found mispronunciations were nonexistent; he experienced mispronunciations often in Toledo, Ohio, and Alberta, Canada, where there were greater Slavic populations, which he believed reflected unconscious prejudice. With little tolerance for learning and appreciating Polish last names, Americans viewed Poles who refused to change their names as unassimilable greenhorns. Even more commonly, Polish American children quickly changed their first names to American versions (Mateusz to Matthew, Czeslaw to Chester, Elzbieta to Elizabeth, Piotr to Peter). A 1963 study based on probate court records of 2,513 Polish Americans who voluntarily changed their last names found a pattern: over 62% changed their names entirely to one with no resemblance to the Polish original (for example, Czarnecki to Scott, Borkowski to Nelson, and Kopacz to Woods). The second most common choice was to drop the Polish-sounding ending (Ewanowski to Evans, Adamski to Adams, Dobrogowski to Dobro), often with an Anglicized addition (Falkowski to Falkner, Barzyk to Barr); these subtractions and Anglicized combinations accounted for roughly 30% of cases. It was very rare for a name to be shortened to a Polish-sounding ending (Niewodomski to Domski, Karpinski to Pinski, Olejarz to Jarz); such examples accounted for less than 0.3% of cases.
During the 1970s, Polish Americans began to take pride in their ethnicity and to identify with their Polish roots. Pins and T-shirts reading "Kiss me, I'm Polish" and "Polish Power" began selling in the 1960s, and Polish polka enjoyed growing popularity. In 1972, 1.1 million more people reported Polish ethnicity to the U.S. Census Bureau than had done so only three years earlier. Public figures began to express their Polish identity openly, and several Poles who had changed their names for career advancement changed them back. The book The Rise of the Unmeltable Ethnics (1971) explored the resurgence of white ethnic pride in America at the time.
Polish Americans (and Poles around the world) were elated by the election of Pope John Paul II in 1978, and Polish identity and ethnic pride grew as a result of his papacy. Polish Americans celebrated when he was elected pope, and Poles worldwide were eager to see him in person. John Paul II's charisma drew large crowds wherever he went, and American Catholics organized pilgrimages to see him in Rome and in Poland. Polish pride reached a height unseen by generations of Polish Americans; sociologist Eugene Obidinski said, "there is a feeling that one of our kind has made it. Practically every issue of the Polish American papers reminds us that we are in a new glorious age." Polish Americans felt doubly blessed by the election: reportedly, Polish American Cardinal John Krol had played kingmaker at the papal conclave, and Karol Wojtyła became the first Polish pope. John Paul II's wide popularity and political influence gave him soft power crucial to Poland's Solidarity movement; his visit to Poland and open support for Solidarity are credited with hastening the end of communism in Poland, as well as the subsequent fall of the Iron Curtain. John Paul II's theology was staunchly conservative on social and sexual issues, and though he was popular as a religious and political figure, church attendance among Polish Americans slowly declined during his papacy. John Paul II used his influence with the Polish American faithful to reconnect with the Polish National Catholic Church and won some supporters back to the Catholic Church. He reversed the nearly 100-year-old excommunication of Francis Hodur and affirmed that those who received sacraments at the National Church were receiving a valid Eucharist; in turn, Prime Bishop Robert M. Nemkovich attended the funeral of John Paul II in 2005. John Paul II remains a popular figure for Polish Americans, and American politicians and religious leaders have invoked his memory to build cultural connection.
Polish Americans found that they were not protected by the United States court system in defending their own civil rights. The Civil Rights Act of 1964 Title VII states: "No person in the United States shall on the grounds of race, color, or national origins, be excluded from participation in, or denied the benefits of, or be subjected to discrimination. '' In Budinsky v. Corning Glass Works, an employee of Slavic origin was fired after 14 years for speaking up about name - calling and anti-Slavic discrimination by his supervisors. The judge ruled that the statute did not extend beyond "race '', and the employment discrimination suit was dismissed because he was therefore not part of a protected class. In Kurylas v. U.S. Department of Agriculture, in the District of Columbia, a Polish American bringing suit over equal opportunity employment was told by the court that his case was invalid, as "only nonwhites have standing to bring an action ''. Poles were also dealt a blow by the destruction of their Poletown East, Detroit, community in 1981, when corporations ' use of eminent domain prevailed against them in court and displaced their historic neighborhood. Aloysius Mazewski of the Polish American Congress felt that Poles were overlooked by the eminent domain and corporate personhood changes to U.S. law, arguing for a change in laws so that "groups as well as individuals '' could launch anti-defamation lawsuits and confront civil rights charges. Senator Barbara Mikulski supported such a measure, although no effort to amend the law for ethnic groups not recognized as racial minorities has been successful.
U.S. President Ronald Reagan and Pope John Paul II placed great pressure on the Soviet Union in the 1980s, leading to Poland 's independence. Reagan supported Poland 's independence by actively protesting against martial law. He urged Americans to light candles for Poland to show support for their freedoms, which were being repressed by communist rule. In 1982, Reagan met with leaders from western Europe to push for economic sanctions on the Soviet Union in return for liberalizing Poland. Reportedly, European leaders were wary of Russia and sought to practice an ongoing detente, but Reagan pressed firmly for punitive measures against the USSR. The public image of the Polish suffering in an economically and politically backward state hurt the Soviets ' image abroad; to change public perception, the Soviets granted amnesty to several Polish prisoners and gave a one - time economic stimulus to boost the Polish economy. George H.W. Bush met with Solidarity leaders in Poland beginning in 1987 as vice president. On April 17, 1989, Bush, in his first foreign policy address as president, announced his economic policy toward Poland, offering money in return for political liberalization in the communist regime. The address venue, Hamtramck, was chosen because it had a large Polish American population. Banners at the event included Solidarnosc signs and a backdrop of "Hamtramck: a touch of Europe in America ''. Bush 's announcement was politically risky both because it promised trade and financial credit during a tight U.S. budget and because it placed the White House, and not the State Department, as the key decision maker on foreign diplomacy. Bush 's original aid plan was a modest stimulus package estimated at $2 -- 20 million, but by 1990, the United States and allies granted Poland a package of $1 billion to revitalize its newly capitalist market. The U.S. Ambassador in Poland, John R. Davis, found that Bush 's speech was closely watched in Poland and that Poles were eagerly awaiting follow - up on his speech. Davis predicted that the July 1989 visit by Bush to Poland "will be an action - forcing event for the Polish leadership '' and could radically change their government. In Poland, Davis assessed that, "the U.S. occupies such an exaggerated place of honor in the minds of most Poles that it goes beyond rational description. '' The perception of the U.S., according to Davis, was partially "derive (d) from (the) economic prosperity and lifestyle, enjoyed by 10 million Polish - Americans and envied by their siblings and cousins left behind. ''
Polish immigration to the United States experienced a small wave in the years following 1989. Specifically, the fall of the Berlin Wall and the subsequent fall of Soviet control freed emigration from Poland. A pent - up demand of Poles who previously were not allowed to emigrate was satisfied, and many left for Germany or America. The United States Immigration Act of 1990 admitted immigrants from 34 countries adversely affected by a previous piece of immigration legislation; in 1992, when the Act was implemented, over a third of Polish immigrants were approved under this measure. The most popular destination for Polish immigrants following 1989 was Chicago, followed by New York City. This was the oldest cohort of immigrants from Poland, averaging 29.3 years in 1992.
American media depictions of Poles have been historically negative. Fictional Polish - Americans include Barney Gumble, Moe Szyslak, Banacek, Ernst Stavro Blofeld, Brock Samson, Walt Kowalski of Gran Torino, and characters in The Big Lebowski and Polish Wedding. Polish characters tend to be brutish and ignorant, and are frequently the butt of jokes in the pecking order of the show. In the series Banacek, the main character was described as "not only a rugged insurance sleuth but also a walking lightning rod for Polish jokes. '' In the 1961 film West Side Story, the character Chino takes issue with the Caucasian Tony, who is of mixed Polish and Swedish heritage, and has a line in which he says, "If it 's the last thing I do, I 'm gon na kill that Polack! '' The slurring of Tony 's ancestry is unique in that none of the other white ancestries are targeted. Folklorist Mac E. Barrick observed that TV comedians were reluctant to tell ethnic jokes until Spiro Agnew 's "polack jokes '' in 1968, pointing to an early Polish joke told by comedian Bob Hope in 1968, referencing politicians. Barrick stated that "even though the Polack joke usually lacks the bitterness found in racial humor, it deals deliberately with a very small minority group, one not involved in national controversy, and one that has no influential organization for picketing or protesting. '' During the 1960s and 1970s, there was a revived expression of white ethnicity in American culture. The popular 1970s sitcom Barney Miller depicted Polish - American character Sergeant Wojohowicz as uneducated and mentally slow. Among the worst offenders was the popular 1970s sitcom All in the Family, where protagonist Archie Bunker routinely called his son - in - law a "dumb Polack ''. The desensitization caused by the hateful language in All in the Family created a mainstream acceptance both of the jokes and of the word Polack. Sociologist Barbara Ehrenreich called the show "the longest - running Polish joke. '' In the series Coach, the character Dauber Dybinski played the "big, dumb hulk of a player '' role for nine seasons, and a spin - off character, George Dubcek (also with a Polish name), in Teech displayed the "burly but dumb son of a former football player ''. In the movie The End, the lead supporting character, Marlon Burunki, is depicted as an oafish and schizophrenic Polish - American in a mental institution. The term Polack was so pervasive in American society through the 1960s and 1970s that high - ranking U.S. politicians followed suit. In 1978, Senator Henry Jackson of Washington made Polish jokes at a banquet. Ronald Reagan told Polish jokes multiple times during his presidential campaign in 1980 and during his presidency. As late as 2008, Senator Arlen Specter of Pennsylvania told Polish jokes to an audience of Republican supporters. Reportedly an audience member interrupted him, saying, "Hey careful, I 'm Polish '', and Specter replied, "That 's ok, I 'll tell it more slowly. '' Mayor Marion Barry slurred Poles in 2012, and was apparently unaware the word Polacks was inappropriate.
The Polish American community has pursued litigation to stop negative depictions of Poles in Hollywood, often to no avail. The Polish American Congress petitioned the Federal Communications Commission against the American Broadcasting Company (ABC), accusing it "of a ' consistent policy ' of portraying the ' dumb polack image ' '' and citing a 1972 episode of The Dick Cavett Show in which Steve Allen told Polish jokes, and the next episode, in which Allen 's "alleged ' apology ' was, '' according to the petition, "surrounded by a comic setting and was the basis for more demeaning humor. '' New York State 's highest appellate court, in State Division of Human Rights v. McHarris Gift Center (1980), ruled that a gift shop was allowed to sell merchandise with "Polack jokes '' on it; the decision was one vote short of finding the sales illegal under public accommodations statutes, which provide that Polish customers should be welcome and free from discrimination in a place of business. A lawsuit filed against Paramount Pictures in 1983 over "Polish jokes '' in the movie Flashdance was thrown out of court, as the judge found "that ' the telling of Polish jokes does not attain that degree of outlandishness ' to jeopardize Poles ' employment and business opportunities. ''
Polish Americans are largely assimilated into American society, and personal connections to Poland and Polish culture are scarce. Of the 10 million Polish Americans, only about 4 % are immigrants; the American - born Poles predominate. Among Poles of single ancestry, about 90 % report living in a mixed - ethnic neighborhood, usually with other white ethnics. No congressional district or large city in the United States is predominantly Polish, although several Polish enclaves exist. Among American - born citizens of Polish ancestry, roughly 50 % report eating Polish dishes, and many can name a variety of Polish foods unprompted. Whereas over 60 % of Italian Americans reported eating Italian food at least once a week, less than 10 % of Polish Americans ate Polish food once a week. This is still a higher rate than among Irish Americans, who can only name a few traditional Irish foods (typically corned beef and cabbage), and only 30 % of whom report eating Irish food each year. Even fewer English, Dutch, and Scottish Americans report eating their ethnic cuisines regularly.
There has been growth in Polonia institutions in the early 21st century. The Piast Institute was founded in 2003 and remains the only Polish think tank in America. It has been recognized by the United States Census Bureau as an official Census Information Center, lending its historical information and policy information to interested Polish Americans. Poles in politics and public affairs have greater visibility and an avenue to address issues in the Polonia community through the American Polish Advisory Council. Both are secular institutions. Historically, Polish Americans linked their identity to the Catholic Church, and according to historian John Radzilowski, "Secular Polish Americanness has proved ephemeral and unsustainable over the generations '', citing as evidence the decline of Polish parishes as reason for the decline in Polish American culture and language retention, since the parish served as an "incubator for both ''.
The first Polish American encyclopedia was published in 2008 by James S. Pula. In 2009, the Pennsylvania state legislature voted to approve the first - ever Polish American Heritage Month.
Polish Americans continue to face discrimination and negative stereotyping in the United States. In February 2013, a YouTube video on Pączki Day included comments saying that on that day, "everybody is Polish, which means they are all fat and stupid. '' The Polish Consulate contacted the man who made the video and YouTube, urging that it be taken down; it has since been removed from YouTube. Polish jokes by late night host Jimmy Kimmel were answered by a letter from the Polish American Congress in December 2013, urging Disney - ABC Television to discontinue ridiculing Poles as "stupid ''. On October 4, 2014, lawyers for Michael Jagodzinski, a mining foreman in West Virginia, announced a lawsuit against his former employer, Rhino Eastern, for discrimination based on national origin. Jagodzinski faced insults and taunts from the workers, who had written graffiti and called him a "dumb Polack '', and was fired after raising the issue to management, who had refused to take any corrective measures to stop it. As part of a consent decree settled in January 2016, Jagodzinski received monetary relief.
The United States Geological Survey continues to list natural monuments and places with the name Polack. As of 2017, there are six topographic features and one locale with the name "Polack ''.
|
what triggers the release of calcium from the sarcoplasmic reticulum | Sarcoplasmic reticulum - wikipedia
The sarcoplasmic reticulum (SR) is a membrane - bound structure found within muscle cells that is similar to the endoplasmic reticulum in other cells. The main function of the SR is to store calcium ions (Ca). Calcium ion levels are kept relatively constant, with the concentration of calcium ions within a cell being 100,000 times smaller than the concentration of calcium ions outside the cell. This means that small increases in calcium ions within the cell are easily detected and can bring about important cellular changes (the calcium is said to be a second messenger; see calcium in biology for more details). Calcium is used to make calcium carbonate (found in chalk) and calcium phosphate, two compounds that the body uses to make teeth and bones. This means that too much calcium within the cells can lead to hardening (calcification) of certain intracellular structures, including the mitochondria, leading to cell death. Therefore, it is vital that calcium ion levels are controlled tightly, so that calcium can be released into the cell when necessary and then removed from the cell.
The sarcoplasmic reticulum is a network of tubules that extend throughout muscle cells, wrapping around (but not in direct contact with) the myofibrils (contractile units of the cell). Cardiac and skeletal muscle cells contain structures called transverse tubules (T - tubules), which are extensions of the cell membrane that travel into the centre of the cell. T - tubules are closely associated with a specific region of the SR, known as the terminal cisternae in cardiac muscle or junctional SR in skeletal muscle, with a distance of roughly 12 nanometers separating them. This is the primary site of calcium release. The longitudinal SR are thinner projections that run between the terminal cisternae / junctional SR, and are the location where ion channels necessary for calcium ion absorption are most abundant. These processes are explained in more detail below and are fundamental for the process of excitation - contraction coupling in skeletal, cardiac and smooth muscle.
The SR contains ion channel pumps within its membrane that are responsible for pumping Ca into the SR. As the calcium ion concentration within the SR is higher than in the rest of the cell, the calcium ions will not freely flow into the SR, and therefore pumps are required that use energy, which they gain from a molecule called adenosine triphosphate (ATP). These calcium pumps are called Sarco (endo) plasmic reticulum ATPases (SERCA). There are a variety of different forms of SERCA, with SERCA 2a being found primarily in cardiac and skeletal muscle.
SERCA consists of 13 subunits (labelled M1 - M10, N, P and A). Calcium ions bind to the M1 - M10 subunits (which are located within the membrane), whereas ATP binds to the N, P and A subunits (which are located outside the SR). When two calcium ions, along with a molecule of ATP, bind to the cytosolic side of the pump (i.e. the region of the pump outside the SR), the pump opens. This occurs because ATP (which contains three phosphate groups) releases a single phosphate group (becoming adenosine diphosphate). The released phosphate group then binds to the pump, causing the pump to change shape. This shape change causes the cytosolic side of the pump to open, allowing the two Ca to enter. The cytosolic side of the pump then closes and the sarcoplasmic reticulum side opens, releasing the Ca into the SR.
A protein found in cardiac muscle, called phospholamban (PLB), has been shown to prevent SERCA from working. It does this by binding to the SERCA and decreasing its affinity for calcium, therefore preventing calcium uptake into the SR. Failure to remove Ca from the cytosol prevents muscle relaxation and therefore means that there is a decrease in muscle contraction too. However, molecules such as adrenaline and noradrenaline can prevent PLB from inhibiting SERCA. When these hormones bind to a receptor, called a beta 1 adrenoceptor, located on the cell membrane, they produce a series of reactions (known as a cyclic AMP pathway) that activates an enzyme called protein kinase A (PKA). PKA can add a phosphate to PLB (this is known as phosphorylation), preventing it from inhibiting SERCA and allowing for muscle relaxation.
Located within the SR is a protein called calsequestrin. This protein can bind to around 50 Ca, which decreases the amount of free Ca within the SR (as more is bound to calsequestrin). Therefore, more calcium can be stored (the calsequestrin is said to be a buffer). It is primarily located within the junctional SR / terminal cisternae, in close association with the calcium release channel (described below).
Calcium ion release from the SR, occurs in the junctional SR / terminal cisternae through a ryanodine receptor (RyR) and is known as a calcium spark. There are three types of ryanodine receptor, RyR1 (in skeletal muscle), RyR2 (in cardiac muscle) and RyR3 (in the brain). Calcium release through ryanodine receptors in the SR is triggered differently in different muscles. In cardiac and smooth muscle an electrical impulse (action potential) triggers calcium ions to enter the cell through an L - type calcium channel located in the cell membrane (smooth muscle) or T - tubule membrane (cardiac muscle). These calcium ions bind to and activate the RyR, producing a larger increase in intracellular calcium. In skeletal muscle, however, the L - type calcium channel is bound to the RyR. Therefore activation of the L - type calcium channel, via an action potential, activates the RyR directly, causing calcium release (see calcium sparks for more details). Also, caffeine (found in coffee) can bind to and stimulate RyR. Caffeine works by making the RyR more sensitive to either the action potential (skeletal muscle) or calcium (cardiac or smooth muscle) therefore producing calcium sparks more often (this can result in increased heart rate, which is why we feel more awake after coffee).
Triadin and Junctin are proteins found within the SR membrane that are bound to the RyR. The main role of these proteins is to anchor calsequestrin (see above) to the ryanodine receptor. At ' normal ' (physiological) SR calcium levels, calsequestrin binds to the RyR, Triadin and Junctin, which prevents the RyR from opening. If calcium concentration within the SR falls too low, there will be less calcium bound to the calsequestrin. This means that there is more room on the calsequestrin to bind to the junctin, triadin and ryanodine receptor, so it binds more tightly. However, if calcium within the SR rises too high, more calcium binds to the calsequestrin and therefore it binds to the junctin - triadin - RyR complex less tightly. The RyR can therefore open and release calcium into the cell.
In addition to the effects of PKA on phospholamban (see above), which result in increased relaxation of the cardiac muscle, PKA (as well as another enzyme called calmodulin kinase II) can also phosphorylate ryanodine receptors. When phosphorylated, RyRs are more sensitive to calcium, therefore they open more often and for longer periods of time. This increases calcium release from the SR, increasing the rate of contraction. Therefore, in cardiac muscle, activation of PKA, through the cyclic AMP pathway, results in increased muscle contraction (via RyR2 phosphorylation) and increased relaxation (via phospholamban phosphorylation), which increases heart rate.
The mechanism behind the termination of calcium release through the RyR is still not fully understood. Some researchers believe it is due to the random closing of ryanodine receptors (known as stochastic attrition), or the ryanodine receptors becoming inactive after a calcium spark, while others believe that a decrease in SR calcium, triggers the receptors to close (see calcium sparks for more details).
|
which of the following is not among the features of the meals on wheels program | Meals on Wheels - wikipedia
Meals on Wheels is a program that delivers meals to individuals at home who are unable to purchase or prepare their own meals. The name is often used generically to refer to home - delivered meals programs, not all of which are actually named "Meals on Wheels ''. Because recipients are typically housebound, many of them are elderly, and many of the volunteers are also elderly but able - bodied and able to drive automobiles.
Research shows that home - delivered meal programs significantly improve diet quality, increase nutrient intakes, reduce food insecurity and improve quality - of - life among the recipients. The programs also reduce government expenditures by reducing the need of recipients to use hospitals, nursing homes or other expensive community - based services.
Meals on Wheels originated in the United Kingdom during the Blitz, when many people lost their homes and therefore the ability to cook their own food. The Women 's Voluntary Service for Civil Defence (WVS, later WRVS) provided food for these people. The name "Meals on Wheels '' derived from the WVS 's related activity of bringing meals to servicemen. The concept of delivering meals to those unable to prepare their own evolved into the modern programmes that deliver mostly to the housebound elderly, sometimes free, or at a small charge.
The first home delivery of a meal on wheels following World War II was made by the WVS in Hemel Hempstead, Hertfordshire, England in 1943. Many early services used old prams to transport the meals, using straw bales, and even old felt hats, to keep the meals warm in transit.
This type of service requires many volunteers with an adequate knowledge of basic cooking to prepare the meals by a set time each day. The majority of local authorities in the United Kingdom have now moved away from freshly cooked food delivery, and towards the supply of frozen pre-cooked reheatable meals.
One of the pioneers of meals on wheels was said to be Harbans Lall Gulati, a general practitioner from Battersea, according to his obituary in the British Medical Journal. This claim, however, is disputed, as there is little evidence.
Doris Taylor MBE founded Meals on Wheels in South Australia in 1953, and in 1954 the first meal was served from the Port Adelaide kitchen. The first meals were delivered to eight elderly Port Adelaide residents on 9 August 1954.
In New South Wales, Meals on Wheels was started in March 1957 by the Sydney City Council. In the first week, 150 meals were served for inner city dwellers; these were cooked in the Town Hall kitchen.
Organised on a regional basis, Meals on Wheels in Australia is a well established, active and thriving group of organisations. Notable examples include the organisations in New South Wales, Queensland, South Australia, Western Australia and Victoria.
In 2012, the Queensland branch of Meals on Wheels was a recipient of the Queensland Greats Awards.
Established in 1974, the oldest and largest national organization is Meals on Wheels America, which supports more than 5,000 community - based senior nutrition organizations across the country. By providing funding, leadership, education and advocacy support, Meals on Wheels America empowers its local member programmes to provide services to their communities. With local programmes, they galvanize the resources of local community organizations, businesses, donors, sponsors and more than two million volunteers -- bolstered by supplemental funding from the Older Americans Act -- into a national safety net for seniors.
Currently, they are located in Arlington, Virginia, and headed by President and CEO Ellie Hollander.
Pennsylvania
The first home - delivered meal program in the United States began in Philadelphia, Pennsylvania, in January 1954. At the request of the Philadelphia Health & Welfare Council, and funded by a grant from the Henrietta Tower Wurtz Foundation, Margaret Moffat Toy, a social worker in Philadelphia 's Lighthouse Community Center, pioneered a program to provide nourishment that met the dietary needs of homebound seniors and other "shut - ins '' in the area who otherwise would have to go hungry. As is the case today, many participants were people who did not require hospitalization, but who simply needed a helping hand in order to maintain their independence. Most of the volunteers were high school students, who were dubbed "Platter Angels. '' The "Platter Angels '' would prepare, package, and deliver food to the elderly and disabled through their community. The daily delivery consisted of one nutritionally balanced hot meal to eat at lunch time, and a dinner consisting of a cold sandwich and milk along with varying side dishes.
Margaret 's legacy of fighting hunger continues today. Her collegiate sorority, Alpha Gamma Delta, officially partnered with Meals on Wheels in 2017 as part of their international philanthropic focus.
Columbus, Ohio, was the second city in the U.S. to establish a community based meals program. Building on the model set forth in Philadelphia, a federation of women 's clubs went through the town to inform themselves of possible participants for a meal service. In Columbus, all of the meals were prepared by local restaurants and delivered by taxi cabs during the week. On weekends, high school students filled the posts.
The city of Rochester, New York, began its home - delivered meal program in 1958. It was originally a pilot project initiated by the New York Department of Health and administered by the Visiting Nurse Service. The Bureau of Chronic Diseases and Geriatrics of the New York Department of Health underwrote the costs.
Also in the late 1950s, a group of concerned women in San Diego, California recognised that isolated seniors were in need of regular meals and human contact. What was to become Meals - on - Wheels Greater San Diego, Inc. started in 1960, and has served area seniors for 50 years. When the national association, Meals on Wheels Association of America (MOWAA), sought guidance with the parameters to be used in the evaluation process for locally - run agencies, Meals - on - Wheels Greater San Diego was used as a model to help define the benchmark for successful operations, and "set the standard '' for approval. The mission of Meals - on - Wheels Greater San Diego, Inc. is to support the independence and well - being of seniors. A private, not - for - profit corporation, Meals - on - Wheels San Diego strives to keep seniors independent in their own homes by delivering meals to those who are unable to adequately meet their own nutritional needs. Often, the availability of this service enables seniors to avoid seeking institutional alternatives. From modest beginnings, Meals - on - Wheels has grown into one of the largest senior service programmes in Southern California. Meals - on - Wheels is currently the only organization in the area home - delivering two meals a day, for seven days a week (including holidays), and providing modified diets to seniors, age 60 and older, throughout San Diego County. In 2009, agency - wide, 82 staff members supported over 2,200 volunteers who donated their time to home - deliver 450,000 meals to approximately 2,000 seniors throughout San Diego County. Debbie Case is the CEO and President of Meals - on - Wheels Greater San Diego.
Another California Meals On Wheels programme is Meals On Wheels West (MOWW), which has been delivering services to home - bound individuals since April 1974. The agency has since grown from an organization that served 8 clients in one city to one that provides meals and companionship to 396 individuals in 6 of Los Angeles County 's coastal communities by 2010. In 2010 alone, over 3,000 volunteers delivered more than 86,000 meals. Begun as a programme of the Westside Ecumenical Conference, MOWW attained its own non-profit status in 1994. The first challenge for RoseMary Regalbuto, CEO since 1987, upon her arrival at MOWW was to eliminate the waiting list by increasing the number of routes and clients served. Since then, no eligible clients have been made to wait for services. When Mrs. Regalbuto took the helm, MOWW was only serving Santa Monica. Due to her leadership, the agency now serves Santa Monica, Topanga, Pacific Palisades, Malibu and parts of Marina Del Rey. In 2010, 93 % of MOWW clients stated that MOWW was a major factor in their ability to remain in their own homes. 88 % of clients reported that the daily contact with Meals On Wheels West volunteers was important to them, and 50 % stated the volunteers were the only visitors during the day. 70 % of volunteers stay with the organization for more than 5 years, which allows for significant lasting connections with clients.
Meals on Wheels People was founded in 1969 in Portland, Oregon and currently produces 5,000 hot, nutritious meals five days each week, which are delivered to 34 senior centers throughout Multnomah County, Washington County and Clark County. The meals are served at noon to seniors in center dining rooms or sent out as Meals on Wheels to frail, homebound elderly. Meals on Wheels People continues to expand to other locations, such as the Edwards Community Center in Aloha, Oregon where, in partnership with Edwards Center Inc. and Washington County Disabled, Aging and Veterans Services, people over 60 years of age may receive hot lunches, alongside veterans and adults with developmental disabilities, thereby providing new community connections for several groups that might otherwise become isolated.
Ypsilanti, Michigan
Ypsilanti Meals on Wheels was a programme idea that originated from the city of Ypsilanti 's Mayor 's Council on Aging. As a result of the Council 's recommendation in the spring of 1973, the City Council agreed to appropriate $8000 to a meal delivery programme to begin on July 1, 1973. William Bingham, pastor of the First Baptist Church of Ypsilanti and a member of the mayor 's Council on Aging, became the first Executive Director. Volunteers prepared meals in the kitchen of the First Baptist Church. Meal service began on January 14, 1974 to 16 homebound individuals. As time passed and the demand grew, Ypsilanti Meals on Wheels purchased vans and equipped them with refrigerators and microwave ovens. Up until that time, volunteers had picked up cold meals, prepared at Eastern Michigan University, and heated them at each stop. Today, Ypsilanti Meals on Wheels owns four vans and delivers approximately 200 hot meals each day, Monday through Friday. Drivers pick up the meals at Eastern Michigan University 's Hoyt Conference Center and pack them into insulated containers that maintain the appropriate temperature until the delivery of the last meal.
Brampton, Ontario is the first city in Canada to deliver meals to seniors in need. In the spring of 1963, Ruby Cuthbert, a nurse, implemented the Meals on Wheels programme with the support of the local Soroptimist Club. Later, the Auxiliary group from Peel Memorial Hospital took over the responsibility and Brampton Meals on Wheels (BMOW) started with six meals a day.
Meals on Wheels was formed in response to a plea from the Hospital Chaplaincy Committee of the Calgary Presbytery of the United Church. In 1965, a study was undertaken by the Presbyterian United Church Women into the needs of the elderly living alone and those being discharged from hospitals with no help available during their convalescence. On November 30, 1965 the Calgary Church Women 's Community Care was incorporated and in 1976 the name was officially changed to "Calgary Meals on Wheels ''. In addition to the United Church, the Anglican, Baptist, Catholic and Presbyterian Churches supported the movement while interested volunteers and service clubs answered the call for help and proved to be the backbone of the fledgling organization. The United Way and the City of Calgary have also played a vital role in the success of this community service. On 15 November 1965 the first meal service started serving eight clients. By 1982 the number of clients had increased to in excess of 380 per day, requiring a move to a larger centre. In 2005, Calgary Meals on Wheels celebrated its 40th Anniversary, (having never missed a meal delivery in its 40 - year history), and delivered to some 1,900 clients, plus services to several unique programmes. The organisation is governed by a Board of Directors, all of whom are volunteers. A pool of some 750 volunteers donate just under 75,000 hours of time a year to deliver meals five days a week within Calgary city limits.
The first meals were delivered on 21 April 1969 for $0.65 each. There was one route on the south side with a total of three meals.
Meals on Wheels of Fredericton, New Brunswick began in 1967. They currently serve over 40,000 meals per year to about 300 clients. The organization will be celebrating 50 years of service in 2017.
Meals on Wheels began in Nova Scotia in 1969, with three volunteers delivering six meals in Halifax.
Since 1967, the Health and Home Care Society of BC has operated the Meals on Wheels programme in Vancouver and Richmond.
Meals on Wheels was originally created as a side project of The Home Welfare Association. A 1961 study recommended the establishment of a Meals on Wheels delivery service for people who were unable to prepare meals for themselves, such as the elderly and infirm. A three - year pilot project was started and they delivered the first meals on 30 June 1965. In 1981 the Home Welfare Association chapter was officially closed when the name was changed to Meals on Wheels of Winnipeg, Inc.
Longford
The first meals were delivered in Longford, a small county town in the Midlands. They were delivered by County Longford Social Services, organised by Sister Calasanctius of the local Convent of Mercy and the local hospital 's medical officer Dr Gerard McDonagh. The meals were distributed from a mobile kitchen for which funds had been raised by the local children. The fundraising had been organised by the Longford News (local newspaper) editor Derek Cobbe.
Some of the first meals were delivered by a volunteer driver, the late Pat Hourican, with volunteer helper the late Sr. Bonaventure. The mobile kitchens were built by a local businessman, Noel Hanlon, at his ambulance factory in the town. The vans had specially fitted gas cookers provided by the ambulance factory to keep the dinners warm. Meals were delivered then to some 400 people around Longford, mostly elderly or disabled, and were free of charge, supported by small grants and locally collected funds.
Today, Meals on Wheels programmes generally operate at the county level or smaller. Programmes vary widely in their size, service provided, organisation, and funding.
There are Meals on Wheels programmes in Australia, Canada, Ireland, the United Kingdom and the United States. The National Association of Care Catering is a source of information on UK Meals on Wheels services. The Meals On Wheels Association of America (MOWAA) is a national association for senior nutrition programmes headquartered in Alexandria, Virginia, but each programme operates independently.
Most Meals on Wheels programmes deliver meals hot and ready - to - eat, but some deliver cold meals in containers ready to microwave. Others supply deep - frozen meals. Some warm - meal programmes provide an additional frozen meal during the days prior to a weekend or holiday, when there would be no delivery. Depending on the programme, meals may be delivered by paid drivers or by volunteers. In addition to providing nutrition to sustain the health of a client, a meal delivery by a Meals on Wheels driver or volunteer also serves as a safety check and a source of companionship for the client.
Most clients of Meals on Wheels programmes are elderly, but others who are unable to shop or cook for themselves (as well as their pets) are generally eligible for assistance. In the United States, programmes receiving federal funding may not serve people less than 60 years old. US Federally funded programmes may only request voluntary contributions from clients, while other programmes often charge a moderate fee for service. Regardless of their sources of funding, eligibility for most programmes is determined solely by medical need, with financial need and actual ability to pay not making a difference either way.
In the county of Suffolk, the programme is referred to as "Community Meals ''. "Meals on Wheels '' services are provided for those who have been assessed to have difficulty cooking for themselves. Community Meals services can comprise daily hot meals, chilled meals or a weekly or fortnightly delivery of frozen meals. Traditional hot deliveries are cooked in a central kitchen then transported to the service user.
Support to the elderly is also provided by WRVS (formerly named Women 's Royal Voluntary Service).
National Association of Care Catering Community Meals Week is a national event aiming to increase visibility of Community Meals Services. In October 2008, Camilla, The Duchess of Cornwall assisted in Meals on Wheels Week activities.
Increasingly in the UK, commercial rather than voluntary or local authorities organisations are providing the meals. For example, some Local Authorities have stopped providing hot meals and are instead delivering frozen pre-cooked meals. Other variations include using Apetito, who operate a "Chefmobil '' service which regenerates meals en route, and Apetito subsidiary Wiltshire Farm Foods, which operates a Meals on Wheels alternative service for those who do not meet assessment criteria.
October 2011 saw a Hairy Bikers series, Meals on Wheels, air on BBC Two. The series fronted a campaign with BBC Learning to save local ' meals on wheels ' services around the UK.
Halifax Meals on Wheels in Nova Scotia currently operate 68 programmes across the province; more than 600 volunteers serve an estimated 3400 meals a week. In Halifax, the service is partially funded by the municipality. The United Way also provides funding, depending on how much the programmes need. Organizations such as nursing homes and hospitals provide many of the meals; others come from restaurants and private homes. The programme is n't just for the elderly; people of any age who live alone often call when they 're recovering after a recent hospital stay and are unable to cook for themselves. Other users of Meals on Wheels are people with disabilities such as multiple sclerosis who use the programme to help them through a rough time when cooking becomes too difficult. In 1996, 56.7 % of clients in Halifax used the service for less than three months.
There are dozens of independent meals on wheels in Montreal, one of the largest and most innovative is the unique intergenerational Santropol Roulant, an organisation operated mainly by young volunteers in central Montreal neighbourhoods. Deliveries are done on foot, by bicycle and by hybrid car in some outlying routes.
Currently (2016) vans are still used to deliver meals around Longford by County Longford Social Services, a registered charity - 4 vans deliver to all areas of County Longford, but the meals are now hot soup and chilled main course and dessert - recipients have microwave ovens for reheating the dinners. Meals are provided 7 days per week, 365 days per year.
A study by Trinity College Dublin published in 2008 on behalf of the National Council on Ageing and Older People found that most of Ireland has been served by Meals on Wheels services (or centre - based alternatives) since the 1980s, with over half of providers being registered charities. Half of the services are noted to be parish - only, with many more serving a slightly larger area: the report notes only 5 % of providers serve "a significant proportion of their county '' (but it does not mention the longest - running service in Longford, which serves the whole county).
The Meals on Wheels Association of America (MOWAA) is headquartered in Alexandria, Virginia. MOWAA is the oldest and largest organization in the United States representing those who provide meal services to seniors in need, specifically those at risk of or experiencing hunger. MOWAA is a non-profit organization working toward the social, physical, nutritional and economic betterment of vulnerable Americans by providing the tools and information its programmes need to make a difference in the lives of others. In 2016, Meals on Wheels provided approximately 218 million meals to 2.5 million Americans. The annual meal cost is $2,765 per recipient. Approximately 500,000 of the recipients are veterans.
Citymeals - on - Wheels serves the New York City area. In 2008, Citymeals delivered over 2.1 million meals to 17,713 frail aged in every borough of New York City. In addition, over 1,500 volunteers collectively spent 62,000 hours visiting and delivering meals to New York 's frail aged. Gael Greene and James Beard founded Citymeals - on - Wheels in 1981 after reading a newspaper article about homebound elderly New Yorkers with nothing to eat on weekends and holidays. They rallied their friends in the restaurant community, raising private funds as a supplement to the government - funded weekday meal delivery programme. Twenty five years ago their first efforts brought a Christmas meal to 6,000 frail aged.
In 2007, the MOWAA Foundation commissioned a study on hunger (see next section). In 2009, MOWAA partnered with The Mission Continues, an organization which addresses the needs of veterans who have served the United States.
Specialty Meals on Wheels programmes, such as "Kosher Meals on Wheels '', also exist to service niche clientele.
The Meals On Wheels Association of America Foundation (MOWAAF), recognizing that hunger is a serious threat facing millions of seniors in the United States, determined that understanding of the problem is a critical first step to developing remedies. In 2007, MOWAAF, underwritten by the Harrah 's Foundation, commissioned a research study entitled The Causes, Consequences and Future of Senior Hunger in America. The report was released at a hearing of the U.S. Senate Special Committee on Aging in March 2008 in Washington, D.C.
The study found that in the United States, over 5 million seniors (11.4 % of all seniors) experience some form of food insecurity, i.e. are marginally food insecure. Of these, about 2.5 million are at - risk of hunger, and about 750,000 suffer from hunger due to financial constraints. Some groups of seniors are more likely to be at - risk of hunger. Relative to their representation in the overall senior population, those with limited incomes, under age 70, African - Americans, Hispanics, never - married individuals, renters, and persons living in the southern United States are all more likely to be at - risk of hunger. While certain groups of seniors are at greater risk of hunger, hunger cuts across the income spectrum. For example, over 50 % of all seniors who are at - risk of hunger have incomes above the poverty threshold. Likewise, it is present in all demographic groups. For example, over two - thirds of seniors at - risk of hunger are Caucasian. There are marked differences in the risk of hunger across family structure, especially for those seniors living alone, or those living with a grandchild. Those living alone are twice as likely to experience hunger compared to married seniors. One in five senior households with a grandchild, but no adult child, present is at risk of hunger, compared to about one in twenty households without a grandchild present. Seniors living in non-metropolitan areas are as likely to experience food insecurity as those living in metropolitan areas, suggesting that food insecurity cuts across the urban - rural continuum.
In March 2017, President Donald Trump 's proposed budget would make cuts to block grants that go towards spending on Meals on Wheels. Defending these cuts, director of the Office of Management and Budget Mick Mulvaney said that "Meals on Wheels sounds great '' but that the program is one of many that is "just not showing any results. ''
A 2013 review study on the impact of home - delivered meal programs found that "all but two studies found home - delivered meal programs to significantly improve diet quality, increase nutrient intakes, and reduce food insecurity and nutritional risk among participants. Other beneficial outcomes include increased socialization opportunities, improvement in dietary adherence, and higher quality of life. '' The study concluded, "Home - delivered meal programs improve diet quality and increase nutrient intakes among participants. These programs are also aligned with the federal cost - containment policy to rebalance long - term care away from nursing homes to home - and community - based services by helping older adults maintain independence and remain in their homes and communities as their health and functioning decline. ''
|
where is the right to trial by jury found | Sixth Amendment to the United States Constitution - wikipedia
The Sixth Amendment (Amendment VI) to the United States Constitution is the part of the United States Bill of Rights that sets forth rights related to criminal prosecutions. The Supreme Court has applied the protections of this amendment to the states through the Due Process Clause of the Fourteenth Amendment.
In all criminal prosecutions, the accused shall enjoy the right to a speedy and public trial, by an impartial jury of the State and district wherein the crime shall have been committed, which district shall have been previously ascertained by law, and to be informed of the nature and cause of the accusation; to be confronted with the witnesses against him; to have compulsory process for obtaining witnesses in his favor, and to have the Assistance of Counsel for his defence.
Criminal defendants have the right to a speedy trial. In Barker v. Wingo, 407 U.S. 514 (1972), the Supreme Court laid down a four - part case - by - case balancing test for determining whether the defendant 's speedy trial right has been violated. The four factors are the length of the delay, the reason for the delay, the time and manner in which the defendant asserted the right, and the degree of prejudice to the defendant caused by the delay.
In Strunk v. United States, 412 U.S. 434 (1973), the Supreme Court ruled that if the reviewing court finds that a defendant 's right to a speedy trial was violated, then the indictment must be dismissed and / or the conviction overturned. The Court held that, since the delayed trial is the state action which violates the defendant 's rights, no other remedy would be appropriate. Thus, a reversal or dismissal of a criminal case on speedy trial grounds means that no further prosecution for the alleged offense can take place.
In Sheppard v. Maxwell, 384 U.S. 333 (1966), the Supreme Court ruled that the right to a public trial is not absolute. In cases where excess publicity would serve to undermine the defendant 's right to due process, limitations can be put on public access to the proceedings. According to Press - Enterprise Co. v. Superior Court, 478 U.S. 1 (1986), trials can be closed at the behest of the government if there is "an overriding interest based on findings that closure is essential to preserve higher values and is narrowly tailored to serve that interest. '' The accused may also request a closure of the trial; though, it must be demonstrated that "first, there is a substantial probability that the defendant 's right to a fair trial will be prejudiced by publicity that closure would prevent, and second, reasonable alternatives to closure can not adequately protect the defendant 's right to a fair trial. ''
The right to a jury has always depended on the nature of the offense with which the defendant is charged. Petty offenses -- those punishable by imprisonment for no more than six months -- are not covered by the jury requirement. Even where multiple petty offenses are concerned, the total time of imprisonment possibly exceeding six months, the right to a jury trial does not exist. Also, in the United States, except for serious offenses (such as murder), minors are usually tried in a juvenile court, which lessens the sentence allowed, but forfeits the right to a jury.
Originally, the Supreme Court held that the Sixth Amendment right to a jury trial indicated a right to "a trial by jury as understood and applied at common law, and includes all the essential elements as they were recognized in this country and England when the Constitution was adopted. '' Therefore, it was held that juries had to be composed of twelve persons and that verdicts had to be unanimous, as was customary in England.
When, under the Fourteenth Amendment, the Supreme Court extended the right to a trial by jury to defendants in state courts, it re-examined some of the standards. It has been held that twelve came to be the number of jurors by "historical accident, '' and that a jury of six would be sufficient, but anything less would deprive the defendant of a right to trial by jury. The Sixth Amendment mandates unanimity in a federal jury trial. However, the Supreme Court has ruled that the Due Process Clause of the Fourteenth Amendment, while requiring states to provide jury trials for serious crimes, does not incorporate all the elements of a jury trial within the meaning of the Sixth Amendment. Thus, states are not mandated to require jury unanimity, unless the jury has only six members.
The Sixth Amendment requires juries to be impartial. Impartiality has been interpreted as requiring individual jurors to be unbiased. At voir dire, each side may question potential jurors to determine any bias, and challenge them if the same is found; the court determines the validity of these challenges for cause. Defendants may not challenge a conviction because a challenge for cause was denied incorrectly if they had the opportunity to use peremptory challenges.
In Peña - Rodriguez v. Colorado (2017), the Supreme Court ruled that the Sixth Amendment requires a court in a criminal trial to investigate whether a jury 's guilty verdict was based on racial bias. For a guilty verdict to be set aside based on the racial bias of a juror, the defendant must prove that the racial bias "was a significant motivating factor in the juror 's vote to convict. ''
Another factor in determining the impartiality of the jury is the nature of the panel, or venire, from which the jurors are selected. Venires must represent a fair cross-section of the community; the defendant might establish that the requirement was violated by showing that the allegedly excluded group is a "distinctive '' one in the community, that the representation of such a group in venires is unreasonable and unfair in regard to the number of persons belonging to such a group, and that the under - representation is caused by a systematic exclusion in the selection process. Thus, in Taylor v. Louisiana, 419 U.S. 522 (1975), the Supreme Court invalidated a state law that exempted women who had not made a declaration of willingness to serve from jury service, while not doing the same for men.
In Apprendi v. New Jersey, 530 U.S. 466 (2000), and Blakely v. Washington, 542 U.S. 296 (2004), the Supreme Court ruled that a criminal defendant has a right to a jury trial not only on the question of guilt or innocence, but also regarding any fact used to increase the defendant 's sentence beyond the maximum otherwise allowed by statutes or sentencing guidelines. In Alleyne v. United States, 11 - 9335 (2013), the Court expanded on Apprendi and Blakely by ruling that a defendant 's right to a jury applies to any fact that would increase a defendant 's sentence beyond the minimum otherwise required by statute.
Article III, Section 2 of the Constitution requires defendants be tried by juries and in the state in which the crime was committed. The Sixth Amendment requires the jury to be selected from judicial districts ascertained by statute. In Beavers v. Henkel, 194 U.S. 73 (1904), the Supreme Court ruled that the place where the offense is charged to have occurred determines a trial 's location. Where multiple districts are alleged to have been locations of the crime, any of them may be chosen for the trial. In cases of offenses not committed in any state (for example, offenses committed at sea), the place of trial may be determined by the Congress.
A criminal defendant has the right to be informed of the nature and cause of the accusation against him. Therefore, an indictment must allege all the ingredients of the crime to such a degree of precision that it would allow the accused to assert double jeopardy if the same charges are brought up in subsequent prosecution. The Supreme Court held in United States v. Carll, 105 U.S. 611 (1881), that "in an indictment... it is not sufficient to set forth the offense in the words of the statute, unless those words of themselves fully, directly, and expressly, without any uncertainty or ambiguity, set forth all the elements necessary to constitute the offense intended to be punished. '' Vague wording, even if taken directly from a statute, does not suffice. However, the government is not required to hand over written copies of the indictment free of charge.
The Confrontation Clause relates to the common law rule preventing the admission of hearsay, that is to say, testimony by one witness as to the statements and observations of another person to prove that the statement or observation was accurate. The rationale was that the defendant had no opportunity to challenge the credibility of and cross-examine the person making the statements. Certain exceptions to the hearsay rule have been permitted; for instance, admissions by the defendant are admissible, as are dying declarations. Nevertheless, in California v. Green, 399 U.S. 149 (1970), the Supreme Court has held that the hearsay rule is not the same as the Confrontation Clause. Hearsay is admissible under certain circumstances. For example, in Bruton v. United States, 391 U.S. 123 (1968), the Supreme Court ruled that while a defendant 's out of court statements were admissible in proving the defendant 's guilt, they were inadmissible hearsay against another defendant. Hearsay may, in some circumstances, be admitted though it is not covered by one of the long - recognized exceptions. For example, prior testimony may sometimes be admitted if the witness is unavailable. However, in Crawford v. Washington, 541 U.S. 36 (2004), the Supreme Court increased the scope of the Confrontation Clause by ruling that "testimonial '' out - of - court statements are inadmissible if the accused did not have the opportunity to cross-examine that accuser and that accuser is unavailable at trial. In Davis v. Washington 547 U.S. 813 (2006), the Court ruled that "testimonial '' refers to any statement that an objectively reasonable person in the declarant 's situation would believe likely to be used in court. In Melendez - Diaz v. Massachusetts, 557 U.S. ___ (2009), and Bullcoming v. New Mexico, 564 U.S. ___ (2011), the Court ruled that admitting a lab chemist 's analysis into evidence, without having him testify, violated the Confrontation Clause. In Michigan v. Bryant, 562 U.S. ___ (2011), the Court ruled that the "primary purpose '' of a shooting victim 's statement as to who shot him, and the police 's reason for questioning him, each had to be objectively determined. If the "primary purpose '' was for dealing with an "ongoing emergency '', then any such statement was not testimonial and so the Confrontation Clause would not require the person making that statement to testify in order for that statement to be admitted into evidence.
The right to confront and cross-examine witnesses also applies to physical evidence; the prosecution must present physical evidence to the jury, providing the defense ample opportunity to cross-examine its validity and meaning. Prosecution generally may not refer to evidence without first presenting it.
In the late 20th and early 21st century this clause became an issue in the use of the silent witness rule.
The Compulsory Process Clause gives any criminal defendant the right to call witnesses in his favor. If any such witness refuses to testify, that witness may be compelled to do so by the court at the request of the defendant. However, in some cases the court may refuse to permit a defense witness to testify. For example, if a defense lawyer fails to notify the prosecution of the identity of a witness to gain a tactical advantage, that witness may be precluded from testifying.
A criminal defendant has the right to be represented by counsel.
In Powell v. Alabama, 287 U.S. 45 (1932), the Supreme Court ruled that "in a capital case, where the defendant is unable to employ counsel, and is incapable adequately of making his own defense because of ignorance, feeble mindedness, illiteracy, or the like, it is the duty of the court, whether requested or not, to assign counsel for him. '' In Johnson v. Zerbst, 304 U.S. 458 (1938), the Supreme Court ruled that in all federal cases, counsel would have to be appointed for defendants who were too poor to hire their own.
In 1961, the Court extended the rule that applied in federal courts to state courts. It held in Hamilton v. Alabama, 368 U.S. 52 (1961), that counsel had to be provided at no expense to defendants in capital cases when they so requested, even if there was no "ignorance, feeble mindedness, illiteracy, or the like. '' Gideon v. Wainwright, 372 U.S. 335 (1963), ruled that counsel must be provided to indigent defendants in all felony cases, overruling Betts v. Brady, 316 U.S. 455 (1942), in which the Court ruled that state courts had to appoint counsel only when the defendant demonstrated "special circumstances '' requiring the assistance of counsel. Under Argersinger v. Hamlin, 407 U.S. 25 (1972), counsel must be appointed in any case resulting in a sentence of actual imprisonment. Regarding sentences not immediately leading to imprisonment, the Court in Scott v. Illinois, 440 U.S. 367 (1979), ruled that counsel did not need to be appointed, but in Alabama v. Shelton, 535 U.S. 654 (2002), the Court held that a suspended sentence that may result in incarceration can not be imposed if the defendant did not have counsel at trial.
As stated in Brewer v. Williams, 430 U.S. 387 (1977), the right to counsel "(means) at least that a person is entitled to the help of a lawyer at or after the time that judicial proceedings have been initiated against him, whether by formal charge, preliminary hearing, indictment, information, or arraignment. '' Brewer goes on to conclude that once adversary proceedings have begun against a defendant, he has a right to legal representation when the government interrogates him and that when a defendant is arrested, "arraigned on (an arrest) warrant before a judge '', and "committed by the court to confinement '', "(t)here can be no doubt that judicial proceedings ha(ve) been initiated. ''
A criminal defendant may represent himself, unless a court deems the defendant to be incompetent to waive the right to counsel.
In Faretta v. California, 422 U.S. 806 (1975), the Supreme Court recognized a defendant 's right to pro se representation. However, under Godinez v. Moran, 509 U.S. 389 (1993), a court that believes the defendant is less than fully competent to represent himself can require that defendant to be represented by counsel. In Martinez v. Court of Appeal of California, 528 U.S. 152 (2000), the Supreme Court ruled the right to pro se representation did not apply to appellate courts. In Indiana v. Edwards, 554 U.S. 164 (2008), the Court ruled that a criminal defendant could be simultaneously competent to stand trial, but not competent to represent himself.
In Bounds v. Smith, 430 U.S. 817 (1977), the Supreme Court held that the constitutional right of "meaningful access to the courts '' can be satisfied by counsel or access to legal materials. Bounds has been interpreted by several United States courts of appeals to mean a pro se defendant does not have a constitutional right to access a prison law library to research his defense when access to the courts has been provided through appointed counsel.
|
where have you been where have you gone | Where Are You Going, Where Have You Been? - Wikipedia
"Where Are You Going, Where Have You Been? '' is a frequently anthologized short story written by Joyce Carol Oates. The story first appeared in the Fall 1966 edition of Epoch magazine. It was inspired by three Tucson, Arizona murders committed by Charles Schmid, which were profiled in Life magazine in an article written by Don Moser on March 4, 1966. Oates said that she dedicated the story to Bob Dylan because she was inspired to write it after listening to his song "It 's All Over Now, Baby Blue ''. The story was originally named "Death and the Maiden ''.
The main character of Oates ' story is Connie, a beautiful, self - absorbed 15 - year - old girl, who is at odds with her mother -- once a beauty herself -- and with her dutiful, "steady '', and homely older sister. Without her parents ' knowledge, she spends most of her evenings picking up boys at a Big Boy restaurant, and one evening captures the attention of a stranger in a gold convertible covered with cryptic writing. While her parents are away at her aunt 's barbecue, two men pull up in front of Connie 's house and call her out. She recognizes the driver, Arnold Friend, as the man from the drive - in restaurant, and is initially charmed by the smooth - talking, charismatic stranger. He tells Connie he is 18 and has come to take her for a ride in his car with his sidekick Ellie. Connie slowly realizes that he is actually much older, and grows afraid. When she refuses to go with them, Friend becomes more forceful and threatening, saying that he will harm her family, while at the same time appealing to her vanity, saying that she is too good for them. Connie is compelled to leave with him and do what he demands of her.
Considerable academic analysis has been written about the story, with scholars divided on whether it is intended to be taken literally or as allegory. Several writers focus on the series of numbers written on Friend 's car, which he indicates are a code of some sort, but which is never explained:
"Now, these numbers are a secret code, honey, '' Arnold Friend explained. He read off the numbers 33, 19, 17 and raised his eyebrows at her to see what she thought of that, but she did n't think much of it.
Literary scholars have interpreted this series of numbers as different Biblical references, as an underlining of Friend 's sexual deviancy, or as a reference to the ages of Friend and his victims.
The narrative has also been viewed as an allegory for initiation into sexual adulthood, an encounter with the devil, a critique of modern youth 's obsession with sexual themes in popular music, or as a dream sequence.
The story was loosely adapted into the 1985 film Smooth Talk, starring Laura Dern and Treat Williams. Oates has written an essay named "Smooth Talk: Short Story into Film '' about the adaptation.
The short story is the inspiration and basis for The Blood Brothers ' song "The Salesman, Denver Max ''.
|
what is the steps of a ladder called | Ladder - wikipedia
A ladder is a vertical or inclined set of rungs or steps. There are two types: rigid ladders that are self - supporting or that may be leaned against a vertical surface such as a wall, and rollable ladders, such as those made of rope or aluminium, that may be hung from the top. The vertical members of a rigid ladder are called stringers or rails (US) or stiles (UK). Rigid ladders are usually portable, but some types are permanently fixed to a structure, building, or equipment. They are commonly made of metal, wood, or fiberglass, but they have been known to be made of tough plastic.
Rigid ladders are available in many forms, such as:
Rigid ladders were originally made of wood, but in the 20th century aluminium became more common because of its lighter weight. Ladders with fiberglass stiles are used for working on or near overhead electrical wires, because fiberglass is an electrical insulator. Henry Quackenbush patented the extension ladder in 1867.
The most common injury suffered by ladder climbers is bruising from falling off a ladder, but bone fractures are common and head injuries are also possible, depending on the nature of the accident. Ladders can slip backwards owing to faulty base pads which usually fit into the ladder stiles. If badly worn, they can allow the aluminium to contact the ground rather than plastic or rubber, and so lower the friction with the ground. Ladder stabilizers are available that increase the ladder 's grip on the ground. One of the first ladder stabilizers or ladder feet was offered in 1936 and today they are standard equipment on most large ladders.
A ladder standoff, or stay, is a device fitted to the top of a ladder to hold it away from the wall. This enables the ladder to clear overhanging obstacles, such as the eaves of a roof, and increases the safe working height for a given length of ladder.
It has become increasingly common to provide anchor points on buildings to which the top rung of an extension ladder can be attached, especially for activities like window cleaning, and particularly when a fellow worker is not available for "footing '' the ladder. Footing occurs when another worker stands on the lowest rung and so provides much greater stability to the ladder when being used. The anchor point is usually a ring cemented into a slot in the brick wall, to which the rungs of a ladder can be attached using, for example, rope or a carabiner.
If a leaning ladder is placed at the wrong angle, the risk of a fall is greatly increased. The safest angle for a ladder is 75.5 °; if it is too shallow, the bottom of the ladder is at risk of sliding, and if it is too steep, the ladder may fall backwards. Both scenarios can cause significant injury, and are especially important in industries like construction, which require heavy use of ladders.
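The 75.5 ° figure corresponds closely to the traditional "one out for every four up" rule of thumb for leaning ladders. The short Python sketch below is purely illustrative and not part of the original article; the 6 m ladder length and the function names are arbitrary assumptions used to show how the angle translates into a base setback and a wall reach.

```python
import math

# Illustrative sketch only (not from the article): relating the recommended
# 75.5-degree lean angle to the common "1 in 4" rule for a leaning ladder.
# The ladder length and function names below are arbitrary assumptions.

ANGLE_DEG = 75.5  # angle between the ladder and the ground

def base_setback(ladder_length_m, angle_deg=ANGLE_DEG):
    """Horizontal distance from the wall to the feet of the ladder."""
    return ladder_length_m * math.cos(math.radians(angle_deg))

def wall_reach(ladder_length_m, angle_deg=ANGLE_DEG):
    """Height on the wall where the top of the ladder rests."""
    return ladder_length_m * math.sin(math.radians(angle_deg))

length = 6.0  # e.g. a 6 m extension ladder
out = base_setback(length)
up = wall_reach(length)
print(f"setback ~{out:.2f} m, reach ~{up:.2f} m, reach/setback ~{up / out:.1f}")
# Prints roughly: setback ~1.50 m, reach ~5.81 m, reach/setback ~3.9,
# i.e. about one unit out at the base for every four units of height.
```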
The European Union and the United Kingdom established a ladder certification system -- ladder classes -- for any ladders manufactured or sold in Europe. The certification classes apply solely to ladders that are portable, such as stepladders and extension ladders, and are broken down into three types of certification. Each ladder certification is colour - coded to indicate the amount of weight the ladder is designed to hold, the certification class and its use. The color of the safety label specifies the class and use.
In the UK there are a number of British standards included in the three main ladder certifications relative to the particular ladder type. Relevant classifications include BS 1129: 1990 (British) which applies to Timber Ladders and Steps; BS 2037: 1994 (British) which applies to Metal and Aluminium Ladders and Steps and BS EN 131: 1993 (European) which applies to both Timber and Aluminium Ladders and Steps.
Ladders are ancient tools and technology. A ladder is featured in a Mesolithic rock painting that is at least 10,000 years old, depicted in the Spider Caves in Valencia, Spain. The painting depicts two humans using a ladder to reach a wild honeybee nest to harvest honey. The ladder is depicted as long and flexible, possibly made out of some kind of grass.
Gallery captions from the article: a dog and pawl on an extension ladder; a cat ladder (UK terminology), an aid when working on steep roofs; a double extension ladder; a bamboo ladder, the most common type of ladder in China; and a roof ladder whose hooks extend over the ridge to hold it in place.
|
volcanoes on the west coast of north america | Cascade volcanoes - wikipedia
The Cascade Volcanoes (also known as the Cascade Volcanic Arc or the Cascade Arc) are a number of volcanoes in a volcanic arc in western North America, extending from southwestern British Columbia through Washington and Oregon to Northern California, a distance of well over 700 miles (1,100 km). The arc has formed due to subduction along the Cascadia subduction zone. Although taking its name from the Cascade Range, this term is a geologic grouping rather than a geographic one, and the Cascade Volcanoes extend north into the Coast Mountains, past the Fraser River which is the northward limit of the Cascade Range proper.
Some of the major cities along the length of the arc include Portland, Seattle, and Vancouver, and the population in the region exceeds 10 million. All could be potentially affected by volcanic activity and great subduction - zone earthquakes along the arc. Because the population of the Pacific Northwest is rapidly increasing, the Cascade volcanoes are some of the most dangerous, due to their eruptive history and potential for future eruptions, and because they are underlain by weak, hydrothermally altered volcanic rocks that are susceptible to failure. Consequently, Mount Rainier is one of the Decade Volcanoes identified by the International Association of Volcanology and Chemistry of the Earth 's Interior (IAVCEI) as being worthy of particular study, due to the danger it poses to Seattle and Tacoma. Many large, long - runout landslides originating on Cascade volcanoes have engulfed valleys tens of kilometers from their sources, and some of the areas affected now support large populations.
The Cascade Volcanoes are part of the Pacific Ring of Fire, the ring of volcanoes and associated mountains around the Pacific Ocean. The Cascade Volcanoes have erupted several times in recorded history. The two most recent were Lassen Peak, from 1914 to 1921, and a major eruption of Mount St. Helens in 1980. The arc is also the site of Canada 's most recent major eruption, about 2,350 years ago at the Mount Meager massif.
The Cascade Arc includes nearly 20 major volcanoes, among a total of over 4,000 separate volcanic vents including numerous stratovolcanoes, shield volcanoes, lava domes, and cinder cones, along with a few isolated examples of rarer volcanic forms such as tuyas. Volcanism in the arc began about 37 million years ago; however, most of the present - day Cascade volcanoes are less than 2,000,000 years old, and the highest peaks are less than 100,000 years old. Twelve volcanoes in the arc are over 10,000 feet (3,000 m) in elevation, and the two highest, Mount Rainier and Mount Shasta, exceed 14,000 feet (4,300 m). By volume, the two largest Cascade volcanoes are the broad shields of Medicine Lake Volcano and Newberry Volcano, which are about 145 cubic miles (600 km³) and 108 cubic miles (450 km³) respectively. Glacier Peak is the only Cascade volcano that is made exclusively of dacite.
Over the last 37 million years, the Cascade Arc has been erupting a chain of volcanoes along the Pacific Northwest. Several of the volcanoes in the arc are frequently active. The volcanoes of the Cascade Arc share some general characteristics, but each has its own unique geological traits and history. Lassen Peak in California, which last erupted in 1917, is the southernmost historically active volcano in the arc, while the Mount Meager massif in British Columbia, which erupted about 2,350 years ago, is generally considered the northernmost member of the arc. A few isolated volcanic centers northwest of the Mount Meager massif such as the Silverthrone Caldera, which is a circular 20 km (12 mi) wide, deeply dissected caldera complex, may also be the product of Cascadia subduction because the igneous rocks andesite, basaltic andesite, dacite and rhyolite can also be found at these volcanoes as they are elsewhere along the subduction zone. At issue are the current estimates of plate configuration and rate of subduction, but based on the chemistry of these volcanoes, they are also subduction related and therefore part of the Cascade Volcanic Arc. The Cascade Volcanic Arc appears to be segmented; the central portion of the arc is the most active and the northern end least active.
Lavas representing the earliest stage in the development of the Cascade Volcanic Arc mostly crop out south of the North Cascades proper, where uplift of the Cascade Range has been less, and a thicker blanket of Cascade Arc volcanic rocks has been preserved. In the North Cascades, geologists have not yet identified with any certainty any volcanic rocks as old as 35 million years, but remnants of the ancient arc 's internal plumbing system persist in the form of plutons, which are the crystallized magma chambers that once fed the early Cascade volcanoes. The greatest mass of exposed Cascade Arc plumbing is the Chilliwack batholith, which makes up much of the northern part of North Cascades National Park and adjacent parts of British Columbia beyond. Individual plutons range in age from about 35 million years old to 2.5 million years old. The older rocks invaded by all this magma were affected by the heat.
Around the plutons of the batholith, the older rocks recrystallized. This contact metamorphism produced a fine mesh of interlocking crystals in the old rocks, generally strengthening them and making them more resistant to erosion. Where the recrystallization was intense, the rocks took on a new appearance: dark, dense and hard. Many rugged peaks in the North Cascades owe their prominence to this baking. The rocks holding up such North Cascade giants as Mount Shuksan, Mount Redoubt, Mount Challenger, and Mount Hozomeen are all partly recrystallized by plutons of the nearby and underlying Chilliwack batholith.
The Garibaldi Volcanic Belt is the northern extension of the Cascade Arc. Volcanoes within the volcanic belt are mostly stratovolcanoes, as in the rest of the arc, but the belt also includes calderas, cinder cones, and small isolated lava masses. The eruption styles within the belt range from effusive to explosive, with compositions from basalt to rhyolite. Due to repeated continental and alpine glaciations, many of the volcanic deposits in the belt reflect complex interactions between magma composition, topography, and changing ice configurations. Four volcanoes within the belt appear related to seismic activity since 1975, including the Mount Meager massif, Mount Garibaldi and the Mount Cayley massif.
The Pemberton Volcanic Belt is an eroded volcanic belt north of the Garibaldi Volcanic Belt, which appears to have formed during the Miocene before fracturing of the northern end of the Juan de Fuca Plate. The Silverthrone Caldera is the only volcano within the belt that appears related to seismic activity since 1975.
The Mount Meager massif is the most unstable volcanic massif in Canada. It has dumped clay and rock several meters deep into the Pemberton Valley at least three times during the past 7,300 years. Recent drilling into the Pemberton Valley bed encountered remnants of a debris flow that had traveled 50 km (31 mi) from the volcano shortly before it last erupted 2,350 years ago. About 1,000,000,000 cubic metres (0.24 cu mi) of rock and sand extended over the width of the valley. Two previous debris flows, about 4,450 and 7,300 years ago, sent debris at least 32 km (20 mi) from the volcano. Recently, the volcano has created smaller landslides about every ten years, including one in 1975 that killed four geologists near Meager Creek. The possibility of the Mount Meager massif covering stable sections of the Pemberton Valley in a debris flow is estimated at about one in 2,400 years. There is no sign of volcanic activity with these events. However scientists warn the volcano could release another massive debris flow over populated areas any time without warning.
In the past, Mount Rainier has had large debris avalanches, and has also produced enormous lahars due to the large amount of glacial ice present. Its lahars have reached all the way to Puget Sound. Around 5,000 years ago, a large chunk of the volcano slid away and that debris avalanche helped to produce the massive Osceola Mudflow, which went all the way to the site of present - day Tacoma and south Seattle. This massive avalanche of rock and ice took out the top 1,600 feet (490 m) of Rainier, bringing its height down to around 14,100 feet (4,300 m). About 530 to 550 years ago, the Electron Mudflow occurred, although this was not as large - scale as the Osceola Mudflow.
While the Cascade volcanic arc (a geological term) includes volcanoes such as the Mount Meager massif and Mount Garibaldi, which lie north of the Fraser River, the Cascade Range (a geographic term) is considered to have its northern boundary at the Fraser.
Indigenous peoples have inhabited the area for thousands of years and developed their own myths and legends concerning the Cascade volcanoes. According to some of these tales, Mounts Baker, Jefferson, Shasta and Garibaldi were used as refuge from a great flood. Other stories, such as the Bridge of the Gods tale, had various High Cascades, such as Hood and Adams, act as god - like chiefs who made war by throwing fire and stone at each other. St. Helens, with its graceful pre-1980 appearance, was regarded as a beautiful maiden for whom Hood and Adams feuded. Among the many stories concerning Mount Baker, one tells that the volcano was formerly married to Mount Rainier and lived in that vicinity. Then, because of a marital dispute, she picked herself up and marched north to her present position. Native tribes also developed their own names for the High Cascades and many of the smaller peaks, the most well known to non-natives being Tahoma, the Lushootseed name for Mount Rainier. Mount Cayley and The Black Tusk are known to the Squamish people who live nearby as "the Landing Place of the Thunderbird ''.
Hot springs on the Canadian side of the arc were originally used and revered by First Nations people. The springs located on Meager Creek are called Teiq in the language of the Lillooet people and were the farthest hot springs up the Lillooet River. The spirit - beings / wizards known as "the Transformers '' reached them during their journey into the Lillooet Country, and the springs were a "training '' place for young First Nations men to acquire power and knowledge. In this area was also found the blackstone chief 's head pipe that is the most famous of Lillooet artifacts; it was found buried in volcanic ash, presumably from the 2350 BP eruption of the Mount Meager massif.
Legends associated with the great volcanoes are many, as well as with other peaks and geographical features of the arc, including its many hot springs and waterfalls and rock towers and other formations. Stories of Tahoma -- today Mount Rainier and the namesake of Tacoma, Washington -- allude to great, hidden grottos with sleeping giants, apparitions and other marvels in the volcanoes of Washington, and Mount Shasta in California has long been well known for its associations with everything from Lemurians to aliens to elves and, as everywhere in the arc, Sasquatch or Bigfoot.
In the spring of 1792 British navigator George Vancouver entered Puget Sound and started to give English names to the high mountains he saw. Mount Baker was named for Vancouver 's third lieutenant, the graceful Mount St. Helens for a famous diplomat, Mount Hood was named in honor of Samuel Hood, 1st Viscount Hood (an admiral of the Royal Navy) and the tallest Cascade, Mount Rainier, is the namesake of Admiral Peter Rainier. Vancouver 's expedition did not, however, name the arc these peaks belonged to. As marine trade in the Strait of Georgia and Puget Sound proceeded in the 1790s and beyond, the summits of Rainier and Baker became familiar to captains and crews (mostly British and American over all others, but not exclusively).
With the exception of the 1915 eruption of remote Lassen Peak in Northern California, the arc was quiet for more than a century. Then, on May 18, 1980, the dramatic eruption of little - known Mount St. Helens shattered the quiet and brought the world 's attention to the arc. Geologists were also concerned that the St. Helens eruption was a sign that long - dormant Cascade volcanoes might become active once more, as in the period from 1800 to 1857 when a total of eight erupted. None have erupted since St. Helens, but precautions are being taken nevertheless, such as the Mount Rainier Volcano Lahar Warning System in Pierce County, Washington.
The Cascade Volcanoes were formed by the subduction of the Juan de Fuca, Explorer and the Gorda Plate (remnants of the much larger Farallon Plate) under the North American Plate along the Cascadia subduction zone. This is a 680 - mile (1,090 km) long fault, running 50 miles (80 km) off the coast of the Pacific Northwest from northern California to Vancouver Island, British Columbia. The plates move at a relative rate of over 0.4 inches (10 mm) per year at a somewhat oblique angle to the subduction zone.
Because of the very large fault area, the Cascadia subduction zone can produce very large earthquakes, magnitude 9.0 or greater, if rupture occurred over its whole area. When the "locked '' zone stores up energy for an earthquake, the "transition '' zone, although somewhat plastic, can rupture. Thermal and deformation studies indicate that the locked zone is fully locked for 60 km (37 mi) downdip from the deformation front. Further downdip, there is a transition from fully locked to aseismic sliding.
Unlike most subduction zones worldwide, there is no oceanic trench present along the continental margin in Cascadia. Instead, terranes and the accretionary wedge have been uplifted to form a series of coast ranges and exotic mountains. A high rate of sedimentation from the outflow of the three major rivers (Fraser River, Columbia River, and Klamath River) which cross the Cascade Range contributes to further obscuring the presence of a trench. However, in common with most other subduction zones, the outer margin is slowly being compressed, similar to a giant spring. When the stored energy is suddenly released by slippage across the fault at irregular intervals, the Cascadia subduction zone can create very large earthquakes such as the M 8.7 -- 9.2 Cascadia earthquake of 1700.
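As a rough back-of-the-envelope illustration of how strain accumulates between great earthquakes, the Python sketch below multiplies the roughly 10 mm per year convergence rate quoted above by the time elapsed since the 1700 Cascadia earthquake. It is not from the article; it assumes, purely for illustration, that the rate is uniform and that all of the plate motion is stored as slip deficit on the locked zone, and the reference year is an arbitrary choice.

```python
# Back-of-the-envelope sketch (illustrative only, not from the article).
# Assumes the ~10 mm/yr convergence rate quoted above applies uniformly and
# that all of it accumulates as slip deficit on the locked fault since the
# 1700 Cascadia earthquake -- both deliberate simplifications.

CONVERGENCE_MM_PER_YEAR = 10   # figure quoted in the text above
LAST_GREAT_QUAKE_YEAR = 1700
REFERENCE_YEAR = 2024          # arbitrary "present" year for the estimate

years_elapsed = REFERENCE_YEAR - LAST_GREAT_QUAKE_YEAR
slip_deficit_m = years_elapsed * CONVERGENCE_MM_PER_YEAR / 1000.0

print(f"{years_elapsed} years at {CONVERGENCE_MM_PER_YEAR} mm/yr "
      f"~ {slip_deficit_m:.1f} m of accumulated slip deficit")
# ~ 3.2 m of motion stored along the margin, released only when the
# locked zone next ruptures in a great earthquake.
```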
The 1980 eruption of Mount St. Helens was one of the most closely studied volcanic eruptions in the arc and one of the best studied ever. It was a Plinian style eruption with a VEI = 5 and was the most significant to occur in the lower 48 U.S. states in recorded history. An earthquake at 8: 32 a.m. on May 18, 1980, caused the entire weakened north face to slide away. An ash column rose high into the atmosphere and deposited ash in 11 U.S. states. The eruption killed 57 people and thousands of animals and caused more than a billion U.S. dollars in damage.
On May 22, 1915, an explosive eruption at Lassen Peak devastated nearby areas and rained volcanic ash as far away as 200 miles (320 km) to the east. A huge column of volcanic ash and gas rose more than 30,000 feet (9,100 m) into the air and was visible from as far away as Eureka, California, 150 miles (240 km) to the west. A pyroclastic flow swept down the side of the volcano, devastating a 3 - square - mile (7.8 km) area. This explosion was the most powerful in a 1914 -- 17 series of eruptions at Lassen Peak.
The Mount Meager massif produced the most recent major eruption in Canada, sending ash as far away as Alberta. The eruption was similar to the 1980 eruption of Mount St. Helens, sending an ash column approximately 20 km (12 mi) high into the stratosphere. This activity produced a diverse sequence of volcanic deposits, defined as the Pebble Creek Formation, which is well exposed in the bluffs along the Lillooet River. The eruption was episodic, occurring from a vent on the north - east side of Plinth Peak. An unusual, thick apron of welded vitrophyric breccia may represent the explosive collapse of an early lava dome, which deposited ash several meters in thickness near the vent area.
The 7,700 BP eruption of Mount Mazama was a large catastrophic eruption in the U.S. state of Oregon. It began with a large eruption column with pumice and ash that erupted from a single vent. The eruption was so great that most of Mount Mazama collapsed to form a caldera and subsequent smaller eruptions occurred as water began to fill in the caldera to form Crater Lake. Volcanic ash from the eruption was carried across most of the Pacific Northwest as well as parts of southern Canada.
About 13,000 years ago, Glacier Peak generated an unusually strong sequence of eruptions depositing volcanic ash as far away as Wyoming.
Most of the Silverthrone Caldera 's eruptions in the Pacific Range occurred during the last ice age, and the volcano was episodically active during both the Pemberton and Garibaldi Volcanic Belt stages of volcanism. The caldera is one of the largest of the few calderas in western Canada, measuring about 30 kilometres (19 mi) long (north - south) and 20 kilometres (12 mi) wide (east - west). The last eruption from Mount Silverthrone ran up against ice in Chernaud Creek. The lava was dammed by the ice and made a cliff with a waterfall up against it. The most recent activity was 1,000 years ago.
Mount Garibaldi in the Pacific Range was last active about 10,700 to 9,300 years ago from a cinder cone called Opal Cone. It produced a 15 km (9.3 mi) long broad dacite lava flow with prominent wrinkled ridges. The lava flow is unusually long for a silicic lava flow.
During the mid-19th century, Mount Baker erupted for the first time in several thousand years. Fumarole activity in Sherman Crater, just south of the volcano 's summit, became more intense in 1975 and remains energetic. However, an eruption is not expected in the near future.
Glacier Peak last erupted about 200 -- 300 years ago and has erupted about six times in the past 4,000 years.
Mount Rainier last erupted between 1824 and 1854, but many eyewitnesses reported eruptive activity in 1858, 1870, 1879, 1882 and in 1894 as well. Mount Rainier has produced at least four eruptions and many lahars in the past 4,000 years.
Mount Adams was last active about 1,000 years ago and has produced only a few eruptions during the past several thousand years; these resulted in several major lava flows, the most notable being the A.G. Aiken Lava Bed, the Muddy Fork Lava Flows, and the Takh Takh Lava Flow. One of the most recent flows, issued from South Butte, created the 4.5 - mile (7.2 km) long by 0.5 - mile (0.80 km) wide A.G. Aiken Lava Bed. Thermal anomalies (hot spots) and gas emissions (including hydrogen sulfide) have occurred, especially on the summit plateau, since the Great Slide of 1921.
Mount Hood was last active about 200 years ago, creating pyroclastic flows, lahars, and a well - known lava dome close to its peak called Crater Rock. Between 1856 and 1865, a sequence of steam explosions took place at Mount Hood.
A great deal of volcanic activity has occurred at Newberry Volcano, which was last active about 1,300 years ago. It has one of the largest collections of cinder cones, lava domes, lava flows and fissures in the world.
Medicine Lake Volcano has erupted about eight times in the past 4,000 years and was last active about 1,000 years ago when rhyolite and dacite erupted at Glass Mountain and associated vents near the caldera 's eastern rim.
Mount Shasta last erupted in 1786 and has been the most active volcano in California for about 4,000 years, erupting once every 300 years. The 1786 eruption created a pyroclastic flow, a lahar and three cold lahars, which streamed 7.5 miles (12.1 km) down Shasta 's east flank via Ash Creek. A separate hot lahar went 12 miles (19 km) down Mud Creek.
Eleven of the thirteen volcanoes in the Cascade Range have erupted at least once in the past 4,000 years, and seven have done so in just the past 200 years. The Cascade volcanoes have had more than 100 eruptions over the past few thousand years, many of them explosive eruptions. However, certain Cascade volcanoes can be dormant for hundreds or thousands of years between eruptions, and therefore the great risk caused by volcanic activity in the regions is not always readily apparent.
When Cascade volcanoes do erupt, pyroclastic flows, lava flows, and landslides can devastate areas more than 10 miles (16 km) away; and huge mudflows of volcanic ash and debris, called lahars, can inundate valleys more than 50 miles (80 km) downstream. Falling ash from explosive eruptions can disrupt human activities hundreds of miles downwind, and drifting clouds of fine ash can cause severe damage to jet aircraft even thousands of miles away.
All of the known historical eruptions have occurred in Washington, Oregon and in Northern California. The two most recent were Lassen Peak, from 1914 to 1921, and a major eruption of Mount St. Helens in 1980. Minor eruptions of Mount St. Helens have also occurred, most recently in 2006. In contrast, volcanoes in southern British Columbia, central and southern Oregon are currently dormant. The regions lacking new eruptions correspond to the positions of fracture zones that offset the Gorda Ridge, Explorer Ridge and the Juan de Fuca Ridge. The volcanoes with historical eruptions include Mount Rainier, Glacier Peak, Mount Baker, Mount Hood, Lassen Peak, and Mount Shasta.
Renewed volcanic activity in the Cascade Arc, such as the 1980 eruption of Mount St. Helens, has offered a great deal of evidence about the arc 's structure. One effect of the 1980 eruption was a greater knowledge of the influence of landslides and volcanic development in the evolution of volcanic terrain. A vast section of the north side of Mount St. Helens collapsed and formed a jumbled landslide deposit extending several kilometers from the volcano. Pyroclastic flows and lahars moved across the countryside. Similar episodes have also occurred at Mount Shasta and other Cascade volcanoes in prehistoric times.
Washington has a majority of the very highest volcanoes, with 4 of the top 6 overall, although Oregon does hold a majority of the next highest peaks. Even though Mount Rainier is the tallest, it is not the largest by volume. Mount Shasta in California is the largest by volume, followed by Washington 's Mount Adams. Mount Rainier is thus the 3rd largest by eruptive volume.
|
house of the rising sun song in what movie | The House of the Rising Sun - wikipedia
"The House of the Rising Sun '' is a traditional folk song, sometimes called "Rising Sun Blues ''. It tells of a life gone wrong in New Orleans; many versions also urge a sibling to avoid the same fate. The most successful commercial version, recorded in 1964 by British rock group the Animals, was a number one hit on the UK Singles Chart and also in the United States and France. As a traditional folk song recorded by an electric rock band, it has been described as the "first folk rock hit ''.
Like many classic folk ballads, "The House of the Rising Sun '' is of uncertain authorship. Musicologists say that it is based on the tradition of broadside ballads, and thematically it has some resemblance to the 16th - century ballad The Unfortunate Rake. According to Alan Lomax, "Rising Sun '' was used as the name of a bawdy house in two traditional English songs, and it was also a name for English pubs. He further suggested that the melody might be related to a 17th - century folk song, "Lord Barnard and Little Musgrave '', also known as "Matty Groves '', but a survey by Bertrand Bronson showed no clear relationship between the two songs. Lomax proposed that the song 's setting was later moved from England to New Orleans by white southern performers. However, Vance Randolph proposed an alternative French origin, the "rising sun '' referring to the decorative use of the sunburst insignia dating to the time of Louis XIV, which was brought to North America by French immigrants.
"House of Rising Sun '' was said to have been known by miners in 1905. The oldest published version of the lyrics is that printed by Robert Winslow Gordon in 1925, in a column "Old Songs That Men Have Sung '' in Adventure Magazine. The lyrics of that version begin:
There is a house in New Orleans, it 's called the Rising Sun
It 's been the ruin of many a poor girl
The oldest known recording of the song, under the title "Rising Sun Blues '', is by Appalachian artists Clarence "Tom '' Ashley and Gwen Foster, who recorded it for Vocalion Records on September 6, 1933. Ashley said he had learned it from his grandfather, Enoch Ashley. Roy Acuff, an "early - day friend and apprentice '' of Ashley 's, learned it from him and recorded it as "Rising Sun '' on November 3, 1938. Several older blues recordings of songs with similar titles are unrelated, for example, "Rising Sun Blues '' by Ivy Smith (1927) and "The Risin ' Sun '' by Texas Alexander (1928).
The song was among those collected by folklorist Alan Lomax, who, along with his father, was a curator of the Archive of American Folk Song for the Library of Congress. On an expedition with his wife to eastern Kentucky, Lomax set up his recording equipment in Middlesboro, Kentucky, in the house of singer and activist Tilman Cadle. In 1937 he recorded a performance by Georgia Turner, the 16 - year - old daughter of a local miner. He called it The Rising Sun Blues. Lomax later recorded a different version sung by Bert Martin and a third sung by Daw Henson, both eastern Kentucky singers. In his 1941 songbook Our Singing Country, Lomax credits the lyrics to Turner, with reference to Martin 's version.
In 1941, Woody Guthrie recorded a version. A recording made in 1947 by Libby Holman and Josh White (who is also credited with having written new words and music that have subsequently been popularized in the versions made by many other later artists) was released by Mercury Records in 1950. White learned the song from a "white hillbilly singer '', who might have been Ashley, in North Carolina in 1923 -- 1924. Lead Belly recorded two versions of the song, in February 1944 and in October 1948, called "In New Orleans '' and "The House of the Rising Sun '', respectively; the latter was recorded in sessions that were later used on the album Lead Belly 's Last Sessions (1994, Smithsonian Folkways).
In 1957 Glenn Yarbrough recorded the song for Elektra Records. The song is also credited to Ronnie Gilbert on an album by The Weavers released in the late 1940s or early 1950s. Pete Seeger released a version on Folkways Records in 1958, which was re-released by Smithsonian Folkways in 2009. Andy Griffith recorded the song on his 1959 album Andy Griffith Shouts the Blues and Old Timey Songs. In 1960 Miriam Makeba recorded the song on her eponymous RCA album.
Joan Baez recorded it in 1960 on her self - titled debut album; she frequently performed the song in concert throughout her career. Nina Simone recorded her first version for the album Nina at the Village Gate in 1962. Tim Hardin sang it on This is Tim Hardin, recorded in 1964 but not released until 1967. The Chambers Brothers recorded a version on Feelin ' the Blues, released on Vault records (1970).
In late 1961, Bob Dylan recorded the song for his debut album, released in March 1962. That release had no songwriting credit, but the liner notes indicate that Dylan learned this version of the song from Dave Van Ronk. In an interview for the documentary No Direction Home, Van Ronk said that he was intending to record the song and that Dylan copied his version. Van Ronk recorded it soon thereafter for the album Just Dave Van Ronk.
I had learned it sometime in the 1950s, from a recording by Hally Wood, the Texas singer and collector, who had got it from an Alan Lomax field recording by a Kentucky woman named Georgia Turner. I put a different spin on it by altering the chords and using a bass line that descended in half steps -- a common enough progression in jazz, but unusual among folksingers. By the early 1960s, the song had become one of my signature pieces, and I could hardly get off the stage without doing it.
The Animals ' version was released as a single in the UK on June 19, 1964.
An interview with Eric Burdon revealed that he first heard the song in a club in Newcastle, England, where it was sung by the Northumbrian folk singer Johnny Handle. The Animals were on tour with Chuck Berry and chose it because they wanted something distinctive to sing.
The Animals ' version transposes the narrative of the song from the point of view of a woman led into a life of degradation to that of a man; the gambler and drunkard becomes the narrator 's father rather than, as in earlier versions, the sweetheart.
The Animals had begun featuring their arrangement of "House of the Rising Sun '' during a joint concert tour with Chuck Berry, using it as their closing number to differentiate themselves from acts that always closed with straight rockers. It got a tremendous reaction from the audience, convincing initially reluctant producer Mickie Most that it had hit potential, and between tour stops the group went to a small recording studio on Kingsway in London to capture it.
The song was recorded in just one take on May 18, 1964, and it starts with a now - famous electric guitar A minor chord arpeggio by Hilton Valentine. According to Valentine, he simply took Dylan 's chord sequence and played it as an arpeggio. The performance takes off with Burdon 's lead vocal, which has been variously described as "howling, '' "soulful, '' and as "... deep and gravelly as the north - east English coal town of Newcastle that spawned him. '' Finally, Alan Price 's pulsating organ part (played on a Vox Continental) completes the sound. Burdon later said, "We were looking for a song that would grab people 's attention. ''
As recorded, "House of the Rising Sun '' ran four and a half minutes, regarded as far too long for a pop single at the time. Producer Most, who initially did not really want to record the song at all, said that on this occasion: "Everything was in the right place... It only took 15 minutes to make so I ca n't take much credit for the production ''. He was nonetheless now a believer and declared it a single at its full length, saying "We 're in a microgroove world now, we will release it. ''
In the United States however, the original single (MGM 13264) was a 2: 58 version. The MGM Golden Circle reissue (KGC 179) featured the unedited 4: 29 version, although the record label gives the edited playing time of 2: 58. The edited version was included on the group 's 1964 U.S. debut album The Animals, while the full version was later included on their best - selling 1966 U.S. greatest hits album, The Best of the Animals. However, the very first American release of the full - length version was on a 1965 album of various groups entitled Mickie Most Presents British Go - Go (MGM SE - 4306), the cover of which, under the listing of "House of the Rising Sun '', described it as the "Original uncut version. '' Americans could also hear the complete version in the movie Go Go Mania in the spring of 1965.
"House of the Rising Sun '' was not included on any of the group 's British albums, but it was reissued as a single twice in subsequent decades, charting both times, reaching number 25 in 1972 and number 11 in 1982, using the famous Wittlesbach organ.
The Animals version was played in 6 / 8 meter, unlike the 4 / 4 of most earlier versions. Arranging credit went only to Alan Price. According to Burdon, this was simply because there was insufficient room to name all five band members on the record label, and Alan Price 's first name was first alphabetically. However, this meant that only Price received songwriter 's royalties for the hit, a fact that has caused bitterness ever since, especially with Valentine.
"House of the Rising Sun '' was a trans - Atlantic hit: after reaching the top of the UK pop singles chart in July 1964, it topped the U.S. pop singles chart two months later, on September 5, 1964, where it stayed for three weeks, and became the first British Invasion number one unconnected with the Beatles. It was the group 's breakthrough hit in both countries and became their signature song. The song was also a hit in a number of other countries, including Ireland, where it reached No. 10 and dropped off the charts one week later.
Bob Dylan said he first heard The Animals ' version on his car radio and "jumped out of his car seat '' because he liked it so much; but he stopped playing the song after the Animals ' recording became a hit because fans accused him of plagiarism. Dave Van Ronk said that The Animals ' version -- like Dylan 's version before it -- was based on his arrangement of the song.
Dave Marsh described the Animals ' take on "The House of the Rising Sun '' as "... the first folk - rock hit, '' sounding "... as if they 'd connected the ancient tune to a live wire. '' Writer Ralph McLean of the BBC agreed that "It was arguably the first folk rock tune, '' calling it "a revolutionary single '', after which "the face of modern music was changed forever. ''
The Animals ' rendition of the song is recognized as one of the classics of British pop music. Writer Lester Bangs labeled it "a brilliant rearrangement '' and "a new standard rendition of an old standard composition. '' It ranked number 122 on Rolling Stone magazine 's list of "500 Greatest Songs of All Time ''. It is also one of the Rock and Roll Hall of Fame 's "500 Songs That Shaped Rock and Roll ''. The RIAA ranked it number 240 on their list of "Songs of the Century ''. In 1999 it received a Grammy Hall of Fame Award. It has long since become a staple of oldies and classic rock radio formats. A 2005 Channel Five poll ranked it as Britain 's fourth - favourite number one song.
In 1969, the Detroit band Frijid Pink recorded a psychedelic version of "House of the Rising Sun '', which became an international hit in 1970. Their version is in 4 / 4 time (like Van Ronk 's and most earlier versions, rather than the 6 / 8 used by the Animals) and was driven by Gary Ray Thompson 's distorted guitar with fuzz and wah - wah effects, set against the frenetic drumming of Richard Stevers.
According to Stevers, the Frijid Pink recording of "House of the Rising Sun '' was done impromptu when there was time left over at a recording session booked for the group at the Tera Shirma Recording Studios. Stevers later played snippets from that session 's tracks for Paul Cannon, the music director of Detroit 's premier rock radio station, WKNR; the two knew each other, as Cannon was the father of Stevers 's girlfriend. Stevers recalled, "we went through the whole thing and (Cannon) did n't say much. Then ' House (of the Rising Sun) ' started up and I immediately turned it off because it was n't anything I really wanted him to hear. '' However, Cannon was intrigued and had Stevers play the complete track for him, and then advised Stevers, "Tell Parrot (Frijid Pink 's label) to drop God Gave Me You (the group 's current single) and go with this one. ''
Frijid Pink 's "House of the Rising Sun '' debuted at # 29 on the WKNR hit parade dated January 6, 1970 and broke nationally after some seven weeks -- during which the track was re-serviced to radio three times -- with a number 73 debut on the Hot 100 in Billboard dated February 27, 1970 (number 97 in Canada on January 31, 1970), followed by a three - week ascent to the Top 30 en route to a Hot 100 peak of number 7 on April 4, 1970. The certification of the Frijid Pink single "House of the Rising Sun '' as a gold record for domestic sales of one million units was reported in the issue of Billboard dated May 30, 1970.
The Frijid Pink single of "House of the Rising Sun '' would give the song its most widespread international success, with Top Ten status reached in Austria (number 3), Belgium (Flemish region, number 6), Canada (number 3), Denmark (number 3), Germany (two weeks at number 1), Greece, Ireland (number 7), Israel (number 4), the Netherlands (number 3), Norway (seven weeks at number 1), Poland (number 2), Sweden (number 6), Switzerland (number 2), and the UK (number 4). The single also charted in Australia (number 14), France (number 36), and Italy (number 54).
The song has twice been a hit record on Billboard 's country chart.
In 1973, Jody Miller 's version reached number 29 on the country charts and number 41 on the adult contemporary chart.
Brian Johnson 's band Geordie recorded the song for their 1974 album "Do n't Be Fooled By The Name ''; Johnson went on to become the vocalist for AC / DC.
In September 1981, Dolly Parton released a cover of the song as the third single from her album 9 to 5 and Odd Jobs. Like Miller 's earlier country hit, Parton 's remake returns the song to its original lyric of being about a fallen woman. The Parton version makes it quite blunt, with a few new lyric lines that were written by Parton. Parton 's remake reached number 14 on the U.S. country singles chart and crossed over to the pop charts, where it reached number 77 on the Billboard Hot 100; it also reached number 30 on the U.S. adult contemporary chart. Parton has occasionally performed the song live, including on her 1987 -- 88 television show, in an episode taped in New Orleans.
The American heavy metal band Five Finger Death Punch released a cover of "House of the Rising Sun '' on their fifth studio album, The Wrong Side of Heaven and the Righteous Side of Hell, Volume 2, which was later released as the album 's second single and the band 's third single of the Wrong Side era. The references to New Orleans have been changed to Sin City, a reference to the negative effects of gambling in Las Vegas. The song was a top ten hit on mainstream rock radio in the United States. It also appears in the video game Guitar Hero Live.
The song was covered in French by Johnny Hallyday. His version (titled "Le Pénitencier '') was released in October 1964 and spent one week at no. 1 on the singles sales chart in France (from October 17 to 23). In Wallonia (French Belgium) his single spent 28 weeks on the chart, also peaking at number 1.
He performed the song during his 2014 USA tour.
Various places in New Orleans have been proposed as the inspiration for the song, with varying plausibility. The phrase "House of the Rising Sun '' is often understood as a euphemism for a brothel, but it is not known whether or not the house described in the lyrics was an actual or a fictitious place. One theory is that the song is about a woman who killed her father, an alcoholic gambler who had beaten his wife. Therefore, the House of the Rising Sun may be a jailhouse, from which one would be the first person to see the sun rise (an idea supported by the lyric mentioning "a ball and chain, '' though that phrase has been slang for marital relationships for at least as long as the song has been in print). Because women often sang the song, another theory is that the House of the Rising Sun was where prostitutes were detained while treated for syphilis. Since cures with mercury were ineffective, going back to their former lives was very unlikely.
Only three candidates that use the name Rising Sun have historical evidence -- from old city directories and newspapers. The first was a small, short - lived hotel on Conti Street in the French Quarter in the 1820s. It burned down in 1822. An excavation and document search in early 2005 found evidence that supported this claim, including an advertisement with language that may have euphemistically indicated prostitution. Archaeologists found an unusually large number of pots of rouge and cosmetics at the site.
The second possibility was a "Rising Sun Hall '' listed in late 19th - century city directories on what is now Cherokee Street, at the riverfront in the uptown Carrollton neighborhood, which seems to have been a building owned and used for meetings of a Social Aid and Pleasure Club, commonly rented out for dances and functions. It also is no longer extant. Definite links to gambling or prostitution (if any) are undocumented for either of these buildings.
A third was "The Rising Sun '', which advertised in several local newspapers in the 1860s, located on what is now the lake side of the 100 block of Decatur Street. In various advertisements it is described as a "Restaurant, '' a "Lager Beer Salon, '' and a "Coffee House. '' At the time, New Orleans businesses listed as coffee houses often also sold alcoholic beverages.
Dave Van Ronk claimed in his biography "The Mayor of MacDougal Street '' that at one time when he was in New Orleans someone approached him with a number of old photos of the city from the turn of the century. Among them "was a picture of a forbidding stone doorway with a carving on the lintel of a stylized rising sun... It was the Orleans Parish women 's prison. ''
Bizarre New Orleans, a guidebook on New Orleans, asserts that the real house was at 1614 Esplanade Avenue between 1862 and 1874 and was said to have been named after its madam, Marianne LeSoleil Levant, whose surname means "the rising sun '' in French.
Another guidebook, Offbeat New Orleans, asserts that the real House of the Rising Sun was at 826 -- 830 St. Louis St. between 1862 and 1874, also purportedly named for Marianne LeSoleil Levant. The building still stands, and Eric Burdon, after visiting at the behest of the owner, said, "The house was talking to me. ''
There is a contemporary B&B called the House of the Rising Sun, decorated in brothel style. The owners are fans of the song, but there is no connection with the original place.
Not everyone believes that the house actually existed. Pamela D. Arceneaux, a research librarian at the Williams Research Center in New Orleans, is quoted as saying:
I have made a study of the history of prostitution in New Orleans and have often confronted the perennial question, "Where is the House of the Rising Sun? '' without finding a satisfactory answer. Although it is generally assumed that the singer is referring to a brothel, there is actually nothing in the lyrics that indicate that the "house '' is a brothel. Many knowledgeable persons have conjectured that a better case can be made for either a gambling hall or a prison; however, to paraphrase Freud: sometimes lyrics are just lyrics.
|
who played henry viii in the other boleyn girl | The Other Boleyn Girl (2008 film) - wikipedia
The Other Boleyn Girl is a 2008 British - American romantic historical drama film directed by Justin Chadwick. The screenplay by Peter Morgan was adapted from the 2001 novel of the same name by Philippa Gregory. It is a fictionalized account of the lives of 16th - century aristocrats Mary Boleyn, one - time mistress of King Henry VIII, and her sister, Anne, who became the monarch 's ill - fated second wife, though much of the history is distorted.
Production studio BBC Films also owns the rights to adapt the sequel novel, The Boleyn Inheritance, which tells the story of Anne of Cleves, Catherine Howard and Jane Parker.
King Henry VIII 's (Eric Bana) marriage to Catherine of Aragon (Ana Torrent) does not produce a male heir to the throne; their only surviving daughter is Mary (Constance Stride). Thomas Howard, Duke of Norfolk (David Morrissey), and his brother - in - law Thomas Boleyn (Mark Rylance) plan to install Boleyn 's older daughter, Anne (Natalie Portman), as the king 's mistress. They hope Anne will bear him a son. Anne 's mother, Lady Elizabeth Boleyn (Kristin Scott Thomas), is disgusted by the plot. Anne eventually agrees to please her father and uncle. Anne 's younger sister, Mary Boleyn (Scarlett Johansson), marries William Carey (Benedict Cumberbatch), even though his family had asked for Anne 's hand.
While visiting the Boleyn estate, Henry is injured in a hunting accident indirectly caused by Anne; at the urging of her scheming uncle, Mary nurses him. While in her care, Henry becomes smitten with her and invites her to court. Mary and her husband reluctantly agree, aware that the king has invited her because he desires her. Mary and Anne become ladies - in - waiting to Queen Catherine and Henry sends William Carey abroad on an assignment. Separated from her husband, Mary finds herself falling in love with Henry. Anne secretly marries the nobleman Henry Percy (Oliver Coleman), although he is betrothed to Lady Mary Talbot. Anne confides in her brother George Boleyn (Jim Sturgess), who is overjoyed and proceeds to tell Mary. Fearing Anne will ruin the Boleyn family by marrying such a prominent earl without the king 's consent, Mary alerts her father and uncle. They confront Anne, annul the marriage, and exile her to France.
Mary becomes pregnant. Her family receives new grants and estates, their debts are paid, and Henry arranges George 's marriage to Jane Parker. When Mary nearly suffers a miscarriage, she is confined to bed until her child is born. Norfolk recalls Anne to England to keep Henry 's attention from wandering to another rival. In her belief that Mary exiled her to increase her own status, Anne successfully campaigns to win Henry over. When Mary gives birth to a son, Henry Carey, Thomas and Norfolk are overjoyed, but the celebration is short - lived, as Anne whispers to Henry that the baby was born a bastard, which infuriates Norfolk. Henry then has Mary sent to the country at Anne 's request. Shortly after, Mary is widowed. Anne encourages Henry to break from the Catholic Church when the Pope refuses to annul his marriage to Queen Catherine. Henry succumbs to Anne 's demands, declares himself Supreme Head of the Church of England, and gets Cardinal Thomas Wolsey to annul the marriage.
Henry comes to Anne 's rooms but she refuses to have sex with him, and, in a fit of rage, he rapes her. A pregnant Anne marries Henry to please her family and becomes Queen of England. Despite the birth of a healthy daughter, Elizabeth, Henry blames Anne for not producing a son, and begins courting Jane Seymour (Corinne Galloway) in secret. After Anne suffers the miscarriage of a son, she begs George to have sex with her to replace the child she lost, because if anyone found out about the miscarriage, she would be burned as a witch. George at first agrees, realizing that it is Anne 's only hope, but they do not go through with it. However, George 's neglected wife Jane witnesses enough of their encounter to become suspicious. She reports what she has seen and both Anne and George are arrested. The two are found guilty and sentenced to death for treason, adultery and incest. Distraught by the news of the execution of George, his mother disowns her husband and brother, vowing never to forgive them for what their greed has done to her children.
After Mary learns that she was late for George 's execution, she returns to court to plead for Anne 's life. Believing that Henry will spare her sister, she leaves to see Anne right before the scheduled execution. Anne asks Mary to take care of her daughter Elizabeth if anything should happen to her. Mary watches from the crowd as Anne makes her final speech, waiting for the execution to be cancelled as Henry promised. A letter from Henry is given to Mary, warning her not to come to his court further, and implicitly revealing his decision to execute Anne after all. Ten days after Anne 's execution, Henry and Jane are married, Norfolk is imprisoned, and the next three generations of his family are executed for treason. Mary marries William Stafford (Eddie Redmayne) and they have two children, Anne and Edward. Mary takes an active role in raising Anne 's daughter Elizabeth (Maisie Smith), who grows up to become Queen of England, and reigns for 44 years.
Much of the filming took place in Kent, England, though Hever Castle was not used, despite being the original household of Thomas Boleyn and family from 1505 -- 1539. The Baron 's Hall at Penshurst Place featured, as did Dover Castle, which stood in for the Tower of London in the film, and Knole House in Sevenoaks was used in several scenes. The home of the Boleyns was represented by Great Chalfield Manor in Wiltshire, and other scenes were filmed at locations in Derbyshire, including Cave Dale, Haddon Hall, Dovedale and North Lees Hall near Hathersage. Dover Castle was transformed into the Tower of London for the execution scenes of George and Anne Boleyn. Knole was the setting for many of the film 's London night scenes and the inner courtyard doubles for the entrance of Whitehall Palace where the grand arrivals and departures were staged. The Tudor Gardens and Baron 's Hall at Penshurst Place were transformed into the interiors of Whitehall Palace, including the scenes of Henry 's extravagant feast.
Historian Alex von Tunzelmann criticized The Other Boleyn Girl for its portrayal of the Boleyn family and Henry VIII, citing factual errors. She stated, "In real life, by the time Mary Boleyn started her affair with Henry, she had already enjoyed a passionate liaison with his great rival, King François I of France. Rather ungallantly, François called her ' my hackney ', explaining that she was fun to ride. Chucked out of France by his irritated wife, Mary sashayed back to England and casually notched up her second kingly conquest. The film 's portrayal of this Boleyn girl as a shy, blushing damsel could hardly be further from the truth. '' She further criticized the depiction of Anne as a "manipulative vixen '' and Henry as "nothing more than a gullible sex addict in wacky shoulder pads ''. The film presents other historical inaccuracies, such as the statement of a character that, by marrying Henry Percy, Anne Boleyn would become Duchess of Northumberland, a title that was only created in the reign of Henry 's son, Edward VI. Also, it places Anne 's time in French court after her involvement with Percy, something that occurred before the affair.
The film was first released in theaters on February 29, 2008, though its world premiere was held at the 58th Berlin International Film Festival held on February 7 -- 17, 2008. The film earned $9,442,224 in the United Kingdom, and $26,814,957 in the United States and Canada. The combined worldwide gross of the film was $75,598,644, more than double the film 's $35 million budget.
The film was released in Blu - ray and DVD formats on June 10, 2008. Extras on both editions include an audio commentary with director Justin Chadwick, deleted and extended scenes, character profiles, and featurettes. The Blu - ray version includes BD - Live capability and an additional picture - in - picture track with character descriptions, notes on the original story, and passages from the original book.
The film received mixed reviews. Rotten Tomatoes reported that 42 % of critics gave the film positive reviews, based on 140 reviews. The site 's general consensus is: "Though it features some extravagant and entertaining moments, The Other Boleyn Girl feels more like a soap opera than historical drama. '' Metacritic reported the film had an average score of 50 out of 100, based on 34 reviews.
Manohla Dargis of The New York Times called the film "more slog than romp '' and an "oddly plotted and frantically paced pastiche. '' She added, "The film is both underwritten and overedited. Many of the scenes seem to have been whittled down to the nub, which at times turns it into a succession of wordless gestures and poses. Given the generally risible dialogue, this is n't a bad thing. ''
Mick LaSalle of the San Francisco Chronicle said, "This is an enjoyable movie with an entertaining angle on a hard - to - resist period of history... Portman 's performance, which shows a range and depth unlike anything she 's done before, is the No. 1 element that tips The Other Boleyn Girl in the direction of a recommendation... (She) wo n't get the credit she deserves for this, simply because the movie is n't substantial enough to warrant proper attention. ''
Peter Travers of Rolling Stone stated, "The film moves in frustrating herks and jerks. What works is the combustible teaming of Natalie Portman and Scarlett Johansson, who give the Boleyn hotties a tough core of intelligence and wit, swinging the film 's sixteenth - century protofeminist issues handily into this one. ''
Peter Bradshaw of The Guardian awarded the film three out of five stars, describing it as a "flashy, silly, undeniably entertaining Tudor romp '' and adding, "It is absurd yet enjoyable, and playing fast and loose with English history is a refreshing alternative to slow and tight solemnity; the effect is genial, even mildly subversive... It is ridiculous, but imagined with humour and gusto: a very diverting gallop through the heritage landscape. ''
Sukhdev Sandhu of The Telegraph said, "This is a film for people who prefer their costume dramas to gallop along at a merry old pace rather than get bogged down in historical detail... Mining relatively familiar material here, and dramatising highly dubious scenarios, (Peter Morgan) is unable to make the set - pieces seem revelatory or tart... In the end, The Other Boleyn Girl is more anodyne than it has any right to be. It ca n't decide whether to be serious or comic. It promises an erotic charge that it never carries off, inducing dismissive laughs from the audience for its soft - focus love scenes soundtracked by swooning violins. It is tasteful, but unappetising. ''
|
top 10 largest soccer stadiums in the world | List of association football stadiums by capacity - wikipedia
The following is a list of association football stadiums. They are ordered by their seating capacity, that is, the maximum number of spectators that the stadium can accommodate in seated areas. All stadiums that are the home of a club or national team with a capacity of 40,000 or more are included. That is the minimum capacity required for a stadium to host FIFA World Cup finals matches.
The list contains both stadiums used solely for football, and those used for other sports as well as football. Some stadiums are only used by a team for certain high attendance matches, like local derbies or cup games.
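As a rough, illustrative sketch of the inclusion and ordering rules described above (home stadiums with a capacity of 40,000 or more, ordered by seating capacity), the snippet below filters and sorts a small list. The stadium names and capacities are made-up placeholders, not entries taken from the list itself.

```python
# Minimal sketch of the list's rules: keep stadiums seating at least 40,000
# (the minimum for FIFA World Cup finals matches) and order them by capacity,
# largest first. The entries are illustrative placeholders, not article data.

WORLD_CUP_MINIMUM = 40_000

stadiums = [
    {"name": "Example National Stadium", "capacity": 114_000},
    {"name": "Example Municipal Ground", "capacity": 39_500},
    {"name": "Example Arena", "capacity": 81_300},
]

eligible = [s for s in stadiums if s["capacity"] >= WORLD_CUP_MINIMUM]
eligible.sort(key=lambda s: s["capacity"], reverse=True)

for rank, stadium in enumerate(eligible, start=1):
    print(f"{rank}. {stadium['name']}: {stadium['capacity']:,} seats")
```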
|
who sings freak flag in shrek the musical | Shrek the Musical - wikipedia
Shrek The Musical is a musical with music by Jeanine Tesori and book and lyrics by David Lindsay - Abaire. It is based on DreamWorks Animation 's 2001 film Shrek and William Steig 's 1990 book Shrek!. After a trial run in Seattle, the original Broadway production opened in December 2008 and closed after a run of over 12 months in January 2010. It was followed by a tour of the United States which opened in 2010, and a re-vamped West End production from June 2011 to February 2013. Since its debut, the musical 's rights have been available for independent theaters overseas, which have chosen to stage their own versions of the show.
A high definition filming of the Broadway production was released on DVD, Blu - ray and digital download on October 15, 2013 in North America and December 2 in the United Kingdom.
Lindsay - Abaire and Jason Moore (director) began working on the show in 2002, with Tesori joining the team from 2004. A reading took place on August 10, 2007, with Stephen Kramer Glickman in the role of Shrek, Celia Keenan - Bolger as Princess Fiona, Robert L. Daye, Jr. as Donkey and Christopher Sieber as Lord Farquaad.
The musical premiered in an out - of - town tryout at the 5th Avenue Theatre in Seattle. Previews began August 14, 2008, with an opening night of September 10. The tryout ran through September 21, and played to generally favorable reviews, being cited as one of the few movie - to - stage adaptations "with heart ''. The principal cast included Brian d'Arcy James as Shrek, Sutton Foster as Princess Fiona, Christopher Sieber as Lord Farquaad, Chester Gregory II as Donkey, John Tartaglia as Pinocchio and Kecia Lewis - Evans as the Dragon.
During previews, "I Could Get Used to This '' was replaced by "Do n't Let Me Go, '' and "Let Her In '' became "Make a Move ''. Also during previews, a brief reprise of "Who I 'd Be '' was sung after Shrek overhears Fiona 's misleading comment about being with a hideous beast, which led into "Build a Wall ''. This was cut and "Build a Wall '' was placed after "Morning Person (Reprise) ''. "Build a Wall '' was later cut during previews, but re-instated towards the end of the run.
After extensive changes were made, the show began previews on Broadway at the Broadway Theatre on November 8, 2008, with the official opening on December 14. The cast included Brian d'Arcy James as Shrek, Sutton Foster as Fiona, Christopher Sieber as Farquaad and Tartaglia as Pinocchio. Daniel Breaker took over the role of Donkey, as the creative team thought Chester Gregory II did not fit the part. The Dragon was voiced by company members Haven Burton, Aymee Garcia and Rachel Stern, instead of a soloist. Kecia Lewis - Evans, who played Dragon in Seattle, was offered a part in the show 's ensemble but declined. Ben Crawford was the standby for Shrek, until he replaced d'Arcy James for the final months of performances.
The song "I 'm a Believer '', which was originally played as the audience left the theatre, was added to the score on October 2, 2009, and sung by the entire company at the end of the performance.
The Broadway production of the show received a total of twelve Drama Desk Award and eight Tony Award nominations, including Best Musical and acting awards for d'Arcy James, Foster and Sieber. At the Tony Awards, the entire cast performed a section of "Freak Flag '' for the opening number medley; later on, d'Arcy James, Foster and Breaker introduced Sieber and company, who performed "What 's Up Duloc? ''.
The Broadway production closed on January 3, 2010, after 441 performances and 37 previews. At the time, it was one of the most expensive musicals to open on Broadway, at an estimated $25 million, and despite generally good reviews, it failed to recoup its initial investment. The show was then extremely modified for the national tour.
A national tour of North America began previews at the Cadillac Palace Theatre in Chicago, on July 13, 2010, with opening night on July 25. Rob Ashford is the co-director, and the Broadway creative team made further revisions. The production marked the debut of an all - new Dragon, voiced off - stage by a single vocalist, with four puppeteers controlling the movements of the new 25 - foot puppet. On the subject, set designer Tim Hatley stated "The biggest change (will be) the dragon. It will be a different creature from the puppet / soul trio on Broadway (but) I think we 've finally gotten it right ''. The tour also features a new opening, new songs and improved illusions from those on Broadway.
Many changes made for the tour include a new song sung by the dragon entitled "Forever '', replacing "Donkey Pot Pie ''.
The original touring cast featured Eric Petersen as Shrek, Haven Burton as Princess Fiona, Alan Mingo, Jr. as Donkey, and David F.M. Vaughn as Lord Farquaad. Carrie Compere played the Dragon, with Blakely Slaybaugh as Pinocchio. Todd Buonopane was originally cast in the role of Lord Farquaad, but was replaced by Vaughn before opening. The tour played its final performance at the Pantages Theatre in Los Angeles on July 31, 2011, ahead of a non-equity tour in September.
A second tour of North America, featuring a Non-Equity cast, launched September 9, 2011, at the Capitol Theatre in Yakima, Washington. Merritt David Janes appeared as Lord Farquaad. The tour officially opened in Portland, Oregon on September 13, 2011. The tour ran in the U.S. through April 29, 2012, with the final show in Springfield, Missouri, before playing Asia.
The second non-equity tour began October 5, 2012, in Anchorage, Alaska, ending on April 7, 2013, in Reno, Nevada.
A newly revised scaled down version, began performances in the West End at the Theatre Royal Drury Lane, on May 6, 2011. Nigel Lindsay headlined as Shrek, Richard Blackwood as Donkey, Nigel Harman as Lord Farquaad and Amanda Holden as Princess Fiona. Landi Oshinowo plays the Dragon, with Jonathan Stewart as Pinocchio. The official opening night took place on June 14, 2011. Most critics were positive about the production, and in particular praised Harman 's performance, branding him "hysterically funny ''.
The show was nominated for a total of four awards at the 2012 Laurence Olivier Awards, including Best New Musical, Best Actor for Lindsay and Supporting Actor for Harman, as well as Best Costume Design for Tim Hatley. Harman won the award for Best Performance in a Supporting Role in a Musical for his performance as Lord Farquaad. The ensemble cast performed "Freak Flag '' at the awards.
Kimberley Walsh, of UK pop group Girls Aloud, took over the role of Princess Fiona from October 5, 2011, after Holden announced her pregnancy. Dean Chisnall and Neil McDermott took over from Lindsay and Harman as Shrek and Lord Farquaad respectively on February 29, 2012. Carley Stenson later took over as Princess Fiona from May 23, 2012.
The London production of the show came to an end after 715 performances, on February 24, 2013. Producers announced their plans to tour Shrek across the UK in 2014.
The first UK and Ireland tour began at the Grand Theatre in Leeds, England on July 23, 2014, before touring across the UK and Ireland. Dean Chisnall repeats his West End performance as Shrek, under the direction of Nigel Harman, who originated the role of Lord Farquaad in the West End. A full company announcement was made in February 2014, with Chisnall to be joined by Legally Blonde star Faye Brookes as Princess Fiona, Gerard Carey as Lord Farquaad, Idriss Kargbo as Donkey, Candace Furbert as Dragon and Will Haswell as Pinocchio. A cast change for the tour took place July 8, 2015, with ensemble member Bronté Barbé taking over the role of Princess Fiona from Brookes. The tour concluded at The Lowry, Salford on February 20, 2016.
A second UK and Ireland tour commenced at the Edinburgh Playhouse on 12 December 2017. Nigel Harman once again directs the tour. The tour is currently scheduled to run until at least January 2019. The full cast was announced in November 2017. The X Factor star Amelia Lily and Call the Midwife actress Laura Main will share the role of Princess Fiona, alongside Samuel Holmes as Lord Farquaad, Stefan Harri as Shrek and Marcus Ayton as Donkey.
Shrek was made available for independent US and overseas theatres. They have chosen to stage their own versions of the show with the same music, book and lyrics intact and their own designs for sets, costuming and other creative elements. Productions have been staged in Asia, Poland, Spain, France, Italy, Germany, The Netherlands, Belgium, Mexico, Brazil, the Philippines, Estonia, Finland, Israel, Sweden, Panama, Denmark, Canada, Puerto Rico, Argentina and Norway.
On his seventh birthday, two ogre parents send their son Shrek out of their house and into the world to make his living. They warn him that because of his looks, humans will hate him, and the last thing he will see is an angry mob before he dies. ("Big Bright Beautiful World ''). Some years later, an embittered, grown up Shrek is living contentedly alone in a swamp. However, his solitude is disrupted when a band of fairytale creatures show up on his property, including:
They explain of their banishment from the Kingdom of Duloc, by order of the evil Lord Farquaad, who exiled them for being freaks, under penalty of death if they ever return ("Story of My Life ''). Even so, Shrek decides to travel to see Farquaad and try to regain his swamp, along with getting the Fairytale Creatures their homes back (but mostly to get his swamp back), with much encouragement from Pinocchio and the gang ("The Goodbye Song '').
Along the way, Shrek rescues a talkative donkey from some of Farquaad 's guards. In return for rescuing him, and offering his friendship, Donkey insists on tagging along to show him the way to Duloc ("Do n't Let Me Go ''), which Shrek reluctantly agrees to, due to him being lost.
Meanwhile, in the Kingdom of Duloc, Farquaad is torturing Gingy the Gingerbread Man into revealing the whereabouts of other Fairytale Creatures that are still hiding in the Kingdom so he can have them arrested as well. They are interrupted by one of his henchmen, Thelonious, who reveals that Farquaad 's guards have acquired the Magic Mirror. Farquaad then sends Gingy to the swamp with all the other Fairytale Creatures. The Mirror reveals that Princess Fiona is currently trapped in a castle surrounded by hot boiling lava and guarded by a terrible fire - breathing dragon. Accepting this task, Farquaad decides to marry her to become king, and rushes out to prepare for the wedding before the Mirror can tell him what happens to Fiona at night. The Mirror then shows the audience the story of Fiona 's childhood.
A seven - year - old Fiona dreams of the brave knight who, as her storybooks tell her, will one day rescue her from her tower and end her mysterious curse with "True Love 's First Kiss ''. As she grows into a teenager, and then a headstrong woman, she becomes a little bit stir - crazy, but she never loses her faith in her fairytales ("I Know It 's Today '').
Shrek and Donkey arrive in Duloc where Farquaad expresses his love for his kingdom, accompanied by his cheerful cookie - cut army of Duloc Dancers ("Welcome to Duloc '' / "What 's Up, Duloc? ''). They approach Farquaad, who is impressed by Shrek 's size and appearance. Farquaad demands that Shrek rescue Fiona, and in return, he will give Shrek the deed to his swamp ("What 's Up, Duloc? (Reprise) '').
The two unlikely friends set off to find Fiona, with Shrek becoming increasingly annoyed with Donkey as time progresses ("Travel Song ''). After crossing the rickety old bridge and arriving at the castle, Shrek sets off alone to rescue Fiona while Donkey encounters a ferocious female dragon who initially wants to eat him, but then wants to keep him for her own after Donkey manages to charm her ("Donkey Pot Pie '') / ("Forever ''). When Shrek finds Fiona, his lack of interest in playing out her desired, romantic rescue scene annoys her, and he drags her off by force. The two of them reunite with Donkey, and all three attempt to escape while being chased by the angry Dragon. Shrek traps Dragon and they get to a safe point ("This Is How A Dream Comes True '').
Fiona then insists that Shrek reveal his identity and is shocked that her rescuer is an ogre and not the Prince Charming her stories indicated. Shrek explains that he is merely her champion; instead, she is to marry Farquaad. The trio begin their journey back to Duloc, but Fiona becomes apprehensive as the sun begins to set. She insists that they rest for the night and that she spend the night alone in a nearby cave. Donkey and Shrek remain awake, with Donkey asking Shrek who he would be, if he did not have to be an ogre anymore. As Shrek opens up to Donkey on who he would wish to be, Fiona transforms into an ogress as part of her curse that happens during sunset, stands apart, alone, and listens ("Who I 'd Be '').
The next day, Princess Fiona rises early and sings with a bluebird and dances with a deer (before making the bird explode and throwing the deer off a cliff). She assists the Pied Piper in his rat - charming duties ("Morning Person ''). Shrek brings down her mood by attempting to give subtle hints about her groom - to - be ("Men of Farquaad 's stature are in short supply '', "He 's very good at small talk '', etc.) and mocking her tragic childhood circumstances. The two begin a contest of trying to one - up each other to outdo the others ' backstory, but end up revealing their respective pasts ("I Think I Got You Beat ''). Both admit to being thrown out by their parents; this connection, as well as bonding over a love of disgusting bodily noises, kindles friendship.
Back in Duloc, Lord Farquaad plans his wedding, and he reveals his own sordid heritage after The Magic Mirror insists that Farquaad should invite his father, but Farquaad refuses, explaining how he abandoned him in the woods as a child ("The Ballad of Farquaad '').
As Shrek and Fiona 's newfound camaraderie grows into love, Donkey insists, with the help of the Three Blind Mice from his imagination, that Shrek should gather his courage and romantically engage Fiona ("Make a Move ''). Shrek, finally beginning to come out of his caustic, protective shell, tries to find the words to explain his feelings to Fiona ("When Words Fail '').
While Shrek is out finding a flower for Fiona, Donkey discovers that Fiona turns into an ogress at night, and she confesses that she was cursed as a child, which is why she was locked away in the tower. Only a kiss from her true love will return her to her proper form, and she asks Donkey to promise never to tell. Shrek arrives near the end of the conversation and misunderstands Fiona 's description of herself as an ugly beast, and thinks she is talking about him. Hurt by her presumed opinion, Shrek storms off.
The next day, transformed back to her human form, Fiona decides to tell Shrek about her curse ("Morning Person (Reprise) ''). When she tries to explain, Shrek rebuffs her with his "ugly beast '' overhearing, causing Fiona in turn to misunderstand. Then Farquaad arrives to claim Fiona and tells Shrek he has cleared the swamp of the Fairytale Creatures, and now belongs to Shrek again. While not very impressed with Farquaad, Fiona agrees to marry him and insists that they have the wedding before sunset. As Farquaad and Fiona ride back to Duloc, Donkey tries to explain the misunderstanding to Shrek (who is too angry and upset to listen), and Shrek rejects him as well, declaring that he will return to his swamp alone and build a wall to shield himself from the world ("Build a Wall '').
Meanwhile, the Fairytale Creatures are on their way to a landfill which is to be their new home, since they have been evicted from the swamp. After dealing with the fact that Shrek broke his promise to them, they start agreeing that Farquaad 's treatment of them is intolerable; just because they are freaks does not mean they deserve to be hated, and start deciding on doing something about it. But a bitter Pinocchio (remembering they are not allowed back in the Kingdom, or they will be executed), not wanting his friends to get hurt, suggests they should just keep going, and wait until everything gets better, all the while, wishing to be a real boy. The gang convince Pinocchio that they need to stand up to Farquaad, while also inspiring him to accept who he is, as all of them have accepted who they are. They gather new confidence and strength in themselves, as they declare that they 'll raise their "Freak Flag '' high against their tormentors ("Freak Flag ''). Now realizing that they have become something more than friends, and have become a family, Pinocchio and the gang now set off to Duloc to stand up to Farquaad once and for all.
Shrek has returned to his once again private swamp, but he misses Fiona. Donkey shows up attempting to seal off his half of the swamp with stone boulders, which Shrek rebuffs. In turn, Donkey angrily berates Shrek for his reclusive and stubborn habits, even to the point of driving off Fiona. An angered Shrek reveals he heard her talking about a hideous creature the night before, and Donkey retorts that they were not talking about him, but of "someone else ''. When a confused Shrek inquires who it was, Donkey, wanting to keep his promise, and still cross with Shrek, refuses to talk. When Shrek apologizes and extends his friendship, Donkey forgives him.
The two then go back to Duloc, where Shrek interrupts the wedding before Farquaad can kiss Fiona, and Fiona convinces him to let Shrek speak with her. Shrek finally finds the words to express his feelings for Fiona, and he declares his love for her ("Big Bright Beautiful World (Reprise) ''). However, his declaration of love is mocked by Farquaad. Caught between love and her desire to break the curse, Fiona tries to escape the event. Just then, the Fairytale Creatures storm into the wedding and protest their banishment. They are accompanied by Grumpy, one of the Seven Dwarfs, who reveals that he is Farquaad 's father and he kicked Farquaad out at the age of 28 when he would n't move out of the basement, revealing Farquaad is a freak as well. During the argument, the sun sets, causing Fiona to turn into an ogress in front of everyone. Farquaad, furious and disgusted over the change, orders for Shrek to be drawn and quartered (along with the Fairytale Creatures) and Fiona banished back to her tower. As Farquaad proclaims himself the new King, Shrek whistles for the Dragon, who has now escaped the castle (and is the reason Shrek and Donkey got to the wedding just in time). Dragon then crashes through the window with Donkey and incinerates Farquaad with her fiery breath.
With Farquaad dead, Shrek and Fiona admit their love for each other and share true love 's first kiss. Fiona 's curse is broken, and she takes her true form: an ogress. At first, she is ashamed of her looks, but Shrek declares that she is still beautiful. The two ogres begin a new life together (along with Donkey and the Fairytale Creatures), as everyone celebrates what makes them special ("This Is Our Story '') and they all live happily ever after ("I 'm A Believer '').
≠ Not included on the original Broadway cast recording. "I 'm a Believer '', however, was recorded later and released as a single as it was not in the show when the cast recording was made.
The original principal casts of the English - speaking productions.
The Orchestra includes one bass guitar player, one trumpeter, one trombonist, two guitar players, one drummer, two violinists, two reed players, one horn player, two keyboard players, a cello player, and a percussion player. The guitar players double on ukulele, mandolins, electric guitars and acoustic guitars. The trumpeter doubles on a flugelhorn and a piccolo trumpet. The trombonist doubles on tenor and bass trombones. The bass player doubles on the upright bass, the electric bass, and the 5 - string bass guitar. The first reed doubles on alto sax, clarinet, flute, and piccolo. The second reed doubles on soprano sax, baritone sax, tenor sax, flute, bass clarinet, and clarinet.
The original Broadway orchestration included an additional trumpet, an additional trombone, two more violinists, one more cellist, and two more reed players. In this orchestration, the first reed doubles on piccolo, flute, and recorder. The second reed doubles on oboe, English horn, clarinet, and alto sax. The third reed doubles on flute, clarinet, bass clarinet, soprano sax, and tenor sax. The fourth reed doubles on clarinet, bassoon, and baritone sax.
The original Broadway cast recording was recorded on January 12, 2009, and was released on March 24, 2009, by Decca Broadway Records. The album debuted at # 1 on Billboard 's Top Cast Albums chart and # 88 on the Billboard 200. "I 'm a Believer '' was not featured on the initial recording as it was only added to the show on October 2, 2009. It was later included as part of a Highlighted Cast Recording, released on November 17, 2009. On December 4, 2009, when the Grammy Award nominees were announced, the cast recording was nominated for Best Musical Show Album.
"Donkey Pot Pie '' (which is included on the original Broadway cast recording) was cut from future productions, replaced by "Forever ''. The song became available on iTunes in 2011. It was recorded during a live performance of the national tour in Chicago, and features Carrie Compere (Dragon) and Alan Mingo, Jr. (Donkey).
The original London cast recorded a single of "I 'm a Believer '' for promotional purposes.
An original Spanish - language cast recording featuring the Madrid cast was recorded between August and September 2011, and released in September. The Spanish album includes later added songs "Forever '' and "I 'm a Believer '', as well as different orchestrations to the Broadway recording and the arrangements made for the national tour.
The musical received mixed to positive reviews from critics. Ben Brantley wrote in The New York Times: "' Shrek, ' for the record, is not bad... As the title character, a misanthropic green ogre who learns to love, the talented Mr. James is... encumbered with padding and prosthetics... As the evil, psychologically maimed Lord Farquaad, the very droll Christopher Sieber is required to walk on his knees, with tiny fake legs dangling before him -- an initially funny sight gag that soon drags ''. He praises Sutton Foster as "an inspired, take - charge musical comedian... Ms. Foster manages both to make fun of and exult in classical musical - comedy moves while creating a real, full character at the same time. ''
Variety noted that the production had a reported budget of $24 million. Any "theme - park cutesiness is offset by the mischievous humor in David Lindsay - Abaire 's book and lyrics. The production 's real achievement, however, is that the busy visuals and gargantuan set - pieces never overwhelm the personalities of the actors or their characters. The ensemble is talented and the four leads, in particular, could n't be better. ''
The Associated Press said that "the folks at DreamWorks have done their darndest to make sure we are entertained at Shrek the Musical, the company 's lavish stage adaptation of its hit animated movie. For much of the time, they succeed, thanks to the talent and ingratiating appeal of the show 's four principal performers. The show 's massive sets and colorful costumes (both courtesy of Tim Hatley) are so visually eye - catching that they often distract from what 's going on with the story and score. Composer Jeanine Tesori has written attractive, eclectic, pop - flavored melodies that range from a jaunty ' Travel Song ' to a gutsy duet called ' I Got You Beat ' for Shrek and Fiona that revels in rude noises. '' The review also noted that Lindsay - Abaire 's lyrics are often fun and quite witty.
USA Today gave the show three and 1⁄2 out of four stars, writing: "Shrek, which draws from William Steig 's book about a lovable ogre and the DreamWorks animated movie that it inspired, is nonetheless a triumph of comic imagination with a heart as big and warm as Santa 's. It is the most ingeniously wacky, transcendently tasteless Broadway musical since The Producers, and more family - friendly than that gag-fest. '' The review also noted, however, that "Like other musical adaptations of hit films, Shrek... leans heavily on winking satire. There are the usual nods to more fully realized shows, from Gypsy to A Chorus Line, and Jeanine Tesori 's blandly ingratiating score does n't feature any songs you 're likely to be humming 20 years from now. ''
In October 2009, Jeffrey Katzenberg said that a performance of the Broadway production had been recorded for a potential DVD release. However, due to the national tour and West End productions running considerably longer, the idea was put on hold. On July 19, 2013, following the closure of the national tour and West End productions, Amazon.com confirmed that the filmed performance would be available for instant viewing on September 17, 2013. It also became available "in HD for playback on Kindle Fire HD, Xbox 360, PlayStation 3, Roku or other HD compatible devices '' beginning October 15, 2013. The home video release is also available on Netflix Streaming as of January 2014. A DVD, Blu - ray, and digital download was also released on that day. The performance is an edit of several live performances as well as a performance shot without an audience. The original principal cast appear, as well as various alumni across the show 's Broadway run. Also, it keeps the song "Donkey Pot Pie '' instead of the replacement, "Forever. ''
|
what's the drinking age in puerto rico 2017 | Legal drinking age - wikipedia
The legal drinking age is the age at which a person can legally consume or purchase alcoholic beverages. These laws cover a wide range of issues and behaviors, addressing when and where alcohol can be consumed. The minimum age alcohol can be legally consumed can be different from the age when it can be purchased in some countries. These laws vary among different countries and many laws have exemptions or special circumstances. Most laws apply only to drinking alcohol in public places, with alcohol consumption in the home being mostly unregulated (an exception being the UK, which has a minimum legal age of five for supervised consumption in private places). Some countries also have different age limits for different types of alcoholic drinks.
Some Islamic nations prohibit Muslims, or both Muslims and non-Muslims, from drinking alcohol at any age. In other countries, it is not illegal for minors to drink alcohol, but the alcohol can be seized without compensation. In some cases, it is illegal to sell or give alcohol to minors. The following list indicates the age of the person for whom it is legal to consume and purchase alcohol.
Kazakhstan, Oman, Pakistan, Qatar, Sri Lanka, Tajikistan, Thailand, United Arab Emirates, Federated States of Micronesia, Palau, Paraguay, Solomon Islands, India (certain states), the United States (except U.S. Virgin Islands and Puerto Rico), Yemen (Aden and Sana'a), Japan, Iceland, Canada (certain Provinces and Territories), and South Korea have the highest set drinking ages; however, some of these countries do not have off - premises drinking limits. Austria, Belgium, Burundi, Denmark, Germany, Georgia, Luxembourg, Moldova and Western Sahara have the lowest set drinking ages.
Federal law explicitly provides for religious exceptions. As of 2005, thirty - one states have family member or location exceptions to their underage possession laws. However, non-alcoholic beer in many (but not all) states, such as Idaho, Texas, and Maryland, is considered legal for those under the age of twenty - one.
By a judge 's ruling, South Carolina appears to allow the possession and consumption of alcohol by adults eighteen to twenty years of age, but a circuit court judge has said otherwise.
The states of Washington and Wisconsin allow the consumption of alcohol in the presence of parents.
Some U.S. states have legislation that makes providing alcohol to, and possession of alcohol by, persons under twenty - one a gross misdemeanor, with a potential penalty of a $5,000 fine or up to a year in jail.
Main Legislation
The legal age for drinking alcohol is 18 in Abu Dhabi (although a Ministry of Tourism by - law allows hotels to serve alcohol only to those over 21), and 21 in Dubai and the Northern Emirates (except Sharjah, where drinking alcohol is illegal).
It is a punishable offence to drink, or to be under the influence of alcohol, in public.
By tradition, youths are privately allowed to drink alcohol after their confirmation. If a shop or bar fails to ask for an ID card and is found to have sold alcohol to an underage person, it is subject to a fine. A national ID card, obtained in the local town hall, can serve as age verification. This card is rarely used, though, since a passport or driver 's license is more commonly used.
Both the legal drinking and purchasing age in the Faroe Islands is 18.
Police may search minors in public places and confiscate or destroy any alcoholic beverages in their possession. Incidents are reported to the legal guardian and social authorities, who may intervene with child welfare procedures. In addition, those aged 15 or above are subject to a fine.
In private, offering alcohol to a minor is considered a criminal offence if it results in drunkenness and the act can be deemed reprehensible as a whole, considering the minor 's age, degree of maturity and other circumstances.
16 in public (14 when accompanied by a custodial person) for beer and wine; 18 for spirits and foods containing more than a negligible amount of spirits
The minimum age to be served in licensed premises is 16 if:
Alcohol with more than 60 % ABV is generally not sold in Norway, although exceptions may be made by the government for specific products.
Alcohol possessed by minors may be confiscated as evidence. Drinking in public is prohibited, though this is rarely enforced in recreational areas.
None
None (less than 2.25 % ABV); 18 (bars and restaurants); 18 (2.25 % -- 3.5 % ABV in food shops); 20 (Systembolaget shops)
It is illegal to sell, serve, offer or consume alcoholic beverages in public under the age of 18.
Under the BBPA 's Challenge 21 and Challenge 25 schemes, customers attempting to buy alcoholic beverages are asked to prove their age if in the retailer 's opinion they look under 21 (or optionally 25) even though the law states they must be a minimum of 18. Many supermarket and off - licence chains display Challenge 21 (or Challenge 25) notices stating that they will not serve persons who look under 21 (or 25) without ID.
|
name a solid that is an oxidizing agent | Oxidizing agent - wikipedia
In chemistry, an oxidizing agent (oxidant, oxidizer) is a substance that has the ability to oxidize other substances (cause them to lose electrons). Common oxidizing agents are oxygen, hydrogen peroxide and the halogens.
In one sense, an oxidizing agent is a chemical species that undergoes a chemical reaction that removes one or more electrons from another atom. In that sense, it is one component in an oxidation -- reduction (redox) reaction. In the second sense, an oxidizing agent is a chemical species that transfers electronegative atoms, usually oxygen, to a substrate. Combustion, many explosives, and organic redox reactions involve atom - transfer reactions.
Electron acceptors participate in electron - transfer reactions. In this context, the oxidizing agent is called an electron acceptor and the reducing agent is called an electron donor. A classic oxidizing agent is the ferrocenium ion Fe(C₅H₅)₂⁺, which accepts an electron to form Fe(C₅H₅)₂. One of the strongest acceptors commercially available is "Magic blue '', the radical cation derived from N(C₆H₄-4-Br)₃.
Extensive tabulations of ranking the electron accepting properties of various reagents (redox potentials) are available, see Standard electrode potential (data page).
In more common usage, an oxidising agent transfers oxygen atoms to a substrate. In this context, the oxidising agent can be called an oxygenation reagent or oxygen - atom transfer (OAT) agent. Examples include MnO₄⁻ (permanganate), CrO₄²⁻ (chromate), OsO₄ (osmium tetroxide), and especially ClO₄⁻ (perchlorate). Notice that these species are all oxides.
In some cases, these oxides can also serve as electron acceptors, as illustrated by the conversion of MnO₄⁻ to MnO₄²⁻, manganate.
The dangerous materials definition of an oxidizing agent is a substance that can cause or contribute to the combustion of other material. By this definition some materials that are classified as oxidising agents by analytical chemists are not classified as oxidising agents in a dangerous materials sense. An example is potassium dichromate, which does not pass the dangerous goods test of an oxidising agent.
The U.S. Department of Transportation defines oxidizing agents specifically. There are two definitions for oxidizing agents governed under DOT regulations. These two are Class 5; Division 5.1 and Class 5; Division 5.2. Division 5.1 "means a material that may, generally by yielding oxygen, cause or enhance the combustion of other materials. '' Division 5.1 of the DOT code applies to solid oxidizers "if, when tested in accordance with the UN Manual of Tests and Criteria (IBR, see § 171.7 of this subchapter), its mean burning time is less than or equal to the burning time of a 3: 7 potassium bromate / cellulose mixture. '' Division 5.1 of the DOT code applies to liquid oxidizers "if, when tested in accordance with the UN Manual of Tests and Criteria, it spontaneously ignites or its mean time for a pressure rise from 690 kPa to 2070 kPa gauge is less than the time of a 1: 1 nitric acid (65 percent) / cellulose mixture. ''
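The Division 5.1 criteria above amount to comparisons against reference mixtures: a solid qualifies if its mean burning time does not exceed that of the 3: 7 potassium bromate / cellulose standard, and a liquid qualifies if it spontaneously ignites or its mean pressure - rise time (690 kPa to 2070 kPa) is shorter than that of the 1: 1 nitric acid (65 percent) / cellulose standard. The sketch below encodes only that decision logic for illustration; the function names and example values are assumptions, and actual classification must follow the test procedures in the UN Manual of Tests and Criteria.

```python
# Illustrative sketch of the DOT Division 5.1 decision rule quoted above.
# Function and parameter names are hypothetical; real classification requires
# the UN Manual of Tests and Criteria procedures, not this simplified check.

def is_division_5_1_solid(mean_burning_time_s: float,
                          reference_burning_time_s: float) -> bool:
    """Solid oxidizer: mean burning time is less than or equal to that of the
    3:7 potassium bromate / cellulose reference mixture."""
    return mean_burning_time_s <= reference_burning_time_s


def is_division_5_1_liquid(spontaneously_ignites: bool,
                           pressure_rise_time_s: float,
                           reference_pressure_rise_time_s: float) -> bool:
    """Liquid oxidizer: spontaneous ignition, or a mean time for the pressure
    rise from 690 kPa to 2070 kPa gauge shorter than that of the 1:1 nitric
    acid (65 percent) / cellulose reference mixture."""
    return spontaneously_ignites or (
        pressure_rise_time_s < reference_pressure_rise_time_s
    )


# Hypothetical example values, not measured data:
print(is_division_5_1_solid(mean_burning_time_s=42.0,
                            reference_burning_time_s=50.0))         # True
print(is_division_5_1_liquid(spontaneously_ignites=False,
                             pressure_rise_time_s=30.0,
                             reference_pressure_rise_time_s=25.0))  # False
```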
|
explain why insects excrete uric acid as their principal nitrogenous waste | Metabolic waste - wikipedia
Metabolic wastes or excretes are substances left over from metabolic processes (such as cellular respiration), which can not be used by the organism (they are surplus or toxic), and must therefore be excreted. This includes nitrogen compounds, water, CO₂, phosphates, sulfates, etc. Animals treat these compounds as excretes. Plants have chemical "machinery '' which transforms some of them (primarily the nitrogen compounds) into useful substances, and it has been shown by Brian J. Ford that abscised leaves also carry wastes away from the parent plant. In this way, Ford argues that the shed leaf acts as an excretory (an organ carrying away excretory products).
All the metabolic wastes are excreted in the form of water solutes through the excretory organs (nephridia, Malpighian tubules, kidneys), with the exception of CO₂, which is excreted together with the water vapor through the lungs. The elimination of these compounds enables the chemical homeostasis of the organism.
The nitrogen compounds through which excess nitrogen is eliminated from organisms are called nitrogenous wastes (/ naɪˈtrɒdʒɪnəs /) or nitrogen wastes. They are ammonia, urea, uric acid, and creatinine. All of these substances are produced from protein metabolism. In many animals, the urine is the main route of excretion for such wastes; in some, the feces is.
Ammonotelism is the excretion of ammonia and ammonium ions. Ammonia (NH₃) forms with the oxidation of amino groups (-NH₂), which are removed from the proteins when they convert into carbohydrates. It is a very toxic substance to tissues and extremely soluble in water. Only one nitrogen atom is removed with it. A lot of water is needed for the excretion of ammonia: about 0.5 L of water is needed per 1 g of nitrogen to maintain ammonia levels in the excretory fluid below the level in body fluids to prevent toxicity. Thus, marine organisms excrete ammonia directly into the water and are called ammonotelic. Ammonotelic animals include protozoans, crustaceans, platyhelminths, cnidarians, poriferans, echinoderms, and other aquatic invertebrates.
The excretion of urea is called ureotelism. Land animals, mainly amphibians and mammals, convert ammonia into urea, a process which occurs in the liver and kidney. These animals are called ureotelic. Urea is a less toxic compound than ammonia; two nitrogen atoms are eliminated through it and less water is needed for its excretion. It requires 0.05 L of water to excrete 1 g of nitrogen, approximately only 10 % of that required in ammonotelic organisms.
Uricotelism is the ridding of excess nitrogen using uric acid. This method is used by birds and diapsids, insects, lizards, and snakes, and these animals are called uricotelic. Uric acid is less toxic than ammonia or urea. It contains four nitrogen atoms, and only a small amount of water (about 0.001 L per 1 g of nitrogen) is needed for its excretion. Uric acid is the least soluble in water and can be stored in cells and body tissues without toxic effects. A single molecule of uric acid can remove four atoms of nitrogen, making uricotelism more efficient than ammonotelism or ureotelism.
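Putting the three water requirements quoted in this section side by side makes the water - economy argument explicit; the figures below are the approximate per - gram - of - nitrogen values given above, not new data.

```latex
\begin{align*}
\text{Ammonia (ammonotelism):}  &\quad \approx 0.5\ \text{L of water per g of N} \\
\text{Urea (ureotelism):}       &\quad \approx 0.05\ \text{L per g of N}
  = \tfrac{0.05}{0.5} = 10\%\ \text{of the ammonia requirement} \\
\text{Uric acid (uricotelism):} &\quad \approx 0.001\ \text{L per g of N}
  = \tfrac{0.001}{0.5} = 0.2\%\ \text{of the ammonia requirement}
\end{align*}
```

On these figures, excreting a gram of nitrogen as uric acid costs roughly 1 / 500 of the water that excreting it as ammonia would. That, together with uric acid 's very low solubility, which lets it be stored and voided as a paste, is why water - limited land animals such as insects, birds, and reptiles are uricotelic.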
Uricotelic organisms typically have white pasty excreta. Some mammals including humans excrete uric acid as a component of their urine but it is only a small amount.
These compounds form during the catabolism of carbohydrates and lipids in condensation reactions, and in some other metabolic reactions of the amino acids. Oxygen is produced by plants and some bacteria in photosynthesis, while CO₂ is a waste product of all animals and plants. Nitrogen gases are produced by denitrifying bacteria and as a waste product, and decay bacteria yield ammonia, as do most invertebrates and vertebrates. Water is the only liquid waste from animals and photosynthesizing plants.
Nitrates and nitrites are wastes produced by nitrifying bacteria, just as sulfur and sulfates are produced by sulfur - reducing bacteria and sulfate - reducing bacteria. Insoluble iron waste can be made by iron bacteria by using soluble forms. In plants, resins, fats, waxes, and complex organic chemicals are exuded from plants, e.g., the latex from rubber trees and milkweeds. Solid waste products may be manufactured as organic pigments derived from breakdown of pigments like hemoglobin, and inorganic salts like carbonates, bicarbonates, and phosphate, whether in ionic or molecular form, are excreted as solids.
Animals dispose of solid waste as feces.
|
who's team is brynn on the voice | The Voice (Us season 14) - wikipedia
The fourteenth season of the American reality talent show The Voice premiered on February 26, 2018, on NBC. New coach Kelly Clarkson and returning coach Alicia Keys replaced Miley Cyrus and Jennifer Hudson. An ad for this season was released on YouTube for the Super Bowl 52 on February 4, 2018.
On May 22, 2018, Brynn Cartelli was crowned the winner of The Voice. With her win, the fifteen - year - old became the youngest winner in the show 's history. Sawyer Fredericks at sixteen was the youngest until Cartelli won. With her victory, Kelly Clarkson became the first new coach to win on her first season, and overall, the third female winning coach, behind Alicia Keys and Christina Aguilera. Additionally, runner - up Britton Buchanan became the highest - placing artist who advanced via an Instant Save, following Joshua Davis of season eight and Chris Jamison of season seven, who both placed third.
The coaching lineup changed once again for the fourteenth season. Adam Levine and Blake Shelton returned as coaches, making them the only members of the coaching panel to be part of all fourteen seasons. Miley Cyrus and Jennifer Hudson were replaced by returning coach Alicia Keys, who returned after a one - season absence and was participating in her third season, and Kelly Clarkson, who served as an advisor for Team Blake during the Battle rounds in the second season and key advisor for all the teams during the Knockout Rounds in the thirteenth season. Carson Daly returned for his fourteenth season as host.
This season a new feature, the Block, was added during the Blind Auditions. This allows a coach to block one other coach from getting an artist that coach turned around for.
Also, Season 14 introduced a Save button that allows a coach to save an artist they just eliminated during the Knockouts Round. However, if another coach also presses his or her Steal button, the contestant can then decide whether they want to go to a new team or return to their former coach.
This season 's advisors for the Battle Rounds were Julia Michaels for Team Adam, Shawn Mendes for Team Alicia, Hailee Steinfeld for Team Kelly, and Trace Adkins for Team Blake.
A new feature within the Blind Auditions this season is the Block, which each coach can use once to prevent one of the other coaches from getting a contestant.
The Battle Rounds started on March 19. Season fourteen 's advisors include: Julia Michaels for Team Adam, Shawn Mendes for Team Alicia, Hailee Steinfeld for Team Kelly, and Trace Adkins for Team Blake. The coaches can steal two losing artists from other coaches. Contestants who win their battle or are stolen by another coach will advance to the Knockout rounds.
The Knockouts round started on April 2. The coaches can each steal one losing artist from another team and save one artist who lost their Knockout on their own team. The top 24 contestants then moved on to the Live Playoffs.
For the first time in Voice history, past winners of the show returned as Key Advisors -- Jordan Smith for Team Adam, Chris Blue for Team Alicia, Cassadee Pope for Team Kelly, and Chloe Kohanski for Team Blake.
For the first time, the Live Playoffs were split into two rounds and introduced immunity to the highest vote - getters to advance to the Top 12 in Round 1.
During the first round of the Live Playoffs, all of the Top 24 performed for the votes of the public. The artist with the highest amount of votes on each team directly advanced to the Top 12, receiving immunity from the Round 2 eliminations, and left the remaining five artists from each team to perform again in Round 2.
On Nights 2 and 3, the twenty remaining artists performed a second time for the votes of the public (this vote was separate from the Round 1 vote). The highest vote - getter and a coach 's choice from each team joined the other four artists from Round 1 in the Top 12.
This week 's theme was "Story Behind The Song ''. The two artists with the fewest votes competed for an Instant Save, with one or two leaving the competition each week until the semifinals.
None of the artists reached the top 10 on iTunes, so no bonuses were awarded.
The theme for this week was "Fan Night '', meaning that the artists performed songs chosen by the fans.
iTunes bonuses were awarded to Pryor Baird (# 4) and Britton Buchanan (# 6).
The theme for this week was "Overcoming Struggles ''. Different from previous weeks, the bottom three vote - getters competed in the Instant Save, with two artists being eliminated.
iTunes bonuses were awarded to Kyla Jade (# 5), Brynn Cartelli (# 8), Britton Buchanan (# 9) and Pryor Baird (# 10).
The Top 8 performed on Monday, May 14, 2018, with the results following on Tuesday, May 15, 2018. In the semifinals, three artists automatically moved on to next week 's finale, the two artists with the least votes were immediately eliminated and the middle three contended for the remaining spot in next week 's finale via the Instant Save. iTunes bonuses were awarded to Kyla Jade (# 2), Britton Buchanan (# 3), Brynn Cartelli (# 4), Kaleb Lee (# 5), Pryor Baird (# 7) and Spensha Baker (# 10). In addition to their individual songs, each artist performed a duet with another artist in the competition, though these duets were not available for purchase on iTunes.
With the elimination of Rayshun LaMarr, Adam Levine no longer had any artists remaining on his team. This is the first time since the fourth season that he had lost his entire team prior to the finale. In addition, with the advancement of Brynn Cartelli to the finale, Kelly Clarkson became the third new coach to successfully get an artist on her team to the finale on her first attempt as a coach, the second being Alicia Keys, who coached Wé McDonald all the way to the finale of the eleventh season, and the first being Usher, who coached Michelle Chamuel all the way to the finale of the fourth season. This also marked the first time ever that two female coaches were represented in the finale.
The Final 4 performed on Monday, May 21, 2018, with the final results following on Tuesday, May 22, 2018. This week, the four finalists performed a solo cover song, a duet with their coach, and an original song. iTunes bonuses were awarded to Britton Buchanan (# 1 and # 10), Brynn Cartelli (# 3 and # 6), Spensha Baker (# 4) and Kyla Jade (# 9). Buchanan 's "Where You Come From '' and Cartelli 's "Walk My Way '' are the only songs that charted # 1 on iTunes.
This is the first time that an artist who won the Instant Save in the semifinals did not finish in the last two spots in the finals, with Britton Buchanan finishing 2nd. With Britton Buchanan and Brynn Cartelli making it to the Top 2, this marked the first time in which the final results came down to two artists who have neither represented Blake Shelton nor Adam Levine. This is also the first time that the final results came down to two artists coached by two female coaches, with Buchanan representing Alicia Keys, and Cartelli representing Kelly Clarkson.
|
where does the fluoride in our drinking water come from | Water fluoridation - wikipedia
Water fluoridation is the controlled addition of fluoride to a public water supply to reduce tooth decay. Fluoridated water contains fluoride at a level that is effective for preventing cavities; this can occur naturally or by adding fluoride. Fluoridated water operates on tooth surfaces: in the mouth, it creates low levels of fluoride in saliva, which reduces the rate at which tooth enamel demineralizes and increases the rate at which it remineralizes in the early stages of cavities. Typically a fluoridated compound is added to drinking water, a process that in the U.S. costs an average of about $1.06 per person - year. Defluoridation is needed when the naturally occurring fluoride level exceeds recommended limits. In 2011 the World Health Organization suggested a level of fluoride from 0.5 to 1.5 mg / L (milligrams per litre), depending on climate, local environment, and other sources of fluoride. Bottled water typically has unknown fluoride levels.
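As a simple illustration of the range mentioned above (the 2011 WHO suggestion of 0.5 to 1.5 mg / L, with defluoridation needed when natural levels exceed recommended limits), the sketch below flags a measured fluoride concentration as below, within, or above that range. The function name and the hard - coded thresholds are illustrative assumptions; as the text notes, the appropriate level also depends on climate, local environment, and other fluoride sources.

```python
# Illustrative check of a measured fluoride concentration against the
# WHO-suggested 0.5-1.5 mg/L range cited above. Names and thresholds are
# assumptions for this sketch; real targets vary with climate and other
# sources of fluoride.

WHO_SUGGESTED_MIN_MG_PER_L = 0.5
WHO_SUGGESTED_MAX_MG_PER_L = 1.5


def classify_fluoride_level(concentration_mg_per_l: float) -> str:
    if concentration_mg_per_l < WHO_SUGGESTED_MIN_MG_PER_L:
        return "below the suggested range (fluoridation may be considered)"
    if concentration_mg_per_l > WHO_SUGGESTED_MAX_MG_PER_L:
        return "above the suggested range (defluoridation may be needed)"
    return "within the suggested range"


# Hypothetical example readings, not measured data:
for level in (0.2, 0.9, 2.4):
    print(f"{level} mg/L: {classify_fluoride_level(level)}")
```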
Dental caries remains a major public health concern in most industrialized countries, affecting 60 -- 90 % of schoolchildren and the vast majority of adults. Water fluoridation reduces cavities in children, while efficacy in adults is less clear. A Cochrane review estimates a reduction in cavities when water fluoridation was used by children who had no access to other sources of fluoride to be 35 % in baby teeth and 26 % in permanent teeth. The evidence quality was poor. Most European countries have experienced substantial declines in tooth decay without its use, however milk and salt fluoridation is widespread. Recent studies suggest that water fluoridation, particularly in industrialized nations, may be unnecessary because topical fluorides (such as in toothpaste) are widely used, and caries rates have become low.
Although fluoridation can cause dental fluorosis, which can alter the appearance of developing teeth or enamel fluorosis, the differences are mild and usually not considered to be of aesthetic or public health concern. There is no clear evidence of other adverse effects from water fluoridation. Fluoride 's effects depend on the total daily intake of fluoride from all sources. Drinking water is typically the largest source; other methods of fluoride therapy include fluoridation of toothpaste, salt, and milk. The views on the most efficient method for community prevention of tooth decay are mixed. The Australian government states that water fluoridation is the most effective way to achieve fluoride exposure that is community - wide. The World Health Organization reports that water fluoridation, when feasible and culturally acceptable, has substantial advantages, especially for subgroups at high risk, while the European Commission finds no benefit to water fluoridation compared with topical use.
Public water fluoridation was first practiced in the U.S. As of 2012, 25 countries have artificial water fluoridation to varying degrees, 11 of which have more than 50 % of their population drinking fluoridated water. A further 28 countries have water that is naturally fluoridated, though in many of them the fluoride is above the recommended safe level. As of 2012, about 435 million people worldwide received water fluoridated at the recommended level (i.e., about 5.4 % of the global population), about 214 million of them living in the United States. Major health organizations such as the World Health Organization and FDI World Dental Federation support water fluoridation as safe and effective. The Centers for Disease Control and Prevention lists water fluoridation as one of the ten great public health achievements of the 20th century in the U.S. Despite this, the practice is controversial as a public health measure; some countries and communities have discontinued it, while others have expanded it. Opponents of the practice argue that neither the benefits nor the risks have been studied adequately, and debate the conflict between what might be considered mass medication and individual liberties.
The goal of water fluoridation is to prevent tooth decay by adjusting the concentration of fluoride in public water supplies. Tooth decay (dental caries) is one of the most prevalent chronic diseases worldwide. Although it is rarely life - threatening, tooth decay can cause pain and impair eating, speaking, facial appearance, and acceptance into society, and it greatly affects the quality of life of children, particularly those of low socioeconomic status. In most industrialized countries, tooth decay affects 60 -- 90 % of schoolchildren and the vast majority of adults; although the problem appears to be less in Africa 's developing countries, it is expected to increase in several countries there because of changing diet and inadequate fluoride exposure. In the U.S., minorities and the poor both have higher rates of decayed and missing teeth, and their children have less dental care. Once a cavity occurs, the tooth 's fate is that of repeated restorations, with estimates for the median life of an amalgam tooth filling ranging from 9 to 14 years. Oral disease is the fourth most expensive disease to treat. The motivation for fluoridation of salt or water is similar to that of iodized salt for the prevention of intellectual disability and goiter.
The goal of water fluoridation is to prevent a chronic disease whose burdens particularly fall on children and the poor. Another of the goals was to bridge inequalities in dental health and dental care. Some studies suggest that fluoridation reduces oral health inequalities between the rich and poor, but the evidence is limited. There is anecdotal but not scientific evidence that fluoride allows more time for dental treatment by slowing the progression of tooth decay, and that it simplifies treatment by causing most cavities to occur in pits and fissures of teeth. Other reviews have not found enough evidence to determine whether water fluoridation reduces oral - health social disparities.
Health and dental organizations worldwide have endorsed its safety and effectiveness. Its use began in 1945, following studies of children in a region where higher levels of fluoride occur naturally in the water. Further research showed that moderate fluoridation prevents tooth decay.
Fluoridation does not affect the appearance, taste, or smell of drinking water. It is normally accomplished by adding one of three compounds to the water: sodium fluoride, fluorosilicic acid, or sodium fluorosilicate.
These compounds were chosen for their solubility, safety, availability, and low cost. A 1992 census found that, for U.S. public water supply systems reporting the type of compound used, 63 % of the population received water fluoridated with fluorosilicic acid, 28 % with sodium fluorosilicate, and 9 % with sodium fluoride.
The Centers for Disease Control and Prevention developed recommendations for water fluoridation that specify requirements for personnel, reporting, training, inspection, monitoring, surveillance, and actions in case of overfeed, along with technical requirements for each major compound used.
Although fluoride was once considered an essential nutrient, the U.S. National Research Council has since removed this designation due to the lack of studies showing it is essential for human growth, though still considering fluoride a "beneficial element '' due to its positive impact on oral health. The European Food Safety Authority 's Panel on Dietetic Products, Nutrition and Allergies (NDA) considers fluoride not to be an essential nutrient, yet, due to the beneficial effects of dietary fluoride on prevention of dental caries they have defined an Adequate Intake (AI) value for it. The AI of fluoride from all sources (including non-dietary sources) is 0.05 mg / kg body weight per day for both children and adults, including pregnant and lactating women.
In 2011, the U.S. Department of Health and Human Services (HHS) and the U.S. Environmental Protection Agency (EPA) lowered the recommended level of fluoride to 0.7 mg / L. In 2015, the U.S. Food and Drug Administration (FDA), based on the recommendation of the U.S. Public Health Service (PHS) for fluoridation of community water systems, recommended that bottled water manufacturers limit fluoride in bottled water to no more than 0.7 milligrams per liter (mg / L, equivalent to parts per million).
Previous recommendations were based on evaluations from 1962, when the U.S. specified the optimal level of fluoride to range from 0.7 to 1.2 mg / L (milligrams per liter, equivalent to parts per million), depending on the average maximum daily air temperature; the optimal level is lower in warmer climates, where people drink more water, and is higher in cooler climates.
These standards are not appropriate for all parts of the world, where fluoride levels might be excessive and fluoride should instead be removed from water, and they are based on assumptions that have become obsolete with the rise of air conditioning and increased use of soft drinks, processed food, fluoridated toothpaste, and other sources of fluorides. In 2011 the World Health Organization stated that 1.5 mg / L should be an absolute upper bound and that 0.5 mg / L may be an appropriate lower limit. A 2007 Australian systematic review recommended a range from 0.6 to 1.1 mg / L.
Fluoride naturally occurring in water can be above, at, or below recommended levels. Rivers and lakes generally contain fluoride levels less than 0.5 mg / L, but groundwater, particularly in volcanic or mountainous areas, can contain as much as 50 mg / L. Higher concentrations of fluorine are found in alkaline volcanic, hydrothermal, sedimentary, and other rocks derived from highly evolved magmas and hydrothermal solutions, and this fluorine dissolves into nearby water as fluoride. In most drinking waters, over 95 % of total fluoride is the F⁻ ion, with the magnesium -- fluoride complex (MgF⁺) being the next most common. Because fluoride levels in water are usually controlled by the solubility of fluorite (CaF₂), high natural fluoride levels are associated with calcium - deficient, alkaline, and soft waters. Defluoridation is needed when the naturally occurring fluoride level exceeds recommended limits. It can be accomplished by percolating water through granular beds of activated alumina, bone meal, bone char, or tricalcium phosphate; by coagulation with alum; or by precipitation with lime.
Pitcher or faucet - mounted water filters do not alter fluoride content; the more - expensive reverse osmosis filters remove 65 -- 95 % of fluoride, and distillation removes all fluoride. Some bottled waters contain undeclared fluoride, which can be present naturally in source waters, or if water is sourced from a public supply which has been fluoridated. The FDA states that bottled water products labeled as de-ionized, purified, demineralized, or distilled have been treated in such a way that they contain no or only trace amounts of fluoride, unless they specifically list fluoride as an added ingredient.
Existing evidence suggests that water fluoridation reduces tooth decay. Consistent evidence also suggests that it causes dental fluorosis, most of which is mild and not usually of aesthetic concern. No clear evidence of other adverse effects exists, though almost all research thereof has been of poor quality.
Reviews have shown that water fluoridation reduces cavities in children. Its efficacy in adults is less clear, with some reviews finding benefit and others not. Studies in the U.S. in the 1950s and 1960s showed that water fluoridation reduced childhood cavities by fifty to sixty percent, while studies in 1989 and 1990 showed lower reductions (40 % and 18 % respectively), likely due to increasing use of fluoride from other sources, notably toothpaste, and also the ' halo effect ' of food and drink that is made in fluoridated areas and consumed in unfluoridated ones.
A 2000 UK systematic review (York) found that water fluoridation was associated with a 15 % decrease in the proportion of children with cavities and with a decrease in decayed, missing, and filled primary teeth (an average decrease of 2.25 teeth). The review found that the evidence was of moderate quality: few studies attempted to reduce observer bias, control for confounding factors, report variance measures, or use appropriate analysis. Although no major differences between natural and artificial fluoridation were apparent, the evidence was inadequate for a conclusion about any differences. A 2007 Australian systematic review used the same inclusion criteria as York 's, plus one additional study. This did not affect the York conclusions. A 2011 European Commission systematic review based its efficacy conclusions on York 's review. A 2015 Cochrane systematic review estimated a reduction in cavities when water fluoridation was used by children who had no access to other sources of fluoride to be 35 % in baby teeth and 26 % in permanent teeth. The evidence was of poor quality.
Fluoride may also prevent cavities in adults of all ages. A 2007 meta - analysis by CDC researchers found that water fluoridation prevented an estimated 27 % of cavities in adults, about the same fraction as prevented by exposure to any delivery method of fluoride (29 % average). A 2011 European Commission review found that the benefits of water fluoridation for adults in terms of reductions in decay are limited. A 2015 Cochrane review found no conclusive research regarding the effectiveness of water fluoridation in adults. A 2016 review found variable - quality evidence that, overall, stopping community water fluoridation programs was typically followed by an increase in cavities.
Most countries in Europe have experienced substantial declines in cavities without the use of water fluoridation. For example, in Finland and Germany, tooth decay rates remained stable or continued to decline after water fluoridation stopped. Fluoridation may be useful in the U.S. because unlike most European countries, the U.S. does not have school - based dental care, many children do not visit a dentist regularly, and for many U.S. children water fluoridation is the prime source of exposure to fluoride. The effectiveness of water fluoridation can vary according to circumstances such as whether preventive dental care is free to all children.
Fluoride 's adverse effects depend on total fluoride dosage from all sources. At the commonly recommended dosage, the only clear adverse effect is dental fluorosis, which can alter the appearance of children 's teeth during tooth development; this is mostly mild and is unlikely to represent any real effect on aesthetic appearance or on public health. In April 2015, recommended fluoride levels in the United States were changed to 0.7 ppm from 0.7 -- 1.2 ppm to reduce the risk of dental fluorosis. The 2015 Cochrane review estimated that for a fluoride level of 0.7 ppm the percentage of participants with fluorosis of aesthetic concern was approximately 12 %. This increases to approximately 40 % when fluorosis of any level is considered, including fluorosis not of aesthetic concern. In the US mild or very mild dental fluorosis has been reported in 20 % of the population, moderate fluorosis in 2 % and severe fluorosis in less than 1 %.
The critical period of exposure is between ages one and four years, with the risk ending around age eight. Fluorosis can be prevented by monitoring all sources of fluoride, with fluoridated water directly or indirectly responsible for an estimated 40 % of risk and other sources, notably toothpaste, responsible for the remaining 60 %. Compared to water naturally fluoridated at 0.4 mg / L, fluoridation to 1 mg / L is estimated to cause additional fluorosis in one of every 6 people (95 % CI 4 -- 21 people), and to cause additional fluorosis of aesthetic concern in one of every 22 people (95 % CI 13.6 -- ∞ people). Here, aesthetic concern is a term used in a standardized scale based on what adolescents would find unacceptable, as measured by a 1996 study of British 14 - year - olds. In many industrialized countries the prevalence of fluorosis is increasing even in unfluoridated communities, mostly because of fluoride from swallowed toothpaste. A 2009 systematic review indicated that fluorosis is associated with consumption of infant formula or of water added to reconstitute the formula, that the evidence was distorted by publication bias, and that the evidence that the formula 's fluoride caused the fluorosis was weak. In the U.S. the decline in tooth decay was accompanied by increased fluorosis in both fluoridated and unfluoridated communities; accordingly, fluoride has been reduced in various ways worldwide in infant formulas, children 's toothpaste, water, and fluoride - supplement schedules.
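The ' one of every N people ' figures above can also be read as a number needed to harm (NNH). As an illustrative calculation only, using the point estimates quoted above rather than the cited study itself:

\[
\mathrm{NNH} = \frac{1}{\text{absolute risk increase}}, \qquad
\mathrm{NNH}_{\text{any additional fluorosis}} \approx \frac{1}{1/6} = 6, \qquad
\mathrm{NNH}_{\text{aesthetic concern}} \approx \frac{1}{1/22} = 22 .
\]

Equivalently, the quoted estimates correspond to absolute risk increases of roughly 17 and 4.5 percentage points, respectively.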
Fluoridation has little effect on risk of bone fracture (broken bones); it may result in slightly lower fracture risk than either excessively high levels of fluoridation or no fluoridation. There is no clear association between fluoridation and cancer or deaths due to cancer, both for cancer in general and also specifically for bone cancer and osteosarcoma. Other adverse effects lack sufficient evidence to reach a confident conclusion.
Fluoride can occur naturally in water in concentrations well above recommended levels, which can have several long - term adverse effects, including severe dental fluorosis, skeletal fluorosis, and weakened bones; water utilities in the developed world reduce fluoride levels to regulated maximum levels in regions where natural levels are high, and the WHO and other groups work with countries and regions in the developing world with naturally excessive fluoride levels to achieve safe levels. The World Health Organization recommends a guideline maximum fluoride value of 1.5 mg / L as a level at which fluorosis should be minimal.
In rare cases improper implementation of water fluoridation can result in overfluoridation that causes outbreaks of acute fluoride poisoning, with symptoms that include nausea, vomiting, and diarrhea. Three such outbreaks were reported in the U.S. between 1991 and 1998, caused by fluoride concentrations as high as 220 mg / L; in the 1992 Alaska outbreak, 262 people became ill and one person died. In 2010, approximately 60 gallons of fluoride were released into the water supply in Asheboro, North Carolina in 90 minutes -- an amount that was intended to be released in a 24 - hour period.
Like other common water additives such as chlorine, hydrofluosilicic acid and sodium silicofluoride decrease pH and cause a small increase of corrosivity, but this problem is easily addressed by increasing the pH. Although it has been hypothesized that hydrofluosilicic acid and sodium silicofluoride might increase human lead uptake from water, a 2006 statistical analysis did not support concerns that these chemicals cause higher blood lead concentrations in children. Trace levels of arsenic and lead may be present in fluoride compounds added to water, but no credible evidence exists that their presence is of concern: concentrations are below measurement limits.
The effect of water fluoridation on the natural environment has been investigated, and no adverse effects have been established. Issues studied have included fluoride concentrations in groundwater and downstream rivers; lawns, gardens, and plants; consumption of plants grown in fluoridated water; air emissions; and equipment noise.
Fluoride exerts its major effect by interfering with the demineralization mechanism of tooth decay. Tooth decay is an infectious disease, the key feature of which is an increase within dental plaque of bacteria such as Streptococcus mutans and Lactobacillus. These produce organic acids when carbohydrates, especially sugar, are eaten. When enough acid is produced to lower the pH below 5.5, the acid dissolves carbonated hydroxyapatite, the main component of tooth enamel, in a process known as demineralization. After the sugar is gone, some of the mineral loss can be recovered -- or remineralized -- from ions dissolved in the saliva. Cavities result when the rate of demineralization exceeds the rate of remineralization, typically in a process that requires many months or years.
All fluoridation methods, including water fluoridation, create low levels of fluoride ions in saliva and plaque fluid, thus exerting a topical or surface effect. A person living in an area with fluoridated water may experience rises of fluoride concentration in saliva to about 0.04 mg / L several times during a day. Technically, this fluoride does not prevent cavities but rather controls the rate at which they develop. When fluoride ions are present in plaque fluid along with dissolved hydroxyapatite, and the pH is higher than 4.5, a fluorapatite - like remineralized veneer is formed over the remaining surface of the enamel; this veneer is much more acid - resistant than the original hydroxyapatite, and is formed more quickly than ordinary remineralized enamel would be. The cavity - prevention effect of fluoride is mostly due to these surface effects, which occur during and after tooth eruption. Although some systemic (whole - body) fluoride returns to the saliva via blood plasma, and to unerupted teeth via plasma or crypt fluid, there is little data to determine what percentages of fluoride 's anticavity effect comes from these systemic mechanisms. Also, although fluoride affects the physiology of dental bacteria, its effect on bacterial growth does not seem to be relevant to cavity prevention.
Fluoride 's effects depend on the total daily intake of fluoride from all sources. About 70 -- 90 % of ingested fluoride is absorbed into the blood, where it distributes throughout the body. In infants 80 -- 90 % of absorbed fluoride is retained, with the rest excreted, mostly via urine; in adults about 60 % is retained. About 99 % of retained fluoride is stored in bone, teeth, and other calcium - rich areas, where excess quantities can cause fluorosis. Drinking water is typically the largest source of fluoride. In many industrialized countries swallowed toothpaste is the main source of fluoride exposure in unfluoridated communities. Other sources include dental products other than toothpaste; air pollution from fluoride - containing coal or from phosphate fertilizers; trona, used to tenderize meat in Tanzania; and tea leaves, particularly the tea bricks favored in parts of China. High fluoride levels have been found in other foods, including barley, cassava, corn, rice, taro, yams, and fish protein concentrate. The U.S. Institute of Medicine has established Dietary Reference Intakes for fluoride: Adequate Intake values range from 0.01 mg / day for infants aged 6 months or less, to 4 mg / day for men aged 19 years and up; and the Tolerable Upper Intake Level is 0.10 mg / kg / day for infants and children through age 8 years, and 10 mg / day thereafter. A rough estimate is that an adult in a temperate climate consumes 0.6 mg / day of fluoride without fluoridation, and 2 mg / day with fluoridation. However, these values differ greatly among the world 's regions: for example, in Sichuan, China the average daily fluoride intake is only 0.1 mg / day in drinking water but 8.9 mg / day in food and 0.7 mg / day directly from the air due to the use of high - fluoride soft coal for cooking and drying foodstuffs indoors.
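As a rough, illustrative combination of the figures above (the 2 L / day water consumption is an assumed round number, not a value from the sources cited here):

\[
0.7\ \tfrac{\text{mg}}{\text{L}} \times 2\ \tfrac{\text{L}}{\text{day}} \approx 1.4\ \tfrac{\text{mg}}{\text{day}} \ \text{from fluoridated water}, \qquad
1.4 + 0.6 \approx 2\ \tfrac{\text{mg}}{\text{day}} \ \text{total intake},
\]
\[
\text{of which roughly } 0.6 \times 2 \approx 1.2\ \tfrac{\text{mg}}{\text{day}} \ \text{would be retained at the quoted adult retention of about 60\%.}
\]

This is consistent with the rough estimate above of 0.6 mg / day without fluoridation and 2 mg / day with it.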
The views on the most effective method for community prevention of tooth decay are mixed. The Australian government review states that water fluoridation is the most effective means of achieving fluoride exposure that is community - wide. The European Commission review states "No obvious advantage appears in favour of water fluoridation compared with topical prevention ''. Other fluoride therapies are also effective in preventing tooth decay; they include fluoride toothpaste, mouthwash, gel, and varnish, and fluoridation of salt and milk. Dental sealants are effective as well, with estimates of prevented cavities ranging from 33 % to 86 %, depending on age of sealant and type of study.
Fluoride toothpaste is the most widely used and rigorously evaluated fluoride treatment. Its introduction in the early 1970s is considered the main reason for the decline in tooth decay in industrialized countries, and toothpaste appears to be the single common factor in countries where tooth decay has declined. Toothpaste is the only realistic fluoride strategy in many low - income countries, where lack of infrastructure renders water or salt fluoridation infeasible. It relies on individual and family behavior, and its use is less likely among lower economic classes; in low - income countries it is unaffordable for the poor. Fluoride toothpaste prevents about 25 % of cavities in young permanent teeth, and its effectiveness is improved if higher concentrations of fluoride are used, or if the toothbrushing is supervised. Fluoride mouthwash and gel are about as effective as fluoride toothpaste; fluoride varnish prevents about 45 % of cavities. By comparison, brushing with a nonfluoride toothpaste has little effect on cavities.
The effectiveness of salt fluoridation is about the same as that of water fluoridation, if most salt for human consumption is fluoridated. Fluoridated salt reaches the consumer in salt at home, in meals at school and at large kitchens, and in bread. For example, Jamaica has just one salt producer, but a complex public water supply; it started fluoridating all salt in 1987, achieving a decline in cavities. Universal salt fluoridation is also practiced in Colombia and the Swiss Canton of Vaud; in Germany fluoridated salt is widely used in households but unfluoridated salt is also available, giving consumers a choice. Concentrations of fluoride in salt range from 90 to 350 mg / kg, with studies suggesting an optimal concentration of around 250 mg / kg.
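For a rough sense of why the two vehicles can deliver similar exposure (the 5 g / day salt consumption and 2 L / day water consumption are assumed figures for illustration, not values from the sources above):

\[
250\ \tfrac{\text{mg F}}{\text{kg salt}} \times 0.005\ \tfrac{\text{kg}}{\text{day}} \approx 1.25\ \tfrac{\text{mg F}}{\text{day}},
\qquad
0.7\ \tfrac{\text{mg F}}{\text{L}} \times 2\ \tfrac{\text{L}}{\text{day}} = 1.4\ \tfrac{\text{mg F}}{\text{day}} .
\]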
Milk fluoridation is practiced by the Borrow Foundation in some parts of Bulgaria, Chile, Peru, Russia, Macedonia, Thailand and the UK. Depending on location, the fluoride is added to milk, to powdered milk, or to yogurt. For example, milk powder fluoridation is used in rural Chilean areas where water fluoridation is not technically feasible. These programs are aimed at children, and have neither targeted nor been evaluated for adults. A systematic review found low - quality evidence to support the practice, but also concluded that further studies were needed.
Other public - health strategies to control tooth decay, such as education to change behavior and diet, have lacked impressive results. Although fluoride is the only well - documented agent which controls the rate at which cavities develop, it has been suggested that adding calcium to the water would reduce cavities further. Other agents to prevent tooth decay include antibacterials such as chlorhexidine and sugar substitutes such as xylitol. Xylitol - sweetened chewing gum has been recommended as a supplement to fluoride and other conventional treatments if the gum is not too costly. Two proposed approaches, bacteria replacement therapy (probiotics) and caries vaccine, would share water fluoridation 's advantage of requiring only minimal patient compliance, but have not been proven safe and effective. Other experimental approaches include fluoridated sugar, polyphenols, and casein phosphopeptide -- amorphous calcium phosphate nanocomplexes.
A 2007 Australian review concluded that water fluoridation is the most effective and socially the most equitable way to expose entire communities to fluoride 's cavity - prevention effects. A 2002 U.S. review estimated that sealants decreased cavities by about 60 % overall, compared to about 18 -- 50 % for fluoride. A 2007 Italian review suggested that water fluoridation may not be needed, particularly in the industrialized countries where cavities have become rare, and concluded that toothpaste and other topical fluoride are the best way to prevent cavities worldwide. A 2004 World Health Organization review stated that water fluoridation, when it is culturally acceptable and technically feasible, has substantial advantages in preventing tooth decay, especially for subgroups at high risk.
As of November 2012, a total of about 378 million people worldwide received artificially fluoridated water. The majority of those were in the United States. About 40 million worldwide received water that was naturally fluoridated to recommended levels.
Much of the early work on establishing the connection between fluoride and dental health was performed by scientists in the U.S. during the early 20th century, and the U.S. was the first country to implement public water fluoridation on a wide scale. It has been introduced to varying degrees in many countries and territories outside the U.S., including Argentina, Australia, Brazil, Canada, Chile, Colombia, Hong Kong, Ireland, Israel, Korea, Malaysia, New Zealand, the Philippines, Serbia, Singapore, Spain, the UK, and Vietnam. In 2004, an estimated 13.7 million people in western Europe and 194 million in the U.S. received artificially fluoridated water. In 2010, about 66 % of the U.S. population was receiving fluoridated water.
Naturally fluoridated water is used by approximately 4 % of the world 's population, in countries including Argentina, France, Gabon, Libya, Mexico, Senegal, Sri Lanka, Tanzania, the U.S., and Zimbabwe. In some locations, notably parts of Africa, China, and India, natural fluoridation exceeds recommended levels.
Communities have discontinued water fluoridation in some countries, including Finland, Germany, Japan, the Netherlands, and Switzerland. On August 26, 2014, Israel stopped mandating fluoridation, stating "Only some 1 % of the water is used for drinking, while 99 % of the water is intended for other uses (industry, agriculture, flushing toilets etc.). There is also scientific evidence that fluoride in large amounts can lead to damage to health. When fluoride is supplied via drinking water, there is no control regarding the amount of fluoride actually consumed, which could lead to excessive consumption. Supply of fluoridated water forces those who do not so wish to also consume water with added fluoride. This approach is therefore not accepted in most countries in the world. '' This change was often motivated by political opposition to water fluoridation, but sometimes the need for water fluoridation was met by alternative strategies. The use of fluoride in its various forms is the foundation of tooth decay prevention throughout Europe; several countries have introduced fluoridated salt, with varying success: in Switzerland and Germany, fluoridated salt represents 65 % to 70 % of the domestic market, while in France the market share reached 60 % in 1993 but dwindled to 14 % in 2009; Spain, in 1986 the second West European country to introduce fluoridation of table salt, reported a market share in 2006 of only 10 %. In three other West European countries, Greece, Austria and the Netherlands, the legal framework for production and marketing of fluoridated edible salt exists. At least six Central European countries (Hungary, the Czech and Slovak Republics, Croatia, Slovenia, Romania) have shown some interest in salt fluoridation; however, significant usage of approximately 35 % was only achieved in the Czech Republic. The Slovak Republic had the equipment to treat salt by 2005; in the other four countries attempts to introduce fluoridated salt were not successful.
The history of water fluoridation can be divided into three periods. The first (c. 1801 -- 1933) was research into the cause of a form of mottled tooth enamel called the Colorado brown stain. The second (c. 1933 -- 1945) focused on the relationship between fluoride concentrations, fluorosis, and tooth decay, and established that moderate levels of fluoride prevent cavities. The third period, from 1945 on, focused on adding fluoride to community water supplies.
In the first half of the 19th century, investigators established that fluoride occurs with varying concentrations in teeth, bone, and drinking water. In the second half they speculated that fluoride would protect against tooth decay, proposed supplementing the diet with fluoride, and observed mottled enamel (now called severe dental fluorosis) without knowing the cause. In 1874, the German public health officer Carl Wilhelm Eugen Erhardt recommended potassium fluoride supplements to preserve teeth. In 1892 the British physician James Crichton - Browne noted in an address that fluoride 's absence from diets had resulted in teeth that were "peculiarly liable to decay '', and proposed "the reintroduction into our diet... of fluorine in some suitable natural form... to fortify the teeth of the next generation ''.
The foundation of water fluoridation in the U.S. was the research of the dentist Frederick McKay (1874 -- 1959). McKay spent thirty years investigating the cause of what was then known as the Colorado brown stain, which produced mottled but also cavity - free teeth; with the help of G.V. Black and other researchers, he established that the cause was fluoride. The first report of a statistical association between the stain and lack of tooth decay was made by UK dentist Norman Ainsworth in 1925. In 1931, an Alcoa chemist, H.V. Churchill, concerned about a possible link between aluminum and staining, analyzed water from several areas where the staining was common and found that fluoride was the common factor.
In the 1930s and early 1940s, H. Trendley Dean and colleagues at the newly created U.S. National Institutes of Health published several epidemiological studies suggesting that a fluoride concentration of about 1 mg / L was associated with substantially fewer cavities in temperate climates, and that it increased fluorosis but only to a level that was of no medical or aesthetic concern. Other studies found no other significant adverse effects even in areas with fluoride levels as high as 8 mg / L. To test the hypothesis that adding fluoride would prevent cavities, Dean and his colleagues conducted a controlled experiment by fluoridating the water in Grand Rapids, Michigan, starting January 25, 1945. The results, published in 1950, showed significant reduction of cavities. Significant reductions in tooth decay were also reported by important early studies outside the U.S., including the Brantford -- Sarnia -- Stratford study in Canada (1945 -- 1962), the Tiel -- Culemborg study in the Netherlands (1953 -- 1969), the Hastings study in New Zealand (1954 -- 1970), and the Department of Health study in the U.K. (1955 -- 1960). By present - day standards these and other pioneering studies were crude, but the large reductions in cavities convinced public health professionals of the benefits of fluoridation.
Fluoridation became an official policy of the U.S. Public Health Service by 1951, and by 1960 water fluoridation had become widely used in the U.S., reaching about 50 million people. By 2006, 69.2 % of the U.S. population on public water systems were receiving fluoridated water, amounting to 61.5 % of the total U.S. population; 3.0 % of the population on public water systems were receiving naturally occurring fluoride. In some other countries the pattern was similar. New Zealand, which led the world in per - capita sugar consumption and had the world 's worst teeth, began fluoridation in 1953, and by 1968 fluoridation was used by 65 % of the population served by a piped water supply. Fluoridation was introduced into Brazil in 1953, was regulated by federal law starting in 1974, and by 2004 was used by 71 % of the population. In the Republic of Ireland, fluoridation was legislated in 1960, and after a constitutional challenge the two major cities of Dublin and Cork began it in 1964; fluoridation became required for all sizeable public water systems and by 1996 reached 66 % of the population. In other locations, fluoridation was used and then discontinued: in Kuopio, Finland, fluoridation was used for decades but was discontinued because the school dental service provided significant fluoride programs and the cavity risk was low, and in Basel, Switzerland, it was replaced with fluoridated salt.
McKay 's work had established that fluorosis occurred before tooth eruption. Dean and his colleagues assumed that fluoride 's protection against cavities was also pre-eruptive, and this incorrect assumption was accepted for years. By 2000, however, the topical effects of fluoride (in both water and toothpaste) were well understood, and it had become known that a constant low level of fluoride in the mouth works best to prevent cavities.
Fluoridation costs an estimated $1.06 per person - year on the average (range: $0.25 -- $11.19; all costs in this paragraph are for the U.S. and are in 2017 dollars, inflation - adjusted from earlier estimates). Larger water systems have lower per capita cost, and the cost is also affected by the number of fluoride injection points in the water system, the type of feeder and monitoring equipment, the fluoride chemical and its transportation and storage, and water plant personnel expertise. In affluent countries the cost of salt fluoridation is also negligible; developing countries may find it prohibitively expensive to import the fluoride additive. By comparison, fluoride toothpaste costs an estimated $9 -- $18 per person - year, with the incremental cost being zero for people who already brush their teeth for other reasons; and dental cleaning and application of fluoride varnish or gel costs an estimated $97 per person - year. Assuming the worst case, with the lowest estimated effectiveness and highest estimated operating costs for small cities, fluoridation costs an estimated $16 -- $25 per saved tooth - decay surface, which is lower than the estimated $95 to restore the surface and the estimated $162 average discounted lifetime cost of the decayed surface, which includes the cost to maintain the restored tooth surface. It is not known how much is spent in industrial countries to treat dental fluorosis, which is mostly due to fluoride from swallowed toothpaste.
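Taking the midpoint of the quoted range, the implied saving per averted decayed surface is, as a purely illustrative calculation:

\[
\$95 - \frac{\$16 + \$25}{2} \approx \$95 - \$20 \approx \$75 \ \text{per surface},
\]

and more if the estimated $162 discounted lifetime cost of a decayed surface is used instead of the one - time restoration cost.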
Although a 1989 workshop on cost - effectiveness of cavity prevention concluded that water fluoridation is one of the few public health measures that save more money than they cost, little high - quality research has been done on the cost - effectiveness and solid data are scarce. Dental sealants are cost - effective only when applied to high - risk children and teeth. A 2002 U.S. review estimated that on average, sealing first permanent molars saves costs when they are decaying faster than 0.47 surfaces per person - year whereas water fluoridation saves costs when total decay incidence exceeds 0.06 surfaces per person - year. In the U.S., water fluoridation is more cost - effective than other methods to reduce tooth decay in children, and a 2008 review concluded that water fluoridation is the best tool for combating cavities in many countries, particularly among socially disadvantaged groups. A 2016 review of studies published between 1995 and 2013 found that water fluoridation in the U.S. was cost - effective, and that it was more so in larger communities.
U.S. data from 1974 to 1992 indicate that when water fluoridation is introduced into a community, there are significant decreases in the number of employees per dental firm and the number of dental firms. The data suggest that some dentists respond to the demand shock by moving to non-fluoridated areas and by retraining as specialists.
The water fluoridation controversy arises from political, moral, ethical, economic, and safety concerns regarding the fluoridation of public water supplies. Public health authorities throughout the world state that water fluoridation at appropriate levels is a safe and effective means to prevent dental caries. Authorities ' views on the most effective fluoride therapy for community prevention of tooth decay are mixed; some state water fluoridation is most effective, while others see no special advantage and prefer topical application strategies.
Those opposed argue that water fluoridation has little or no cariostatic benefit, may cause serious health problems, is not effective enough to justify the costs, and is pharmacologically obsolete, while some argue that the practice presents a conflict between the common good and individual rights.
|
the first law of thermodynamics is equivalent to which law or principle | First law of thermodynamics - wikipedia
The first law of thermodynamics is a version of the law of conservation of energy, adapted for thermodynamic systems. The law of conservation of energy states that the total energy of an isolated system is constant; energy can be transformed from one form to another, but can be neither created nor destroyed. The first law is often formulated as
ΔU = Q − W.
It states that the change in the internal energy ΔU of a closed system is equal to the amount of heat Q supplied to the system, minus the amount of work W done by the system on its surroundings. An equivalent statement is that perpetual motion machines of the first kind are impossible.
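As a simple illustrative calculation (the numbers are chosen only for the example): if 100 J of energy is supplied to a closed system as heat and the system does 40 J of work on its surroundings, then

\[
\Delta U = Q - W = 100\ \text{J} - 40\ \text{J} = 60\ \text{J},
\]

so the internal energy of the system increases by 60 J.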
Investigations into the nature of heat and work and their relationship began with the invention of the first engines used to extract water from mines. Improvements to such engines so as to increase their efficiency and power output came first from the mechanics who worked with such machines, but they only slowly advanced the art. Deeper investigations that placed the subject on a mathematical and physical basis came later.
The first law of thermodynamics was developed empirically over about half a century. The first full statements of the law came in 1850 from Rudolf Clausius and from William Rankine; Rankine 's statement is considered less distinct relative to Clausius '. A main aspect of the struggle was to deal with the previously proposed caloric theory of heat.
In 1840, Germain Hess stated a conservation law for the so - called ' heat of reaction ' for chemical reactions. His law was later recognized as a consequence of the first law of thermodynamics, but Hess 's statement was not explicitly concerned with the relation between energy exchanges by heat and work.
According to Truesdell (1980), Julius Robert von Mayer in 1841 made a statement that meant that "in a process at constant pressure, the heat used to produce expansion is universally interconvertible with work '', but this is not a general statement of the first law.
The original nineteenth century statements of the first law of thermodynamics appeared in a conceptual framework in which transfer of energy as heat was taken as a primitive notion, not defined or constructed by the theoretical development of the framework, but rather presupposed as prior to it and already accepted. The primitive notion of heat was taken as empirically established, especially through calorimetry regarded as a subject in its own right, prior to thermodynamics. Jointly primitive with this notion of heat were the notions of empirical temperature and thermal equilibrium. This framework also took as primitive the notion of transfer of energy as work. This framework did not presume a concept of energy in general, but regarded it as derived or synthesized from the prior notions of heat and work. By one author, this framework has been called the "thermodynamic '' approach.
The first explicit statement of the first law of thermodynamics, by Rudolf Clausius in 1850, referred to cyclic thermodynamic processes.
Clausius also stated the law in another form, referring to the existence of a function of state of the system, the internal energy, and expressed it in terms of a differential equation for the increments of a thermodynamic process. This equation may be described as follows: in a thermodynamic process involving a closed system, the increment in the internal energy is equal to the difference between the heat accumulated by the system and the work done by it, i.e. dU = δQ − δW.
Because of its definition in terms of increments, the value of the internal energy of a system is not uniquely defined. It is defined only up to an arbitrary additive constant of integration, which can be adjusted to give arbitrary reference zero levels. This non-uniqueness is in keeping with the abstract mathematical nature of the internal energy. The internal energy is customarily stated relative to a conventionally chosen standard reference state of the system.
The concept of internal energy is considered by Bailyn to be of "enormous interest ''. Its quantity can not be immediately measured, but can only be inferred, by differencing actual immediate measurements. Bailyn likens it to the energy states of an atom, which were revealed by Bohr 's energy relation hν = E₂ − E₁. In each case, an unmeasurable quantity (the internal energy, the atomic energy level) is revealed by considering the difference of measured quantities (increments of internal energy, quantities of emitted or absorbed radiative energy).
In 1907, George H. Bryan wrote about systems between which there is no transfer of matter (closed systems): "Definition. When energy flows from one system or part of a system to another otherwise than by the performance of mechanical work, the energy so transferred is called heat. '' This definition may be regarded as expressing a conceptual revision, as follows. This was systematically expounded in 1909 by Constantin Carathéodory, whose attention had been drawn to it by Max Born. Largely through Born 's influence, this revised conceptual approach to the definition of heat came to be preferred by many twentieth - century writers. It might be called the "mechanical approach ''.
Energy can also be transferred from one thermodynamic system to another in association with transfer of matter. Born points out that in general such energy transfer is not resolvable uniquely into work and heat moieties. In general, when there is transfer of energy associated with matter transfer, work and heat transfers can be distinguished only when they pass through walls physically separate from those for matter transfer.
The "mechanical '' approach postulates the law of conservation of energy. It also postulates that energy can be transferred from one thermodynamic system to another adiabatically as work, and that energy can be held as the internal energy of a thermodynamic system. It also postulates that energy can be transferred from one thermodynamic system to another by a path that is non-adiabatic, and is unaccompanied by matter transfer. Initially, it "cleverly '' (according to Bailyn) refrains from labelling as ' heat ' such non-adiabatic, unaccompanied transfer of energy. It rests on the primitive notion of walls, especially adiabatic walls and non-adiabatic walls, defined as follows. Temporarily, only for purpose of this definition, one can prohibit transfer of energy as work across a wall of interest. Then walls of interest fall into two classes, (a) those such that arbitrary systems separated by them remain independently in their own previously established respective states of internal thermodynamic equilibrium; they are defined as adiabatic; and (b) those without such independence; they are defined as non-adiabatic.
This approach derives the notions of transfer of energy as heat, and of temperature, as theoretical developments, not taking them as primitives. It regards calorimetry as a derived theory. It has an early origin in the nineteenth century, for example in the work of Helmholtz, but also in the work of many others.
The revised statement of the first law postulates that a change in the internal energy of a system due to any arbitrary process, that takes the system from a given initial thermodynamic state to a given final equilibrium thermodynamic state, can be determined through the physical existence, for those given states, of a reference process that occurs purely through stages of adiabatic work.
The revised statement is then that, for a closed system, in any process of interest that takes it from an initial to a final state of internal thermodynamic equilibrium, the change of internal energy is the same as that for the reference adiabatic work process linking the same two states, regardless of whether the process of interest itself is adiabatic.
This statement is much less close to the empirical basis than are the original statements, but is often regarded as conceptually parsimonious in that it rests only on the concepts of adiabatic work and of non-adiabatic processes, not on the concepts of transfer of energy as heat and of empirical temperature that are presupposed by the original statements. Largely through the influence of Max Born, it is often regarded as theoretically preferable because of this conceptual parsimony. Born particularly observes that the revised approach avoids thinking in terms of what he calls the "imported engineering '' concept of heat engines.
Basing his thinking on the mechanical approach, Born in 1921, and again in 1949, proposed to revise the definition of heat. In particular, he referred to the work of Constantin Carathéodory, who had in 1909 stated the first law without defining quantity of heat. Born 's definition was specifically for transfers of energy without transfer of matter, and it has been widely followed in textbooks (examples:). Born observes that a transfer of matter between two systems is accompanied by a transfer of internal energy that can not be resolved into heat and work components. There can be pathways to other systems, spatially separate from that of the matter transfer, that allow heat and work transfer independent of and simultaneous with the matter transfer. Energy is conserved in such transfers.
The first law of thermodynamics for a closed system was expressed in two ways by Clausius. One way referred to cyclic processes and the inputs and outputs of the system, but did not refer to increments in the internal state of the system. The other way referred to an incremental change in the internal state of the system, and did not expect the process to be cyclic.
A cyclic process is one that can be repeated indefinitely often, returning the system to its initial state. Of particular interest for a single cycle of a cyclic process are the net work done, and the net heat taken in (or ' consumed ', in Clausius ' statement), by the system.
In a cyclic process in which the system does net work on its surroundings, it is observed to be physically necessary not only that heat be taken into the system, but also, importantly, that some heat leave the system. The difference is the heat converted by the cycle into work. In each repetition of a cyclic process, the net work done by the system, measured in mechanical units, is proportional to the heat consumed, measured in calorimetric units.
The constant of proportionality is universal and independent of the system and in 1845 and 1847 was measured by James Joule, who described it as the mechanical equivalent of heat.
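In modern units this constant of proportionality, the mechanical equivalent of heat, is approximately 4.19 joules per calorie (a present - day value, not one quoted in the text above), so that for a single cycle

\[
W\,[\text{J}] \approx 4.19 \times Q_{\text{consumed}}\,[\text{cal}] .
\]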
In a non-cyclic process, the change in the internal energy of a system is equal to net energy added as heat to the system minus the net work done by the system, both being measured in mechanical units. Taking ΔU as a change in internal energy, one writes
ΔU = Q − W,
where Q denotes the net quantity of heat supplied to the system by its surroundings and W denotes the net work done by the system. This sign convention is implicit in Clausius ' statement of the law given above. It originated with the study of heat engines that produce useful work by consumption of heat.
Often nowadays, however, writers use the IUPAC convention by which the first law is formulated with work done on the system by its surroundings having a positive sign. With this now often used sign convention for work, the first law for a closed system may be written:
ΔU = Q + W.
This convention follows physicists such as Max Planck, and considers all net energy transfers to the system as positive and all net energy transfers from the system as negative, irrespective of any use for the system as an engine or other device.
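The two conventions describe the same physics. For the illustrative process used above (100 J of heat supplied, 40 J of work done by the system on its surroundings):

\[
\text{Clausius convention: } \Delta U = Q - W = 100 - 40 = 60\ \text{J};
\qquad
\text{IUPAC convention: } \Delta U = Q + W = 100 + (-40) = 60\ \text{J},
\]

since the work done on the system is −40 J.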
When a system expands in a fictive quasistatic process, the work done by the system on the environment is the product, P dV, of pressure, P, and volume change, dV, whereas the work done on the system is - P dV. Using either sign convention for work, the change in internal energy of the system is:
dU = δQ − P dV,
where δQ denotes the infinitesimal amount of heat supplied to the system from its surroundings.
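As a small worked example of quasistatic expansion work (the values are assumed for illustration): a gas that expands quasistatically by 10⁻³ m³ at a constant pressure of 10⁵ Pa does

\[
W = \int P\,dV = P\,\Delta V = (10^{5}\ \text{Pa})(10^{-3}\ \text{m}^{3}) = 100\ \text{J}
\]

of work on its surroundings, so for that step ΔU = Q − 100 J, whichever sign convention is used in writing the law.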
Work and heat are expressions of actual physical processes of supply or removal of energy, while the internal energy U is a mathematical abstraction that keeps account of the exchanges of energy that befall the system. Thus the term heat for Q means "that amount of energy added or removed by conduction of heat or by thermal radiation '', rather than referring to a form of energy within the system. Likewise, the term work energy for W means "that amount of energy gained or lost as the result of work ''. Internal energy is a property of the system whereas work done and heat supplied are not. A significant result of this distinction is that a given internal energy change ΔU can be achieved by, in principle, many combinations of heat and work.
The law is of great importance and generality and is consequently thought of from several points of view. Most careful textbook statements of the law express it for closed systems. It is stated in several ways, sometimes even by the same author.
For the thermodynamics of closed systems, the distinction between transfers of energy as work and as heat is central and is within the scope of the present article. For the thermodynamics of open systems, such a distinction is beyond the scope of the present article, but some limited comments are made on it in the section below headed ' First law of thermodynamics for open systems '.
There are two main ways of stating a law of thermodynamics, physically or mathematically. They should be logically coherent and consistent with one another.
An example of a physical statement is that of Planck (1897 / 1903), to the effect that it is not possible, by any combination of devices, to obtain perpetual motion, i.e. to construct an engine that works in a cycle and produces continuous work, or kinetic energy, from nothing.
This physical statement is restricted neither to closed systems nor to systems with states that are strictly defined only for thermodynamic equilibrium; it has meaning also for open systems and for systems with states that are not in thermodynamic equilibrium.
An example of a mathematical statement is that of Crawford (1963):
This statement by Crawford, for W, uses the sign convention of IUPAC, not that of Clausius. Though it does not explicitly say so, this statement refers to closed systems, and to internal energy U defined for bodies in states of thermodynamic equilibrium, which possess well - defined temperatures.
The history of statements of the law for closed systems has two main periods, before and after the work of Bryan (1907), of Carathéodory (1909), and the approval of Carathéodory 's work given by Born (1921). The earlier traditional versions of the law for closed systems are nowadays often considered to be out of date.
Carathéodory 's celebrated presentation of equilibrium thermodynamics refers to closed systems, which are allowed to contain several phases connected by internal walls of various kinds of impermeability and permeability (explicitly including walls that are permeable only to heat). Carathéodory 's 1909 version of the first law of thermodynamics was stated in an axiom which refrained from defining or mentioning temperature or quantity of heat transferred. That axiom stated that the internal energy of a phase in equilibrium is a function of state, that the sum of the internal energies of the phases is the total internal energy of the system, and that the value of the total internal energy of the system is changed by the amount of work done adiabatically on it, considering work as a form of energy. That article considered this statement to be an expression of the law of conservation of energy for such systems. This version is nowadays widely accepted as authoritative, but is stated in slightly varied ways by different authors.
Such statements of the first law for closed systems assert the existence of internal energy as a function of state defined in terms of adiabatic work. Thus heat is not defined calorimetrically or as due to temperature difference. It is defined as a residual difference between change of internal energy and work done on the system, when that work does not account for the whole of the change of internal energy and the system is not adiabatically isolated.
The 1909 Carathéodory statement of the law in axiomatic form does not mention heat or temperature, but the equilibrium states to which it refers are explicitly defined by variable sets that necessarily include "non-deformation variables '', such as pressures, which, within reasonable restrictions, can be rightly interpreted as empirical temperatures, and the walls connecting the phases of the system are explicitly defined as possibly impermeable to heat or permeable only to heat.
According to Münster (1970), "A somewhat unsatisfactory aspect of Carathéodory 's theory is that a consequence of the Second Law must be considered at this point (in the statement of the first law), i.e. that it is not always possible to reach any state 2 from any other state 1 by means of an adiabatic process. '' Münster instances that no adiabatic process can reduce the internal energy of a system at constant volume. Carathéodory 's paper asserts that its statement of the first law corresponds exactly to Joule 's experimental arrangement, regarded as an instance of adiabatic work. It does not point out that Joule 's experimental arrangement performed essentially irreversible work, through friction of paddles in a liquid, or passage of electric current through a resistance inside the system, driven by motion of a coil and inductive heating, or by an external current source, which can access the system only by the passage of electrons, and so is not strictly adiabatic, because electrons are a form of matter, which can not penetrate adiabatic walls. The paper goes on to base its main argument on the possibility of quasi-static adiabatic work, which is essentially reversible. The paper asserts that it will avoid reference to Carnot cycles, and then proceeds to base its argument on cycles of forward and backward quasi-static adiabatic stages, with isothermal stages of zero magnitude.
Sometimes the concept of internal energy is not made explicit in the statement.
Sometimes the existence of the internal energy is made explicit but work is not explicitly mentioned in the statement of the first postulate of thermodynamics. Heat supplied is then defined as the residual change in internal energy after work has been taken into account, in a non-adiabatic process.
A respected modern author states the first law of thermodynamics as "Heat is a form of energy '', which explicitly mentions neither internal energy nor adiabatic work. Heat is defined as energy transferred by thermal contact with a reservoir, which has a temperature, and is generally so large that addition and removal of heat do not alter its temperature. A current student text on chemistry defines heat thus: "heat is the exchange of thermal energy between a system and its surroundings caused by a temperature difference. '' The author then explains how heat is defined or measured by calorimetry, in terms of heat capacity, specific heat capacity, molar heat capacity, and temperature.
A respected text disregards Carathéodory 's exclusion of mention of heat from the statement of the first law for closed systems, and admits heat calorimetrically defined along with work and internal energy. Another respected text defines heat exchange as determined by temperature difference, but also mentions that the Born (1921) version is "completely rigorous ''. These versions follow the traditional approach that is now considered out of date, exemplified by that of Planck (1897 / 1903).
The first law of thermodynamics for closed systems was originally induced from empirically observed evidence, including calorimetric evidence. It is nowadays, however, taken to provide the definition of heat via the law of conservation of energy and the definition of work in terms of changes in the external parameters of a system. The original discovery of the law was gradual over a period of perhaps half a century or more, and some early studies were in terms of cyclic processes.
The following is an account in terms of changes of state of a closed system through compound processes that are not necessarily cyclic. This account first considers processes for which the first law is easily verified because of their simplicity, namely adiabatic processes (in which there is no transfer as heat) and adynamic processes (in which there is no transfer as work).
In an adiabatic process, there is transfer of energy as work but not as heat. For any adiabatic process that takes a system from a given initial state to a given final state, irrespective of how the work is done, the respective eventual total quantities of energy transferred as work are one and the same, determined just by the given initial and final states. The work done on the system is defined and measured by changes in mechanical or quasi-mechanical variables external to the system. Physically, adiabatic transfer of energy as work requires the existence of adiabatic enclosures.
For instance, in Joule 's experiment, the initial system is a tank of water with a paddle wheel inside. If we isolate the tank thermally, and move the paddle wheel with a pulley and a weight, we can relate the increase in temperature with the distance descended by the mass. Next, the system is returned to its initial state, isolated again, and the same amount of work is done on the tank using different devices (an electric motor, a chemical battery, a spring,...). In every case, the amount of work can be measured independently. The return to the initial state is not conducted by doing adiabatic work on the system. The evidence shows that the final state of the water (in particular, its temperature and volume) is the same in every case. It is irrelevant if the work is electrical, mechanical, chemical,... or if done suddenly or slowly, as long as it is performed in an adiabatic way, that is to say, without heat transfer into or out of the system.
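As a rough sketch of the bookkeeping involved (the numerical values below are illustrative assumptions, not Joule 's actual data), the adiabatic work done by the descending weight can be set against the rise in temperature of the water:

    # Illustrative bookkeeping for a Joule-type paddle-wheel experiment.
    # All numerical values are hypothetical; only the relations W = m*g*h and
    # delta_T = W / (m_water * c_water) are the point being illustrated.
    g = 9.81            # gravitational acceleration, m/s^2
    m_weight = 10.0     # mass of the descending weight, kg
    h = 20.0            # distance descended by the weight, m
    m_water = 1.0       # mass of water in the tank, kg
    c_water = 4184.0    # specific heat capacity of water, J/(kg*K)

    work_done = m_weight * g * h                   # adiabatic work done on the water, J
    delta_T = work_done / (m_water * c_water)      # resulting temperature rise, K
    print(f"Work done on the water: {work_done:.0f} J")
    print(f"Temperature rise: {delta_T:.3f} K")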
Evidence of this kind shows that to increase the temperature of the water in the tank, the qualitative kind of adiabatically performed work does not matter. No qualitative kind of adiabatic work has ever been observed to decrease the temperature of the water in the tank.
A change from one state to another, for example an increase of both temperature and volume, may be conducted in several stages, for example by externally supplied electrical work on a resistor in the body, and adiabatic expansion allowing the body to do work on the surroundings. It needs to be shown that the time order of the stages, and their relative magnitudes, does not affect the amount of adiabatic work that needs to be done for the change of state. According to one respected scholar: "Unfortunately, it does not seem that experiments of this kind have ever been carried out carefully... We must therefore admit that the statement which we have enunciated here, and which is equivalent to the first law of thermodynamics, is not well founded on direct experimental evidence. '' Another expression of this view is "... no systematic precise experiments to verify this generalization directly have ever been attempted. ''
This kind of evidence, of independence of sequence of stages, combined with the above - mentioned evidence, of independence of qualitative kind of work, would show the existence of an important state variable that corresponds with adiabatic work, but not that such a state variable represented a conserved quantity. For the latter, another step of evidence is needed, which may be related to the concept of reversibility, as mentioned below.
That important state variable was first recognized and denoted U by Clausius in 1850, but he did not then name it, and he defined it in terms not only of work but also of heat transfer in the same process. It was also independently recognized in 1850 by Rankine, who also denoted it U; and in 1851 by Kelvin, who then called it "mechanical energy '', and later "intrinsic energy ''. In 1865, after some hesitation, Clausius began calling his state function U "energy ''. In 1882 it was named the internal energy by Helmholtz. If only adiabatic processes were of interest, and heat could be ignored, the concept of internal energy would hardly arise or be needed. The relevant physics would be largely covered by the concept of potential energy, as was intended in the 1847 paper of Helmholtz on the principle of conservation of energy, though that did not deal with forces that can not be described by a potential, and thus did not fully justify the principle. Moreover, that paper was critical of the early work of Joule that had by then been performed. A great merit of the internal energy concept is that it frees thermodynamics from a restriction to cyclic processes, and allows a treatment in terms of thermodynamic states.
In an adiabatic process, adiabatic work takes the system either from a reference state O with internal energy U(O) to an arbitrary one A with internal energy U(A), or from the state A to the state O:
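With work counted as positive when it is done on the system (the convention also used below), this may be written

    U(A) - U(O) = W_{O→A}^{adiabatic}   or   U(O) - U(A) = W_{A→O}^{adiabatic}.    (1)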
Except under the special, and strictly speaking fictional, condition of reversibility, only one of the processes (adiabatic, O → A) or (adiabatic, A → O) is empirically feasible by a simple application of externally supplied work. The reason for this is given as the second law of thermodynamics and is not considered in the present article.
The fact of such irreversibility may be dealt with in two main ways, according to different points of view:
The formula (1) above allows that, to go by processes of quasi-static adiabatic work from the state A to the state B, we can take a path that goes through the reference state O, since the quasi-static adiabatic work is independent of the path:
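In outline, using the sign convention of formula (1),

    W_{A→B}^{adiabatic, quasi-static} = W_{A→O}^{adiabatic, quasi-static} + W_{O→B}^{adiabatic, quasi-static} = (U(O) - U(A)) + (U(B) - U(O)) = U(B) - U(A).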
This kind of empirical evidence, coupled with theory of this kind, largely justifies the following statement:
A complementary observable aspect of the first law is about heat transfer. Adynamic transfer of energy as heat can be measured empirically by changes in the surroundings of the system of interest by calorimetry. This again requires the existence of adiabatic enclosure of the entire process, system and surroundings, though the separating wall between the surroundings and the system is thermally conductive or radiatively permeable, not adiabatic. A calorimeter can rely on measurement of sensible heat, which requires the existence of thermometers and measurement of temperature change in bodies of known sensible heat capacity under specified conditions; or it can rely on the measurement of latent heat, through measurement of masses of material that change phase, at temperatures fixed by the occurrence of phase changes under specified conditions in bodies of known latent heat of phase change. The calorimeter can be calibrated by adiabatically doing externally determined work on it. The most accurate method is by passing an electric current from outside through a resistance inside the calorimeter. The calibration allows comparison of calorimetric measurement of quantity of heat transferred with quantity of energy transferred as work. According to one textbook, "The most common device for measuring ΔU is an adiabatic bomb calorimeter. '' According to another textbook, "Calorimetry is widely used in present day laboratories. '' According to one opinion, "Most thermodynamic data come from calorimetry... '' According to another opinion, "The most common method of measuring "heat '' is with a calorimeter. ''
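A minimal sketch of such an electrical calibration, with illustrative (assumed) values, compares the electrical work delivered with the observed temperature rise, and then uses the resulting calibration constant to assign a quantity of heat to a later process:

    # Sketch of electrical calibration of a calorimeter; all values are illustrative.
    V = 12.0        # applied voltage, volts
    I = 2.0         # current through the internal resistance, amperes
    t = 300.0       # duration of heating, seconds
    dT_cal = 1.50   # temperature rise observed during calibration, K

    W_electrical = V * I * t      # energy delivered as electrical work, J
    C = W_electrical / dT_cal     # effective heat capacity of the calorimeter, J/K

    dT_process = 0.85             # temperature rise observed in a later process, K
    Q_process = C * dT_process    # heat assigned to that process, J
    print(f"Calibration constant: {C:.1f} J/K")
    print(f"Heat assigned to the process: {Q_process:.1f} J")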
When the system evolves with transfer of energy as heat, without energy being transferred as work, in an adynamic process, the heat transferred to the system is equal to the increase in its internal energy:
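In symbols, with Q denoting the heat supplied to the system,

    ΔU = Q.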
Heat transfer is practically reversible when it is driven by practically negligibly small temperature gradients. Work transfer is practically reversible when it occurs so slowly that there are no frictional effects within the system; frictional effects outside the system should also be zero if the process is to be globally reversible. For a particular reversible process in general, the work done reversibly on the system, W_{A→B}^{path P_0, reversible}, and the heat transferred reversibly to the system, Q_{A→B}^{path P_0, reversible}, are not required to occur respectively adiabatically or adynamically, but they must belong to the same particular process defined by its particular reversible path, P_0, through the space of thermodynamic states. Then the work and heat transfers can occur and be calculated simultaneously.
Putting the two complementary aspects together, the first law for a particular reversible process can be written
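In the notation introduced above, with both quantities counted as positive when transferred to the system,

    ΔU = U(B) - U(A) = Q_{A→B}^{path P_0, reversible} + W_{A→B}^{path P_0, reversible}.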
This combined statement is the expression of the first law of thermodynamics for reversible processes for closed systems.
In particular, if no work is done on a thermally isolated closed system we have
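In symbols,

    ΔU = 0.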
This is one aspect of the law of conservation of energy and can be stated: the internal energy of an isolated system remains constant.
If, in a process of change of state of a closed system, the energy transfer is not under a practically zero temperature gradient and practically frictionless, then the process is irreversible. Then the heat and work transfers may be difficult to calculate, and irreversible thermodynamics is called for. Nevertheless, the first law still holds and provides a check on the measurements and calculations of the work done irreversibly on the system, W_{A→B}^{path P_1, irreversible}, and the heat transferred irreversibly to the system, Q_{A→B}^{path P_1, irreversible}, which belong to the same particular process defined by its particular irreversible path, P_1, through the space of thermodynamic states.
This means that the internal energy U is a function of state and that the internal energy change ΔU between two states is a function only of the two states.
The first law of thermodynamics is so general that its predictions can not all be directly tested. In many properly conducted experiments it has been precisely supported, and never violated. Indeed, within its scope of applicability, the law is so reliably established, that, nowadays, rather than experiment being considered as testing the accuracy of the law, it is more practical and realistic to think of the law as testing the accuracy of experiment. An experimental result that seems to violate the law may be assumed to be inaccurate or wrongly conceived, for example due to failure to account for an important physical factor. Thus, some may regard it as a principle more abstract than a law.
When the heat and work transfers in the equations above are infinitesimal in magnitude, they are often denoted by δ, rather than exact differentials denoted by d, as a reminder that heat and work do not describe the state of any system. The integral of an inexact differential depends upon the particular path taken through the space of thermodynamic parameters while the integral of an exact differential depends only upon the initial and final states. If the initial and final states are the same, then the integral of an inexact differential may or may not be zero, but the integral of an exact differential is always zero. The path taken by a thermodynamic system through a chemical or physical change is known as a thermodynamic process.
The first law for a closed homogeneous system may be stated in terms that include concepts that are established in the second law. The internal energy U may then be expressed as a function of the system 's defining state variables S, entropy, and V, volume: U = U (S, V). In these terms, T, the system 's temperature, and P, its pressure, are partial derivatives of U with respect to S and V. These variables are important throughout thermodynamics, though not necessary for the statement of the first law. Rigorously, they are defined only when the system is in its own state of internal thermodynamic equilibrium. For some purposes, the concepts provide good approximations for scenarios sufficiently near to the system 's internal thermodynamic equilibrium.
The first law requires that:
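In differential form, with δQ the heat supplied to the system and δW the work done on the system (the sign convention used in the expressions that follow), this requirement may be written

    dU = δQ + δW.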
Then, for the fictive case of a reversible process, dU can be written in terms of exact differentials. One may imagine reversible changes, such that there is at each instant negligible departure from thermodynamic equilibrium within the system. This excludes isochoric work. Then, mechanical work is given by δW = - P dV and the quantity of heat added can be expressed as δQ = T dS. For these conditions
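the two contributions combine to give

    dU = T dS - P dV.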
While this has been shown here for reversible changes, it is valid in general, as U can be considered as a thermodynamic state function of the defining state variables S and V:
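namely

    dU = T dS - P dV.    (2)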
Equation (2) is known as the fundamental thermodynamic relation for a closed system in the energy representation, for which the defining state variables are S and V, with respect to which T and P are partial derivatives of U. It is only in the fictive reversible case, when isochoric work is excluded, that the work done and heat transferred are given by − P dV and T dS.
In the case of a closed system in which the particles of the system are of different types and, because chemical reactions may occur, their respective numbers are not necessarily constant, the fundamental thermodynamic relation for dU becomes:
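with μ_i the chemical potential of the type-i particles (defined just below),

    dU = T dS - P dV + Σ_i μ_i dN_i.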
where dN_i is the (small) increase in amount of type - i particles in the reaction, and μ_i is known as the chemical potential of the type - i particles in the system. If dN_i is expressed in mol then μ_i is expressed in J / mol. If the system has more external mechanical variables than just the volume that can change, the fundamental thermodynamic relation further generalizes to:
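with X_i the generalized force conjugate to the external variable x_i (defined just below),

    dU = T dS + Σ_i X_i dx_i + Σ_j μ_j dN_j.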
Here the X_i are the generalized forces corresponding to the external variables x_i. The parameters X_i are independent of the size of the system and are called intensive parameters, and the x_i are proportional to the size and called extensive parameters.
For an open system, there can be transfers of particles as well as energy into or out of the system during a process. For this case, the first law of thermodynamics still holds, in the form that the internal energy is a function of state and the change of internal energy in a process is a function only of its initial and final states, as noted in the section below headed First law of thermodynamics for open systems.
A useful idea from mechanics is that the energy gained by a particle is equal to the force applied to the particle multiplied by the displacement of the particle while that force is applied. Now consider the first law without the heating term: dU = - PdV. The pressure P can be viewed as a force (and in fact has units of force per unit area) while dV is the displacement (with units of distance times area). We may say, with respect to this work term, that a pressure difference forces a transfer of volume, and that the product of the two (work) is the amount of energy transferred out of the system as a result of the process. If one were to make this term negative then this would be the work done on the system.
It is useful to view the TdS term in the same light: here the temperature is known as a "generalized '' force (rather than an actual mechanical force) and the entropy is a generalized displacement.
Similarly, a difference in chemical potential between groups of particles in the system drives a chemical reaction that changes the numbers of particles, and the corresponding product is the amount of chemical potential energy transformed in the process. For example, consider a system consisting of two phases: liquid water and water vapor. There is a generalized "force '' of evaporation that drives water molecules out of the liquid. There is a generalized "force '' of condensation that drives vapor molecules out of the vapor. Only when these two "forces '' (or chemical potentials) are equal is there equilibrium, and the net rate of transfer zero.
The two thermodynamic parameters that form a generalized force - displacement pair are called "conjugate variables ''. The two most familiar pairs are, of course, pressure - volume, and temperature - entropy.
Classical thermodynamics is initially focused on closed homogeneous systems (e.g. Planck 1897 / 1903), which might be regarded as ' zero - dimensional ' in the sense that they have no spatial variation. But it is desired to study also systems with distinct internal motion and spatial inhomogeneity. For such systems, the principle of conservation of energy is expressed in terms not only of internal energy as defined for homogeneous systems, but also in terms of kinetic energy and potential energies of parts of the inhomogeneous system with respect to each other and with respect to long - range external forces. How the total energy of a system is allocated between these three more specific kinds of energy varies according to the purposes of different writers; this is because these components of energy are to some extent mathematical artefacts rather than actually measured physical quantities. For any closed homogeneous component of an inhomogeneous closed system, if E denotes the total energy of that component system, one may write
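namely

    E = E^kin + E^pot + U,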
where E^kin and E^pot denote respectively the total kinetic energy and the total potential energy of the component closed homogeneous system, and U denotes its internal energy.
Potential energy can be exchanged with the surroundings of the system when the surroundings impose a force field, such as gravitational or electromagnetic, on the system.
A compound system consisting of two interacting closed homogeneous component subsystems has a potential energy of interaction E_12^pot between the subsystems. Thus, in an obvious notation, one may write
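that is, for the compound system as a whole,

    E = E_1^kin + E_1^pot + U_1 + E_2^kin + E_2^pot + U_2 + E_12^pot.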
The quantity E_12^pot in general can not be assigned to either subsystem in a non-arbitrary way, and this stands in the way of a general non-arbitrary definition of transfer of energy as work. On occasions, authors make their various respective arbitrary assignments.
The distinction between internal and kinetic energy is hard to make in the presence of turbulent motion within the system, as friction gradually dissipates macroscopic kinetic energy of localised bulk flow into molecular random motion of molecules that is classified as internal energy. The rate of dissipation by friction of kinetic energy of localised bulk flow into internal energy, whether in turbulent or in streamlined flow, is an important quantity in non-equilibrium thermodynamics. This is a serious difficulty for attempts to define entropy for time - varying spatially inhomogeneous systems.
For the first law of thermodynamics, there is no trivial passage of physical conception from the closed system view to an open system view. For closed systems, the concepts of an adiabatic enclosure and of an adiabatic wall are fundamental. Matter and internal energy can not permeate or penetrate such a wall. For an open system, there is a wall that allows penetration by matter. In general, matter in diffusive motion carries with it some internal energy, and some microscopic potential energy changes accompany the motion. An open system is not adiabatically enclosed.
There are some cases in which a process for an open system can, for particular purposes, be considered as if it were for a closed system. In an open system, by definition hypothetically or potentially, matter can pass between the system and its surroundings. But when, in a particular case, the process of interest involves only hypothetical or potential but no actual passage of matter, the process can be considered as if it were for a closed system.
Since the revised and more rigorous definition of the internal energy of a closed system rests upon the possibility of processes by which adiabatic work takes the system from one state to another, this leaves a problem for the definition of internal energy for an open system, for which adiabatic work is not in general possible. According to Max Born, the transfer of matter and energy across an open connection "can not be reduced to mechanics ''. In contrast to the case of closed systems, for open systems, in the presence of diffusion, there is no unconstrained and unconditional physical distinction between convective transfer of internal energy by bulk flow of matter, the transfer of internal energy without transfer of matter (usually called heat conduction and work transfer), and change of various potential energies. The older traditional way and the conceptually revised (Carathéodory) way agree that there is no physically unique definition of heat and work transfer processes between open systems.
In particular, between two otherwise isolated open systems an adiabatic wall is by definition impossible. This problem is solved by recourse to the principle of conservation of energy. This principle allows a composite isolated system to be derived from two other component non-interacting isolated systems, in such a way that the total energy of the composite isolated system is equal to the sum of the total energies of the two component isolated systems. Two previously isolated systems can be subjected to the thermodynamic operation of placement between them of a wall permeable to matter and energy, followed by a time for establishment of a new thermodynamic state of internal equilibrium in the new single unpartitioned system. The internal energies of the initial two systems and of the final new system, considered respectively as closed systems as above, can be measured. Then the law of conservation of energy requires that
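with subscripts s and o referring to the system and its surroundings respectively (the notation used just below),

    ΔU_s + ΔU_o = 0,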
where ΔU_s and ΔU_o denote the changes in internal energy of the system and of its surroundings respectively. This is a statement of the first law of thermodynamics for a transfer between two otherwise isolated open systems, that fits well with the conceptually revised and rigorous statement of the law stated above.
For the thermodynamic operation of adding two systems with internal energies U_1 and U_2, to produce a new system with internal energy U, one may write U = U_1 + U_2; the reference states for U, U_1 and U_2 should be specified accordingly, maintaining also that the internal energy of a system be proportional to its mass, so that the internal energies are extensive variables.
There is a sense in which this kind of additivity expresses a fundamental postulate that goes beyond the simplest ideas of classical closed system thermodynamics; the extensivity of some variables is not obvious, and needs explicit expression; indeed one author goes so far as to say that it could be recognized as a fourth law of thermodynamics, though this is not repeated by other authors.
Also of course
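in the corresponding notation for amounts of substance,

    ΔN_s + ΔN_o = 0,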
where ΔN_s and ΔN_o denote the changes in mole number of a component substance of the system and of its surroundings respectively. This is a statement of the law of conservation of mass.
A system connected to its surroundings only through contact by a single permeable wall, but otherwise isolated, is an open system. If it is initially in a state of contact equilibrium with a surrounding subsystem, a thermodynamic process of transfer of matter can be made to occur between them if the surrounding subsystem is subjected to some thermodynamic operation, for example, removal of a partition between it and some further surrounding subsystem. The removal of the partition in the surroundings initiates a process of exchange between the system and its contiguous surrounding subsystem.
An example is evaporation. One may consider an open system consisting of a collection of liquid, enclosed except where it is allowed to evaporate into or to receive condensate from its vapor above it, which may be considered as its contiguous surrounding subsystem, and subject to control of its volume and temperature.
A thermodynamic process might be initiated by a thermodynamic operation in the surroundings that mechanically increases the controlled volume of the vapor. Some mechanical work will be done within the surroundings by the vapor, but also some of the parent liquid will evaporate and enter the vapor collection which is the contiguous surrounding subsystem. Some internal energy will accompany the vapor that leaves the system, but it will not make sense to try to uniquely identify part of that internal energy as heat and part of it as work. Consequently, the energy transfer that accompanies the transfer of matter between the system and its surrounding subsystem can not be uniquely split into heat and work transfers to or from the open system. The component of total energy transfer that accompanies the transfer of vapor into the surrounding subsystem is customarily called ' latent heat of evaporation ', but this use of the word heat is a quirk of customary historical language, not in strict compliance with the thermodynamic definition of transfer of energy as heat. In this example, kinetic energy of bulk flow and potential energy with respect to long - range external forces such as gravity are both considered to be zero. The first law of thermodynamics refers to the change of internal energy of the open system, between its initial and final states of internal equilibrium.
An open system can be in contact equilibrium with several other systems at once.
This includes cases in which there is contact equilibrium between the system, and several subsystems in its surroundings, including separate connections with subsystems through walls that are permeable to the transfer of matter and internal energy as heat and allowing friction of passage of the transferred matter, but immovable, and separate connections through adiabatic walls with others, and separate connections through diathermic walls impermeable to matter with yet others. Because there are physically separate connections that are permeable to energy but impermeable to matter, between the system and its surroundings, energy transfers between them can occur with definite heat and work characters. Conceptually essential here is that the internal energy transferred with the transfer of matter is measured by a variable that is mathematically independent of the variables that measure heat and work.
With such independence of variables, the total increase of internal energy in the process is then determined as the sum of the internal energy transferred from the surroundings with the transfer of matter through the walls that are permeable to it, and of the internal energy transferred to the system as heat through the diathermic walls, and of the energy transferred to the system as work through the adiabatic walls, including the energy transferred to the system by long - range forces. These simultaneously transferred quantities of energy are defined by events in the surroundings of the system. Because the internal energy transferred with matter is not in general uniquely resolvable into heat and work components, the total energy transfer can not in general be uniquely resolved into heat and work components. Under these conditions, the following formula can describe the process in terms of externally defined thermodynamic variables, as a statement of the first law of thermodynamics:
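With ΔU_0, ΔU_i, Q and W as defined just below, the balance may be written

    ΔU_0 = Q - W - Σ_{i=1}^{m} ΔU_i,    (3)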
where ΔU_0 denotes the change of internal energy of the system, and ΔU_i denotes the change of internal energy of the ith of the m surrounding subsystems that are in open contact with the system, due to transfer between the system and that ith surrounding subsystem, and Q denotes the internal energy transferred as heat from the heat reservoir of the surroundings to the system, and W denotes the energy transferred from the system to the surrounding subsystems that are in adiabatic connection with it. The case of a wall that is permeable to matter and can move so as to allow transfer of energy as work is not considered here.
If the system is described by the energetic fundamental equation, U = U (S, V, N), and if the process can be described in the quasi-static formalism, in terms of the internal state variables of the system, then the process can also be described by a combination of the first and second laws of thermodynamics, by the formula
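With the quantities as defined just below, and assuming the changes are small enough (or the path smooth enough) for T, P and the μ_j to be treated as constant,

    ΔU_0 = T ΔS - P ΔV + Σ_{j=1}^{n} μ_j ΔN_j,    (4)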
where there are n chemical constituents of the system and permeably connected surrounding subsystems, and where T, S, P, V, N_j, and μ_j are defined as above.
For a general natural process, there is no immediate term-wise correspondence between equations (3) and (4), because they describe the process in different conceptual frames.
Nevertheless, a conditional correspondence exists. There are three relevant kinds of wall here: purely diathermal, adiabatic, and permeable to matter. If two of those kinds of wall are sealed off, leaving only one that permits transfers of energy, as work, as heat, or with matter, then the remaining permitted terms correspond precisely. If two of the kinds of wall are left unsealed, then energy transfer can be shared between them, so that the two remaining permitted terms do not correspond precisely.
For the special fictive case of quasi-static transfers, there is a simple correspondence. For this, it is supposed that the system has multiple areas of contact with its surroundings. There are pistons that allow adiabatic work, purely diathermal walls, and open connections with surrounding subsystems of completely controllable chemical potential (or equivalent controls for charged species). Then, for a suitable fictive quasi-static transfer, one can write
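in the sign convention of equation (3), the separate contributions may be identified as

    Q = T ΔS (heat through the purely diathermal walls),   W = P ΔV (work done by the system on the pistons),   μ_j ΔN_j (energy admitted with matter through the j-th open connection).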
For fictive quasi-static transfers for which the chemical potentials in the connected surrounding subsystems are suitably controlled, these can be put into equation (4) to yield
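an equation of the form

    ΔU_0 = Q - W + Σ_{j=1}^{n} μ_j ΔN_j.    (5)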
The reference does not actually write equation (5), but what it does write is fully compatible with it. Another helpful account is given by Tschoegl.
There are several other accounts of this, in apparent mutual conflict.
The transfer of energy between an open system and a single contiguous subsystem of its surroundings is considered also in non-equilibrium thermodynamics. The problem of definition arises also in this case. It may be allowed that the wall between the system and the subsystem is not only permeable to matter and to internal energy, but also may be movable so as to allow work to be done when the two systems have different pressures. In this case, the transfer of energy as heat is not defined.
Methods for study of non-equilibrium processes mostly deal with spatially continuous flow systems. In this case, the open connection between system and surroundings is usually taken to fully surround the system, so that there are no separate connections impermeable to matter but permeable to heat. Except for the special case mentioned above when there is no actual transfer of matter, which can be treated as if for a closed system, in strictly defined thermodynamic terms, it follows that transfer of energy as heat is not defined. In this sense, there is no such thing as ' heat flow ' for a continuous - flow open system. Properly, for closed systems, one speaks of transfer of internal energy as heat, but in general, for open systems, one can speak safely only of transfer of internal energy. A factor here is that there are often cross-effects between distinct transfers, for example that transfer of one substance may cause transfer of another even when the latter has zero chemical potential gradient.
Usually transfer between a system and its surroundings applies to transfer of a state variable, and obeys a balance law, that the amount lost by the donor system is equal to the amount gained by the receptor system. Heat is not a state variable. For his 1947 definition of "heat transfer '' for discrete open systems, the author Prigogine carefully explains at some length that his definition of it does not obey a balance law. He describes this as paradoxical.
The situation is clarified by Gyarmati, who shows that his definition of "heat transfer '', for continuous - flow systems, really refers not specifically to heat, but rather to transfer of internal energy, as follows. He considers a conceptual small cell in a situation of continuous - flow as a system defined in the so - called Lagrangian way, moving with the local center of mass. The flow of matter across the boundary is zero when considered as a flow of total mass. Nevertheless, if the material constitution is of several chemically distinct components that can diffuse with respect to one another, the system is considered to be open, the diffusive flows of the components being defined with respect to the center of mass of the system, and balancing one another as to mass transfer. Still there can be a distinction between bulk flow of internal energy and diffusive flow of internal energy in this case, because the internal energy density does not have to be constant per unit mass of material, and allowing for non-conservation of internal energy because of local conversion of kinetic energy of bulk flow to internal energy by viscosity.
Gyarmati shows that his definition of "the heat flow vector '' is strictly speaking a definition of flow of internal energy, not specifically of heat, and so it turns out that his use here of the word heat is contrary to the strict thermodynamic definition of heat, though it is more or less compatible with historical custom, that often enough did not clearly distinguish between heat and internal energy; he writes "that this relation must be considered to be the exact definition of the concept of heat flow, fairly loosely used in experimental physics and heat technics. '' Apparently in a different frame of thinking from that of the above - mentioned paradoxical usage in the earlier sections of the historic 1947 work by Prigogine, about discrete systems, this usage of Gyarmati is consistent with the later sections of the same 1947 work by Prigogine, about continuous - flow systems, which use the term "heat flux '' in just this way. This usage is also followed by Glansdorff and Prigogine in their 1971 text about continuous - flow systems. They write: "Again the flow of internal energy may be split into a convection flow ρuv and a conduction flow. This conduction flow is by definition the heat flow W. Therefore: j (U) = ρuv + W where u denotes the (internal) energy per unit mass. (These authors actually use the symbols E and e to denote internal energy but their notation has been changed here to accord with the notation of the present article. These authors actually use the symbol U to refer to total energy, including kinetic energy of bulk flow.) '' This usage is followed also by other writers on non-equilibrium thermodynamics such as Lebon, Jou, and Casas - Vásquez, and de Groot and Mazur. This usage is described by Bailyn as stating the non-convective flow of internal energy, and is listed as his definition number 1, according to the first law of thermodynamics. This usage is also followed by workers in the kinetic theory of gases. This is not the ad hoc definition of "reduced heat flux '' of Haase.
In the case of a flowing system of only one chemical constituent, in the Lagrangian representation, there is no distinction between bulk flow and diffusion of matter. Moreover, the flow of matter is zero into or out of the cell that moves with the local center of mass. In effect, in this description, one is dealing with a system effectively closed to the transfer of matter. But still one can validly talk of a distinction between bulk flow and diffusive flow of internal energy, the latter driven by a temperature gradient within the flowing material, and being defined with respect to the local center of mass of the bulk flow. In this case of a virtually closed system, because of the zero matter transfer, as noted above, one can safely distinguish between transfer of energy as work, and transfer of internal energy as heat.
|
what is the difference between gib and gb | Gibibyte - wikipedia
The gibibyte is a multiple of the unit byte for digital information. The binary prefix gibi means 2^30; therefore one gibibyte is equal to 1073741824 bytes = 1024 mebibytes. The unit symbol for the gibibyte is GiB. It is one of the units with binary prefixes defined by the International Electrotechnical Commission (IEC) in 1998.
The gibibyte is closely related to the gigabyte (GB), which is defined by the IEC as 10^9 bytes = 1000000000 bytes, so 1 GiB ≈ 1.074 GB. 1024 gibibytes are equal to one tebibyte. In the context of computer memory, gigabyte and GB are customarily used to mean 1024^3 (2^30) bytes, although not in the context of data transmission and not necessarily for hard drive size.
Hard drive and SSD manufacturers use the gigabyte to mean 1000000000 bytes. Therefore, the capacity of a 128 GB SSD is 128 000 000 000 bytes. Expressed in gibibytes this is about 119.2 GiB. Some operating systems, including Microsoft Windows, display such a drive capacity as 119 GB, using the SI prefix G with the binary meaning. No space is missing: the size is simply being expressed in a different unit, even though the same prefix (G) is used in both cases.
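A one-line calculation (sketched here in Python) makes the relationship explicit:

    # Convert a capacity quoted in decimal gigabytes to gibibytes.
    capacity_bytes = 128 * 10**9            # 128 GB as marketed, using the decimal prefix
    capacity_gib = capacity_bytes / 2**30   # the same capacity expressed in GiB
    print(f"{capacity_bytes} bytes = {capacity_gib:.1f} GiB")  # about 119.2 GiB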
The use of gigabyte (GB) to refer to 1000000000 bytes in some contexts and to 1073741824 bytes in others, sometimes in reference to the same device, has led to claims of confusion, controversies, and lawsuits. The IEC created the binary prefixes (kibi, mebi, gibi, etc.) in an attempt to reduce such confusion. They are increasingly used in technical literature and open - source software, and are a component of the International System of Quantities.
One thousand and twenty - four gibibytes (1024 GiB) is equal to one tebibyte (1 TiB).
|
when does the new gotti movie come out | Gotti (2018 film) - Wikipedia
Gotti is a 2018 American biographical crime drama film directed by Kevin Connolly and written by Lem Dobbs and Leo Rossi. The film is about the life of New York City mobster John Gotti and his son, and stars John Travolta, Kelly Preston, Stacy Keach, Pruitt Taylor Vince, Spencer Lofranco, and Victor Gojcaj.
The film languished in development hell for several years with numerous directors and actors, including Barry Levinson and Al Pacino, attached at various points. Principal photography finally began in July 2016 in Cincinnati, Ohio and concluded in Brooklyn, New York in February 2017.
The film was originally set to be released in the United States on December 15, 2017, but, just two weeks prior, Lionsgate, the slated distributor, sold the film back to its producers and studio, delaying its release. On March 12, 2018, its new release date was announced for June 15, 2018 by Vertical Entertainment and MoviePass Ventures, after premiering at the 2018 Cannes Film Festival. Gotti was negatively reviewed by critics, who lamented the writing, aesthetics, and performances, although its use of makeup received some praise. Travolta 's performance received mixed reviews. It is one of the few films to hold an approval rating of 0 % on the website Rotten Tomatoes.
The film chronicles the three - decade reign of crime boss John Gotti, and his rise as the head of the Gambino Crime Family in New York City, along with his son, John Jr., and his loyal wife, Victoria.
In September 2010, Fiore Films announced that it had secured the rights from Gotti Jr. to produce a movie about his life. The film, tentatively titled Gotti: In the Shadow of My Father, was to be directed by Barry Levinson, who eventually left the project. Nick Cassavetes and Joe Johnston were then also attached at different points to direct, as were Al Pacino, Lindsay Lohan and Ben Foster to star in various roles. Joe Pesci was cast as Angelo Ruggiero early in development and gained 30 pounds in order to properly portray him. After having his salary cut and being recast as Lucchese underboss Anthony Casso, he sued Fiore Films for $3 million; the case was settled out of court. Chazz Palminteri, who had played Paul Castellano in the TNT made - for - TV film Boss of Bosses, was also initially cast to reprise Gotti 's predecessor.
On September 8, 2015, it was announced that the project was moving forward with Kevin Connolly as director. Emmett / Furla / Oasis Films, Fiore Films and Herrick Entertainment would be financing the film, with Lionsgate Premiere handling the domestic distribution rights.
On July 27, 2016, the complete cast for the film was announced. It included Kelly Preston as Gotti 's wife Victoria; Stacy Keach as Aniello Dellacroce, the underboss of the Gambino crime family who mentored Gotti; Pruitt Taylor Vince as Angelo Ruggiero, a deferential friend of Gotti; Spencer Lofranco as John Gotti Jr., Gotti 's son; William DeMeo as Sammy Gravano, Gotti 's right hand man who later became an FBI informant and helped them in bringing down Gotti; Leo Rossi as Bartholomew "Bobby '' Boriello, Gotti 's enforcer; Victor Gojcaj as Father Damien, with Tyler Jon Olson and Megan Leonard.
Principal photography on the film began in Cincinnati, Ohio, on July 25, 2016, with locations including Springfield Township. The locations were staged to resemble setting of the film, New York City. Filming also took place on Jonfred Court in Finneytown, and Indian Hill. Filming was also done at Butler County Jail.
The film 's shooting was previously scheduled to take place in New York City, because of its setting there, but it was relocated to Cincinnati. One reason to relocate was Ohio 's revised Motion Picture Tax Credit to benefit films ' creators. Filming for some scenes took place in Brooklyn on February 21, 2017, concluding principal photography.
The film was originally set to be released in a limited release and through video on demand on December 15, 2017, through Lionsgate Premiere. Producers began seeking a new distributor in order for the film to receive a wide theatrical release, as opposed to the original release it was initially intended to have, with Lionsgate selling the film back to the producers and studio. On March 12, 2018, Connolly announced that the film would be released on June 15, 2018. On April 12, 2018, it was announced Vertical Entertainment was the film 's new distributor. On April 25, 2018, it was announced that MoviePass Ventures, a subsidiary of MoviePass, acquired an equity stake in the film and will participate in the revenue generated from the film. The film premiered at the 2018 Cannes Film Festival on May 15, 2018.
Gotti began its limited release in 503 theaters and was projected to make $1 -- 2 million in its opening weekend. It made $105,000 from Thursday night previews at 350 theaters and a total of $1.7 million in its opening weekend, finishing 12th. According to their own reports, MoviePass accounted for 40 % of tickets sold, leading one independent studio head to tell Deadline Hollywood: "It used to be in distribution, we 'd all gossip whether a studio was buying tickets to their own movie to goose their opening. But in the case of MoviePass, there 's no secret: They 're literally buying the tickets to their own movie! '' In its second weekend the film dropped 53 % and made $812,000, finishing 12th.
Gotti was not screened in advance for critics, but the Cannes premiere was attended by reviewers from IndieWire and The Hollywood Reporter, who both gave the film negative reviews. On review aggregator Rotten Tomatoes, the film holds an approval rating of 0 % based on 39 reviews, and an average rating of 2.2 / 10. The website 's critical consensus reads, "Fuhgeddaboudit. '' On Metacritic, the film has a weighted average score of 24 out of 100, based on reviews from 16 critics, indicating "generally unfavorable reviews ''.
Jordan Mintzer of The Hollywood Reporter gave the film a negative review, writing: "... it 's not only that the film is pretty terrible: poorly written, devoid of tension, ridiculous in spots and just plain dull in others. But the fact that it mostly portrays John Gotti as a loving family man and altogether likable guy, and his son John Gotti Jr. as a victim of government persecution, may be a first in the history of the genre. '' The New York Post 's Johnny Oleksinski called the film "the worst mob movie of all - time '' and wrote, "... the long - awaited biopic about the Gambino crime boss ' rise from made man to top dog took four directors, 44 producers and eight years to make. It shows. The finished product belongs in a cement bucket at the bottom of the river. '' Writing for Rolling Stone, Peter Travers gave the film one out of four stars and said, "Insane testimonials from Gotti supporters at the end are as close as this (film) will ever get to good reviews. ''
Observers were quick to note a large disparity on Rotten Tomatoes between the audience approval score of 80 % and the 0 % critics ' score during the film 's opening weekend. As of July 4, 2018, the audience score was down 25 % to a score of 55 %.
On June 19, Dan Murrell of Screen Junkies noted that the disparity "made no sense '' and suspected vote manipulation on behalf of the studio. Accusations against the production studio and marketing team increased after the release of a marketing push suspected of trying to hit back at the critics. The campaign urged consumers to ignore the "... trolls behind a keyboard '', declaring, "Audiences loved Gotti but critics do n't want you to see it... The question is why??? Trust the people and see it for yourself! '' Observers also noted the abnormally high number of reviews, ~ 7,000, compared to other films that did better at the box office that weekend, such as Incredibles 2 which logged ~ 7,600 reviews and grossed 105 times more than Gotti.
Rotten Tomatoes staff issued a statement stating they did n't find any evidence of tampering and that "All of the ratings and reviews were left by active accounts. '' As of June 19, 32 of the 54 written reviews were found to be from first - time reviewers on the site, who had also only left a review for Gotti itself, and 45 of the accounts were created the same month. Many of the accounts also wrote a review for American Animals, which along with Gotti are the only films to be owned by MoviePass through its company MoviePass Ventures, which was responsible for 40 % of tickets sold. Jim Vorel of Paste suggested this was done to try to prop up MoviePass 's "unlimited movies '' business model.
|
what factors allowed the renaissance to begin in italy | Italian Renaissance - wikipedia
The Italian Renaissance (Italian: Rinascimento (rinaʃʃiˈmento)) was the earliest manifestation of the general European Renaissance, a period of great cultural change and achievement that began in Italy during the 14th century (Trecento) and lasted until the 17th century (Seicento), marking the transition between Medieval and Modern Europe. The French word renaissance (Rinascimento in Italian) means "Rebirth '' and defines the period as one of renewed interest in the culture of classical antiquity after the centuries labeled the Dark Ages by Renaissance humanists, as well as an era of economic revival after the Black Death. The Renaissance author Giorgio Vasari used the term "Rinascita '' in his Lives of the Most Excellent Painters, Sculptors, and Architects but the concept became widespread only in the 19th century, after the works of scholars such as Jules Michelet and Jacob Burckhardt.
The European Renaissance began in Tuscany (Central Italy), and centred in the city of Florence. Florence, one of the several city - states of the peninsula, rose to economic prominence by providing credit for European monarchs and laying down the groundwork for capitalism and banking. The Renaissance later spread to Venice, heart of a Mediterranean empire and in control of the trade routes with the East since the end of the Crusades, where the remains of ancient Greek culture were brought together, providing humanist scholars with new texts. Finally the Renaissance had a significant effect on the Papal States and Rome, largely rebuilt by Humanist and Renaissance popes (such as Alexander VI and Julius II), who were frequently involved in Italian politics, in arbitrating disputes between competing colonial powers and in opposing the Reformation.
The Italian Renaissance is best known for its achievements in painting, architecture, sculpture, literature, music, philosophy, science and exploration. Italy became the recognized European leader in all these areas by the late 15th century, during the Peace of Lodi (1454 - 1494) agreed between Italian states. The Italian Renaissance peaked in the mid-16th century as domestic disputes and foreign invasions plunged the region into the turmoil of the Italian Wars (1494 - 1559). Following this conflict, the smaller Italian states lost their independence and entered a period known as "foreign domination ''. However, the ideas and ideals of the Italian Renaissance endured and spread into the rest of Europe, setting off the Northern Renaissance. Italian explorers from the maritime republics served under the auspices of European monarchs, ushering in the Age of Discovery. The most famous among them are Christopher Columbus who served for Spain, Giovanni da Verrazzano for France, Amerigo Vespucci for Portugal, and John Cabot for England. Italian universities attracted polymaths and scholars such as Copernicus, Vesalius, Galileo and Torricelli, playing a key role in the scientific revolution. Various events and dates have been proposed for the end of the Renaissance, often occurring during the 17th century.
Accounts of Renaissance literature usually begin with the three great poets of the 14th century: Dante Alighieri, Petrarch (best known for the elegantly polished vernacular sonnet sequence of the Canzoniere and for the craze for book collecting that he initiated) and Boccaccio (author of the Decameron). Famous vernacular poets of the Renaissance include the renaissance epic authors Luigi Pulci (author of Morgante), Matteo Maria Boiardo (Orlando Innamorato), Ludovico Ariosto (Orlando Furioso) and Torquato Tasso ("Jerusalem Delivered ''). 15th century writers such as the poet Poliziano and the Platonist philosopher Marsilio Ficino made extensive translations from both Latin and Greek. In the early 16th century, Castiglione (The Book of the Courtier) laid out his vision of the ideal gentleman and lady, while Machiavelli cast a jaundiced eye on "la verità effettuale della cosa '' -- the actual truth of things -- in The Prince, composed, in humanistic style, chiefly of parallel ancient and modern examples of Virtù. Historians of the period include Machiavelli himself, his friend and critic Francesco Guicciardini and Giovanni Botero (The Reason of State). The Aldine Press, founded by the printer Aldo Manuzio, active in Venice, developed Italic type and the small, relatively portable and inexpensive printed book that could be carried in one 's pocket, as well as being the first to publish editions of books in Ancient Greek.
Italian Renaissance art exercised a dominant influence on subsequent European painting (see Western painting) and sculpture for centuries afterwards, with artists such as Leonardo da Vinci, Michelangelo, Raphael, Donatello, Giotto di Bondone, Masaccio, Fra Angelico, Piero della Francesca, Domenico Ghirlandaio, Perugino, Botticelli, and Titian. The same is true for architecture, as practiced by Brunelleschi, Leon Battista Alberti, Andrea Palladio, and Bramante. Their works include Florence Cathedral, St. Peter 's Basilica in Rome, and the Tempio Malatestiano in Rimini (to name only a few, not to mention many splendid private residences: see Renaissance architecture).
By the Late Middle Ages (circa 1300 onward), Latium, the former heartland of the Roman Empire, and southern Italy were generally poorer than the North. Rome was a city of ancient ruins, and the Papal States were loosely administered, and vulnerable to external interference such as that of France, and later Spain. The Papacy was affronted when the Avignon Papacy was created in southern France as a consequence of pressure from King Philip the Fair of France. In the south, Sicily had for some time been under foreign domination, by the Arabs and then the Normans. Sicily had prospered for 150 years during the Emirate of Sicily and later for two centuries during the Norman Kingdom and the Hohenstaufen Kingdom, but had declined by the late Middle Ages.
In contrast, Northern and Central Italy had become far more prosperous, and it has been calculated that the region was among the richest of Europe. The Crusades had built lasting trade links to the Levant, and the Fourth Crusade had done much to destroy the Byzantine Roman Empire as a commercial rival to the Venetians and Genoese. The main trade routes from the east passed through the Byzantine Empire or the Arab lands and onward to the ports of Genoa, Pisa, and Venice. Luxury goods bought in the Levant, such as spices, dyes, and silks were imported to Italy and then resold throughout Europe. Moreover, the inland city - states profited from the rich agricultural land of the Po valley. From France, Germany, and the Low Countries, through the medium of the Champagne fairs, land and river trade routes brought goods such as wool, wheat, and precious metals into the region. The extensive trade that stretched from Egypt to the Baltic generated substantial surpluses that allowed significant investment in mining and agriculture. Thus, while northern Italy was not richer in resources than many other parts of Europe, the level of development, stimulated by trade, allowed it to prosper. In particular, Florence became one of the wealthiest of the cities of Northern Italy, mainly due to its woolen textile production, developed under the supervision of its dominant trade guild, the Arte della Lana. Wool was imported from Northern Europe (and in the 16th century from Spain) and together with dyes from the east were used to make high quality textiles.
The Italian trade routes that covered the Mediterranean and beyond were also major conduits of culture and knowledge. The recovery of lost Greek classics (and, to a lesser extent, Arab advancements on them) following the Crusader conquest of the Byzantine heartlands, revitalized medieval philosophy in the Renaissance of the 12th century, just as the refugee Byzantine scholars who migrated to Italy during and following the Ottoman conquest of the Byzantine Empire between the 12th and 15th centuries were important in sparking the new linguistic studies of the Renaissance, in newly created academies in Florence and Venice. Humanist scholars searched monastic libraries for ancient manuscripts and recovered Tacitus and other Latin authors. The rediscovery of Vitruvius meant that the architectural principles of Antiquity could be observed once more, and Renaissance artists were encouraged, in the atmosphere of humanist optimism, to excel the achievements of the Ancients, like Apelles, of whom they read.
In the 13th century, much of Europe experienced strong economic growth. The trade routes of the Italian states linked with those of established Mediterranean ports and eventually the Hanseatic League of the Baltic and northern regions of Europe to create a network economy in Europe for the first time since the 4th century. The city - states of Italy expanded greatly during this period and grew in power to become de facto fully independent of the Holy Roman Empire; apart from the Kingdom of Naples, outside powers kept their armies out of Italy. During this period, the modern commercial infrastructure developed, with double - entry book - keeping, joint stock companies, an international banking system, a systematized foreign exchange market, insurance, and government debt. Florence became the centre of this financial industry and the gold florin became the main currency of international trade.
The new mercantile governing class, who gained their position through financial skill, adapted to their purposes the feudal aristocratic model that had dominated Europe in the Middle Ages. A feature of the High Middle Ages in Northern Italy was the rise of the urban communes which had broken from the control by bishops and local counts. In much of the region, the landed nobility was poorer than the urban patriarchs in the High Medieval money economy whose inflationary rise left land - holding aristocrats impoverished. The increase in trade during the early Renaissance enhanced these characteristics. The decline of feudalism and the rise of cities influenced each other; for example, the demand for luxury goods led to an increase in trade, which led to greater numbers of tradesmen becoming wealthy, who, in turn, demanded more luxury goods. This atmosphere of assumed luxury of the time created a need for the creation of visual symbols of wealth, an important way to show a family 's affluence and taste.
This change also gave the merchants almost complete control of the governments of the Italian city - states, again enhancing trade. One of the most important effects of this political control was security. Those that grew extremely wealthy in a feudal state ran constant risk of running afoul of the monarchy and having their lands confiscated, as famously occurred to Jacques Coeur in France. The northern states also kept many medieval laws that severely hampered commerce, such as those against usury, and prohibitions on trading with non-Christians. In the city - states of Italy, these laws were repealed or rewritten.
The 14th century saw a series of catastrophes that caused the European economy to go into recession. The Medieval Warm Period was ending as the transition to the Little Ice Age began. This change in climate saw agricultural output decline significantly, leading to repeated famines, exacerbated by the rapid population growth of the earlier era. The Hundred Years ' War between England and France disrupted trade throughout northwest Europe, most notably when, in 1345, King Edward III of England repudiated his debts, contributing to the collapse of the two largest Florentine banks, those of the Bardi and Peruzzi. In the east, war was also disrupting trade routes, as the Ottoman Empire began to expand throughout the region. Most devastating, though, was the Black Death that decimated the populations of the densely populated cities of Northern Italy and returned at intervals thereafter. Florence, for instance, which had a pre-plague population of 45,000 decreased over the next 47 years by 25 -- 50 %. Widespread disorder followed, including a revolt of Florentine textile workers, the ciompi, in 1378.
It was during this period of instability that the Renaissance authors such as Dante and Petrarch lived, and the first stirrings of Renaissance art were to be seen, notably in the realism of Giotto. Paradoxically, some of these disasters would help establish the Renaissance. The Black Death wiped out a third of Europe 's population. The resulting labour shortage increased wages and the reduced population was therefore much wealthier, better fed, and, significantly, had more surplus money to spend on luxury goods. As incidences of the plague began to decline in the early 15th century, Europe 's devastated population once again began to grow. The new demand for products and services also helped create a growing class of bankers, merchants, and skilled artisans. The horrors of the Black Death and the seeming inability of the Church to provide relief would contribute to a decline of church influence. Additionally, the collapse of the Bardi and Peruzzi banks would open the way for the Medici to rise to prominence in Florence. Roberto Sabatino Lopez argues that the economic collapse was a crucial cause of the Renaissance. According to this view, in a more prosperous era, businessmen would have quickly reinvested their earnings in order to make more money in a climate favourable to investment. However, in the leaner years of the 14th century, the wealthy found few promising investment opportunities for their earnings and instead chose to spend more on culture and art.
Another popular explanation for the Italian Renaissance is the thesis, first advanced by historian Hans Baron, that states that the primary impetus of the early Renaissance was the long - running series of wars between Florence and Milan. By the late 14th century, Milan had become a centralized monarchy under the control of the Visconti family. Giangaleazzo Visconti, who ruled the city from 1378 to 1402, was renowned both for his cruelty and for his abilities, and set about building an empire in Northern Italy. He launched a long series of wars, with Milan steadily conquering neighbouring states and defeating the various coalitions led by Florence that sought in vain to halt the advance. This culminated in the 1402 siege of Florence, when it looked as though the city was doomed to fall, before Giangaleazzo suddenly died and his empire collapsed.
Baron 's thesis suggests that during these long wars, the leading figures of Florence rallied the people by presenting the war as one between the free republic and a despotic monarchy, between the ideals of the Greek and Roman Republics and those of the Roman Empire and Medieval kingdoms. For Baron, the most important figure in crafting this ideology was Leonardo Bruni. This time of crisis in Florence was the period when the most influential figures of the early Renaissance were coming of age, such as Ghiberti, Donatello, Masolino, and Brunelleschi. Inculcated with this republican ideology they later went on to advocate republican ideas that were to have an enormous impact on the Renaissance.
Northern Italy and upper Central Italy were divided into a number of warring city - states, the most powerful being Milan, Florence, Pisa, Siena, Genoa, Ferrara, Mantua, Verona and Venice. High Medieval Northern Italy was further divided by the long - running battle for supremacy between the forces of the Papacy and of the Holy Roman Empire: each city aligned itself with one faction or the other, yet was divided internally between the two warring parties, Guelfs and Ghibellines. Warfare between the states was common, invasion from outside Italy confined to intermittent sorties of Holy Roman Emperors. Renaissance politics developed from this background. Since the 13th century, as armies became primarily composed of mercenaries, prosperous city - states could field considerable forces, despite their low populations. In the course of the 15th century, the most powerful city - states annexed their smaller neighbors. Florence took Pisa in 1406, Venice captured Padua and Verona, while the Duchy of Milan annexed a number of nearby areas including Pavia and Parma.
The first part of the Renaissance saw almost constant warfare on land and sea as the city - states vied for preeminence. On land, these wars were primarily fought by armies of mercenaries known as condottieri, bands of soldiers drawn from around Europe, but especially Germany and Switzerland, led largely by Italian captains. The mercenaries were not willing to risk their lives unduly, and war became one largely of sieges and maneuvering, occasioning few pitched battles. It was also in the interest of mercenaries on both sides to prolong any conflict, to continue their employment. Mercenaries were also a constant threat to their employers; if not paid, they often turned on their patron. If it became obvious that a state was entirely dependent on mercenaries, the temptation was great for the mercenaries to take over the running of it themselves -- this occurred on a number of occasions.
At sea, Italian city - states sent many fleets out to do battle. The main contenders were Pisa, Genoa, and Venice, but after a long conflict the Genoese succeeded in reducing Pisa. Venice proved to be a more powerful adversary, and with the decline of Genoese power during the 15th century Venice became pre-eminent on the seas. In response to threats from the landward side, from the early 15th century Venice developed an increased interest in controlling the terrafirma as the Venetian Renaissance opened.
On land, decades of fighting saw Florence, Milan and Venice emerge as the dominant players, and these three powers finally set aside their differences and agreed to the Peace of Lodi in 1454, which saw relative calm brought to the region for the first time in centuries. This peace would hold for the next forty years, and Venice 's unquestioned hegemony over the sea also led to unprecedented peace for much of the rest of the 15th century. In the beginning of the 15th century, adventurers and traders such as Niccolò Da Conti (1395 -- 1469) traveled as far as Southeast Asia and back, bringing fresh knowledge of the state of the world and presaging further European voyages of exploration in the years to come.
Until the late 14th century, prior to the Medici, Florence 's leading family was the House of Albizzi. In 1293 the Ordinances of Justice were enacted; these effectively became the constitution of the republic of Florence throughout the Italian Renaissance. The city 's numerous luxurious palazzi were becoming surrounded by townhouses built by the ever - prospering merchant class. In 1298, one of the leading banking families of Europe, the Bonsignoris, was bankrupted, and the city of Siena lost her status as the banking centre of Europe to Florence.
The main challengers of the Albizzi family were the Medicis, first under Giovanni de ' Medici, later under his son Cosimo di Giovanni de ' Medici. The Medici controlled the Medici bank -- then Europe 's largest bank -- and an array of other enterprises in Florence and elsewhere. In 1433, the Albizzi managed to have Cosimo exiled. The next year, however, saw a pro-Medici Signoria elected and Cosimo returned. The Medici became the town 's leading family, a position they would hold for the next three centuries. Florence remained a republic until 1537, traditionally marking the end of the High Renaissance in Florence, but the instruments of republican government were firmly under the control of the Medici and their allies, save during the intervals after 1494 and 1527. Cosimo and Lorenzo rarely held official posts, but were the unquestioned leaders.
Cosimo de ' Medici was highly popular among the citizenry, mainly for bringing an era of stability and prosperity to the town. One of his most important accomplishments was negotiating the Peace of Lodi with Francesco Sforza ending the decades of war with Milan and bringing stability to much of Northern Italy. Cosimo was also an important patron of the arts, directly and indirectly, by the influential example he set.
Cosimo was succeeded by his sickly son Piero de ' Medici, who died after five years in charge of the city. In 1469 the reins of power passed to Cosimo 's twenty - one - year - old grandson Lorenzo, who would become known as "Lorenzo the Magnificent. '' Lorenzo was the first of the family to be educated from an early age in the humanist tradition and is best known as one of the Renaissance 's most important patrons of the arts. Under Lorenzo, the Medici rule was formalized with the creation of a new Council of Seventy, which Lorenzo headed. The republican institutions continued, but they lost all power. Lorenzo was less successful than his illustrious forebears in business, and the Medici commercial empire was slowly eroded. Lorenzo continued the alliance with Milan, but relations with the papacy soured, and in 1478, Papal agents allied with the Pazzi family in an attempt to assassinate Lorenzo. Although the plot failed, Lorenzo 's young brother, Giuliano, was killed, and the failed assassination led to a war with the Papacy and was used as justification to further centralize power in Lorenzo 's hands.
Renaissance ideals first spread from Florence to the neighbouring states of Tuscany such as Siena and Lucca. The Tuscan culture soon became the model for all the states of Northern Italy, and the Tuscan variety of Italian came to predominate throughout the region, especially in literature. In 1447 Francesco Sforza came to power in Milan and rapidly transformed that still medieval city into a major centre of art and learning that drew Leone Battista Alberti. Venice, one of the wealthiest cities due to its control of the Adriatic Sea, also became a centre for Renaissance culture, especially architecture. Smaller courts brought Renaissance patronage to lesser cities, which developed their characteristic arts: Ferrara, Mantua under the Gonzaga, and Urbino under Federico da Montefeltro. In Naples, the Renaissance was ushered in under the patronage of Alfonso I who conquered Naples in 1443 and encouraged artists like Francesco Laurana and Antonello da Messina and writers like the poet Jacopo Sannazaro and the humanist scholar Angelo Poliziano.
In 1417 the Papacy returned to Rome, but that once imperial city remained poor and largely in ruins through the first years of the Renaissance. The great transformation began under Pope Nicholas V, who became pontiff in 1447. He launched a dramatic rebuilding effort that would eventually see much of the city renewed. The humanist scholar Aeneas Silvius Piccolomini became Pope Pius II in 1458. As the papacy fell under the control of wealthy families such as the Medici and the Borgias, the spirit of Renaissance art and philosophy came to dominate the Vatican. Pope Sixtus IV continued Nicholas ' work, most famously ordering the construction of the Sistine Chapel. The popes also became increasingly secular rulers as the Papal States were forged into a centralized power by a series of "warrior popes ''.
The nature of the Renaissance also changed in the late 15th century. The Renaissance ideal was fully adopted by the ruling classes and the aristocracy. In the early Renaissance artists were seen as craftsmen with little prestige or recognition. By the later Renaissance the top figures wielded great influence and could charge great fees. A flourishing trade in Renaissance art developed. While in the early Renaissance many of the leading artists were of lower - or middle - class origins, increasingly they became aristocrats.
As a cultural movement, the Italian Renaissance affected only a small part of the population. Italy was the most urbanized region of Europe, but three quarters of the people were still rural peasants. For this section of the population, life remained essentially unchanged from the Middle Ages. Classic feudalism had never been prominent in Northern Italy, and most peasants worked on private farms or as sharecroppers. Some scholars see a trend towards refeudalization in the later Renaissance as the urban elites turned themselves into landed aristocrats.
The situation differed in the cities. These were dominated by a commercial elite, as exclusive as the aristocracy of any Medieval kingdom. This group became the main patrons of and audience for Renaissance culture. Below them there was a large class of artisans and guild members who lived comfortable lives and had significant power in the republican governments. This was in sharp contrast to the rest of Europe, where artisans were firmly in the lower class. Literate and educated, this group did participate in Renaissance culture. The largest section of the urban population was the urban poor of semi-skilled workers and the unemployed. As with the peasants, the Renaissance had little effect on them. Historians debate how easy it was to move between these groups during the Italian Renaissance. Examples of individuals who rose from humble beginnings can be instanced, but Burke notes two major studies in this area that have found that the data do not clearly demonstrate an increase in social mobility. Most historians feel that early in the Renaissance social mobility was quite high, but that it faded over the course of the 15th century. Inequality in society was very high. An upper - class figure would control hundreds of times more income than a servant or labourer. Some historians see this unequal distribution of wealth as important to the Renaissance, as art patronage relies on the very wealthy.
The Renaissance was not a period of great social or economic change, only of cultural and ideological development. It only touched a small fraction of the population, and in modern times this has led many historians, such as any that follow historical materialism, to reduce the importance of the Renaissance in human history. These historians tend to think in terms of "Early Modern Europe '' instead. Roger Osborne argues that "The Renaissance is a difficult concept for historians because the history of Europe quite suddenly turns into a history of Italian painting, sculpture and architecture. ''
The end of the Renaissance is as imprecisely marked as its starting point. For many, the rise to power in Florence of the austere monk Girolamo Savonarola in 1494 -- 1498 marks the end of the city 's flourishing; for others, the triumphant return of the Medici marks the beginning of the late phase in the arts called Mannerism. Other accounts trace the end of the Italian Renaissance to the French invasions of the early 16th century and the subsequent conflict between France and Spanish rulers for control of Italian territory. Savonarola rode to power on a widespread backlash against the secularism and indulgence of the Renaissance -- his brief rule saw many works of art destroyed in the "Bonfire of the Vanities '' in the centre of Florence. With the Medici returned to power, now as Grand Dukes of Tuscany, the counter movement in the church continued. In 1542 the Sacred Congregation of the Inquisition was formed, and a few years later the Index Librorum Prohibitorum banned a wide array of Renaissance works of literature. This period also marks the end of the tradition of the illuminated manuscript: Giulio Clovio, considered the greatest illuminator of the Italian High Renaissance, was arguably the last very notable artist in that long tradition before some modern revivals.
Equally important was the end of stability with a series of foreign invasions of Italy known as the Italian Wars that would continue for several decades. These began with the 1494 invasion by France that wreaked widespread devastation on Northern Italy and ended the independence of many of the city - states. Most damaging was the May 6, 1527, Spanish and German troops ' sacking Rome that for two decades all but ended the role of the Papacy as the largest patron of Renaissance art and architecture.
While the Italian Renaissance was fading, the Northern Renaissance adopted many of its ideals and transformed its styles. A number of Italy 's greatest artists chose to emigrate. The most notable example was Leonardo da Vinci who left for France in 1516, but teams of lesser artists invited to transform the Château de Fontainebleau created the school of Fontainebleau that infused the style of the Italian Renaissance in France. From Fontainebleau, the new styles, transformed by Mannerism, brought the Renaissance to Antwerp and thence throughout Northern Europe.
This spread north was also representative of a larger trend. No longer was the Mediterranean Europe 's most important trade route. In 1498, Vasco da Gama reached India, and from that date the primary route of goods from the Orient was through the Atlantic ports of Lisbon, Seville, Nantes, Bristol, and London.
The thirteenth - century Italian literary revolution helped set the stage for the Renaissance. Prior to the Renaissance, the Italian language was not the literary language in Italy. It was only in the 13th century that Italian authors began writing in their native language rather than Latin, French, or Provençal. The 1250s saw a major change in Italian poetry as the Dolce Stil Novo (Sweet New Style, which emphasized Platonic rather than courtly love) came into its own, pioneered by poets like Guittone d'Arezzo and Guido Guinizelli. Especially in poetry, major changes in Italian literature had been taking place decades before the Renaissance truly began.
With the printing of books initiated in Venice by Aldus Manutius, an increasing number of works began to be published in the Italian language in addition to the flood of Latin and Greek texts that constituted the mainstream of the Italian Renaissance. The source for these works expanded beyond works of theology and towards the pre-Christian eras of Imperial Rome and Ancient Greece. This is not to say that no religious works were published in this period: Dante Alighieri 's The Divine Comedy reflects a distinctly medieval world view. Christianity remained a major influence for artists and authors, with the classics coming into their own as a second primary influence.
In the early Italian Renaissance, much of the focus was on translating and studying classic works from Latin and Greek. Renaissance authors were not content to rest on the laurels of ancient authors, however. Many authors attempted to integrate the methods and styles of the ancient Greeks into their own works. Among the most emulated Romans are Cicero, Horace, Sallust, and Virgil. Among the Greeks, Aristotle, Homer, and Plato were now being read in the original for the first time since the 4th century, though Greek compositions were few.
The literature and poetry of the Renaissance was largely influenced by the developing science and philosophy. The humanist Francesco Petrarch, a key figure in the renewed sense of scholarship, was also an accomplished poet, publishing several important works of poetry. He wrote poetry in Latin, notably the Punic War epic Africa, but is today remembered for his works in the Italian vernacular, especially the Canzoniere, a collection of love sonnets dedicated to his unrequited love Laura. He was the foremost writer of sonnets in Italian, and translations of his work into English by Thomas Wyatt established the sonnet form in that country, where it was employed by William Shakespeare and countless other poets.
Petrarch 's disciple, Giovanni Boccaccio, became a major author in his own right. His major work was the Decameron, a collection of 100 stories told over ten nights by ten storytellers who have fled to the outskirts of Florence to escape the black plague. The Decameron in particular and Boccaccio 's work in general were a major source of inspiration and plots for many English authors in the Renaissance, including Geoffrey Chaucer and William Shakespeare.
Aside from Christianity, classical antiquity, and scholarship, a fourth influence on Renaissance literature was politics. The political philosopher Niccolò Machiavelli 's most famous works are Discourses on Livy, Florentine Histories and finally The Prince, which has become so well known in Western society that the term "Machiavellian '' has come to refer to the realpolitik advocated by the book. However, what is ordinarily called "Machiavellianism '' is a simplified textbook view of this single work rather than an accurate term for his philosophy. Further, it is not at all clear that Machiavelli himself was the apologist for immorality that he is often portrayed as being: the basic problem is the apparent contradiction between the monarchism of The Prince and the republicanism of the Discourses. Regardless, along with many other Renaissance works, The Prince remains a relevant and influential work of literature today.
One role of Petrarch is as the founder of a new method of scholarship, Renaissance Humanism. Humanism was an optimistic philosophy that saw man as a rational and sentient being, with the ability to decide and think for himself, and saw man as inherently good by nature, which was in tension with the Christian view of man as the original sinner needing redemption. It provoked fresh insight into the nature of reality, questioning beyond God and spirituality, and provided for knowledge about history beyond Christian history.
Petrarch encouraged the study of the Latin classics and carried his copy of Homer about, at a loss to find someone to teach him to read Greek. An essential step in the humanist education being propounded by scholars like Pico della Mirandola was the hunting down of lost or forgotten manuscripts that were known only by reputation. These endeavors were greatly aided by the wealth of Italian patricians, merchant - princes and despots, who would spend substantial sums building libraries. Discovering the past had become fashionable and it was a passionate affair pervading the upper reaches of society. "I go, '' said Cyriac of Ancona, "I go to awake the dead. '' As the Greek works were acquired, manuscripts found, and libraries and museums formed, the age of the printing press was dawning. The works of Antiquity were translated from Greek and Latin into the contemporary modern languages throughout Europe, finding a receptive middle - class audience, which might be, like Shakespeare, "with little Latin and less Greek ''.
While concern for philosophy, art and literature all increased greatly in the Renaissance, the period is usually seen as one of scientific backwardness. The reverence for classical sources further enshrined the Aristotelian and Ptolemaic views of the universe. Under the influence of humanism, nature came to be viewed as an animate spiritual creation that was not governed by laws or mathematics. At the same time, philosophy lost much of its rigour as the rules of logic and deduction were seen as secondary to intuition and emotion.
According to some recent scholarship, the ' father of modern science ' is Leonardo da Vinci, whose experiments and clear scientific method earn him this title. Italian universities such as Padua, Bologna and Pisa were scientific centres of renown, and, with many northern European students, the science of the Renaissance moved to Northern Europe and flourished there, with such figures as Copernicus, Francis Bacon, and Descartes. Galileo, a contemporary of Bacon and Descartes, made an immense contribution to scientific thought and experimentation, paving the way for the scientific revolution that later flourished in Northern Europe. Bodies were also stolen from gallows and examined by anatomists such as Vesalius, a professor of anatomy. This allowed them to create accurate skeleton models and correct previously held theories. For example, many had thought that the human jawbone was made up of two bones, as they had seen this in animals; through examining human corpses, however, anatomists came to understand that humans actually have only one.
In painting, the false dawn of Giotto 's Trecento realism, his fully three - dimensional figures occupying a rational space, and his humanist interest in expressing the individual personality rather than the iconic images, was followed by a retreat into conservative late Gothic conventions.
The Italian Renaissance in painting began anew, in Florence and Tuscany, with the frescoes of Masaccio, then the panel paintings and frescos of Piero della Francesca and Paolo Uccello, who began to enhance the realism of their work by using new techniques in perspective, thus representing three dimensions in two - dimensional art more authentically. Piero della Francesca wrote treatises on scientific perspective. The creation of credible space allowed artists to also focus on the accurate representation of the human body and on naturalistic landscapes. Masaccio 's figures have a plasticity unknown up to that point in time. Compared to the flatness of Gothic painting, his pictures were revolutionary. Mantegna 's San Zeno Altarpiece of around 1459 was probably the first good example of Renaissance painting in Northern Italy and became a model for all of Verona 's painters, for example Girolamo dai Libri. At the turn of the 16th century, especially in Northern Italy, artists also began to use new techniques in the manipulation of light and darkness, such as the tone contrast evident in many of Titian 's portraits and the development of sfumato and chiaroscuro by Leonardo da Vinci and Giorgione. The period also saw the first secular (non-religious) themes. There has been much debate as to the degree of secularism in the Renaissance, which had been emphasized by early 20th - century writers like Jacob Burckhardt, based on, among other things, the presence of a relatively small number of mythological paintings. Those of Botticelli, notably The Birth of Venus and Primavera, are now among the best known, although he was deeply religious (becoming a follower of Savonarola) and the great majority of his output was of traditional religious paintings or portraits.
In sculpture, Donatello 's (1386 -- 1466) study of classical sculpture led to his development of classicizing positions (such as the contrapposto pose) and subject matter (like the unsupported nude -- his second sculpture of David was the first free - standing bronze nude created in Europe since the Roman Empire.) The progress made by Donatello was influential on all who followed; perhaps the greatest of whom is Michelangelo, whose David of 1500 is also a male nude study; more naturalistic than Donatello 's and with greater emotional intensity. Both sculptures are standing in contrapposto, their weight shifted to one leg.
The period known as the High Renaissance represents the culmination of the goals of the earlier period, namely the accurate representation of figures in space rendered with credible motion and in an appropriately decorous style. The most famous painters from this phase are Leonardo da Vinci, Raphael, and Michelangelo. Their images are among the most widely known works of art in the world. Leonardo 's Last Supper, Raphael 's The School of Athens and Michelangelo 's Sistine Chapel Ceiling are the masterpieces of the period.
High Renaissance painting evolved into Mannerism, especially in Florence. Mannerist artists, who consciously rebelled against the principles of the High Renaissance, tended to represent elongated figures in illogical spaces. Modern scholarship has recognized the capacity of Mannerist art to convey strong (often religious) emotion where the High Renaissance failed to do so. Some of the main artists of this period are Pontormo, Bronzino, Rosso Fiorentino, Parmigianino and Raphael 's pupil Giulio Romano.
The Renaissance style was introduced to architecture with a revolutionary but incomplete monument in Rimini by Leone Battista Alberti. In Florence, some of the earliest buildings showing Renaissance characteristics are Filippo Brunelleschi 's church of San Lorenzo and the Pazzi Chapel. The interior of Santo Spirito expresses a new sense of light, clarity and spaciousness, which is typical of the early Italian Renaissance. Its architecture reflects the philosophy of Humanism, the enlightenment and clarity of mind as opposed to the darkness and spirituality of the Middle Ages. The revival of classical antiquity can best be illustrated by the Palazzo Rucellai. Here the pilasters follow the superposition of classical orders, with Doric capitals on the ground floor, Ionic capitals on the piano nobile and Corinthian capitals on the uppermost floor.
In Mantua, Leone Battista Alberti ushered in the new antique style, though his culminating work, Sant'Andrea, was not begun until 1472, after the architect 's death.
The High Renaissance, as we call the style today, was introduced to Rome with Donato Bramante 's Tempietto at San Pietro in Montorio (1502) and his original centrally planned St. Peter 's Basilica (1506), which was the most notable architectural commission of the era, influenced by almost all notable Renaissance artists, including Michelangelo and Giacomo della Porta. The beginning of the late Renaissance in 1550 was marked by the development of a new column order by Andrea Palladio. Colossal columns that were two or more stories tall decorated the facades.
In Italy during the 14th century there was an explosion of musical activity that corresponded in scope and level of innovation to the activity in the other arts. Although musicologists typically group the music of the Trecento (music of the 14th century) with the late medieval period, it included features which align with the early Renaissance in important ways: an increasing emphasis on secular sources, styles and forms; a spreading of culture away from ecclesiastical institutions to the nobility, and even to the common people; and a quick development of entirely new techniques. The principal forms were the Trecento madrigal, the caccia, and the ballata. Overall, the musical style of the period is sometimes labelled as the "Italian ars nova. '' From the early 15th century to the middle of the 16th century, the center of innovation in sacred music was in the Low Countries, and a flood of talented composers came to Italy from this region. Many of them sang in either the papal choir in Rome or the choirs at the numerous chapels of the aristocracy, in Rome, Venice, Florence, Milan, Ferrara and elsewhere; and they brought their polyphonic style with them, influencing many native Italian composers during their stay.
The predominant forms of church music during the period were the mass and the motet. By far the most famous composer of church music in 16th century Italy was Palestrina, the most prominent member of the Roman School, whose style of smooth, emotionally cool polyphony was to become the defining sound of the late 16th century, at least for generations of 19th - and 20th century musicologists. Other Italian composers of the late 16th century focused on composing the main secular form of the era, the madrigal: and for almost a hundred years these secular songs for multiple singers were distributed all over Europe. Composers of madrigals included Jacques Arcadelt, at the beginning of the age, Cipriano de Rore, in the middle of the century, and Luca Marenzio, Philippe de Monte, Carlo Gesualdo, and Claudio Monteverdi at the end of the era. Italy was also a centre of innovation in instrumental music. By the early 16th century keyboard improvisation came to be greatly valued, and numerous composers of virtuoso keyboard music appeared. Many familiar instruments were invented and perfected in late Renaissance Italy, such as the violin, the earliest forms of which came into use in the 1550s.
By the late 16th century Italy was the musical centre of Europe. Almost all of the innovations which were to define the transition to the Baroque period originated in northern Italy in the last few decades of the century. In Venice, the polychoral productions of the Venetian School, and associated instrumental music, moved north into Germany; in Florence, the Florentine Camerata developed monody, the important precursor to opera, which itself first appeared around 1600; and the avant - garde, manneristic style of the Ferrara school, which migrated to Naples and elsewhere through the music of Carlo Gesualdo, was to be the final statement of the polyphonic vocal music of the Renaissance.
|
who wrote pack up your troubles in your old kit bag | Pack up your Troubles in your old kit - bag - wikipedia
"Pack Up Your Troubles in Your Old Kit - Bag, and Smile, Smile, Smile '' is the full name of a World War I marching song, published in 1915 in London. It was written by Welsh songwriter George Henry Powell under the pseudonym of "George Asaf '', and set to music by his brother Felix Powell.
It was featured in the American show Her Soldier Boy, which opened in December 1916.
Performers associated with this song include Edward Hamilton, the Victor Military Band, James F. Harrison, Murray Johnson, Reinald Werrenrath, and the Knickerbocker Quartet.
A later play presented by the National Theatre recounts how the Powell brothers, both music hall performers, rescued the song from their rejects pile and re-scored it to win a wartime competition for a marching song. It became very popular, boosting British morale despite the horrors of that war. It was one of a large number of music hall songs aimed at maintaining morale, recruiting for the forces, or defending Britain 's war aims. Another of these songs, It 's a Long Way to Tipperary, was so similar in musical structure that the two were sometimes sung side by side.
Snoopy on several occasions listens to the song when he fantasizes that he is a WWI flying ace. In one annual special, Schroeder plays a series of WWI songs on his piano, one of which is "Pack Up Your Troubles in Your Old Kit - Bag ''. In multiple comic strips Snoopy can also be seen on his doghouse, singing "It 's a Long, Long Way to Tipperary '', "Pack Up Your Troubles in Your Old Kit - Bag '' and "Over There '', concluding, "We World War I flying aces are very sentimental. '' In another strip, he questions how one can pack up one 's troubles in a kit - bag.
The song is best remembered for its chorus. Dutch, Spanish, and German versions of the song also exist.
The Norwegian translation "Legg dine sørger i en gammel sekk '' (possibly 1916) and the Swedish "Lägg dina sorger i en gammal säck '' (1917) were written by Karl - Ewert Christenson (1888 -- 1965) and recorded by singer Ernst Rolf.
Florrie Forde performed it throughout the United Kingdom in 1916.
Other performers associated with this song include Helen Clark, Reinald Werrenrath, and Oscar Seagle.
Cilla Black performed the song as a comedy / singing sketch on her variety television series Surprise Surprise.
The original version was sampled in and inspired the song "Pack Up '' by English musician Eliza Doolittle.
The song appears in several movies, including Pack Up Your Troubles (1932) with Laurel & Hardy, High Pressure (1932), and The Shopworn Angel (1938). It is also featured in For Me and My Gal (1942) starring Judy Garland and Gene Kelly.
The song also featured briefly in the 1979 film All That Jazz, sung between Joe Gideon (Roy Scheider) and a hospital orderly.
|
little mix summer hits tour 2018 support act | Summer Hits Tour 2018 - wikipedia
The Summer Hits Tour 2018 is the fifth concert tour by British girl group Little Mix. The tour was announced on 27 November 2017, with ticket sales starting on 30 November. It is the group 's first stadium tour. It was announced that X Factor winners Rak - Su and Germein would support the group on the tour.
This set list is from the show on 6 July 2018 in Hove. It is not intended to represent all concerts for the tour.
|
what was the main purpose for establishing the united nations in 1945 | United Nations - wikipedia
The United Nations (UN) is an intergovernmental organization tasked with promoting international co-operation and creating and maintaining international order. A replacement for the ineffective League of Nations, the organization was established on 24 October 1945 after World War II in order to prevent another such conflict. At its founding, the UN had 51 member states; there are now 193. The headquarters of the UN is in Manhattan, New York City, and is subject to extraterritoriality. Further main offices are situated in Geneva, Nairobi, and Vienna. The organization is financed by assessed and voluntary contributions from its member states. Its objectives include maintaining international peace and security, promoting human rights, fostering social and economic development, protecting the environment, and providing humanitarian aid in cases of famine, natural disaster, and armed conflict. The UN is the largest, most familiar, most internationally represented and most powerful intergovernmental organization in the world.
The UN Charter was drafted at a conference between April -- June 1945 in San Francisco, and was signed on 26 June 1945 at the conclusion of the conference; this charter took effect 24 October 1945, and the UN began operation. The UN 's mission to preserve world peace was complicated in its early decades by the Cold War between the US and Soviet Union and their respective allies. The organization participated in major actions in Korea and the Congo, as well as approving the creation of the state of Israel in 1947. The organization 's membership grew significantly following widespread decolonization in the 1960s, and by the 1970s its budget for economic and social development programmes far outstripped its spending on peacekeeping. After the end of the Cold War, the UN took on major military and peacekeeping missions across the world with varying degrees of success.
The UN has six principal organs: the General Assembly (the main deliberative assembly); the Security Council (for deciding certain resolutions for peace and security); the Economic and Social Council (ECOSOC; for promoting international economic and social co-operation and development); the Secretariat (for providing studies, information, and facilities needed by the UN); the International Court of Justice (the primary judicial organ); and the UN Trusteeship Council (inactive since 1994). UN System agencies include the World Bank Group, the World Health Organization, the World Food Programme, UNESCO, and UNICEF. The UN 's most prominent officer is the Secretary - General, an office held by Portuguese António Guterres since 2017. Non-governmental organizations may be granted consultative status with ECOSOC and other agencies to participate in the UN 's work.
The organization won the Nobel Peace Prize in 2001, and a number of its officers and agencies have also been awarded the prize. Other evaluations of the UN 's effectiveness have been mixed. Some commentators believe the organization to be an important force for peace and human development, while others have called the organization ineffective, corrupt, or biased.
In the century prior to the UN 's creation, several international treaty organizations and conferences had been formed to regulate conflicts between nations, such as the International Committee of the Red Cross and the Hague Conventions of 1899 and 1907. Following the catastrophic loss of life in the First World War, the Paris Peace Conference established the League of Nations to maintain harmony between countries. This organization resolved some territorial disputes and created international structures for areas such as postal mail, aviation, and opium control, some of which would later be absorbed into the UN. However, the League lacked representation for colonial peoples (then half the world 's population) and significant participation from several major powers, including the US, USSR, Germany, and Japan; it failed to act against the Japanese invasion of Manchuria in 1931, the Second Italo - Ethiopian War in 1935, the Japanese invasion of China in 1937, and German expansions under Adolf Hitler that culminated in the Second World War.
The earliest concrete plan for a new world organization began under the aegis of the US State Department in 1939. The text of the "Declaration by United Nations '' was drafted by President Franklin Roosevelt, British Prime Minister Winston Churchill, and Roosevelt aide Harry Hopkins, while meeting at the White House, 29 December 1941. It incorporated Soviet suggestions, but left no role for France. "Four Policemen '' was coined to refer to four major Allied countries, United States, United Kingdom, Soviet Union, and China, which emerged in the Declaration by United Nations. Roosevelt first coined the term United Nations to describe the Allied countries. "On New Year 's Day 1942, President Roosevelt, Prime Minister Churchill, Maxim Litvinov, of the USSR, and T.V. Soong, of China, signed a short document which later came to be known as the United Nations Declaration and the next day the representatives of twenty - two other nations added their signatures. '' The term United Nations was first officially used when 26 governments signed this Declaration. One major change from the Atlantic Charter was the addition of a provision for religious freedom, which Stalin approved after Roosevelt insisted. By 1 March 1945, 21 additional states had signed.
A JOINT DECLARATION BY THE UNITED STATES OF AMERICA, THE UNITED KINGDOM OF GREAT BRITAIN AND NORTHERN IRELAND, THE UNION OF SOVIET SOCIALIST REPUBLICS, CHINA, AUSTRALIA, BELGIUM, CANADA, COSTA RICA, CUBA, CZECHOSLOVAKIA, DOMINICAN REPUBLIC, EL SALVADOR, GREECE, GUATEMALA, HAITI, HONDURAS, INDIA, LUXEMBOURG, NETHERLANDS, NEW ZEALAND, NICARAGUA, NORWAY, PANAMA, POLAND, SOUTH AFRICA, YUGOSLAVIA
The Governments signatory hereto,
Having subscribed to a common program of purposes and principles embodied in the Joint Declaration of the President of the United States of America and the Prime Minister of Great Britain dated August 14, 1941, known as the Atlantic Charter,
Being convinced that complete victory over their enemies is essential to defend life, liberty, independence and religious freedom, and to preserve human rights and justice in their own lands as well as in other lands, and that they are now engaged in a common struggle against savage and brutal forces seeking to subjugate the world,
DECLARE:
The foregoing declaration may be adhered to by other nations which are, or which may be, rendering material assistance and contributions in the struggle for victory over Hitlerism.
During the war, "the United Nations '' became the official term for the Allies. To join, countries had to sign the Declaration and declare war on the Axis.
The UN was formulated and negotiated among the delegations from the Allied Big Four (the Soviet Union, the UK, the US, and China) at the Dumbarton Oaks Conference in 1944. After months of planning, the UN Conference on International Organization opened in San Francisco, 25 April 1945, attended by 50 governments and a number of non-governmental organizations involved in drafting the UN Charter. "The heads of the delegations of the sponsoring countries took turns as chairman of the plenary meetings: Anthony Eden, of Britain, Edward Stettinius, of the United States, T.V. Soong, of China, and Vyacheslav Molotov, of the Soviet Union. At the later meetings, Lord Halifax deputized for Mr. Eden, Wellington Koo for T.V. Soong, and Mr Gromyko for Mr. Molotov. '' The UN officially came into existence 24 October 1945, upon ratification of the Charter by the five permanent members of the Security Council -- France, the Republic of China, the Soviet Union, the UK and the US -- and by a majority of the other 46 signatories.
The first meetings of the General Assembly, with 51 nations represented, and the Security Council took place in London beginning 6 January 1946. The General Assembly selected New York City as the site for the headquarters of the UN, and the facility was completed in 1952. Its site -- like UN headquarters buildings in Geneva, Vienna, and Nairobi -- is designated as international territory. The Norwegian Foreign Minister, Trygve Lie, was elected as the first UN Secretary - General.
Though the UN 's primary mandate was peacekeeping, the division between the US and USSR often paralysed the organization, generally allowing it to intervene only in conflicts distant from the Cold War. (A notable exception was a Security Council resolution in 1950 authorizing a US - led coalition to repel the North Korean invasion of South Korea, passed in the absence of the USSR.) In 1947, the General Assembly approved a resolution to partition Palestine, approving the creation of the state of Israel. Two years later, Ralph Bunche, a UN official, negotiated an armistice to the resulting conflict. In 1956, the first UN peacekeeping force was established to end the Suez Crisis; however, the UN was unable to intervene against the USSR 's simultaneous invasion of Hungary following that country 's revolution.
In 1960, the UN deployed United Nations Operation in the Congo (UNOC), the largest military force of its early decades, to bring order to the breakaway State of Katanga, restoring it to the control of the Democratic Republic of the Congo by 1964. While travelling to meet with rebel leader Moise Tshombe during the conflict, Dag Hammarskjöld, often named as one of the UN 's most effective Secretaries - General, died in a plane crash; months later he was posthumously awarded the Nobel Peace Prize. In 1964, Hammarskjöld 's successor, U Thant, deployed the UN Peacekeeping Force in Cyprus, which would become one of the UN 's longest - running peacekeeping missions.
With the spread of decolonization in the 1960s, the organization 's membership saw an influx of newly independent nations. In 1960 alone, 17 new states joined the UN, 16 of them from Africa. On 25 October 1971, with opposition from the United States, but with the support of many Third World nations, the mainland, communist People 's Republic of China was given the Chinese seat on the Security Council in place of the Republic of China, whose government had withdrawn to Taiwan; the vote was widely seen as a sign of waning US influence in the organization. Third World nations organized into the Group of 77 coalition under the leadership of Algeria, which briefly became a dominant power at the UN. In 1975, a bloc comprising the USSR and Third World nations passed a resolution, over strenuous US and Israeli opposition, declaring Zionism to be racism; the resolution was repealed in 1991, shortly after the end of the Cold War.
With an increasing Third World presence and the failure of UN mediation in conflicts in the Middle East, Vietnam, and Kashmir, the UN increasingly shifted its attention to its ostensibly secondary goals of economic development and cultural exchange. By the 1970s, the UN budget for social and economic development was far greater than its peacekeeping budget.
After the Cold War, the UN saw a radical expansion in its peacekeeping duties, taking on more missions in ten years than it had in the previous four decades. Between 1988 and 2000, the number of adopted Security Council resolutions more than doubled, and the peacekeeping budget increased more than tenfold. The UN negotiated an end to the Salvadoran Civil War, launched a successful peacekeeping mission in Namibia, and oversaw democratic elections in post-apartheid South Africa and post-Khmer Rouge Cambodia. In 1991, the UN authorized a US - led coalition that repulsed the Iraqi invasion of Kuwait. Brian Urquhart, Under - Secretary - General from 1971 to 1985, later described the hopes raised by these successes as a "false renaissance '' for the organization, given the more troubled missions that followed.
Though the UN Charter had been written primarily to prevent aggression by one nation against another, in the early 1990s the UN faced a number of simultaneous, serious crises within nations such as Somalia, Haiti, Mozambique, and the former Yugoslavia. The UN mission in Somalia was widely viewed as a failure after the US withdrawal following casualties in the Battle of Mogadishu, and the UN mission to Bosnia faced "worldwide ridicule '' for its indecisive and confused mission in the face of ethnic cleansing. In 1994, the UN Assistance Mission for Rwanda failed to intervene in the Rwandan genocide amid indecision in the Security Council.
Beginning in the last decades of the Cold War, American and European critics of the UN condemned the organization for perceived mismanagement and corruption. In 1984, the US President, Ronald Reagan, withdrew his nation 's funding from UNESCO (the United Nations Educational, Scientific and Cultural Organization, founded 1946) over allegations of mismanagement, followed by Britain and Singapore. Boutros Boutros - Ghali, Secretary - General from 1992 to 1996, initiated a reform of the Secretariat, reducing the size of the organization somewhat. His successor, Kofi Annan (1997 -- 2006), initiated further management reforms in the face of threats from the United States to withhold its UN dues.
In the late 1990s and 2000s, international interventions authorized by the UN took a wider variety of forms. The UN mission in the Sierra Leone Civil War of 1991 -- 2002 was supplemented by British Royal Marines, and the invasion of Afghanistan in 2001 was overseen by NATO. In 2003, the United States invaded Iraq despite failing to pass a UN Security Council resolution for authorization, prompting a new round of questioning of the organization 's effectiveness. Under the eighth Secretary - General, Ban Ki - moon, the UN intervened with peacekeepers in crises including the War in Darfur in Sudan and the Kivu conflict in the Democratic Republic of Congo and sent observers and chemical weapons inspectors to the Syrian Civil War. In 2013, an internal review of UN actions in the final battles of the Sri Lankan Civil War in 2009 concluded that the organization had suffered "systemic failure ''. One hundred and one UN personnel died in the 2010 Haiti earthquake, the worst loss of life in the organization 's history.
The Millennium Summit was held in 2000 to discuss the UN 's role in the 21st century. The three day meeting was the largest gathering of world leaders in history, and culminated in the adoption by all member states of the Millennium Development Goals (MDGs), a commitment to achieve international development in areas such as poverty reduction, gender equality, and public health. Progress towards these goals, which were to be met by 2015, was ultimately uneven. The 2005 World Summit reaffirmed the UN 's focus on promoting development, peacekeeping, human rights, and global security. The Sustainable Development Goals were launched in 2015 to succeed the Millennium Development Goals.
In addition to addressing global challenges, the UN has sought to improve its accountability and democratic legitimacy by engaging more with civil society and fostering a global constituency. In an effort to enhance transparency, in 2016 the organization held its first public debate between candidates for Secretary - General. On January 1, 2017, Portuguese diplomat António Guterres, who previously served as UN High Commissioner for Refugees, became the ninth Secretary - General. Guterres has highlighted several key goals for his administration, including an emphasis on diplomacy for preventing conflicts, more effective peacekeeping efforts, and streamlining the organization to be more responsive and versatile to global needs.
The UN system is based on five principal organs: the General Assembly, the Security Council, the Economic and Social Council (ECOSOC), the Secretariat, and the International Court of Justice. A sixth principal organ, the Trusteeship Council, suspended operations in 1994, upon the independence of Palau, the last remaining UN trustee territory.
Four of the five principal organs are located at the main UN Headquarters in New York City. The International Court of Justice is located in The Hague, while other major agencies are based in the UN offices at Geneva, Vienna, and Nairobi. Other UN institutions are located throughout the world. The six official languages of the UN, used in intergovernmental meetings and documents, are Arabic, Chinese, English, French, Russian, and Spanish. On the basis of the Convention on the Privileges and Immunities of the United Nations, the UN and its agencies are immune from the laws of the countries where they operate, safeguarding the UN 's impartiality with regard to the host and member countries.
Below the six organs sit, in the words of the author Linda Fasulo, "an amazing collection of entities and organizations, some of which are actually older than the UN itself and operate with almost complete independence from it ''. These include specialized agencies, research and training institutions, programmes and funds, and other UN entities.
The UN obeys the Noblemaire principle, which is binding on any organization that belongs to the UN system. This principle calls for salaries high enough to draw and keep citizens of the countries where salaries are highest, and for equal pay for work of equal value regardless of the employee 's nationality. Staff salaries are subject to an internal tax that is administered by the UN organizations.
The General Assembly is the main deliberative assembly of the UN. Composed of all UN member states, the assembly meets in regular yearly sessions, but emergency sessions can also be called. The assembly is led by a president, elected from among the member states on a rotating regional basis, and 21 vice-presidents. The first session convened 10 January 1946 in the Methodist Central Hall in London and included representatives of 51 nations.
When the General Assembly votes on important questions, a two - thirds majority of those present and voting is required. Examples of important questions include recommendations on peace and security; election of members to organs; admission, suspension, and expulsion of members; and budgetary matters. All other questions are decided by a majority vote. Each member country has one vote. Apart from approval of budgetary matters, resolutions are not binding on the members. The Assembly may make recommendations on any matters within the scope of the UN, except matters of peace and security that are under consideration by the Security Council.
Draft resolutions can be forwarded to the General Assembly by eight committees:
The Security Council is charged with maintaining peace and security among countries. While other organs of the UN can only make "recommendations '' to member states, the Security Council has the power to make binding decisions that member states have agreed to carry out, under the terms of Charter Article 25. The decisions of the Council are known as United Nations Security Council resolutions.
The Security Council is made up of fifteen member states, consisting of five permanent members -- China, France, Russia, the United Kingdom, and the United States -- and ten non-permanent members elected for two - year terms by the General Assembly (with end of term date) -- Bolivia (term ends 2018), Egypt (2017), Ethiopia (2018), Italy (2018), Japan (2017), Kazakhstan (2018), Senegal (2017), Sweden (2018), Ukraine (2017), Uruguay (2017). The five permanent members hold veto power over UN resolutions, allowing a permanent member to block adoption of a resolution, though not debate. The ten temporary seats are held for two - year terms, with five member states per year voted in by the General Assembly on a regional basis. The presidency of the Security Council rotates alphabetically each month.
The UN Secretariat is headed by the Secretary - General, assisted by a staff of international civil servants worldwide. It provides studies, information, and facilities needed by UN bodies for their meetings. It also carries out tasks as directed by the Security Council, the General Assembly, the Economic and Social Council, and other UN bodies.
The Secretary - General acts as the de facto spokesperson and leader of the UN. The position is defined in the UN Charter as the organization 's "chief administrative officer ''. Article 99 of the charter states that the Secretary - General can bring to the Security Council 's attention "any matter which in his opinion may threaten the maintenance of international peace and security '', a phrase that Secretaries - General since Trygve Lie have interpreted as giving the position broad scope for action on the world stage. The office has evolved into a dual role of an administrator of the UN organization and a diplomat and mediator addressing disputes between member states and finding consensus to global issues.
The Secretary - General is appointed by the General Assembly, after being recommended by the Security Council, where the permanent members have veto power. There are no specific criteria for the post, but over the years it has become accepted that the post shall be held for one or two terms of five years, that the post shall be appointed on the basis of geographical rotation, and that the Secretary - General shall not originate from one of the five permanent Security Council member states. The current Secretary - General is António Guterres, who replaced Ban Ki - moon in 2017.
The International Court of Justice (ICJ), located in The Hague, in the Netherlands, is the primary judicial organ of the UN. Established in 1945 by the UN Charter, the Court began work in 1946 as the successor to the Permanent Court of International Justice. The ICJ is composed of 15 judges who serve 9-year terms and are elected by the General Assembly and the Security Council; every sitting judge must be from a different nation.
It is based in the Peace Palace in The Hague, sharing the building with the Hague Academy of International Law, a private centre for the study of international law. The ICJ 's primary purpose is to adjudicate disputes among states. The court has heard cases related to war crimes, illegal state interference, ethnic cleansing, and other issues. The ICJ can also be called upon by other UN organs to provide advisory opinions.
The Economic and Social Council (ECOSOC) assists the General Assembly in promoting international economic and social co-operation and development. ECOSOC has 54 members, which are elected by the General Assembly for a three - year term. The president is elected for a one - year term and chosen amongst the small or middle powers represented on ECOSOC. The council has one annual meeting in July, held in either New York or Geneva. Viewed as separate from the specialized bodies it co-ordinates, ECOSOC 's functions include information gathering, advising member nations, and making recommendations. Owing to its broad mandate of co-ordinating many agencies, ECOSOC has at times been criticized as unfocused or irrelevant.
ECOSOC 's subsidiary bodies include the United Nations Permanent Forum on Indigenous Issues, which advises UN agencies on issues relating to indigenous peoples; the United Nations Forum on Forests, which co-ordinates and promotes sustainable forest management; the United Nations Statistical Commission, which co-ordinates information - gathering efforts between agencies; and the Commission on Sustainable Development, which co-ordinates efforts between UN agencies and NGOs working towards sustainable development. ECOSOC may also grant consultative status to non-governmental organizations; by 2004, more than 2,200 organizations had received this status.
The UN Charter stipulates that each primary organ of the United Nations can establish various specialized agencies to fulfil its duties. Some of the best-known agencies are the International Atomic Energy Agency, the Food and Agriculture Organization, UNESCO (United Nations Educational, Scientific and Cultural Organization), the World Bank, and the World Health Organization (WHO). The UN performs most of its humanitarian work through these agencies. Examples include mass vaccination programmes (through WHO), the avoidance of famine and malnutrition (through the work of the WFP), and the protection of vulnerable and displaced people (for example, by UNHCR).
With the addition of South Sudan on 14 July 2011, there are 193 UN member states, including all undisputed independent states apart from Vatican City. The UN Charter outlines the rules for membership.
In addition, there are two non-member observer states of the United Nations General Assembly: the Holy See (which holds sovereignty over Vatican City) and the State of Palestine. The Cook Islands and Niue, both states in free association with New Zealand, are full members of several UN specialized agencies and have had their "full treaty - making capacity '' recognized by the Secretariat.
The Group of 77 at the UN is a loose coalition of developing nations, designed to promote its members' collective economic interests and create an enhanced joint negotiating capacity in the UN. Seventy-seven nations founded the organization, but by November 2013 it had expanded to 133 member countries. The group was founded 15 June 1964 by the "Joint Declaration of the Seventy-Seven Countries" issued at the United Nations Conference on Trade and Development (UNCTAD). The group held its first major meeting in Algiers in 1967, where it adopted the Charter of Algiers and established the basis for permanent institutional structures.
The UN, after approval by the Security Council, sends peacekeepers to regions where armed conflict has recently ceased or paused to enforce the terms of peace agreements and to discourage combatants from resuming hostilities. Since the UN does not maintain its own military, peacekeeping forces are voluntarily provided by member states. These soldiers are sometimes nicknamed "Blue Helmets '' for their distinctive gear. The peacekeeping force as a whole received the Nobel Peace Prize in 1988.
In September 2013, the UN had peacekeeping soldiers deployed on 15 missions. The largest was the United Nations Organization Stabilization Mission in the Democratic Republic of the Congo (MONUSCO), which included 20,688 uniformed personnel. The smallest, United Nations Military Observer Group in India and Pakistan (UNMOGIP), included 42 uniformed personnel responsible for monitoring the ceasefire in Jammu and Kashmir. UN peacekeepers with the United Nations Truce Supervision Organization (UNTSO) have been stationed in the Middle East since 1948, the longest - running active peacekeeping mission.
A study by the RAND Corporation in 2005 found the UN to be successful in two out of three peacekeeping efforts. It compared efforts at nation - building by the UN to those of the United States, and found that seven out of eight UN cases are at peace, as compared with four out of eight US cases at peace. Also in 2005, the Human Security Report documented a decline in the number of wars, genocides, and human rights abuses since the end of the Cold War, and presented evidence, albeit circumstantial, that international activism -- mostly spearheaded by the UN -- has been the main cause of the decline in armed conflict in that period. Situations in which the UN has not only acted to keep the peace but also intervened include the Korean War (1950 -- 53) and the authorization of intervention in Iraq after the Gulf War (1990 -- 91).
The UN has also drawn criticism for perceived failures. In many cases, member states have shown reluctance to achieve or enforce Security Council resolutions. Disagreements in the Security Council about military action and intervention are seen as having failed to prevent the Bangladesh genocide in 1971, the Cambodian genocide in the 1970s, and the Rwandan genocide in 1994. Similarly, UN inaction is blamed for failing to either prevent the Srebrenica massacre in 1995 or complete the peacekeeping operations in 1992 -- 93 during the Somali Civil War. UN peacekeepers have also been accused of child rape, soliciting prostitutes, and sexual abuse during various peacekeeping missions in the Democratic Republic of the Congo, Haiti, Liberia, Sudan and what is now South Sudan, Burundi, and Ivory Coast. Scientists cited UN peacekeepers from Nepal as the likely source of the 2010 -- 13 Haiti cholera outbreak, which killed more than 8,000 Haitians following the 2010 Haiti earthquake.
In addition to peacekeeping, the UN is also active in encouraging disarmament. Regulation of armaments was included in the writing of the UN Charter in 1945 and was envisioned as a way of limiting the use of human and economic resources for their creation. The advent of nuclear weapons came only weeks after the signing of the charter, resulting in the first resolution of the first General Assembly meeting calling for specific proposals for "the elimination from national armaments of atomic weapons and of all other major weapons adaptable to mass destruction ''. The UN has been involved with arms - limitation treaties, such as the Outer Space Treaty (1967), the Treaty on the Non-Proliferation of Nuclear Weapons (1968), the Seabed Arms Control Treaty (1971), the Biological Weapons Convention (1972), the Chemical Weapons Convention (1992), and the Ottawa Treaty (1997), which prohibits landmines. Three UN bodies oversee arms proliferation issues: the International Atomic Energy Agency, the Organization for the Prohibition of Chemical Weapons, and the Comprehensive Nuclear - Test - Ban Treaty Organization Preparatory Commission.
One of the UN 's primary purposes is "promoting and encouraging respect for human rights and for fundamental freedoms for all without distinction as to race, sex, language, or religion '', and member states pledge to undertake "joint and separate action '' to protect these rights.
In 1948, the General Assembly adopted a Universal Declaration of Human Rights, drafted by a committee headed by Franklin D. Roosevelt 's widow, Eleanor, and including the French lawyer René Cassin. The document proclaims basic civil, political, and economic rights common to all human beings, though its effectiveness towards achieving these ends has been disputed since its drafting. The Declaration serves as a "common standard of achievement for all peoples and all nations '' rather than a legally binding document, but it has become the basis of two binding treaties, the 1966 International Covenant on Civil and Political Rights and International Covenant on Economic, Social and Cultural Rights. In practice, the UN is unable to take significant action against human rights abuses without a Security Council resolution, though it does substantial work in investigating and reporting abuses.
In 1979, the General Assembly adopted the Convention on the Elimination of All Forms of Discrimination against Women, followed by the Convention on the Rights of the Child in 1989. With the end of the Cold War, the push for human rights action took on new impetus. The office of the United Nations High Commissioner for Human Rights was created in 1993 to oversee human rights issues for the UN, following the recommendation of that year's World Conference on Human Rights. Jacques Fomerand, a scholar of the UN, describes the office's mandate as "broad and vague", with only "meagre" resources to carry it out. In 2006, the UN Commission on Human Rights was replaced by a Human Rights Council consisting of 47 nations. Also in 2006, the General Assembly passed a Declaration on the Rights of Indigenous Peoples, and in 2011 it passed its first resolution recognizing the rights of LGBT people.
Other UN bodies responsible for women 's rights issues include United Nations Commission on the Status of Women, a commission of ECOSOC founded in 1946; the United Nations Development Fund for Women, created in 1976; and the United Nations International Research and Training Institute for the Advancement of Women, founded in 1979. The UN Permanent Forum on Indigenous Issues, one of three bodies with a mandate to oversee issues related to indigenous peoples, held its first session in 2002.
Another primary purpose of the UN is "to achieve international co-operation in solving international problems of an economic, social, cultural, or humanitarian character ''. Numerous bodies have been created to work towards this goal, primarily under the authority of the General Assembly and ECOSOC. In 2000, the 192 UN member states agreed to achieve eight Millennium Development Goals by 2015.
The UN Development Programme (UNDP), an organization for grant-based technical assistance founded in 1965, is one of the leading bodies in the field of international development. The organization also publishes the UN Human Development Index, a comparative measure ranking countries by poverty, literacy, education, life expectancy, and other factors. The Food and Agriculture Organization (FAO), founded in 1945, promotes agricultural development and food security. UNICEF (the United Nations Children's Fund) was created in 1946 to aid European children after the Second World War and expanded its mission to provide aid around the world and to uphold the Convention on the Rights of the Child.
The World Bank Group and International Monetary Fund (IMF) are independent, specialized agencies and observers within the UN framework, according to a 1947 agreement. They were initially formed separately from the UN through the Bretton Woods Agreement in 1944. The World Bank provides loans for international development, while the IMF promotes international economic co-operation and gives emergency loans to indebted countries.
The World Health Organization (WHO), which focuses on international health issues and disease eradication, is another of the UN 's largest agencies. In 1980, the agency announced that the eradication of smallpox had been completed. In subsequent decades, WHO largely eradicated polio, river blindness, and leprosy. The Joint United Nations Programme on HIV / AIDS (UNAIDS), begun in 1996, co-ordinates the organization 's response to the AIDS epidemic. The UN Population Fund, which also dedicates part of its resources to combating HIV, is the world 's largest source of funding for reproductive health and family planning services.
Along with the International Red Cross and Red Crescent Movement, the UN often takes a leading role in co-ordinating emergency relief. The World Food Programme (WFP), created in 1961, provides food aid in response to famine, natural disasters, and armed conflict. The organization reports that it feeds an average of 90 million people in 80 nations each year. The Office of the United Nations High Commissioner for Refugees (UNHCR), established in 1950, works to protect the rights of refugees, asylum seekers, and stateless people. UNHCR and WFP programmes are funded by voluntary contributions from governments, corporations, and individuals, though the UNHCR 's administrative costs are paid for by the UN 's primary budget.
Since the UN 's creation, over 80 colonies have attained independence. The General Assembly adopted the Declaration on the Granting of Independence to Colonial Countries and Peoples in 1960 with no votes against but abstentions from all major colonial powers. The UN works towards decolonization through groups including the UN Committee on Decolonization, created in 1962. The committee lists seventeen remaining "Non-Self - Governing Territories '', the largest and most populous of which is Western Sahara.
Beginning with the formation of the UN Environment Programme (UNEP) in 1972, the UN has made environmental issues a prominent part of its agenda. A lack of success in the first two decades of UN work in this area led to the 1992 Earth Summit in Rio de Janeiro, Brazil, which sought to give new impetus to these efforts. In 1988, the UNEP and the World Meteorological Organization (WMO), another UN organization, established the Intergovernmental Panel on Climate Change, which assesses and reports on research on global warming. The UN-sponsored Kyoto Protocol, signed in 1997, set legally binding emissions reduction targets for ratifying states.
The UN also declares and co-ordinates international observances, periods of time to observe issues of international interest or concern. Examples include World Tuberculosis Day, Earth Day, and the International Year of Deserts and Desertification.
The UN is financed from assessed and voluntary contributions from member states. The General Assembly approves the regular budget and determines the assessment for each member. This is broadly based on the relative capacity of each country to pay, as measured by its gross national income (GNI), with adjustments for external debt and low per capita income. The two - year budget for 2012 -- 13 was $5.512 billion in total.
The Assembly has established the principle that the UN should not be unduly dependent on any one member to finance its operations. Thus, there is a "ceiling" rate, setting the maximum amount that any member can be assessed for the regular budget. In December 2000, the Assembly revised the scale of assessments in response to pressure from the United States. As part of that revision, the regular budget ceiling was reduced from 25% to 22%. For the least developed countries (LDCs), a ceiling rate of 0.01% is applied. In addition to the ceiling rates, the minimum amount assessed to any member nation (or "floor" rate) is set at 0.001% of the UN budget ($55,120 for the two-year budget 2013-2014).
A large share of the UN's expenditure addresses its core mission of peace and security, and this budget is assessed separately from the main organizational budget. The peacekeeping budget for the 2015-16 fiscal year was $8.27 billion, supporting 82,318 troops deployed in 15 missions around the world. UN peace operations are funded by assessments, using a formula derived from the regular funding scale that includes a weighted surcharge for the five permanent Security Council members, who must approve all peacekeeping operations. This surcharge serves to offset discounted peacekeeping assessment rates for less developed countries. In 2016, the top eight providers of assessed financial contributions to UN peacekeeping operations were the United States (28.57%), China (10.29%), Japan (9.68%), Germany (6.39%), France (6.31%), the United Kingdom (5.80%), the Russian Federation (4.01%), and Italy (3.75%).
Special UN programmes not included in the regular budget, such as UNICEF and the World Food Programme, are financed by voluntary contributions from member governments, corporations, and private individuals.
A number of agencies and individuals associated with the UN have won the Nobel Peace Prize in recognition of their work. Two Secretaries-General, Dag Hammarskjöld and Kofi Annan, were each awarded the prize (in 1961 and 2001, respectively), as were Ralph Bunche (1950), a UN negotiator, René Cassin (1968), a contributor to the Universal Declaration of Human Rights, and the US Secretary of State Cordell Hull (1945), the latter for his role in the organization's founding. Lester B. Pearson, the Canadian Secretary of State for External Affairs, was awarded the prize in 1957 for his role in organizing the UN's first peacekeeping force to resolve the Suez Crisis. UNICEF won the prize in 1965, the International Labour Organization in 1969, the UN Peace-Keeping Forces in 1988, the International Atomic Energy Agency (which reports to the UN) in 2005, and the UN-supported Organization for the Prohibition of Chemical Weapons in 2013. The UN High Commissioner for Refugees was awarded the prize in 1954 and 1981, becoming one of only two recipients to win the prize twice. The UN as a whole was awarded the prize in 2001, sharing it with Annan.
Since its founding, there have been many calls for reform of the UN but little consensus on how to do so. Some want the UN to play a greater or more effective role in world affairs, while others want its role reduced to humanitarian work. There have also been numerous calls for the UN Security Council 's membership to be increased, for different ways of electing the UN 's Secretary - General, and for a UN Parliamentary Assembly. Jacques Fomerand states the most enduring divide in views of the UN is "the North -- South split '' between richer Northern nations and developing Southern nations. Southern nations tend to favour a more empowered UN with a stronger General Assembly, allowing them a greater voice in world affairs, while Northern nations prefer an economically laissez - faire UN that focuses on transnational threats such as terrorism.
After World War II, the French Committee of National Liberation was late to be recognized by the US as the government of France, and so the country was initially excluded from the conferences that created the new organization. The future French president Charles de Gaulle criticized the UN, famously calling it a machin ("contraption"), and was not convinced that a global security alliance would help maintain world peace, preferring direct defence treaties between countries. Throughout the Cold War, both the US and USSR repeatedly accused the UN of favouring the other. In 1953, the USSR effectively forced the resignation of Trygve Lie, the Secretary-General, through its refusal to deal with him, while in the 1950s and 1960s, a popular US bumper sticker read, "You can't spell communism without U.N." In a sometimes-misquoted statement, President George W. Bush stated in February 2003 (referring to UN uncertainty towards Iraqi provocations under the Saddam Hussein regime) that "free nations will not allow the UN to fade into history as an ineffective, irrelevant debating society." In contrast, the French President, François Hollande, stated in 2012 that "France trusts the United Nations. She knows that no state, no matter how powerful, can solve urgent problems, fight for development and bring an end to all crises... France wants the UN to be the centre of global governance." Critics such as Dore Gold, an Israeli diplomat, Robert S. Wistrich, a British scholar, Alan Dershowitz, an American legal scholar, Mark Dreyfus, an Australian politician, and the Anti-Defamation League consider UN attention to Israel's treatment of Palestinians to be excessive. In September 2015, Saudi Arabia's Faisal bin Hassan Trad was elected Chair of the UN Human Rights Council panel that appoints independent experts, a move criticized by human rights groups.
Since 1971, the Republic of China (Taiwan) has been excluded from the UN, and its subsequent applications for membership have been rejected. Taiwanese citizens are also not allowed to enter United Nations buildings using passports issued by Taiwan. Critics argue that the UN is thereby failing its own development goals and guidelines. The exclusion reflects pressure from the People's Republic of China, which regards the territories administered by Taiwan as its own territory.
Critics have also accused the UN of bureaucratic inefficiency, waste, and corruption. In 1976, the General Assembly established the Joint Inspection Unit to seek out inefficiencies within the UN system. During the 1990s, the US withheld dues citing inefficiency and only started repayment on the condition that a major reform initiative was introduced. In 1994, the Office of Internal Oversight Services (OIOS) was established by the General Assembly to serve as an efficiency watchdog. In 1994, the former Special Representative of the Secretary-General of the UN to Somalia, Mohamed Sahnoun, published "Somalia: The Missed Opportunities", a book in which he analyses the reasons for the failure of the 1992 UN intervention in Somalia, showing that, between the start of the Somali civil war in 1988 and the fall of the Siad Barre regime in January 1991, the UN missed at least three opportunities to prevent major human tragedies; when the UN tried to provide humanitarian assistance, it was totally outperformed by NGOs, whose competence and dedication sharply contrasted with the UN's excessive caution and bureaucratic inefficiencies. If radical reform was not undertaken, warned Sahnoun, the UN would continue to respond to such crises with inept improvisation. In 2004, the UN faced accusations that its recently ended Oil-for-Food Programme, in which Iraq had been allowed to trade oil for basic needs to relieve the pressure of sanctions, had suffered from widespread corruption, including billions of dollars of kickbacks. An independent inquiry created by the UN found that many of its officials had been involved, as well as raising "significant" questions about the role of Kojo Annan, the son of Kofi Annan.
In evaluating the UN as a whole, Jacques Fomerand writes that the "accomplishments of the United Nations in the last 60 years are impressive in their own terms. Progress in human development during the 20th century has been dramatic and the UN and its agencies have certainly helped the world become a more hospitable and livable place for millions. '' Evaluating the first 50 years of the UN 's history, the author Stanley Meisler writes that "the United Nations never fulfilled the hopes of its founders, but it accomplished a great deal nevertheless '', citing its role in decolonization and its many successful peacekeeping efforts. The British historian Paul Kennedy states that while the organization has suffered some major setbacks, "when all its aspects are considered, the UN has brought great benefits to our generation and... will bring benefits to our children 's and grandchildren 's generations as well. ''
what is the movie star wars all about | Star Wars - wikipedia
Star Wars is an American epic space opera media franchise, centered on a film series created by George Lucas. It depicts the adventures of various characters "a long time ago in a galaxy far, far away ''.
The franchise began in 1977 with the release of the film Star Wars (later subtitled Episode IV: A New Hope in 1981), which became a worldwide pop culture phenomenon. It was followed by the successful sequels The Empire Strikes Back (1980) and Return of the Jedi (1983); these three films constitute the original Star Wars trilogy. A prequel trilogy was released between 1999 and 2005, which received mixed reactions from both critics and fans. A sequel trilogy began in 2015 with the release of Star Wars: The Force Awakens and continued in 2017 with the release of Star Wars: The Last Jedi. The first eight films were nominated for Academy Awards (with wins going to the first two films released) and have been commercial successes, with a combined box office revenue of over US $8.5 billion, making Star Wars the second highest - grossing film series. Spin - off films include the animated Star Wars: The Clone Wars (2008) and Rogue One (2016), the latter of which is the first in a planned series of anthology films.
The series has spawned an extensive media franchise including books, television series, computer and video games, theme park attractions and lands, and comic books, resulting in significant development of the series ' fictional universe. Star Wars holds a Guinness World Records title for the "Most successful film merchandising franchise ''. In 2015, the total value of the Star Wars franchise was estimated at US $42 billion, making Star Wars the second - highest - grossing media franchise of all time.
In 2012, The Walt Disney Company bought Lucasfilm for US $4.06 billion and earned the distribution rights to all subsequent Star Wars films, beginning with the release of The Force Awakens in 2015. The former distributor, 20th Century Fox, was to retain the physical distribution rights for the first two Star Wars trilogies, to own permanent rights to the original 1977 film, and to continue to hold the rights to the prequel trilogy and the first two sequels to A New Hope until May 2020. Walt Disney Studios currently owns digital distribution rights to all the Star Wars films, excluding A New Hope. On December 14, 2017, the Walt Disney Company announced its pending acquisition of 21st Century Fox, including the film studio and all distribution rights to A New Hope.
The Star Wars franchise takes place in a distant unnamed fictional galaxy at an undetermined point in the ancient past, where many species of aliens (often humanoid) co-exist. People own robotic droids, who assist them in their daily routines, and space travel is common.
The spiritual and mystical element of the Star Wars galaxy is known as "the Force ''. It is described in the original film as "an energy field created by all living things (that) surrounds us, penetrates us, (and) binds the galaxy together ''. The people who are born deeply connected to the Force have better reflexes; through training and meditation, they are able to achieve various supernatural feats (such as telekinesis, clairvoyance, precognition, and mind control). The Force is wielded by two major factions at conflict: the Jedi, who harness the light side of the Force, and the Sith, who use the dark side of the Force through hate and aggression.
In 1971, Universal Studios made a contract for George Lucas to direct two films. In 1973, American Graffiti was completed and released to critical acclaim, earning Academy Award nominations for Best Director and Best Original Screenplay for Lucas. Months later, Lucas started work on his second film's script draft, The Journal of the Whills, telling the tale of the training of apprentice CJ Thorpe as a "Jedi-Bendu" space commando by the legendary Mace Windy. After Universal rejected the film, 20th Century Fox decided to invest in it. On April 17, 1973, Lucas, frustrated that his story was too difficult to understand, began writing a 13-page script with thematic parallels to Akira Kurosawa's The Hidden Fortress; this draft was renamed The Star Wars. By 1974, he had expanded the script into a rough draft screenplay, adding elements such as the Sith, the Death Star, and a protagonist named Annikin Starkiller. Subsequent drafts went through numerous drastic changes before evolving into the script of the original film.
Lucas insisted that the movie would be part of a 9-part series and negotiated to retain the sequel rights, to ensure all the movies would be made. Tom Pollock, then Lucas's lawyer, writes: "So in the negotiations that were going on, we drew up a contract with Fox's head of business affairs Bill Immerman, and me. We came to an agreement that George would retain the sequel rights. Not all the (merchandising rights) that came later, mind you; just the sequel rights. And Fox would get a first opportunity and last refusal right to make the movie."
Lucas was offered $50,000 to write, another $50,000 to produce, and $50,000 to direct the film. Later the offer was increased.
Star Wars was released on May 25, 1977. It was followed by The Empire Strikes Back, released on May 21, 1980, and Return of the Jedi, released on May 25, 1983. The sequels were all self - financed by Lucasfilm.
The opening crawl of the sequels disclosed that they were numbered as "Episode V '' and "Episode VI '' respectively, though the films were generally advertised solely under their subtitles. Though the first film in the series was simply titled Star Wars, with its 1981 re-release it had the subtitle Episode IV: A New Hope added to remain consistent with its sequel, and to establish it as the middle chapter of a continuing saga. The plot of the original trilogy centers on the Galactic Civil War of the Rebel Alliance trying to free the galaxy from the clutches of the Galactic Empire, as well as on Luke Skywalker 's quest to become a Jedi.
Near the orbit of the desert planet Tatooine, a Rebel spaceship is intercepted by the Empire. Aboard, the deadliest Imperial agent Darth Vader and his stormtroopers capture Princess Leia Organa, a secret member of the rebellion. Before her capture, Leia makes sure the astromech R2 - D2, along with the protocol droid C - 3PO, escapes with stolen Imperial blueprints stored inside and a holographic message for the retired Jedi Knight Obi - Wan Kenobi, who has been living in exile on Tatooine. The droids fall under the ownership of Luke Skywalker, an orphan farm boy raised by his step - uncle and aunt. Luke helps the droids locate Obi - Wan, now a solitary old hermit known as Ben Kenobi, who reveals himself as a friend of Luke 's absent father, the Jedi Knight Anakin Skywalker. Obi - Wan confides to Luke that Anakin was "betrayed and murdered '' by Vader (who was Obi - Wan 's former Jedi apprentice) years ago, and he gives Luke his father 's former lightsaber to keep. After viewing Leia 's message, they both hire the smuggler Han Solo and his Wookiee co-pilot Chewbacca to, aboard their space freighter the Millennium Falcon, help them deliver the stolen blueprints inside R2 - D2 to the Rebel Alliance with the hope of finding a weakness to the Empire 's planet - destroying space station: the Death Star.
For the second draft of The Star Wars, Lucas made heavy simplifications. The draft added a mystical energy field known as "the Force" and introduced the young hero on a farm as Luke Starkiller. Annikin became Luke's father, a wise Jedi knight. The third draft killed off the father Annikin, replacing him with the mentor figure Ben Kenobi. Later, Lucas felt the film would not in fact be the first in the sequence, but a film in the second trilogy in the saga. The draft contained a sub-plot leading to a sequel about "the Princess of Ondos", and by that time, some months later, Lucas had negotiated a contract that gave him rights to make two sequels. Not long after, Lucas hired the author Alan Dean Foster to write the two sequels as novels. In 1976, a fourth draft had been prepared for principal photography. The film was titled Adventures of Luke Starkiller, as taken from the Journal of the Whills, Saga I: The Star Wars. During production, Lucas changed Luke's name to Skywalker and altered the title to simply The Star Wars and finally Star Wars. At that point, Lucas was not expecting the film to have sequels. The fourth draft of the script underwent subtle changes: it discarded the "Princess of Ondos" sub-plot to become a self-contained film that ended with the destruction of the Galactic Empire itself by way of destroying the Death Star. However, Lucas had previously conceived of the film as the first of a series. The intention was that if Star Wars was successful, Lucas could adapt Dean Foster's novels into low-budget sequels. By that point, Lucas had developed an elaborate backstory to aid his writing process.
Before its release, Lucas considered walking away from Star Wars sequels, thinking the film would be a flop. However, the film exceeded all expectations. The success of the film and its merchandise sales, together with Lucas's desire to create an independent film-making center, led him to make Star Wars the basis of an elaborate film serial and to use the profits to finance his film-making center, Skywalker Ranch. Alan Dean Foster was already writing the first sequel novel, Splinter of the Mind's Eye, released in 1978, but Lucas decided not to adapt Foster's work, knowing a sequel would be allowed a higher budget. At first, Lucas envisioned a series of films with no set number of entries, like the James Bond series. In an interview with Rolling Stone in August 1977, he said that he wanted his friends to each take a turn at directing the films and giving unique interpretations on the series. He added that the backstory in which Darth Vader turns to the dark side, kills Luke's father and fights Obi-Wan Kenobi on a volcano as the Galactic Republic falls would make an excellent sequel.
Three years after the destruction of the Death Star, the Rebels are forced to evacuate their secret base on Hoth as they are hunted by the Empire. At the request of the late Obi - Wan 's spirit, Luke travels to the swamp - infested world of Dagobah to find the exiled Jedi Master Yoda and begin his Jedi training. However, Luke 's training is interrupted by Vader, who lures him into a trap by capturing Han and Leia at Cloud City, governed by Han 's old friend Lando Calrissian. During a fierce lightsaber duel with the Sith Lord, Luke learns that Vader is his father.
After the success of the original film, Lucas hired science fiction author Leigh Brackett to write Star Wars II with him. They held story conferences and, by late November 1977, Lucas had produced a handwritten treatment called The Empire Strikes Back. It was similar to the final film, except that Darth Vader does not reveal he is Luke 's father.
Brackett finished her first draft in early 1978; in it, Luke 's father appeared as a ghost to instruct Luke. Lucas has said he was disappointed with it, but before he could discuss it with her, she died of cancer. With no writer available, Lucas had to write his next draft himself. It was this draft in which Lucas first made use of the "Episode '' numbering for the films; Empire Strikes Back was listed as Episode II. As Michael Kaminski argues in The Secret History of Star Wars, the disappointment with the first draft probably made Lucas consider different directions in which to take the story. He made use of a new plot twist: Darth Vader claims to be Luke 's father. According to Lucas, he found this draft enjoyable to write, as opposed to the yearlong struggles writing the first film, and quickly wrote two more drafts, both in April 1978. This new story point of Darth Vader being Luke 's father had drastic effects on the series. After writing these two drafts, Lucas revised the backstory between Anakin Skywalker, Kenobi, and the Emperor.
With this new backstory in place, Lucas decided that the series would be a trilogy, changing Empire Strikes Back from Episode II to Episode V in the next draft. Lawrence Kasdan, who had just completed writing Raiders of the Lost Ark, was then hired to write the next drafts, and was given additional input from director Irvin Kershner. Kasdan, Kershner, and producer Gary Kurtz saw the film as a more serious and adult film, which was helped by the new, darker storyline, and developed the series from the light adventure roots of the first film.
A year after Vader 's shocking revelation, Luke leads a rescue attempt to save Han from the gangster Jabba the Hutt. Afterward, Luke returns to Dagobah to complete his Jedi training, only to find the 900 - year - old Yoda on his deathbed. In his last words Yoda confirms that Vader is Luke 's father, Anakin Skywalker, and that Luke must confront his father again in order to complete his training. Moments later, the spirit of Obi - Wan reveals to Luke that Leia is his twin sister, but Obi - Wan insists that Luke must face Vader again. As the Rebels lead an attack on the Death Star II, Luke engages Vader in another lightsaber duel as Emperor Palpatine watches; both Sith Lords intend to turn Luke to the dark side of the Force and take him as their apprentice.
By the time Lucas began writing Episode VI in 1981 (then titled Revenge of the Jedi), much had changed. Making Empire Strikes Back was stressful and costly, and Lucas ' personal life was disintegrating. Burned out and not wanting to make any more Star Wars films, he vowed that he was done with the series in a May 1983 interview with Time magazine. Lucas ' 1981 rough drafts had Darth Vader competing with the Emperor for possession of Luke -- and in the second script, the "revised rough draft '', Vader became a sympathetic character. Lawrence Kasdan was hired to take over once again and, in these final drafts, Vader was explicitly redeemed and finally unmasked. This change in character would provide a springboard to the "Tragedy of Darth Vader '' storyline that underlies the prequels.
After losing much of his fortune in a divorce settlement in 1987, George Lucas had no desire to return to Star Wars, and had unofficially canceled the sequel trilogy by the time of Return of the Jedi. At that point, the prequels were still only a series of basic ideas partially pulled from his original drafts of "The Star Wars". Nevertheless, technical advances in the late 1980s and 1990s continued to fascinate Lucas, and he considered that they might make it possible to revisit his 20-year-old material. The popularity of the franchise was reinvigorated by the Star Wars expanded universe storylines set after the original trilogy films, such as the Thrawn trilogy of novels written by Timothy Zahn and the Dark Empire comic book series published by Dark Horse Comics. Due to the renewed popularity of Star Wars, Lucas saw that there was still a large audience. His children were older, and with the explosion of CGI technology he was now considering returning to directing.
The prequel trilogy consists of Episode I: The Phantom Menace, released on May 19, 1999; Episode II: Attack of the Clones, released on May 16, 2002; and Episode III: Revenge of the Sith, released on May 19, 2005. The plot focuses on the fall of the Galactic Republic, as well as the tragedy of Anakin Skywalker 's turn to the dark side.
About 32 years before the start of the Galactic Civil War, the corrupt Trade Federation sets a blockade around the planet Naboo. The Sith Lord Darth Sidious had secretly planned the blockade to give his alter ego, Senator Palpatine, a pretense to overthrow and replace the Supreme Chancellor of the Republic. At the Chancellor 's request, the Jedi Knight Qui - Gon Jinn and his apprentice, a younger Obi - Wan Kenobi, are sent to Naboo to negotiate with the Federation. However, the two Jedi are forced to instead help the Queen of Naboo, Padmé Amidala, escape from the blockade and plead her planet 's crisis before the Republic Senate on Coruscant. When their starship is damaged during the escape, they land on Tatooine for repairs. Palpatine dispatches his first Sith apprentice, Darth Maul, to hunt down the Queen and her Jedi protectors. While on Tatooine, Qui - Gon discovers a nine - year - old slave named Anakin Skywalker. Qui - Gon helps liberate the boy from slavery, believing Anakin to be the "Chosen One '' foretold by a Jedi prophecy to bring balance to the Force. However, the Jedi Council (led by Yoda) suspects the boy possesses too much fear and anger within him.
In 1993, it was announced, in Variety among other sources, that Lucas would be making the prequels. He began penning more to the story, now indicating the series would be a tragic one examining Anakin Skywalker 's fall to the dark side. Lucas began to reevaluate how the prequels would exist relative to the originals; at first they were supposed to be a "filling - in '' of history tangential to the originals, but he later realized that they could form the beginning of one long story that started with Anakin 's childhood and ended with his death. This was the final step towards turning the film series into a "Saga ''. In 1994, Lucas began writing the screenplay to the first prequel, initially titled Episode I: The Beginning. Following the release of that film, Lucas announced that he would be directing the next two, and began work on Episode II.
Ten years after the Battle of Naboo, Anakin is reunited with Padmé, now serving as the Senator of Naboo, and they fall in love despite Anakin 's obligations to the Jedi Order. At the same time, the entire galaxy gets swept up in the Clone Wars between the armies of the Republic, led by the Jedi Order, and the Confederacy of Independent Systems, led by the fallen Jedi Count Dooku.
The first draft of Episode II was completed just weeks before principal photography, and Lucas hired Jonathan Hales, a writer from The Young Indiana Jones Chronicles, to polish it. Unsure of a title, Lucas had jokingly called the film "Jar Jar's Great Adventure". In writing The Empire Strikes Back, Lucas initially decided that Lando Calrissian was a clone who came from a planet of clones, which caused the "Clone Wars" mentioned by both Luke Skywalker and Princess Leia in A New Hope; he later came up with an alternate concept of an army of clone shocktroopers from a remote planet which attacked the Republic and were repelled by the Jedi. The basic elements of that backstory became the plot basis for Episode II, with the new wrinkle added that Palpatine secretly orchestrated the crisis.
Three years after the start of the Clone Wars, Anakin and Obi - Wan lead a rescue mission to save the kidnapped Chancellor Palpatine from Count Dooku and the droid commander General Grievous. Later, Anakin begins to have prophetic visions of his secret wife Padmé dying in childbirth. Palpatine, who had been secretly engineering the Clone Wars to destroy the Jedi Order, convinces Anakin that the dark side of the Force holds the power to save Padmé 's life. Desperate, Anakin submits to Palpatine 's Sith teachings and is renamed Darth Vader. While Palpatine re-organizes the Republic into the tyrannical Empire, Vader participates in the extermination of the Jedi Order; culminating in a lightsaber duel between himself and his former master Obi - Wan on the volcanic planet Mustafar.
Lucas began working on Episode III before Attack of the Clones was released, telling concept artists that the film would open with a montage of seven Clone War battles. As he reviewed the storyline that summer, however, he says he radically re-organized the plot. Michael Kaminski, in The Secret History of Star Wars, offers evidence that issues in Anakin's fall to the dark side prompted Lucas to make massive story changes, first revising the opening sequence to have Palpatine kidnapped and his apprentice, Count Dooku, murdered by Anakin as the first act in the latter's turn towards the dark side. After principal photography was complete in 2003, Lucas made even more massive changes to Anakin's character, rewriting his entire turn to the dark side; he would now turn primarily in a quest to save Padmé's life, rather than the previous version in which that reason was one of several, including that he genuinely believed that the Jedi were evil and plotting to take over the Republic. This fundamental rewrite was accomplished both through editing the principal footage and through new and revised scenes filmed during pick-ups in 2004.
On August 15, 2008, the standalone animated film Star Wars: The Clone Wars was released theatrically as a lead-in to the animated TV series of the same name. The series ran for six seasons, all but the last of which were broadcast on Cartoon Network. The final season was cut short following Disney's purchase of the franchise, and two further seasons that were in the works were also cancelled.
Over the years, Lucas often exaggerated the amount of material he wrote for the series; much of the exaggeration stemmed from the post-1978 period when the series grew into a phenomenon. Michael Kaminski explained that the exaggerations were both a publicity and security measure, further rationalizing that since the series' story radically changed throughout the years, it was always Lucas' intention to change the original story retroactively because audiences would only view the material from his perspective. The exaggerations created rumors that Lucas had plot outlines for a sequel trilogy (Episodes VII, VIII and IX), which would continue the story after 1983's Episode VI: Return of the Jedi. Lucasfilm and George Lucas had denied plans for a sequel trilogy for many years, insisting that Star Wars was meant to be a six-part series and that no further films would be released after the conclusion of the prequel trilogy in 2005. Although Lucas made an exception by releasing the animated Star Wars: The Clone Wars film in 2008, while promoting it he maintained his position on the sequel trilogy: "I get asked all the time, 'What happens after Return of the Jedi?', and there really is no answer for that. The movies were the story of Anakin Skywalker and Luke Skywalker, and when Luke saves the galaxy and redeems his father, that's where that story ends."
In January 2012, Lucas announced that he would step away from blockbuster films and instead produce smaller arthouse films. Asked whether the criticism he received following the prequel trilogy and the alterations to the re-releases of the original trilogy had influenced his decision to retire, Lucas said: "Why would I make any more when everybody yells at you all the time and says what a terrible person you are? '' Despite insisting that a sequel trilogy would never happen, Lucas began working on story treatments for three new Star Wars films in 2011. In October 2012, The Walt Disney Company agreed to buy Lucasfilm and announced that Star Wars Episode VII would be released in 2015. Later, it was revealed that the three new upcoming films (Episodes VII -- IX) would be based on story treatments that had been written by George Lucas prior to the sale of Lucasfilm. The co-chairman of Lucasfilm, Kathleen Kennedy, became president of the company, reporting to Walt Disney Studios chairman Alan Horn. In addition, Kennedy will serve as executive producer on new Star Wars feature films, with franchise creator and Lucasfilm founder Lucas serving as creative consultant.
The sequel trilogy began with Episode VII: The Force Awakens, released on December 18, 2015. It was followed by Episode VIII: The Last Jedi, released on December 15, 2017.
About 30 years after the destruction of the Death Star II, Luke Skywalker has vanished following the demise of the new Jedi Order he was attempting to build. The remnants of the Empire have become the First Order, which seeks to destroy Luke and the New Republic, while the Resistance, led by princess-turned-general Leia Organa and backed by the Republic, opposes them. On Jakku, Resistance pilot Poe Dameron obtains a map to Luke's location. Stormtroopers under the command of Kylo Ren, the son of Leia and Han Solo, capture Poe. Poe's droid BB-8 escapes with the map and encounters a scavenger, Rey. Kylo tortures Poe and learns of BB-8. Stormtrooper FN-2187 defects from the First Order and frees Poe, who dubs him "Finn"; both escape in a TIE fighter that crashes on Jakku, seemingly killing Poe. Finn finds Rey and BB-8, but the First Order does too; both escape Jakku in a stolen Millennium Falcon. The Falcon is recaptured by Han and Chewbacca, smugglers again since abandoning the Resistance. They agree to help deliver the map inside BB-8 to the Resistance.
The screenplay for Episode VII was originally set to be written by Michael Arndt, but in October 2013 it was announced that writing duties would be taken over by Lawrence Kasdan and J.J. Abrams. On January 25, 2013, The Walt Disney Studios and Lucasfilm officially announced J.J. Abrams as Star Wars Episode VII 's director and producer, along with Bryan Burk and Bad Robot Productions.
Right after the destruction of Starkiller Base, Rey finds Luke Skywalker on the planet Ahch-To and convinces him to teach her the ways of the Jedi, seeking answers about her past with help from Luke and Kylo Ren. Meanwhile, Finn, Leia, Poe, BB-8, Rose Tico, and the rest of the Resistance flee to the planet Crait to escape the First Order. Kylo Ren assassinates Supreme Leader Snoke and takes control of the First Order, culminating in a battle with the Resistance on Crait.
On November 20, 2012, The Hollywood Reporter reported that Lawrence Kasdan and Simon Kinberg would write and produce Episodes VIII and IX. Kasdan and Kinberg were later confirmed as creative consultants on those films, in addition to writing standalone films. In addition, John Williams, who wrote the music for the previous six episodes, was hired to compose the music for Episodes VII, VIII and IX.
On March 12, 2015, Lucasfilm announced that Looper director Rian Johnson would direct Episode VIII, with Ram Bergman as producer for Ram Bergman Productions. Reports initially claimed Johnson would also direct Episode IX, but it was later confirmed he would write only a story treatment. Johnson later wrote on Twitter that the reports of him writing a treatment for Episode IX were outdated and that he was not involved in writing that film. When asked about Episode VIII in an August 2014 interview, Johnson said, "it's boring to talk about, because the only thing I can really say is, I'm just happy. I don't have the terror I kind of expected I would, at least not yet. I'm sure I will at some point."
Principal photography on The Last Jedi began in February 2016. Additional filming took place in Dubrovnik from March 9 to March 16, 2016, as well as in Ireland in May 2016. Principal photography wrapped in July 2016. On December 27, 2016, Carrie Fisher died after going into cardiac arrest a few days earlier. Before her death, Fisher had completed filming her role as General Leia Organa in The Last Jedi. The film was released on December 15, 2017.
Production on Episode IX was scheduled to begin sometime in 2017. Variety and Reuters reported that Carrie Fisher had been slated for a key role in Episode IX. Following her death, Lucasfilm, Disney, and others involved with the film were forced to find a way to address her death in the upcoming film and alter her character's role. In January 2017, Lucasfilm stated they would not digitally generate Fisher's performance for the film. In April 2017, Fisher's brother Todd and daughter Billie Lourd gave Disney permission to use recent footage of Fisher for the film, but later that month, Kennedy stated that Fisher will not appear in the film. Principal photography of Star Wars: Episode IX is set to begin in July 2018.
On February 5, 2013, Disney CEO Bob Iger confirmed the development of two standalone films, each individually written by Lawrence Kasdan and Simon Kinberg. On February 6, Entertainment Weekly reported one film would focus on Han Solo, while the other on Boba Fett. Disney CFO Jay Rasulo has described the standalone films as origin stories. Kathleen Kennedy explained that the standalone films will not crossover with the films of the sequel trilogy, stating, "George was so clear as to how that works. The canon that he created was the Star Wars saga. Right now, Episode VII falls within that canon. The spin - off movies, or we may come up with some other way to call those films, they exist within that vast universe that he created. There is no attempt being made to carry characters (from the standalone films) in and out of the saga episodes. Consequently, from the creative standpoint, it 's a roadmap that George made pretty clear. ''
In April 2015, Lucasfilm and Kennedy announced that the standalone films would be referred to as the Star Wars Anthology films. Rogue One: A Star Wars Story was released on December 16, 2016 as the first in an anthology series of films separate from the main episodic saga.
The story of the group of rebels who steal the Death Star plans, ending directly before the events of Episode IV: A New Hope.
The idea for the film was conceived by John Knoll, who had worked as a visual effects supervisor on the prequel trilogy films. In May 2014, Lucasfilm announced Gareth Edwards as the director of the first anthology film, with Gary Whitta writing the first draft, for a release on December 16, 2016. On March 12, 2015, the film's title was revealed to be Rogue One, with Chris Weitz rewriting the script, and Felicity Jones, Ben Mendelsohn, and Diego Luna starring. In April 2015, a teaser trailer was shown during the closing of the Star Wars Celebration. Lucasfilm announced filming would begin in the summer of 2015 and released the plot synopsis. Edwards stated, "It comes down to a group of individuals who don't have magical powers that have to somehow bring hope to the galaxy", and described the style of the film as similar to that of a war film: "It's the reality of war. Good guys are bad. Bad guys are good. It's complicated, layered; a very rich scenario in which to set a movie." After its debut, Rogue One received generally positive reviews, with its performances, action sequences, soundtrack, visual effects and darker tone being praised. The film grossed over US $500 million worldwide within a week of its release. Characters from the animated series appear: Saw Gerrera (from The Clone Wars) in a pivotal role in the plot, and Chopper (from Star Wars: Rebels) in a cameo.
A film focusing on Han Solo before the events of Episode IV: A New Hope.
Before selling Lucasfilm to Disney, George Lucas started development on a film about a young Han Solo. Lucas hired Star Wars original trilogy veteran screenwriter Lawrence Kasdan, along with his son Jon Kasdan, to write the script. The film stars Alden Ehrenreich as a young Han Solo, Joonas Suotamo as Chewbacca (after serving as a double for the character in The Force Awakens and The Last Jedi), Donald Glover as Lando Calrissian, and Emilia Clarke and Woody Harrelson. Directors Phil Lord and Christopher Miller began principal photography on the film, but due to creative differences, the pair left the project in June 2017 with three and a half weeks remaining in principal photography. Academy Award - winning director Ron Howard was announced as their replacement. While this is his first Star Wars film, Howard had previously collaborated with the production company Lucasfilm as an actor in the George Lucas - directed film American Graffiti (1973) and as director of Willow (1988). Howard was one of the three directors George Lucas asked to direct Episode I: The Phantom Menace, though Howard declined, saying, "George, you should do it! '' The film is distributed by Walt Disney Studios Motion Pictures and will be released on May 25, 2018.
A third Anthology film will be released in 2020. A writer for the film has been hired as of September 2016.
In February 2013, Entertainment Weekly reported that Lucasfilm had hired Josh Trank to direct a Star Wars standalone film, with the news being confirmed soon after. However, in November 2016 Disney announced that their contract with Trank was terminated due to the overwhelmingly negative reviews of Fantastic Four. By 2017, it was reported that the film was still in early development at Lucasfilm, with reports stating that the film would focus on the bounty hunter Boba Fett. Lucasfilm never confirmed the plot, but revealed that the film Josh Trank left was a different film from Solo: A Star Wars Story.
In August 2016, Ewan McGregor stated he would be open to returning to the role of Obi - Wan Kenobi, albeit for a spin - off film on the character, should he be approached, wanting to tell a story set between Episode III and IV. Fans showed interest in the idea; a fan - made trailer for an Obi - Wan film, using footage from the film Last Days in the Desert (which starred McGregor), went viral and was widely praised by fans. The film was voted the most wanted anthology film in a poll by The Hollywood Reporter, despite there being only rumors of the film 's production. Lucasfilm and McGregor for years denied the development of such a film, despite fans ' continued interest and rumors. In March 2017, McGregor again stated his interest in starring in a solo film, if Lucasfilm wanted him to. By August 2017, it was reported that a movie centered around Obi - Wan Kenobi was in early development, with Stephen Daldry in early negotiations to co-write and direct the project. Liam Neeson expressed his interest in returning to the franchise, reprising his role as Qui - Gon Jinn. Joel Edgerton, who played Luke Skywalker 's step - uncle Owen in the prequel trilogy, said he would like to reprise his role in an Obi - Wan standalone film, if it were to be made.
Additional reports stated Lucasfilm was considering various films about different characters, including movies focusing on Boba Fett, as well as Jedi Master Yoda. Temuera Morrison has expressed interest in portraying Boba, or Captain Rex, both clones of his previous character Jango Fett. Daniel Logan, who played Boba Fett as a child in Attack of the Clones, has also expressed interest in reprising his role in the rumored Boba Fett film. In 2015, director Guillermo del Toro pitched an idea to Lucasfilm for a film about Jabba the Hutt, and in 2017, it was reported to be among the projects being considered by the studio.
Samuel L. Jackson has expressed interest in returning as Mace Windu, insisting that his character survived his apparent death. Ian McDiarmid has also expressed interest in returning as Emperor Palpatine. Fans have also expressed interest towards the possibility of Ahsoka Tano appearing in a live - action film, with Rosario Dawson expressing interest in the role.
In November 2017, Lucasfilm announced that Rian Johnson, the writer / director of The Last Jedi, would be working on a new trilogy. The films will reportedly differ from the Skywalker - focused films in favor of focusing on new characters; Johnson is confirmed to write and direct the first film. On the same day, Disney announced that a live - action Star Wars television series was in development exclusively for their upcoming streaming service.
In February 2018, it was announced that David Benioff and D.B. Weiss would write and produce a series of Star Wars films that are not Skywalker - focused films, similar to but separate from Rian Johnson 's upcoming installments in the franchise.
In March 2018, it was revealed by Deadline that Simon Kinberg is writing the script for an as - yet undisclosed film within the franchise. Previously, the filmmaker had been attached to produce the Boba Fett - centered film and is known for co-creating the Star Wars Rebels animated television series.
From 1977 to 2014, the term Expanded Universe (abbreviated as EU) was an umbrella term for all officially licensed Star Wars storytelling materials set outside the events depicted within the theatrical films, including television series, novels, comics, and video games. Lucasfilm maintained internal continuity between the films and television content and the EU material until April 25, 2014, when the company announced all of the EU works would cease production. Existing works would no longer be considered canon to the franchise and subsequent reprints would be rebranded under the Star Wars Legends label, with downloadable content for the massively multiplayer online game Star Wars: The Old Republic being the only Legends material to still be produced. The Star Wars canon was subsequently restructured to only include the existing six feature films, the animated film Star Wars: The Clone Wars (2008), and its companion animated series Star Wars: The Clone Wars. All future projects and creative developments across all types of media would be overseen and coordinated by the Story Group, announced as a division of Lucasfilm created to maintain continuity and a cohesive vision on the storytelling of the franchise. Lucasfilm announced that the change was made "to give maximum creative freedom to the filmmakers and also preserve an element of surprise and discovery for the audience ''. The animated series Star Wars Rebels was the first project produced after the announcement, followed by multiple comics series from Marvel, novels published by Del Rey, and the sequel film The Force Awakens (2015).
In the two - hour Star Wars Holiday Special produced for CBS in 1978, Chewbacca returns to his home planet of Kashyyyk to celebrate "Life Day '' with his family. Along with the stars of the original 1977 film, celebrities Bea Arthur, Art Carney, Diahann Carroll, and Jefferson Starship appear in plot - related skits and musical numbers. Lucas loathed the special and forbade it from ever being aired again after its original broadcast or being reproduced on home video. An 11 - minute animated sequence in the Holiday Special, featuring the first appearance of bounty hunter Boba Fett, is considered to be the sole silver lining of the production, with Lucas even including it as a special feature on a 2011 Blu - ray release (making it the only part of the Holiday Special to ever receive an official home media release). The segment is the first Star Wars animation ever produced.
The television film Caravan of Courage: An Ewok Adventure aired on ABC on Thanksgiving weekend in 1984. With a story by Lucas and a screenplay by Bob Carrau, it features the Ewok Wicket from Return of the Jedi as he helps two children rescue their parents from a giant known as Gorax. The 1985 sequel, Ewoks: The Battle for Endor, finds Wicket and his friends protecting their village from invaders.
Nelvana, the animation studio that had produced the animated segment of the Holiday Special, was hired to create two animated series. Star Wars: Droids (1985 -- 1986), which aired for one season on ABC, follows the adventures of the droids C - 3PO and R2 - D2, 15 years before the events of the 1977 film Star Wars. Its sister series Star Wars: Ewoks (1985 -- 1987) features the adventures of the Ewoks before Return of the Jedi and the Ewok movies.
After the release of Attack of the Clones, Cartoon Network animated and aired Star Wars: Clone Wars from 2003 until weeks before the 2005 release of Revenge of the Sith, as the series featured events set between those films. It won the Primetime Emmy Awards for Outstanding Animated Program in 2004 and 2005.
Lucas decided to invest in creating his own animation company, Lucasfilm Animation, and used it to create his first in - house Star Wars CGI - animated series. Star Wars: The Clone Wars (2008 -- 2014) was introduced through a 2008 animated film of the same name, and was set in the same time period as the previous Clone Wars series (albeit ignoring it). While all previous television works were reassigned to the Legends brand in 2014, Lucasfilm accepted The Clone Wars and its originating film as part of the canon. All series released afterward would also be part of the canon. In 2014, Disney XD began airing Star Wars Rebels, the next CGI - animated series. Set between Revenge of the Sith and A New Hope, it followed a band of rebels as they fought the Galactic Empire, and helped close some of the arcs in The Clone Wars. Another animated series, Star Wars Forces of Destiny, debuted in 2017, focusing on the female characters of the franchise.
Since 2005, when Lucas announced plans for a television series set between the prequel and original trilogies, the television project has been in varying stages of development at Lucasfilm. Producer Rick McCallum revealed the working title, Star Wars: Underworld, in 2012, and said in 2013 that 50 scripts had been written. He called the project "The most provocative, the most bold and daring material that we 've ever done. '' The proposed series explores criminal and political power struggles in the decades prior to A New Hope, and as of December 2015 was still in development at Lucasfilm. In November 2017, Bob Iger discussed the development of a Star Wars series for Disney 's upcoming digital streaming service, due to launch in 2019. It is unknown whether the series would be based on the Star Wars: Underworld scripts or would follow an entirely new idea.
In February 2018, it was reported that multiple live - action Star Wars TV series were in development, with "rather significant '' talent involved in the productions. Jon Favreau, who had previously voiced Pre Vizsla in The Clone Wars animated series, will produce and write a live - action television series.
Star Wars - based fiction predates the release of the first film, with the December 1976 novelization of Star Wars, subtitled From the Adventures of Luke Skywalker. Credited to Lucas, it was ghost - written by Alan Dean Foster. The first Expanded Universe story appeared in Marvel Comics ' Star Wars # 7 in January 1978 (the first six issues of the series having been an adaptation of the film), followed quickly by Foster 's novel Splinter of the Mind 's Eye the following month.
Star Wars: From the Adventures of Luke Skywalker is a 1976 novelization of the original film by Alan Dean Foster, who followed it with the sequel Splinter of the Mind 's Eye (1978), which Lucas decided not to film. The film novelizations for The Empire Strikes Back (1980) by Donald F. Glut and Return of the Jedi (1983) by James Kahn followed, as well as The Han Solo Adventures trilogy (1979 -- 1980) by Brian Daley, and The Adventures of Lando Calrissian (1983) trilogy by L. Neil Smith.
Timothy Zahn 's bestselling Thrawn trilogy (1991 -- 1993) reignited interest in the franchise and introduced the popular characters Grand Admiral Thrawn, Mara Jade, Talon Karrde, and Gilad Pellaeon. The first novel, Heir to the Empire, reached # 1 on the New York Times Best Seller list, and the series finds Luke, Leia, and Han facing off against tactical genius Thrawn, who is plotting to retake the galaxy for the Empire. In The Courtship of Princess Leia (1994) by Dave Wolverton, set immediately before the Thrawn trilogy, Leia considers an advantageous political marriage to Prince Isolder of the planet Hapes, but she and Han ultimately marry. Steve Perry 's Shadows of the Empire (1996), set in the as - yet - unexplored time period between The Empire Strikes Back and Return of the Jedi, was part of a multimedia campaign that included a comic book series and video game. The novel introduced the crime lord Prince Xizor, another popular character who would appear in multiple other works. Other notable series from Bantam include the Jedi Academy trilogy (1994) by Kevin J. Anderson, the 14 - book Young Jedi Knights series (1995 -- 1998) by Anderson and Rebecca Moesta, and the X-wing series (1996 -- 2012) by Michael A. Stackpole and Aaron Allston.
Del Rey took over Star Wars book publishing in 1999, releasing what would become a 19 - installment novel series called The New Jedi Order (1999 -- 2003). Written by multiple authors, the series was set 25 to 30 years after the original films and introduced the Yuuzhan Vong, a powerful alien race attempting to invade and conquer the entire galaxy. The bestselling multi-author series Legacy of the Force (2006 -- 2008) chronicles the crossover of Han and Leia 's son Jacen Solo to the dark side of the Force; among his evil deeds, he kills Luke 's wife Mara Jade as a sacrifice to join the Sith. The story parallels the fallen son of Han and Leia, Ben Solo / Kylo Ren, in the 2015 film The Force Awakens. Three series were introduced for younger audiences: the 18 - book Jedi Apprentice (1999 -- 2002) chronicles the adventures of Obi - Wan Kenobi and his master Qui - Gon Jinn in the years before The Phantom Menace; the 11 - book Jedi Quest (2001 -- 2004) follows Obi - Wan and his own apprentice, Anakin Skywalker in between The Phantom Menace and Attack of the Clones; and the 10 - book The Last of the Jedi (2005 -- 2008), set almost immediately after Revenge of the Sith, features Obi - Wan and the last few surviving Jedi. Maul: Lockdown by Joe Schreiber, released in January 2014, was the last Star Wars novel published before Lucasfilm announced the creation of the Star Wars Legends brand.
Though Thrawn had been designated a Legends character in 2014, he was reintroduced into the canon in the 2016 third season of Star Wars Rebels, with Zahn returning to write more novels based on the character and set in the reworked canon.
Marvel Comics published a Star Wars comic book series from 1977 to 1986. Original Star Wars comics were serialized in the Marvel magazine Pizzazz between 1977 and 1979. The 1977 installments were the first original Star Wars stories not directly adapted from the films to appear in print form, as they preceded those of the Star Wars comic series. From 1985 -- 1987, the animated children 's series Ewoks and Droids inspired comic series from Marvel 's Star Comics line.
In the late 1980s, Marvel dropped a new Star Wars comic it had in development, which was picked up by Dark Horse Comics and published as the popular Dark Empire sequence (1991 -- 1995). Dark Horse subsequently launched dozens of series set after the original film trilogy, including Tales of the Jedi (1993 -- 1998), X-wing Rogue Squadron (1995 -- 1998), Star Wars: Republic (1998 -- 2006), Star Wars Tales (1999 -- 2005), Star Wars: Empire (2002 -- 2006), and Knights of the Old Republic (2006 -- 2010).
After Disney 's acquisition of Lucasfilm, it was announced in January 2014 that in 2015 the Star Wars comics license would return to Marvel Comics, whose parent company, Marvel Entertainment, Disney had purchased in 2009. Launched in 2015, the first three publications were titled Star Wars, Star Wars: Darth Vader, and the limited series Star Wars: Princess Leia.
Radio adaptations of the films were also produced. Lucas, a fan of the NPR - affiliated campus radio station of his alma mater the University of Southern California, licensed the Star Wars radio rights to KUSC - FM for US $1. The production used John Williams ' original film score, along with Ben Burtt 's sound effects.
The first was written by science fiction author Brian Daley and directed by John Madden. It was broadcast on National Public Radio in 1981, adapting the original 1977 film into 13 episodes. Mark Hamill and Anthony Daniels reprised their film roles.
The overwhelming success led to a 10 - episode adaptation of The Empire Strikes Back in 1982. Billy Dee Williams joined the other two stars, reprising his role as Lando Calrissian.
In 1983, Buena Vista Records released an original, 30 - minute Star Wars audio drama titled Rebel Mission to Ord Mantell, written by Daley. In the 1990s, Time Warner Audio Publishing adapted several Star Wars series from Dark Horse Comics into audio dramas: the three - part Dark Empire saga, Tales of the Jedi, Dark Lords of the Sith, the Dark Forces trilogy, and Crimson Empire (1998). Return of the Jedi was adapted into six episodes in 1996, featuring Daniels.
The first officially licensed Star Wars electronic game was Kenner 's 1979 table - top Star Wars Electronic Battle Command. In 1982, Parker Brothers published the first licensed Star Wars video game, Star Wars: The Empire Strikes Back, for the Atari 2600. It was followed in 1983 by Atari 's rail shooter arcade game Star Wars, which used vector graphics and was based on the "Death Star trench run '' scene from the 1977 film. The next game, Return of the Jedi (1984), used more traditional raster graphics, with the following game, The Empire Strikes Back (1985), returning to the vector graphics of the 1983 arcade game but recreating the "Battle of Hoth '' scene instead.
Lucasfilm had started its own video game company in the early 1980s, which became known for adventure games and World War II flight combat games. In 1993, LucasArts released Star Wars: X-Wing, the first self - published Star Wars video game and the first space flight simulation based on the franchise. X-Wing was one of the best - selling games of 1993, and established its own series of games. Released in 1995, Dark Forces was the first Star Wars first - person shooter video game. A hybrid adventure game incorporating puzzles and strategy, it featured new gameplay features and graphical elements not then common in other games, made possible by LucasArts ' custom - designed game engine, called the Jedi. The game was well received and well reviewed, and was followed by four sequels. Dark Forces introduced the popular character Kyle Katarn, who would later appear in multiple games, novels, and comics. Katarn is a former Imperial stormtrooper who joins the Rebellion and ultimately becomes a Jedi, a plot arc similar to that of Finn in the 2015 film The Force Awakens.
Disney partnered with Lenovo to create the augmented reality game Star Wars: Jedi Challenges, which works with a Lenovo Mirage AR headset, a tracking sensor and a lightsaber controller, and launched in December 2017.
Aside from its well - known science fictional technology, Star Wars features elements such as knighthood, chivalry, and princesses that are related to archetypes of the fantasy genre. The Star Wars world, unlike fantasy and science - fiction films that featured sleek and futuristic settings, was portrayed as dirty and grimy. Lucas ' vision of a "used future '' was further popularized in the science fiction - horror films Alien, which was set on a dirty space freighter; Mad Max 2, which is set in a post-apocalyptic desert; and Blade Runner, which is set in a crumbling, dirty city of the future. Lucas made a conscious effort to parallel scenes and dialogue between films, and especially to parallel the journeys of Luke Skywalker with that of his father Anakin when making the prequels.
Star Wars contains many themes of political science that mainly favor democracy over dictatorship. Political science has been an important element of Star Wars since the franchise first launched in 1977. The plot climax of Revenge of the Sith is modeled after the fall of the democratic Roman Republic and the formation of an empire.
The stormtroopers from the movies share a name with the Nazi stormtroopers (see also Sturmabteilung). Imperial officers ' uniforms resemble some historical German uniforms of World War II and the political and security officers of the Empire resemble the black - clad SS down to the imitation silver death 's head insignia on their officer 's caps. World War II terms were used for names in Star Wars; examples include the planets Kessel (a term that refers to a group of encircled forces) and Hoth (Hermann Hoth was a German general who served on the snow - laden Eastern Front). Palpatine being Chancellor before becoming Emperor mirrors Adolf Hitler 's role as Chancellor before appointing himself Dictator. The Great Jedi Purge alludes to the events of The Holocaust, the Great Purge, the Cultural Revolution, and the Night of the Long Knives. In addition, Lucas himself has drawn parallels between Palpatine and his rise to power to historical dictators such as Julius Caesar, Napoleon Bonaparte, and Adolf Hitler. The final medal awarding scene in A New Hope, however, references Leni Riefenstahl 's Triumph of the Will. The space battles in A New Hope were based on filmed World War I and World War II dogfights.
Continuing the use of Nazi inspiration for the Empire, J.J. Abrams, the director of Star Wars: The Force Awakens, has said that the First Order, an Imperial offshoot which serves as the main antagonist of the sequel trilogy, is inspired by another aspect of the Nazi regime. Abrams spoke of how several Nazis fled to Argentina after the war and he claims that the concept for the First Order came from conversations between the scriptwriters about what would have happened if they had started working together again.
The Star Wars saga has had a significant impact on modern popular culture. Star Wars references are deeply embedded in popular culture; phrases like "evil empire '' and "May the Force be with you '' have become part of the popular lexicon. The first Star Wars film in 1977 was a cultural unifier, enjoyed by a wide spectrum of people. The film can be said to have helped launch the science fiction boom of the late 1970s and early 1980s, making science fiction films a mainstream blockbuster genre. This very impact made it a prime target for parody works and homages, with popular examples including Spaceballs, Family Guy 's Laugh It Up, Fuzzball, Robot Chicken 's "Star Wars Episode I '', "Star Wars Episode II '' and "Star Wars Episode III '', and Hardware Wars by Ernie Fosselius.
In 1989, the Library of Congress selected the original Star Wars film for preservation in the U.S. National Film Registry, as being "culturally, historically, or aesthetically significant. '' Its sequel, The Empire Strikes Back, was selected in 2010. Despite these selections for preservation, it is unclear whether copies of the 1977 and 1980 theatrical sequences of Star Wars and Empire -- or copies of the 1997 Special Edition versions -- have been archived by the NFR, or indeed if any copy has been provided by Lucasfilm and accepted by the Registry.
The original Star Wars film was a huge success for 20th Century Fox, and was credited for reinvigorating the company. Within three weeks of the film 's release, the studio 's stock price doubled to a record high. Prior to 1977, 20th Century Fox 's greatest annual profits were $37 million, while in 1977, the company broke that record by posting a profit of $79 million. The franchise helped Fox to change from an almost bankrupt production company to a thriving media conglomerate.
Star Wars fundamentally changed the aesthetics and narratives of Hollywood films, switching the focus of Hollywood - made films from deep, meaningful stories based on dramatic conflict, themes and irony to sprawling special - effects - laden blockbusters, as well as changing the Hollywood film industry in fundamental ways. Before Star Wars, special effects in films had not appreciably advanced since the 1950s. The commercial success of Star Wars created a boom in state - of - the - art special effects in the late 1970s. Along with Jaws, Star Wars started the tradition of the summer blockbuster film in the entertainment industry, where films open on many screens at the same time and profitable franchises are important. It created the model for the major film trilogy and showed that merchandising rights on a film could generate more money than the film itself did.
The Star Wars saga has inspired many fans to create their own non-canon material set in the Star Wars galaxy. In recent years, this has ranged from writing fan fiction to creating fan films. In 2002, Lucasfilm sponsored the first annual Official Star Wars Fan Film Awards, officially recognizing filmmakers and the genre. Because of concerns over potential copyright and trademark issues, however, the contest was initially open only to parodies, mockumentaries, and documentaries. Fan fiction films set in the Star Wars universe were originally ineligible, but in 2007, Lucasfilm changed the submission standards to allow in - universe fiction entries. Lucasfilm, for the most part, has allowed but not endorsed the creation of these derivative fan fiction works, so long as no such work attempts to make a profit from or tarnish the Star Wars franchise in any way. While many fan films have used elements from the licensed Expanded Universe to tell their story, they are not considered an official part of the Star Wars canon.
As the characters and the story line of the original trilogy are so well known, educationalists have advocated the use of the films in the classroom as a learning resource. For example, a project in Western Australia honed elementary school students ' story - telling skills by role - playing action scenes from the movies and later creating props and audio / visual scenery to enhance their performance. Others have used the films to encourage second - level students to integrate technology in the science classroom by making prototype lightsabers. Similarly, psychiatrists in New Zealand and the US have advocated their use in the university classroom to explain different types of psychopathology.
The success of the Star Wars films led the franchise to become one of the most merchandised franchises in the world. In 1977, while filming the original film, George Lucas decided to take a 500,000 - dollar pay cut to his salary as director in exchange for full ownership of the franchise 's merchandising rights. Over the franchise 's lifetime, that exchange has cost 20th Century Fox more than US $20 billion in merchandising revenue. Disney acquired the merchandising rights as part of its purchase of Lucasfilm.
Kenner made the first Star Wars action figures to coincide with the release of the film, and today the surviving 1980s figures sell for extremely high prices at auction. Since the 1990s, Hasbro has held the rights to create action figures based on the saga. Pez dispensers have also been produced. Star Wars was the first intellectual property to be licensed in Lego Group history, and it has produced a Star Wars Lego theme. Lego has produced animated parody short films to promote its sets, among them Revenge of the Brick (2005) and The Quest for R2 - D2 (2009), the former parodying Revenge of the Sith and the latter The Clone Wars film. Due to their success, Lego created animated comedy mini-series, among them The Yoda Chronicles (2013 -- 2014) and Droid Tales (2015), which originally aired on Cartoon Network but moved to Disney XD in 2014. The Lego Star Wars video games are critically acclaimed best sellers.
Board games have been produced since 1977, beginning with Star Wars: Escape from the Death Star (not to be confused with another board game of the same title, published in 1990). The board game Risk has been adapted to the series in two editions by Hasbro: Star Wars Risk: The Clone Wars Edition (2005) and Risk: Star Wars Original Trilogy Edition (2006).
Three different official tabletop role - playing games have been developed for the Star Wars universe: a version by West End Games in the 1980s and 1990s, one by Wizards of the Coast in the 2000s, and one by Fantasy Flight Games in the 2010s.
Star Wars trading cards have been published since the first "blue '' series, by Topps, in 1977. Dozens of series have been produced, with Topps being the licensed creator in the United States. Some of the card series are of film stills, while others are original art. Many of the cards have become highly collectible with some very rare "promos '', such as the 1993 Galaxy Series II "floating Yoda '' P3 card often commanding US $1,000 or more. While most "base '' or "common card '' sets are plentiful, many "insert '' or "chase cards '' are very rare. From 1995 until 2001, Decipher, Inc. had the license for, created and produced a collectible card game based on Star Wars; the Star Wars Collectible Card Game (also known as SWCCG).
|
when the federal reserve uses open market operations and sells treasury bonds to banks | Open market operation - wikipedia
An open market operation (OMO) is an activity by a central bank to give (or take) liquidity in its currency to (or from) a bank or a group of banks. The central bank can either buy or sell government bonds in the open market (this is where the name was historically derived from) or, in what is now mostly the preferred solution, enter into a repo or secured lending transaction with a commercial bank: the central bank gives the money as a deposit for a defined period and synchronously takes an eligible asset as collateral. A central bank uses OMO as the primary means of implementing monetary policy. The usual aim of open market operations is -- aside from supplying commercial banks with liquidity and sometimes taking surplus liquidity from commercial banks -- to manipulate the short - term interest rate and the supply of base money in an economy, and thus indirectly control the total money supply, in effect expanding or contracting the money supply. This involves meeting the demand of base money at the target interest rate by buying and selling government securities, or other financial instruments. Monetary targets, such as inflation, interest rates, or exchange rates, are used to guide this implementation.
The central bank maintains loro accounts for a group of commercial banks, the so - called direct payment banks. A balance on such a loro account (it is a nostro account in the view of the commercial bank) represents central bank money in the regarded currency. Since central bank money currently exists mainly in the form of electronic records (electronic money) rather than in the form of paper or coins (physical money), open market operations can be conducted by simply increasing or decreasing (crediting or debiting) the amount of electronic money that a bank has in its reserve account at the central bank. This does not require the creation of new physical currency, unless a direct payment bank demands to exchange a part of its electronic money against banknotes or coins.
In most developed countries, central banks are not allowed to give loans without requiring suitable assets as collateral. Therefore, most central banks describe which assets are eligible for open market transactions. Technically, the central bank makes the loan and synchronously takes an equivalent amount of an eligible asset supplied by the borrowing commercial bank.
Classical economic theory postulates a distinctive relationship between the supply of central bank money and short - term interest rates: as with a commodity, a higher demand for central bank money would increase its price, the interest rate. When there is an increased demand for base money, the central bank must act if it wishes to maintain the short - term interest rate. It does this by increasing the supply of base money: it goes to the open market to buy a financial asset, such as government bonds. To pay for these assets, new central bank money is generated in the seller 's loro account, increasing the total amount of base money in the economy. Conversely, if the central bank sells these assets in the open market, the base money is reduced.
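To make the mechanics above concrete, the following sketch models a central bank ledger in which an outright purchase of bonds credits a commercial bank 's reserve (loro) account, creating base money, while an outright sale debits it. This is a deliberately minimal illustration: the bank name and the amounts are hypothetical, and real operations involve collateral rules, settlement systems and many other constraints.

```python
# Minimal, illustrative model of how an open market operation changes base money
# (bank reserves). The bank name and figures are hypothetical; real operations
# also involve collateral, settlement infrastructure and legal constraints.

class CentralBank:
    def __init__(self):
        self.reserve_accounts = {}   # commercial bank -> reserve (loro) balance
        self.bonds_held = 0.0        # government bonds on the central bank's balance sheet

    def open_market_purchase(self, bank, amount):
        """Buy bonds from a commercial bank: reserves (base money) are created."""
        self.bonds_held += amount
        self.reserve_accounts[bank] = self.reserve_accounts.get(bank, 0.0) + amount

    def open_market_sale(self, bank, amount):
        """Sell bonds to a commercial bank: reserves (base money) are destroyed."""
        if self.reserve_accounts.get(bank, 0.0) < amount:
            raise ValueError("bank lacks sufficient reserves to pay for the bonds")
        self.bonds_held -= amount
        self.reserve_accounts[bank] -= amount

    def base_money(self):
        return sum(self.reserve_accounts.values())


cb = CentralBank()
cb.open_market_purchase("Bank A", 100.0)   # expansionary: 100 of reserves created
cb.open_market_sale("Bank A", 40.0)        # contractionary: 40 of reserves destroyed
print(cb.base_money())                     # 60.0 of base money remains outstanding
```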
Technically, the process works because the central bank has the authority to bring money in and out of existence. It is the only point in the whole system with the unlimited ability to produce money. Another organization may be able to influence the open market for a period of time, but the central bank will always be able to overpower their influence with an infinite supply of money.
Side note: Countries that have a free - floating currency not pegged to any commodity or other currency have a similar capacity to produce an unlimited amount of net financial assets (bonds). Understandably, governments would like to use this capacity to meet other political ends like unemployment rate targeting or the relative size of various public services (military, education, health, etc.), rather than any specific interest rate. Mostly, however, the central bank is prevented by law or convention from giving way to such demands, being required to only generate central bank money in exchange for eligible assets (see above).
In the United States, as of 2006, the Federal Reserve sets an interest rate target for the Federal funds (overnight bank reserves) market. When the actual Federal funds rate is higher than the target, the New York Reserve Bank will usually increase the money supply via a repo (effectively borrowing from the dealers ' perspective; lending for the Reserve Bank). When the actual Federal funds rate is less than the target, the Bank will usually decrease the money supply via a reverse repo (effectively lending from the dealers ' perspective; borrowing for the Reserve Bank).
In the U.S., the Federal Reserve most commonly uses overnight repurchase agreements (repos) to temporarily create money, or reverse repos to temporarily destroy money, which offset temporary changes in the level of bank reserves. The Federal Reserve also makes outright purchases and sales of securities through the System Open Market Account (SOMA) with its manager over the Trading Desk at the New York Reserve Bank. The trade of securities in the SOMA changes the balance of bank reserves, which also affects short - term interest rates. The SOMA manager is responsible for trades that result in a short - term interest rate near the target rate set by the Federal Open Market Committee (FOMC), or create money by the outright purchase of securities. More rarely will it permanently destroy money by the outright sale of securities. These trades are made with a group of about 22 banks and bond dealers called primary dealers.
Money is created or destroyed by changing the reserve account of the bank with the Federal Reserve. The Federal Reserve has conducted open market operations in this manner since the 1920s, through the Open Market Desk at the Federal Reserve Bank of New York, under the direction of the Federal Open Market Committee. The open market operation is also a means through which inflation can be controlled because when treasury bills are sold to commercial banks these banks can no longer give out loans to the public for the period and therefore money is being reduced from circulation.
The European Central Bank has similar mechanisms for its operations; it describes its methods as a four - tiered approach with different goals: besides its main goal of steering and smoothing Eurozone interest rates while managing the liquidity situation in the market, the ECB also has the aim of signalling the stance of monetary policy with its operations.
Broadly speaking, the ECB controls liquidity in the banking system via Refinancing Operations, which are basically repurchase agreements, i.e. banks put up acceptable collateral with the ECB and receive a cash loan in return. These are the following main categories of refinancing operations that can be employed depending on the desired outcome:
Refinancing operations are conducted via an auction mechanism. The ECB specifies the amount of liquidity it wishes to auction (called the allotted amount) and asks banks for expressions of interest. In a fixed rate tender the ECB also specifies the interest rate at which it is willing to lend money; alternatively, in a variable rate tender the interest rate is not specified and banks bid against each other (subject to a minimum bid rate specified by the ECB) to access the available liquidity.
MRO auctions are held on Mondays, with settlement (i.e. disbursal of the funds) occurring the following Wednesday. For example, at its auction on 6 October 2008, the ECB made €250 million available on 8 October at a minimum rate of 4.25 %. It received €271 million in bids, and the allotted amount (€250 million) was awarded at a weighted average rate of 4.99 %.
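As an illustration of how a weighted average allotment rate arises in a variable rate tender, the sketch below fills bids from the highest rate downward until the allotted amount is exhausted. Only the 250 million allotment, the 271 million of total bids and the 4.25 % minimum bid rate are taken from the auction described above; the individual bids are invented for the example, so the resulting average will not match the published 4.99 %.

```python
# Illustrative allotment for a variable rate tender: bids at or above the minimum
# bid rate are filled from the highest rate downward until the allotted amount is
# exhausted; the marginal bid may be filled only partially. The bid schedule below
# is hypothetical.

def allot(bids, allotted_amount, min_bid_rate):
    """bids: list of (rate, amount) tuples. Returns (allocations, weighted average rate)."""
    eligible = sorted((b for b in bids if b[0] >= min_bid_rate), reverse=True)
    remaining = allotted_amount
    allocations = []
    for rate, amount in eligible:
        if remaining <= 0:
            break
        filled = min(amount, remaining)      # marginal bid may be cut short
        allocations.append((rate, filled))
        remaining -= filled
    total = sum(a for _, a in allocations)
    wavg = sum(r * a for r, a in allocations) / total if total else 0.0
    return allocations, wavg

# Hypothetical bids in millions of EUR: (rate in %, amount bid); they sum to 271.
bids = [(5.10, 80), (5.00, 120), (4.90, 50), (4.30, 21)]
allocations, wavg = allot(bids, allotted_amount=250, min_bid_rate=4.25)
print(allocations)      # [(5.1, 80), (5.0, 120), (4.9, 50)] -- 250 fully allotted
print(round(wavg, 2))   # 5.01 -- weighted average rate of the allotted funds
```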
Since mid-October 2008, however, the ECB has been following a different procedure on a temporary basis, the fixed rate MRO with full allotment. In this case the ECB specifies the rate but not the amount of credit made available, and banks can request as much as they wish (subject as always to being able to provide sufficient collateral). This procedure was made necessary by the financial crisis of 2008 and is expected to end at some time in the future.
Though the ECB 's main refinancing operations (MRO) are repo auctions with a (bi) weekly maturity and monthly maturation, Longer - Term Refinancing Operations (LTROs) are also issued, which traditionally mature after three months; since 2008, tenders are also offered for six months, 12 months and 36 months.
The Swiss National Bank currently targets the 3 - month Swiss franc LIBOR rate. The primary way the SNB influences the 3 - month Swiss franc LIBOR rate is through open market operations, with the most important monetary policy instrument being repo transactions.
India 's open market operations are much influenced by the fact that it is a developing country and that its capital flows are very different from those in developed countries. Thus the Reserve Bank of India (India 's central bank) has to make policies and use instruments accordingly. Prior to the 1991 financial reforms, the RBI 's major sources of funding and control over credit and interest rates were the CRR (Cash Reserve Ratio) and the SLR (Statutory Liquidity Ratio). But after the reforms, the use of the CRR as an effective tool was de-emphasized and the use of open market operations increased. OMOs are more effective in adjusting market liquidity.
The two traditional types of OMOs used by the RBI are outright purchases or sales of government securities, which are permanent in nature, and repurchase agreements (repos), which are short - term and subject to repurchase.
However, even after sidelining the CRR as an instrument, there was still less liquidity and skewedness in the market. Thus, on the recommendations of the Narsimham Committee Report (1998), the RBI introduced the Liquidity Adjustment Facility (LAF). It commenced in June 2000 and was set up to oversee liquidity on a daily basis and to monitor market interest rates. For the LAF, two rates are set by the RBI: the repo rate and the reverse repo rate. The repo rate is applicable while selling securities to the RBI (daily injection of liquidity), while the reverse repo rate is applicable when banks buy back those securities (daily absorption of liquidity). These interest rates fixed by the RBI also help in determining other market interest rates.
India experiences large capital inflows every day, and even though the OMO and LAF policies were able to absorb the inflows, another instrument was needed to keep liquidity intact. Thus, on the recommendations of the RBI 's Working Group on Instruments of Sterilization (December 2003), a new scheme known as the Market Stabilization Scheme (MSS) was set up. The LAF and OMOs deal with day - to - day liquidity management, whereas the MSS was set up to sterilize liquidity absorption and make it more enduring.
According to this scheme, the RBI issues additional T - bills and securities to absorb the liquidity, and the money goes into the Market Stabilization Scheme Account (MSSA). The RBI can not use this account for paying any interest or discounts and can not credit any premiums to this account. The Government, in collaboration with the RBI, fixes a ceiling amount on the issue of these instruments.
|
when are wheel lock lug nuts commonly used | Lug nut - wikipedia
A lug nut or wheel nut is a fastener, specifically a nut, used to secure a wheel on a vehicle. Typically, lug nuts are found on automobiles, trucks (lorries), and other large vehicles using rubber tires.
A lug nut is a nut with one rounded or conical (tapered) end, used on steel and most aluminum wheels. A set of lug nuts is typically used to secure a wheel to threaded wheel studs and thereby to a vehicle 's axles.
Some designs (Audi, BMW, Mercedes - Benz, Saab, Volkswagen) use lug bolts instead of nuts, which screw into a tapped (threaded) hole in the wheel 's hub or drum brake or disc. This configuration is commonly known as a bolted joint.
The conical lug 's taper is normally 60 degrees (although 45 is common for wheels designed for racing applications), and is designed to center the wheel accurately on the axle, and to reduce the tendency for the nut to loosen, due to fretting - induced precession, as the car is driven. One popular alternative to the conical lug seating design is the spherical, or ball, seat. Automotive manufacturers such as Audi, BMW and Honda use this design rather than a tapered seat, but the nut performs the same function. Older style (non-ferrous) alloy wheels have a 1 / 2 to 1 inch cylindrical shank slipping into the wheel to center it and a washer that applies pressure to clamp the wheel to the axle.
Wheel lug nuts may have different shapes. Aftermarket alloy and forged rims often require specific lug nuts to match their mounting holes, so it is often required to get a new set of lug nuts when the rims are changed.
There are 4 common lug nut types:
Lug nuts may be removed using a lug, socket or impact wrench. If the wheel is to be removed then an automotive jack to raise the vehicle and some wheel chocks would be used as well. Wheels that have hubcaps or hub covers need these removed beforehand, typically with a screwdriver, flatbar, or prybar. Lug nuts can be difficult to remove, as they may become frozen to the wheel stud. In such cases a breaker bar or repeated blows from an impact wrench can be used to free them. Alternating between tightening and loosening can free especially stubborn lug nuts.
Lug nuts must be installed in an alternating pattern, commonly referred to as a star pattern. This ensures a uniform distribution of load across the wheel mounting surface. When installing lug nuts, it is recommended to tighten them with a calibrated torque wrench. While a lug, socket or impact wrench may be used to tighten lug nuts, the final tightening should be performed with a torque wrench, ensuring an accurate and adequate load is applied. Torque specifications vary by vehicle and wheel type. Both vehicle and wheel manufacturers provide recommended torque values, which should be consulted when an installation is done. Failure to abide by the recommended torque value can result in damage to the wheel and brake rotor / drum. Additionally, under - tightened lug nuts may come loose over time.
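As a rough illustration of the alternating star order described above, the sketch below always moves to the not - yet - tightened lug that sits farthest around the bolt circle from the one just tightened. It is only a demonstration of the criss - cross idea, not a manufacturer 's specification; the actual sequence and torque values should always come from the vehicle 's service documentation.

```python
# Illustrative generator for a criss-cross ("star") tightening order: starting at
# lug 1, always move to the not-yet-tightened lug that is farthest around the bolt
# circle from the current one. A sketch of the general idea only; always follow the
# vehicle manufacturer's specified sequence and torque values.

def star_pattern(num_lugs):
    def circular_distance(a, b):
        d = abs(a - b)
        return min(d, num_lugs - d)          # distance around the bolt circle

    order = [0]
    remaining = set(range(1, num_lugs))
    while remaining:
        current = order[-1]
        # pick the farthest remaining lug; ties broken by the lower lug number
        nxt = max(sorted(remaining), key=lambda lug: circular_distance(current, lug))
        order.append(nxt)
        remaining.remove(nxt)
    return [lug + 1 for lug in order]        # report 1-based lug numbers

print(star_pattern(4))   # [1, 3, 2, 4] -- cross pattern
print(star_pattern(5))   # [1, 3, 5, 2, 4] -- classic 5-lug star
print(star_pattern(6))   # [1, 4, 2, 5, 3, 6]
```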
In order to allow early detection of loose lug nuts, some large vehicles are fitted with loose wheel nut indicators. The indicator spins with the nut, so that it can be detected with a visual inspection.
In countries where the theft of alloy wheels is a serious problem, locking nuts (or bolts, as applicable) are available - or already fitted by the vehicle manufacturer - which require a special adaptor ("key '') between the nut and the wrench to fit and remove. The key is normally unique to each set of nuts. Only one locking nut per wheel is normally used, so they are sold in sets of four. Most designs can be defeated using a hardened removal tool which uses a left - hand self - cutting thread to grip the locking nut, although more advanced designs have a spinning outer ring to frustrate such techniques.
In the United States, vehicles manufactured by the Chrysler Corporation used left - hand and right - hand screw thread for different sides of the vehicle to prevent loosening prior to the 1975 models. Most Buicks, Pontiacs, and Oldsmobiles used both left - handed and right - handed lug nuts prior to model year 1965. It was later realized that the taper seat performed the same function. Modern vehicles use right - hand threads on all wheels.
|
index of are you the one season 6 | List of Trailer park Boys episodes - wikipedia
Trailer Park Boys is a Canadian mockumentary television series created and directed by Mike Clattenburg and a continuation of Clattenburg 's 1999 film of the same name. The series focuses on the misadventures of a group of trailer park residents, some of whom are ex-convicts, living in the fictional Sunnyvale Trailer Park in Dartmouth, Nova Scotia.
All episodes from seasons 1 to 7 are directed by series creator Mike Clattenburg, who also co-wrote every episode. John Paul Tremblay, who plays Julian, and Robb Wells, who plays Ricky, also co-wrote all episodes. Producer Barrie Dunn, who also plays Ray, co-wrote all episodes during the first two seasons, as well as co-writing one season three episode. Season 6 was the last season featuring Cory (Cory Bowles) and Trevor (Michael Jackson), who left the show afterward. Jackie Torrens, sister of cast member Jonathan Torrens, co-wrote the third - season episodes, and Michael Volpe, who serves as a producer, co-wrote an episode. Mike Smith, who plays Bubbles, began co-writing episodes from season four to six. Iain MacLeod, who is also a story editor, began co-writing various episodes beginning with season four, and co-wrote all season six and seven episodes. Jonathan Torrens, who plays J - Roc, began co-writing episodes in seasons five and six. Timm Hannebohm joined the writing staff in season seven, and co-wrote all episodes.
Starting with season 8, Mike Clattenburg no longer directs or writes any episodes and has no formal involvement in the show after selling the rights to the trio. He is credited at the end of every episode of the revived series as "Based on the original Trailer Park Boys series produced by Mike Clattenburg, Barrie Dunn and Mike Volpe. '' Each episode is written by the series stars John Paul Tremblay (as JP Tremblay), Robb Wells and Mike Smith and is directed by various directors.
On July 4, 2013, it was announced that John Paul Tremblay, Robb Wells and Mike Smith had acquired the rights to Trailer Park Boys and confirmed it would return with an eighth season, which would be broadcast on their Internet channel, SwearNet.com. However, on March 5, 2014, it was announced that season 8 would premiere exclusively on Netflix, and that the streaming service would also make a ninth season.
On March 5, 2014, it was confirmed that Netflix would air seasons 8 and 9. All episodes of season 9 were released on Netflix on March 27, 2015.
A short film about the "Cart Boy '' who later became Bubbles in Trailer Park Boys. Also features Ricky and Jason (who later became Julian) as security guards.
After a psychic predicts his death, a small - time hoodlum named Julian hires a cheap documentary film crew to document the last few days of his mis - spent life.
This special takes us back to 1997 before Randy was Assistant Trailer Park Supervisor, before J - Roc and Tyrone were "urbanized, '' before Barb and Jim broke up, before the Shitmobile lost a door, and before Lahey was an alcoholic.
Leading up to the premiere of season 8, six short web clip episodes were made available for their website, called Season 7.5.
Between Seasons 10 and 11, this mini-season was released on Netflix. Official description: "The Trailer Park Boys are thrilled to get a free trip to Europe, until they arrive and learn about their corporate sponsor 's unusual requirements. ''
|
where does star wars battlefront 2 take place | Star Wars Battlefront II (2017 video game) - wikipedia
Star Wars Battlefront II is an action shooter video game based on the Star Wars film franchise. It is the fourth major installment of the Star Wars: Battlefront series and seventh overall, and a sequel to the 2015 reboot of the series. It was developed by EA DICE, in collaboration with Criterion Games and Motive Studios, and published by Electronic Arts. The game was released worldwide on November 17, 2017 for PlayStation 4, Xbox One, and Microsoft Windows.
Upon release, Battlefront II received mixed reviews from critics. The game was also subject to widespread criticism regarding its loot boxes, which could give players substantial gameplay advantages if purchased with real money. A reply from EA 's community team on Reddit on the topic became the single most downvoted comment in the site 's history -- and in response, EA decided to temporarily remove microtransactions from the game until a later date. In January 2018, EA announced that the microtransactions would return "in the next few months ''.
Star Wars Battlefront II features a single - player story mode, a customizable character class system, and content based on The Force Awakens and The Last Jedi movies. It also features vehicles and locations from the original, prequel, and sequel Star Wars movie trilogies. It also features heroes and villains that can be played based on characters from the Star Wars movies; the hero roster includes Luke Skywalker, Leia Organa, Han Solo, Chewbacca, Lando Calrissian, Yoda, and Rey, while the villain roster includes Darth Vader, Emperor Palpatine, Boba Fett, Bossk, Iden Versio, Darth Maul, and Kylo Ren at launch.
Unlike 2015 's Battlefront, the game features a full campaign story mode. The game 's single - player protagonist, Iden Versio (Janina Gavankar), leader of an Imperial Special Forces group known as Inferno Squad, participates in multiple events in the 30 years leading up to The Force Awakens. There are segments in the campaign where the player is able to control other characters, such as Luke Skywalker and Kylo Ren. Players can also play in arcade mode -- an offline single - player or local co-op mode where players can choose which side to play on and which battle to play in. Battles vary from team battles to onslaughts. Alternatively, players can choose to do a custom match, where they can change some of the settings and location.
Instead of the paid Season Pass downloadable content (DLC) seen in the 2015 predecessor, this game is expanded with free DLC provided to all players with a free EA account. Lead actress Janina Gavankar stated that the DLC would be free to all players, using a seasonal structure similar to Overwatch and Rainbow Six Siege. The first season, released on December 13, 2017, was based on the movie Star Wars: The Last Jedi, and included Finn and Captain Phasma as heroes, the planet Crait as a ground map, and a space map above D'Qar.
Star Wars Battlefront II features five multiplayer game modes, with the largest supporting up to 40 simultaneous players. Galactic Assault is centered around unique set pieces set across the eleven planets and locations featuring all three Star Wars eras, involving a team of 20 attackers against 20 defenders. In Starfighter Assault, battles take place in space and planetary atmospheres involving 12 attackers against 12 defenders, both teams being reinforced with an additional 20 AI ships. Strike has players battling in close - quarters scenarios involving a team of eight attackers aiming to capture a unique objective from a team of eight defenders, essentially a one - sided game of capture the flag. Heroes vs. Villains is a team deathmatch mode involving the heroes and villains in Star Wars Battlefront II based on Star Wars characters, where four light side heroes fight four dark side villains. Blast, the final mode, is a standard team deathmatch between two teams of 10 players in which teams try to reach 100 total combined eliminations before the enemy team can. Alongside an update released on February 19, 2018, a new temporary game mode, Jetpack Cargo, was released, in which two teams of eight battle to capture cargo while all players are equipped with a jetpack.
The single - player story mode campaign in Star Wars Battlefront II takes place in the Star Wars galaxy, beginning around the time of Return of the Jedi, but largely between it and Star Wars: The Force Awakens. Emperor Palpatine plots to lure an unsuspecting Rebel Alliance fleet into a trap using himself and the second Death Star, being constructed above the Forest Moon of Endor, as bait, seeking to crush the Rebellion against his Galactic Empire once and for all. The Imperial Special Forces commando unit Inferno Squad, led by Commander Iden Versio, daughter of Admiral Garrick Versio, and made up of Agents Gideon Hask and Del Meeko, is crucial to the success of this planned Battle of Endor, but the Empire underestimates the strength of the Rebellion as its fleet gathers at Sullust.
Iden Versio is being interrogated for the codes to unlock an Imperial transmission aboard a Rebel Mon Calamari Star Cruiser. She activates her droid, which sneaks to her cell and frees her. Iden had allowed herself to be captured in order to erase the Imperial transmission, which would reveal the Emperor 's plan at Endor. She successfully erases it, then escapes the ship by launching herself into space where she is intercepted by the Corvus, the flagship of Inferno Squad. Iden confirms the mission 's success to Gideon Hask and Del Meeko, other members of her squad.
Later on Endor, Iden, Hask, and Meeko secure the perimeter around the ruined shield generator, and watch with shock and horror as the second Death Star explodes. Vice Admiral Sloane orders a full retreat, and Inferno Squad recovers TIE fighters to escape the planet, which is being overrun by Rebel forces. The Corvus is attacked during their escape, but Inferno fends off Rebel bombers. Iden meets with her father, Admiral Garrick Versio, on his Star Destroyer Eviscerator.
Admiral Versio confirms to Iden that the Emperor has died. A messenger droid displays a hologram of the late Emperor issuing his last command: to begin Operation: Cinder. Admiral Versio sends Iden to an Imperial shipyard to protect Moff Raythe and his Star Destroyer Dauntless, which hosts experimental satellites vital to the success of Operation: Cinder. The Dauntless comes under attack from a Rebel Star Cruiser, but Iden is able to board it with Hask and disable its ion cannons. Afterwards, they are ordered to attack the Imperial shipyard in order to free the Star Destroyer from the locked clamps. Afterwards, the Dauntless opens fire on the Rebel cruiser, destroying it.
Meeko is sent to Pillio and ordered to destroy one of the Emperor 's hidden bases. He encounters Luke Skywalker, who helps him disarm the base 's defenses and fend off the local wildlife. They discover that the base contains the Emperor 's spoils of conquest. Meeko and Luke part amicably, and Meeko begins to question the Empire 's goals and motives. Following this, Iden and Inferno Squad are sent to the Imperial - controlled world of Vardos, in order to retrieve Protectorate Gleb. As the satellites for Operation Cinder begin destroying the planet with terrible storms, Iden and Meeko try to evacuate the civilians in addition to Gleb, causing Agent Hask to betray them. Disillusioned by the Empire 's attack on Vardos, Iden and Meeko escape off world, now traitors to the Empire. They seek out the Rebel Alliance and are taken to General Lando Calrissian, who gives them the choice to help stop Operation: Cinder, or to escape and make new lives for themselves. Choosing to help, they aid Leia Organa in protecting Naboo, destroying the satellites for Operation: Cinder and reactivating the planet 's defenses. After Naboo is liberated, Inferno Squad joins the New Republic.
Iden and Inferno Squad are then sent to Takodana to find Han Solo, who was extracting an Imperial defector carrying critical data in hopes of liberating Kashyyyk and freeing the Wookiees. The data also reveals that Admiral Versio is commanding Imperial operations on Bespin and Sullust. Iden and Del infiltrate Bespin with the intent of capturing Admiral Versio, but he and Hask manage to escape. Meanwhile, Lando investigates the hidden Imperial weapons cache on Sullust, only to find a weapons factory which he destroys. These operations cripple the Imperial fleet, which makes a last stand at Jakku. During the battle, Iden shoots down Hask and boards the Eviscerator, intending to rescue her father. Admiral Versio decides to go down with his ship, feeling obligated to die with the Empire he fought to protect. He instead urges Iden to escape and live a new life, commending her for seeing the weakness of the Empire. Iden takes an escape pod and reunites with Del at the end of the battle. The two embrace and kiss, as the battle marks the end of the Galactic Empire.
Many decades later, Del is captured on Pillio by Protectorate Gleb, who hands him over to Kylo Ren and the First Order. Ren uses the Force to interrogate Del about the location of the map leading to Luke Skywalker. Once Ren succeeds, he leaves Del in the custody of Hask, who survived getting shot down at Jakku. Hask expresses disgust at Del choosing to father a daughter with Iden instead of becoming a soldier and kills him, but not before Del warns him not to confront Iden. Hask then warns Gleb that the Republic can not find out about "Project Resurrection '' and orders her to leave the Corvus on Pillio as bait to lure Iden out of hiding.
Shriv, now an agent for the Resistance, discovers the abandoned Corvus and informs Iden and her daughter, Zay. Shriv also reveals that Del had been helping the Resistance investigate rumors of mass disappearances that may be connected to Project Resurrection before disappearing himself. They head to Athulla, where Del was last seen, to investigate. However, they are ambushed by a Jinata Security fleet. Iden and Zay destroy the fleet and capture the flagship. The surviving Jinata Security crew admit that they had been kidnapping children on behalf of the First Order, and that Project Resurrection had been moved to Vardos.
Iden, Zay, and Shriv return to Vardos. Iden has Zay stay behind on the Corvus while she and Shriv investigate the surface, where they see a bright red light appear in the sky. Iden and Shriv discover Gleb 's dead body and are then captured by Hask, who taunts them by telling them that he killed Gleb and Del and the First Order has already destroyed the Senate and the Hosnian System - the red lights from earlier (as seen in The Force Awakens). He then orders his Star Destroyer the Retribution to destroy the Corvus along with Zay. Jinata Security personnel, angry at the First Order betraying them, attack Hask 's men, giving Iden and Shriv an opportunity to escape. They rescue Zay who managed to eject in an escape pod before the Corvus was destroyed, and they board the Retribution.
Iden fights her desire to get revenge on Hask and instead focuses on stealing any useful data from the ship to aid the Resistance. They hack a computer terminal and discover that Project Resurrection is an operation by the First Order to kidnap children from across the galaxy and indoctrinate them into stormtroopers. In addition, they discover that the First Order has built up a massive fleet large enough to retake the galaxy. Finally, they find the plans for a First Order Dreadnought and steal them. Shriv then goes to secure an escape craft while Iden and Zay plant explosive charges on the Retribution 's hyperspace generators. Hask ambushes them but is killed by Iden. The destruction of the hyperspace generators pulls the Retribution out of hyperspace near Starkiller Base right as the Resistance destroys it. Iden reveals that she had been mortally wounded during the battle with Hask. She gives the Dreadnought plans to Zay and orders her to escape without her before dying.
Zay and Shriv link up with the Resistance and hand over the Dreadnought plans to Leia. She then orders them to head to the Outer Rim to gather more allies for the Resistance.
On May 10, 2016, the development of Star Wars Battlefront II was announced, led by EA DICE in collaboration with Criterion Games and Jade Raymond 's Motive Studios. The sequel to 2015 's rebooted Star Wars Battlefront features content from the sequel trilogy of films. Creative director Bernd Diemer stated that the company replaced the Season Pass system of paid expansion content because that system was determined to have "fragmented '' the player community of the 2015 predecessor. The new expansion system is designed to allow all players "to play longer ''. Executive producer Matthew Webster announced on April 15, 2017 at Star Wars Celebration that the worldwide release of the game would be November 17, 2017. The Battlefront II beta test period started on October 4, 2017, for players who pre-ordered the game. It was expanded to an open beta on October 6, and ran until October 11. A 10 - hour trial version was made available to EA Access and Origin Access subscribers on November 9, 2017.
A tie - in novel, Star Wars Battlefront II: Inferno Squad, was released on July 25, 2017. Written by Christie Golden, it serves as a direct prelude to the game and follows the exploits of the Galactic Empire 's titular squad as it seeks to eliminate what was left of Saw Gerrera 's rebel cell after the events of the 2016 film Rogue One.
On November 10, 2017, Electronic Arts announced the first in a series of free downloadable content for the game, featuring the planets D'Qar and Crait and the playable hero characters Finn and Captain Phasma. This content is a direct tie - in to December 's Star Wars: The Last Jedi.
Star Wars Battlefront II received "mixed or average '' reviews, according to review aggregator Metacritic. Metacritic user reviews for the PlayStation 4 version reached a low rating of 0.8 / 10, labelled as "overwhelming dislike '', due to the controversies and review bombing.
In his 4 / 5 star review for GamesRadar, Andy Hartup praised the multiplayer but criticized the single player modes, saying the game has a "very strong multiplayer offering tarnished by overly complicated character progression, and a lavish, beautiful story campaign lacking in substance or subtlety. '' GameRevolution felt the campaign started strong but weakened as it progressed, praising the multiplayer gameplay while criticizing the micro-transactions, loot box progression system, and locking of heroes.
For EGM 's review, Nick Plessas praised the multiplayer combat, balancing, and variety, but criticized the game 's sustained focus around loot crates. Andrew Reiner of Game Informer gave the game 6.5 / 10, writing "Answering the call for more content, Star Wars Battlefront II offers a full campaign and more than enough multiplayer material, but the entire experience is brought down by microtransactions. '' IGN 's Tom Marks also gave the game 6.5 / 10, saying "Star Wars Battlefront 2 has great feeling blasters, but its progression system makes firing them an unsatisfying grind. ''
The game was nominated for "Best Shooter '', "Best Graphics '' and "Best Multiplayer '' in IGN 's Best of 2017 Awards, and was a runner - up for "Most Disappointing Game '' in Giant Bomb 's 2017 Game of the Year Awards. In Game Informer 's Reader 's Choice Best of 2017 Awards, fewer readers voted for the game for "Best Co-op Multiplayer ''. The website also awarded the game for "Best Graphics '', "Best Audio '' and "Biggest Disappointment '' in their 2017 Shooter of the Year Awards.
In the US, Star Wars Battlefront II was the second best - selling title in November behind rival first - person shooter Call of Duty: WWII. Within its first week on sale in Japan, the PlayStation 4 version sold 38,769 copies, placing it at number four on the all format sales chart.
During pre-release beta trials, the game 's publisher EA was criticized by gamers and the gaming press for introducing a loot box monetization scheme that gives players substantial gameplay advantages through items purchased in - game with real money. Although such items could also be purchased with in - game currency, players would on average have to "grind '' for approximately 40 hours to unlock a single player character such as Darth Vader. Responding to the controversy, the developers adjusted the number of in - game items a player receives through playing the game. However, once pre-release copies went out, a number of players and journalists reported various controversial gameplay features, such as rewards being unrelated to the player 's performance in the game. The poorly weighted reward system, combined with weak inactivity detection, allowed many players to use rubber bands to tie down their game controllers and automatically farm points during multiplayer battles, ruining the experience of other active online players.
On November 12, 2017, a Reddit user complained that although they spent US $80 to purchase the Deluxe Edition of the game, Darth Vader remained inaccessible for play, and the use of this character required a large amount of in - game credits. Players estimated that it would take 40 hours of gameplay to accumulate enough credits to unlock a single hero. In response to the community 's backlash, EA 's Community Team defended the controversial changes by saying that requiring users to earn credits to unlock heroes was intended to give them a sense of "pride and accomplishment ''. Many Reddit users were frustrated by the response, which drew more than 674,000 downvotes, making it the most downvoted comment in the site 's history. In response to the community 's outrage, EA lowered the credit cost of unlocking heroes by 75 %. However, the credits rewarded for completing the campaign were also reduced.
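As a rough illustration of the arithmetic behind those numbers (a minimal sketch; the credits-per-hour figure below is a hypothetical placeholder, not a number from EA or from this article):

# Illustrative arithmetic only; the credits-per-hour figure is a hypothetical
# placeholder, not a number from EA or from this article.
hours_to_unlock_before = 40            # community estimate quoted above
assumed_credits_per_hour = 1_500       # hypothetical earn rate

cost_before = hours_to_unlock_before * assumed_credits_per_hour  # 60,000 at this rate
cost_after = cost_before * (1 - 0.75)                            # EA's 75 % price cut
hours_after = cost_after / assumed_credits_per_hour
print(cost_after, hours_after)         # 15000.0 credits, i.e. ~10 hours at the same rate

At any assumed earn rate, the 75 % price cut turns a 40 - hour grind into roughly a 10 - hour one, which is the scale of the change EA announced.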
On the day before release, EA disabled micro-transactions entirely, citing players ' concerns that they gave buyers unfair advantages. They stated their intent to reintroduce them at a later date after unspecified changes had been made.
The social media uproar and poor press reception of its microtransactions had a negative impact on EA 's share price, which dropped by 2.5 % on the game 's launch day. Wall Street analysts also lowered their expectations for the game 's financial prospects. A Wall Street analyst writing for CNBC noted that video games are still the cheapest entertainment medium per hour of use, and that even with the added microtransactions, playing Battlefront II was still notably cheaper than paying to see the theatrical release of a film.
By the end of November 2017, EA had lost $3 billion in stock value since the launch of the game.
On November 15, two days before release, the Belgian gambling regulator announced that it was investigating the game, alongside Overwatch, to determine whether loot boxes constituted unlicensed gambling. In response to the investigation, EA claimed that Battlefront II 's loot boxes do not constitute gambling. Belgian Minister of Justice Koen Geens said that if the investigation found loot boxes to violate gambling laws, he would work toward banning loot boxes in any future video games sold in the entire European Union.
Reacting to the conclusion of the Belgian gambling regulator 's investigation, the head of the Dutch Gambling commission announced the start of its own investigation of Battlefront II and the issue in general, and asked parents "to keep an eye on the games their children play ''. Chris Lee, a member of the Hawaii House of Representatives, called Star Wars: Battlefront II "an online casino designed to trap little kids '' and announced his intention to ban such practices in the state of Hawaii. Another representative compared playing Battlefront II to smoking cigarettes, saying: "We did n't allow Joe Camel to encourage your kids to smoke cigarettes, and we should n't allow Star Wars to encourage your kids to gamble. '' Singapore 's National Council on Problem Gambling is monitoring the situation following the uproar over the game, as loot boxes currently do not fall under the Remote Gambling Act. Authorities in Australia are also investigating the situation.
|
write the names of any two major passes | List of mountain passes - wikipedia
This is a list of mountain passes.
See: List of mountain passes in Kyrgyzstan
See also: Principal passes of the Alps, List of mountain passes in Switzerland.
|
what does dolphin safe on a tuna can mean | Dolphin safe label - wikipedia
Dolphin - safe labels are used to denote compliance with laws or policies designed to minimize dolphin fatalities during fishing for tuna destined for canning.
Some labels impose stricter requirements than others. Dolphin - safe tuna labeling originates in the United States. The term Dolphin Friendly is often used in Europe, and has the same meaning, although in Latin America, the standards for Dolphin Safe / Dolphin Friendly tuna are different from those elsewhere. The labels have become increasingly controversial since their introduction, particularly among sustainability groups in the U.S., but this stems from the fact that Dolphin Safe was never meant to be an indication of tuna sustainability. Many U.S. tuna brands that carry the dolphin safe label are among the least sustainable for oceans, according to Greenpeace 's 2017 Shopping Guide.
While the Dolphin Safe label and its standards have legal status in the United States under the Dolphin Protection Consumer Information Act, a part of the US Marine Mammal Protection Act, tuna companies around the world adhere to the standards on a voluntary basis, managed by the non-governmental organization Earth Island Institute, based in Berkeley, CA. The Inter-American Tropical Tuna Commission has promoted an alternative Dolphin Safe label, which requires 100 % coverage by independent observers on boats and limits the overall mortality of dolphins in the ocean. This label is mostly used in Latin America.
According to the U.S. Consumers Union, Earth Island and U.S. dolphin safe labels provide no guarantee that dolphins are not harmed during the fishing process because verification is neither universal nor independent. Still, tuna fishing boats and canneries operating under any of the various U.S. labeling standards are subject to surprise inspection and observation. For US import, companies face strict charges of fraud for any violation of the label standards, while Earth Island Institute (EII), an independent environmental organization, verifies the standards are met by more than 700 tuna companies outside the U.S. through inspections of canneries, storage units, and audits of fishing logs. Earth Island Institute receives donations from the companies it verifies, and EII has never had an external scientific audit of its labeling program, a best practice for eco-labels. International observers are increasingly part of the Dolphin Safe verification process, with observers present on virtually all purse seine tuna boats in the Eastern Pacific Ocean.
Dolphins are a common bycatch in fisheries. More than 90,000 dolphins are estimated to be killed in tuna fisheries worldwide. These mortalities occur around the globe. The true mortality associated with tuna fishing is known only in fisheries with full regulation and onboard observation. Dolphins, which swim closer to the surface than tuna, may be used as an indicator of tuna presence. Labeling was originally intended to discourage fishing boats from netting dolphins with tuna.
The tuna fishery in the Eastern Tropical Pacific is the only fishery that deliberately targets, chases, and nets dolphins, resulting in estimates of 6 - 7 million dolphins dying in tuna nets since the practice was introduced in the late 1950s, the largest directed kill of dolphins on Earth. With the onset of the Dolphin Safe label program, started in the US in 1990 but soon spreading to foreign tuna operations, the deaths of dolphins have decreased considerably, with official counts, based on observer coverage, of around 1,000 dolphins per year. However, research by the US National Marine Fisheries Service has shown that chasing the dolphins causes baby dolphins to fall behind the pod, resulting in a large "cryptic '' kill, likely damaging populations of dolphins, as the young starve or are eaten by sharks while the main pod is held by the nets. Thus, claims that tuna fishing can continue to chase and net dolphins and not cause harm are not backed by scientific research.
Dolphins do not associate with Skipjack tuna and this species is most likely to be truly "dolphin safe ''. However, the species of tuna is not always mentioned on the can.
In 1990, the organization Earth Island Institute and tuna companies in the US agreed to define Dolphin Safe tuna as tuna caught without setting nets on or near dolphins. This standard was incorporated into the Marine Mammal Protection Act later that year as the Dolphin Protection Consumer Information Act. Those standards were also adopted by Earth Island Institute in developing agreements with more than 700 tuna companies around the world -- the companies pledged to adhere to the standards and open their operations up to Earth Island 's international monitors.
In 1997, the standards for Dolphin Safe tuna were expanded by Congress with the passage of the International Dolphin Conservation Program Act, amending the Marine Mammal Protection Act to include the standard that no dolphins were killed or seriously injured in a net set to qualify that tuna for a Dolphin Safe label.
In 1999, via the Inter-American Tropical Tuna Commission, several nations adopted the Agreement on the International Dolphin Conservation Program, which set up standards for a different Dolphin Safe / Dolphin Friendly label by nations that continue to chase and net dolphins to catch tuna. The AIDCP standard allows up to 5,000 dolphins to be killed annually in tuna net sets, while encouraging the release of dolphins unharmed. Critics note that the AIDCP standard ignores the cryptic kill of baby dolphins and still subjects dolphins to extreme physiological stress, injuries, and mortality.
In a 2008 report, Greenpeace notes dolphin - safe labels may make consumers believe the relevant tuna is environmentally friendly. However, the dolphin - safe label only indicates the by - catch contained no dolphins. It does not specify that the by - catch contained no other species, nor does it imply anything about the environmental impact of the hunt itself.
In May 2012, the World Trade Organization ruled that the dolphin safe label, as used in the U.S., focuses too narrowly on fishing methods, and too narrowly on the Eastern Tropical Pacific. The U.S. label does not address dolphin mortalities in other parts of the world. The US subsequently expanded reporting and verification procedures to all oceans of the world, while maintaining the strong standards for the Dolphin Safe label, to come into compliance with the WTO decision.
In 2013, the Campaign for Eco-Safe Tuna launched a formal campaign to end the use of the dolphin - safe label in the U.S. The grassroots activist group advocates adoption of the International Dolphin Conservation Program (AIDCP) label in place of the current U.S. Department of Commerce label. The AIDCP label is currently in use in the following states or countries: Belize, Colombia, Costa Rica, Ecuador, El Salvador, European Union, Guatemala, Honduras, Mexico, Nicaragua, Panama, Peru, United States, Venezuela. The Campaign for Eco-Safe Tuna represents the tuna fishing industry and government agencies of Latin America that continue to advocate chasing and netting dolphins to catch tuna.
Tuna consumption has declined since awareness of the dolphin - safe issue peaked in 1989. Some critics attribute this to the strict standards of U.S. laws, which they claim have lowered the quality of tuna.
The impact of dolphin - safe standards on the price of tuna is debatable. While the trend in cost has been downward, critics claim that the price would have dropped much further without the dolphin - safe standards.
Early on, Earth Island instituted additional protections for sea turtles and sharks with its cooperating tuna companies. Earth Island first proposed that sea turtles in tuna nets be released in 1996, a provision which has now been adopted by international agreement by all tuna fishing treaty organizations. Earth Island further banned shark finning on tuna vessels in the Dolphin Safe program, a measure which is also slowly being adopted by treaty organizations.
The dolphin - safe labeling program has also been criticized for not providing consumers with information on non-dolphin bycatch. Critics have suggested the "cuteness '' of dolphins is improperly used by environmental groups to raise money and draw attention for the labeling program, while tuna bycatch is in fact a much more significant problem for other species. Over a million sharks die each year as bycatch, as do hundreds of thousands of wahoo, dorado, thousands of marlin and many mola mola. The resulting reduction in numbers of such major predators has a huge environmental impact that 's often overlooked. These figures do not reflect the increasing efforts of tuna fishermen to reduce bycatch through research and improved fishing practices introduced by the tuna fishing treaty organizations and the industry group International Seafood Sustainability Foundation.
Trade organizations, industry groups and environmental advocates have sharply criticized EII 's program in the United States and elsewhere, which is mostly based on self - certifications by fishing captains that they did n't kill dolphins. The groups argue that EII 's dolphin - safe tuna "label means absolutely nothing in terms of sustainability. That label has been used to can tuna that could have caused severe mortalities of dolphins and other marine species of the ecosystem. '' The issue has created economic and diplomatic tension between the U.S. and Mexico. The U.S. ban has been blamed for severe economic problems in fishing villages like Ensenada.
Under the World Trade Organization 's dispute settlement system, two reports have been issued on the discriminatory aspects of the US legislation regarding dolphin - safe labels. The WTO Panel Report was published on 15 September 2011 and the WTO 's Appellate Body Report was published on 16 May 2012.
The US government has strongly opposed these decisions and continues to improve the Dolphin Safe implementation procedures to expand provisions in keeping with the WTO concerns without weakening the Dolphin Safe label standards. On November 20, 2015, the WTO Appellate Body ruled against the United States.
The US strongly opposes the claims by the World Trade Organization, noting that US Dolphin Safe standards provide more protection to dolphins than other weaker standards promoted by the government of Mexico and the Inter-American Tropical Tuna Commission (mainly concerned with promoting tuna fishing) and that the US has strengthened review of Dolphin Safe tuna in other areas of the world. Hundreds of environmental organizations condemn the WTO for putting support for free trade over environmental considerations, such as protection of dolphins. The WTO has not made a final decision on this issue.
Virtually all canned tuna in the UK is labelled as dolphin - safe because the market is almost exclusively skipjack tuna. It is thus not implicated in the dolphin by - catch problem associated with the yellowfin tuna of the Eastern Tropical Pacific consumed in the USA. The concerns being addressed in the UK are different from those in the USA: they are preventative to ensure that tuna sold does not become unsafe for dolphins, rather than rectifying an existing environmental problem.
The dolphin - safe movement in the U.S. was led by environmental and consumer groups in response to the use of total encirclement netting. With this method, fishermen surrounded dolphin pods along with the tuna they were catching and the dolphins were given no chance to escape before the nets were lifted. This resulted in large numbers of dolphins being killed, imperiling the survival of entire species of dolphin, specifically in the Eastern Tropical Pacific.
In 1990, the U.S. passed the Dolphin Protection Consumer Information Act (DPCIA). The law had three main provisions:
This protected dolphins in U.S. waters, but canneries were free to purchase tuna from domestic and foreign fisheries, so the U.S. regulations could not assure U.S. consumers they were purchasing dolphin - safe tuna. However, the US does have strict provisions for reviewing tuna imports, including requiring statements by onboard independent observers (most tuna purse seine vessels in areas that export tuna to the US now have observers onboard), as well as strong fraud protection laws against false claims of Dolphin Safe.
|
what do the stickers on michigan helmets mean | Helmet sticker - wikipedia
Helmet stickers, also known as reward decals and pride stickers, are stickers that are affixed to a high school or college football player 's helmet. They can denote either individual or team accomplishments.
ESPN says the practice of awarding helmet stickers is often wrongly credited to Ernie Biggs, also a trainer at Ohio State under legendary coach Woody Hayes. They instead claim that the practice of awarding stickers began with Jim Young, former assistant coach at Miami in 1965, two years before they were used by the Buckeyes.
An even earlier attribution is given to Gene Stauber, freshman coach at Nebraska (1955 -- 1957) by head coach Pete Elliott. Stauber routinely used stickers throughout his tenure as assistant coach at Illinois (1960 -- 1970), as a 1962 photo of All - American linebacker Dick Butkus indicates. The stickers stem from fighter pilots marking their planes with stickers after kills and / or successful missions.
Michael Pellowski, in his book "Rutgers Football: A Gridiron Tradition in Scarlet, '' credits Rutgers defensive backs coach Dewey King with being "one of the first '' to award decals for helmets in 1961. The stickers were given for interceptions only so they were more difficult to earn. Every time there was an interception, the crowd yelled "give him the star. '' The stars can be seen in this photo of the 1961 team walking from the locker room to the field prior to the season finale against Columbia.
|
where is the star vega in the sky | Vega - wikipedia
Vega, also designated Alpha Lyrae (α Lyrae, abbreviated Alpha Lyr or α Lyr), is the brightest star in the constellation of Lyra, the fifth - brightest star in the night sky, and the second - brightest star in the northern celestial hemisphere, after Arcturus. It is relatively close at only 25 light - years from the Sun, and, together with Arcturus and Sirius, one of the most luminous stars in the Sun 's neighborhood.
Vega has been extensively studied by astronomers, leading it to be termed "arguably the next most important star in the sky after the Sun ''. Vega was the northern pole star around 12,000 BC and will be so again around the year 13,727, when the declination will be + 86 ° 14 '. Vega was the first star other than the Sun to be photographed and the first to have its spectrum recorded. It was one of the first stars whose distance was estimated through parallax measurements. Vega has functioned as the baseline for calibrating the photometric brightness scale and was one of the stars used to define the mean values for the UBV photometric system.
Vega is only about a tenth of the age of the Sun, but since it is 2.1 times as massive, its expected lifetime is also one tenth of that of the Sun; both stars are at present approaching the midpoint of their life expectancies. Vega has an unusually low abundance of the elements with a higher atomic number than that of helium. Vega is also a variable star that varies slightly in brightness. It is rotating rapidly with a velocity of 274 km / s at the equator. This causes the equator to bulge outward due to centrifugal effects, and, as a result, there is a variation of temperature across the star 's photosphere that reaches a maximum at the poles. From Earth, Vega is observed from the direction of one of these poles.
Based on an observed excess emission of infrared radiation, Vega appears to have a circumstellar disk of dust. This dust is likely to be the result of collisions between objects in an orbiting debris disk, which is analogous to the Kuiper belt in the Solar System. Stars that display an infrared excess due to dust emission are termed Vega - like stars.
α Lyrae (Latinised to Alpha Lyrae) is the star 's Bayer designation. The traditional name Vega (earlier Wega) comes from a loose transliteration of the Arabic word wāqi ' meaning "falling '' or "landing '', via the phrase an - nasr al - wāqi ', "the falling eagle ''. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN 's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN; which included Vega for this star. It is now so entered in the IAU Catalog of Star Names.
Astrophotography, the photography of celestial objects, began in 1840 when John William Draper took an image of the Moon using the daguerreotype process. On July 17, 1850, Vega became the first star (other than the Sun) to be photographed, when it was imaged by William Bond and John Adams Whipple at the Harvard College Observatory, also with a daguerreotype. Henry Draper took the first photograph of a star 's spectrum in August 1872 when he took an image of Vega, and he also became the first person to show absorption lines in the spectrum of a star. Similar lines had already been identified in the spectrum of the Sun. In 1879, William Huggins used photographs of the spectra of Vega and similar stars to identify a set of twelve "very strong lines '' that were common to this stellar category. These were later identified as lines from the Hydrogen Balmer series. Since 1943, the spectrum of this star has served as one of the stable anchor points by which other stars are classified.
The distance to Vega can be determined by measuring its parallax shift against the background stars as the Earth orbits the Sun. The first person to publish a star 's parallax was Friedrich G.W. von Struve, when he announced a value of 0.125 arcseconds (0.125 '') for Vega. Friedrich Bessel was skeptical about Struve 's data, and, when Bessel published a parallax of 0.314 '' for the star system 61 Cygni, Struve revised his value for Vega 's parallax to nearly double the original estimate. This change cast further doubt on Struve 's data. Thus most astronomers at the time, including Struve, credited Bessel with the first published parallax result. However, Struve 's initial result was actually close to the currently accepted value of 0.129 '', as determined by the Hipparcos astrometry satellite.
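As a quick check of the figures above (a minimal sketch; the parsec-to-light-year factor is a standard constant, not from this article), the distance follows directly from the parallax:

# A minimal check, assuming the standard parallax relation d[pc] = 1 / p[arcsec];
# the light-year conversion factor is a standard constant, not from the article.
HIPPARCOS_PARALLAX_ARCSEC = 0.129       # value quoted above
LY_PER_PARSEC = 3.2616

distance_pc = 1.0 / HIPPARCOS_PARALLAX_ARCSEC   # ~7.75 parsecs
distance_ly = distance_pc * LY_PER_PARSEC        # ~25.3 light-years
print(f"{distance_pc:.2f} pc ~= {distance_ly:.1f} ly")

This reproduces the roughly 25 light-year distance quoted earlier in the article.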
The brightness of a star, as seen from Earth, is measured with a standardized, logarithmic scale. This apparent magnitude is a numerical value that decreases in value with increasing brightness of the star. The faintest stars visible to the unaided eye are sixth magnitude, while the brightest, Sirius, is of magnitude − 1.46. To standardize the magnitude scale, astronomers chose Vega to represent magnitude zero at all wavelengths. Thus, for many years, Vega was used as a baseline for the calibration of absolute photometric brightness scales. However, this is no longer the case, as the apparent magnitude zero point is now commonly defined in terms of a particular numerically specified flux. This approach is more convenient for astronomers, since Vega is not always available for calibration.
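A short illustration of the Vega-based magnitude scale (a sketch assuming the standard Pogson relation; the Sirius magnitude of −1.46 is taken from the paragraph above):

import math

# Pogson's relation with Vega defining magnitude zero:
#   m = -2.5 * log10(F / F_vega), so m = 0 when F equals Vega's flux.
def vega_magnitude(flux_ratio_to_vega):
    return -2.5 * math.log10(flux_ratio_to_vega)

# Sirius, at magnitude -1.46, is therefore about 3.8 times brighter than Vega.
sirius_flux_ratio = 10 ** (1.46 / 2.5)
print(round(sirius_flux_ratio, 2))                    # ~3.84
print(round(vega_magnitude(sirius_flux_ratio), 2))    # ~-1.46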
The UBV photometric system measures the magnitude of stars through ultraviolet, blue, and yellow filters, producing U, B, and V values, respectively. Vega is one of six A0V stars that were used to set the initial mean values for this photometric system when it was introduced in the 1950s. The mean magnitudes for these six stars were defined as: U − B = B − V = 0. In effect, the magnitude scale has been calibrated so that the magnitude of these stars is the same in the yellow, blue, and ultraviolet parts of the electromagnetic spectrum. Thus, Vega has a relatively flat electromagnetic spectrum in the visual region -- wavelength range 350 -- 850 nanometers, most of which can be seen with the human eye -- so the flux densities are roughly equal; 2000 -- 4000 Jy. However, the flux density of Vega drops rapidly in the infrared, and is near 100 Jy at 5 micrometers.
Photometric measurements of Vega during the 1930s appeared to show that the star had a low - magnitude variability on the order of ± 0.03 magnitudes (around ± 2.8 % luminosity). This range of variability was near the limits of observational capability for that time, and so the subject of Vega 's variability has been controversial. The magnitude of Vega was measured again in 1981 at the David Dunlap Observatory and showed some slight variability. Thus it was suggested that Vega showed occasional low - amplitude pulsations associated with a Delta Scuti variable. This is a category of stars that oscillate in a coherent manner, resulting in periodic pulsations in the star 's luminosity. Although Vega fits the physical profile for this type of variable, other observers have found no such variation. Thus the variability was thought to possibly be the result of systematic errors in measurement. However, a 2007 article surveyed these and other results, and concluded that "A conservative analysis of the foregoing results suggests that Vega is quite likely variable in the 1 - 2 % range, with possible occasional excursions to as much as 4 % from the mean ''. Also, a 2011 article affirms in its abstract that "The long - term (year - to - year) variability of Vega was confirmed ''.
Vega became the first solitary main - sequence star beyond the Sun known to be an X-ray emitter when in 1979 it was observed from an imaging X-ray telescope launched on an Aerobee 350 from the White Sands Missile Range. In 1983, Vega became the first star found to have a disk of dust. The Infrared Astronomical Satellite (IRAS) discovered an excess of infrared radiation coming from the star, and this was attributed to energy emitted by the orbiting dust as it was heated by the star.
Vega can often be seen near the zenith in the mid-northern latitudes during the evening in the Northern Hemisphere summer. From mid-southern latitudes, it can be seen low above the northern horizon during the Southern Hemisphere winter. With a declination of + 38.78 °, Vega can only be viewed at latitudes north of 51 ° S. Therefore, it does not rise at all anywhere in Antarctica or in the southernmost part of South America, including Punta Arenas, Chile (53 ° S). At latitudes to the north of + 51 ° N, Vega remains continually above the horizon as a circumpolar star. Around July 1, Vega reaches midnight culmination when it crosses the meridian at that time.
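The visibility limits quoted above follow from simple spherical-astronomy arithmetic (a sketch ignoring atmospheric refraction):

# Visibility limits for declination +38.78 deg, ignoring refraction:
# a star never rises south of latitude -(90 - dec) and is circumpolar north of (90 - dec).
DECLINATION_DEG = 38.78

never_rises_south_of_deg = -(90.0 - DECLINATION_DEG)   # about -51.2 (i.e. 51 deg S)
circumpolar_north_of_deg = 90.0 - DECLINATION_DEG      # about +51.2 (i.e. 51 deg N)
print(never_rises_south_of_deg, circumpolar_north_of_deg)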
Each night the positions of the stars appear to change as the Earth rotates. However, when a star is located along the Earth 's axis of rotation, it will remain in the same position and thus is called a pole star. The direction of the Earth 's axis of rotation gradually changes over time in a process known as the precession of the equinoxes. A complete precession cycle requires 25,770 years, during which time the pole of the Earth 's rotation follows a circular path across the celestial sphere that passes near several prominent stars. At present the pole star is Polaris, but around 12,000 BC the pole was pointed only five degrees away from Vega. Through precession, the pole will again pass near Vega around AD 14,000.
This star lies at a vertex of a widely spaced asterism called the Summer Triangle, which consists of Vega plus the two first - magnitude stars Altair, in Aquila, and Deneb in Cygnus. This formation is the approximate shape of a right triangle, with Vega located at its right angle. The Summer Triangle is recognizable in the northern skies for there are few other bright stars in its vicinity. Vega is the brightest of the successive northern pole stars.
Vega 's spectral class is A0V, making it a blue - tinged white main sequence star that is fusing hydrogen to helium in its core. Since more massive stars use their fusion fuel more quickly than smaller ones, Vega 's main - sequence lifetime is roughly one billion years, a tenth of the Sun 's. The current age of this star is about 455 million years, or up to about half its expected total main - sequence lifespan. After leaving the main sequence, Vega will become a class - M red giant and shed much of its mass, finally becoming a white dwarf. At present, Vega has more than twice the mass of the Sun and its full luminosity is about 40 times the Sun 's value. However, because of its high rate of rotation, the pole is considerably brighter than the equator. Because it is seen nearly pole - on, its apparent luminosity from Earth is notably higher, about 57 times the Sun 's value. If Vega is variable, then it may be a Delta Scuti type with a period of about 0.107 days.
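A very rough order-of-magnitude check on the quoted lifetime, using a simple fuel-over-burn-rate scaling (an assumption for illustration, not the stellar model used by the cited sources):

# Rough fuel-over-burn-rate scaling, t ~ t_sun * (M / M_sun) / (L / L_sun);
# this is only an order-of-magnitude estimate, not a detailed stellar model.
T_SUN_GYR = 10.0
mass_ratio = 2.1         # from the text
luminosity_ratio = 40.0  # from the text

t_vega_gyr = T_SUN_GYR * mass_ratio / luminosity_ratio
print(f"~{t_vega_gyr:.2f} Gyr")   # ~0.5 Gyr, the same order as the ~1 Gyr quoted above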
Most of the energy produced at Vega 's core is generated by the carbon -- nitrogen -- oxygen cycle (CNO cycle), a nuclear fusion process that combines protons to form helium nuclei through intermediary nuclei of carbon, nitrogen, and oxygen. This process requires a temperature of about 15 million K, which is higher than the core temperature of the Sun, but is less efficient than the Sun 's proton - proton chain reaction. The CNO cycle is highly temperature sensitive, which results in a convection zone about the core that evenly distributes the ' ash ' from the fusion reaction within the core region. The overlying atmosphere is in radiative equilibrium. This is in contrast to the Sun, which has a radiation zone centered on the core with an overlying convection zone.
The energy flux from Vega has been precisely measured against standard light sources. At 5480 Å, the flux is 3,650 Jy with an error margin of 2 %. The visual spectrum of Vega is dominated by absorption lines of hydrogen; specifically by the hydrogen Balmer series with the electron at the n = 2 principal quantum number. The lines of other elements are relatively weak, with the strongest being ionized magnesium, iron, and chromium. The X-ray emission from Vega is very low, demonstrating that the corona for this star must be very weak or non-existent. However, as the pole of Vega is facing Earth and a polar coronal hole may be present, confirmation of a corona as the likely source of the X-rays detected from Vega (or the region very close to Vega) may be difficult as most of any coronal X-rays would not be emitted along the line of sight.
Using spectropolarimetry, a magnetic field has been detected on the surface of Vega by a team of astronomers at the Observatoire du Pic du Midi. This is the first such detection of a magnetic field on a spectral class A star that is not an Ap chemically peculiar star. The average line of sight component of this field has a strength of − 0.6 ± 0.3 G. This is comparable to the mean magnetic field on the Sun. Magnetic fields of roughly 30 gauss have been reported for Vega, compared to about 1 gauss for the Sun. In 2015, bright star spots were detected on the star 's surface -- the first such detection for a normal A-type star, and these features show evidence of rotational modulation with a period of 0.68 days.
When the radius of Vega was measured to high accuracy with an interferometer, it resulted in an unexpectedly large estimated value of 2.73 ± 0.01 times the radius of the Sun. This is 60 % larger than the radius of the star Sirius, while stellar models indicated it should only be about 12 % larger. However, this discrepancy can be explained if Vega is a rapidly rotating star that is being viewed from the direction of its pole of rotation. Observations by the CHARA array in 2005 -- 06 confirmed this deduction.
The pole of Vega -- its axis of rotation -- is inclined no more than five degrees from the line - of - sight to the Earth. At the high end of estimates for the rotation velocity for Vega is 236.2 ± 3.7 km / s along the equator, which is 87.6 % of the speed that would cause the star to start breaking up from centrifugal effects. This rapid rotation of Vega produces a pronounced equatorial bulge, so the radius of the equator is 19 % larger than the polar radius. (The estimated polar radius of this star is 2.362 ± 0.012 solar radii, while the equatorial radius is 2.818 ± 0.013 solar radii.) From the Earth, this bulge is being viewed from the direction of its pole, producing the overly large radius estimate.
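The bulge and breakup figures quoted above can be checked with plain arithmetic on the stated radii and equatorial velocity:

# Consistency check on the quoted radii and rotation figures (plain arithmetic only).
polar_radius_rsun = 2.362
equatorial_radius_rsun = 2.818
bulge_fraction = equatorial_radius_rsun / polar_radius_rsun - 1.0
print(f"equatorial radius larger by ~{bulge_fraction:.0%}")          # ~19%

v_equator_km_s = 236.2
fraction_of_breakup = 0.876
print(f"implied breakup speed ~{v_equator_km_s / fraction_of_breakup:.0f} km/s")  # ~270 km/s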
The local gravitational acceleration at the poles is greater than at the equator, so, by the Von Zeipel theorem, the local luminosity is also higher at the poles. This is seen as a variation in effective temperature over the star: the polar temperature is near 10,000 K, while the equatorial temperature is 7,600 K. As a result, if Vega were viewed along the plane of its equator, then the luminosity would be about half the apparent luminosity as viewed from the pole. This large temperature difference between the poles and the equator produces a strong "gravity darkening '' effect. As viewed from the poles, this results in a darker (lower - intensity) limb than would normally be expected for a spherically symmetric star. The temperature gradient may also mean that Vega has a convection zone around the equator, while the remainder of the atmosphere is likely to be in almost pure radiative equilibrium.
As Vega had long been used as a standard star for calibrating telescopes, the discovery that it is rapidly rotating may challenge some of the underlying assumptions that were based on it being spherically symmetric. With the viewing angle and rotation rate of Vega now better known, this will allow improved instrument calibrations.
Astronomers term "metals '' those elements with higher atomic numbers than helium. The metallicity of Vega 's photosphere is only about 32 % of the abundance of heavy elements in the Sun 's atmosphere. (Compare this, for example, to a three-fold metallicity abundance in the similar star Sirius as compared to the Sun.) For comparison, the Sun has an abundance of elements heavier than helium of about Z = 0.0172 ± 0.002. Thus, in terms of abundances, only about 0.54 % of Vega consists of elements heavier than helium.
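The 0.54 % figure follows from the two numbers quoted above (a simple arithmetic check):

# Heavy-element mass fraction implied by the two numbers quoted above.
Z_SUN = 0.0172                 # the Sun's abundance of elements heavier than helium
vega_fraction_of_solar = 0.32  # Vega's photospheric metallicity relative to the Sun

z_vega = Z_SUN * vega_fraction_of_solar
print(f"{z_vega:.4f} (~{z_vega:.2%})")   # ~0.0055, roughly the 0.54 % quoted above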
The unusually low metallicity of Vega makes it a weak Lambda Boötis - type star. However, the reason for the existence of such chemically peculiar, spectral class A0 - F0 stars remains unclear. One possibility is that the chemical peculiarity may be the result of diffusion or mass loss, although stellar models show that this would normally only occur near the end of a star 's hydrogen - burning lifespan. Another possibility is that the star formed from an interstellar medium of gas and dust that was unusually metal - poor.
The observed helium to hydrogen ratio in Vega is 0.030 ± 0.005, which is about 40 % lower than the Sun. This may be caused by the disappearance of a helium convection zone near the surface. Energy transfer is instead performed by the radiative process, which may be causing an abundance anomaly through diffusion.
The radial velocity of Vega is the component of this star 's motion along the line - of - sight to the Earth. Movement away from the Earth will cause the light from Vega to shift to a lower frequency (toward the red), or to a higher frequency (toward the blue) if the motion is toward the Earth. Thus the velocity can be measured from the amount of redshift (or blueshift) of the star 's spectrum. Precise measurements of this redshift give a value of − 13.9 ± 0.9 km / s. The minus sign indicates a relative motion toward the Earth.
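A minimal sketch of the non-relativistic Doppler relation behind such measurements (the 500 nm reference wavelength is an arbitrary example, not from the article):

# Non-relativistic Doppler relation, delta_lambda / lambda = v / c
# (a good approximation at 13.9 km/s).
C_KM_S = 299_792.458
v_radial_km_s = -13.9                         # negative: motion toward the Earth

fractional_shift = v_radial_km_s / C_KM_S     # ~ -4.6e-5
shift_at_500nm_nm = fractional_shift * 500.0  # ~ -0.023 nm, a slight blueshift
print(fractional_shift, shift_at_500nm_nm)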
Motion transverse to the line of sight causes the position of Vega to shift with respect to the more distant background stars. Careful measurement of the star 's position allows this angular movement, known as proper motion, to be calculated. Vega 's proper motion is 202.03 ± 0.63 milli - arcseconds (mas) per year in right ascension -- the celestial equivalent of longitude -- and 287.47 ± 0.54 mas / y in declination, which is equivalent to a change in latitude. The net proper motion of Vega is 327.78 mas / y, which results in angular movement of a degree every 11,000 years.
In the Galactic coordinate system, the space velocity components of Vega are (U, V, W) = (− 16.1 ± 0.3, − 6.3 ± 0.8, − 7.7 ± 0.3) km / s, for a net space velocity of 19 km / s. The radial component of this velocity -- in the direction of the Sun -- is − 13.9 km / s, while the transverse velocity is 9.9 km / s. Although Vega is at present only the fifth - brightest star in the sky, the star is slowly brightening as proper motion causes it to approach the Sun. Vega will make its closest approach in an estimated 264,000 years at a perihelion distance of 13.2 ly (4.04 pc).
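The net space velocity quoted above is simply the vector magnitude of the three components:

import math

# Net space velocity from the quoted (U, V, W) components.
u, v, w = -16.1, -6.3, -7.7
speed_km_s = math.sqrt(u ** 2 + v ** 2 + w ** 2)
print(f"{speed_km_s:.1f} km/s")   # ~18.9 km/s, matching the ~19 km/s in the text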
Based on this star 's kinematic properties, it appears to belong to a stellar association called the Castor Moving Group. However, Vega may be much older than this group, so the membership remains uncertain. This group contains about 16 stars, including Alpha Librae, Alpha Cephei, Castor, Fomalhaut and Vega. All members of the group are moving in nearly the same direction with similar space velocities. Membership in a moving group implies a common origin for these stars in an open cluster that has since become gravitationally unbound. The estimated age of this moving group is 200 ± 100 million years, and they have an average space velocity of 16.5 km / s.
One of the early results from the Infrared Astronomy Satellite (IRAS) was the discovery of excess infrared flux coming from Vega, beyond what would be expected from the star alone. This excess was measured at wavelengths of 25, 60, and 100 μm, and came from within an angular radius of 10 arcseconds (10 '') centered on the star. At the measured distance of Vega, this corresponded to an actual radius of 80 astronomical units (AU), where an AU is the average radius of the Earth 's orbit around the Sun. It was proposed that this radiation came from a field of orbiting particles with a dimension on the order of a millimeter, as anything smaller would eventually be removed from the system by radiation pressure or drawn into the star by means of Poynting -- Robertson drag. The latter is the result of radiation pressure creating an effective force that opposes the orbital motion of a dust particle, causing it to spiral inward. This effect is most pronounced for tiny particles that are closer to the star.
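The conversion from the 10 - arcsecond angular radius to roughly 80 AU uses the small-angle relation between arcseconds, parsecs, and astronomical units (a minimal sketch using the distance derived earlier):

# Small-angle conversion: by the definition of the parsec, an angle of 1 arcsecond
# at a distance of 1 parsec corresponds to 1 AU, so r[AU] ~ theta[arcsec] * d[pc].
theta_arcsec = 10.0
distance_pc = 7.75          # ~25 light-years, as derived earlier

radius_au = theta_arcsec * distance_pc
print(f"~{radius_au:.0f} AU")   # ~78 AU, consistent with the ~80 AU quoted above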
Subsequent measurements of Vega at 193 μm showed a lower than expected flux for the hypothesized particles, suggesting that they must instead be on the order of 100 μm or less. To maintain this amount of dust in orbit around Vega, a continual source of replenishment would be required. A proposed mechanism for maintaining the dust was a disk of coalesced bodies that were in the process of collapsing to form a planet. Models fitted to the dust distribution around Vega indicate that it is a 120 AU - radius circular disk viewed from nearly pole - on. In addition, there is a hole in the center of the disk with a radius of no less than 80 AU.
Following the discovery of an infrared excess around Vega, other stars have been found that display a similar anomaly that is attributable to dust emission. As of 2002, about 400 of these stars have been found, and they have come to be termed "Vega - like '' or "Vega - excess '' stars. It is believed that these may provide clues to the origin of the Solar System.
By 2005, the Spitzer Space Telescope had produced high - resolution infrared images of the dust around Vega. It was shown to extend out to 43 '' (330 AU) at a wavelength of 24 μm, 70 '' (543 AU) at 70 μm and 105 '' (815 AU) at 160 μm. These much wider disks were found to be circular and free of clumps, with dust particles ranging from 1 -- 50 μm in size. The estimated total mass of this dust is 3 × 10^-3 times the mass of the Earth. Production of the dust would require collisions between asteroids in a population corresponding to the Kuiper Belt around the Sun. Thus the dust is more likely created by a debris disk around Vega, rather than from a protoplanetary disk as was earlier thought.
The inner boundary of the debris disk was estimated at 11 '' ± 2 '', or 70 -- 100 AU. The disk of dust is produced as radiation pressure from Vega pushes debris from collisions of larger objects outward. However, continuous production of the amount of dust observed over the course of Vega 's lifetime would require an enormous starting mass -- estimated as hundreds of times the mass of Jupiter. Hence it is more likely to have been produced as the result of a relatively recent breakup of a moderate - sized (or larger) comet or asteroid, which then further fragmented as the result of collisions between the smaller components and other bodies. This dusty disk would be relatively young on the time scale of the star 's age, and it will eventually be removed unless other collision events supply more dust.
Observations, first with the Palomar Testbed Interferometer by David Ciardi and Gerard van Belle in 2001 and then later confirmed with the CHARA array at Mt. Wilson in 2006 and the Infrared Optical Telescope Array at Mt. Hopkins in 2011, revealed evidence for an inner dust band around Vega. Originating within 8 AU of the star, this exozodiacal dust may be evidence of dynamical perturbations within the system. This may be caused by an intense bombardment of comets or meteors, and may be evidence for the existence of a planetary system.
Observations from the James Clerk Maxwell Telescope in 1997 revealed an "elongated bright central region '' that peaked at 9 '' (70 AU) to the northeast of Vega. This was hypothesized as either a perturbation of the dust disk by a planet or else an orbiting object that was surrounded by dust. However, images by the Keck telescope had ruled out a companion down to magnitude 16, which would correspond to a body with more than 12 times the mass of Jupiter. Astronomers at the Joint Astronomy Centre in Hawaii and at UCLA suggested that the image may indicate a planetary system still undergoing formation.
Determining the nature of the planet has not been straightforward; a 2002 paper hypothesizes that the clumps are caused by a roughly Jupiter - mass planet on an eccentric orbit. Dust would collect in orbits that have mean - motion resonances with this planet -- where their orbital periods form integer fractions with the period of the planet -- producing the resulting clumpiness.
In 2003 it was hypothesized that these clumps could be caused by a roughly Neptune - mass planet having migrated from 40 to 65 AU over 56 million years, an orbit large enough to allow the formation of smaller rocky planets closer to Vega. The migration of this planet would likely require gravitational interaction with a second, higher - mass planet in a smaller orbit.
Using a coronagraph on the Subaru telescope in Hawaii in 2005, astronomers were able to further constrain the size of a planet orbiting Vega to no more than 5 -- 10 times the mass of Jupiter. The issue of possible clumps in the debris disc was revisited in 2007 using newer, more sensitive instrumentation on the Plateau de Bure Interferometer. The observations showed that the debris ring is smooth and symmetric. No evidence was found of the blobs reported earlier, casting doubts on the hypothesized giant planet. The smooth structure has been confirmed in follow - up observations by Hughes et al. (2012) and the Herschel Space Telescope.
Although a planet has yet to be directly observed around Vega, the presence of a planetary system can not yet be ruled out. Thus there could be smaller, terrestrial planets orbiting closer to the star. The inclination of planetary orbits around Vega is likely to be closely aligned to the equatorial plane of this star. From the perspective of an observer on a hypothetical planet around Vega, the Sun would appear as a faint 4.3 magnitude star in the Columba constellation.
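The 4.3 magnitude figure can be reproduced with the distance modulus (a sketch; the Sun's absolute visual magnitude of about 4.83 is a standard value, not taken from this article):

import math

# Distance modulus: m = M + 5 * log10(d / 10 pc). The Sun's absolute visual
# magnitude (~4.83) is a standard value and is not taken from this article.
M_SUN = 4.83
d_pc = 7.75

m_sun_from_vega = M_SUN + 5 * math.log10(d_pc / 10.0)
print(round(m_sun_from_vega, 2))   # ~4.28, close to the 4.3 quoted in the text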
The name is believed to be derived from the Arabic term Al Nesr al Waki which appeared in the Al Achsasi al Mouakket star catalogue and was translated into Latin as Vultur Cadens, "the falling eagle / vulture ''. The constellation was represented as a vulture in ancient Egypt, and as an eagle or vulture in ancient India. The Arabic name then appeared in the western world in the Alfonsine Tables, which were drawn up between 1215 and 1270 by order of Alfonso X. Medieval astrolabes of England and Western Europe used the names Wega and Alvaca, and depicted it and Altair as birds.
Among the northern Polynesian people, Vega was known as whetu o te tau, the year star. For a period of history it marked the start of their new year when the ground would be prepared for planting. Eventually this function became denoted by the Pleiades.
The Assyrians named this pole star Dayan - same, the "Judge of Heaven '', while in Akkadian it was Tir - anna, "Life of Heaven ''. In Babylonian astronomy, Vega may have been one of the stars named Dilgan, "the Messenger of Light ''. To the ancient Greeks, the constellation Lyra was formed from the harp of Orpheus, with Vega as its handle. For the Roman Empire, the start of autumn was based upon the hour at which Vega set below the horizon.
In Chinese, 織女 (Zhī Nǚ), meaning Weaving Girl (asterism), refers to an asterism consisting of Vega, ε Lyrae and ζ Lyrae. Consequently, Vega is known as 織女 一 (Zhī Nǚ yī, English: the First Star of Weaving Girl). In Chinese mythology, there is a love story of Qixi (七夕) in which Niulang (牛 郎, Altair) and his two children (β and γ Aquilae) are separated from their mother Zhinü (織女, lit. "weaver girl '', Vega) who is on the far side of the river, the Milky Way. However, one day per year on the seventh day of the seventh month of the Chinese lunisolar calendar, magpies make a bridge so that Niulang and Zhinü can be together again for a brief encounter. The Japanese Tanabata festival, in which Vega is known as Orihime (織姫), is also based on this legend.
In Zoroastrianism, Vega was sometimes associated with Vanant, a minor divinity whose name means "conqueror ''.
The indigenous Boorong people of northwestern Victoria named it as Neilloan, "the flying Loan ''.
In Hindu mythology, Vega is called Abhijit, and is mentioned in Mahabharat: "Contesting against Abhijit (Vega), the constellation Krittika (Pleiades) went to "Vana '' the summer solstice to heat the summer. Then the star Abhijit slipped down in the sky. '' P.V. Vartak suggests that the "slipping of Abhijit '' and ascension of Krittika might refer to the gradual drop of Vega as a pole star since 12,000 BC.
Medieval astrologers counted Vega as one of the Behenian stars and related it to chrysolite and winter savory. Cornelius Agrippa listed its kabbalistic sign under Vultur cadens, a literal Latin translation of the Arabic name. Medieval star charts also listed the alternate names Waghi, Vagieh and Veka for this star.
W.H. Auden 's 1933 poem "A Summer Night (to Geoffrey Hoyland) '' famously opens with the couplet, "Out on the lawn I lie in bed, / Vega conspicuous overhead ''.
Vega became the first star to have a car named after it with the French Facel Vega line of cars from 1954 onwards, and later on, in America, Chevrolet launched the Vega in 1971. Other vehicles named after Vega include the ESA 's Vega launch system and the Lockheed Vega aircraft.
Coordinates: 18h 36m 56.3364s, + 38 ° 47 ′ 01.291 ''
|
which of the following are found in the declaration of independence | United States Declaration of Independence - Wikipedia
The Declaration of Independence is the statement adopted by the Second Continental Congress meeting at the Pennsylvania State House (Independence Hall) in Philadelphia on July 4, 1776, which announced that the thirteen American colonies, then at war with the Kingdom of Great Britain, regarded themselves as thirteen independent sovereign states, no longer under British rule. These states would found a new nation -- the United States of America. John Adams was a leader in pushing for independence, which was passed on July 2 with no opposing vote cast. A committee of five had already drafted the formal declaration, to be ready when Congress voted on independence.
John Adams persuaded the committee to select Thomas Jefferson to compose the original draft of the document, which Congress would edit to produce the final version. The Declaration was ultimately a formal explanation of why Congress had voted on July 2 to declare independence from Great Britain, more than a year after the outbreak of the American Revolutionary War. The next day, Adams wrote to his wife Abigail: "The Second Day of July 1776, will be the most memorable Epocha, in the History of America. '' But Independence Day is actually celebrated on July 4, the date that the Declaration of Independence was approved.
After ratifying the text on July 4, Congress issued the Declaration of Independence in several forms. It was initially published as the printed Dunlap broadside that was widely distributed and read to the public. The source copy used for this printing has been lost, and may have been a copy in Thomas Jefferson 's hand. Jefferson 's original draft, complete with changes made by John Adams and Benjamin Franklin, and Jefferson 's notes of changes made by Congress, are preserved at the Library of Congress. The best - known version of the Declaration is a signed copy that is displayed at the National Archives in Washington, D.C., and which is popularly regarded as the official document. This engrossed copy was ordered by Congress on July 19 and signed primarily on August 2.
The sources and interpretation of the Declaration have been the subject of much scholarly inquiry. The Declaration justified the independence of the United States by listing colonial grievances against King George III, and by asserting certain natural and legal rights, including a right of revolution. Having served its original purpose in announcing independence, references to the text of the Declaration were few in the following years. Abraham Lincoln made it the centerpiece of his rhetoric (as in the Gettysburg Address of 1863) and his policies. Since then, it has become a well - known statement on human rights, particularly its second sentence:
We hold these truths to be self - evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.
This has been called "one of the best - known sentences in the English language '', containing "the most potent and consequential words in American history ''. The passage came to represent a moral standard to which the United States should strive. This view was notably promoted by Abraham Lincoln, who considered the Declaration to be the foundation of his political philosophy and argued that it is a statement of principles through which the United States Constitution should be interpreted.
The U.S. Declaration of Independence inspired many other similar documents in other countries, the first being the 1789 Declaration of Flanders issued during the Brabant Revolution in the Austrian Netherlands (modern - day Belgium). It also served as the primary model for numerous declarations of independence across Europe and Latin America, as well as Africa (Liberia) and Oceania (New Zealand) during the first half of the 19th century.
Believe me, dear Sir: there is not in the British empire a man who more cordially loves a union with Great Britain than I do. But, by the God that made me, I will cease to exist before I yield to a connection on such terms as the British Parliament propose; and in this, I think I speak the sentiments of America.
By the time that the Declaration of Independence was adopted in July 1776, the Thirteen Colonies and Great Britain had been at war for more than a year. Relations had been deteriorating between the colonies and the mother country since 1763. Parliament enacted a series of measures to increase revenue from the colonies, such as the Stamp Act of 1765 and the Townshend Acts of 1767. Parliament believed that these acts were a legitimate means of having the colonies pay their fair share of the costs to keep them in the British Empire.
Many colonists, however, had developed a different conception of the empire. The colonies were not directly represented in Parliament, and colonists argued that Parliament had no right to levy taxes upon them. This tax dispute was part of a larger divergence between British and American interpretations of the British Constitution and the extent of Parliament 's authority in the colonies. The orthodox British view, dating from the Glorious Revolution of 1688, was that Parliament was the supreme authority throughout the empire, and so, by definition, anything that Parliament did was constitutional. In the colonies, however, the idea had developed that the British Constitution recognized certain fundamental rights that no government could violate, not even Parliament. After the Townshend Acts, some essayists even began to question whether Parliament had any legitimate jurisdiction in the colonies at all. Anticipating the arrangement of the British Commonwealth, by 1774 American writers such as Samuel Adams, James Wilson, and Thomas Jefferson were arguing that Parliament was the legislature of Great Britain only, and that the colonies, which had their own legislatures, were connected to the rest of the empire only through their allegiance to the Crown.
The issue of Parliament 's authority in the colonies became a crisis after Parliament passed the Coercive Acts (known as the Intolerable Acts in the colonies) in 1774 to punish the Province of Massachusetts for the Boston Tea Party of 1773. Many colonists saw the Coercive Acts as a violation of the British Constitution and thus a threat to the liberties of all of British America. In September 1774, the First Continental Congress convened in Philadelphia to coordinate a response. Congress organized a boycott of British goods and petitioned the king for repeal of the acts. These measures were unsuccessful because King George and the ministry of Prime Minister Lord North were determined not to retreat on the question of parliamentary supremacy. As the king wrote to North in November 1774, "blows must decide whether they are to be subject to this country or independent ''.
Most colonists still hoped for reconciliation with Great Britain, even after fighting began in the American Revolutionary War at Lexington and Concord in April 1775. The Second Continental Congress convened at the Pennsylvania State House in Philadelphia in May 1775, and some delegates hoped for eventual independence, but no one yet advocated declaring it. Many colonists no longer believed that Parliament had any sovereignty over them, yet they still professed loyalty to King George, who they hoped would intercede on their behalf. They were disappointed in late 1775, when the king rejected Congress 's second petition, issued a Proclamation of Rebellion, and announced before Parliament on October 26 that he was considering "friendly offers of foreign assistance '' to suppress the rebellion. A pro-American minority in Parliament warned that the government was driving the colonists toward independence.
Thomas Paine 's pamphlet Common Sense was published in January 1776, just as it became clear in the colonies that the king was not inclined to act as a conciliator. Paine had only recently arrived in the colonies from England, and he argued in favor of colonial independence, advocating republicanism as an alternative to monarchy and hereditary rule. Common Sense introduced no new ideas and probably had little direct effect on Congress 's thinking about independence; its importance was in stimulating public debate on a topic that few had previously dared to openly discuss. Public support for separation from Great Britain steadily increased after the publication of Paine 's enormously popular pamphlet.
Some colonists still held out hope for reconciliation, but developments in early 1776 further strengthened public support for independence. In February 1776, colonists learned of Parliament 's passage of the Prohibitory Act, which established a blockade of American ports and declared American ships to be enemy vessels. John Adams, a strong supporter of independence, believed that Parliament had effectively declared American independence before Congress had been able to. Adams labeled the Prohibitory Act the "Act of Independency '', calling it "a compleat Dismemberment of the British Empire ''. Support for declaring independence grew even more when it was confirmed that King George had hired German mercenaries to use against his American subjects.
Despite this growing popular support for independence, Congress lacked the clear authority to declare it. Delegates had been elected to Congress by thirteen different governments, which included extralegal conventions, ad hoc committees, and elected assemblies, and they were bound by the instructions given to them. Regardless of their personal opinions, delegates could not vote to declare independence unless their instructions permitted such an action. Several colonies, in fact, expressly prohibited their delegates from taking any steps towards separation from Great Britain, while other delegations had instructions that were ambiguous on the issue. As public sentiment grew for separation from Great Britain, advocates of independence sought to have the Congressional instructions revised. For Congress to declare independence, a majority of delegations would need authorization to vote for independence, and at least one colonial government would need to specifically instruct (or grant permission for) its delegation to propose a declaration of independence in Congress. Between April and July 1776, a "complex political war '' was waged to bring this about.
In the campaign to revise Congressional instructions, many Americans formally expressed their support for separation from Great Britain in what were effectively state and local declarations of independence. Historian Pauline Maier identifies more than ninety such declarations that were issued throughout the Thirteen Colonies from April to July 1776. These "declarations '' took a variety of forms. Some were formal written instructions for Congressional delegations, such as the Halifax Resolves of April 12, with which North Carolina became the first colony to explicitly authorize its delegates to vote for independence. Others were legislative acts that officially ended British rule in individual colonies, such as the Rhode Island legislature declaring its independence from Great Britain on May 4, the first colony to do so. Many "declarations '' were resolutions adopted at town or county meetings that offered support for independence. A few came in the form of jury instructions, such as the statement issued on April 23, 1776 by Chief Justice William Henry Drayton of South Carolina: "the law of the land authorizes me to declare... that George the Third, King of Great Britain... has no authority over us, and we owe no obedience to him. '' Most of these declarations are now obscure, having been overshadowed by the resolution of independence approved by Congress on July 2 and the Declaration of Independence adopted on July 4.
Some colonies held back from endorsing independence. Resistance was centered in the middle colonies of New York, New Jersey, Maryland, Pennsylvania, and Delaware. Advocates of independence saw Pennsylvania as the key; if that colony could be converted to the pro-independence cause, it was believed that the others would follow. On May 1, however, opponents of independence retained control of the Pennsylvania Assembly in a special election that had focused on the question of independence. In response, Congress passed a resolution on May 10 which had been promoted by John Adams and Richard Henry Lee, calling on colonies without a "government sufficient to the exigencies of their affairs '' to adopt new governments. The resolution passed unanimously, and was even supported by Pennsylvania 's John Dickinson, the leader of the anti-independence faction in Congress, who believed that it did not apply to his colony.
As was the custom, Congress appointed a committee to draft a preamble to explain the purpose of the resolution. John Adams wrote the preamble, which stated that because King George had rejected reconciliation and was hiring foreign mercenaries to use against the colonies, "it is necessary that the exercise of every kind of authority under the said crown should be totally suppressed ''. Adams 's preamble was meant to encourage the overthrow of the governments of Pennsylvania and Maryland, which were still under proprietary governance. Congress passed the preamble on May 15 after several days of debate, but four of the middle colonies voted against it, and the Maryland delegation walked out in protest. Adams regarded his May 15 preamble effectively as an American declaration of independence, although a formal declaration would still have to be made.
On the same day that Congress passed Adams 's radical preamble, the Virginia Convention set the stage for a formal Congressional declaration of independence. On May 15, the Convention instructed Virginia 's congressional delegation "to propose to that respectable body to declare the United Colonies free and independent States, absolved from all allegiance to, or dependence upon, the Crown or Parliament of Great Britain ''. In accordance with those instructions, Richard Henry Lee of Virginia presented a three - part resolution to Congress on June 7. The motion was seconded by John Adams, calling on Congress to declare independence, form foreign alliances, and prepare a plan of colonial confederation. The part of the resolution relating to declaring independence read:
Resolved, that these United Colonies are, and of right ought to be, free and independent States, that they are absolved from all allegiance to the British Crown, and that all political connection between them and the State of Great Britain is, and ought to be, totally dissolved.
Lee 's resolution met with resistance in the ensuing debate. Opponents of the resolution conceded that reconciliation with Great Britain was unlikely, while arguing that declaring independence was premature, and that securing foreign aid should take priority. Advocates of the resolution countered that foreign governments would not intervene in an internal British struggle, and so a formal declaration of independence was needed before foreign aid was possible. All Congress needed to do, they insisted, was to "declare a fact which already exists ''. Delegates from Pennsylvania, Delaware, New Jersey, Maryland, and New York were not yet authorized to vote for independence, however, and some of them threatened to leave Congress if the resolution were adopted. Congress, therefore, voted on June 10 to postpone further discussion of Lee 's resolution for three weeks. Until then, Congress decided that a committee should prepare a document announcing and explaining independence in the event that Lee 's resolution was approved when it was brought up again in July.
Support for a Congressional declaration of independence was consolidated in the final weeks of June 1776. On June 14, the Connecticut Assembly instructed its delegates to propose independence and, the following day, the legislatures of New Hampshire and Delaware authorized their delegates to declare independence. In Pennsylvania, political struggles ended with the dissolution of the colonial assembly, and a new Conference of Committees under Thomas McKean authorized Pennsylvania 's delegates to declare independence on June 18. The Provincial Congress of New Jersey had been governing the province since January 1776; they resolved on June 15 that Royal Governor William Franklin was "an enemy to the liberties of this country '' and had him arrested. On June 21, they chose new delegates to Congress and empowered them to join in a declaration of independence.
Only Maryland and New York had yet to authorize independence towards the end of June. Previously, Maryland 's delegates had walked out when the Continental Congress adopted Adams 's radical May 15 preamble, and had sent to the Annapolis Convention for instructions. On May 20, the Annapolis Convention rejected Adams 's preamble, instructing its delegates to remain against independence. But Samuel Chase went to Maryland and, thanks to local resolutions in favor of independence, was able to get the Annapolis Convention to change its mind on June 28. Only the New York delegates were unable to get revised instructions. When Congress had been considering the resolution of independence on June 8, the New York Provincial Congress told the delegates to wait. But on June 30, the Provincial Congress evacuated New York as British forces approached, and would not convene again until July 10. This meant that New York 's delegates would not be authorized to declare independence until after Congress had made its decision.
Political maneuvering was setting the stage for an official declaration of independence even while a document was being written to explain the decision. On June 11, 1776, Congress appointed a "Committee of Five '' to draft a declaration, consisting of John Adams of Massachusetts, Benjamin Franklin of Pennsylvania, Thomas Jefferson of Virginia, Robert R. Livingston of New York, and Roger Sherman of Connecticut. The committee left no minutes, so there is some uncertainty about how the drafting process proceeded; contradictory accounts were written many years later by Jefferson and Adams, too many years to be regarded as entirely reliable -- although their accounts are frequently cited. What is certain is that the committee discussed the general outline which the document should follow and decided that Jefferson would write the first draft. The committee in general, and Jefferson in particular, thought that Adams should write the document, but Adams persuaded the committee to choose Jefferson and promised to consult with him personally. Considering Congress 's busy schedule, Jefferson probably had limited time for writing over the next seventeen days, and likely wrote the draft quickly. He then consulted the others and made some changes, and then produced another copy incorporating these alterations. The committee presented this copy to the Congress on June 28, 1776. The title of the document was "A Declaration by the Representatives of the United States of America, in General Congress assembled. ''
Congress ordered that the draft "lie on the table ''. For two days, Congress methodically edited Jefferson 's primary document, shortening it by a fourth, removing unnecessary wording, and improving sentence structure. They removed Jefferson 's assertion that Britain had forced slavery on the colonies in order to moderate the document and appease persons in Britain who supported the Revolution. Jefferson wrote that Congress had "mangled '' his draft version, but the Declaration that was finally produced was "the majestic document that inspired both contemporaries and posterity, '' in the words of his biographer John Ferling.
Congress tabled the draft of the declaration on Monday, July 1 and resolved itself into a committee of the whole, with Benjamin Harrison of Virginia presiding, and they resumed debate on Lee 's resolution of independence. John Dickinson made one last effort to delay the decision, arguing that Congress should not declare independence without first securing a foreign alliance and finalizing the Articles of Confederation. John Adams gave a speech in reply to Dickinson, restating the case for an immediate declaration.
A vote was taken after a long day of speeches, each colony casting a single vote, as always. The delegation for each colony numbered from two to seven members, and each delegation voted amongst themselves to determine the colony 's vote. Pennsylvania and South Carolina voted against declaring independence. The New York delegation abstained, lacking permission to vote for independence. Delaware cast no vote because the delegation was split between Thomas McKean (who voted yes) and George Read (who voted no). The remaining nine delegations voted in favor of independence, which meant that the resolution had been approved by the committee of the whole. The next step was for the resolution to be voted upon by Congress itself. Edward Rutledge of South Carolina was opposed to Lee 's resolution but desirous of unanimity, and he moved that the vote be postponed until the following day.
On July 2, South Carolina reversed its position and voted for independence. In the Pennsylvania delegation, Dickinson and Robert Morris abstained, allowing the delegation to vote three - to - two in favor of independence. The tie in the Delaware delegation was broken by the timely arrival of Caesar Rodney, who voted for independence. The New York delegation abstained once again since they were still not authorized to vote for independence, although they were allowed to do so a week later by the New York Provincial Congress. The resolution of independence had been adopted with twelve affirmative votes and one abstention. With this, the colonies had officially severed political ties with Great Britain. John Adams predicted in a famous letter, written to his wife on the following day, that July 2 would become a great American holiday. He thought that the vote for independence would be commemorated; he did not foresee that Americans -- including himself -- would instead celebrate Independence Day on the date when the announcement of that act was finalized.
After voting in favor of the resolution of independence, Congress turned its attention to the committee 's draft of the declaration. Over several days of debate, they made a few changes in wording and deleted nearly a fourth of the text and, on July 4, 1776, the wording of the Declaration of Independence was approved and sent to the printer for publication.
There is a distinct change in wording between this original broadside printing of the Declaration and the final official engrossed copy. The word "unanimous '' was inserted as a result of a Congressional resolution passed on July 19, 1776:
Resolved, That the Declaration passed on the 4th, be fairly engrossed on parchment, with the title and stile of "The unanimous declaration of the thirteen United States of America, '' and that the same, when engrossed, be signed by every member of Congress.
The Declaration is not divided into formal sections, but it is often discussed as consisting of five parts: introduction, preamble, indictment of King George III, denunciation of the British people, and conclusion.
Asserts as a matter of Natural Law the ability of a people to assume political independence; acknowledges that the grounds for such independence must be reasonable, and therefore explicable, and ought to be explained.
When in the Course of human events, it becomes necessary for one people to dissolve the political bands which have connected them with another, and to assume among the powers of the earth, the separate and equal station to which the Laws of Nature and of Nature 's God entitle them, a decent respect to the opinions of mankind requires that they should declare the causes which impel them to the separation.
Outlines a general philosophy of government that justifies revolution when government harms natural rights.
We hold these truths to be self - evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.
That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed, That whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it, and to institute new Government, laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their Safety and Happiness. Prudence, indeed, will dictate that Governments long established should not be changed for light and transient causes; and accordingly all experience hath shewn, that mankind are more disposed to suffer, while evils are sufferable, than to right themselves by abolishing the forms to which they are accustomed. But when a long train of abuses and usurpations, pursuing invariably the same Object evinces a design to reduce them under absolute Despotism, it is their right, it is their duty, to throw off such Government, and to provide new Guards for their future security.
A bill of particulars documenting the king 's "repeated injuries and usurpations '' of the Americans ' rights and liberties.
Such has been the patient sufferance of these Colonies; and such is now the necessity which constrains them to alter their former Systems of Government. The history of the present King of Great Britain is a history of repeated injuries and usurpations, all having in direct object the establishment of an absolute Tyranny over these States. To prove this, let Facts be submitted to a candid world.
He has refused his Assent to Laws, the most wholesome and necessary for the public good.
He has forbidden his Governors to pass Laws of immediate and pressing importance, unless suspended in their operation till his Assent should be obtained; and when so suspended, he has utterly neglected to attend to them.
He has refused to pass other Laws for the accommodation of large districts of people, unless those people would relinquish the right of Representation in the Legislature, a right inestimable to them and formidable to tyrants only.
He has called together legislative bodies at places unusual, uncomfortable, and distant from the depository of their Public Records, for the sole purpose of fatiguing them into compliance with his measures.
He has dissolved Representative Houses repeatedly, for opposing with manly firmness his invasions on the rights of the people.
He has refused for a long time, after such dissolutions, to cause others to be elected, whereby the Legislative Powers, incapable of Annihilation, have returned to the People at large for their exercise; the State remaining in the mean time exposed to all the dangers of invasion from without, and convulsions within.
He has endeavoured to prevent the population of these States; for that purpose obstructing the Laws for Naturalization of Foreigners; refusing to pass others to encourage their migrations hither, and raising the conditions of new Appropriations of Lands.
He has obstructed the Administration of Justice by refusing his Assent to Laws for establishing Judiciary Powers.
He has made Judges dependent on his Will alone for the tenure of their offices, and the amount and payment of their salaries.
He has erected a multitude of New Offices, and sent hither swarms of Officers to harass our people and eat out their substance.
He has kept among us, in times of peace, Standing Armies without the Consent of our legislatures.
He has affected to render the Military independent of and superior to the Civil Power.
He has combined with others to subject us to a jurisdiction foreign to our constitution, and unacknowledged by our laws; giving his Assent to their Acts of pretended Legislation:
For quartering large bodies of armed troops among us:
For protecting them, by a mock Trial from punishment for any Murders which they should commit on the Inhabitants of these States:
For cutting off our Trade with all parts of the world:
For imposing Taxes on us without our Consent:
For depriving us in many cases, of the benefit of Trial by Jury:
For transporting us beyond Seas to be tried for pretended offences:
For abolishing the free System of English Laws in a neighbouring Province, establishing therein an Arbitrary government, and enlarging its Boundaries so as to render it at once an example and fit instrument for introducing the same absolute rule into these Colonies:
For taking away our Charters, abolishing our most valuable Laws and altering fundamentally the Forms of our Governments:
For suspending our own Legislatures, and declaring themselves invested with power to legislate for us in all cases whatsoever.
He has abdicated Government here, by declaring us out of his Protection and waging War against us.
He has plundered our seas, ravaged our coasts, burnt our towns, and destroyed the lives of our people.
He is at this time transporting large Armies of foreign Mercenaries to compleat the works of death, desolation, and tyranny, already begun with circumstances of Cruelty & Perfidy scarcely paralleled in the most barbarous ages, and totally unworthy the Head of a civilized nation.
He has constrained our fellow Citizens taken Captive on the high Seas to bear Arms against their Country, to become the executioners of their friends and Brethren, or to fall themselves by their Hands.
He has excited domestic insurrections amongst us, and has endeavoured to bring on the inhabitants of our frontiers, the merciless Indian Savages whose known rule of warfare, is an undistinguished destruction of all ages, sexes and conditions.
In every stage of these Oppressions We have Petitioned for Redress in the most humble terms: Our repeated Petitions have been answered only by repeated injury. A Prince, whose character is thus marked by every act which may define a Tyrant, is unfit to be the ruler of a free people.
This section essentially finishes the case for independence. The conditions that justified revolution have been shown.
Nor have We been wanting in attentions to our British brethren. We have warned them from time to time of attempts by their legislature to extend an unwarrantable jurisdiction over us. We have reminded them of the circumstances of our emigration and settlement here. We have appealed to their native justice and magnanimity, and we have conjured them by the ties of our common kindred to disavow these usurpations, which would inevitably interrupt our connections and correspondence. They too have been deaf to the voice of justice and of consanguinity. We must, therefore, acquiesce in the necessity, which denounces our Separation, and hold them, as we hold the rest of mankind, Enemies in War, in Peace Friends.
The signers assert that there exist conditions under which people must change their government, that the British have produced such conditions and, by necessity, the colonies must throw off political ties with the British Crown and become independent states. The conclusion contains, at its core, the Lee Resolution that had been passed on July 2.
We, therefore, the Representatives of the united States of America, in General Congress, Assembled, appealing to the Supreme Judge of the world for the rectitude of our intentions, do, in the Name, and by Authority of the good People of these Colonies, solemnly publish and declare, That these united Colonies are, and of Right ought to be Free and Independent States; that they are Absolved from all Allegiance to the British Crown, and that all political connection between them and the State of Great Britain, is and ought to be totally dissolved; and that as Free and Independent States, they have full Power to levy War, conclude Peace, contract Alliances, establish Commerce, and to do all other Acts and Things which Independent States may of right do. And for the support of this Declaration, with a firm reliance on the protection of divine Providence, we mutually pledge to each other our Lives, our Fortunes and our sacred Honor.
The first and most famous signature on the engrossed copy was that of John Hancock, President of the Continental Congress. Two future presidents (Thomas Jefferson and John Adams) and a father and great - grandfather of two other presidents (Benjamin Harrison) were among the signatories. Edward Rutledge (age 26) was the youngest signer, and Benjamin Franklin (age 70) was the oldest signer. The fifty - six signers of the Declaration represented the new states as follows (from north to south):
Historians have often sought to identify the sources that most influenced the words and political philosophy of the Declaration of Independence. By Jefferson 's own admission, the Declaration contained no original ideas, but was instead a statement of sentiments widely shared by supporters of the American Revolution. As he explained in 1825:
Neither aiming at originality of principle or sentiment, nor yet copied from any particular and previous writing, it was intended to be an expression of the American mind, and to give to that expression the proper tone and spirit called for by the occasion.
Jefferson 's most immediate sources were two documents written in June 1776: his own draft of the preamble of the Constitution of Virginia, and George Mason 's draft of the Virginia Declaration of Rights. Ideas and phrases from both of these documents appear in the Declaration of Independence. They were, in turn, directly influenced by the 1689 English Declaration of Rights, which formally ended the reign of King James II. During the American Revolution, Jefferson and other Americans looked to the English Declaration of Rights as a model of how to end the reign of an unjust king. The Scottish Declaration of Arbroath (1320) and the Dutch Act of Abjuration (1581) have also been offered as models for Jefferson 's Declaration, but these models are now accepted by few scholars.
Jefferson wrote that a number of authors exerted a general influence on the words of the Declaration. English political theorist John Locke is usually cited as one of the primary influences, a man whom Jefferson called one of "the three greatest men that have ever lived ''. In 1922, historian Carl L. Becker wrote, "Most Americans had absorbed Locke 's works as a kind of political gospel; and the Declaration, in its form, in its phraseology, follows closely certain sentences in Locke 's second treatise on government. '' The extent of Locke 's influence on the American Revolution has been questioned by some subsequent scholars, however. Historian Ray Forrest Harvey argued in 1937 for the dominant influence of Swiss jurist Jean Jacques Burlamaqui, declaring that Jefferson and Locke were at "two opposite poles '' in their political philosophy, as evidenced by Jefferson 's use in the Declaration of Independence of the phrase "pursuit of happiness '' instead of "property ''. Other scholars emphasized the influence of republicanism rather than Locke 's classical liberalism. Historian Garry Wills argued that Jefferson was influenced by the Scottish Enlightenment, particularly Francis Hutcheson, rather than Locke, an interpretation that has been strongly criticized.
Legal historian John Phillip Reid has written that the emphasis on the political philosophy of the Declaration has been misplaced. The Declaration is not a philosophical tract about natural rights, argues Reid, but is instead a legal document -- an indictment against King George for violating the constitutional rights of the colonists. Historian David Armitage has argued that the Declaration was strongly influenced by de Vattel 's The Law of Nations, the dominant international law treatise of the period, and a book that Benjamin Franklin said was "continually in the hands of the members of our Congress ''. Armitage writes, "Vattel made independence fundamental to his definition of statehood ''; therefore, the primary purpose of the Declaration was "to express the international legal sovereignty of the United States ''. If the United States were to have any hope of being recognized by the European powers, the American revolutionaries first had to make it clear that they were no longer dependent on Great Britain. The Declaration of Independence does not have the force of law domestically, but nevertheless it may help to provide historical and legal clarity about the Constitution and other laws.
The Declaration became official when Congress voted for it on July 4; signatures of the delegates were not needed to make it official. The handwritten copy of the Declaration of Independence that was signed by Congress is dated July 4, 1776. The signatures of fifty - six delegates are affixed; however, the exact date when each person signed it has long been the subject of debate. Jefferson, Franklin, and Adams all wrote that the Declaration had been signed by Congress on July 4. But in 1796, signer Thomas McKean disputed that the Declaration had been signed on July 4, pointing out that some signers were not then present, including several who were not even elected to Congress until after that date.
The Declaration was transposed on paper, adopted by the Continental Congress, and signed by John Hancock, President of the Congress, on July 4, 1776, according to the 1911 record of events by the U.S. State Department under Secretary Philander C. Knox. On August 2, 1776, a parchment paper copy of the Declaration was signed by 56 persons. Many of these signers were not present when the original Declaration was adopted on July 4. Signer Matthew Thornton from New Hampshire was seated in the Continental Congress in November; he asked for and received the privilege of adding his signature at that time, and signed on November 4, 1776.
Historians have generally accepted McKean 's version of events, arguing that the famous signed version of the Declaration was created after July 19, and was not signed by Congress until August 2, 1776. In 1986, legal historian Wilfred Ritz argued that historians had misunderstood the primary documents and given too much credence to McKean, who had not been present in Congress on July 4. According to Ritz, about thirty - four delegates signed the Declaration on July 4, and the others signed on or after August 2. Historians who reject a July 4 signing maintain that most delegates signed on August 2, and that those eventual signers who were not present added their names later.
Two future U.S. presidents were among the signatories: Thomas Jefferson and John Adams. The most famous signature on the engrossed copy is that of John Hancock, who presumably signed first as President of Congress. Hancock 's large, flamboyant signature became iconic, and the term John Hancock emerged in the United States as an informal synonym for "signature ''. A commonly circulated but apocryphal account claims that, after signing, Hancock commented, "The British ministry can read that name without spectacles. '' Another apocryphal report indicates that Hancock proudly declared, "There! I guess King George will be able to read that! ''
Various legends emerged years later about the signing of the Declaration, when the document had become an important national symbol. In one famous story, John Hancock supposedly said that Congress, having signed the Declaration, must now "all hang together '', and Benjamin Franklin replied: "Yes, we must indeed all hang together, or most assuredly we shall all hang separately. '' The quotation did not appear in print until more than fifty years after Franklin 's death.
The Syng inkstand used at the signing was also used at the signing of the United States Constitution in 1787.
After Congress approved the final wording of the Declaration on July 4, a handwritten copy was sent a few blocks away to the printing shop of John Dunlap. Through the night, Dunlap printed about 200 broadsides for distribution. Before long, the Declaration was read to audiences and reprinted in newspapers throughout the thirteen states. The first official public reading of the document was by John Nixon in the yard of Independence Hall on July 8; public readings also took place on that day in Trenton, New Jersey and Easton, Pennsylvania. A German translation of the Declaration was published in Philadelphia by July 9.
President of Congress John Hancock sent a broadside to General George Washington, instructing him to have it proclaimed "at the Head of the Army in the way you shall think it most proper ''. Washington had the Declaration read to his troops in New York City on July 9, with thousands of British troops on ships in the harbor. Washington and Congress hoped that the Declaration would inspire the soldiers, and encourage others to join the army. After hearing the Declaration, crowds in many cities tore down and destroyed signs or statues representing royal authority. An equestrian statue of King George in New York City was pulled down and the lead used to make musket balls.
British officials in North America sent copies of the Declaration to Great Britain. It was published in British newspapers beginning in mid-August, it had reached Florence and Warsaw by mid-September, and a German translation appeared in Switzerland by October. The first copy of the Declaration sent to France was lost, and the second copy arrived only in November 1776. It was carried to Portuguese America by the Brazilian medical student "Vendek '' José Joaquim Maia e Barbalho, who had met with Thomas Jefferson in Nîmes.
The Spanish - American authorities banned the circulation of the Declaration, but it was widely transmitted and translated: by Venezuelan Manuel García de Sena, by Colombian Miguel de Pombo, by Ecuadorian Vicente Rocafuerte, and by New Englanders Richard Cleveland and William Shaler, who distributed the Declaration and the United States Constitution among creoles in Chile and Indians in Mexico in 1821. The North Ministry did not give an official answer to the Declaration, but instead secretly commissioned pamphleteer John Lind to publish a response entitled Answer to the Declaration of the American Congress. British Tories denounced the signers of the Declaration for not applying the same principles of "life, liberty, and the pursuit of happiness '' to African Americans. Thomas Hutchinson, the former royal governor of Massachusetts, also published a rebuttal. These pamphlets challenged various aspects of the Declaration. Hutchinson argued that the American Revolution was the work of a few conspirators who wanted independence from the outset, and who had finally achieved it by inducing otherwise loyal colonists to rebel. Lind 's pamphlet had an anonymous attack on the concept of natural rights written by Jeremy Bentham, an argument that he repeated during the French Revolution. Both pamphlets asked how the American slaveholders in Congress could proclaim that "all men are created equal '' without freeing their own slaves.
William Whipple, a signer of the Declaration of Independence who had fought in the war, freed his slave Prince Whipple because of revolutionary ideals. In the postwar decades, other slaveholders also freed their slaves; from 1790 to 1810, the percentage of free blacks in the Upper South increased to 8.3 percent from less than one percent of the black population. All Northern states abolished slavery by 1804.
The official copy of the Declaration of Independence was the one printed on July 4, 1776 under Jefferson 's supervision. It was sent to the states and to the Army and was widely reprinted in newspapers. The slightly different "engrossed copy '' (shown at the top of this article) was made later for members to sign. The engrossed version is the one widely distributed in the 21st century. Note that the opening lines differ between the two versions.
The copy of the Declaration that was signed by Congress is known as the engrossed or parchment copy. It was probably engrossed (that is, carefully handwritten) by clerk Timothy Matlack. A facsimile made in 1823 has become the basis of most modern reproductions rather than the original because of poor conservation of the engrossed copy through the 19th century. In 1921, custody of the engrossed copy of the Declaration was transferred from the State Department to the Library of Congress, along with the United States Constitution. After the Japanese attack on Pearl Harbor in 1941, the documents were moved for safekeeping to the United States Bullion Depository at Fort Knox in Kentucky, where they were kept until 1944. In 1952, the engrossed Declaration was transferred to the National Archives and is now on permanent display at the National Archives in the "Rotunda for the Charters of Freedom ''.
The document signed by Congress and enshrined in the National Archives is usually regarded as the Declaration of Independence, but historian Julian P. Boyd argued that the Declaration, like Magna Carta, is not a single document. Boyd considered the printed broadsides ordered by Congress to be official texts, as well. The Declaration was first published as a broadside that was printed the night of July 4 by John Dunlap of Philadelphia. Dunlap printed about 200 broadsides, of which 26 are known to survive. The 26th copy was discovered in The National Archives in England in 2009.
In 1777, Congress commissioned Mary Katherine Goddard to print a new broadside that listed the signers of the Declaration, unlike the Dunlap broadside. Nine copies of the Goddard broadside are known to still exist. A variety of broadsides printed by the states are also extant.
Several early handwritten copies and drafts of the Declaration have also been preserved. Jefferson kept a four - page draft that late in life he called the "original Rough draught ''. It is not known how many drafts Jefferson wrote prior to this one, and how much of the text was contributed by other committee members. In 1947, Boyd discovered a fragment of an earlier draft in Jefferson 's handwriting. Jefferson and Adams sent copies of the rough draft to friends, with slight variations.
During the writing process, Jefferson showed the rough draft to Adams and Franklin, and perhaps to other members of the drafting committee, who made a few more changes. Franklin, for example, may have been responsible for changing Jefferson 's original phrase "We hold these truths to be sacred and undeniable '' to "We hold these truths to be self - evident ''. Jefferson incorporated these changes into a copy that was submitted to Congress in the name of the committee. The copy that was submitted to Congress on June 28 has been lost, and was perhaps destroyed in the printing process, or destroyed during the debates in accordance with Congress 's secrecy rule.
On April 21, 2017 it was announced that a second engrossed copy had been discovered in an archive in Sussex, England. Named by its finders the "Sussex Declaration '', it differs from the National Archives copy (which the finders refer to as the "Matlack Declaration '') in that the signatures on it are not grouped by States. How it came to be in England is not yet known, but the finders believe that the randomness of the signatures points to an origin with signatory James Wilson, who had argued strongly that the Declaration was made not by the States but by the whole people.
The Declaration was neglected in the years immediately following the American Revolution, having served its original purpose in announcing the independence of the United States. Early celebrations of Independence Day largely ignored the Declaration, as did early histories of the Revolution. The act of declaring independence was considered important, whereas the text announcing that act attracted little attention. The Declaration was rarely mentioned during the debates about the United States Constitution, and its language was not incorporated into that document. George Mason 's draft of the Virginia Declaration of Rights was more influential, and its language was echoed in state constitutions and state bills of rights more often than Jefferson 's words. "In none of these documents '', wrote Pauline Maier, "is there any evidence whatsoever that the Declaration of Independence lived in men 's minds as a classic statement of American political principles. ''
Many leaders of the French Revolution admired the Declaration of Independence but were also interested in the new American state constitutions. The inspiration and content of the French Declaration of the Rights of Man and Citizen (1789) emerged largely from the ideals of the American Revolution. Its key drafts were prepared by Lafayette, working closely in Paris with his friend Thomas Jefferson. It also borrowed language from George Mason 's Virginia Declaration of Rights. The declaration also influenced the Russian Empire. The document had a particular impact on the Decembrist revolt and other Russian thinkers.
According to historian David Armitage, the Declaration of Independence did prove to be internationally influential, but not as a statement of human rights. Armitage argued that the Declaration was the first in a new genre of declarations of independence that announced the creation of new states.
Later declarations were directly influenced by the text of the Declaration of Independence itself. The Manifesto of the Province of Flanders (1790) was the first foreign derivation of the Declaration; others include the Venezuelan Declaration of Independence (1811), the Liberian Declaration of Independence (1847), the declarations of secession by the Confederate States of America (1860 -- 61), and the Vietnamese Proclamation of Independence (1945). These declarations echoed the United States Declaration of Independence in announcing the independence of a new state, without necessarily endorsing the political philosophy of the original.
Other countries have used the Declaration as inspiration or have directly copied sections from it. These include the Haitian declaration of January 1, 1804 during the Haitian Revolution, the United Provinces of New Granada in 1811, the Argentine Declaration of Independence in 1816, the Chilean Declaration of Independence in 1818, Costa Rica in 1821, El Salvador in 1821, Guatemala in 1821, Honduras in 1821, Mexico in 1821, Nicaragua in 1821, Peru in 1821, the Bolivian War of Independence in 1825, Uruguay in 1825, Ecuador in 1830, Colombia in 1831, Paraguay in 1842, the Dominican Republic in 1844, the Texas Declaration of Independence in March 1836, the California Republic in November 1836, the Hungarian Declaration of Independence in 1849, the Declaration of the Independence of New Zealand in 1835, and the Czechoslovak declaration of independence of 1918, drafted in Washington D.C. with Gutzon Borglum among the drafters. The Rhodesian declaration of independence, ratified in November 1965, is based on the American one as well; however, it omits the phrases "all men are created equal '' and "the consent of the governed ''. The South Carolina declaration of secession from December 1860 also mentions the U.S. Declaration of Independence, though it, like the Rhodesian one, omits references to "all men are created equal '' and "consent of the governed ''.
Interest in the Declaration was revived in the 1790s with the emergence of the United States 's first political parties. Throughout the 1780s, few Americans knew or cared who wrote the Declaration. But in the next decade, Jeffersonian Republicans sought political advantage over their rival Federalists by promoting both the importance of the Declaration and Jefferson as its author. Federalists responded by casting doubt on Jefferson 's authorship or originality, and by emphasizing that independence was declared by the whole Congress, with Jefferson as just one member of the drafting committee. Federalists insisted that Congress 's act of declaring independence, in which Federalist John Adams had played a major role, was more important than the document announcing it. But this view faded away, like the Federalist Party itself, and, before long, the act of declaring independence became synonymous with the document.
A less partisan appreciation for the Declaration emerged in the years following the War of 1812, thanks to a growing American nationalism and a renewed interest in the history of the Revolution. In 1817, Congress commissioned John Trumbull 's famous painting of the signers, which was exhibited to large crowds before being installed in the Capitol. The earliest commemorative printings of the Declaration also appeared at this time, offering many Americans their first view of the signed document. Collective biographies of the signers were first published in the 1820s, giving birth to what Garry Wills called the "cult of the signers ''. In the years that followed, many stories about the writing and signing of the document were published for the first time.
When interest in the Declaration was revived, the sections that were most important in 1776 were no longer relevant: the announcement of the independence of the United States and the grievances against King George. But the second paragraph was applicable long after the war had ended, with its talk of self - evident truths and unalienable rights. The Constitution and the Bill of Rights lacked sweeping statements about rights and equality, and advocates of groups with grievances turned to the Declaration for support. Starting in the 1820s, variations of the Declaration were issued to proclaim the rights of workers, farmers, women, and others. In 1848, for example, the Seneca Falls Convention of women 's rights advocates declared that "all men and women are created equal ''.
A key step marking the evolution of the Declaration in the nation 's consciousness is the now well - known painting Declaration of Independence by Connecticut political painter John Trumbull. It was commissioned by the United States Congress in 1817. Measuring 12 by 18 feet (3.7 by 5.5 m), it has hung in the United States Capitol Rotunda since 1826. It has often been reproduced, and is the visual image most associated by Americans with the Declaration.
The painting is sometimes incorrectly described as depicting the signing of the Declaration of Independence. It actually shows the five - man drafting committee presenting its draft of the Declaration to the Second Continental Congress, an event that took place on June 28, 1776, and not the signing of the document, which took place later.
The painting, with its figures painted from life when possible, does not include all of the signers. Some had died, and images of others could not be located. One figure had participated in the drafting but did not sign the final document; another refused to sign. In fact, the membership of the Second Continental Congress changed as time passed, and the figures in the painting were never all in the same room at the same time.
It is, however, an accurate depiction of the room in the building known today as Independence Hall, the centerpiece of the Independence National Historical Park in Philadelphia, Pennsylvania. Trumbull visited the room, which was where the Second Continental Congress met, when researching for his painting. At the time it was the Pennsylvania State House.
The apparent contradiction between the claim that "all men are created equal '' and the existence of American slavery attracted comment when the Declaration was first published. As mentioned above, Jefferson had included a paragraph in his initial draft that strongly indicted Great Britain 's role in the slave trade, but this was deleted from the final version. Jefferson himself was a prominent Virginia slave holder, having owned hundreds of slaves. Referring to this seeming contradiction, English abolitionist Thomas Day wrote in a 1776 letter, "If there be an object truly ridiculous in nature, it is an American patriot, signing resolutions of independency with the one hand, and with the other brandishing a whip over his affrighted slaves. ''
In the 19th century, the Declaration took on a special significance for the abolitionist movement. Historian Bertram Wyatt - Brown wrote that "abolitionists tended to interpret the Declaration of Independence as a theological as well as a political document ''. Abolitionist leaders Benjamin Lundy and William Lloyd Garrison adopted the "twin rocks '' of "the Bible and the Declaration of Independence '' as the basis for their philosophies. "As long as there remains a single copy of the Declaration of Independence, or of the Bible, in our land, '' wrote Garrison, "we will not despair. '' For radical abolitionists such as Garrison, the most important part of the Declaration was its assertion of the right of revolution. Garrison called for the destruction of the government under the Constitution, and the creation of a new state dedicated to the principles of the Declaration.
The controversial question of whether to add additional slave states to the United States coincided with the growing stature of the Declaration. The first major public debate about slavery and the Declaration took place during the Missouri controversy of 1819 to 1821. Antislavery Congressmen argued that the language of the Declaration indicated that the Founding Fathers of the United States had been opposed to slavery in principle, and so new slave states should not be added to the country. Proslavery Congressmen led by Senator Nathaniel Macon of North Carolina argued that the Declaration was not a part of the Constitution and therefore had no relevance to the question.
With the antislavery movement gaining momentum, defenders of slavery such as John Randolph and John C. Calhoun found it necessary to argue that the Declaration 's assertion that "all men are created equal '' was false, or at least that it did not apply to black people. During the debate over the Kansas -- Nebraska Act in 1853, for example, Senator John Pettit of Indiana argued that the statement "all men are created equal '' was not a "self - evident truth '' but a "self - evident lie ''. Opponents of the Kansas -- Nebraska Act, including Salmon P. Chase and Benjamin Wade, defended the Declaration and what they saw as its antislavery principles.
The Declaration 's relationship to slavery was taken up in 1854 by Abraham Lincoln, a little - known former Congressman who idolized the Founding Fathers. Lincoln thought that the Declaration of Independence expressed the highest principles of the American Revolution, and that the Founding Fathers had tolerated slavery with the expectation that it would ultimately wither away. For the United States to legitimize the expansion of slavery in the Kansas - Nebraska Act, thought Lincoln, was to repudiate the principles of the Revolution. In his October 1854 Peoria speech, Lincoln said:
Nearly eighty years ago we began by declaring that all men are created equal; but now from that beginning we have run down to the other declaration, that for some men to enslave others is a "sacred right of self - government ''.... Our republican robe is soiled and trailed in the dust.... Let us repurify it. Let us re-adopt the Declaration of Independence, and with it, the practices, and policy, which harmonize with it.... If we do this, we shall not only have saved the Union: but we shall have saved it, as to make, and keep it, forever worthy of the saving.
The meaning of the Declaration was a recurring topic in the famed debates between Lincoln and Stephen Douglas in 1858. Douglas argued that the phrase "all men are created equal '' in the Declaration referred to white men only. The purpose of the Declaration, he said, had simply been to justify the independence of the United States, and not to proclaim the equality of any "inferior or degraded race ''. Lincoln, however, thought that the language of the Declaration was deliberately universal, setting a high moral standard to which the American republic should aspire. "I had thought the Declaration contemplated the progressive improvement in the condition of all men everywhere, '' he said. During the seventh and last joint debate with Stephen Douglas at Alton, Illinois on October 15, 1858, Lincoln said about the Declaration:
I think the authors of that notable instrument intended to include all men, but they did not mean to declare all men equal in all respects. They did not mean to say all men were equal in color, size, intellect, moral development, or social capacity. They defined with tolerable distinctness in what they did consider all men created equal -- equal in "certain inalienable rights, among which are life, liberty, and the pursuit of happiness. '' This they said, and this they meant. They did not mean to assert the obvious untruth that all were then actually enjoying that equality, or yet that they were about to confer it immediately upon them. In fact, they had no power to confer such a boon. They meant simply to declare the right, so that the enforcement of it might follow as fast as circumstances should permit. They meant to set up a standard maxim for free society which should be familiar to all, constantly looked to, constantly labored for, and even, though never perfectly attained, constantly approximated, and thereby constantly spreading and deepening its influence, and augmenting the happiness and value of life to all people, of all colors, everywhere.
According to Pauline Maier, Douglas 's interpretation was more historically accurate, but Lincoln 's view ultimately prevailed. "In Lincoln 's hands, '' wrote Maier, "the Declaration of Independence became first and foremost a living document '' with "a set of goals to be realized over time ''.
Like Daniel Webster, James Wilson, and Joseph Story before him, Lincoln argued that the Declaration of Independence was a founding document of the United States, and that this had important implications for interpreting the Constitution, which had been ratified more than a decade after the Declaration. The Constitution did not use the word "equality '', yet Lincoln believed that the concept that "all men are created equal '' remained a part of the nation 's founding principles. He famously expressed this belief in the opening sentence of his 1863 Gettysburg Address: "Four score and seven years ago (i.e. in 1776) our fathers brought forth on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal. ''
Lincoln 's view of the Declaration as a moral guide to interpreting the Constitution became influential. "For most people now, '' wrote Garry Wills in 1992, "the Declaration means what Lincoln told us it means, as a way of correcting the Constitution itself without overthrowing it. '' Admirers of Lincoln such as Harry V. Jaffa praised this development. Critics of Lincoln, notably Willmoore Kendall and Mel Bradford, argued that Lincoln dangerously expanded the scope of the national government and violated states ' rights by reading the Declaration into the Constitution.
In July 1848, the first woman 's rights convention, the Seneca Falls Convention, was held in Seneca Falls, New York. The convention was organized by Elizabeth Cady Stanton, Lucretia Mott, Mary Ann McClintock, and Jane Hunt. In their "Declaration of Sentiments '', patterned on the Declaration of Independence, the convention members demanded social and political equality for women. Their motto was that "All men and women are created equal '' and the convention demanded suffrage for women. The suffrage movement was supported by William Lloyd Garrison and Frederick Douglass.
The adoption of the Declaration of Independence was dramatized in the 1969 Tony Award -- winning musical 1776, and the 1972 movie of the same name, as well as in the 2008 television miniseries John Adams.
The Declaration was chosen to become the first digitized text (1971).
Since 1976 (the United States Bicentennial), Trumbull 's Declaration of Independence has been used on the back of the United States two - dollar bill.
In 1984, the Memorial to the 56 Signers of the Declaration was dedicated in Constitution Gardens on the National Mall in Washington, D.C., where the signatures of all the original signers are carved in stone with their names, places of residence, and occupations.
The new One World Trade Center building in New York City (2014) is 1776 feet high, to symbolize the year that the Declaration of Independence was signed.
|
explain the validity of oral traditions as a source of history | Oral history - wikipedia
Oral history is the collection and study of historical information about individuals, families, important events, or everyday life using audiotapes, videotapes, or transcriptions of planned interviews. These interviews are conducted with people who participated in or observed past events and whose memories and perceptions of these are to be preserved as an aural record for future generations. Oral history strives to obtain information from different perspectives, most of which can not be found in written sources. Oral history also refers to information gathered in this manner and to a written work (published or unpublished) based on such data, often preserved in archives and large libraries. Knowledge presented by Oral History (OH) is unique in that it shares the tacit perspective, thoughts, opinions and understanding of the interviewee in its primary form.
The term is sometimes used in a more general sense to refer to any information about past events that people who experienced them tell anybody else, but professional historians usually consider this to be oral tradition. However, as the Columbia Encyclopedia explains:
Primitive societies have long relied on oral tradition to preserve a record of the past in the absence of written histories. In Western society, the use of oral material goes back to the early Greek historians Herodotus and Thucydides, both of whom made extensive use of oral reports from witnesses. The modern concept of oral history was developed in the 1940s by Allan Nevins and his associates at Columbia University.
Oral history has become an international movement in historical research. Oral historians in different countries have approached the collection, analysis, and dissemination of oral history in different modes. However, it should also be noted that there are many ways of creating oral histories and carrying out the study of oral history even within individual national contexts.
According to the Columbia Encyclopedia, the accessibility of tape recorders in the 1960s and 1970s led to oral documentation of the era 's movements and protests. Following this, oral history has increasingly become a respected record type. Some oral historians now also account for the subjective memories of interviewees due to the research of Italian historian Alessandro Portelli and his associates.
Oral histories are also used in many communities to document the experiences of survivors of tragedies. Following the Holocaust, there has emerged a rich tradition of oral history, particularly of Jewish survivors. The United States Holocaust Memorial Museum has an extensive archive of over 70,000 oral history interviews. There are also several organizations dedicated specifically to collecting and preserving oral histories of survivors. Oral history as a discipline has fairly low barriers to entry, so it is an act in which laypeople can readily participate -- in his book Doing Oral History, Donald Ritchie wrote that "oral history has room for both the academic and the layperson. With reasonable training... anyone can conduct a useable oral history. '' This is especially meaningful in cases like the Holocaust, where survivors may be less comfortable telling their story to a journalist than they would be to a historian or family member.
In the United States, there are several organizations dedicated to doing oral history which are not affiliated with universities or specific locations. StoryCorps is one of the most well - known of these: following the model of the Federal Writers ' Project created as part of the Works Progress Administration, StoryCorps ' mission is to record the stories of Americans from all walks of life. In contrast to the scholarly tradition of oral history, StoryCorps subjects are interviewed by people they know. There are a number of StoryCorps initiatives that have targeted specific populations or problems, following in the tradition of using oral history as a method to amplify voices that might otherwise be marginalized.
Since the early 1970s, oral history in Britain has grown from being a method in folklore studies (see for example the work of the School of Scottish Studies in the 1950s) to becoming a key component in community histories. Oral history continues to be an important means by which non-academics can actively participate in the compilation and study of history. However, practitioners across a wide range of academic disciplines have also developed the method into a way of recording, understanding, and archiving narrated memories. Influences have included women 's history and labour history.
In Britain, the Oral History Society has played a key role in facilitating and developing the use of oral history.
A more complete account of the history of oral history in Britain and Northern Ireland can be found at "Making Oral History '' on the Institute of Historical Research 's website.
The Bureau of Military History conducted over 1,700 interviews with veterans of the Irish War of Independence and related episodes in Ireland. The documentation was released for research in 2003.
During 1998 and 1999, 40 BBC local radio stations recorded personal oral histories from a broad cross-section of the population for The Century Speaks series. The result was 640 half - hour radio documentaries, broadcast in the final weeks of the millennium, and one of the largest single oral history collections in Europe, the Millennium Memory Bank (MMB). The interview based recordings are held by the British Library Sound Archive in the oral history collection.
In one of the largest memory projects anywhere, the BBC in 2003 - 06 invited its audiences to send in recollections of the home front in the Second World War. It put 47,000 of the recollections online, along with 15,000 photographs.
Alessandro Portelli is an Italian oral historian. He is known for his work which compared workers ' experiences in Harlan County, Kentucky and Terni, Italy. Other oral historians have drawn on Portelli 's analysis of memory, identity, and the construction of history.
As of 2015, since the government - run historiography in modern Belarus almost fully excludes repression during the epoch when Belarus was part of the Soviet Union, only private initiatives cover these aspects. Citizens ' groups in Belarus use the methods of oral history and record narrative interviews on video: the Virtual Museum of Soviet Repression in Belarus presents a full Virtual museum with intense use of oral history. The Belarusian Oral History Archive project also provides material based on oral history recordings.
Czech oral history began to develop in the 1980s with a focus on social movements and political activism. Little is known of the practice of oral history or of attempts to document stories prior to this. The practice of oral history began to take shape in the 1990s. In 2000, the Oral History Center (COH) at the Institute of Contemporary History, Academy of Sciences, Czech Republic (AV ČR) was established with the aim to "systematically support the development of oral history methodology and its application in historical research. ''
In 2001, Post Bellum, a nonprofit organization, was established to document "the memories of witnesses of the important historical phenomenons of the 20th century '' within the Czech Republic and surrounding European countries. Post Bellum works in partnership with Czech Radio and the Institute for the Study of Totalitarian Regimes. Their oral history project Memory of Nation was created in 2008, and interviews are archived online for user access. As of January 2015, the project has more than 2,100 published witness accounts in several languages, with more than 24,000 pictures.
Other projects, including articles and books, have been funded by the Czech Science Foundation (AV ČR).
These publications aim to demonstrate that oral history contributes to the understanding of human lives and history itself, such as the motives behind the dissidents ' activities, the formation of opposition groups, communication between dissidents and state representatives and the emergence of ex-communist elites and their decision - making processes.
Oral history centers in the Czech Republic emphasize educational activities (seminars, lectures, conferences), archiving and maintaining interview collections, and providing consultations to those interested in the method.
Because of repression during the Franco dictatorship (1939 -- 75), the development of oral history in Spain was quite limited until the 1970s. It became well - developed in the early 1980s, and often had a focus on the Civil War years (1936 -- 39), especially regarding the losers whose stories had been suppressed. The field was based at the University of Barcelona. Professor Mercedes Vilanova was a leading exponent, and combined it with her interest in quantification and social history. The Barcelona group sought to integrate oral sources with traditional written sources to create mainstream, not ghettoized, historical interpretations. They sought to give a public voice to neglected groups, such as women, illiterates, political leftists, and ethnic minorities.
Oral history began with a focus on national leaders in the United States, but has expanded to include groups representing the entire population. In Britain, ' history from below ' and interviewing people who had been ' hidden from history ' were more influential. However, in both countries elite oral history has emerged as an important strand. Scientists, for example, have been covered in numerous oral history projects. Doel (2003) discusses the use of oral interviews by scholars as primary sources. He lists major oral history projects in the history of science begun after 1950. Oral histories, he concludes, can augment the biographies of scientists and help spotlight how their social origins influenced their research. Doel acknowledges the common concerns historians have regarding the validity of oral history accounts. He identifies studies that used oral histories successfully to provide critical and unique insight into otherwise obscure subjects, such as the role scientists played in shaping US policy after World War II. Interviews furthermore can provide road maps for researching archives, and can even serve as a fail - safe resource when written documents have been lost or destroyed. Roger D. Launius (2003) shows the huge size and complexity of the National Aeronautics and Space Administration (NASA) oral history program since 1959. NASA systematically documented its operations through oral histories. They can help to explore broader issues regarding the evolution of a major federal agency. The collection consists primarily of oral histories conducted by scholars working on books about the agency. Since 1996, however, the collection has also included oral histories of senior NASA administrators and officials, astronauts, and project managers, part of a broader project to document the lives of key agency individuals. Launius emphasizes efforts to include such less - well - known groups within the agency as the Astrobiology Program, and to collect the oral histories of women in NASA.
Contemporary oral history involves recording or transcribing eyewitness accounts of historical events. Some anthropologists started collecting recordings (at first especially of Native American folklore) on phonograph cylinders in the late 19th century. In the 1930s, the Federal Writers ' Project -- part of the Works Progress Administration (WPA) -- sent out interviewers to collect accounts from various groups, including surviving witnesses of the Civil War, slavery, and other major historical events. The Library of Congress also began recording traditional American music and folklore onto acetate discs. With the development of audio tape recordings after World War II, the task of oral historians became easier.
In 1946, David P. Boder, a professor of psychology at the Illinois Institute of Technology in Chicago, traveled to Europe to record long interviews with "displaced persons '' -- most of them Holocaust survivors. Using the first device capable of capturing hours of audio -- the wire recorder -- Boder came back with the first recorded Holocaust testimonials and in all likelihood the first recorded oral histories of significant length.
Many state and local historical societies have oral history programs. Sinclair Kopp (2002) reports on the Oregon Historical Society 's program. It began in 1976 with the hiring of Charles Digregorio, who had studied at Columbia with Nevins. Thousands of sound recordings, reel - to - reel tapes, transcriptions, and radio broadcasts have made it one of the largest collections of oral history on the Pacific Coast. In addition to political figures and prominent businessmen, the Oregon Historical Society has done interviews with minorities, women, farmers, and other ordinary citizens, who have contributed extraordinary stories reflecting the state 's cultural and social heritage. Hill (2004) encourages oral history projects in high school courses. She demonstrates a lesson plan that encourages the study of local community history through interviews. By studying grassroots activism and the lived experiences of its participants, her high school students came to appreciate how African Americans worked to end Jim Crow laws in the 1950s.
Naison (2005) describes the Bronx African American History Project (BAAHP), an oral community history project developed by the Bronx County Historical Society. Its goal was to document the histories of black working - and middle - class residents of the South Bronx neighborhood of Morrisania in New York City since the 1940s.
The Middle East often requires oral history methods of research, mainly because of the relative lack of written and archival history and its emphasis on oral records and traditions. Furthermore, because of its population transfers, refugees and émigrés become suitable objects for oral history research.
Katharina Lange studied the tribal histories of Syria. The oral histories in this area could not be transposed into tangible, written form due to their positionalities, which Lange describes as "taking sides. '' The positionality of oral history could lead to conflict and tension. The tribal histories are typically narrated by men. While histories are also told by women, they are not accepted locally as "real history. '' Oral histories often detail the lives and feats of ancestors.
Genealogy is a prominent subject in the area. According to Lange, the oral historians often tell their own personalized genealogies to demonstrate their credibility, both in their social standing and their expertise in the field.
From 2003 to 2004, Professors Marianne Kamp and Russell Zanca researched agricultural collectivization in Uzbekistan; one part of their project involved using oral history methodology. The goal of the project was to learn more about life in the 1920s and 1930s to study the impact of the Soviet Union 's conquest. Twenty interviews each were conducted in the Fergana Valley, Tashkent, Bukhara, Khorezm, and Kashkadarya regions. These interviews uncovered untold stories of famine and death. The oral histories filled in a gap of information missing from the Central State Archive of Uzbekistan.
The rise of oral history is a new trend in historical studies in China that began in the late twentieth century. Some oral historians stress the collection of eyewitness accounts of the words and deeds of important historical figures and what really happened during those important historical events, which is similar to common practice in the West, while others focus more on important people and events, asking important figures to describe the decision making and details of important historical events. In December 2004, the Chinese Association of Oral History Studies was established. The establishment of this institution is thought to signal that the field of oral history studies in China has finally moved into a new phase of organized development.
Te - Kong Tong (1920 - 2009) was a prominent Chinese American historian and recognized as the "pioneer of oral history in contemporary China (中国 近代 口述 史学 的 开创 者) ''. His view on the oral history of historical figures was that historians should identify the changes in people and posit their evolution in parallel with that of history. His works include:
While oral tradition is an integral part of ancient Southeast Asian history, oral history is a relatively recent development. Since the 1960s, oral history has been accorded increasing attention both on institutional as well as individual levels, representing "history from above '' and "history from below ''.
In Oral History and Public Memories, Blackburn writes about oral history as a tool that was used "by political elites and state - run institutions to contribute to the goal of national building '' in postcolonial Southeast Asian countries. Blackburn draws most of his examples of oral history as a vehicle for "history from above '' from Malaysia and Singapore.
In terms of "history from below '', various oral history initiatives are being undertaken in Cambodia in an effort to record lived experiences from the rule of the Khmer Rouge regime while survivors are still living. These initiative take advantage of crowdsourced history to uncover the silences imposed on the oppressed.
Two prominent and ongoing oral history projects out of South Asia stem from time periods of ethnic violence that were decades apart: 1947 and 1984.
The 1947 Partition Archive was founded in 2010 by Guneeta Singh Bhalla, a physicist in Berkeley, California, who began conducting and recording interviews "to collect and preserve the stories of those who lived through this tumultuous time, to make sure this great human tragedy is n't forgotten. ''
The Sikh Diaspora Project was founded in 2014 by Brajesh Samarth, senior lecturer in Hindi - Urdu at Emory University in Atlanta, when he was a lecturer at Stanford University in California. The project focuses on interviews with members of the Sikh diaspora in the U.S. and Canada, including the many who migrated after the 1984 massacre of Sikhs in India.
In 1948, Allan Nevins, a Columbia University historian, established the Columbia Oral History Research Office, now known as the Columbia Center for Oral History, with a mission of recording, transcribing, and preserving oral history interviews. The Regional Oral History Office was founded in 1954 as a division of the University of California, Berkeley 's Bancroft Library. In 1967, American oral historians founded the Oral History Association, and British oral historians founded the Oral History Society in 1969. In 1981, Mansel G. Blackford, a business historian at Ohio State University, argued that oral history was a useful tool to write the history of corporate mergers. More recently, Harvard Business School launched the Creating Emerging Markets project, which "explores the evolution of business leadership in Africa, Asia, and Latin America throughout recent decades '' through oral history. "At its core are interviews, many on video, by the School 's faculty with leaders or former leaders of firms and NGOs who have had a major impact on their societies and enterprises across three continents. '' There are now numerous national organizations and an International Oral History Association, which hold workshops and conferences and publish newsletters and journals devoted to oral history theory and practices. Specialized collections of oral history sometimes have archives of widespread global interest; an example is the Lewis Walpole Library in Farmington, Connecticut, a department of the University Library of Yale.
Historians, folklorists, anthropologists, human geographers, sociologists, journalists, linguists, and many others employ some form of interviewing in their research. Although multi-disciplinary, oral historians have promoted common ethics and standards of practice, most importantly the attaining of the "informed consent '' of those being interviewed. Usually this is achieved through a deed of gift, which also establishes copyright ownership that is critical for publication and archival preservation.
Oral historians generally prefer to ask open - ended questions and avoid leading questions that encourage people to say what they think the interviewer wants them to say. Some interviews are "life reviews, '' conducted with people at the end of their careers. Other interviews focus on a specific period or a specific event in people 's lives, such as in the case of war veterans or survivors of a hurricane.
Feldstein (2004) considers oral history to be akin to journalism: both are committed to uncovering truths and compiling narratives about people, places, and events. Feldstein says each could benefit from adopting techniques from the other. Journalism could benefit by emulating the exhaustive and nuanced research methodologies used by oral historians. The practice of oral historians could be enhanced by utilizing the more sophisticated interviewing techniques employed by journalists, in particular, the use of adversarial encounters as a tactic for obtaining information from a respondent.
The first oral history archives focused on interviews with prominent politicians, diplomats, military officers, and business leaders. By the 1960s and ' 70s, influenced by the rise of new social history, interviewing began to be employed more often when historians investigated history from below. Whatever the field or focus of a project, oral historians attempt to record the memories of many different people when researching a given event. Interviewing a single person provides a single perspective. Individuals may misremember events or distort their account for personal reasons. By interviewing widely, oral historians seek points of agreement among many different sources, and also record the complexity of the issues. The nature of memory -- both individual and community -- is as much a part of the practice of oral history as are the stories collected.
Archaeologists sometimes conduct oral history interviews to learn more about unknown artifacts. Oral interviews can provide narratives, social meaning, and contexts for objects. When describing the use of oral histories in archaeological work, Paul Mullins emphasizes the importance of using these interviews to replace "it - narratives. '' It - narratives are the voices from objects themselves rather than people; according to Mullins, these lead to narratives that are often "sober, pessimistic, or even dystopian. ''
Oral history interviews were used to provide context and social meaning in the Overstone excavation project in Northumberland. Overstone consists of a row of four cottages. The excavation team, consisting of Jane Webster, Louise Tolson, Richard Carlton, and volunteers, found the discovered artifacts difficult to identify. The team first took the artifacts to an archaeology group, but the only person with knowledge about a found fragment recognized it from a type of pot her mother had. This inspired the team to conduct group interviews with volunteers who grew up in households using such objects. The team took their reference collection of artifacts to the interviews in order to trigger the memories of volunteers, revealing a "shared cultural identity. ''
In 1997 the Supreme Court of Canada, in the Delgamuukw v. British Columbia trial, ruled that oral histories were just as important as written testimony. Of oral histories, it said "that they are tangential to the ultimate purpose of the fact - finding process at trial -- the determination of the historical truth. ''
Writers who use oral history have often discussed its relationship to historical truth. Gilda O'Neill writes in Lost Voices, an oral history of East End hop - pickers: "I began to worry. Were the women 's, and my, memories true or were they just stories? I realised that I had no ' innocent ' sources of evidence - facts. I had, instead, the stories and their tellers ' reasons for remembering in their own particular ways. ' Duncan Barrett, one of the co-authors of The Sugar Girls describes some of the perils of relying on oral history accounts: "On two occasions, it became clear that a subject was trying to mislead us about what happened -- telling a self - deprecating story in one interview, and then presenting a different, and more flattering, version of events when we tried to follow it up... often our interviewees were keen to persuade us of a certain interpretation of the past, supporting broad, sweeping comments about historical change with specific stories from their lives. '' Alessandro Portelli argues that oral history is valuable nevertheless: "it tells us less about events as such than about their meaning (...) the unique and precious element which oral sources force upon the historian... is the speaker 's subjectivity. ''
Regarding the accuracy of oral history, Jean - Loup Gassend concludes in the book Autopsy of a Battle "I found that each witness account can be broken down into two parts: 1) descriptions of events that the witness participated in directly, and 2) descriptions of events that the witness did not actually participate in, but that he heard about from other sources. The distinction between these two parts of a witness account is of the highest importance. I noted that concerning events that the witnesses participated in, the information provided was surprisingly reliable, as was confirmed by comparison with other sources. The imprecision or mistakes usually concerned numbers, ranks, and dates, the first two tending to become inflated with time. Concerning events that the witness had not participated in personally, the information was only as reliable as whatever the source of information had been (various rumors); that is to say, it was often very unreliable and I usually discarded such information. ''
Another noteworthy case is the Mau Mau uprising in Kenya against the British colonizers. Central to the case was historian Caroline Elkins ' study of the UK 's brutal suppression of the uprising. Elkins ' work on this matter is largely based on oral testimonies of survivors and witnesses, which caused controversy in academia: "Some praised Elkins for breaking the ' code of silence ' that had squelched discussion of British imperial violence. Others branded her a self - aggrandising crusader whose overstated findings had relied on sloppy methods and dubious oral testimonies. '' The British court eventually ruled in the Kenyan claimants ' favor, which also served as a response to Elkins ' critics, as Justice McCombe 's 2011 decision stressed the "substantial documentation supporting accusations of systematic abuses ''. After the ruling, newly discovered files containing relevant records of former colonies from the Hanslope disclosure corroborated Elkins ' findings.
In Guatemalan literature, I, Rigoberta Menchú (1983), brings oral history into the written form through the testimonio genre. I, Rigoberta Menchú is compiled by Venezuelan anthropologist Burgos - Debray, based on a series of interviews she conducted with Menchú. The Menchú - controversy arose when historian David Stoll took issue with Menchú 's claim that "this is a story of all poor Guatemalans ''. In Rigoberta Menchú and the Story of All Poor Guatemalans (1999), Stoll argues that the details in Menchú 's testimonio are inconsistent with his own fieldwork and interviews he conducted with other Mayas. According to Guatemalan novelist and critic Arturo Arias, this controversy highlights a tension in oral history. On one hand, it presents an opportunity to convert the subaltern subject into a "speaking subject ''. On the other hand, it challenges the historical profession in certifying the "factuality of her mediated discourse '' as "subaltern subjects are forced to (translate across epistemological and linguistic frameworks and) use the discourse of the colonizer to express their subjectivity ''.
"Creating Emerging Markets Project ''. Harvard Business School.
|
the one that got away music video cast | The One That Got Away (Katy Perry song) - wikipedia
"The One That Got Away '' is a song by American singer Katy Perry for her third studio album, Teenage Dream (2010). The song was produced by Dr. Luke and Max Martin, both of whom also co-wrote the song with Perry. The song is a mid-tempo pop ballad about a lost love. It features references to the rock band Radiohead as well as the relationship of Johnny Cash and June Carter Cash to express the strength of the relationship. The song was released in October 2011 by Capitol Records as the album 's sixth and final single.
"The One That Got Away '' peaked at number three on Billboard Hot 100, with Teenage Dream becoming the seventh album in the 53 - year history of the Hot 100 to generate at least six top 10s. The single reached the top of the Billboard Hot Dance Club Songs, Adult Top 40, and Mainstream Top 40 charts.
The accompanying music video for the song was directed by Floria Sigismondi and premiered in November 2011, featuring actor Diego Luna. An official remix featuring rapper B.o.B was released to digital retailers on December 20, 2011. An official acoustic rendition was released to digital retailers on January 16, 2012; this version was included on the Teenage Dream: The Complete Confection edition of the album. The song has been covered by several artists, including Richard Marx, Jordan Pruitt, and Selena Gomez & the Scene.
On September 13, 2011, at the New York City 's Irving Plaza, Capitol Records confirmed to Billboard that "The One That Got Away '' would be the sixth single from Teenage Dream. Perry said in a statement from the label:
"I 'm so pleased to select ' The One That Got Away ' as my sixth single because this song shows a very different side of me that I have n't shown with my past singles on this record, I think that everyone can relate to this song. I wrote (it) about when you promise someone forever, but you end up not being able to follow through. It 's a bittersweet story. Hopefully, the listener learns from hearing it and never has to say they had ' The One ' get away. ''
Capitol Records said that they are not specifically releasing the song in hopes of it reaching No. 1 and rewriting Hot 100 history (since Perry was the first woman to obtain five No. 1s on the chart from one album), rather the decision came out of "Perry 's fondness for the song, its ear - catching hook and her obvious track record of success at pop radio ''. EMI Music / Capitol Records EVP / marketing and promotion Greg Thompson told Billboard that, "if it goes to No. 1, that would be great, If not, we still have a Katy song on the radio in fourth quarter '', presumably boosting sales for Teenage Dream in the Christmas season. "The One That Got Away '' officially impacted U.S. radio on October 11, 2011.
In late September 2011, Perry wrote the following message on her Twitter account: "The One That Got Away... It 's happening!!! '', along with a picture of the official single artwork. The artwork shows a pink - haired Perry looking up at the sky while wearing a disc - shaped hat. The photo gives a whimsical nod to the 1970s, with its distinctively retro appearance.
Originally titled "In Another Life '', the song was produced by Dr. Luke and Max Martin, both of whom co-wrote it with Perry. It is a midtempo pop song positioned on the piece of E major and has a tempo of 134 beats per minute. Joanna Holcombe from Yahoo! Music noted that the song is about first loves. Leah Greenblatt from Entertainment Weekly, said that the song is "a midtempo ode to a summer - after - high - school love with whom she recalls sharing Mustang makeout sessions to Radiohead ' ''. Michael Wood from Spin magazine said that the song is one of the album 's quieter cuts and that it recall (s) "Perry 's singer - songwriter days at L.A. 's Hotel Café ''. The song follows the chord progression of E -- G ♯ m -- C ♯ m -- A, and Perry 's vocal range spans from B to E. Kitty Empire noticed that Perry 's vocal is wistful throughout the song and that the references to June and Johnny Cash were unexpected. Rob Sheffield from Rolling Stone stated that when Perry sings, ' I was June, and you were my Johnny Cash, ' "it 's understood that she 's thinking of the scrubbed - up Hollywood version of June and Johnny, from Walk the Line. '' In 2017, the singer revealed that "The One That Got Away '' was about Josh Groban.
Kerri Mason from Billboard described the song as "delectable '', noting that it has more texture than anything on Perry 's previous album, One of the Boys. Mikael Wood from Spin Magazine said that although "Perry delivers the gurl - gone - wild stuff with requisite sass '', she actually "sounds more engaged on ' Not Like the Movies, ' and ' The One That Got Away '. Similarly, Kitty Empire from The Guardian praised the collaboration, stating that Perry and Luke are at their most appealing in the song. On a similar note, Rob Sheffield from Rolling Stone stated that Perry is more at home with the mall romance of "The One That Got Away ''. The same opinion was echoed by Greg Kot from the Chicago Tribune, who felt that Perry sounds more invested in the more "serious '' songs on the album, such as "The One That Got Away ''. However, he added that it 's as if Perry is "determined to balance the summer frothiness with a few shots of ' adult ' earnestness ''. Leah Greenblatt from Entertainment Weekly was not satisfied with the selection of the song as the sixth single, noting that there are better songs on the album that could have been chosen instead. Robert Copsey of the website Digital Spy awarded the song four out of five stars and said:
"' Summer after high school, when we first met / We make out in your Mustang, to Radiohead, ' Katy Perry reminisces on the opening of her latest, potentially record - breaking single. We 've always known that she had a penchant for the alt - rock, but we wonder if KP 's 18 - year - old self ever thought she 'd be substituting the sounds of Thom Yorke and co. for the sugar - coated melodies that have made her one of the best - known artists on the planet today? ' Used to steal our parents ' liquor, and climb through the roof / Talk about our future, like we had a clue, ' she continues over a toe - tapping drum beat and delicate piano riff as she agonises over the loss of her one true love. ' In another life, I would be your girl / We 'd keep all our promises / Be us against the world, ' she mourns on a chorus as instantly satisfying as a mugful of Kenco. Word of advice Katy, we 'd keep this one well away from Russell if you want to avoid history repeating itself. ''
On the week ending October 16, 2011, the song debuted at number 94 on the Billboard Hot 100. In its sixth week, it entered the top 10, making Teenage Dream one of only seven albums in Billboard 's 53 - year history to have six singles enter the top 10. On December 24, 2011, the single entered the top five, making Teenage Dream one of three albums to have six or more top - five singles from one album on the chart.
On December 31, 2011, the single fell one position. On the charts dated January 7, 2012, helped by the release of the remix with B.o.B., the single reached number three on the Hot 100 and topped the Hot Dance Club Songs chart, the seventh song from the album to do so, setting a new record for the chart. On January 14, 2012, the single stayed in the same position as the previous week on the Hot 100. On January 21, 2012, the single fell from number three to number six, and faced the hurdle of breaking the record established by Michael Jackson. On the week of January 28, 2012, the song remained in the same position as the previous week. On February 4, 2012, it climbed back to number five on the chart.
On February 11, 2012, the single dropped to number nine, but remained in the top ten. On February 18, 2012, the single dropped to number 14, leaving the top ten of the chart. The song fell to number 21 the next week, making it Perry 's only Teenage Dream single to fail to reach number one, having peaked at number three. As of January 2015, the song has sold more than 2,750,000 copies in the United States alone. "The One That Got Away '' debuted at number 87 in Australia on the week ending October 10, 2011, before peaking at number 27. In New Zealand, the song debuted at number 40 and later peaked at number 12. It was certified gold by the Recording Industry Association of New Zealand (RIANZ) after selling 7,500 copies there.
Perry started filming for the video on September 30, 2011. Filming ended on October 2, 2011. The video was shot at the Lima Residence, a contemporary home located in Calabasas, an affluent city in Los Angeles County, California. Mexican actor Diego Luna plays Perry 's boyfriend in the video. Photos from the set surfaced online, showing Perry wearing a conservative long - sleeved dress as well as sporting gray hair and prosthetic face wrinkles. The video was directed by Floria Sigismondi, who previously directed the video for "E.T. '' The music video premiered on November 11, 2011.
On November 4, 2011, a teaser for the video was released, narrated by Stevie Nicks. Nicks provides the elderly woman 's voice, speaking about the past and her desire to go back for one day. The video contains scenes of her and her past boyfriend (Luna) fighting, intertwined with scenes of them in love. She is later shown as a nostalgic, elderly woman dressed conservatively and standing by a fence looking into the distance. A seven - minute extended version of the video was shown on November 11, 2011, exclusively at select advance screenings of the motion picture My Week with Marilyn. As of August 2018, the video has more than 640 million views on YouTube.
Released on November 11, 2011, the video begins with an elderly woman (Perry) in a white long - sleeved dress walking through a modernistic home. She blandly walks past her husband (played by Herman Sinitzyn), hinting that the two are in a loveless marriage. While making herself a cup of coffee, the elderly woman, unhappy with her present situation, begins to think about her colorful past when the song begins: her younger self with her artist boyfriend (Luna). As the song plays, the happy girl and the boyfriend paint portraits of each other, dress up wildly, dance at a party, and give each other a makeshift tattoo.
As the elderly woman sadly reminisces while sitting alone in her (and her husband 's) fancy bedroom in a silk nightgown, her younger self and the boyfriend get into an argument, which culminates in her splashing red paint on one of his elaborate paintings after he did the same to one of hers; he leaves angrily and drives away. The woman 's younger self appears in her older self 's bedroom, with each on the bed as they both sing. The younger version is also shown in her older self 's closet, crying and singing, while the boyfriend is seen driving in a Ford Mustang to blow off steam from the fight. At the same time in the present, the older woman is shown driving out of her garage in a similar type of car as the boyfriend is driving in the flashback.
The boyfriend opens the sun visor above him while driving and finds the veil of the dress the younger version of Perry had worn while partying. He stares at the veil, hinting that he decides to make up with her. But he does not notice the large boulders on the road from a small rock slide. He swerves to avoid the rocks and accidentally drives off a cliff, dying in the subsequent crash without getting a chance to make up, while the woman 's younger self also ' dies ' at the same time (possibly representing the death of her colorful personality). The song ends abruptly as the sounds of the car violently rolling down the cliff are heard.
While Johnny Cash 's cover of "You Are My Sunshine '' plays quietly in the background, the woman 's older self now sports a dark conservative long - sleeved dress and is revealed to have driven to that same spot where the boyfriend had died. She walks up to the edge of the cliff and leans against a fence when the boyfriend (either a ghost or a hallucination) appears before her on the other side of the fence. The two hold hands, revealing matching tattoos on their hands. When the older woman snaps back to reality, the Johnny Cash music stops suddenly and the boyfriend vanishes. Saddened, the elderly woman turns back and silently walks away from the cliff as the screen fades to black.
A seven - minute director 's cut version was shown exclusively at select advance screenings of the motion picture My Week with Marilyn on November 11, 2011. The extended version shows a more cinematic side to the plot of the original music video and includes never - before - seen footage, as well as extended dialogue between the characters.
Jillian Mapes of Billboard commented that the video was "beautifully - shot '' and praised the interesting plot. A writer for Rolling Stone wrote: "It 's a cute clip for a sweet song, but the heavy - handed aging makeup is hard to get over. '' Erin Strecker from Entertainment Weekly compared the video with Titanic (1997) and Rihanna 's video for "We Found Love ''. Strecker also noted that the video was more "tragic '' than she was expecting from Perry. Jocelyn Vena of MTV News said: "Katy Perry 's moody, contemplative clip for ' The One That Got Away ' perfectly encapsulates both the joy of falling in love and the heartbreak of letting go. It travels through time and space and recalls the story of Perry 's one that got away. '' Consequence of Sound 's Chris Coplan called the video a "little more somber '' than the videos Perry made for "E.T. '' and "Last Friday Night (T.G.I.F.) ''.
On January 17, 2012, Electronic Arts announced that they would be teaming up with Perry to help promote their new expansion pack for The Sims game franchise called The Sims 3: Showtime, which sees the release of a limited collector 's edition that contains in - game content based on Perry herself. An official music video for "The One That Got Away '' featuring Perry as a Sim was uploaded on EA 's YouTube channel. The storyline shows a female Sim and a male Sim falling in love and getting married. One day, the male Sim collapses on the bathroom floor; he is taken to the hospital and then dies. His wife is then seen mourning at his funeral. Suddenly, she is transported "to another life '', Katy Perry 's Candyfornia featured in her "California Gurls '' music video, where her love interest is still alive and well; they eventually reunite and kiss. It also features most of the in - game content that will be included in the collector 's edition of The Sims 3: Showtime and in the stuff pack The Sims 3: Katy Perry 's Sweet Treats.
"The One That Got Away '' was part of the setlist of Perry 's worldwide 2011 concert tour, California Dreams Tour. On October 16, 2011, Perry performed the song on the UK version of The X Factor. Perry performed the song at the American Music Awards on November 20, 2011. Her AMA 2011 performance was followed by a lengthy standing ovation, and presentation of a special award acknowledging Perry as the only female to have five number - one singles from the same album in the United States. Perry performed the song as part of a Live Lounge special for BBC Radio 1 's Fearne Cotton on March 19, 2012 along with "Part of Me '', "Firework '', "Thinking of You '' and a censored version of "Niggas in Paris ''.
A remix featuring American rapper B.o.B was released in December 2011. B.o.B added two new verses, one at the beginning and another replacing the bridge of the album version of the song. Capitol Records ' decision to release a remix and reduce the price of the song to give Perry a sixth number - one song has been criticized by some, who note that this is not Perry 's first time adding a featured guest to her single releases. The hit single "E.T. '' was modified with verses from Kanye West, while "Last Friday Night (T.G.I.F.) '' was given a remix featuring Missy Elliott. However, Billboard, which compiles the charts, has issued multiple columns defending Perry and Capitol, underlining that they are operating under chart rules and that numerous other acts, such as Rihanna and Britney Spears, used the same tactics for charting purposes over the years. An acoustic version of the song, produced by Jon Brion, was released to the iTunes Store on January 16, 2012, garnering more favorable reviews, with critics noting that "The One That Got Away '' sounds very natural as a ballad. Perry 's label, Capitol Records, sponsored a contest in January 2012 encouraging fans to record their own acoustic version of the song for a chance to have it featured on Perry 's Facebook wall. The song was also included in the setlist of The Prismatic World Tour, in which it is interpolated with Perry 's 2009 song "Thinking of You ''.
A violin cover of the song by music student Grace Youn, which Youn had uploaded to YouTube in 2011, was used in the opening scene of Perry 's 2012 documentary, Katy Perry: Part of Me.
Credits adapted from Teenage Dream album liner notes.
|
what were the reasons for the revolt of 1857 | Causes of the Indian rebellion of 1857 - Wikipedia
The Indian Rebellion of 1857 had diverse political, economic, military, religious and social causes.
The sepoys, a generic term used for native Indian soldiers of the Bengal Army, derived from the Persian word sepāhī (سپاهی) meaning "infantry soldier '', had their own list of grievances against the British East India Company (BEIC) administration, caused mainly by the ethnic gulf between the European officers and their Indian troops. The spark that led to a mutiny in several sepoy companies was the issue of new gunpowder cartridges for the Enfield rifle in February 1857. A rumour was spread that the cartridges were made from cow and pig fat. Loading the Enfield required tearing open the greased cartridge with one 's teeth. This would have insulted both Hindu and Muslim religious practices; cows were considered holy by Hindus while pigs were considered unclean by Muslims. Underlying grievances over British taxation and recent land annexations by the BEIC were ignited by the sepoy mutineers, and within weeks dozens of units of the Indian army joined peasant armies in widespread rebellion. The old aristocracy, both Muslim and Hindu, who were seeing their power steadily eroded by the East India Company, also rebelled against British rule. British policies of conquest were another important source of discontent among Indian rulers: policies such as the Doctrine of Lapse and the Subsidiary Alliance deprived them of their power and status.
Some Indians were upset with what they saw as the draconian rule of the Company, which had embarked on a project of territorial expansion and westernisation that was imposed without any regard for historical subtleties in Indian society. Furthermore, legal changes introduced by the British were accompanied by prohibitions on Indian religious customs and were seen as steps towards forced conversion to Christianity. As early as the Charter Act of 1813, Christian missionaries were encouraged to come to Bombay and Calcutta under BEIC control. The British Governor - General of India from 1848 to 1856 was Lord Dalhousie, who passed the Widow Remarriage Act of 1856, which allowed Hindu widows to remarry, as Christian widows could. He also passed decrees allowing Hindus who had converted to Christianity to inherit property, which had previously been denied by local practice. Author Pramod Nayar points out that by 1851 there were nineteen Protestant religious societies operating in India whose goal was the conversion of Indians to Christianity. Christian organisations from Britain had additionally created 222 "unattached '' mission stations across India in the decade preceding the rebellion.
Religious disquiet as the cause of rebellion underlies the work of historian William Dalrymple, who asserts that the rebels were motivated primarily by resistance to the actions of the British East India Company, especially under James Broun - Ramsay 's reign, which were perceived as attempts to impose Christianity and Christian laws in India. For instance, once the rebellion was underway and Mughal Emperor Bahadur Shah Zafar met the sepoys on May 11, 1857, he was told: "We have joined hands to protect our religion and our faith. '' They later stood in Chandni Chowk, the main square, and asked the people gathered there, "Brothers, are you with those of the faith? '' Those European men and women who had previously converted to Islam, such as Sergeant - Major Gordon and Abdullah Beg, a former Company soldier, were spared. In contrast, foreign Christians such as Revd Midgeley John Jennings, and Indian converts to Christianity such as one of Zafar 's personal physicians, Dr. Chaman Lal, were killed.
Dalrymple further points out that as late as 6 September, when calling the inhabitants of Delhi to rally against the upcoming Company assault, Zafar issued a proclamation stating that this was a religious war being prosecuted on behalf of ' the faith ', and that all Muslim and Hindu residents of the imperial city or of the countryside were encouraged to stay true to their faith and creeds. As further evidence, he observes that the Urdu sources of the pre - and post-rebellion periods usually refer to the British not as angrez (the English), goras (whites) or firangis (foreigners), but as kafir (disbeliever) and nasrani (Christians).
Some historians have suggested that the impact of British economic and social ' reforms ' has been greatly exaggerated, since the Company did not have the resources to enforce them, meaning that away from Calcutta their effect was negligible.
Many Indians felt that the Company was demanding heavy taxes from the locals, including an increase in the taxation on land. This seems to have been a very important reason for the spread of the rebellion, given the speed at which the conflagration ignited in many villages of northern India, where farmers rushed to reclaim their unfairly seized title deeds. The resumption of tax - free land and the confiscation of jagirs (the grant or right to locally control land revenue) caused discontent among the jagirdars and zamindars. Dalhousie had also appointed the Inam Commission with powers to confiscate land. Several years before the sepoys ' mutiny, Lord William Bentinck had attacked several jagirs in western Bengal. He also resumed tax - free lands in some areas. These changes caused widespread resentment among the landed aristocracy and also caused great hardship to a larger section of the middle class. Lands were confiscated from the landlords and auctioned; rich people such as merchants and moneylenders were therefore able to speculate in British land sales and drive out the most vulnerable peasant farmers.
During the late eighteenth century and the early part of the nineteenth century, the armies of the East India Company, in particular those of the Bengal Presidency, were victorious and indomitable -- the term "high noon of the sepoy army '' has been used by a military historian. The Company had an unbroken series of victories in India, against the Marathas, Mysore, north Indian states, and the Gurkhas, later against the Sikhs, and further afield in China and Burma. The Company had developed a military organisation where, in theory, fealty of the sepoys to the Company was considered the height of "izzat '' or honour, where the European officer replaced the village headman as a benevolent figure of authority, and where regiments were mostly recruited from sepoys belonging to the same caste and community.
Unlike the Madras and Bombay Armies of the BEIC, which were far more diverse, the Bengal Army recruited its regular soldiers almost exclusively amongst the landowning Bhumihars and Rajputs of the Ganges Valley. Though its soldiers were paid marginally less than the Bombay and Madras Presidency troops, there was a tradition of trust between the soldiery and the establishment -- the soldiers felt needed and believed that the Company would care for their welfare. The soldiers performed well on the field of battle, in exchange for which they were granted symbolic heraldic rewards such as battle honours, in addition to the extra pay or "batta '' (foreign pay) routinely disbursed for operations undertaken beyond the established borders of Company rule.
Until the 1840s there had been a widespread belief amongst the Bengal sepoys in the iqbal or continued good fortune of the East India Company. However, much of this sense of the invincibility of the British was lost in the First Anglo - Afghan War, where poor political judgement and inept British leadership led to the massacre of Elphinstone 's army (which included three Bengal regiments) while retreating from Kabul. When the mood of the sepoys turned against their masters, they remembered Kabul and that the British were not invincible.
Caste privileges and customs within the Bengal Army were not merely tolerated but encouraged in the early years of the Company 's rule. Partly owing to this, Bengal sepoys were not subject to the penalty of flogging as were the European soldiers. This meant that when they came to be threatened by modernising regimes in Calcutta, from the 1840s onwards, the sepoys had become accustomed to very high ritual status, and were extremely sensitive to suggestions that their caste might be polluted. If the caste of high - caste sepoys was considered to be "polluted '', they would have to expend considerable sums of money on ritual purification before being accepted back into society.
There had been earlier indications that all was not well in the armies of the East India Company. As early as 1806, concerns that the sepoys ' caste may be polluted had led to the Vellore Mutiny, which was brutally suppressed. In 1824, there was another mutiny by a regiment ordered overseas in the First Anglo - Burmese War, who were refused transport to carry individual cooking vessels and told to share communal pots. Eleven of the sepoys were executed and hundreds more sentenced to hard labour. In 1851 - 2 sepoys who were required to serve in the Second Anglo - Burmese War also refused to embark, but were merely sent to serve elsewhere.
The pay of the sepoy was relatively low, and after Awadh and the Punjab were annexed, the soldiers no longer received extra pay (batta or bhatta) if posted there, because this was no longer considered "foreign service ''. Since the batta made the difference between active service being considered munificent or burdensome, the sepoys repeatedly resented and actively opposed inconsiderate unilateral changes in pay and batta ordered by the Military Audit department. Prior to the period of British rule, any refusal to proceed on service until pay issues were resolved was considered a legitimate form of displaying grievance by Indian troops serving under Indian rulers. Such measures were considered a valid negotiating tactic by the sepoys, likely to be repeated every time such issues arose. In contrast to their Indian predecessors, the British at times considered such refusals to be outright "mutinies '' and suppressed them brutally. At other times, however, the Company directly or indirectly conceded the legitimacy of the sepoys ' demands, such as when troops of the Bengal and Madras armies refused to serve in Sindh without batta after its conquest.
The varying stances of the British government, the reduction of allowances, and harsh punishments contributed to a feeling amongst the troops that the Company no longer cared for them. Certain actions of the government, such as increased recruitment of Sikhs and Gurkhas, peoples considered by the Bengal sepoys to be inferior in caste to them, increased the distrust of the sepoys, who thought that this was a sign of their services not being needed any more. The transfer of the regimental number 66, which was taken away from a regular Bengal sepoy regiment of the line disbanded over its refusal to serve without batta and given to a Gurkha battalion, was considered by the sepoys a breach of faith by the Company.
At the beginning of the nineteenth century, British officers were generally closely involved with their troops, speaking Indian languages fluently, participating in local culture through such practices as having regimental flags and weapons blessed by Brahman priests, and frequently having native mistresses. Later, the attitudes of British officers changed: intolerance, lack of involvement, and indifference to the welfare of the troops became more and more manifest. Sympathetic rulers, such as Lord William Bentinck, were replaced by arrogant aristocrats, such as Lord Dalhousie, who despised the troops and the populace. As time passed, the powers of the commanding officers diminished and the government became more unfeeling and distant from the concerns of the sepoys.
Finally, officers of an evangelical persuasion in the Company 's Army (such as Herbert Edwardes and Colonel S.G. Wheler of the 34th Bengal Infantry) had taken to preaching to their Sepoys in the hope of converting them to Christianity.
In 1857, the Bengal Army contained 10 regular regiments of Indian cavalry and 74 of infantry. All of the Bengal Native Cavalry regiments and 45 of the infantry units rebelled at some point. Following the disarming and disbandment of an additional seventeen Bengal Native Infantry regiments, which were suspected of planning mutiny, only twelve survived to serve in the new post-mutiny army. Once the first rebellions took place, it was clear to most British commanders that the grievances which led to them were felt throughout the Bengal army and no Indian unit could wholly be trusted, although many officers continued to vouch for their men 's loyalty, even in the face of captured correspondence indicating their intention to rebel.
The Bengal Army also administered, sometimes loosely, 29 regiments of irregular horse and 42 of irregular infantry. Some of these units belonged to states allied to the British or recently absorbed into British - administered territory, and of these, two large contingents from the states of Awadh and Gwalior readily joined the growing rebellion. Other irregular units were raised in frontier areas from communities such as Assamese or Pashtuns to maintain order locally. Few of these participated in the rebellion, and one contingent in particular (the recently raised Punjab Irregular Force) actively participated on the British side.
The Bengal Army also contained three "European '' regiments of infantry, and many artillery units manned by white personnel. Due to the need for technical specialists, the artillery units generally had a higher proportion of British personnel. Although the armies of many Rajas or states which rebelled contained large numbers of guns, the British superiority in artillery was to be decisive in the siege of Delhi after the arrival of a siege train of thirty - two howitzers and mortars.
There were also a number of regiments from the British Army (referred to in India as "Queen 's troops '') stationed in India, but in 1857 several of these had been withdrawn to take part in the Crimean War or the Anglo - Persian War of 1856. The moment at which the sepoys ' grievances led them openly to defy British authority also happened to be the most favourable opportunity to do so.
The rebellion was, literally, started over a gun. Sepoys throughout India were issued with a new rifle, the Pattern 1853 Enfield rifled musket -- a more powerful and accurate weapon than the old smoothbore Brown Bess they had been using for the previous decades. The rifling inside the musket barrel ensured accuracy at much greater distances than was possible with the old muskets. One thing did not change in this new weapon -- the loading process, which did not improve significantly until the introduction of breech loaders and metallic, one - piece cartridges a few decades later.
To load both the old musket and the new rifle, soldiers had to bite the cartridge open and pour the gunpowder it contained into the rifle 's muzzle, then stuff the paper cartridge (overlaid with a thin mixture of beeswax and mutton tallow for waterproofing) into the musket as wadding, the ball being secured to the top of the cartridge and guided into place for ramming down the muzzle. The rifle 's cartridges contained 68 grains of FF blackpowder, and the ball was typically a 530 - grain Pritchett or a Burton - Minié ball.
Although there was no discernible reason for a change in practice, some sepoys believed that the cartridges that were standard issue with the new rifle were greased with lard (pork fat), which was regarded as unclean by Muslims, and tallow (cow fat), which offended Hindus, who regard the cow as sacred. The sepoys ' British officers dismissed these claims as rumours, and suggested that the sepoys make a batch of fresh cartridges and grease these with beeswax and mutton fat. This reinforced the belief that the original issue cartridges were indeed greased with lard and tallow.
Another suggestion they put forward was to introduce a new drill, in which the cartridge was not bitten with the teeth but torn open with the hand. The sepoys rejected this, pointing out that they might very well forget and bite the cartridge, which was not surprising given the extensive drilling that allowed 19th - century British and Indian troops to fire three to four rounds per minute. British and Indian military drills of the time required soldiers to bite off the end of the beeswax paper cartridge, pour the gunpowder contained within down the barrel, stuff the remaining paper cartridge into the barrel, ram the paper cartridge (which included the ball wrapped and tied in place) down the barrel, remove the ram - rod, return the ram - rod, bring the rifle to the ready, set the sights, add a percussion cap, present the rifle, and fire. The musketry books also recommended that, "Whenever the grease around the bullet appears to be melted away, or otherwise removed from the cartridge, the sides of the bullet should be wetted in the mouth before putting it into the barrel; the saliva will serve the purpose of grease for the time being. '' This meant that biting a musket cartridge was second nature to the sepoys, some of whom had decades of service in the Company 's army and who had done musket drill every day of their service. The first sepoy who rebelled by aiming his loaded weapon at a British officer was Mangal Pandey, who was later executed.
There was a rumour about an old prophecy that the Company 's rule would end after a hundred years. Its rule in India had begun with the Battle of Plassey in 1757.
Before the rebellion, there were reports that "holy men '' were mysteriously circulating chapatis and lotus flowers among the sepoys. Leader of the British Conservative Party and future prime minister Benjamin Disraeli argued these objects were signs to rebel and evidence of a conspiracy, and the press echoed this belief.
After the rebellion, there was a rumour in Britain that Russia was responsible.
|
who holds the record for the nathan's hotdog eating contest | Nathan 's hot dog Eating contest - wikipedia
The Nathan 's Hot Dog Eating Contest is an annual American hot dog competitive eating competition. It is held each year on Independence Day at Nathan 's Famous Corporation 's original, and best - known restaurant at the corner of Surf and Stillwell Avenues in Coney Island, a neighborhood of Brooklyn, New York City.
The contest has gained public attention in recent years due to the stardom of Takeru Kobayashi and Joey Chestnut. The defending champion is Joey Chestnut, who ate 72 hot dogs in the 2017 contest. He beat out Carmen Cincotti and the 2015 champ, Matt Stonie.
Major League Eating (MLE), formerly known as the International Federation of Competitive Eating (IFOCE), has sanctioned the event since 1997. Today, only entrants currently under contract by MLE can compete in the contest.
The field of about 20 contestants typically includes the following:
The competitors stand on a raised platform behind a long table with drinks and Nathan 's Famous hot dogs in buns. Most contestants have water on hand, but other kinds of drinks can and have been used. Condiments are allowed, but usually are not used. The hot dogs are allowed to cool slightly after grilling to prevent possible mouth burns. The contestant that consumes (and keeps down) the most hot dogs and buns (HDB) in ten minutes is declared the winner. The length of the contest has changed over the years: it was previously 12 minutes, and in some years only three and a half minutes; since 2008 it has been 10 minutes.
Spectators watch and cheer the eaters on from close proximity. A designated scorekeeper is paired with each contestant, flipping a number board counting each hot dog consumed. Partially eaten hot dogs count, and the granularity of measurement is eighths of a length. Hot dogs still in the mouth at the end of regulation count if they are subsequently swallowed. Yellow penalty cards can be issued for "messy eating, '' and red penalty cards can be issued for "reversal of fortune '', which results in disqualification. If there is a tie, the contestants go to a 5 - hot - dog eat - off to see who can eat that many the fastest. Further ties result in a sudden - death eat - off in which the fastest to eat one more hot dog wins.
After the winner is declared, a plate showing the number of hot dogs eaten by the winner is brought out for photo opportunities.
The winner of the men 's competition is given possession of the coveted international "bejeweled '' mustard - yellow belt. The belt is of "unknown age and value '' according to IFOCE co-founder George Shea and rests in the country of its owner. In 2011, Sonya Thomas won the inaugural women 's competition and its "bejeweled '' pink belt.
Various other prizes have been awarded over the years. For example, in 2004 Orbitz donated a travel package to the winner. Starting in 2007, cash prizes have been awarded to the top finishers.
The Nathan 's Hot Dog Eating Contest has been held at the original location on Coney Island most years since about 1972, usually in conjunction with Independence Day. According to legend, on July 4, 1916, four immigrants held a hot dog eating contest at Nathan 's Famous stand on Coney Island to settle an argument about who was the most patriotic. The contest has supposedly been held each year since then except 1941 ("as a protest to the war in Europe '') and 1971 (as a protest to political unrest in the U.S.). A man by the name of Jim Mullen is said to have won the first contest, although accounts vary. One account describes Jimmy Durante (who was not an immigrant) as competing in that all - immigrant inaugural contest, which was judged by Eddie Cantor and Sophie Tucker. Another describes the event as beginning "in 1917, and pitted Mae West 's father, Jack, against entertainer Eddie Cantor. '' In 2010, however, promoter Mortimer "Morty '' Matz admitted to having fabricated the legend of the 1916 start date with a man named Max Rosey in the early 1970s as part of a publicity stunt. The legend grew over the years, to the point where The New York Times and other publications were known to have repeatedly listed 1916 as the inaugural year, although no evidence of the contest exists. As Coney Island is often linked with recreational activities of the summer season, several early contests were held on other holidays associated with summer besides Independence Day; Memorial Day contests were scheduled for 1972, 1975, and 1978, and a second 1972 event was held on Labor Day.
In the late 1990s and early 2000s, the competition was dominated by Japanese contestants, particularly Takeru Kobayashi, who won six consecutive contests from 2001 to 2006. In 2001, Kobayashi transformed the competition and the world of competitive eating by downing 50 hot dogs -- smashing the previous record (25.5). The Japanese eater introduced advanced eating and training techniques that shattered previous competitive eating world records. The rise in popularity of the event coincided with the surge in popularity of the worldwide competitive eating circuit.
On July 4, 2011, Sonya Thomas became the champion of the first Nathan 's Hot Dog Eating Contest for Women (previously women had competed against the men, except in one competition that was apparently held in 1975). Eating 40 hot dogs in 10 minutes, Thomas earned the inaugural Pepto - Bismol - sponsored pink belt and won $10,000.
In recent years, a considerable amount of pomp and circumstance have surrounded the days leading up to the event, which has become an annual spectacle of competitive entertainment. The event is presented on an extravagant stage complete with colorful live announcers and an overall party atmosphere. The day before the contest is a public weigh - in with the mayor of New York City. Some competitors don flamboyant costumes and / or makeup, while others may promote themselves with eating - related nicknames. On the morning of the event, they have a heralded arrival to Coney Island on the "bus of champions '' and are called to the stage individually during introductions. In 2013, six - time defending champion Joey Chestnut was escorted to the stage in a sedan chair.
The competition draws many spectators and worldwide press coverage. In 2007, an estimated 50,000 came out to witness the event. In 2004 a three - story - high "Hot Dog Eating Wall of Fame '' was erected at the site of the annual contest. The wall lists past winners, and has a digital clock which counts down the minutes until the next contest. Despite substantial damage suffered at Nathan 's due to Hurricane Sandy in October 2012, the location was repaired, reopened, and the 2013 event was held as scheduled.
ESPN has long enjoyed solid ratings from its broadcast of the Hot Dog Eating Contest on Independence Day, and on July 1, 2014, the network announced it had extended its agreement with Major League Eating and will broadcast the contest through 2024. The event continues to be recognized for its power as a marketing tool.
Controversies usually revolve around supposed breaches of rules that are missed by the judges. For example, NY1 television news editor Phil Ellison reviewed taped footage of the 1999 contest and thought that Steve Keiner started eating at the count of one, but the judge, Mike DeVito -- himself the champion of the 1990, 1993, and 1994 contests -- was stationed directly in front of Keiner and disputed it, saying it was incorrect. Keiner ate 21 1 / 2 dogs, as shown on the Wall of Fame located at Nathan 's flagship store at the corner of Surf and Stillwell avenues in Coney Island. This controversy was created by George Shea, the chief publicist for Nathan 's, because it created much more publicity for the contest. Shea assured Keiner at the end of the contest that he would clear the confusion up but never did. Keiner never participated in any advertising or contests set up by Shea because of this.
Another controversy occurred in 2003 when former NFL player William "The Refrigerator '' Perry competed as a celebrity contestant. Though he had won a qualifier by eating twelve hot dogs, he ate only four at the contest, stopping after just five minutes. Shea stated that the celebrity contestant experiment will likely not be repeated.
At the 2007 contest, the results were delayed to review whether defending champion Takeru Kobayashi had vomited (also known as a "Roman method incident '' or "reversal of fortune '') in the final seconds of regulation. Such an incident results in the disqualification of the competitor under the rules of the IFOCE. The judges ruled in Kobayashi 's favor. A similar incident occurred involving Kobayashi in 2002 in a victory over Eric "Badlands '' Booker.
Takeru Kobayashi has not competed in the contest since 2009 due to his refusal to sign an exclusive contract with Major League Eating, which is the current sanctioning body of the contest. In 2010, he was arrested by police after attempting to jump on the stage after the contest was over and disrupt the proceedings. On August 5, 2010, all charges against Kobayashi were dismissed by a judge in Brooklyn. Despite his six consecutive victories in their annual event, Nathan 's removed Kobayashi 's image from their "Wall of Fame '' in 2011. Kobayashi again refused to compete in 2011, but instead conducted his own hot dog eating exhibition, claiming to have consumed 69 HDB, seven more than Joey Chestnut accomplished in the Nathan 's contest. The sports website Deadspin deemed Kobayashi 's solo appearance "an improbably perfect ' up yours ' to the Nathan 's hot dog eating contest. ''
* -- Note: though Walter Paul 's 1967 feat is documented in at least two UPI press accounts from the time, he has also been mentioned in passing in more recent press accounts for supposedly establishing the contest 's then - record 17 hot dogs consumed; several other people have similarly been credited for records of 13 1⁄2, 17 1⁄2, or 18 1⁄2 hot dogs consumed. The following feats are not known to be documented more fully in press accounts from the time of their occurrence and, as such, may not be credible and are not included in the Results table above:
"Several years '' before 1986: unspecified contestant, 131⁄2 1979: unspecified contestant, 171⁄2 1978: Walter Paul (described as being from Coney Island, Brooklyn), 17 1968: Walter Paul (described as "a rotund Coney Island carnival caretaker ''), 17 1959: Peter Washburn (described as "a one - armed Brooklyn Carnival worker ''), 181⁄2 or 17 1959: Paul Washburn (described as a carnival worker from Brooklyn), 171⁄2 1959: Walter Paul (described as a 260 - pound man from Brooklyn), 17 1957: Paul Washburn, 171⁄2
In 2003, ESPN aired the contest for the first time on a tape - delayed basis. Starting in 2004, ESPN began airing the contest live. Since 2005, Paul Page has been ESPN 's play - by - play announcer for the event, accompanied by color commentator Richard Shea. In 2011, the women 's competition was carried live on ESPN3, followed by the men 's competition on ESPN. In 2012, ESPN signed an extension to carry the event through 2017. In 2014, ESPN signed an agreement to carry the competition on its networks for 10 years until 2024.
The Nathan 's contest has been featured in these documentaries and TV programs:
News sources typically use puns in headlines and copy referring to the contest, such as "' Tsunami ' is eating contest 's top dog again, '' "could n't cut the mustard '' (A.P.), "Nathan 's King ready, with relish '' (Daily News) and "To be frank, Fridge faces a real hot - dog consumer '' (ESPN).
Reporter Gersh Kuntzman of the New York Post has been covering the event since the early 1990s and has been a judge at the competition since 2000. Darren Rovell, of ESPN, has competed in a qualifier.
Each contestant has his or her own eating method. Takeru Kobayashi pioneered the "Solomon Method '' at his first competition in 2001. The Solomon method consists of breaking each hot dog in half, eating the two halves at once, and then eating the bun.
"Dunking '' is the most prominent method used today. Because buns absorb water, many contestants dunk the buns in water and squeeze them to make them easier to swallow, and slide down the throat more efficiently.
Other methods used include the "Carlene Pop, '' where the competitor jumps up and down while eating, to force the food down to the stomach. "Buns & Roses '' is a similar trick, but the eater sways from side to side instead. "Juliet - ing '' is a cheating method in which players simply throw the hot dog buns over their shoulders.
Contestants train and prepare for the event in different ways. Some fast, others prefer liquid - only diets before the event. Takeru Kobayashi meditates, drinks water and eats cabbage, then fasts before the event. Several contestants, such as Ed "Cookie '' Jarvis, aim to be "hungry, but not too hungry '' and have a light breakfast the morning of the event.
|
what is the meaning of gram flour in bengali | Gram flour - wikipedia
Gram flour or besan (Hindi: बेसन; Burmese: ပဲမှုန့်; Urdu: بيسن ), is a pulse flour made from a variety of ground chickpea known as Bengal gram. It is a staple ingredient in the cuisine of the Indian subcontinent, including in Indian, Bangladeshi, Burmese, Nepali, Pakistani and Sri Lankan cuisines. Gram flour can be made from either raw or roasted gram beans. The roasted variety is more flavorful, while the raw variety has a slightly bitter taste.
In the form of a paste with water or yogurt, it is also popular as a facial exfoliant in the Indian Subcontinent. When mixed with an equal proportion of water, it can be used as an egg replacement in vegan cooking.
Gram flour contains a high proportion of carbohydrates, no gluten, and a higher proportion of protein than other flours.
Gram flour is most popular in the cuisine of the Indian subcontinent, where it is used to make the following:
In Andhra Pradesh, it is used in a curry with gram flour cakes called Senaga Pindi Kura (Telugu: శెనగ పిండి కూర) and is eaten with Chapati or Puri, mostly during winter for breakfast. Chila (or chilla), a pancake made with gram flour batter, is a popular street food in India.
Along the coast of the Ligurian Sea, flour made from garbanzo beans, which are a different variety of chickpea closely related to Bengal gram, is used to make a thin pancake that is baked in the oven. This popular street food is called farinata in Italian cuisine, fainâ in Genoa, and is known as socca or cade in French cuisine. It is used to make panelle, a fritter in Sicilian cuisine. In Spanish cuisine, gram flour is an ingredient for tortillitas de camarones. Also in Cyprus and Greece, it is used as a garnishing ingredient for the funeral ritual food Koliva, blessed and eaten during Orthodox Memorial services.
In Morocco, a dish called karan is made from gram flour and eggs and baked in the oven. A similar well - known dish, called Garantita or Karantita, is prepared in Algeria; the name is believed to have originated from the Spanish term Calentica, meaning hot.
|
who played miss westlake on the cosby show | Sonia Braga - wikipedia
Sônia Maria Campos Braga (Portuguese pronunciation: (' sõ. nja ' bɾa.ga) born June 8, 1950) is a Brazilian - American actress. She is known in the English - speaking world for her Golden Globe Award nominated performances in Kiss of the Spider Woman (1985) and Moon over Parador (1988). She also received a BAFTA Award nomination in 1981 for Dona Flor and Her Two Husbands (first released in 1976). For the 1994 television film The Burning Season, she was nominated for an Emmy Award and a third Golden Globe Award. Her other television credits include The Cosby Show (1986), Sex and the City (2001), American Family (2002), and Alias (2005).
Braga was born to Hélio Fernando Ferraz Braga and Maria Braga Jaci Campos, a costume designer from Maringá. She has two siblings, Ana Júlia and Hélio. Her sister Ana 's daughter Alice Braga is also an actress. Braga 's family moved to Curitiba and then to Campinas. Braga was 8 years old when her father died, and she moved to a convent school in São Paulo. In her teens, she took a job as a receptionist at the noted Brazilian reception center Buffet Torres.
The 14 - year - old Braga was invited by director Vicente Sesso to appear in "teleteatros '' on the Jardim Encantado program. After that, she joined a theater group performing in the ABC Paulista region. At 17, she debuted in the play George Dandin in Santo André. In 1968, she joined the cast of the first Brazilian production of Hair, directed by Ademar Guerra.
In 1968, Braga was in the film O Bandido da Luz Vermelha, and in the early ' 70s she appeared in supporting roles in the films A Moreninha and Cléo e Daniel. The following year, she was invited to perform in A Menina do Veleiro Azul, a soap opera produced by TV Excelsior, but the network closed before the soap opera aired. Despite her success on stage and in soap operas, it was in the television series Vila Sésamo, broadcast in 1972, that Braga became a household name. Afterwards, Braga was invited to join the cast of Irmãos Coragem (1970), a soap opera written by Janete Clair, which aired on Rede Globo.
In 1975, Braga starred in the telenovela Gabriela, an adaptation of Jorge Amado 's novel Gabriela, Clove and Cinnamon. Directed by Walter Avancini, the soap opera was a great national and international success, establishing Sonia Braga as a sex symbol. Braga returned to embody Jorge Amado 's characters on film. In 1976, she made the film Dona Flor and Her Two Husbands, directed by Bruno Barreto, alongside José Wilker and Mauro Mendonça. The film was a box office hit in Brazilian cinemas and also made a major impact internationally. In 1983, she starred in Gabriela, alongside Marcello Mastroianni.
In 1976, Braga participated in the cast of Saramandaia. The following year she starred in Espelho Mágico as Cynthia Levy. One of the highlights of the soundtrack of the soap opera is the cover version that Gal Costa recorded of Tigresa, music that Caetano Veloso composed in honor of Braga. In the late 1970s, Braga gave life to another renowned character in Brazilian television, Julia Matos in Dancin ' Days (1978). In the storyline, Braga played an ex-convict who gets out of prison ready to win back the love of her daughter, played by Gloria Pires. In 1979, Sonia Braga ventured into children 's theater in the play No País dos Prequetés. The following year she returned to television in the telenovela Chega Mais alongside Tony Ramos.
In the early 1980s, Braga, who had already made films like Lady on the Bus (1978), decided to devote herself exclusively to the movies. In 1981, she starred in Eu Te Amo directed by Arnaldo Jabor, and won the best actress award at the Gramado Film Festival. She starred in the movie Kiss of the Spider Woman (1985) alongside William Hurt and Raul Julia. Her role led to a Golden Globe nomination for best supporting actress and its success led to her international work. She decided to leave Brazil for a career in the United States, where she lived for 14 years. In 2003, she obtained American citizenship.
Braga was the first Brazilian to present a category at the Oscars. She was announced by Goldie Hawn as one of the most glamorous actresses in the world, before appearing with Michael Douglas, who announced the result of the best short film. Braga was nominated for many prestigious awards in the United States. For her performance in The Burning Season (1994) she was nominated for the third time for the Golden Globe for best supporting actress. In 1995, she was nominated for an Emmy Award for Best Supporting Actress in a Miniseries or a Movie for The Burning Season, but lost to Shirley Knight. The film details the life of Brazilian activist Chico Mendes. In 1996, she won the Lone Star Film & Television Award for best supporting actress for her work in Streets of Laredo, directed by Joseph Sargent. That same year, director Nicolas Roeg invited her to play the lead role in the film Two Deaths alongside Patrick Malahide. Braga also had the lead in Tieta of Agreste (1996), directed by Carlos Diegues.
In 1999, after nearly 20 years away from Brazilian television, the actress made a cameo in the first 15 chapters of the soap opera Força de um Desejo (1999), by Gilberto Braga and Alcides Nogueira, in the role of Helena Silveira, mother of the characters played by Fábio Assunção and Selton Mello. In 2001, she joined the cast of Memórias Póstumas, directed by André Klotzel and based on The Posthumous Memoirs of Bras Cubas by Machado de Assis. For her performance in this film, she won the Kikito award for best supporting actress at the Gramado Film Festival.
In 2001 Braga appeared in Angel Eyes a romantic drama film directed by Luis Mandoki and starring Jennifer Lopez. In 2002, she appeared in American Family, a PBS series created by Gregory Nava that follows the lives of a Latino family in Los Angeles.
In 2006, she returned to work in Globo 's telenovela Páginas da Vida, playing sculptress Tônia. In 2010, she starred in the episode A Adultera da Urca, in the miniseries As Cariocas and in 2011, made a cameo in the Tapas & Beijos series.
Braga was cast in a recurring role as Lorraine Correia in the sixth season of the series Royal Pains. Braga 's scenes were filmed on location in Mexico, and her episodes aired in August 2014.
Most recently, she appeared in Netflix 's Marvel show Luke Cage as the mother of Rosario Dawson 's character.
Sonia Braga earned rave reviews for her film Aquarius when it premiered at the 2016 Cannes Film Festival. Braga plays a widow and retired music writer who lives in the titular apartment complex and refuses to leave when developers offer her a buy - out. Though the film did not earn an Oscar nomination for Braga, it did contend for Best Foreign Film at France 's Cesar Awards and the Independent Spirit Awards. Braga ranked in the top five in IndieWire 's 2016 critics ' poll for Best Actress.
During the 1980s, Braga had relationships with Van Halen frontman David Lee Roth, with actor Robert Redford and with director Clint Eastwood. She has no children.
|
who plays becca on girlfriends guide to divorce | Girlfriends ' Guide to Divorce - wikipedia
Girlfriends ' Guide to Divorce (also known as Girlfriends ' Guide to Freedom starting with season 3) is an American comedy - drama television series developed by Marti Noxon for the American cable network Bravo. Based on the Girlfriends ' Guides book series by Vicki Iovine, the series revolves around Abby McCarthy, a self - help author who finds solace in new friends and adventures as she faces an impending divorce. Lisa Edelstein portrays the main character Abby. Beau Garrett, Necar Zadegan and Paul Adelstein co-star. Janeane Garofalo was part of the main cast for the first seven episodes of season 1 before departing the cast. She was replaced in episode 8 with Alanna Ubach. Retta recurred during the show 's second season before being promoted to the main cast at the start of season 3.
Produced by Universal Cable Productions, it is the first original scripted series for Bravo. The network ordered a 13 - episode first season, which premiered on December 2, 2014. The show debuted to 1.04 million viewers. Critical reception for the series was initially generally positive, with particular praise for Edelstein 's performance and the series ' quality compared with Bravo 's reality series. The show was eventually renewed for a second season, which premiered on December 1, 2015. On April 13, 2016, it was announced that Bravo had renewed the show for a third, fourth and fifth season. On August 5, 2016, it was announced that the fifth season would be the show 's last.
Girlfriends ' Guide to Divorce was met with generally positive reviews from television critics. At Metacritic, which assigns a weighted mean rating out of 100 to reviews from critics, the series received an average score of 69, based on 21 reviews. Lori Rackl of Chicago Sun - Times gave the episode a 4 star rating (out of 4 stars), calling it "a sometimes hilarious, sometimes heartbreaking story about an L.A. - based self - help author '' and added that the first two episodes "reveal a much more nuanced, poignant tale, punctuated by some genuinely funny scenes. '' LaToya Ferguson of The A.V. Club gave the series a grade of "A - '', calling it "a very solid drama '' that should be on HBO or Showtime. Ferguson also praised the characters and the series 's messiness, writing "Visually, it 's almost flawless (there 's one obvious green - screen moment in the pilot, but it 's not Ringer level), but every character here is deeply flawed. '' Los Angeles Times 's Mary McNamara lauded the series ' cast 's portrayal of the characters and deemed the series "smartly acted, crisply written and willing to address all manner of issues -- marriage, betrayal, family economics, friendship, even the pitfalls of public domesticity -- in gratifyingly complex ways. '' Brian Lowry, writing for Variety, applauded the series ' cast and material, noting how it sticks to the network 's demographic while maintaining a level of quality.
Gail Pennington of St. Louis Post-Dispatch called the series "a smart, solid examination of just how messy relationships are and how hard it is to make them work. '' Slate 's Willa Paskin highlighted Edelstein 's portrayal of the lead character, describing her as "very well cast, both commanding and nurturing enough to seem like the ideal advice - giver '' and noted that the series has "a satisfying and complex take on social dynamics in friendship and romance. '' Alessandra Stanley of The New York Times praised Edelstein and Garofalo as "one reason '' the show is entertaining and found the comic side of the series "a lot more fun. '' Time 's writer James Poniewozik praised the writing and Edelstein 's "sympathetic '' performance, noting that the latter "grounds a show that often otherwise plays like young - adult fiction for actual adults. '' However, Poniewozik opined that "there 's one lesson Girlfriends ' Guide to Divorce has (over) learned from its Bravo peers: that there 's no reality so compelling that it ca n't be sweetened with a little Photoshop. '' David Hinckley, writing for the New York Daily News, highlighted the series ' best moments as those showing the messy side of marital discourse while heralding Edelstein 's performance as "memorably moving. '' Margaret Lyons of Vulture was critical of the several aspects of the series, including the characters Abby and Lyla 's attitude on giving their spouses child support, but found the series to be its best "at its nastiest. ''
Girlfriends ' Guide to Divorce premiered in the United States on Bravo on December 2, 2014, in Canada on Slice on January 9, 2015, and in the United Kingdom on Lifetime UK on September 15, 2015.
The first season was released on DVD in region 1 on October 13, 2015. On November 1, 2015 season 1 of Girlfriends ' Guide to Divorce became available to stream in the US for Netflix subscribers. The show is also available from electronic sell - through platforms such as iTunes, Amazon Instant Video, and Vudu.
|
where does the saying joe bloggs come from | Joe Bloggs - wikipedia
"Joe Bloggs '' and "Fred Bloggs '' are placeholder names commonly used in the United Kingdom, Ireland, Australia and New Zealand, for teaching, programming, and other thinking and writing.
In The Princeton Review standardized test preparation courses, "Joe Bloggs '' represents the average test - taker, and students are trained to identify the "Joe Bloggs answer '', or the choice which seems right but may be misleading on harder questions.
"Joe Bloggs '' was a brand name for a clothing range, especially baggy jeans, which was closely associated with the Madchesterscene of the 1990s.
The name Bloggs is believed to have originated in the East Anglian region of Britain, Norfolk or Suffolk, deriving from bloc, a bloke.
In the United Kingdom and United States, John has historically been one of the most common male first names, and Smith is the most common surname in each, so "John Smith '' is a recurrent pseudonym and placeholder name in those countries (especially in legal contexts).
In the United States, John Doe, John Q. Public, Joe Blow, Joe Sixpack and Joe Schmoe are also used. In Germany, Max Mustermann (male), Erika Mustermann (female), literally "Max / Erika Example - Person, '' and Otto Normalverbraucher ("Otto Normal Consumer '') are used. Many other international variations exist.
Other placeholders (e.g. in advertisements for store cards / credit cards) sometimes used are Mr / Mrs A Smith or A.N. Other.
|
what are you called if you're from washington | List of demonyms for U.S. States and territories - wikipedia
This is a list of official and notable unofficial terms used to designate the citizens of specific states and territories of the United States.
† - Not officially a U.S. state, rather a U.S. territory or district.
|
damage to this nerve can cause a drooping eyelid | Ptosis (eyelid) - wikipedia
Ptosis is a drooping or falling of the upper eyelid. The drooping may be worse after being awake longer when the individual 's muscles are tired. This condition is sometimes called "lazy eye '', but that term normally refers to the condition amblyopia. If severe enough and left untreated, the drooping eyelid can cause other conditions, such as amblyopia or astigmatism. This is why it is especially important for this disorder to be treated in children at a young age, before it can interfere with vision development.
The term is from Greek πτῶσις "a fall, falling ''.
Ptosis occurs due to dysfunction of the muscles that raise the eyelid or their nerve supply (oculomotor nerve for levator palpebrae superioris and sympathetic nerves for superior tarsal muscle). It can affect one eye or both eyes and is more common in the elderly, as muscles in the eyelids may begin to deteriorate. One can, however, be born with ptosis. Congenital ptosis is hereditary in three main forms. Causes of congenital ptosis remain unknown. Ptosis may be caused by damage / trauma to the muscle which raises the eyelid, damage to the superior cervical sympathetic ganglion or damage to the nerve (3rd cranial nerve (oculomotor nerve)) which controls this muscle. Such damage could be a sign or symptom of an underlying disease such as diabetes mellitus, a brain tumor, a pancoast tumor (apex of lung) and diseases which may cause weakness in muscles or nerve damage, such as myasthenia gravis or Oculopharyngeal muscular dystrophy. Exposure to the toxins in some snake venoms, such as that of the black mamba, may also cause this effect.
Ptosis can be caused by the aponeurosis of the levator muscle, nerve abnormalities, trauma, inflammation or lesions of the lid or orbit. Dysfunctions of the levators may occur as a result of autoimmune antibodies attacking and eliminating the neurotransmitter.
Ptosis may be due to a myogenic, neurogenic, aponeurotic, mechanical or traumatic cause. It usually occurs in isolation, but may be associated with various other conditions, such as immunological, degenerative, or hereditary disorders, tumors, or infections.
Acquired ptosis is most commonly caused by aponeurotic ptosis. This can occur as a result of senescence, dehiscence or disinsertion of the levator aponeurosis. Moreover, chronic inflammation or intraocular surgery can lead to the same effect. Also, wearing contact lenses for long periods of time is thought to have a certain impact on the development of this condition.
Congenital neurogenic ptosis is believed to be caused by Horner syndrome. In this case, a mild ptosis, due to paresis of the Mueller muscle, may be associated with ipsilateral miosis, iris and areola hypopigmentation, and anhidrosis. Acquired Horner syndrome may result after trauma, neoplastic insult, or even vascular disease.
Ptosis due to trauma can ensue after an eyelid laceration with transection of the upper eyelid elevators or disruption of the neural input.
Other causes of ptosis include eyelid neoplasms, neurofibromas or the cicatrization after inflammation or surgery. Mild ptosis may occur with aging. A drooping eyelid can be one of the first signals of a third nerve palsy due to a cerebral aneurysm, that otherwise is asymptomatic and referred to as an oculomotor nerve palsy.
Use of high doses of opioid drugs such as morphine, oxycodone, heroin, or hydrocodone can cause ptosis. Pregabalin (Lyrica), an anticonvulsant drug, has also been known to cause mild ptosis.
Depending upon the cause it can be classified into:
Myasthenia gravis is a common cause of neurogenic ptosis, which could also be classified as neuromuscular ptosis because the site of pathology is at the neuromuscular junction. Studies have shown that up to 70 % of myasthenia gravis patients present with ptosis, and 90 % of these patients will eventually develop it. In this case, ptosis can be unilateral or bilateral, and its severity tends to fluctuate during the day because of factors such as fatigue or drug effect. This particular type of ptosis is distinguished from the others with the help of a Tensilon challenge test and blood tests. Also specific to myasthenia gravis is the fact that coldness inhibits the activity of cholinesterase, which makes it possible to differentiate this type of ptosis by applying ice to the eyelids. Patients with myasthenic ptosis are very likely to still experience variation in the drooping of the eyelid at different hours of the day.
The ptosis caused by the oculomotor palsy can be unilateral or bilateral, as the subnucleus to the levator muscle is a shared, midline structure in the brainstem. In cases in which the palsy is caused by compression of the nerve by a tumor or aneurysm, it is highly likely to result in an abnormal ipsilateral pupillary response and a larger pupil. Surgical third nerve palsy is characterized by a sudden onset of unilateral ptosis and an enlarged pupil that is sluggish in response to light. In this case, imaging tests such as CTs or MRIs should be considered. Medical third nerve palsy, in contrast to surgical third nerve palsy, usually does not affect the pupil and tends to improve slowly over several weeks. Surgery to correct ptosis due to medical third nerve palsy is normally considered only if the improvement of ptosis and ocular motility is unsatisfactory after half a year. Patients with third nerve palsy tend to have diminished or absent function of the levator.
When caused by Horner 's syndrome, ptosis is usually accompanied by miosis and anhidrosis. In this case, the ptosis is due to the result of interruption innervations to the sympathetic, autonomic Muller 's muscle rather than the somatic levator palpebrae superioris muscle. The lid position and pupil size are typically affected by this condition and the ptosis is generally mild, no more than 2 mm. The pupil might be smaller on the affected side. While 4 % cocaine instilled to the eyes can confirm the diagnosis of Horner 's syndrome, Hydroxyamphetamine eye drops can differentiate the location of the lesion.
Chronic progressive external ophthalmoplegia is a systemic condition that usually affects only the lid position and the external eye movement, without involving the movement of the pupil. This condition accounts for nearly 45 % of myogenic ptosis cases. Most patients develop ptosis due to this disease in adulthood. Characteristic of ptosis caused by this condition is that the protective upward rolling of the eyeball when the eyelids are closed is very poor.
Aponeurotic and congenital ptosis may require surgical correction if severe enough to interfere with vision or if appearance is a concern. Treatment depends on the type of ptosis and is usually performed by an ophthalmic plastic and reconstructive surgeon specializing in diseases and problems of the eyelid.
Surgical procedures include:
Non-surgical modalities like the use of "crutch '' glasses or Ptosis crutches or special scleral contact lenses to support the eyelid may also be used.
Ptosis that is caused by a disease may improve if the disease is treated successfully, although some related diseases, such as oculopharyngeal muscular dystrophy currently have no treatments or cures.
Ptosis is derived from the Greek word πτῶσις ("fall ''), and is defined as the "abnormal lowering or prolapse of an organ or body part ''.
|
flight of the conchords most beautiful girl in the room live | Sally (Flight of the Conchords) - wikipedia
"Sally '' is the pilot episode of the American television sitcom Flight of the Conchords. It first aired on HBO on June 17, 2007. In this episode, New Zealanders Jemaine Clement and Bret McKenzie of the band Flight of the Conchords have moved to New York City to try to make it in the United States. At a party, Jemaine falls for, and subsequently begins dating, Sally -- Bret 's former girlfriend. As Jemaine 's attentions focus on Sally, a lonely Bret is forced to deal with the advances of Mel (Kristen Schaal), the band 's obsessed -- and only -- fan. Meanwhile, Murray (Rhys Darby), the band 's manager, helps the band film their first music video, although they can not afford decent costumes or proper video equipment.
"Sally '' received largely positive reviews from critics. According to Nielsen Media Research, "Sally '' drew over 1.2 million viewers. Several of the songs from the episode, most notably "Robots '', "Not Crying '', and "Most Beautiful Girl (In the Room) '' received positive critical acclaim. All three songs were released on the band 's EP The Distant Future, although "Robots '' appeared in a live form. "Robots '' later was re-recorded and released on the band 's debut album Flight of the Conchords, along with "Most Beautiful Girl (In the Room). '' The latter was later nominated for an Emmy award for Outstanding Original Music And Lyrics.
Jemaine (Jemaine Clement) and Bret (Bret McKenzie) attend a party thrown by their friend Dave (Arj Barker). In the crowd Jemaine spots a beautiful woman, Sally (Rachel Blanchard), inspiring him to sing "Most Beautiful Girl (In the Room) ''. Jemaine and Sally leave the party and eventually go back to the band 's apartment, but just as they begin kissing, Bret disturbs them by turning on the light, and an embarrassed Sally leaves. The next morning, Jemaine blames her departure on "the whole situation with the light ''. However, Bret suggests it was because he used to date her himself.
Bret and Jemaine go to a band meeting with their manager Murray (Rhys Darby) in his office in the New Zealand Consulate. Murray criticizes Jemaine for dating his bandmate 's ex, and discusses the need to increase the group 's fan base, which currently consists of only one person: the obsessive Mel (Kristen Schaal). Bret suggests they film a music video. However, unable to afford real video equipment and robot costumes like Daft Punk, they are forced to rely on a camera phone and disappointing cardboard costumes made by Murray. Regardless, they manage to film a video for "Robots ''.
Over the following week, Jemaine spends more time with Sally, leaving Bret feeling lonely and neglected. When Bret suggests hanging out sometime, Jemaine invites him along on a dinner date with Sally, but they all feel a "bit weird '' and Bret leaves early. On the way home, Mel tries to cheer him up but fails miserably. Immediately after dinner, Sally breaks up with Jemaine, leading him to sing "Not Crying '' with Bret.
"Sally '' was written by series co-creators James Bobin, Jemaine Clement, and Bret McKenzie, the latter two starring as the titular Flight of the Conchords. Bobin directed the episode. The episode is the first of the series to feature Rachel Blanchard as Sally. The character returns to disrupt Bret and Jemaine 's lives in the fifth episode, "Sally Returns ''. In "Sally '', the character Mel shows Bret that she carries around a picture of Jemaine 's lips in her wallet. This was inspired by an incident that happened to the band during the filming of their documentary A Texan Odyssey which covered their trip to the 2006 South by Southwest (SXSW) festival in Austin, Texas. The incident was caught on camera and is included in the documentary. Judah Friedlander has a cameo appearance in this episode, playing the role of the man who tries to sell Dave a cake.
The episode contains several cultural references. Murray is wearing a New Zealand All Blacks rugby shirt when the band is in Dave 's pawn shop obtaining a camera. In the same scene, Murray and Bret have a conversation about the band Fleetwood Mac and their album "Rumours ''. Rhys Darby, who played Murray, later asked Mick Fleetwood, the drummer for the band, if he heard the joke and whether or not he enjoyed it. Fleetwood admitted that he "appreciate (d) '' the joke. During the filming of the video for "Robots '', Jemaine tells Murray that he wanted robot costumes "like Daft Punk '' rather than the amateur versions hand - crafted by Murray. Murray replies with a characteristic lack of musical knowledge: "I do n't know who he is. ''
The first song featured in the episode is "The Most Beautiful Girl (In the Room). '' The song, also known as "Part - Time Model '', was based on the conceit of a man "who 's not very good at compliments. '' The song begins after Jemaine sees Sally from across the room at Dave 's party. Jemaine details his seduction of Sally, describing her as being so beautiful she could be a "part - time model ''. This song was voted number 60 in the 2008 Triple J Hottest 100. Later, the song was nominated for an Emmy award for Outstanding Original Music And Lyrics.
The second song featured in the episode is "Robots ''. The song, also known as "Humans Are Dead '', is sung by both Bret and Jemaine. It is set in a post-apocalyptic "distant future '', humorously stated to be the year 2000, where all humans are dead and robots have taken over the world. Within the context of the plot of the show, it is the band 's first music video. Since the band has very limited funds, Murray constructs the robot costumes himself and films the video using a cell phone.
The third and final song featured is "Not Crying ''. The song begins as Sally breaks up with Jemaine. Jemaine denies that he is crying by offering excuses such as "it 's just been raining on my face ''. All three of the songs were released on The Distant Future EP in 2007, however, "The Most Beautiful Girl (In the Room) '' and "Robots '' appeared in live form. The two were subsequently re-recorded in studio form for the band 's debut album, Flight of the Conchords in 2008.
"Sally '' debuted on the internet, a month before the show premiered on HBO. The network, in conjunction with MySpace, iTunes, Yahoo! TV, Movielink, Comcast.net and Roadrunner.com, allowed a promotional version of the episode to be streamed as part of an online marketing campaign to build up word - of - mouth for the series. On television, "Sally '' debuted on the HBO in the United States at 10: 30 PM on Sunday, June 17, 2007 in the time slot preceded by Entourage, and vacated by the last episode of the final season of The Sopranos. The episode received over 1.2 million viewers.
The episode received largely positive reviews from critics. IGN, in an advance review of the episode, awarded "Sally '' an "amazing '' 9.2 out of 10 rating and called the series "The funniest show you have n't seen yet. '' The review noted that, "Flight of the Conchords deserves the buzz it is slowly building. This is a very funny show. '' Blogcritics reviewer Daniel J. Stasiewski noted that the series was different for HBO, writing, "Flight of the Conchords is n't Entourage or Sex in the City or even Extras. It 's different. And sometimes different is just good. '' Stasiewski, however, did note that the availability of the band 's music on video sites like YouTube meant that watching the series was not worth the cost of a cable subscription. Furthermore, Stasiewski noted that while "the fun, quirky music videos that pop - up can make this long half - hour worth watching (...) the 10 or so minutes in between numbers are n't groundbreaking comedy. '' Chris Schonberger from Entertainment Weekly gave the episode a largely positive review. He called the new series "the funniest hour of comedy on television '' and noted that the performance of Rhys Darby as Murray Hewitt was excellent, calling his character "scene - stealing ''. Finally, Schonberger positively compared the episode to the 2004 comedy film Napoleon Dynamite, writing, "Indeed, the whole pilot vaguely reminded me of Napoleon in the way that the characters just sort of lurk around and pour their limited energy into absurd activities ''.
|
is nebraska in the middle of the usa | Nebraska - wikipedia
Welcome to NEBRASKAland where the West begins
Nebraska / nɪˈbræskə / is a state that lies in both the Great Plains and the Midwestern United States. The state is bordered by South Dakota to the north, Iowa to the east and Missouri to the southeast, both across the Missouri River, Kansas to the south, Colorado to the southwest and Wyoming to the west. It is the only triply landlocked U.S. state. Nebraska 's area is just over 77,220 sq mi (200,000 km²) with almost 1.9 million people. Its state capital is Lincoln, and its largest city is Omaha, which is on the Missouri River.
Indigenous peoples including Omaha, Missouria, Ponca, Pawnee, Otoe, and various branches of the Lakota (Sioux) tribes lived in the region for thousands of years before European exploration. The state is crossed by many historic trails and was explored by the Lewis and Clark Expedition.
Nebraska was admitted as the 37th state of the United States in 1867. It is the only state in the United States whose legislature is unicameral and officially nonpartisan.
Nebraska is composed of two major land regions: the Dissected Till Plains and the Great Plains. The Dissected Till Plains is a region of gently rolling hills and contains the state 's largest cities, Omaha and Lincoln. The Great Plains occupy most of western Nebraska, characterized by treeless prairie, suitable for cattle - grazing. The state has a large agriculture sector and is a major producer of beef, pork, corn and soybeans. There are two major climatic zones: the eastern half of the state has a humid continental climate (Köppen climate classification Dfa), with a warmer subtype considered "warm - temperate '' near the southern plains, as in Kansas and Oklahoma, which have a predominantly humid subtropical climate. The western half has a primarily semi-arid climate (Köppen BSk). The state has wide variations between winter and summer temperatures, decreasing south through the state. Violent thunderstorms and tornadoes occur primarily during spring and summer, but sometimes in autumn. Chinook winds tend to warm the state significantly in the winter and early spring.
Nebraska 's name is derived from transliteration of the archaic Otoe words Ñí Brásge, pronounced (ɲĩbɾasꜜkɛ) (contemporary Otoe Ñí Bráhge), or the Omaha Ní Btháska, pronounced (nĩbɫðasꜜka), meaning "flat water '', after the Platte River that flows through the state.
Indigenous peoples lived in the region of present - day Nebraska for thousands of years before European exploration. The historic tribes in the state included the Omaha, Missouria, Ponca, Pawnee, Otoe, and various branches of the Lakota (Sioux), some of which migrated from eastern areas into this region. When European exploration, trade, and settlement began, both Spain and France sought to control the region. In the 1690s, Spain established trade connections with the Apaches, whose territory then included western Nebraska. By 1703, France had developed a regular trade with the native peoples along the Missouri River in Nebraska, and by 1719 had signed treaties with several of these peoples. After war broke out between the two countries, Spain dispatched an armed expedition to Nebraska under Lieutenant General Pedro de Villasur in 1720. The party was attacked and destroyed near present - day Columbus by a large force of Pawnees and Otoes, both allied to the French. The massacre ended Spanish exploration of the area for the remainder of the 18th century.
In 1762, during the Seven Years ' War, France ceded the Louisiana territory to Spain. This left Britain and Spain competing for dominance along the Mississippi; by 1773, the British were trading with the native peoples of Nebraska. In response, Spain dispatched two trading expeditions up the Missouri in 1794 and 1795; the second, under James Mackay, established the first European settlement in Nebraska near the mouth of the Platte. Later that year, Mackay 's party built a trading post, dubbed Fort Carlos IV (Fort Charles), near present - day Homer.
In 1819, the United States established Fort Atkinson as the first U.S. Army post west of the Missouri River, just east of present - day Fort Calhoun. The army abandoned the fort in 1827 as migration moved further west. European - American settlement was scarce until 1848 and the California Gold Rush. On May 30, 1854, the US Congress created the Kansas and the Nebraska territories, divided by the Parallel 40 ° North, under the Kansas -- Nebraska Act. The Nebraska Territory included parts of the current states of Colorado, North Dakota, South Dakota, Wyoming, and Montana. The territorial capital of Nebraska was Omaha.
In the 1860s, after the U.S. government forced many of the Native American tribes to cede their lands and settle on reservations, it opened large tracts of land to agricultural development by Europeans and Americans. Under the Homestead Act, thousands of settlers migrated into Nebraska to claim free land granted by the federal government. Because so few trees grew on the prairies, many of the first farming settlers built their homes of sod, as had the Native Americans such as the Omaha. The first wave of settlement gave the territory a sufficient population to apply for statehood. Nebraska became the 37th state on March 1, 1867, and the capital was moved from Omaha to the center at Lancaster, later renamed Lincoln after the recently assassinated President of the United States, Abraham Lincoln. The battle of Massacre Canyon on August 5, 1873, was the last major battle between the Pawnee and the Sioux.
During the 1870s to the 1880s, Nebraska experienced a large growth in population. Several factors contributed to attracting new residents. The first was that the vast prairie land was perfect for cattle grazing. This helped settlers to learn the unfamiliar geography of the area. The second factor was the invention of several farming technologies. Agricultural inventions such as barbed wire, windmills, and the steel plow, combined with good weather, enabled settlers to use Nebraska as prime farming land. By the 1880s, Nebraska 's population had soared to more than 450,000 people. The Arbor Day holiday was founded in Nebraska City by territorial governor J. Sterling Morton. The National Arbor Day Foundation is still headquartered in Nebraska City, with some offices in Lincoln.
In the late 19th century, many African Americans migrated from the South to Nebraska as part of the Great Migration, primarily to Omaha which offered working class jobs in meat packing, the railroads and other industries. Omaha has a long history of civil rights activism. Blacks encountered discrimination from other Americans in Omaha and especially from recent European immigrants, ethnic whites who were competing for the same jobs. In 1912, African Americans founded the Omaha chapter of the National Association for the Advancement of Colored People to work for improved conditions in the city and state.
Since the 1960s, Native American activism in the state has increased, both through open protest, activities to build alliances with state and local governments, and in the slower, more extensive work of building tribal institutions and infrastructure. Native Americans in federally recognized tribes have pressed for self - determination, sovereignty and recognition. They have created community schools to preserve their cultures, as well as tribal colleges and universities. Tribal politicians have also collaborated with state and county officials on regional issues.
The state is bordered by South Dakota to the north; Iowa to the east and Missouri to the southeast, across the Missouri River; Kansas to the south; Colorado to the southwest; and Wyoming to the west. The state has 93 counties; it occupies the central portion of the Frontier Strip. Nebraska is split into two time zones, with the state 's eastern half observing Central Time and the western half observing Mountain Time. Three rivers cross the state from west to east. The Platte River, formed by the confluence of the North Platte and the South Platte, runs through the state 's central portion, the Niobrara River flows through the northern part, and the Republican River runs across the southern part.
Nebraska is composed of two major land regions: the Dissected Till Plains and the Great Plains. The easternmost portion of the state was scoured by Ice Age glaciers; the Dissected Till Plains were left after the glaciers retreated. The Dissected Till Plains is a region of gently rolling hills; Omaha and Lincoln are in this region. The Great Plains occupy most of western Nebraska, with the region consisting of several smaller, diverse land regions, including the Sandhills, the Pine Ridge, the Rainwater Basin, the High Plains and the Wildcat Hills. Panorama Point, at 5,424 feet (1,653 m), is Nebraska 's highest point; though despite its name and elevation, it is a relatively low rise near the Colorado and Wyoming borders. A past Nebraska tourism slogan was "Where the West Begins ''; locations given for the beginning of the "West '' include the Missouri River, the intersection of 13th and O Streets in Lincoln (where it is marked by a red brick star), the 100th meridian, and Chimney Rock.
Areas under the management of the National Park Service include:
Areas under the management of the National Forest Service include:
Two major climatic zones are represented in Nebraska: the eastern half of the state has a humid continental climate (Köppen climate classification Dfa), and the western half, a semi-arid climate (Köppen BSk). The entire state experiences wide seasonal variations in temperature and precipitation. Average temperatures are fairly uniform across Nebraska, with hot summers and generally cold winters.
Average annual precipitation decreases east to west from about 31.5 inches (800 mm) in the southeast corner of the state to about 13.8 inches (350 mm) in the Panhandle. Humidity also decreases significantly from east to west. Snowfall across the state is fairly even, with most of Nebraska receiving between 25 and 35 inches (65 and 90 cm) of snow annually. Nebraska 's highest recorded temperature is 118 ° F (48 ° C) at Minden on July 24, 1936 and the lowest recorded temperature is − 47 ° F (− 44 ° C) at Camp Clarke on February 12, 1899.
Nebraska is in Tornado Alley. Thunderstorms are common in the spring and summer months, and violent thunderstorms and tornadoes happen primarily during the spring and summer, though they can also occur in the autumn. The chinook winds from the Rocky Mountains provide a temporary moderating effect on temperatures in western Nebraska during the winter months.
The United States Census Bureau estimates that the population of Nebraska was 1,896,190 on July 1, 2015, a 3.82 % increase since the 2010 United States Census. The center of population of Nebraska is in Polk County, in the city of Shelby.
According to the 2010 Census, 86.1 % of the population was White (82.1 % non-Hispanic white), 4.5 % was Black or African American, 1.0 % American Indian and Alaska Native, 1.8 % Asian, 0.1 % Native Hawaiian and other Pacific Islander, 2.2 % from two or more races. 9.2 % of the total population was of Hispanic or Latino origin (they may be of any race).
As of 2004, the population of Nebraska included about 84,000 foreign - born residents (4.8 % of the population).
The five largest ancestry groups in Nebraska are German (38.6 %), Irish (12.4 %), English (9.6 %), Mexican (8.7 %), and Czech (5.5 %).
Nebraska has the largest Czech American and non-Mormon Danish American population (as a percentage of the total population) in the nation. German Americans are the largest ancestry group in most of the state, particularly in the eastern counties. Thurston County (made up entirely of the Omaha and Winnebago reservations) has an American Indian majority, and Butler County is one of only two counties in the nation with a Czech - American plurality.
The religious affiliations of the people of Nebraska are:
The largest single denominations by number of adherents in 2010 were the Roman Catholic Church (372,838), the Lutheran Church -- Missouri Synod (112,585), the Evangelical Lutheran Church in America (110,110) and the United Methodist Church (109,283).
As of 2011, 31.0 % of Nebraska 's population younger than age 1 were minorities.
Note: Births in table do n't add up, because Hispanics are counted both by their ethnicity and by their race, giving a higher overall number.
Eighty - nine percent of the cities in Nebraska have fewer than 3,000 people. Nebraska shares this characteristic with five other Midwestern states: Kansas, Oklahoma, North Dakota and South Dakota, and Iowa. Hundreds of towns have a population of fewer than 1,000. Regional population declines have forced many rural schools to consolidate.
Fifty - three of Nebraska 's 93 counties reported declining populations between 1990 and 2000, ranging from a 0.06 % loss (Frontier County) to a 17.04 % loss (Hitchcock County).
More urbanized areas of the state have experienced substantial growth. In 2000, the city of Omaha had a population of 390,007; in 2005, the city 's estimated population was 414,521 (427,872 including the recently annexed city of Elkhorn), a 6.3 % increase over five years. The 2010 census showed that Omaha has a population of 408,958. The city of Lincoln had a 2000 population of 225,581 and a 2010 population of 258,379, a 14.5 % increase.
As of the 2010 Census, there were 530 cities and villages in the state of Nebraska. There are five classifications of cities and villages in Nebraska, based upon population. All population figures are 2013 Census Bureau estimates unless flagged by a reference number.
Metropolitan Class City (300,000 or more)
Primary Class City (100,000 -- 299,999)
First Class City (5,000 -- 99,999)
Second Class Cities (800 -- 4,999) and Villages (100 -- 800) make up the rest of the communities in Nebraska. There are 116 second class cities and 382 villages in the state.
Metropolitan areas - 2012 estimate data
Micropolitan areas - 2012 estimate data
Other areas
Nebraska has a progressive income tax. The portion of income from $0 to $2,400 is taxed at 2.56 %; from $2,400 to $17,500, at 3.57 %; from $17,500 to $27,000, at 5.12 %; and income over $27,000, at 6.84 %. The standard deduction for a single taxpayer is $5,700; the personal exemption is $118.
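Because these rates are marginal (each rate applies only to the slice of income falling within its range), a small worked sketch may help; the following Python snippet is purely illustrative, using the rates and thresholds quoted above and ignoring deductions, exemptions and rounding rules.

    # Illustrative sketch of the progressive brackets described above.
    # Thresholds and rates are taken from the text; deduction handling is a simplifying assumption.
    BRACKETS = [
        (0, 2_400, 0.0256),
        (2_400, 17_500, 0.0357),
        (17_500, 27_000, 0.0512),
        (27_000, float("inf"), 0.0684),
    ]

    def nebraska_income_tax(taxable_income: float) -> float:
        """Apply each rate only to the portion of income inside its bracket."""
        tax = 0.0
        for lower, upper, rate in BRACKETS:
            if taxable_income > lower:
                tax += (min(taxable_income, upper) - lower) * rate
            else:
                break
        return round(tax, 2)

    # Example: $30,000 of taxable income is taxed partly at 2.56%, 3.57%, 5.12% and 6.84%
    print(nebraska_income_tax(30_000))  # 1292.11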
Nebraska has a state sales and use tax of 5.5 %. In addition to the state tax, some Nebraska cities assess a city sales and use tax, in 0.5 % increments, up to a maximum of 1.5 %. Dakota County levies an additional 0.5 % county sales tax. Food and ingredients that are generally for home preparation and consumption are not taxable. All real property within the state of Nebraska is taxable unless specifically exempted by statute. Since 1992, only depreciable personal property is subject to tax and all other personal property is exempt from tax. Inheritance tax is collected at the county level.
The Bureau of Economic Analysis estimates of Nebraska 's gross state product in 2010 was $89.8 billion. Per capita personal income in 2004 was $31,339, 25th in the nation. Nebraska has a large agriculture sector, and is a major producer of beef, pork, corn (maize), soybeans, and sorghum. Other important economic sectors include freight transport (by rail and truck), manufacturing, telecommunications, information technology, and insurance.
As of April 2015, the state 's unemployment rate was 2.5 %, the lowest in the nation.
Kool - Aid was created in 1927 by Edwin Perkins in the city of Hastings, which celebrates the event the second weekend of every August with Kool - Aid Days, and Kool - Aid is the official soft drink of Nebraska. CliffsNotes were developed by Clifton Hillegass of Rising City. He adapted his pamphlets from the Canadian publication Coles Notes.
Omaha is home to Berkshire Hathaway, whose Chief executive officer (CEO), Warren Buffett, was ranked in March 2009 by Forbes magazine as the second richest person in the world. The city is also home to Mutual of Omaha, InfoUSA, TD Ameritrade, West Corporation, Valmont Industries, Woodmen of the World, Kiewit Corporation, Union Pacific Railroad, and Gallup. Ameritas Life Insurance Corp., Nelnet, Sandhills Publishing Company, and Duncan Aviation are based in Lincoln; The Buckle is based in Kearney. Sidney is the national headquarters for Cabela 's, a specialty retailer of outdoor goods.
The world 's largest train yard, Union Pacific 's Bailey Yard, is in North Platte. The Vise - Grip was invented by William Petersen in 1924, and was manufactured in De Witt until the plant was closed and moved to China in late 2008.
Lincoln 's Kawasaki Motors Manufacturing is the only Kawasaki plant in the world to produce the Jet Ski, All - terrain vehicle (ATV), and Mule lines of product. The facility employs more than 1200 people.
The Spade Ranch, in the Sandhills, is one of Nebraska 's oldest and largest beef cattle operations.
The Union Pacific Railroad, headquartered in Omaha, was incorporated on July 1, 1862, in the wake of the Pacific Railway Act of 1862. Bailey Yard, in North Platte, is the largest railroad classification yard in the world. The route of the original transcontinental railroad runs through the state.
Other major railroads with operations in the state are: Amtrak; BNSF Railway; Canadian National Railway; and Iowa Interstate Railroad.
Interstate Highways through the State of Nebraska
The U.S. Routes in Nebraska
Nebraska 's government operates under the framework of the Nebraska Constitution, adopted in 1875, and is divided into three branches: executive, legislative, and judicial.
The head of the executive branch is Governor Pete Ricketts. Other elected officials in the executive branch are Lieutenant Governor Mike Foley, Attorney General Doug Peterson, Secretary of State John A. Gale, State Treasurer Don Stenberg, and State Auditor Charlie Janssen. All elected officials in the executive branch serve four - year terms.
Nebraska is the only state in the United States with a unicameral legislature. Although this house is officially known simply as the "Legislature '', and more commonly called the "Unicameral '', its members call themselves "senators ''. Nebraska 's Legislature is also the only state legislature in the United States that is officially nonpartisan. The senators are elected with no party affiliation next to their names on the ballot, and members of any party can be elected to the positions of speaker and committee chairs. The Nebraska Legislature can also override the governor 's veto with a three - fifths majority, in contrast to the two - thirds majority required in some other states.
The Legislature meets in the third Nebraska State Capitol building, built between 1922 and 1932. It was designed by Bertram G. Goodhue. Built from Indiana limestone, the capitol 's base is a cross within a square. A 400 - foot domed tower rises from this base. The Sower, a 19 - foot bronze statue representing agriculture, crowns the building. When Nebraska became a state in 1867, its legislature consisted of two houses: a House of Representatives and a Senate. For years, U.S. Senator George Norris and other Nebraskans encouraged the idea of a unicameral legislature, and demanded the issue be decided in a referendum. Norris argued:
The constitutions of our various states are built upon the idea that there is but one class. If this be true, there is no sense or reason in having the same thing done twice, especially if it is to be done by two bodies of men elected in the same way and having the same jurisdiction.
Unicameral supporters also argued that a bicameral legislature had a significant undemocratic feature in the committees that reconciled House and Senate legislation. Votes in these committees were secretive, and would sometimes add provisions to bills that neither house had approved. Nebraska 's unicameral legislature today has rules that bills can contain only one subject, and must be given at least five days of consideration. In 1934, due in part to the budgetary pressure of the Great Depression, Nebraska citizens ran a state initiative to vote on a constitutional amendment creating a unicameral legislature, which was approved and, in effect, abolished the House of Representatives (the lower house).
The judicial system in Nebraska is unified, with the Nebraska Supreme Court having administrative authority over all the courts within the state. Nebraska uses the Missouri Plan for the selection of judges at all levels, including county courts (as the lowest - level courts) and twelve district courts, which contain one or more counties. The Nebraska State Court of Appeals hears appeals from the district courts, juvenile courts, and workers ' compensation courts; the Nebraska Supreme Court is the final court of appeal.
Nebraska 's U.S. senators are Deb Fischer and Ben Sasse, both Republicans; Fischer, elected in 2012, is the senior.
Nebraska has three representatives in the House of Representatives: Jeff Fortenberry (R) of the 1st district; Don Bacon (R) of the 2nd district; and Adrian Smith (R) of the 3rd district.
Nebraska is one of two states (Maine being the other) that allow for a split in the state 's allocation of electoral votes in presidential elections. Under a 1991 law, two of Nebraska 's five votes are awarded to the winner of the statewide popular vote, while the other three go to the highest vote - getter in each of the state 's three congressional districts.
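As a rough illustration of that allocation rule, the split can be sketched as below; the function name is hypothetical and the example simply echoes the 2008 outcome described in the following paragraph rather than real vote counts.

    # Illustrative allocation of Nebraska's five electoral votes under the split rule described above:
    # two votes to the statewide popular-vote winner, one to the winner of each of the three districts.
    from collections import Counter

    def allocate_electoral_votes(statewide_winner: str, district_winners: list[str]) -> Counter:
        votes = Counter()
        votes[statewide_winner] += 2        # two at-large votes
        for winner in district_winners:     # one vote per congressional district
            votes[winner] += 1
        return votes

    print(allocate_electoral_votes("McCain", ["McCain", "Obama", "McCain"]))
    # Counter({'McCain': 4, 'Obama': 1})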
For most of its history, Nebraska has been a solidly Republican state. Republicans have carried the state in all but one presidential election since 1940: the 1964 landslide election of Lyndon B. Johnson. In the 2004 presidential election, George W. Bush won the state 's five electoral votes by a margin of 33 percentage points (making Nebraska 's the fourth - strongest Republican vote among states) with 65.9 % of the overall vote; only Thurston County, which is majority - Native American, voted for his Democratic challenger John Kerry. In 2008, the state split its electoral votes for the first time: Republican John McCain won the popular vote in Nebraska as a whole and two of its three congressional districts; the second district, which includes the city of Omaha, went for Democrat Barack Obama.
Despite the current Republican domination of Nebraska politics, the state has a long tradition of electing centrist members of both parties to state and federal office; examples include George W. Norris (who served his last few years in the Senate as an independent), J. James Exon, and Bob Kerrey. Voters have tilted to the right in recent years with the election of conservative Mike Johanns to the U.S. Senate and the 2006 re-election of Ben Nelson, who was considered the most conservative Democrat in the Senate until his retirement in 2013, when he was replaced by conservative Republican Deb Fischer.
Former President Gerald Ford was born in Nebraska, but moved away shortly after birth. Illinois native William Jennings Bryan represented Nebraska in Congress, served as U.S. Secretary of State under President Woodrow Wilson, and unsuccessfully ran for President three times.
University of Nebraska system
Nebraska State College System
Community Colleges
Private colleges / universities
The College World Series has been held in Omaha since 1950. It was held at Rosenblatt Stadium from 1950 through 2010, and at TD Ameritrade Park Omaha since 2011.
The following are National Collegiate Athletic Association Division I college sports programs in Nebraska.
Nebraska has several colleges playing at the NCAA Division II level.
Coordinates: 41 ° 30 ′ N 100 ° 00 ′ W / 41.5 ° N 100 ° W / 41.5; - 100
|
where does a physician list the final diagnosis | Medical diagnosis - wikipedia
Medical diagnosis (abbreviated Dx or D) is the process of determining which disease or condition explains a person 's symptoms and signs. It is most often referred to as diagnosis with the medical context being implicit. The information required for diagnosis is typically collected from a history and physical examination of the person seeking medical care. Often, one or more diagnostic procedures, such as diagnostic tests, are also done during the process. Sometimes posthumous diagnosis is considered a kind of medical diagnosis.
Diagnosis is often challenging, because many signs and symptoms are nonspecific. For example, redness of the skin (erythema), by itself, is a sign of many disorders and thus does not tell the healthcare professional what is wrong. Thus differential diagnosis, in which several possible explanations are compared and contrasted, must be performed. This involves the correlation of various pieces of information followed by the recognition and differentiation of patterns. Occasionally the process is made easy by a sign or symptom (or a group of several) that is pathognomonic.
Diagnosis is a major component of the procedure of a doctor 's visit. From the point of view of statistics, the diagnostic procedure involves classification tests.
The first recorded examples of medical diagnosis are found in the writings of Imhotep (2630 -- 2611 BC) in ancient Egypt (the Edwin Smith Papyrus). A Babylonian medical textbook, the Diagnostic Handbook written by Esagil - kin - apli (fl. 1069 -- 1046 BC), introduced the use of empiricism, logic and rationality in the diagnosis of an illness or disease. Traditional Chinese Medicine, as described in the Yellow Emperor 's Inner Canon or Huangdi Neijing, specified four diagnostic methods: inspection, auscultation - olfaction, interrogation, and palpation. Hippocrates was known to make diagnoses by tasting his patients ' urine and smelling their sweat.
A diagnosis, in the sense of diagnostic procedure, can be regarded as an attempt at classification of an individual 's condition into separate and distinct categories that allow medical decisions about treatment and prognosis to be made. Subsequently, a diagnostic opinion is often described in terms of a disease or other condition, but in the case of a wrong diagnosis, the individual 's actual disease or condition is not the same as the individual 's diagnosis.
A diagnostic procedure may be performed by various health care professionals such as a physician, physical therapist, optometrist, healthcare scientist, chiropractor, dentist, podiatrist, nurse practitioner, or physician assistant. This article uses diagnostician as any of these person categories.
A diagnostic procedure (as well as the opinion reached thereby) does not necessarily involve elucidation of the etiology of the diseases or conditions of interest, that is, what caused the disease or condition. Such elucidation can be useful to optimize treatment, further specify the prognosis or prevent recurrence of the disease or condition in the future.
The initial task is to detect a medical indication to perform a diagnostic procedure. Indications include:
Even during an already ongoing diagnostic procedure, there can be an indication to perform another, separate, diagnostic procedure for another, potentially concomitant, disease or condition. This may occur as a result of an incidental finding of a sign unrelated to the parameter of interest, such as can occur in comprehensive tests such as radiological studies like magnetic resonance imaging or blood test panels that also include blood tests that are not relevant for the ongoing diagnosis.
General components which are present in a diagnostic procedure in most of the various available methods include:
There are a number of methods or techniques that can be used in a diagnostic procedure, including performing a differential diagnosis or following medical algorithms. In reality, a diagnostic procedure may involve components of multiple methods.
The method of differential diagnosis is based on finding as many candidate diseases or conditions as possible that can possibly cause the signs or symptoms, followed by a process of elimination, or at least of rendering the entries more or less probable by further medical tests and other processing, aiming to reach the point where only one candidate disease or condition remains as probable. The final result may also remain a list of possible conditions, ranked in order of probability or severity.
The resultant diagnostic opinion by this method can be regarded more or less as a diagnosis of exclusion. Even if it does not result in a single probable disease or condition, it can at least rule out any imminently life - threatening conditions.
Unless the provider is certain of the condition present, further medical tests, such as medical imaging, are performed or scheduled in part to confirm or disprove the diagnosis but also to document the patient 's status and keep the patient 's medical history up to date.
If unexpected findings are made during this process, the initial hypothesis may be ruled out and the provider must then consider other hypotheses.
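The elimination and ranking idea described above can be sketched very roughly in code; this is only a toy illustration, not a clinical tool, and the candidate conditions, findings and test results below are invented placeholders.

    # Toy sketch of the differential-diagnosis idea: start with candidate conditions,
    # rule out those contradicted by test results, and rank the rest by how many
    # observed findings they explain. All names here are hypothetical placeholders.
    candidates = {
        "condition_a": {"expects": {"finding_1", "finding_2"}, "excluded_by": {"test_x_negative"}},
        "condition_b": {"expects": {"finding_1"}, "excluded_by": set()},
        "condition_c": {"expects": {"finding_3"}, "excluded_by": {"test_y_negative"}},
    }

    observed_findings = {"finding_1", "finding_2"}
    test_results = {"test_y_negative"}

    def differential(candidates, observed, results):
        remaining = {
            name: len(info["expects"] & observed)      # findings this candidate explains
            for name, info in candidates.items()
            if not (info["excluded_by"] & results)     # drop candidates contradicted by a test
        }
        # rank the surviving candidates from most to least supported
        return sorted(remaining.items(), key=lambda item: item[1], reverse=True)

    print(differential(candidates, observed_findings, test_results))
    # [('condition_a', 2), ('condition_b', 1)]  -- condition_c is ruled out by the negative test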
In a pattern recognition method the provider uses experience to recognize a pattern of clinical characteristics. It is mainly based on certain symptoms or signs being associated with certain diseases or conditions, not necessarily involving the more cognitive processing involved in a differential diagnosis.
This may be the primary method used in cases where diseases are "obvious '', or the provider 's experience may enable him or her to recognize the condition quickly. Theoretically, a certain pattern of signs or symptoms can be directly associated with a certain therapy, even without a definite decision regarding what is the actual disease, but such a compromise carries a substantial risk of missing a diagnosis which actually has a different therapy so it may be limited to cases where no diagnosis can be made.
The term diagnostic criteria designates the specific combination of signs, symptoms, and test results that the clinician uses to attempt to determine the correct diagnosis.
Some examples of diagnostic criteria, also known as clinical case definitions, are:
Clinical decision support systems are interactive computer programs designed to assist health professionals with decision - making tasks. The clinician interacts with the software, utilizing both the clinician 's knowledge and the software, to make a better analysis of the patient 's data than either human or software could make on their own. Typically the system makes suggestions for the clinician to look through and the clinician picks useful information and removes erroneous suggestions.
Other methods that can be used in performing a diagnostic procedure include:
Diagnosis problems are the dominant cause of medical malpractice payments, accounting for 35 % of total payments in a study of 25 years of data and 350,000 claims.
Overdiagnosis is the diagnosis of "disease '' that will never cause symptoms or death during a patient 's lifetime. It is a problem because it turns people into patients unnecessarily and because it can lead to economic waste (overutilization) and treatments that may cause harm. Overdiagnosis occurs when a disease is diagnosed correctly, but the diagnosis is irrelevant. A correct diagnosis may be irrelevant because treatment for the disease is not available, not needed, or not wanted.
Most people will experience at least one diagnostic error in their lifetime, according to a 2015 report by the National Academies of Sciences, Engineering, and Medicine.
Causes and factors of error in diagnosis are:
In making a medical diagnosis, a lag time is a delay until a step towards diagnosis of a disease or condition is made. Types of lag times are mainly:
The plural of diagnosis is diagnoses. The verb is to diagnose, and a person who diagnoses is called a diagnostician. The word diagnosis / daɪ. əɡˈnoʊsɪs / is derived through Latin from the Greek word διάγνωσις from διαγιγνώσκειν, meaning "to discern, distinguish ''.
Medical diagnosis or the actual process of making a diagnosis is a cognitive process. A clinician uses several sources of data and puts the pieces of the puzzle together to make a diagnostic impression. The initial diagnostic impression can be a broad term describing a category of diseases instead of a specific disease or condition. After the initial diagnostic impression, the clinician obtains follow up tests and procedures to get more data to support or reject the original diagnosis and will attempt to narrow it down to a more specific level. Diagnostic procedures are the specific tools that the clinicians use to narrow the diagnostic possibilities.
Diagnosis can take many forms. It might be a matter of naming the disease, lesion, dysfunction or disability. It might be a management - naming or prognosis - naming exercise. It may indicate either degree of abnormality on a continuum or kind of abnormality in a classification. It 's influenced by non-medical factors such as power, ethics and financial incentives for patient or doctor. It can be a brief summation or an extensive formulation, even taking the form of a story or metaphor. It might be a means of communication such as a computer code through which it triggers payment, prescription, notification, information or advice. It might be pathogenic or salutogenic. It 's generally uncertain and provisional.
Once a diagnostic opinion has been reached, the provider is able to propose a management plan, which will include treatment as well as plans for follow - up. From this point on, in addition to treating the patient 's condition, the provider can educate the patient about the etiology, progression, prognosis, other outcomes, and possible treatments of her or his ailments, as well as providing advice for maintaining health.
A treatment plan is proposed which may include therapy and follow - up consultations and tests to monitor the condition and the progress of the treatment, if needed, usually according to the medical guidelines provided by the medical field on the treatment of the particular illness.
Relevant information should be added to the medical record of the patient.
A failure to respond to treatments that would normally work may indicate a need for review of the diagnosis.
Sub-types of diagnoses include:
Medical sign Symptom Syndrome
Medical diagnosis Differential diagnosis Prognosis
Acute Chronic Cure / Remission
Disease Eponymous disease Acronym or abbreviation
|
el señor de los cielos temporada 6 actores | El Señor de los cielos (Season 6) - wikipedia
The sixth season of El Señor de los Cielos, an American television series created by Luis Zelkowicz, premiered on Telemundo on May 8, 2018.
The season was ordered in May 2017.
Aurelio Casillas recovered all the lost fortune and finally feels the need to retire. But it is time for retribution, the hatred that he sowed since he sold his soul to the drug trafficking demon is now knocking on his door with the face and blood of the many innocent people he destroyed. Aurelio will understand that his riches are an illusion, and that after being the great hunter he was, he will now become the prey. The women he mistreated, the men he betrayed, the political puppets he put in power, and even his own children will turn against him.
On March 30, 2018, People en Español magazine confirmed the first actors for the season: Rafael Amaya, Carmen Aub, Iván Arana, Lisa Owen, Alejandro López, and Jesús Moré. This season features the return of Robinson Díaz as El Cabo, as well as new cast members including María Conchita Alonso, Juana Arias, Carlos Bardem, Isabella Castillo, Ninel Conde, Guy Ecker, Alberto Guerra, Thali García, Dayana Garroz, Francisco Gattorno, and Fernando Noriega, among others.
After Mauricio Ochmann announced that he would no longer be playing El Chema, actor Alberto Guerra joined the series in the role.
The premiere of the sixth season was watched by 2.14 million viewers, making Telemundo the most - watched Spanish - language network at 10pm / 9c and outperforming its competitor Por amar sin ley, which obtained only 1.44 million viewers. After the good reception of the first two episodes of the season, Telemundo renewed the series for a seventh season during the Upfront for the 2018 -- 19 television season.
|
when did england last play in world cup semi final | England at the FIFA World Cup - wikipedia
The England national football team has competed at the FIFA World Cup since 1950. The FIFA World Cup is the premier competitive international football tournament, first played in 1930, whose finals stage has been held every four years since, except 1942 and 1946, due to the Second World War.
The tournament consists of two parts, the qualification phase and the final phase (officially called the World Cup Finals). The qualification phase, which currently take place over the three years preceding the finals, is used to determine which teams qualify for the finals. The current format of the finals involves thirty - two teams competing for the title, at venues within the host nation (or nations) over a period of about a month. The World Cup Finals is the most widely viewed sporting event in the world, with an estimated 715.1 million people watching the 2006 Final.
England did not enter the competition until 1950, but have entered all eighteen subsequent tournaments. They have failed to qualify for the finals on three occasions: 1974 (West Germany), 1978 (Argentina) and 1994 (United States). They have also failed to advance from the group stages on three occasions: at the 1950, 1958 and 2014 FIFA World Cups. Their best ever performance is winning the Cup in the 1966 tournament held in England, whilst they also finished in fourth place in 1990, in Italy, and in 2018, in Russia. Other than that, the team have reached the quarter - finals on nine occasions, the latest of which were at the 2002 (South Korea / Japan) and 2006 (Germany) tournaments.
England are the only team not representing a sovereign state to win the World Cup, which they did in 1966 when they hosted the finals. They defeated West Germany 4 -- 2 after extra time to win the World Cup title. Since then, they have generally reached the knockout stages of almost every competition they have qualified for, including fourth - place finishes in the 1990 and 2018 World Cups. At the World Cup, England have had more goalless draws than any other team.
England 's first qualifying campaign for the FIFA World Cup doubled as the 1950 British Home Championship. The series kicked off for England on 15 October 1949 at Ninian Park, Cardiff, against Wales. Stan Mortensen gave England the lead after twenty two minutes, and just seven minutes later, Jackie Milburn doubled the lead. This was the first goal of Milburn 's hat trick, which left England 4 -- 0 up with 20 minutes to play. Mal Griffiths scored a consolation goal for Wales ten minutes from time, but England held on for a comfortable victory.
A month later, England welcomed Ireland to Maine Road, and it began well for the home side as Jack Rowley scored inside six minutes. England were already 6 -- 0 up, thanks to Jack Froggatt, two for Stan Pearson, Stan Mortensen and a second from Rowley, by the time Ireland struck back through Samuel Smyth after 55 minutes. Rowley added a third and a fourth to his tally in the three minutes following Smyth 's goal, however, leaving the score at 8 -- 1 at the hour mark. The frantic scoring rate calmed down after that, with only one apiece before the final whistle, with Stan Pearson completing his brace for England 's ninth, and Bobby Brennan scoring for Ireland.
It was not until May 1950 that England travelled to Hampden Park to face Scotland, who were also undefeated after their games against Ireland and Wales. With the top two from the group qualifying, both teams were guaranteed progression to the finals, and the game was solely for the honour of winning the British Home Championship, and the seeding advantage to be enjoyed upon reaching Brazil. A solitary goal from Roy Bentley gave England the victory, the title and the top spot in the group.
England were seeded in pot one for the finals, which meant they were the favourites to progress from Group 2, which also contained Spain, Chile and the United States. England 's campaign kicked off against Chile in Rio de Janeiro, and, as was expected, England cruised to a 2 -- 0 victory, courtesy of goals from Stan Mortensen and Wilf Mannion.
Their troubles began four days later when they faced the Americans in Belo Horizonte in what has become one of the most famous matches of all time. Joe Gaetjens scored the only goal of the match to give the United States an unlikely victory, which has gone down as one of the World Cup 's greatest upsets. A myth arose that the English newspapers were so confident of an English victory that when the result was telegrammed back, they assumed a misprint and printed the score as 10 -- 1 in England 's favour. However, this has proven to be untrue.
This left England in a sticky situation prior to their final match, against Spain in Rio. They needed to win, and for Chile to beat the United States to stand any chance of going through, and even then they would need the goal averages to fall in their favour. As it turned out, no such calculations were necessary, despite Chile 's victory, as Spain 's Zarra scored the only goal of the game, eliminating England from the competition.
As with their first World Cup, England 's qualifying for the 1954 edition also constituted the 1953 -- 54 British Home Championship. They played Wales at Ninian Park as their first match once again, and the 4 -- 1 result was the same. However, unlike four years earlier, it was the home side that went into the lead, after twenty two minutes through Ivor Allchurch. Despite being 1 -- 0 down at half time, England scored four within eight minutes of the restart; two each for Dennis Wilshaw and Nat Lofthouse.
Goodison Park was the venue for England 's home clash against Ireland, who were newly renamed Northern Ireland. Harold Hassall got England off to a good start with a goal after just ten minutes. Eddie McMorran put the Irish back on terms just before the hour mark, but Hassall completed his brace six minutes later. Lofthouse completed a comfortable 3 -- 1 win for England.
With the top two in the group qualifying for the finals, the final game between England and Scotland, at Hampden Park, settled nothing except the placings within the group, despite Scotland having dropped a point with a 3 -- 3 draw at home to Wales. Allan Brown put the home side ahead after just seven minutes, but it was all square again thanks to Ivor Broadis just four minutes later. Johnny Nicholls gave England the lead for the first time just after half time, and they began to extend a lead after Ronnie Allen 's 68th - minute goal. Jimmy Mullen made the game all but certain seven minutes from time, and although Willie Ormond scored a consolation for Scotland with just 1 minute to play, England topped the competition for the second time in a row.
England were drawn in Group 4 for the finals, with hosts Switzerland, Italy and Belgium. In an odd twist, unique to the 1954 tournament, England and Italy, as the two seeded teams in the group, did not have to play each other.
Equally, Switzerland and Belgium did not have to play each other. England 's first game in Switzerland was against Belgium in Basel, and they suffered a shock as Léopold Anoul put the Belgians into the lead after just five minutes. Ivor Broadis put the favourites back on terms just over twenty minutes later, and although Nat Lofthouse gave England the lead 10 minutes later, it was proving to be tougher than they had expected against the Belgians.
Broadis scored his second just after the hour, but Henri Coppens hit back four minutes later to keep Belgium in the game at only 3 -- 2 down. Anoul completed his brace another four minutes after that to level the scores again. In another oddity peculiar to this World Cup, drawn matches in the group stage would go to extra time, and as such the teams played on with the score at 3 -- 3.
Just one minute into the added period, Lofthouse added a fourth for England and they seemed to have won it, but Jimmy Dickinson scored an own goal three minutes later to put the score back at 4 -- 4. It stayed this way until the extra period was up, and as penalty shoot - outs were yet to be invented and replays were not used in the group, the match was recorded as a draw.
England 's second and final group game was against the hosts in Bern. This proved to be an easier game for the Three Lions, and they scored one goal in each half (from Jimmy Mullen and Dennis Wilshaw respectively) to give them a comfortable win of 2 -- 0. As Switzerland (against England), Italy (against Switzerland) and Belgium (against Italy) had all lost one game, England progressed as group winners, along with Switzerland, who won a play - off against Italy.
England faced the winners of group three and defending champions Uruguay in the quarter - finals. Carlos Borges gave the South Americans the lead inside 5 minutes, but Lofthouse put England back on terms ten minutes later. England were clearly struggling, but held on until just before half time, when Obdulio Varela gave the lead back to Uruguay.
Juan Alberto Schiaffino doubled the lead just after the break, but Tom Finney kept England 's foot in the door with his sixty seventh - minute goal. However, it was all over after Javier Ambrois restored the two - goal lead with twelve minutes to play. The score remained at 4 -- 2, and England were eliminated from the cup.
For the first time, England had to play against countries other than the Home Nations to reach the Finals in Sweden. They were drawn against the Republic of Ireland and Denmark. In the qualifying round, England won three out of the four games and drew the other. Four months before the World Cup, Roger Byrne, Duncan Edwards, David Pegg and Tommy Taylor all lost their lives in the Munich air disaster while playing for Manchester United. At the finals, which is the only tournament to have seen all Home Nations take part, the Home Nations were all drawn in different groups.
England were drawn against the Soviet Union (2 -- 2), Brazil (0 -- 0) and Austria (2 -- 2), who finished third in the 1954 World Cup. At the end of the group stage, Soviet Union and England each had three points, and had scored four goals and conceded four goals. This meant there was a play - off to decide the second - placed team in the group, the winner to qualify. England lost the play - off 1 -- 0 and were thus knocked out. The only consolation for England was that they were the only team to play the eventual winners Brazil and not lose.
The third World Cup to take place in South America saw England qualify by topping their qualifying group, which contained Portugal and Luxembourg: they defeated Luxembourg on both occasions, defeated Portugal at home, and drew in Lisbon.
At the finals, England were drawn in a group with Hungary, Argentina and Bulgaria. England defeated Argentina 3 - 1, thanks to goals from Ron Flowers, Bobby Charlton and Jimmy Greaves, before playing out a goalless draw with Bulgaria, and a 2 -- 1 defeat to Hungary.
England finished in second place behind Hungary and played the winners of group 3, defending champions Brazil, in the quarter - finals. Brazil scored first through Garrincha, before an equaliser for Gerry Hitchens before half time. However, second - half goals from Garrincha and Vavá meant Brazil won the game 3 -- 1, and eliminated England from the competition. This defeat was manager Walter Winterbottom 's last game in charge. Winterbottom had led England to four World Cup Finals. From May 1963, Alf Ramsey became the manager of England.
In the 1966 World Cup Finals, England used their home advantage and, under Ramsey, won their first, and only, World Cup title. England played all their games at Wembley Stadium in London, which became the last time that the hosts were granted this privilege. After drawing 0 -- 0 in the opening game against former champions Uruguay (the first of four successive World Cup opening matches to finish goalless), England beat both France and Mexico 2 -- 0 and qualified for the quarter - finals.
The quarter - finals saw England play Argentina, which ended in a 1 -- 0 win for England. This match saw the start of the rivalry between England and Argentina, when Argentinian Antonio Rattín was dismissed by German referee Rudolf Kreitlein in a very fierce game. A 2 -- 1 win against Portugal in the semi-final then followed. Portugal were the first team to score against England in the tournament. The final saw England play West Germany, which finished in a 4 -- 2 win for England after extra time.
1970 saw the first World Cup finals take place in North America and England qualified automatically for the tournament by winning the 1966 FIFA World Cup. England were drawn in a group with Romania, former world champions Brazil and Czechoslovakia. Each of the matches only saw one goal, with England defeating Romania and Czechoslovakia, and losing to Brazil. The quarter - final saw a repeat of the 1966 final, with England playing West Germany. England were hampered by the fact that first - choice goalkeeper Gordon Banks was ill, and Peter Bonetti played instead. England led 2 -- 0 with goals by Alan Mullery and Martin Peters, but in the 70th minute, Franz Beckenbauer pulled one goal back for West Germany.
After Beckenbauer 's goal, Ramsey substituted Bobby Charlton, who overtook Billy Wright as England 's most capped player ever, with caps totalling 106. Uwe Seeler equalised for the Germans in the eighty first minute, thereby taking the game into extra time. During extra time, Gerd Müller scored the winning goal for West Germany which saw the German side win 3 -- 2. This turned out to be Charlton 's last game for England.
For the first time, England did not qualify for a World Cup. In a group with Olympic champions Poland and Wales, England could not overtake Poland. Having only drawn at home to Wales 1 -- 1 and lost 2 -- 0 away to Poland, England had to beat Poland at home, whilst Poland only needed to draw. Poland, facing an England side that included Martin Peters, managed to withstand England 's attacks in the first half. Poland took the lead in the 57th minute with a goal from Jan Domarski.
England equalised six minutes later, with a penalty converted by Allan Clarke. England were unable to score any more goals with goalkeeper Jan Tomaszewski keeping England at bay. Brian Clough had previously called Tomaszewski a "clown ''. The commentator of the game then said "it 's all over ''. Poland took this good form to the finals and ended in third place. After failing to qualify, Alf Ramsey resigned from his post and after a time, where Ramsey and his predecessor had lasted a total of 29 years, no manager was able to last in the job for longer than eight years. This ended when Bobby Robson became England manager.
England also did not qualify for the fourth World Cup which took place in South America. This time, England were denied by Italy, who had scored three more goals than England after both teams finished on the same points. Goals scored dictated who qualified after the head - to - head record between the two sides finished the same, following a 2 -- 0 home win for each team. The lower - ranked teams in the group were Finland and Luxembourg, but the size of the wins against them proved to be decisive. Nevertheless, Ron Greenwood was given a second chance in charge of England, after taking the role in 1977.
1982 saw the first time where the European Qualifying Rounds were divided into groups of five teams, where the top two teams qualify for the World Cup. Greenwood used his second chance and took England to Spain by finishing second behind Hungary and above Romania, Switzerland and Norway.
At the finals, England won all three group games, defeating France 3 - 1, with a brace from Bryan Robson, before beating Czechoslovakia 2 -- 0, with a Jozef Barmos own goal, and World Cup newcomers Kuwait 1 -- 0, thanks to a Trevor Francis goal.
The next round saw a second group stage consisting of three teams, a first time event at the World Cup. England drew with West Germany 0 -- 0 and after the Germans beat Spain 2 -- 1, England then had to beat Spain with a two - goal difference to progress to the next round. England, however, only managed a 0 -- 0 draw against the Spanish. England remained unbeaten at the end of the tournament. After the World Cup, Ron Greenwood 's time as England manager ended, and he was replaced by Bobby Robson.
1986 saw the second World Cup to take place in Mexico. England qualified for Mexico 1986 by winning four games and drawing four times against Northern Ireland, who qualified in second place, Romania, Finland and Turkey. In Mexico, England lost their opening game to Portugal 1 -- 0 and could only manage a goalless draw against Morocco. The final group game, however, saw England beat Poland 3 -- 0, which is one of the three highest scores for England at the World Cup, with Gary Lineker scoring a hat - trick.
This result took England to second place in the group, behind Morocco. England then also beat Paraguay 3 -- 0 in the Round of 16. In the quarter - finals, England renewed their rivalry with Argentina in a game that has become notorious for the Argentina goals, both scored by Diego Maradona. Maradona 's first goal, known as the Hand of God, was illegal and should not have counted, as he used his hand to punch the ball into the net. However, the referee missed this infringement, and ruled that the goal should stand. Maradona then made the score 2 - 0, famously dribbling from inside Argentina 's half and around several English players before scoring. Gary Lineker pulled back the score to 2 - 1, but England ran out of time to equalise, and were eliminated. Nevertheless, Lineker finished with the Golden Boot by scoring six goals and thereby becoming England 's first Golden Boot winner.
By winning three games and drawing the other three, scoring ten goals and conceding none, England qualified for Italia ' 90, the second World Cup to be held in Italy. Although unbeaten throughout qualification, they still finished second to Sweden, whom they drew with twice. England profited from Romania 's 3 -- 1 win over Denmark, who, had they won, would have qualified as the third - best second - placed team. West Germany and England were able to qualify for Italia ' 90 as the best second - placed teams in the groups with four teams.
Because of English hooliganism at European competition matches in the preceding years, England were forced to play their group games on Sardinia and Sicily. Group F contained the European champions the Netherlands, the Republic of Ireland, Egypt and England. After opening the tournament with a 1 -- 1 draw against Ireland and a 0 -- 0 draw against the Dutch, England then beat Egypt 1 -- 0. This was Egypt 's first appearance since the 1934 World Cup. England won the group with four points.
In the next round, England had to play Belgium. The game went to extra time, and in the hundredth and nineteenth minute, David Platt scored the winning goal. England also had to play extra time against Cameroon in the quarter - finals. Cameroon were the first African team to have reached the quarter - finals. England opened the scoring through David Platt, but Cameroon quickly turned around the game to lead 2 - 1. Lineker subsequently won and scored a penalty in the 83rd minute to ensure the game went to extra time. He then scored a second penalty, to see England reach the semi-finals.
In the semi-finals, England met West Germany. There was no separating the two teams after 90 minutes, which made England the first team to have played extra time in three successive World Cup games. There was also no separating the two teams after extra time, thereby taking the game to penalties.
Although English goalkeeper Peter Shilton dived the right way for every penalty, he was unable to save any. German goalkeeper Bodo Illgner, having failed to save any of England 's first three penalties, saved England 's fourth penalty, taken by Stuart Pearce. Olaf Thon then scored for Germany, meaning that England 's Chris Waddle would have to score his fifth penalty and hope that Shilton saved the Germans ' fifth penalty. However, Waddle 's penalty missed completely, going high over the crossbar, thereby resulting in England 's being knocked out of the competition. The third - place playoff between England and Italy saw England lose their only game of the tournament in normal time. Even though this was England 's best finish since the 1966 World Cup, Bobby Robson 's time as England manager had come to an end.
For the 1994 World Cup in the United States, under the leadership of new manager Graham Taylor, England surprisingly did not qualify for the tournament. In a group with six teams, England lost to Norway and the Netherlands, finishing third above Poland, Turkey and San Marino.
England went into their final game, against San Marino, knowing they needed a seven - goal victory and for Poland to beat the Netherlands in the other match in order to qualify. In the game against San Marino, Davide Gualtieri scored against England after nine seconds to give the outsiders the lead. England went on to win 7 -- 1, too small a margin; in any case the Dutch, level at 1 -- 1 with the Poles at half - time, went on to win 3 -- 1, meaning that however many goals England scored, they could not qualify. Taylor 's tenure in charge ended and he was replaced by Terry Venables, who left the post after England lost the semi-final of Euro 1996, hosted in England.
After missing out on the World Cup in 1994, England, managed by Glenn Hoddle, qualified for the World Cup in France. England were drawn in Group 2 of UEFA qualifying with Italy, Poland, Georgia and Moldova. England beat Poland, Georgia and Moldova both home and away, but a home defeat to Italy in their fourth match meant they went into the final qualifier at the Stadio Olimpico in Rome just a point ahead of the Azzurri and only needing a draw to qualify automatically; defeat would see them have to navigate a play - off to secure qualification. The match finished as a goalless draw and England finished top of the group.
At the finals in France, England played in Group G. England defeated Tunisia 2 -- 0 in the first game, with goals from Alan Shearer and Paul Scholes. Their second match saw England lose 2 -- 1 to Romania; despite an 81st - minute equaliser from Michael Owen, Dan Petrescu scored a winner shortly before injury time. In their final group game, England defeated Colombia 2 -- 0 in the decisive match, thanks to goals from midfielders Darren Anderton and David Beckham. England finished second in Group G, which saw them qualify for the last 16 phase, and play the winner of 1998 FIFA World Cup Group H, Argentina.
In a fiery game containing six yellow cards and two penalties, Gabriel Batistuta opened the scoring from the penalty spot in the fifth minute, before Alan Shearer equalised, also from the spot, four minutes later. England took the lead through Owen in the 16th minute, but Argentina drew level through Javier Zanetti in first - half injury time. David Beckham was then controversially sent off in the 47th minute for kicking out at Diego Simeone, an offence many felt merited at most a yellow card.
The game finished 2 - 2, and, as neither team were able to find a winner in extra time, penalties were needed to decide the team that qualified to the next round. While David Seaman did save one penalty, Argentine goalkeeper Carlos Roa managed to save two, including the vital one from David Batty, thereby knocking England out of the World Cup. Beckham subsequently received death threats and was sent bullets in the post.
In 2002 the World Cup took place in Asia for the first time. England were drawn in qualifying Group 9, alongside Germany, Finland, Greece and Albania. In the last ever game at the original Wembley Stadium, which closed after the match, England lost 1 -- 0 to Germany, the only goal scored by Dietmar Hamann. The match was the last under the management of Kevin Keegan, who resigned afterwards and was replaced by Sven - Göran Eriksson, England 's first ever foreign manager. England 's qualifying campaign was revitalised by a 5 -- 1 win over Germany in Munich, and they qualified automatically by drawing 2 -- 2 with Greece in their final game. Germany, who could only draw 0 -- 0 with Finland, finished behind England and had to qualify through a play - off against Ukraine.
In Japan, England began against Eriksson 's native Sweden, drawing 1 -- 1. England and Beckham then gained a measure of revenge for the 1998 defeat by beating Argentina 1 -- 0, thanks to a Beckham penalty. However, England could only manage a disappointing 0 -- 0 draw against Nigeria, which meant that although they qualified for the second round, they did so as runners - up, putting favourites Brazil in their path in the quarter - finals should they progress.
England defeated Denmark 3 -- 0 in the round of 16, thanks to goals from Michael Owen, Rio Ferdinand and Emile Heskey. England then played four - time World Cup winners and 1998 runners - up Brazil in the quarter - finals. Despite leading through a Michael Owen goal, a mistake by David Seaman saw England lose 2 -- 1 to Brazil, who went on to win the tournament.
England were drawn into Group 6 of European qualifying for the 2006 World Cup. The group featured fellow home nations Wales and Northern Ireland, as well as Poland (who had eliminated England the last time the World Cup was held in Germany), Azerbaijan and Austria. England won eight of the ten games and qualified as group winners ahead of Poland, despite drawing with Austria in Vienna and losing to Northern Ireland in Belfast.
In Germany, however, England were less convincing. England played in Group B, alongside Paraguay, Trinidad and Tobago, and Sweden. England started with a 1 -- 0 win against Paraguay, secured by a third - minute own goal. In the second game, against first - time qualifiers Trinidad and Tobago, England had to wait until the 83rd minute to take the lead, Peter Crouch opening the scoring with a goal many felt was illegal and Steven Gerrard adding a second in added time. In the last group game England drew 2 -- 2 with Sweden, a result that saw them qualify for the next round as group winners, thereby avoiding hosts Germany.
In the last 16, a free kick from David Beckham saw England win 1 -- 0 against Ecuador and reach the quarter - finals, where they faced Portugal. The game finished goalless, and England were once again knocked out on penalties; Portuguese goalkeeper Ricardo became the first goalkeeper to save three penalties in a World Cup shoot - out, stopping efforts from Frank Lampard, Steven Gerrard and Jamie Carragher. The only England player to convert his penalty was Owen Hargreaves, and Portugal won the shoot - out 3 -- 1 despite misses from Petit and Hugo Viana. This was also Eriksson 's final match as England manager.
Qualification for the first African World Cup went smoothly for new England manager Fabio Capello, appointed after previous manager Steve McClaren had failed to secure qualification for Euro 2008. By winning nine games and losing only to Ukraine, England qualified ahead of Croatia, Ukraine, Belarus, Kazakhstan and Andorra. England 's group at the finals was seen as a favourable one, containing comparatively weaker teams. However, England opened their campaign with a 1 -- 1 draw against the United States, after a major error by goalkeeper Robert Green. They then drew 0 -- 0 with Algeria and were booed off the field by their own fans, drawing the ire of striker Wayne Rooney. England eventually reached the next round by beating Slovenia 1 -- 0, but only as runners - up to the United States, meaning they would face favourites Germany.
In the second round match, Germany took the lead after 20 minutes when goalkeeper Manuel Neuer launched the ball downfield to Miroslav Klose, who opened the scoring. Germany made it 2 -- 0 after 32 minutes, before England defender Matthew Upson pulled a goal back with a header. Frank Lampard then had a shot which was disallowed despite clearly crossing the line, as replays confirmed. Neuer later admitted that he knew the ball had crossed the line but acted to deceive the referee. The German media reported it as "revenge for Wembley", while the English media criticised FIFA 's refusal to implement goal - line technology; despite his earlier opposition, Sepp Blatter later said it should be introduced after a Ukrainian goal against England at Euro 2012 was wrongly ruled out. As England pushed for an equaliser, Germany exploited the space and scored two more goals to win 4 -- 1, their biggest win over England at a World Cup.
Under Roy Hodgson, who had replaced Fabio Capello following a disagreement between Capello and The FA, England qualified for the second World Cup to be held in Brazil. Ukraine were again among the opponents in qualifying, along with Montenegro, Poland, Moldova and San Marino. After winning six games and drawing four, England qualified unbeaten.
The draw for the finals placed England with Italy and Uruguay, both former world champions, along with Costa Rica, making it the first time three former World Cup winners had been drawn in the same group. England lost to Italy and Uruguay and were knocked out after two games; the final match against Costa Rica, by then a dead rubber, finished as a goalless draw. With a single point, this was statistically England 's worst ever World Cup performance and their lowest points total in a World Cup group stage.
England played in UEFA Group F in qualification for the 2018 World Cup, in a group of six with Slovakia, Slovenia, Scotland, Lithuania and Malta, with only the group winner guaranteed qualification. England began the qualification process under manager Sam Allardyce, only for Allardyce to leave the post after just one game, due to controversy over his recorded comments about breaking FIFA rules.
Under Allardyce 's replacement, Gareth Southgate, England went undefeated throughout qualification, winning eight matches out of ten, drawing 0 -- 0 with Slovenia in Ljubljana, and drawing 2 -- 2 with Scotland in Glasgow thanks to a 90th - minute equaliser from Harry Kane. This was the third successive major tournament qualifying campaign in which England were undefeated, after the 2014 World Cup and Euro 2016 qualifiers.
Under Gareth Southgate, England began their tournament in Group G against Tunisia. The game started well for England with a goal from Harry Kane in the 11th minute, but Tunisia equalised through Sassi from the penalty spot in the 35th minute following a foul by Kyle Walker. Some controversy followed, as various potential offences against Kane inside the penalty area were ignored by the referee and no VAR checks were carried out. England persevered and scored a second goal in the 91st minute, again through Kane, for a 2 -- 1 victory.
In the second group stage match, England surpassed their record for goals scored in a World Cup match by beating Panama 6 - 1. Jesse Lingard scored the third goal, John Stones scored the first and fourth goals and Harry Kane scored a hat - trick with the second, fifth and sixth goals.
In the final group stage match, England narrowly lost 1 -- 0 to Belgium, Adnan Januzaj scoring the only goal of the game. Both sides fielded reserve teams, with England making nine changes. Before the game, media outlets had suggested that a loss could even prove beneficial, as the group winner would land in the half of the draw containing four of the world 's top seven sides. The result meant Belgium topped the group and England finished second, setting up a last 16 clash with Group H winners Colombia.
England played their last 16 match at the Otkritie Arena in Moscow, with the same team that had faced Tunisia. Harry Kane scored his sixth goal of the tournament, and his third penalty, after once again being fouled in the box at a corner, as he had been against Panama. The score remained 1 -- 0 until stoppage time, when a header from Yerry Mina beat Jordan Pickford to take the game to extra time. Neither team managed to score in extra time, and the match went to penalties, which England won 4 -- 3. As well as being the first knockout match England had won at a major tournament since 2006 (when they defeated Ecuador in the last 16), it was the first time England had won a World Cup penalty shoot - out. The match was notable for its heated atmosphere, with a total of eight yellow cards shown.
England played Sweden in their quarter - final at the Cosmos Arena, Samara, on 7 July 2018. They won 2 -- 0, with defender Harry Maguire scoring his first England goal with a header from a corner and Dele Alli adding a second header from close range. This sent England through to their third World Cup semi-final, and their first since 1990. They lost the semi-final 2 -- 1 to Croatia after extra time, and then lost to Belgium again in the third - place playoff, through goals from Thomas Meunier and Eden Hazard, despite an Eric Dier shot being cleared off the line by Toby Alderweireld. England therefore finished fourth, their best World Cup result since 1990.
The tournament would see England score nine goals from set - pieces -- the most by a team in a single World Cup tournament since 1966.
Historically, very few English World Cup squad members were playing for a club in a foreign league at the time of their selection to the national squad.
Four FIFA World Cup finals were officiated by English referees, more than by any other football association. The first Englishman to officiate a final, George Reader, is also the oldest World Cup referee to date, as he was 53 years and 236 days old at the 1950 decisive match between Uruguay and Brazil. The other final referees are William Ling (1954), Jack Taylor (1974) and Howard Webb (2010).
Arthur Ellis, who was a linesman at the 1950 final, is part of an elite group of referees who have been called up for three consecutive World Cups (1950 -- 1958).
|
why do airplane tail numbers start with n | Aircraft registration - wikipedia
Every civil aircraft must be marked prominently on its exterior by an alphanumeric string, indicating its country of registration and its unique serial number. This code must also appear in its Certificate of Registration, issued by the relevant National Aviation Authority (NAA). An aircraft can only have one registration, in one jurisdiction.
In accordance with the Convention on International Civil Aviation (also known as the Chicago Convention), all civil aircraft must be registered with a national aviation authority (NAA) using procedures set by each country. Every country, even those not party to the Chicago Convention, has an NAA whose functions include the registration of civil aircraft. An aircraft can only be registered once, in one jurisdiction. The NAA allocates a unique alphanumeric string to identify the aircraft, which also indicates the nationality (i.e., country of registration) of the aircraft, and provides a legal document called a Certificate of Registration, one of the documents which must be carried when the aircraft is in operation.
The registration identifier must be displayed prominently on the aircraft. Most countries also require the registration identifier to be imprinted on a permanent fireproof plate mounted on the fuselage in case of a post-fire / post-crash aircraft accident investigation. Military aircraft typically use tail codes and serial numbers.
Although each aircraft registration identifier is unique, some countries allow it to be re-used when the aircraft has been sold, destroyed or retired. For example, N3794N is assigned to a Mooney M20F. It had been previously assigned to a Beechcraft Bonanza (specifically, the aircraft in which Buddy Holly was killed). Also note that an individual aircraft may be assigned different registrations during its existence. This can be because the aircraft changes ownership, jurisdiction of registration, or in some cases for vanity reasons.
Most often, aircraft are registered in the jurisdiction in which the carrier is resident or based, and may enjoy preferential rights or privileges as a flag carrier for international operations. However, many aircraft are registered in offshore financial centres which provide open aircraft registrations, notably Aruba, Bermuda and the Cayman Islands.
Carriers in emerging markets may be required to register aircraft in an offshore jurisdiction where they are leased or purchased but financed by banks in major onshore financial centres. The financing institution may be reluctant to allow the aircraft to be registered in the carrier 's home country (either because it does not have sufficient regulation governing civil aviation, or because it feels the courts in that country would not cooperate fully if it needed to enforce any security interest over the aircraft), and the carrier is reluctant to have the aircraft registered in the financier 's jurisdiction (often the United States or the United Kingdom) either because of personal or political reasons, or because they fear spurious lawsuits and potential arrest of the aircraft.
For example, in 2003, state carrier Pakistan International Airlines re-registered its entire fleet in the Cayman Islands as part of the financing of its purchase of eight new Boeing 777s after the US Export - Import Bank refused to allow the aircraft to remain registered in Pakistan, and the airline refused to have the aircraft registered in the United States.
The first use of aircraft registrations was based on the radio callsigns allocated at the London International Radiotelegraphic Conference in 1913. The format was a single letter prefix followed by four other letters (like A-BCDE). The major nations operating aircraft were allocated a single letter prefix. Smaller countries had to share a single letter prefix, but were allocated exclusive use of the first letter of the suffix. This was modified by agreement by the International Bureau at Berne and published on April 23, 1913. Although initial allocations were not specifically for aircraft but for any radio user, the International Air Navigation Convention held in Paris in 1919 (Paris Convention of 1919) made allocations specifically for aircraft registrations, based on the 1913 callsign list. The agreement stipulated that the nationality marks were to be followed by a hyphen then a group of four letters that must include a vowel (and for the convention Y was considered to be a vowel). This system operated until the adoption of the revised system in 1928.
The International Radiotelegraph Convention at Washington in 1927 revised the list of markings. These were adopted from 1928 and are the basis of the currently used registrations. The markings have been amended and added to over the years, and the allocations and standards have since 1947 been managed by the International Civil Aviation Organization (ICAO).
Article 20 of the Convention on International Civil Aviation (Chicago Convention), signed in 1944, requires that all aircraft engaged in international air navigation bears its appropriate nationality and registration marks. Upon registration, the aircraft receives its unique "registration '', which must be displayed prominently on the aircraft.
Annex 7 to the Chicago Convention describes the definitions, location, and measurement of nationality and registration marks. The aircraft registration is made up of a prefix selected from the country 's callsign prefix allocated by the International Telecommunication Union (ITU) (making the registration a quick way of determining the country of origin) and the registration suffix. Depending on the country of registration, this suffix is a numeric or alphanumeric code, and consists of one to five characters. A supplement to Annex 7 provides an updated list of approved nationality and common marks used by various countries.
While the Chicago convention sets out the country - specific prefixes used in registration marks, and makes provision for the ways they 're used in international civil aviation and displayed on aircraft, individual countries also make further provision for their formats and the use of registration marks for intranational flight.
When painted on the aircraft 's fuselage, the prefix and suffix are usually separated by a dash (for example, YR - BMA). When entered in a flight plan, the dash is omitted (for example, YRBMA). In some countries that use a number suffix rather than letters, like the United States (N), South Korea (HL), and Japan (JA), the prefix and suffix are connected without a dash. Aircraft flying privately usually use their registration as their radio callsign, but many aircraft flying in commercial operations (especially charter, cargo, and airlines) use the ICAO airline designator or a company callsign.
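As a small illustration of the dash convention described above, the following minimal Python sketch (the function name and example registrations are invented for illustration, not taken from any standard or library) converts a registration as painted on the fuselage into the form entered in a flight plan by removing the dash:

```python
def flight_plan_form(painted_registration: str) -> str:
    """Return the registration as it would be entered in a flight plan.

    The dash separating prefix and suffix on the fuselage (e.g. "YR-BMA")
    is omitted in flight plans ("YRBMA"). Registrations painted without a
    dash, such as US "N" numbers, are returned unchanged.
    """
    return painted_registration.replace("-", "")

# Illustrative examples based on the text above:
print(flight_plan_form("YR-BMA"))  # -> "YRBMA"
print(flight_plan_form("N123AB"))  # -> "N123AB" (no dash to remove)
```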
Some countries will permit an aircraft that will not be flown into the airspace of another country to display the registration with the country prefix omitted - for example, gliders registered in Australia commonly display only the three - letter unique mark, without the "VH - '' national prefix.
Some countries also operate a separate registry system, or use a separate group of unique marks, for gliders, ultralights, and / or other less - common types of aircraft. For example, Germany and Switzerland both use lettered suffixes (in the form D - xxxx and HB - xxx respectively) for most forms of flight - craft but numbers (D - nnnn and HB - nnn) for unpowered gliders. Many other nations register gliders in subgroups beginning with the letter G, such as Norway with LN - Gxx and New Zealand with ZK - Gxx.
In the United States, the registration number is commonly referred to as an "N '' number, because all aircraft registered there have a number starting with the letter N. An alphanumeric system is used because of the large numbers of aircraft registered in the United States. An N - number begins with a run of one or more numeric digits, may end with one or two alphabetic letters, may only consist of one to five characters in total, and must start with a digit other than zero. In addition, N - numbers may not contain the letters I or O, due to their similarities with the numerals 1 and 0.
Each alphabetic letter in the suffix can have one of 24 discrete values, while each numeric digit can be one of 10, except the first, which can take on only one of nine values. This yields a total of 915,399 possible registration numbers in the namespace, though certain combinations are reserved either for government use or for other special purposes. With so many possible calls, radio shortcuts are used. Normally when flying entirely within the United States, an aircraft would not identify itself starting with "N '', since that is assumed. Also, after initial contact is made with an aircraft control site, only the last two or three characters are typically used.
The following combinations may be used: one to five digits (N1 to N99999), one to four digits followed by a single letter (N1A to N9999Z), or one to three digits followed by two letters (N1AA to N999ZZ).
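As an informal check of the 915,399 figure, the short Python sketch below (illustrative only; it simply encodes the format rules stated in the preceding paragraphs) enumerates the three allowed patterns and sums the possibilities:

```python
# Illustrative check of the N-number count quoted above.
# Assumed format rules, taken from the preceding paragraphs: one to five
# characters after "N"; the first character is a digit 1-9; any further
# digits are 0-9; the string may end in one or two letters drawn from the
# 24 letters of the alphabet excluding I and O.

FIRST_DIGIT = 9   # 1-9
OTHER_DIGIT = 10  # 0-9
LETTER = 24       # A-Z without I and O

total = 0

# One to five digits (N1 .. N99999)
for length in range(1, 6):
    total += FIRST_DIGIT * OTHER_DIGIT ** (length - 1)

# One to four digits followed by one letter (N1A .. N9999Z)
for digits in range(1, 5):
    total += FIRST_DIGIT * OTHER_DIGIT ** (digits - 1) * LETTER

# One to three digits followed by two letters (N1AA .. N999ZZ)
for digits in range(1, 4):
    total += FIRST_DIGIT * OTHER_DIGIT ** (digits - 1) * LETTER ** 2

print(total)  # 915399
```

The three patterns contribute 99,999, 239,976 and 575,424 registrations respectively, which sum to the 915,399 quoted above.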
An older aircraft (registered before 31 December 1948) may have a second letter in its identifier, identifying the category of aircraft. This additional letter is not actually part of the aircraft identification (e.g. NC12345 is the same registration as N12345). Aircraft category letters have not been included on any registration numbers issued since 1 January 1949, but they still appear on antique aircraft for authenticity purposes. The categories were:
For example, N-X-211, the Ryan NYP aircraft flown by Charles Lindbergh as the Spirit of St. Louis, was registered in the experimental category.
There is a unique overlap in the United States with aircraft having a single number followed by two letters and radio call signs issued by the Federal Communications Commission to Amateur Radio operators holding the Amateur Extra class license. For example, N4YZ is, on the one hand, a Cessna 206 registered to a private individual in California, while N4YZ is also issued to an Amateur Radio operator in North Carolina.
The impact of decolonisation and independence on aircraft registration schemes has varied from place to place. Most countries, upon independence, have had a new allocation granted; in most cases this comes from the new country 's own ITU allocation, but it is also not uncommon for the new country to be allocated a subset of its former colonial power 's allocation. For example, after partition in 1947, India retained the VT designation it had received as part of the British Empire 's Vx series allocation, while Pakistan adopted the AP designation from the newly allocated ITU callsigns APA - ASZ.
When this happens it is usually the case that aircraft will be re-registered into the new series retaining as much of the suffix as is possible. For example, when in 1929 the British Dominions at the time established their own aircraft registers, marks were reallocated as follows:
Two oddities created by this reallocation process are the current formats used by the Special Administrative Regions of the People 's Republic of China, Hong Kong and Macau, which were returned to PRC control by Britain in 1997 and by Portugal in 1999 respectively. Hong Kong 's prefix of VR - H and Macau 's of CS - M, both subdivisions of their colonial powers ' allocations, were replaced by China 's B - prefix without the registration mark being extended, leaving aircraft from both SARs with registration marks of only four characters, as opposed to the norm of five.
|
who won the war between germany and poland | Invasion of Poland - wikipedia
Result: Decisive German and Soviet victory.
Commanders included Ferdinand Čatloš (Army Bernolák) on the Slovak side.
Strength of the invading forces: Germany: 60 divisions, 6 brigades, 9,000 guns, 2,750 tanks, 2,315 aircraft; Slovakia: 3 divisions; Soviet Union (joined on 17 September): 33 + divisions, 11 + brigades, 4,959 guns, 4,736 tanks, 3,300 aircraft.
Polish strength: 39 divisions (some of them were never fully mobilized and concentrated), 16 brigades, 4,300 guns, 880 tanks, 400 aircraft.
Casualties: Germany: 16,343 killed, 3,500 missing, 30,300 wounded; Slovakia: 37 killed, 11 missing, 114 wounded; Soviet Union: 1,475 killed or missing, 2,383 wounded (or 5,327 killed, missing and wounded).
The Invasion of Poland, known in Poland as the September Campaign (Kampania wrześniowa) or the 1939 Defensive War (Wojna obronna 1939 roku), and in Germany as the Poland Campaign (Polenfeldzug) or Fall Weiss ("Case White ''), was a joint invasion of Poland by Nazi Germany, the Soviet Union, the Free City of Danzig, and a small Slovak contingent, that marked the beginning of World War II. The German invasion began on 1 September 1939, one week after the signing of the Molotov -- Ribbentrop Pact, while the Soviet invasion commenced on 17 September following the Molotov - Tōgō agreement that terminated the Russian and Japanese hostilities in the east on 16 September. The campaign ended on 6 October with Germany and the Soviet Union dividing and annexing the whole of Poland under the terms of the German -- Soviet Frontier Treaty.
German forces invaded Poland from the north, south, and west the morning after the Gleiwitz incident. As the Wehrmacht advanced, Polish forces withdrew from their forward bases of operation close to the Polish -- German border to more established lines of defence to the east. After the mid-September Polish defeat in the Battle of the Bzura, the Germans gained an undisputed advantage. Polish forces then withdrew to the southeast where they prepared for a long defence of the Romanian Bridgehead and awaited expected support and relief from France and the United Kingdom. While those two countries had pacts with Poland and had declared war on Germany on 3 September, in the end their aid to Poland was very limited.
The Soviet Red Army 's invasion of Eastern Poland on 17 September, in accordance with a secret protocol of the Molotov -- Ribbentrop Pact, rendered the Polish plan of defence obsolete. Facing a second front, the Polish government concluded the defence of the Romanian Bridgehead was no longer feasible and ordered an emergency evacuation of all troops to neutral Romania. On 6 October, following the Polish defeat at the Battle of Kock, German and Soviet forces gained full control over Poland. The success of the invasion marked the end of the Second Polish Republic, though Poland never formally surrendered.
On 8 October, after an initial period of military administration, Germany directly annexed western Poland and the former Free City of Danzig and placed the remaining block of territory under the administration of the newly established General Government. The Soviet Union incorporated its newly acquired areas into its constituent Belarusian and Ukrainian republics, and immediately started a campaign of sovietization. In the aftermath of the invasion, a collective of underground resistance organizations formed the Polish Underground State within the territory of the former Polish state. Many of the military exiles that managed to escape Poland subsequently joined the Polish Armed Forces in the West, an armed force loyal to the Polish government - in - exile.
On 30 January 1933, the Nazi Party, under its leader Adolf Hitler, came to power in Germany. While the Weimar Republic had long sought to annex territories belonging to Poland, it was Hitler 's own idea, and not a realization of Weimar plans, to invade and partition Poland, annex Bohemia and Austria to Germany, and create satellite or puppet states economically subordinate to Germany. As part of this long - term policy, Hitler at first pursued a policy of rapprochement with Poland, trying to improve opinion in Germany, culminating in the German -- Polish Non-Aggression Pact of 1934. Earlier, Hitler 's foreign policy worked to weaken ties between Poland and France, and attempted to manoeuvre Poland into the Anti-Comintern Pact, forming a cooperative front against the Soviet Union. Poland would be granted territory to its northeast in Ukraine and Belarus if it agreed to wage war against the Soviet Union, but the concessions the Poles were expected to make meant that their homeland would become largely dependent on Germany, functioning as little more than a client state. The Poles feared that their independence would eventually be threatened altogether. The population of the Free City of Danzig was strongly in favour of annexation by Germany, as were many of the ethnic German inhabitants of the Polish territory that separated the German exclave of East Prussia from the rest of the Reich. The so - called Polish Corridor constituted land long disputed by Poland and Germany and inhabited by a Polish majority; it had become a part of Poland after the Treaty of Versailles. Many Germans also wanted the city of Danzig and its environs (together the Free City of Danzig), a port city with a German majority, to be reincorporated into Germany; Danzig had been separated from Germany after Versailles and made into the nominally independent Free City. Hitler sought to use this as a reason for war, aiming to reverse these territorial losses, and on many occasions appealed to German nationalism, promising to "liberate" the German minority still in the Corridor, as well as Danzig.
Germany portrayed the invasion as a defensive action, with Hitler proclaiming that Poland had attacked Germany and that "Germans in Poland are persecuted with a bloody terror and are driven from their homes. The series of border violations, which are unbearable to a great power, prove that the Poles no longer are willing to respect the German frontier."
Poland had participated with Germany in the partition of Czechoslovakia that followed the Munich Agreement, although it was not a party to the agreement. Poland coerced Czechoslovakia into surrendering the region of Český Těšín by issuing an ultimatum to that effect on 30 September 1938, which Czechoslovakia accepted on 1 October. This region had a Polish majority and had been disputed between Czechoslovakia and Poland in the aftermath of World War I. The Polish annexation of Slovak territory (several villages in the regions of Čadca, Orava and Spiš) later served as the justification for the Slovak state to join the German invasion.
By 1937, Germany began to increase its demands for Danzig, while proposing that an extraterritorial roadway, part of the Reichsautobahn system, be built in order to connect East Prussia with Germany proper, running through the Polish Corridor. Poland rejected this proposal, fearing that after accepting these demands, it would become increasingly subject to the will of Germany and eventually lose its independence as the Czechs had. Polish leaders also distrusted Hitler. Furthermore, Germany 's collaboration with anti-Polish Ukrainian nationalists from the Organization of Ukrainian Nationalists, which was seen as an effort to isolate and weaken Poland, weakened Hitler 's credibility from the Polish point of view. The British were also wary of Germany 's increasing strength and assertiveness threatening their balance of power strategy. On 31 March 1939, Poland formed a military alliance with the United Kingdom and France, believing that Polish independence and territorial integrity would be defended with their support if it were to be threatened by Germany. On the other hand, British Prime Minister Neville Chamberlain and his Foreign Secretary, Lord Halifax, still hoped to strike a deal with Hitler regarding Danzig (and possibly the Polish Corridor). Chamberlain and his supporters believed war could be avoided and hoped Germany would agree to leave the rest of Poland alone. German hegemony over Central Europe was also at stake. In private, Hitler said in May that the issue for him was not Danzig but the pursuit of Lebensraum for Germany.
With tensions mounting, Germany turned to aggressive diplomacy. On 28 April 1939, Hitler unilaterally withdrew from both the German - Polish Non-Aggression Pact of 1934 and the London Naval Agreement of 1935. Talks over Danzig and the Corridor broke down and months passed without diplomatic interaction between Germany and Poland. During this interim period, the Germans learned that France and Britain had failed to secure an alliance with the Soviet Union against Germany, and that the Soviet Union was interested in an alliance with Germany against Poland. Hitler had already issued orders to prepare for a possible "solution of the Polish problem by military means '' through the Case White scenario.
In May 1939, in a statement to his generals while they were in the midst of planning the invasion of Poland, Hitler made it clear that the invasion would not come without resistance as it had in Czechoslovakia:
With minor exceptions German national unification has been achieved. Further successes can not be achieved without bloodshed. Poland will always be on the side of our adversaries... Danzig is not the objective. It is a matter of expanding our living space in the east, of making our food supply secure, and solving the problem of the Baltic states. To provide sufficient food you must have sparsely settled areas. There is therefore no question of sparing Poland, and the decision remains to attack Poland at the first opportunity. We can not expect a repetition of Czechoslovakia. There will be fighting.
On August 22, just over a week before the onset of war, Hitler delivered a speech to his military commanders at the Obersalzberg:
The object of the war is... physically to destroy the enemy. That is why I have prepared, for the moment only in the East, my ' Death 's Head ' formations with orders to kill without pity or mercy all men, women, and children of Polish descent or language. Only in this way can we obtain the living space we need.
With the surprise signing of the Molotov -- Ribbentrop Pact on 23 August, the result of secret Nazi - Soviet talks held in Moscow, Germany neutralized the possibility of Soviet opposition to a campaign against Poland and war became imminent. In fact, the Soviets agreed not to aid France or the UK in the event of their going to war with Germany over Poland and, in a secret protocol of the pact, the Germans and the Soviets agreed to divide Eastern Europe, including Poland, into two spheres of influence; the western 1⁄3 of the country was to go to Germany and the eastern 2⁄3 to the Soviet Union.
The German assault was originally scheduled to begin at 04: 00 on 26 August. However, on 25 August, the Polish - British Common Defense Pact was signed as an annex to the Franco - Polish Military Alliance. In this accord, Britain committed itself to the defence of Poland, guaranteeing to preserve Polish independence. At the same time, the British and the Poles were hinting to Berlin that they were willing to resume discussions -- not at all how Hitler hoped to frame the conflict. Thus, he wavered and postponed his attack until 1 September, managing to in effect halt the entire invasion "in mid-leap ''.
However, there was one exception: on the night of 25 -- 26 August, a German sabotage group which had not heard anything about a delay of the invasion made an attack on the Jablunkov Pass and Mosty railway station in Silesia. On the morning of 26 August, this group was repelled by Polish troops. The German side described all this as an incident "caused by an insane individual" (see Jabłonków Incident).
On 26 August, Hitler tried to dissuade the British and the French from interfering in the upcoming conflict, even pledging that the Wehrmacht forces would be made available to Britain 's empire in the future. The negotiations convinced Hitler that there was little chance the Western Allies would declare war on Germany, and even if they did, because of the lack of "territorial guarantees '' to Poland, they would be willing to negotiate a compromise favourable to Germany after its conquest of Poland. Meanwhile, the increased number of overflights by high - altitude reconnaissance aircraft and cross-border troop movements signaled that war was imminent.
On 29 August, prompted by the British, Germany issued one last diplomatic offer, with Fall Weiss yet to be rescheduled. That evening, the German government responded in a communication that it aimed not only for the restoration of Danzig but also the Polish Corridor (which had not previously been part of Hitler 's demands) in addition to the safeguarding of the German minority in Poland. It said that they were willing to commence negotiations, but indicated that a Polish representative with the power to sign an agreement had to arrive in Berlin the next day while in the meantime it would draw up a set of proposals. The British Cabinet was pleased that negotiations had been agreed to but, mindful of how Emil Hácha had been forced to sign his country away under similar circumstances just months earlier, regarded the requirement for an immediate arrival of a Polish representative with full signing powers as an unacceptable ultimatum. On the night of 30 / 31 August, German Foreign Minister Joachim von Ribbentrop read a 16 - point German proposal to the British ambassador. When the ambassador requested a copy of the proposals for transmission to the Polish government, Ribbentrop refused, on the grounds that the requested Polish representative had failed to arrive by midnight. When Polish Ambassador Lipski went to see Ribbentrop later on 31 August to indicate that Poland was favorably disposed to negotiations, he announced that he did not have the full power to sign, and Ribbentrop dismissed him. It was then broadcast that Poland had rejected Germany 's offer, and negotiations with Poland came to an end. Hitler issued orders for the invasion to commence soon afterwards.
On 30 August, the Polish Navy sent its destroyer flotilla to Britain, executing the Peking Plan. On the same day, Marshal of Poland Edward Rydz - Śmigły announced the mobilization of Polish troops. However, he was pressured into revoking the order by the French, who apparently still hoped for a diplomatic settlement, failing to realize that the Germans were fully mobilized and concentrated at the Polish border. During the night of 31 August, the Gleiwitz incident, a false flag attack on the radio station, was staged near the border city of Gleiwitz in Upper Silesia by German units posing as Polish troops, as part of the wider Operation Himmler. On 31 August 1939, Hitler ordered hostilities against Poland to start at 4: 45 the next morning. Because of the earlier stoppage, Poland managed to mobilize only 70 % of its planned forces, and many units were still forming or moving to their designated frontline positions.
Germany had a substantial numeric advantage over Poland and had developed a significant military before the conflict. The Heer (army) had some 2,400 tanks organized into six panzer divisions, utilizing a new operational doctrine. It held that these divisions should act in coordination with other elements of the military, punching holes in the enemy line and isolating selected units, which would be encircled and destroyed. This would be followed up by less - mobile mechanized infantry and foot soldiers. The Luftwaffe (air force) provided both tactical and strategic air power, particularly dive bombers that disrupted lines of supply and communications. Together, the new methods were nicknamed "Blitzkrieg '' (lightning war). While historian Basil Liddell Hart claimed "Poland was a full demonstration of the Blitzkrieg theory '', some other historians disagree.
Aircraft played a major role in the campaign. Bombers also attacked cities, causing huge losses amongst the civilian population through terror bombing and strafing. The Luftwaffe forces consisted of 1,180 fighters, 290 Ju 87 Stuka dive bombers, 1,100 conventional bombers (mainly Heinkel He 111s and Dornier Do 17s), and an assortment of 550 transport and 350 reconnaissance aircraft. In total, Germany had close to 4,000 aircraft, most of them modern. A force of 2,315 aircraft was assigned to Weiss. Due to its earlier participation in the Spanish Civil War, the Luftwaffe was probably the most experienced, best - trained and best - equipped air force in the world in 1939.
Between 1936 and 1939, Poland invested heavily in the Central Industrial Region. Preparations for a defensive war with Germany were ongoing for many years, but most plans assumed fighting would not begin before 1942. To raise funds for industrial development, Poland sold much of the modern equipment it produced. In 1936, a National Defence Fund was set up to collect funds necessary for strengthening the Polish Armed forces. The Polish Army had approximately a million soldiers, but less than half were mobilized by 1 September. Latecomers sustained significant casualties when public transport became targets of the Luftwaffe. The Polish military had fewer armored forces than the Germans, and these units, dispersed within the infantry, were unable to effectively engage the Germans.
Experiences in the Polish - Soviet War shaped Polish Army organizational and operational doctrine. Unlike the trench warfare of World War I, the Polish - Soviet War was a conflict in which the cavalry 's mobility played a decisive role. Poland acknowledged the benefits of mobility but was unable to invest heavily in many of the expensive, unproven inventions since then. In spite of this, Polish cavalry brigades were used as a mobile mounted infantry and had some successes against both German infantry and cavalry.
The Polish Air Force (Lotnictwo Wojskowe) was at a severe disadvantage against the German Luftwaffe, although it was not destroyed on the ground early on as is commonly believed. The Polish Air Force lacked modern fighters, but its pilots were among the world 's best trained, as proven a year later in the Battle of Britain, in which the Poles played a major part.
Overall, the Germans enjoyed numerical and qualitative aircraft superiority. Poland had only about 600 aircraft, of which only the PZL.37 Łoś bombers were modern and comparable to their German counterparts. The Polish Air Force had roughly 185 PZL P.11 and some 95 PZL P.7 fighters, 175 PZL.23 Karaś Bs and 35 Karaś As, and by September over 100 PZL.37s had been produced. However, for the September Campaign only some 70 % of those aircraft were mobilized, and only 36 PZL.37s were deployed. All those aircraft were of indigenous Polish design, with the bombers being more modern than the fighters, in line with the Ludomił Rayski air force expansion plan, which relied on a strong bomber force. The Polish fighters were older than their German counterparts; the PZL P.11 fighter, produced in the early 1930s, had a top speed of only 365 km / h (227 mph), far less than that of German bombers. To compensate, the pilots relied on its manoeuvrability and high diving speed.
The tank force consisted of two armored brigades, four independent tank battalions and some 30 companies of TKS tankettes attached to infantry divisions and cavalry brigades. A standard tank of the Polish Army during the invasion of 1939 was the 7TP light tank. It was the first tank in the world to be equipped with a diesel engine and 360 ° Gundlach periscope. The 7TP was significantly better armed than its most common opponents, the German Panzer I and II, but only 140 tanks were produced between 1935 and the outbreak of the war. Poland had also a few relatively modern imported designs, such as 50 Renault R35 tanks and 38 Vickers E tanks.
The Polish Navy was a small fleet of destroyers, submarines and smaller support vessels. Most Polish surface units followed Operation Peking, leaving Polish ports on 20 August and escaping by way of the North Sea to join with the British Royal Navy. Submarine forces participated in Operation Worek, with the goal of engaging and damaging German shipping in the Baltic Sea, but they had much less success. In addition, many merchant marine ships joined the British merchant fleet and took part in wartime convoys.
The September Campaign was devised by General Franz Halder, chief of the general staff, and directed by General Walther von Brauchitsch, the commander in chief of the upcoming campaign. It called for the start of hostilities before a declaration of war, and pursued a doctrine of mass encirclement and destruction of enemy forces. The infantry -- far from completely mechanized but fitted with fast - moving artillery and logistic support -- was to be supported by Panzers and small numbers of truck - mounted infantry (the Schützen regiments, forerunners of the panzergrenadiers) to assist the rapid movement of troops and concentrate on localized parts of the enemy front, eventually isolating segments of the enemy, surrounding, and destroying them. The pre-war "armored idea '' (which an American journalist in 1939 dubbed Blitzkrieg) -- which was advocated by some generals, including Heinz Guderian -- would have had the armor punching holes in the enemy 's front and ranging deep into rear areas; in actuality, the campaign in Poland would be fought along more traditional lines. This stemmed from conservatism on the part of the German high command, who mainly restricted the role of armor and mechanized forces to supporting the conventional infantry divisions.
Poland 's terrain was well suited for mobile operations when the weather cooperated: the country consisted of flat plains with long frontiers totalling almost 5,600 km (3,500 mi), and Poland 's long border with Germany on the west and north, facing East Prussia, extended 2,000 km (1,200 mi). That frontier had been lengthened by another 300 km (190 mi) on the southern side in the aftermath of the Munich Agreement of 1938. The German incorporation of Bohemia and Moravia and the creation of the German puppet state of Slovakia meant that Poland 's southern flank was also exposed.
Hitler demanded that Poland be conquered in six weeks, but German planners thought that it would require three months. They intended to fully exploit their long border with the great enveloping manoeuvre of Fall Weiss. German units were to invade Poland from three directions: the main attack eastwards across the western Polish border from Germany proper; a supporting attack from East Prussia in the north; and a tertiary attack by German and allied Slovak units from Slovak territory in the south.
All three assaults were to converge on Warsaw, while the main Polish army was to be encircled and destroyed west of the Vistula. Fall Weiss was initiated on 1 September 1939, and was the first operation of Second World War in Europe.
The Polish determination to deploy forces directly at the German - Polish border, prompted by the Polish - British Common Defense Pact, shaped the country 's defence plan, "Plan West ''. Poland 's most valuable natural resources, industry and population were located along the western border in Eastern Upper Silesia. Polish policy centred on their protection especially since many politicians feared that if Poland were to retreat from the regions disputed by Germany, Britain and France would sign a separate peace treaty with Germany similar to the Munich Agreement of 1938. The fact that none of Poland 's allies had specifically guaranteed Polish borders or territorial integrity did n't help in easing Polish concerns. For these reasons, the Polish government disregarded French advice to deploy the bulk of its forces behind natural barriers such as the Vistula and San rivers, even though some Polish generals supported it as a better strategy. The West Plan did permit the Polish armies to retreat inside the country, but it was supposed to be a slow retreat behind prepared positions and was intended to give the armed forces time to complete its mobilization and execute a general counteroffensive with the support of the Western Allies.
The Polish General Staff had not begun elaborating the "West '' defence plan until 4 March 1939. It was assumed that the Polish Army, fighting in the initial phase of the war alone, would be compelled to defend the western regions of the country. The plan of operations took into account, first of all, the numerical and material superiority of the enemy and, consequently, assumed the defensive character of Polish operations. The Polish intentions were: defence of the western regions judged as indispensable for waging the war, taking advantage of the propitious conditions for counterattacks by reserve units, and avoidance of being smashed before the beginning of Franco / British operations in Western Europe. The operational plan had not been elaborated in detail and concerned only the first stage of operations.
The British and French estimated that Poland would be able to defend itself for two to three months, while Poland estimated it could do so for at least six months. While Poland drafted its estimates based upon the expectation that the Western Allies would honor their treaty obligations and quickly start an offensive of their own, the French and British expected the war to develop into trench warfare much like World War I. The Polish government was not notified of this strategy and based all of its defence plans on promises of quick relief by their Western allies.
Polish forces were stretched thinly along the Polish - German border and lacked compact defence lines and good defence positions along disadvantageous terrain. This strategy also left supply lines poorly protected. One - third of Poland 's forces were massed in or near the Polish Corridor, making them vulnerable to a double envelopment from East Prussia and the west. Another third were concentrated in the north - central part of the country, between the major cities of Łódź and Warsaw. The forward positioning of Polish forces vastly increased the difficulty of carrying out strategic maneuvers, compounded by inadequate mobility, as Polish units often lacked the ability to retreat from their defensive positions as they were being overrun by more mobile German mechanized formations.
As the prospect of conflict increased, the British government pressed Marshal Edward Rydz - Śmigły to evacuate the most modern elements of the Polish Navy from the Baltic Sea. In the event of war the Polish military leaders realized that the ships which remained in the Baltic were likely to be quickly sunk by the Germans. Furthermore, the Danish straits were well within operating range of the German Kriegsmarine and Luftwaffe, so there was little chance of an evacuation plan succeeding if implemented after hostilities began. Four days after the signing of the Polish - British Common Defense Pact, three destroyers of the Polish Navy executed the Peking Plan and consequently evacuated to Great Britain.
Although the Polish military had prepared for conflict, the civilian population remained largely unprepared. Polish pre-war propaganda emphasized that any German invasion would be easily repelled. Consequently, Polish defeats during the German invasion came as a shock to the civilian population. Lacking training for such a disaster, the civilian population panicked and retreated east, spreading chaos, lowering troop morale and making road transportation for Polish troops very difficult.
Following several German - staged incidents (like the Gleiwitz incident, a part of Operation Himmler), which German propaganda used as a pretext to claim that German forces were acting in self - defence, the first regular act of war took place on 1 September 1939, at 04: 40, when the Luftwaffe attacked the Polish town of Wieluń, destroying 75 % of the city and killing close to 1,200 people, most of them civilians. Five minutes later, the old German pre-dreadnought battleship Schleswig - Holstein opened fire on the Polish military transit depot at Westerplatte in the Free City of Danzig on the Baltic Sea. At 08: 00, German troops -- still without a formal declaration of war issued -- attacked near the Polish village of Mokra. The Battle of the Border had begun. Later that day, the Germans attacked on Poland 's western, southern and northern borders, while German aircraft began raids on Polish cities. The main axis of attack led eastwards from Germany through the western Polish border. Supporting attacks came from East Prussia in the north, and a joint German - Slovak tertiary attack by units (Field Army "Bernolák '') from the German - allied Slovak Republic in the south. All three assaults converged on the Polish capital of Warsaw.
France and the UK declared war on Germany on 3 September, but failed to provide any meaningful support. The German - French border saw only a few minor skirmishes, although the majority of German forces, including 85 % of their armoured forces, were engaged in Poland. Despite some Polish successes in minor border battles, German technical, operational and numerical superiority forced the Polish armies to retreat from the borders towards Warsaw and Lwów. The Luftwaffe gained air superiority early in the campaign; by destroying communications, it increased the pace of the advance, overrunning Polish airstrips and early warning sites and causing logistical problems for the Poles. Many Polish Air Force units ran low on supplies, and 98 of their number withdrew into then - neutral Romania. The Polish initial strength of 400 aircraft was reduced to just 54 by 14 September, and air opposition virtually ceased.
By 3 September, when Günther von Kluge in the north had reached the Vistula River (some 10 km (6.2 mi) from the German border at that time) and Georg von Küchler was approaching the Narew River, Walther von Reichenau 's armor was already beyond the Warta river; two days later, his left wing was well to the rear of Łódź and his right wing at the town of Kielce. On 7 September the defenders of the Polish capital had fallen back to a 48 km (30 mi) line paralleling the Vistula River where they rallied against German tank thrusts. The defensive line ran between Płońsk and Pułtusk, northwest and northeast of Warsaw, respectively. The right wing of the Poles had been hammered back from Ciechanów about 40 km (25 mi) northwest of Pułtusk pivoting on Płońsk. At one stage in the struggle the Poles were driven from Pułtusk and the Germans threatened to turn the Polish flank and thrust on to the Vistula and Warsaw. Pułtusk, however, was regained in the face of withering German fire. A considerable number of German tanks were captured after a German attack had pierced the line but the Polish defenders outflanked them. By 8 September, one of Reichenau 's armored corps -- having advanced 225 km (140 mi) in the first week of the campaign -- reached the outskirts of Warsaw. Light divisions on Reichenau 's right were on the Vistula between Warsaw and the town of Sandomierz by 9 September while List -- in the south -- was on the San River north and south of the town of Przemyśl. At the same time, Guderian led his 3rd Army tanks across the Narew, attacking the line of the Bug River, already encircling Warsaw. All the German armies made progress in fulfilling their parts of the Fall Weiss plan. The Polish armies were splitting up into uncoordinated fragments, some of which were retreating while others were launching disjointed attacks on the nearest German columns.
Polish forces abandoned the regions of Pomerelia (the Polish Corridor), Greater Poland and Polish Upper Silesia in the first week. The Polish plan for border defence was proven a dismal failure. The German advance as a whole was not slowed. On 10 September, the Polish commander - in - chief -- Marshal Edward Rydz - Śmigły -- ordered a general retreat to the southeast, towards the so - called Romanian Bridgehead. Meanwhile, the Germans were tightening their encirclement of the Polish forces west of the Vistula (in the Łódź area and, still farther west, around Poznań) and also penetrating deeply into eastern Poland. Warsaw -- under heavy aerial bombardment since the first hours of the war -- was attacked on 9 September and was put under siege on 13 September. Around that time, advanced German forces also reached the city of Lwów, a major metropolis in eastern Poland. 1,150 German aircraft bombed Warsaw on 24 September.
The Polish defensive plan called for a strategy of encirclement: the Germans were to be allowed to advance between two Polish Army groups in the line between Berlin and Warsaw - Lodz, at which point Armia Prusy would move in and repulse the German spearhead, trapping it. For this to happen, Armia Prusy needed to be fully mobilized by 3 September. However, Polish military planners failed to foresee the speed of the German advance and had assumed that Armia Prusy would not need to be fully mobilized until 16 September.
The largest battle during this campaign -- the Battle of Bzura -- took place near the Bzura river west of Warsaw and lasted 9 -- 19 September. Polish armies Poznań and Pomorze, retreating from the border area of the Polish Corridor, attacked the flank of the advancing German 8th Army, but the counterattack failed after initial success. After the defeat, Poland lost its ability to take the initiative and counterattack on a large scale. German air power was instrumental during the battle. The Luftwaffe 's offensive broke what remained of Polish resistance in an "awesome demonstration of air power ''. The Luftwaffe quickly destroyed the bridges across the Bzura River. Afterward, the Polish forces were trapped out in the open, and were attacked by wave after wave of Stukas, dropping 50 kg (110 lb) "light bombs '' which caused huge numbers of casualties. The Polish anti-aircraft batteries ran out of ammunition and retreated to the forests, but were then "smoked out '' by the Heinkel He 111 and Dornier Do 17s dropping 100 kg (220 lb) incendiaries. The Luftwaffe left the army with the task of mopping up survivors. The Stukageschwaders alone dropped 388 t (428 short tons) of bombs during this battle.
The Polish government (of President Ignacy Mościcki) and the high command (of Marshal Edward Rydz - Śmigły) left Warsaw in the first days of the campaign and headed southeast, reaching Lublin on 6 September. From there, it moved on 9 September to Kremenez, and on 13 September to Zaleshiki on the Romanian border. Rydz - Śmigły ordered the Polish forces to retreat in the same direction, behind the Vistula and San rivers, beginning the preparations for the defence of the Romanian Bridgehead area.
From the beginning, the German government repeatedly asked Vyacheslav Molotov whether the Soviet Union would keep to its side of the partition bargain. The Soviet forces were holding fast along their designated invasion points pending finalization of the five - month - long undeclared war with Japan in the Far East. On 15 September 1939, Foreign Minister Molotov and Ambassador Shigenori Tōgō completed their agreement ending the conflict, and the Nomonhan cease - fire went into effect on 16 September 1939. Now cleared of any "second front '' threat from the Japanese, Soviet premier Joseph Stalin ordered his forces into Poland on 17 September. It was agreed that the USSR would relinquish its interest in the territories between the new border and Warsaw in exchange for inclusion of Lithuania in the Soviet "zone of interest ''.
By 17 September, the Polish defence was already broken and the only hope was to retreat and reorganize along the Romanian Bridgehead. However, these plans were rendered obsolete nearly overnight, when the over 800,000 - strong Soviet Red Army entered and created the Belarusian and Ukrainian fronts after invading the eastern regions of Poland in violation of the Riga Peace Treaty, the Soviet - Polish Non-Aggression Pact, and other international treaties, both bilateral and multilateral. Soviet diplomacy falsely claimed that the Red Army was "protecting the Ukrainian and Belarusian minorities of eastern Poland since the Polish government had abandoned the country and the Polish state ceased to exist ''.
Polish border defence forces in the east -- known as the Korpus Ochrony Pogranicza -- consisted of about 25 battalions. Edward Rydz - Śmigły ordered them to fall back and not engage the Soviets. This, however, did not prevent some clashes and small battles, such as the Battle of Grodno, as soldiers and local population attempted to defend the city. The Soviets executed numerous Polish officers, including prisoners of war like General Józef Olszyna - Wilczyński. The Organization of Ukrainian Nationalists rose against the Poles, and communist partisans organized local revolts, robbing and killing civilians. Those movements were quickly disciplined by the NKVD. The Soviet invasion was one of the decisive factors that convinced the Polish government that the war in Poland was lost. Before the Soviet attack from the east, the Polish military 's fall - back plan had called for long - term defence against Germany in the south - eastern part of Poland, while awaiting relief from a Western Allies attack on Germany 's western border. However, the Polish government refused to surrender or negotiate a peace with Germany. Instead, it ordered all units to evacuate Poland and reorganize in France.
Meanwhile, Polish forces tried to move towards the Romanian Bridgehead area, still actively resisting the German invasion. From 17 -- 20 September, Polish armies Kraków and Lublin were crippled at the Battle of Tomaszów Lubelski, the second - largest battle of the campaign. The city of Lwów capitulated on 22 September because of Soviet intervention; the city had been attacked by the Germans over a week earlier, and in the middle of the siege, the German troops handed operations over to their Soviet allies. Despite a series of intensifying German attacks, Warsaw -- defended by quickly reorganized retreating units, civilian volunteers and militia -- held out until 28 September. The Modlin Fortress north of Warsaw capitulated on 29 September after an intense 16 - day battle. Some isolated Polish garrisons managed to hold their positions long after being surrounded by German forces. Westerplatte enclave 's tiny garrison capitulated on 7 September and the Oksywie garrison held until 19 September; Hel Fortified Area was defended until 2 October. In the last week of September, Hitler made a speech in the city of Danzig in which he said:
Meantime, Russia felt moved, on its part, to march in for the protection of the interests of the White Russian and Ukrainian people in Poland. We realize now that in England and France this German and Russian co-operation is considered a terrible crime. An Englishman even wrote that it is perfidious -- well, the English ought to know. I believe England thinks this co-operation perfidious because the co-operation of democratic England with bolshevist Russia failed, while National Socialist Germany 's attempt with Soviet Russia succeeded.
Poland never will rise again in the form of the Versailles treaty. That is guaranteed not only by Germany, but also guaranteed by Russia. -- Adolf Hitler, 19 September 1939
Despite a Polish victory at the Battle of Szack, after which the Soviets executed all the officers and NCOs they had captured, the Red Army reached the line of the rivers Narew, Bug, Vistula and San by 28 September, in many cases meeting German units advancing from the other direction. Polish defenders on the Hel peninsula on the shore of the Baltic Sea held out until 2 October. The last operational unit of the Polish Army, General Franciszek Kleeberg 's Samodzielna Grupa Operacyjna "Polesie '', surrendered after the four - day Battle of Kock near Lublin on 6 October, marking the end of the September Campaign.
Hundreds of thousands of Polish civilians were killed during the September invasion of Poland and millions more were killed in the following years of German and Soviet occupation. The Polish Campaign was the first action by Adolf Hitler in his attempt to create Lebensraum (living space) for Germans. Nazi propaganda, which had worked relentlessly to convince the German people that Jews and Slavs were Untermenschen (subhumans), was one of the factors behind the German brutality directed at civilians.
Starting from the first day of invasion, the German air force (the Luftwaffe) attacked civilian targets and columns of refugees along the roads to terrorize the Polish people, disrupt communications, and target Polish morale. The Luftwaffe killed 6,000 -- 7,000 Polish civilians during the bombing of Warsaw.
The German invasion saw atrocities committed against Polish men, women, and children. The German forces (both SS and the regular Wehrmacht) murdered tens of thousands of Polish civilians; the 1st SS Panzer Division Leibstandarte SS Adolf Hitler, for example, was notorious throughout the campaign for burning villages and committing atrocities in numerous Polish towns, including massacres in Błonie, Złoczew, Bolesławiec, Torzeniec, Goworowo, Mława, and Włocławek.
During Operation Tannenberg, an ethnic cleansing campaign organized by multiple elements of the German government, tens of thousands of Polish civilians were shot at 760 mass execution sites by the Einsatzgruppen.
Altogether, the civilian losses of the Polish population amounted to about 150,000 -- 200,000. Roughly 1,250 German civilians were also killed during the invasion (and an additional 2,000 died fighting Polish troops as members of ethnic German militias such as the Volksdeutscher Selbstschutz, which constituted a fifth column during the invasion).
Poland was divided between Germany and the Soviet Union. Slovakia gained back those territories taken by Poland in autumn 1938. Lithuania received the city of Vilnius and its environs on 28 October 1939 from the Soviet Union. On 8 and 13 September 1939, the German military districts of "Posen '' (Poznan) -- commanded by General Alfred von Vollard - Bockelberg (de) -- and "Westpreußen '' (West Prussia) -- commanded by General Walter Heitz -- were established in conquered Greater Poland and Pomerelia, respectively. Based on laws of 21 May 1935 and 1 June 1938, the German Wehrmacht delegated civil administrative powers to "Chiefs of Civil Administration '' (Chefs der Zivilverwaltung, CdZ). German dictator Adolf Hitler appointed Arthur Greiser to become the CdZ of the Posen military district, and Danzig 's Gauleiter Albert Forster to become the CdZ of the West Prussian military district. On 3 October, the military districts Lodz and Krakau (Krakow) were set up under command of Generalobersten (Colonel - Generals) Gerd von Rundstedt and Wilhelm List, and Hitler appointed Hans Frank and Arthur Seyss - Inquart as civil heads, respectively. At the same time, Frank was appointed "supreme chief administrator '' for all occupied territories. On 28 September, another secret German - Soviet protocol modified the arrangements of August: all of Lithuania was shifted to the Soviet sphere of influence; in exchange, the dividing line in Poland was moved in Germany 's favour, eastwards towards the Bug River. On 8 October, Germany formally annexed the western parts of Poland with Greiser and Forster as Reichsstatthalter, while the south - central parts were administered as the General Government led by Frank.
Even though water barriers separated most of the spheres of interest, the Soviet and German troops met on numerous occasions. The most remarkable event of this kind occurred at Brest - Litovsk on 22 September. The German 19th Panzer Corps -- commanded by General Heinz Guderian -- had occupied the city, which lay within the Soviet sphere of interest. When the Soviet 29th Tank Brigade -- commanded by S.M. Krivoshein -- approached, the commanders negotiated that the German troops would withdraw and the Soviet troops would enter the city, with the two sides saluting each other. At Brest - Litovsk, Soviet and German commanders held a joint victory parade before German forces withdrew westward behind a new demarcation line. Just three days earlier, however, the parties had a more hostile encounter near Lwow (Lviv, Lemberg), when the German 137th Gebirgsjägerregiment (mountain infantry regiment) attacked a reconnaissance detachment of the Soviet 24th Tank Brigade; after a few casualties on both sides, the parties turned to negotiations. The German troops left the area, and the Red Army troops entered Lviv on 22 September.
The Molotov -- Ribbentrop pact and the invasion of Poland marked the beginning of a period during which the government of the Soviet Union increasingly tried to convince itself that the actions of Germany were reasonable, and were not developments to be worried about, despite evidence to the contrary. On 7 September 1939, just a few days after France and Britain joined the war against Germany, Stalin explained to a colleague that the war was to the advantage of the Soviet Union, as follows:
A war is on between two groups of capitalist countries... for the redivision of the world, for the domination of the world! We see nothing wrong in their having a good hard fight and weakening each other... Hitler, without understanding it or desiring it, is shaking and undermining the capitalist system... We can manoeuvre, pit one side against the other to set them fighting with each other as fiercely as possible... The annihilation of Poland would mean one fewer bourgeois fascist state to contend with! What would be the harm if as a result of the rout of Poland we were to extend the socialist system onto new territories and populations?
About 65,000 Polish troops were killed in the fighting, with 420,000 others being captured by the Germans and 240,000 more by the Soviets (for a total of 660,000 prisoners). Up to 120,000 Polish troops escaped to neutral Romania (through the Romanian Bridgehead and Hungary), and another 20,000 to Latvia and Lithuania, with the majority eventually making their way to France or Britain. Most of the Polish Navy succeeded in evacuating to Britain as well. German personnel losses were less than their enemies (c. 16,000 killed).
None of the parties to the conflict -- Germany, the Western Allies or the Soviet Union -- expected that the German invasion of Poland would lead to a war that would surpass World War I in its scale and cost. It would be months before Hitler would see the futility of his peace negotiation attempts with the United Kingdom and France, but the culmination of combined European and Pacific conflicts would result in what was truly a "world war ''. Thus, what was not seen by most politicians and generals in 1939 is clear from the historical perspective: The Polish September Campaign marked the beginning of a pan-European war, which combined with the Japanese invasion of China in 1937 and the Pacific War in 1941 to form the global conflict known as World War II.
The invasion of Poland led Britain and France to declare war on Germany on 3 September. However, they did little to affect the outcome of the September Campaign. No declaration of war was issued by Britain and France against the Soviet Union. This lack of direct help led many Poles to believe that they had been betrayed by their Western allies.
On 23 May 1939, Hitler explained to his officers that the object of the aggression was not Danzig, but the need to obtain German Lebensraum and details of this concept would be later formulated in the infamous Generalplan Ost. The invasion decimated urban residential areas, civilians soon became indistinguishable from combatants, and the forthcoming German occupation (both on the annexed territories and in the General Government) was one of the most brutal episodes of World War II, resulting in between 5.47 million and 5.67 million Polish deaths (about 20 % of the country 's total population, and over 90 % of its Jewish minority) -- including the mass murder of 3 million Polish citizens (mainly Jews as part of the final solution) in extermination camps like Auschwitz, in concentration camps, and in numerous ad hoc massacres, where civilians were rounded up, taken to a nearby forest, machine - gunned, and then buried, whether they were dead or not.
According to the Polish Institute of National Remembrance, Soviet occupation between 1939 and 1941 resulted in the death of 150,000 and the deportation of 320,000 Polish citizens; all who were deemed dangerous to the Soviet regime were subject to sovietization, forced resettlement, imprisonment in labor camps (the Gulags) or murder, as in the case of the Polish officers in the Katyn massacre.
From October 1939, Polish soldiers who had escaped imprisonment by the Soviets or the Nazis headed mainly for British and French territories. These places were considered safe because of the pre-war alliance between Great Britain, France and Poland. Not only did the government escape, but the national gold supply was also evacuated via Romania and brought to the West, notably to London and Ottawa. The approximately 75 tonnes (83 short tons) of gold was considered sufficient to field an army for the duration of the war.
From Lemberg to Bordeaux (' Von Lemberg bis Bordeaux '), written by Leo Leixner, a journalist and war correspondent, is a first - hand account of the battles that led to the fall of Poland, the Low Countries, and France. It includes a rare eye - witness description of the Battle of Węgierska Górka. In August 1939, Leixner joined the Wehrmacht as a war reporter, was promoted to sergeant, and in 1941 published his recollections. The book was originally issued by Franz Eher Nachfolger, the central publishing house of the Nazi Party.
There are several common misconceptions regarding the Polish September Campaign.
The Polish Army did not fight German tanks with horse - mounted cavalry wielding lances and swords. In 1939, only 10 % of the Polish army was made up of cavalry units. Polish cavalry never charged German tanks or entrenched infantry or artillery, but usually acted as mobile infantry (like dragoons) and reconnaissance units, executing cavalry charges only in rare situations against foot soldiers. Other armies (including the German and Soviet) also fielded and extensively used elite horse cavalry units at that time. The Polish cavalry consisted of eleven brigades which, as emphasized by its military doctrine, were equipped with anti-tank rifles ("UR '') and light artillery such as the highly effective Bofors 37 mm anti-tank gun. The myth originated from war correspondents ' reports similar to that of the Battle of Krojanty, where a Polish cavalry brigade was fired upon in ambush by hidden armored vehicles after it had mounted a successful sabre - charge against German infantry. There have also been cases when Polish cavalry, dashing between tanks while trying to break out of encirclement, gave the impression of an attack.
The Polish Air Force was not destroyed on the ground in the first days of the war. Though numerically inferior, it had been moved from air bases to small camouflaged airfields shortly before the war. Only some trainers and auxiliary aircraft were destroyed on the ground. The Polish Air Force, significantly outnumbered and with its fighters outmatched by more advanced German fighters, remained active until the second week of the campaign, inflicting significant damage on the Luftwaffe. The Luftwaffe lost 285 aircraft to all operational causes, with 279 more damaged, and the Poles lost 333 aircraft.
A common but false belief is that Poland offered little resistance and surrendered quickly. Germany sustained very heavy losses from the first few days of fighting: over the campaign, Poland cost the Germans 993 tanks and armored vehicles, of which 300 tanks were never recovered, as well as thousands of soldiers and 25 % of their air strength. As for duration, the September Campaign lasted about a week and a half less than the Battle of France in 1940, even though the Anglo - French forces were much closer to parity with the Germans in numerical strength and equipment. Furthermore, the Polish Army was preparing the Romanian Bridgehead, which would have prolonged the Polish defence, but the plan was cancelled by the Soviet invasion of Poland on 17 September 1939. Poland also never officially surrendered to the Germans. Under German occupation, guerilla units continued to fight, such as the one led by Henryk Dobrzański, along with the organised Armia Krajowa, other underground organisations, and forest partisans -- the Leśni.
It is often assumed that Blitzkrieg is the strategy that Germany first used in Poland. The ideas of Blitzkrieg and mobile warfare had already been used in Spain, China and Siberia. Many early post-war histories, such as Barrie Pitt 's in The Second World War (BPC Publishing 1966), attribute German victory to "enormous development in military technique which occurred between 1918 and 1940 '', and cite that "Germany, who translated (British inter-war) theories into action... called the result Blitzkrieg ''. That idea has been repudiated by some authors. Matthew Cooper writes:
Throughout the Polish Campaign, the employment of the mechanized units revealed the idea that they were intended solely to ease the advance and to support the activities of the infantry... Thus, any strategic exploitation of the armoured idea was still - born. The paralysis of command and the breakdown of morale were not made the ultimate aim of the... German ground and air forces, and were only incidental by - products of the traditional manoeuvers of rapid encirclement and of the supporting activities of the flying artillery of the Luftwaffe, both of which had as their purpose the physical destruction of the enemy troops. Such was the Vernichtungsgedanke of the Polish campaign.
Vernichtungsgedanke was a strategy dating back to Frederick the Great, and it was applied in the Polish Campaign, little changed from the French campaigns in 1870 or 1914. The use of tanks
... left much to be desired.... Fear of enemy action against the flanks of the advance, fear which was to prove so disastrous to German prospects in the west in 1940 and in the Soviet Union in 1941, was present from the beginning of the war.
John Ellis, writing in Brute Force, asserted that
... there is considerable justice in Matthew Cooper 's assertion that the panzer divisions were not given the kind of strategic mission that was to characterize authentic armoured blitzkrieg, and were almost always closely subordinated to the various mass infantry armies.
Zaloga and Madej, in The Polish Campaign 1939, also address the subject of mythical interpretations of Blitzkrieg and the importance of other arms in the campaign. Western accounts of the September campaign have stressed the shock value of the panzer and Stuka attacks; they have
... tended to underestimate the punishing effect of German artillery on Polish units. Mobile and available in significant quantity, artillery shattered as many units as any other branch of the Wehrmacht.
|
what are the primary and secondary lymphatic structures | Lymphatic system - wikipedia
The lymphatic system is part of the vascular system and an important part of the immune system, comprising a network of lymphatic vessels that carry a clear fluid called lymph (from Latin, lympha meaning "water '') directionally towards the heart. The lymphatic system was first described in the seventeenth century independently by Olaus Rudbeck and Thomas Bartholin. Unlike the circulatory system, the lymphatic system is not a closed system. The human circulatory system processes an average of 20 litres of blood per day through capillary filtration, which removes plasma while leaving the blood cells. Roughly 17 litres of the filtered plasma are reabsorbed directly into the blood vessels, while the remaining three litres remain in the interstitial fluid. One of the main functions of the lymph system is to provide an accessory return route to the blood for the surplus three litres.
The other main function is that of defense in the immune system. Lymph is very similar to blood plasma: it contains lymphocytes. It also contains waste products and cellular debris together with bacteria and proteins. Associated organs composed of lymphoid tissue are the sites of lymphocyte production. Lymphocytes are concentrated in the lymph nodes. The spleen and the thymus are also lymphoid organs of the immune system. The tonsils are lymphoid organs that are also associated with the digestive system. Lymphoid tissues contain lymphocytes, and also contain other types of cells for support. The system also includes all the structures dedicated to the circulation and production of lymphocytes (the primary cellular component of lymph), which also includes the bone marrow, and the lymphoid tissue associated with the digestive system.
The blood does not come into direct contact with the parenchymal cells and tissues in the body (except in case of an injury causing rupture of one or more blood vessels), but constituents of the blood first exit the microvascular exchange blood vessels to become interstitial fluid, which comes into contact with the parenchymal cells of the body. Lymph is the fluid that is formed when interstitial fluid enters the initial lymphatic vessels of the lymphatic system. The lymph is then moved along the lymphatic vessel network by either intrinsic contractions of the lymphatic passages or by extrinsic compression of the lymphatic vessels via external tissue forces (e.g., the contractions of skeletal muscles), or by lymph hearts in some animals. The organization of lymph nodes and drainage follows the organization of the body into external and internal regions; therefore, the lymphatic drainage of the head, limbs, and body cavity walls follows an external route, and the lymphatic drainage of the thorax, abdomen, and pelvic cavities follows an internal route. Eventually, the lymph vessels empty into the lymphatic ducts, which drain into one of the two subclavian veins, near their junction with the internal jugular veins.
The lymphatic system consists of lymphatic organs, a conducting network of lymphatic vessels, and the circulating lymph.
The primary or central lymphoid organs generate lymphocytes from immature progenitor cells.
The thymus and the bone marrow constitute the primary lymphoid organs involved in the production and early clonal selection of lymphocyte tissues. Bone marrow is responsible for both the creation of T cells and the production and maturation of B cells. From the bone marrow, B cells immediately join the circulatory system and travel to secondary lymphoid organs in search of pathogens. T cells, on the other hand, travel from the bone marrow to the thymus, where they develop further; only a small fraction survive this selection process. Mature T cells join B cells in search of pathogens. The other roughly 95 % of T cells begin a process of apoptosis, a form of programmed cell death.
Secondary or peripheral lymphoid organs, which include lymph nodes and the spleen, maintain mature naive lymphocytes and initiate an adaptive immune response. The peripheral lymphoid organs are the sites of lymphocyte activation by antigens. Activation leads to clonal expansion and affinity maturation. Mature lymphocytes recirculate between the blood and the peripheral lymphoid organs until they encounter their specific antigen.
Secondary lymphoid tissue provides the environment for the foreign or altered native molecules (antigens) to interact with the lymphocytes. It is exemplified by the lymph nodes, and the lymphoid follicles in tonsils, Peyer 's patches, spleen, adenoids, skin, etc. that are associated with the mucosa - associated lymphoid tissue (MALT).
In the gastrointestinal wall the appendix has mucosa resembling that of the colon, but here it is heavily infiltrated with lymphocytes.
The tertiary lymphoid tissue typically contains far fewer lymphocytes, and assumes an immune role only when challenged with antigens that result in inflammation. It achieves this by importing the lymphocytes from blood and lymph.
The thymus is a primary lymphoid organ and the site of maturation for T cells, the lymphocytes of the adaptive immune system. The thymus increases in size from birth in response to postnatal antigen stimulation, then to puberty and regresses thereafter. The loss or lack of the thymus results in severe immunodeficiency and subsequent high susceptibility to infection. In most species, the thymus consists of lobules divided by septa which are made up of epithelium and is therefore an epithelial organ. T cells mature from thymocytes, proliferate and undergo selection process in the thymic cortex before entering the medulla to interact with epithelial cells.
The thymus provides an inductive environment for development of T cells from hematopoietic progenitor cells. In addition, thymic stromal cells allow for the selection of a functional and self - tolerant T cell repertoire. Therefore, one of the most important roles of the thymus is the induction of central tolerance.
The thymus is largest and most active during the neonatal and pre-adolescent periods. By the early teens, the thymus begins to atrophy and thymic stroma is mostly replaced by adipose tissue. Nevertheless, residual T lymphopoiesis continues throughout adult life.
The main functions of the spleen are:
The spleen synthesizes antibodies in its white pulp and removes antibody - coated bacteria and antibody - coated blood cells by way of blood and lymph node circulation. A study published in 2009 using mice found that the spleen contains, in its reserve, half of the body 's monocytes within the red pulp. These monocytes, upon moving to injured tissue (such as the heart), turn into dendritic cells and macrophages while promoting tissue healing. The spleen is a center of activity of the mononuclear phagocyte system and can be considered analogous to a large lymph node, as its absence causes a predisposition to certain infections.
Like the thymus, the spleen has only efferent lymphatic vessels. Both the short gastric arteries and the splenic artery supply it with blood.
The germinal centers are supplied by arterioles called penicilliary radicles.
Up to the fifth month of prenatal development the spleen creates red blood cells. After birth the bone marrow is solely responsible for hematopoiesis. As a major lymphoid organ and a central player in the reticuloendothelial system, the spleen retains the ability to produce lymphocytes. The spleen stores red blood cells and lymphocytes. It can store enough blood cells to help in an emergency. Up to 25 % of lymphocytes can be stored at any one time.
A lymph node is an organized collection of lymphoid tissue, through which the lymph passes on its way back to the blood. Lymph nodes are located at intervals along the lymphatic system. Several afferent lymph vessels bring in lymph, which percolates through the substance of the lymph node, and is then drained out by an efferent lymph vessel. There are between five and six hundred lymph nodes in the human body, many of which are grouped in clusters in different regions as in the underarm and abdominal areas. Lymph node clusters are commonly found at the base of limbs (groin, armpits) and in the neck, where lymph is collected from regions of the body likely to sustain pathogen contamination from injuries.
The substance of a lymph node consists of lymphoid follicles in an outer portion called the cortex. The inner portion of the node is called the medulla, which is surrounded by the cortex on all sides except for a portion known as the hilum. The hilum presents as a depression on the surface of the lymph node, causing the otherwise spherical lymph node to be bean - shaped or ovoid. The efferent lymph vessel directly emerges from the lymph node at the hilum. The arteries and veins supplying the lymph node with blood enter and exit through the hilum.
The region of the lymph node called the paracortex immediately surrounds the medulla. Unlike the cortex, which has mostly immature T cells, or thymocytes, the paracortex has a mixture of immature and mature T cells. Lymphocytes enter the lymph nodes through specialised high endothelial venules found in the paracortex.
A lymph follicle is a dense collection of lymphocytes, the number, size and configuration of which change in accordance with the functional state of the lymph node. For example, the follicles expand significantly when encountering a foreign antigen. The selection of B cells, or B lymphocytes, occurs in the germinal center of the lymph nodes.
Lymph nodes are particularly numerous in the mediastinum in the chest, neck, pelvis, axilla, inguinal region, and in association with the blood vessels of the intestines.
Lymphoid tissue associated with the lymphatic system is concerned with immune functions in defending the body against infections and the spread of tumors. It consists of connective tissue formed of reticular fibers, with various types of leukocytes, (white blood cells), mostly lymphocytes enmeshed in it, through which the lymph passes. Regions of the lymphoid tissue that are densely packed with lymphocytes are known as lymphoid follicles. Lymphoid tissue can either be structurally well organized as lymph nodes or may consist of loosely organized lymphoid follicles known as the mucosa - associated lymphoid tissue.
The central nervous system also has lymphatic vessels, as discovered by University of Virginia researchers. The search for T - cell gateways into and out of the meninges uncovered functional meningeal lymphatic vessels lining the dural sinuses, anatomically integrated into the membrane surrounding the brain.
The lymphatic vessels, also called lymph vessels, conduct lymph between different parts of the body. They include the tubular vessels of the lymph capillaries, and the larger collecting vessels -- the right lymphatic duct and the thoracic duct (the left lymphatic duct). The lymph capillaries are mainly responsible for the absorption of interstitial fluid from the tissues, while lymph vessels propel the absorbed fluid forward into the larger collecting ducts, where it ultimately returns to the bloodstream via one of the subclavian veins. These vessels are also called the lymphatic channels or simply lymphatics.
The lymphatics are responsible for maintaining the balance of the body fluids. Their network of capillaries and collecting lymphatic vessels works to efficiently drain and transport extravasated fluid, along with proteins and antigens, back to the circulatory system. Numerous intraluminal valves in the vessels ensure a unidirectional flow of lymph without reflux. Two valve systems, a primary and a secondary valve system, are used to achieve this unidirectional flow. The capillaries are blind - ended, and the valves at the ends of capillaries use specialised junctions together with anchoring filaments to allow a unidirectional flow to the primary vessels. The collecting lymphatics, however, act to propel the lymph by the combined actions of the intraluminal valves and lymphatic muscle cells.
Lymphatic tissues begin to develop by the end of the fifth week of embryonic development. Lymphatic vessels develop from lymph sacs that arise from developing veins, which are derived from mesoderm.
The first lymph sacs to appear are the paired jugular lymph sacs at the junction of the internal jugular and subclavian veins. From the jugular lymph sacs, lymphatic capillary plexuses spread to the thorax, upper limbs, neck and head. Some of the plexuses enlarge and form lymphatic vessels in their respective regions. Each jugular lymph sac retains at least one connection with its jugular vein, the left one developing into the superior portion of the thoracic duct.
The next lymph sac to appear is the unpaired retroperitoneal lymph sac at the root of the mesentery of the intestine. It develops from the primitive vena cava and mesonephric veins. Capillary plexuses and lymphatic vessels spread from the retroperitoneal lymph sac to the abdominal viscera and diaphragm. The sac establishes connections with the cisterna chyli but loses its connections with neighboring veins.
The last of the lymph sacs, the paired posterior lymph sacs, develop from the iliac veins. The posterior lymph sacs produce capillary plexuses and lymphatic vessels of the abdominal wall, pelvic region, and lower limbs. The posterior lymph sacs join the cisterna chyli and lose their connections with adjacent veins.
With the exception of the anterior part of the sac from which the cisterna chyli develops, all lymph sacs become invaded by mesenchymal cells and are converted into groups of lymph nodes.
The spleen develops from mesenchymal cells between layers of the dorsal mesentery of the stomach. The thymus arises as an outgrowth of the third pharyngeal pouch.
The lymphatic system has multiple interrelated functions:
Lymph vessels called lacteals are in the beginning of the gastrointestinal tract, predominantly in the small intestine. While most other nutrients absorbed by the small intestine are passed on to the portal venous system to drain via the portal vein into the liver for processing, fats (lipids) are passed on to the lymphatic system to be transported to the blood circulation via the thoracic duct. (There are exceptions, for example medium - chain triglycerides are fatty acid esters of glycerol that passively diffuse from the GI tract to the portal system.) The enriched lymph originating in the lymphatics of the small intestine is called chyle. The nutrients that are released to the circulatory system are processed by the liver, having passed through the systemic circulation.
The lymphatic system plays a major role in the body 's immune system, as the primary site for cells relating to adaptive immune system including T - cells and B - cells. Cells in the lymphatic system react to antigens presented or found by the cells directly or by other dendritic cells. When an antigen is recognized, an immunological cascade begins involving the activation and recruitment of more and more cells, the production of antibodies and cytokines and the recruitment of other immunological cells such as macrophages.
The study of lymphatic drainage of various organs is important in the diagnosis, prognosis, and treatment of cancer. The lymphatic system, because of its closeness to many tissues of the body, is responsible for carrying cancerous cells between the various parts of the body in a process called metastasis. The intervening lymph nodes can trap the cancer cells. If they are not successful in destroying the cancer cells the nodes may become sites of secondary tumors.
Lymphadenopathy refers to one or more enlarged lymph nodes. Small groups or individually enlarged lymph nodes are generally reactive in response to infection or inflammation. This is called local lymphadenopathy. When many lymph nodes in different areas of the body are involved, this is called generalised lymphadenopathy. Generalised lymphadenopathy may be caused by infections such as infectious mononucleosis, tuberculosis and HIV, connective tissue diseases such as SLE and rheumatoid arthritis, and cancers, including both cancers of tissue within lymph nodes, discussed below, and metastasis of cancerous cells from other parts of the body, that have arrived via the lymphatic system.
Lymphedema is the swelling caused by the accumulation of lymph, which may occur if the lymphatic system is damaged or has malformations. It usually affects limbs, though the face, neck and abdomen may also be affected. In an extreme state, called elephantiasis, the edema progresses to the extent that the skin becomes thick with an appearance similar to the skin on elephant limbs.
Causes are unknown in most cases, but sometimes there is a previous history of severe infection, usually caused by a parasitic disease, such as lymphatic filariasis.
Lymphangiomatosis is a disease involving multiple cysts or lesions formed from lymphatic vessels.
Lymphedema can also occur after surgical removal of lymph nodes in the armpit (causing the arm to swell due to poor lymphatic drainage) or groin (causing swelling of the leg). Treatment is by manual lymphatic drainage, and is not permanent.
Cancer of the lymphatic system can be primary or secondary. Lymphoma refers to cancer that arises from lymphatic tissue. Lymphoid leukemias and lymphomas are now considered to be tumors of the same type of cell lineage. They are called "leukemia '' when in the blood or marrow and "lymphoma '' when in lymphatic tissue. They are grouped together under the name "lymphoid malignancy ''.
Lymphoma is generally considered as either Hodgkin lymphoma or non-Hodgkin lymphoma. Hodgkin lymphoma is characterised by a particular type of cell, called a Reed -- Sternberg cell, visible under microscope. It is associated with past infection with the Epstein - Barr Virus, and generally causes a painless "rubbery '' lymphadenopathy. It is staged using Ann Arbor staging. Chemotherapy generally involves the ABVD regimen and may also involve radiotherapy. Non-Hodgkin lymphoma is a cancer characterised by increased proliferation of B - cells or T - cells and generally occurs in an older age group than Hodgkin lymphoma. It is treated according to whether it is high - grade or low - grade, and carries a poorer prognosis than Hodgkin lymphoma.
Lymphangiosarcoma is a malignant soft tissue tumor, whereas lymphangioma is a benign tumor occurring frequently in association with Turner syndrome. Lymphangioleiomyomatosis is a benign tumor of the smooth muscles of the lymphatics that occurs in the lungs.
Lymphoid leukemia is another form of cancer where the host is devoid of different lymphatic cells.
Hippocrates, in the 5th century BC, was one of the first people to mention the lymphatic system. In his work On Joints, he briefly mentioned the lymph nodes in one sentence. Rufus of Ephesus, a Roman physician, identified the axillary, inguinal and mesenteric lymph nodes as well as the thymus during the 1st to 2nd century AD. The first mention of lymphatic vessels was in the 3rd century BC by Herophilos, a Greek anatomist living in Alexandria, who incorrectly concluded that the "absorptive veins of the lymphatics '', by which he meant the lacteals (lymph vessels of the intestines), drained into the hepatic portal veins, and thus into the liver. The findings of Rufus and Herophilos were further propagated by the Greek physician Galen, who described the lacteals and mesenteric lymph nodes which he observed in his dissection of apes and pigs in the 2nd century AD.
In the mid 16th century, Gabriele Falloppio (discoverer of the fallopian tubes), described what are now known as the lacteals as "coursing over the intestines full of yellow matter. '' In about 1563 Bartolomeo Eustachi, a professor of anatomy, described the thoracic duct in horses as vena alba thoracis. The next breakthrough came when in 1622 a physician, Gaspare Aselli, identified lymphatic vessels of the intestines in dogs and termed them venae alba et lacteae, which is now known as simply the lacteals. The lacteals were termed the fourth kind of vessels (the other three being the artery, vein and nerve, which was then believed to be a type of vessel), and disproved Galen 's assertion that chyle was carried by the veins. But, he still believed that the lacteals carried the chyle to the liver (as taught by Galen). He also identified the thoracic duct but failed to notice its connection with the lacteals. This connection was established by Jean Pecquet in 1651, who found a white fluid mixing with blood in a dog 's heart. He suspected that fluid to be chyle as its flow increased when abdominal pressure was applied. He traced this fluid to the thoracic duct, which he then followed to a chyle - filled sac he called the chyli receptaculum, which is now known as the cisternae chyli; further investigations led him to find that lacteals ' contents enter the venous system via the thoracic duct. Thus, it was proven convincingly that the lacteals did not terminate in the liver, thus disproving Galen 's second idea: that the chyle flowed to the liver. Johann Veslingius drew the earliest sketches of the lacteals in humans in 1647.
The idea that blood recirculates through the body rather than being produced anew by the liver and the heart was first accepted as a result of the work of William Harvey, which he published in 1628. In 1652, Olaus Rudbeck (1630 -- 1702), a Swede, discovered certain transparent vessels in the liver that contained clear fluid (and not white), and thus named them hepatico - aqueous vessels. He also learned that they emptied into the thoracic duct, and that they had valves. He announced his findings in the court of Queen Christina of Sweden, but did not publish them for a year, and in the interim similar findings were published by Thomas Bartholin, who additionally published that such vessels are present everywhere in the body, not just in the liver. He is also the one to have named them "lymphatic vessels ''. This resulted in a bitter dispute between one of Bartholin 's pupils, Martin Bogdan, and Rudbeck, whom he accused of plagiarism.
Galen 's ideas prevailed in medicine until the 17th century. It was thought that blood was produced by the liver from chyle contaminated with ailments by the intestine and stomach, to which various spirits were added by other organs, and that this blood was consumed by all the organs of the body. This theory required that the blood be consumed and produced many times over. Even in the 17th century, his ideas were defended by some physicians.
Alexander Monro, of the University of Edinburgh Medical School, was the first to describe the function of the lymphatic system in detail.
"Claude Galien ''. Lithograph by Pierre Roche Vigneron. (Paris: Lith de Gregoire et Deneux, ca. 1865)
Gabriele Falloppio
Portrait of Eustachius
Olaus Rudbeck in 1696.
Thomas Bartholin
The word lymph originates from the Classical Latin word lympha, meaning "water '', which is also the source of the English word limpid. The spelling with y and ph was influenced by folk etymology linking the word with Greek νύμφη (nýmphē), "nymph ''.
The adjective used for the lymph - transporting system is lymphatic. The adjective used for the tissues where lymphocytes are formed is lymphoid. Lymphatic comes from the Latin word lymphaticus, meaning "connected to water. ''
|
the scientist credited for formulating a model in which the earth circles the sun is | Galileo Galilei - wikipedia
Galileo Galilei (Italian: (ɡaliˈlɛːo ɡaliˈlɛi); 15 February 1564 -- 8 January 1642) was an Italian polymath. Galileo is a central figure in the transition from natural philosophy to modern science and in the transformation of the scientific Renaissance into a scientific revolution.
Galileo 's championing of heliocentrism and Copernicanism was controversial during his lifetime, when most subscribed to either geocentrism or the Tychonic system. He met with opposition from astronomers, who doubted heliocentrism because of the absence of an observed stellar parallax. The matter was investigated by the Roman Inquisition in 1615, which concluded that heliocentrism was "foolish and absurd in philosophy, and formally heretical since it explicitly contradicts in many places the sense of Holy Scripture. '' Galileo later defended his views in Dialogue Concerning the Two Chief World Systems (1632), which appeared to attack Pope Urban VIII and thus alienated him and the Jesuits, who had both supported Galileo up until this point. He was tried by the Inquisition, found "vehemently suspect of heresy '', and forced to recant. He spent the rest of his life under house arrest. While under house arrest, he wrote one of his best - known works, Two New Sciences, in which he summarized work he had done some forty years earlier on the two sciences now called kinematics and strength of materials.
Galileo studied speed and velocity, gravity and free fall, the principle of relativity, inertia, projectile motion and also worked in applied science and technology, describing the properties of pendulums and "hydrostatic balances '', inventing the thermoscope and various military compasses, and using the telescope for scientific observations of celestial objects. His contributions to observational astronomy include the telescopic confirmation of the phases of Venus, the discovery of the four largest satellites of Jupiter, the observation of Saturn 's rings (though he could not see them well enough to discern their true nature) and the analysis of sunspots.
Known for his work as astronomer, physicist, engineer, philosopher, and mathematician, Galileo has been called the "father of observational astronomy '', the "father of modern physics '', the "father of the scientific method '', and even the "father of science ''.
Galileo was born in Pisa (then part of the Duchy of Florence), Italy, on 15 February 1564, the first of six children of Vincenzo Galilei, a famous lutenist, composer, and music theorist, and Giulia (née Ammannati), who had married in 1562. Galileo became an accomplished lutenist himself and would have learned early from his father a scepticism for established authority, the value of well - measured or quantified experimentation, an appreciation for a periodic or musical measure of time or rhythm, as well as the results expected from a combination of mathematics and experiment.
Three of Galileo 's five siblings survived infancy. The youngest, Michelangelo (or Michelagnolo), also became a noted lutenist and composer although he contributed to financial burdens during Galileo 's young adulthood. Michelangelo was unable to contribute his fair share of their father 's promised dowries to their brothers - in - law, who would later attempt to seek legal remedies for payments due. Michelangelo would also occasionally have to borrow funds from Galileo to support his musical endeavours and excursions. These financial burdens may have contributed to Galileo 's early desire to develop inventions that would bring him additional income.
When Galileo Galilei was eight, his family moved to Florence, but he was left with Jacopo Borghini for two years. He then was educated in the Vallombrosa Abbey, about 30 km southeast of Florence.
The surname Galilei derives from the given name of an ancestor, Galileo Bonaiuti, a physician, university teacher and politician who lived in Florence from 1370 to 1450; his descendants had changed their family name from Bonaiuti (or Buonaiuti) to Galilei in his honor in the late 14th century. Galileo Bonaiuti was buried in the same church, the Basilica of Santa Croce in Florence, where about 200 years later his more famous descendant Galileo Galilei was also buried.
It was common for mid-sixteenth century Tuscan families to name the eldest son after the parents ' surname. Hence, Galileo Galilei was not necessarily named after his ancestor Galileo Bonaiuti. The Italian male given name "Galileo '' (and thence the surname "Galilei '') derives from the Latin "Galilaeus '', meaning "of Galilee '', a biblically significant region in Northern Israel.
The biblical roots of Galileo 's name and surname were to become the subject of a famous pun. In 1614, during the Galileo affair, one of Galileo 's opponents, the Dominican priest Tommaso Caccini, delivered against Galileo a controversial and influential sermon. In it he made a point of quoting Acts 1: 11, "Ye men of Galilee, why stand ye gazing up into heaven? ''.
Despite being a genuinely pious Roman Catholic, Galileo fathered three children out of wedlock with Marina Gamba. They had two daughters, Virginia (born in 1600) and Livia (born in 1601), and a son, Vincenzo (born in 1606).
Because of their illegitimate birth, their father considered the girls unmarriageable, if not posing problems of prohibitively expensive support or dowries, which would have been similar to Galileo 's previous extensive financial problems with two of his sisters. Their only worthy alternative was the religious life. Both girls were accepted by the convent of San Matteo in Arcetri and remained there for the rest of their lives. Virginia took the name Maria Celeste upon entering the convent. She died on 2 April 1634, and is buried with Galileo at the Basilica of Santa Croce, Florence. Livia took the name Sister Arcangela and was ill for most of her life. Vincenzo was later legitimised as the legal heir of Galileo and married Sestilia Bocchineri.
Although Galileo seriously considered the priesthood as a young man, at his father 's urging he instead enrolled at the University of Pisa for a medical degree. In 1581, when he was studying medicine, he noticed a swinging chandelier, which air currents shifted about to swing in larger and smaller arcs. To him, it seemed, by comparison with his heartbeat, that the chandelier took the same amount of time to swing back and forth, no matter how far it was swinging. When he returned home, he set up two pendulums of equal length and swung one with a large sweep and the other with a small sweep and found that they kept time together. It was not until the work of Christiaan Huygens, almost one hundred years later, that the tautochrone nature of a swinging pendulum was used to create an accurate timepiece. Up to this point, Galileo had deliberately been kept away from mathematics, since a physician earned a higher income than a mathematician. However, after accidentally attending a lecture on geometry, he talked his reluctant father into letting him study mathematics and natural philosophy instead of medicine. He created a thermoscope, a forerunner of the thermometer, and, in 1586, published a small book on the design of a hydrostatic balance he had invented (which first brought him to the attention of the scholarly world). Galileo also studied disegno, a term encompassing fine art, and, in 1588, obtained the position of instructor in the Accademia delle Arti del Disegno in Florence, teaching perspective and chiaroscuro. Being inspired by the artistic tradition of the city and the works of the Renaissance artists, Galileo acquired an aesthetic mentality. While a young teacher at the Accademia, he began a lifelong friendship with the Florentine painter Cigoli, who included Galileo 's lunar observations in one of his paintings.
In 1589, he was appointed to the chair of mathematics in Pisa. In 1591, his father died, and he was entrusted with the care of his younger brother Michelagnolo. In 1592, he moved to the University of Padua where he taught geometry, mechanics, and astronomy until 1610. During this period, Galileo made significant discoveries in both pure fundamental science (for example, kinematics of motion and astronomy) as well as practical applied science (for example, strength of materials and pioneering the telescope). His multiple interests included the study of astrology, which at the time was a discipline tied to the studies of mathematics and astronomy.
Cardinal Bellarmine had written in 1615 that the Copernican system could not be defended without "a true physical demonstration that the sun does not circle the earth but the earth circles the sun ''. Galileo considered his theory of the tides to provide the required physical proof of the motion of the earth. This theory was so important to him that he originally intended to entitle his Dialogue on the Two Chief World Systems the Dialogue on the Ebb and Flow of the Sea. The reference to tides was removed from the title by order of the Inquisition.
For Galileo, the tides were caused by the sloshing back and forth of water in the seas as a point on the Earth 's surface sped up and slowed down because of the Earth 's rotation on its axis and revolution around the Sun. He circulated his first account of the tides in 1616, addressed to Cardinal Orsini. His theory gave the first insight into the importance of the shapes of ocean basins in the size and timing of tides; he correctly accounted, for instance, for the negligible tides halfway along the Adriatic Sea compared to those at the ends. As a general account of the cause of tides, however, his theory was a failure.
If this theory were correct, there would be only one high tide per day. Galileo and his contemporaries were aware of this inadequacy because there are two daily high tides at Venice instead of one, about twelve hours apart. Galileo dismissed this anomaly as the result of several secondary causes including the shape of the sea, its depth, and other factors. Against the assertion that Galileo was deceptive in making these arguments, Albert Einstein expressed the opinion that Galileo developed his "fascinating arguments '' and accepted them uncritically out of a desire for physical proof of the motion of the Earth. Galileo dismissed the idea, held by his contemporary Johannes Kepler, that the moon caused the tides. (Galileo also took no interest in Kepler 's elliptical orbits of the planets.)
In 1619, Galileo became embroiled in a controversy with Father Orazio Grassi, professor of mathematics at the Jesuit Collegio Romano. It began as a dispute over the nature of comets, but by the time Galileo had published The Assayer (Il Saggiatore) in 1623, his last salvo in the dispute, it had become a much wider controversy over the very nature of science itself. The title page of the book describes Galileo as philosopher and "Matematico Primario '' of the Grand Duke of Tuscany.
Because The Assayer contains such a wealth of Galileo 's ideas on how science should be practised, it has been referred to as his scientific manifesto. Early in 1619, Father Grassi had anonymously published a pamphlet, An Astronomical Disputation on the Three Comets of the Year 1618, which discussed the nature of a comet that had appeared late in November of the previous year. Grassi concluded that the comet was a fiery body which had moved along a segment of a great circle at a constant distance from the earth, and since it moved in the sky more slowly than the moon, it must be farther away than the moon.
Grassi 's arguments and conclusions were criticised in a subsequent article, Discourse on Comets, published under the name of one of Galileo 's disciples, a Florentine lawyer named Mario Guiducci, although it had been largely written by Galileo himself. Galileo and Guiducci offered no definitive theory of their own on the nature of comets although they did present some tentative conjectures that are now known to be mistaken. In its opening passage, Galileo and Guiducci 's Discourse gratuitously insulted the Jesuit Christopher Scheiner, and various uncomplimentary remarks about the professors of the Collegio Romano were scattered throughout the work. The Jesuits were offended, and Grassi soon replied with a polemical tract of his own, The Astronomical and Philosophical Balance, under the pseudonym Lothario Sarsio Sigensano, purporting to be one of his own pupils.
The Assayer was Galileo 's devastating reply to the Astronomical Balance. It has been widely recognized as a masterpiece of polemical literature, in which "Sarsi 's '' arguments are subjected to withering scorn. It was greeted with wide acclaim, and particularly pleased the new pope, Urban VIII, to whom it had been dedicated. In Rome, in the previous decade, Barberini, the future Urban VIII, had come down on the side of Galileo and the Lincean Academy.
Galileo 's dispute with Grassi permanently alienated many of the Jesuits who had previously been sympathetic to his ideas, and Galileo and his friends were convinced that these Jesuits were responsible for bringing about his later condemnation. The evidence for this is at best equivocal, however.
In the Christian world prior to Galileo 's conflict with the Church, the majority of educated people subscribed either to the Aristotelian geocentric view that the earth was the center of the universe and that all heavenly bodies revolved around the Earth, or the Tychonic system that blended geocentrism with heliocentrism. Nevertheless, following the death of Copernicus and before Galileo, heliocentrism was relatively uncontroversial; Copernicus 's work was used by Pope Gregory XIII to reform the calendar in 1582.
Opposition to heliocentrism and Galileo 's writings combined religious and scientific objections and were fueled by political events. Scientific opposition came from Tycho Brahe and others and arose from the fact that, if heliocentrism were true, an annual stellar parallax should be observed, though none was. Copernicus had correctly postulated that parallax was negligible because the stars were so distant. However, Brahe had countered that, since stars appeared to have measurable size, if the stars were that distant, they would be gigantic, and in fact far larger than the Sun or any other celestial body. In Brahe 's system, by contrast, the stars were a little more distant than Saturn, and the Sun and stars were comparable in size.
Religious opposition to heliocentrism arose from Biblical references such as Psalm 93: 1, 96: 10, and 1 Chronicles 16: 30 which include text stating that "the world is firmly established, it can not be moved. '' In the same manner, Psalm 104: 5 says, "the Lord set the earth on its foundations; it can never be moved. '' Further, Ecclesiastes 1: 5 states that "And the sun rises and sets and returns to its place. ''
Galileo defended heliocentrism based on his astronomical observations of 1609 (Sidereus Nuncius 1610). In December 1613, the Grand Duchess Christina of Florence confronted one of Galileo 's friends and followers, Benedetto Castelli, with biblical objections to the motion of the earth. According to Maurice Finocchiaro, this was done in a friendly and gracious manner, out of curiosity. Prompted by this incident, Galileo wrote a letter to Castelli in which he argued that heliocentrism was actually not contrary to biblical texts, and that the bible was an authority on faith and morals, not on science. This letter was not published, but circulated widely.
By 1615, Galileo 's writings on heliocentrism had been submitted to the Roman Inquisition by Father Niccolo Lorini, who claimed that Galileo and his followers were attempting to reinterpret the Bible, which was seen as a violation of the Council of Trent and looked dangerously like Protestantism. Lorini specifically cited Galileo 's letter to Castelli. Galileo went to Rome to defend himself and his Copernican and biblical ideas. At the start of 1616, Monsignor Francesco Ingoli initiated a debate with Galileo, sending him an essay disputing the Copernican system. Galileo later stated that he believed this essay to have been instrumental in the action against Copernicanism that followed. According to Maurice Finocchiaro, Ingoli had probably been commissioned by the Inquisition to write an expert opinion on the controversy, and the essay provided the "chief direct basis '' for the Inquisition 's actions. The essay focused on eighteen physical and mathematical arguments against heliocentrism. It borrowed primarily from the arguments of Tycho Brahe, and it notedly mentioned Brahe 's argument that heliocentrism required the stars to be much larger than the Sun. Ingoli wrote that the great distance to the stars in the heliocentric theory "clearly proves... the fixed stars to be of such size, as they may surpass or equal the size of the orbit circle of the Earth itself. '' The essay also included four theological arguments, but Ingoli suggested Galileo focus on the physical and mathematical arguments, and he did not mention Galileo 's biblical ideas. In February 1616, an Inquisitorial commission declared heliocentrism to be "foolish and absurd in philosophy, and formally heretical since it explicitly contradicts in many places the sense of Holy Scripture. '' The Inquisition found that the idea of the Earth 's movement "receives the same judgement in philosophy and... in regard to theological truth it is at least erroneous in faith ''. (The original document from the Inquisitorial commission was made widely available in 2014.)
Pope Paul V instructed Cardinal Bellarmine to deliver this finding to Galileo, and to order him to abandon the opinion that heliocentrism was physically true. On 26 February, Galileo was called to Bellarmine 's residence and ordered:
... to abandon completely... the opinion that the sun stands still at the center of the world and the earth moves, and henceforth not to hold, teach, or defend it in any way whatever, either orally or in writing.
The decree of the Congregation of the Index banned Copernicus 's De Revolutionibus and other heliocentric works until correction. Bellarmine 's instructions did not prohibit Galileo from discussing heliocentrism as a mathematical and philosophic idea, so long as he did not advocate for its physical truth.
For the next decade, Galileo stayed well away from the controversy. He revived his project of writing a book on the subject, encouraged by the election of Cardinal Maffeo Barberini as Pope Urban VIII in 1623. Barberini was a friend and admirer of Galileo, and had opposed the condemnation of Galileo in 1616. Galileo 's resulting book, Dialogue Concerning the Two Chief World Systems, was published in 1632, with formal authorization from the Inquisition and papal permission.
Earlier, Pope Urban VIII had personally asked Galileo to give arguments for and against heliocentrism in the book, and to be careful not to advocate heliocentrism. He made another request, that his own views on the matter be included in Galileo 's book. Only the latter of those requests was fulfilled by Galileo.
Whether unknowingly or deliberately, Simplicio, the defender of the Aristotelian geocentric view in Dialogue Concerning the Two Chief World Systems, was often caught in his own errors and sometimes came across as a fool. Indeed, although Galileo states in the preface of his book that the character is named after a famous Aristotelian philosopher (Simplicius in Latin, "Simplicio '' in Italian), the name "Simplicio '' in Italian also has the connotation of "simpleton ''. This portrayal of Simplicio made Dialogue Concerning the Two Chief World Systems appear as an advocacy book: an attack on Aristotelian geocentrism and defence of the Copernican theory. Unfortunately for his relationship with the Pope, Galileo put the words of Urban VIII into the mouth of Simplicio.
Most historians agree Galileo did not act out of malice and felt blindsided by the reaction to his book. However, the Pope did not take the suspected public ridicule lightly, nor the Copernican advocacy.
Galileo had alienated one of his biggest and most powerful supporters, the Pope, and was called to Rome to defend his writings in September 1632. He finally arrived in February 1633 and was brought before inquisitor Vincenzo Maculani to be charged. Throughout his trial, Galileo steadfastly maintained that since 1616 he had faithfully kept his promise not to hold any of the condemned opinions, and initially he denied even defending them. However, he was eventually persuaded to admit that, contrary to his true intention, a reader of his Dialogue could well have obtained the impression that it was intended to be a defence of Copernicanism. In view of Galileo 's rather implausible denial that he had ever held Copernican ideas after 1616 or ever intended to defend them in the Dialogue, his final interrogation, in July 1633, concluded with his being threatened with torture if he did not tell the truth, but he maintained his denial despite the threat.
The sentence of the Inquisition was delivered on 22 June. It was in three essential parts: Galileo was found "vehemently suspect of heresy '' and was required to abjure the condemned opinions; he was sentenced to formal imprisonment, commuted the following day to house arrest for the remainder of his life; and his Dialogue was banned, with publication of any of his works forbidden.
According to popular legend, after recanting his theory that the Earth moved around the Sun, Galileo allegedly muttered the rebellious phrase "And yet it moves ''. A 1640s painting by the Spanish painter Bartolomé Esteban Murillo or an artist of his school, in which the words were hidden until restoration work in 1911, depicts an imprisoned Galileo apparently gazing at the words "E pur si muove '' written on the wall of his dungeon. The earliest known written account of the legend dates to a century after his death, but Stillman Drake writes "there is no doubt now that the famous words were already attributed to Galileo before his death ''.
After a period with the friendly Ascanio Piccolomini (the Archbishop of Siena), Galileo was allowed to return to his villa at Arcetri near Florence in 1634, where he spent the remainder of his life under house arrest. Galileo was ordered to read the seven penitential psalms once a week for the next three years. However, his daughter Maria Celeste relieved him of the burden after securing ecclesiastical permission to take it upon herself.
It was while Galileo was under house arrest that he dedicated his time to one of his finest works, Two New Sciences. Here he summarised work he had done some forty years earlier, on the two sciences now called kinematics and strength of materials, published in Holland to avoid the censor. This book has received high praise from Albert Einstein. As a result of this work, Galileo is often called the "father of modern physics ''. He went completely blind in 1638 and was suffering from a painful hernia and insomnia, so he was permitted to travel to Florence for medical advice.
Dava Sobel argues that prior to Galileo 's 1633 trial and judgement for heresy, Pope Urban VIII had become preoccupied with court intrigue and problems of state, and began to fear persecution or threats to his own life. In this context, Sobel argues that the problem of Galileo was presented to the pope by court insiders and enemies of Galileo. Having been accused of weakness in defending the church, Urban reacted against Galileo out of anger and fear.
Galileo continued to receive visitors until 1642, when, after suffering fever and heart palpitations, he died on 8 January 1642, aged 77. The Grand Duke of Tuscany, Ferdinando II, wished to bury him in the main body of the Basilica of Santa Croce, next to the tombs of his father and other ancestors, and to erect a marble mausoleum in his honour.
These plans were dropped, however, after Pope Urban VIII and his nephew, Cardinal Francesco Barberini, protested, because Galileo had been condemned by the Catholic Church for "vehement suspicion of heresy ''. He was instead buried in a small room next to the novices ' chapel at the end of a corridor from the southern transept of the basilica to the sacristy. He was reburied in the main body of the basilica in 1737 after a monument had been erected there in his honour; during this move, three fingers and a tooth were removed from his remains. One of these fingers, the middle finger from Galileo 's right hand, is currently on exhibition at the Museo Galileo in Florence, Italy.
Galileo made original contributions to the science of motion through an innovative combination of experiment and mathematics. More typical of science at the time were the qualitative studies of William Gilbert, on magnetism and electricity. Galileo 's father, Vincenzo Galilei, a lutenist and music theorist, had performed experiments establishing perhaps the oldest known non-linear relation in physics: for a stretched string, the pitch varies as the square root of the tension. These observations lay within the framework of the Pythagorean tradition of music, well - known to instrument makers, which included the fact that subdividing a string by a whole number produces a harmonious scale. Thus, a limited amount of mathematics had long related music and physical science, and young Galileo could see his own father 's observations expand on that tradition.
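In modern notation (a restatement added here for clarity, not a formula Vincenzo or Galileo wrote down), the relation for a stretched string of length L and linear density μ is

f \propto \sqrt{T}, \qquad\text{or, in the full modern form,}\qquad f = \frac{1}{2L}\sqrt{\frac{T}{\mu}},

so that doubling the pitch requires quadrupling the tension.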
Galileo was one of the first modern thinkers to clearly state that the laws of nature are mathematical. In The Assayer, he wrote "Philosophy is written in this grand book, the universe... It is written in the language of mathematics, and its characters are triangles, circles, and other geometric figures;... '' His mathematical analyses are a further development of a tradition employed by late scholastic natural philosophers, which Galileo learned when he studied philosophy. His work marked another step towards the eventual separation of science from both philosophy and religion; a major development in human thought. He was often willing to change his views in accordance with observation. In order to perform his experiments, Galileo had to set up standards of length and time, so that measurements made on different days and in different laboratories could be compared in a reproducible fashion. This provided a reliable foundation on which to confirm mathematical laws using inductive reasoning.
Galileo showed a modern appreciation for the proper relationship between mathematics, theoretical physics, and experimental physics. He understood the parabola, both in terms of conic sections and in terms of the ordinate (y) varying as the square of the abscissa (x). Galilei further asserted that the parabola was the theoretically ideal trajectory of a uniformly accelerated projectile in the absence of air resistance or other disturbances. He conceded that there are limits to the validity of this theory, noting on theoretical grounds that a projectile trajectory of a size comparable to that of the Earth could not possibly be a parabola, but he nevertheless maintained that for distances up to the range of the artillery of his day, the deviation of a projectile 's trajectory from a parabola would be only very slight.
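A minimal sketch of the reasoning in modern notation (not Galileo 's own symbols): combining uniform horizontal motion with uniformly accelerated fall,

x = v t, \qquad y = \tfrac{1}{2} g t^{2} \quad\Longrightarrow\quad y = \frac{g}{2 v^{2}}\, x^{2},

so the ordinate varies as the square of the abscissa, which is exactly a parabola, valid as long as air resistance and the curvature of the Earth can be neglected.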
Based only on uncertain descriptions of the first practical telescope which Hans Lippershey tried to patent in the Netherlands in 1608, Galileo, in the following year, made a telescope with about 3x magnification. He later made improved versions with up to about 30x magnification. With a Galilean telescope, the observer could see magnified, upright images on the earth -- it was what is commonly known as a terrestrial telescope or a spyglass. He could also use it to observe the sky; for a time he was one of those who could construct telescopes good enough for that purpose. On 25 August 1609, he demonstrated one of his early telescopes, with a magnification of about 8 or 9, to Venetian lawmakers. His telescopes were also a profitable sideline for Galileo, who sold them to merchants who found them useful both at sea and as items of trade. He published his initial telescopic astronomical observations in March 1610 in a brief treatise entitled Sidereus Nuncius (Starry Messenger).
Tycho and others had observed the supernova of 1572. Ottavio Brenzoni 's letter of 15 January 1605 to Galileo brought the 1572 supernova and the less bright nova of 1601 to Galileo 's notice. Galileo observed and discussed Kepler 's supernova in 1604. Since these new stars displayed no detectable diurnal parallax, Galileo concluded that they were distant stars, and, therefore, disproved the Aristotelian belief in the immutability of the heavens.
On 7 January 1610, Galileo observed with his telescope what he described at the time as "three fixed stars, totally invisible by their smallness '', all close to Jupiter, and lying on a straight line through it. Observations on subsequent nights showed that the positions of these "stars '' relative to Jupiter were changing in a way that would have been inexplicable if they had really been fixed stars. On 10 January, Galileo noted that one of them had disappeared, an observation which he attributed to its being hidden behind Jupiter. Within a few days, he concluded that they were orbiting Jupiter: he had discovered three of Jupiter 's four largest moons. He discovered the fourth on 13 January. Galileo named the group of four the Medicean stars, in honour of his future patron, Cosimo II de ' Medici, Grand Duke of Tuscany, and Cosimo 's three brothers. Later astronomers, however, renamed them Galilean satellites in honour of their discoverer. These satellites are now called Io, Europa, Ganymede, and Callisto.
His observations of the satellites of Jupiter caused a revolution in astronomy: a planet with smaller planets orbiting it did not conform to the principles of Aristotelian cosmology, which held that all heavenly bodies should circle the Earth, and many astronomers and philosophers initially refused to believe that Galileo could have discovered such a thing. His observations were confirmed by the observatory of Christopher Clavius and he received a hero 's welcome when he visited Rome in 1611. Galileo continued to observe the satellites over the next eighteen months, and by mid-1611, he had obtained remarkably accurate estimates for their periods -- a feat which Kepler had believed impossible.
From September 1610, Galileo observed that Venus exhibited a full set of phases similar to that of the Moon. The heliocentric model of the solar system developed by Nicolaus Copernicus predicted that all phases would be visible since the orbit of Venus around the Sun would cause its illuminated hemisphere to face the Earth when it was on the opposite side of the Sun and to face away from the Earth when it was on the Earth - side of the Sun. On the other hand, in Ptolemy 's geocentric model it was impossible for any of the planets ' orbits to intersect the spherical shell carrying the Sun. Traditionally, the orbit of Venus was placed entirely on the near side of the Sun, where it could exhibit only crescent and new phases. It was, however, also possible to place it entirely on the far side of the Sun, where it could exhibit only gibbous and full phases. After Galileo 's telescopic observations of the crescent, gibbous and full phases of Venus, the Ptolemaic model became untenable. Thus in the early 17th century, as a result of his discovery, the great majority of astronomers converted to one of the various geo - heliocentric planetary models, such as the Tychonic, Capellan and Extended Capellan models, each either with or without a daily rotating Earth. These all had the virtue of explaining the phases of Venus without the vice of the ' refutation ' of full heliocentrism 's prediction of stellar parallax. Galileo 's discovery of the phases of Venus was thus arguably his most empirically practically influential contribution to the two - stage transition from full geocentrism to full heliocentrism via geo - heliocentrism.
Galileo observed the planet Saturn, and at first mistook its rings for planets, thinking it was a three - bodied system. When he observed the planet later, Saturn 's rings were directly oriented at Earth, causing him to think that two of the bodies had disappeared. The rings reappeared when he observed the planet in 1616, further confusing him.
Galileo also observed the planet Neptune in 1612. It appears in his notebooks as one of many unremarkable dim stars. He did not realise that it was a planet, but he did note its motion relative to the stars before losing track of it.
Galileo made naked - eye and telescopic studies of sunspots. Their existence raised another difficulty with the unchanging perfection of the heavens as posited in orthodox Aristotelian celestial physics. An apparent annual variation in their trajectories, observed by Francesco Sizzi and others in 1612 -- 1613, also provided a powerful argument against both the Ptolemaic system and the geoheliocentric system of Tycho Brahe. A dispute over claimed priority in the discovery of sunspots, and in their interpretation, led Galileo to a long and bitter feud with the Jesuit Christoph Scheiner. In the middle was Mark Welser, to whom Scheiner had announced his discovery, and who asked Galileo for his opinion. In fact, there is little doubt that both of them were beaten by David Fabricius and his son Johannes.
Prior to Galileo 's construction of his version of a telescope, Thomas Harriot, an English mathematician and explorer, had already used what he dubbed a "perspective tube '' to observe the moon. Reporting his observations, Harriot noted only "strange spottednesse '' in the waning of the crescent, but was ignorant of the cause. Galileo, due in part to his artistic training and his knowledge of chiaroscuro, understood that the patterns of light and shadow were, in fact, topographical markers. While not the only one to observe the moon through a telescope, Galileo was the first to deduce the cause of the uneven waning as light occlusion from lunar mountains and craters. In his study, he also made topographical charts, estimating the heights of the mountains. The moon was not the translucent and perfect sphere it had long been thought to be, as Aristotle claimed, and hardly the first "planet '', an "eternal pearl to magnificently ascend into the heavenly empyrean '', as put forth by Dante.
Galileo observed the Milky Way, previously believed to be nebulous, and found it to be a multitude of stars packed so densely that they appeared from Earth to be clouds. He located many other stars too distant to be visible with the naked eye. He observed the double star Mizar in Ursa Major in 1617.
In the Starry Messenger, Galileo reported that stars appeared as mere blazes of light, essentially unaltered in appearance by the telescope, and contrasted them to planets, which the telescope revealed to be discs. But shortly thereafter, in his Letters on Sunspots, he reported that the telescope revealed the shapes of both stars and planets to be "quite round ''. From that point forward, he continued to report that telescopes showed the roundness of stars, and that stars seen through the telescope measured a few seconds of arc in diameter. He also devised a method for measuring the apparent size of a star without a telescope. As described in his Dialogue Concerning the Two Chief World Systems, his method was to hang a thin rope in his line of sight to the star and measure the maximum distance from which it would wholly obscure the star. From his measurements of this distance and of the width of the rope, he could calculate the angle subtended by the star at his viewing point. In his Dialogue, he reported that he had found the apparent diameter of a star of first magnitude to be no more than 5 arcseconds, and that of one of sixth magnitude to be about 5 / 6 of an arcsecond. Like most astronomers of his day, Galileo did not recognise that the apparent sizes of stars that he measured were spurious, caused by diffraction and atmospheric distortion (see seeing disk or Airy disk), and did not represent the true sizes of stars. However, Galileo 's values were much smaller than previous estimates of the apparent sizes of the brightest stars, such as those made by Tycho Brahe (see Magnitude) and enabled Galileo to counter anti-Copernican arguments such as those made by Tycho that these stars would have to be absurdly large for their annual parallaxes to be undetectable. Other astronomers such as Simon Marius, Giovanni Battista Riccioli, and Martinus Hortensius made similar measurements of stars, and Marius and Riccioli concluded the smaller sizes were not small enough to answer Tycho 's argument.
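The geometry behind the rope method reduces to the small-angle relation (modern shorthand, with illustrative numbers that are assumptions rather than Galileo 's recorded values):

\theta \approx \frac{w}{d},

where w is the width of the rope and d is the distance at which it just hides the star; for example, a rope 2 mm wide that obscures the star at about 80 m corresponds to roughly 2.5 \times 10^{-5} rad, i.e. about 5 arcseconds.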
Galileo made a number of contributions to what is now known as engineering, as distinct from pure physics. Between 1595 and 1598, Galileo devised and improved a Geometric and military compass suitable for use by gunners and surveyors. This expanded on earlier instruments designed by Niccolò Tartaglia and Guidobaldo del Monte. For gunners, it offered, in addition to a new and safer way of elevating cannons accurately, a way of quickly computing the charge of gunpowder for cannonballs of different sizes and materials. As a geometric instrument, it enabled the construction of any regular polygon, computation of the area of any polygon or circular sector, and a variety of other calculations. Under Galileo 's direction, instrument maker Marc'Antonio Mazzoleni produced more than 100 of these compasses, which Galileo sold (along with an instruction manual he wrote) for 50 lire and offered a course of instruction in the use of the compasses for 120 lire.
In about 1593, Galileo constructed a thermometer, using the expansion and contraction of air in a bulb to move water in an attached tube.
In 1609, Galileo was, along with Englishman Thomas Harriot and others, among the first to use a refracting telescope as an instrument to observe stars, planets or moons. The name "telescope '' was coined for Galileo 's instrument by a Greek mathematician, Giovanni Demisiani, at a banquet held in 1611 by Prince Federico Cesi to make Galileo a member of his Accademia dei Lincei. The name was derived from the Greek tele = ' far ' and skopein = ' to look or see '. In 1610, he used a telescope at close range to magnify the parts of insects. By 1624, Galileo had used a compound microscope. He gave one of these instruments to Cardinal Zollern in May of that year for presentation to the Duke of Bavaria, and in September, he sent another to Prince Cesi. The Linceans played a role again in naming the "microscope '' a year later when fellow academy member Giovanni Faber coined the word for Galileo 's invention from the Greek words μικρόν (micron) meaning "small '', and σκοπεῖν (skopein) meaning "to look at ''. The word was meant to be analogous with "telescope ''. Illustrations of insects made using one of Galileo 's microscopes and published in 1625, appear to have been the first clear documentation of the use of a compound microscope.
In 1612, having determined the orbital periods of Jupiter 's satellites, Galileo proposed that with sufficiently accurate knowledge of their orbits, one could use their positions as a universal clock, and this would make possible the determination of longitude. He worked on this problem from time to time during the remainder of his life, but the practical problems were severe. The method was first successfully applied by Giovanni Domenico Cassini in 1681 and was later used extensively for large land surveys; this method, for example, was used to survey France, and later by Zebulon Pike of the midwestern United States in 1806. For sea navigation, where delicate telescopic observations were more difficult, the longitude problem eventually required development of a practical portable marine chronometer, such as that of John Harrison. Late in his life, when totally blind, Galileo designed an escapement mechanism for a pendulum clock (called Galileo 's escapement), although no clock using this was built until after the first fully operational pendulum clock was made by Christiaan Huygens in the 1650s.
Galileo was invited on several occasions to advise on engineering schemes to alleviate river flooding. In 1630 Mario Guiducci was probably instrumental in ensuring that he was consulted on a scheme by Bartolotti to cut a new channel for the Bisenzio River near Florence.
Galileo 's theoretical and experimental work on the motions of bodies, along with the largely independent work of Kepler and René Descartes, was a precursor of the classical mechanics developed by Sir Isaac Newton. Galileo conducted several experiments with pendulums. It is popularly believed (thanks to the biography by Vincenzo Viviani) that these began by watching the swings of the bronze chandelier in the cathedral of Pisa, using his pulse as a timer. Later experiments are described in his Two New Sciences. Galileo claimed that a simple pendulum is isochronous, i.e. that its swings always take the same amount of time, independently of the amplitude. In fact, this is only approximately true, as was discovered by Christiaan Huygens. Galileo also found that the square of the period varies directly with the length of the pendulum. Galileo 's son, Vincenzo, sketched a clock based on his father 's theories in 1642. The clock was never built and, because of the large swings required by its verge escapement, would have been a poor timekeeper. (See Engineering above.)
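In modern algebraic form, which Galileo himself did not use, the two pendulum results read

T \approx 2\pi\sqrt{\frac{L}{g}} \quad\text{(independent of amplitude, exactly so only for small swings)}, \qquad T^{2} \propto L,

the small-swing restriction being precisely the correction Huygens later supplied.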
Galileo is lesser known for, yet still credited with, being one of the first to understand sound frequency. By scraping a chisel at different speeds, he linked the pitch of the sound produced to the spacing of the chisel 's skips, a measure of frequency. In 1638, Galileo described an experimental method to measure the speed of light by arranging that two observers, each having lanterns equipped with shutters, observe each other 's lanterns at some distance. The first observer opens the shutter of his lamp, and, the second, upon seeing the light, immediately opens the shutter of his own lantern. The time between the first observer 's opening his shutter and seeing the light from the second observer 's lamp indicates the time it takes light to travel back and forth between the two observers. Galileo reported that when he tried this at a distance of less than a mile, he was unable to determine whether or not the light appeared instantaneously. Sometime between Galileo 's death and 1667, the members of the Florentine Accademia del Cimento repeated the experiment over a distance of about a mile and obtained a similarly inconclusive result. We now know that the speed of light is far too fast to be measured by such methods (with human shutter - openers on Earth).
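A rough check using the modern value of the speed of light (which Galileo of course did not have) shows why the lantern experiment was bound to be inconclusive: over a separation of about one mile the round-trip time is

t = \frac{2d}{c} \approx \frac{2 \times 1.6 \times 10^{3}\ \text{m}}{3 \times 10^{8}\ \text{m/s}} \approx 1.1 \times 10^{-5}\ \text{s},

about eleven microseconds, thousands of times shorter than human reaction time, so any delay the observers recorded was entirely their own response time.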
Galileo put forward the basic principle of relativity, that the laws of physics are the same in any system that is moving at a constant speed in a straight line, regardless of its particular speed or direction. Hence, there is no absolute motion or absolute rest. This principle provided the basic framework for Newton 's laws of motion and is central to Einstein 's special theory of relativity.
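The principle is commonly summarised today by the Galilean transformation between two frames in uniform relative motion (modern notation; Galileo stated the idea verbally, using the example of experiments carried out below decks on a smoothly moving ship):

x' = x - v t, \qquad y' = y, \qquad z' = z, \qquad t' = t,

under which accelerations, and hence the laws of motion, are the same in both frames, so no mechanical experiment can distinguish uniform motion from rest.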
A biography by Galileo 's pupil Vincenzo Viviani stated that Galileo had dropped balls of the same material, but different masses, from the Leaning Tower of Pisa to demonstrate that their time of descent was independent of their mass. This was contrary to what Aristotle had taught: that heavy objects fall faster than lighter ones, in direct proportion to weight. While this story has been retold in popular accounts, there is no account by Galileo himself of such an experiment, and it is generally accepted by historians that it was at most a thought experiment which did not actually take place. An exception is Drake, who argues that the experiment did take place, more or less as Viviani described it. The experiment described was actually performed by Simon Stevin (commonly known as Stevinus) and Jan Cornets de Groot, although the building used was actually the church tower in Delft in 1586. However, most of his experiments with falling bodies were carried out using inclined planes where both the issues of timing and air resistance were much reduced.
In his 1638 Discorsi, Galileo 's character Salviati, widely regarded as Galileo 's spokesman, held that all unequal weights would fall with the same finite speed in a vacuum. But this had previously been proposed by Lucretius and Simon Stevin. Salviati also held that it could be experimentally demonstrated by the comparison of pendulum motions in air with bobs of lead and of cork which had different weight but which were otherwise similar.
Galileo proposed that a falling body would fall with a uniform acceleration, as long as the resistance of the medium through which it was falling remained negligible, or in the limiting case of its falling through a vacuum. He also derived the correct kinematical law for the distance travelled during a uniform acceleration starting from rest -- namely, that it is proportional to the square of the elapsed time (d ∝ t²). Prior to Galileo, Nicole Oresme, in the 14th century, had derived the times - squared law for uniformly accelerated change, and Domingo de Soto had suggested in the 16th century that bodies falling through a homogeneous medium would be uniformly accelerated. Galileo expressed the time - squared law using geometrical constructions and mathematically precise words, adhering to the standards of the day. (It remained for others to re-express the law in algebraic terms).
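In the algebraic form later writers gave it, the law for a uniform acceleration a starting from rest is

v = a t, \qquad d = \tfrac{1}{2} a t^{2}, \qquad\text{so}\qquad d \propto t^{2};

for instance, a body falls four times as far in two seconds as it does in one.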
He also concluded that objects retain their velocity in the absence of any impediments to their motion, thereby contradicting the generally accepted Aristotelian hypothesis that a body could only remain in so - called "violent '', "unnatural '', or "forced '' motion so long as an agent of change (the "mover '') continued to act on it. Philosophical ideas relating to inertia had been proposed by John Philoponus and Jean Buridan. Galileo stated: "Imagine any particle projected along a horizontal plane without friction; then we know, from what has been more fully explained in the preceding pages, that this particle will move along this same plane with a motion which is uniform and perpetual, provided the plane has no limits ''. This was incorporated into Newton 's laws of motion (first law).
While Galileo 's application of mathematics to experimental physics was innovative, his mathematical methods were the standard ones of the day, including dozens of examples of an inverse proportion square root method passed down from Fibonacci and Archimedes. The analysis and proofs relied heavily on the Eudoxian theory of proportion, as set forth in the fifth book of Euclid 's Elements. This theory had become available only a century before, thanks to accurate translations by Tartaglia and others; but by the end of Galileo 's life, it was being superseded by the algebraic methods of Descartes.
The concept now named Galileo 's paradox was not original with him. His proposed solution, that infinite numbers can not be compared, is no longer considered useful.
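For context, the paradox as Galileo presented it in Two New Sciences concerns the perfect squares: pairing each positive integer with its square,

n \longmapsto n^{2}: \qquad 1 \to 1,\; 2 \to 4,\; 3 \to 9,\; \dots

shows there are "as many '' squares as integers, even though most integers are not squares; Galileo concluded that "equal '', "greater '', and "less '' simply do not apply to infinite collections.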
The Galileo affair was largely forgotten after Galileo 's death, and the controversy subsided. The Inquisition 's ban on reprinting Galileo 's works was lifted in 1718 when permission was granted to publish an edition of his works (excluding the condemned Dialogue) in Florence. In 1741, Pope Benedict XIV authorised the publication of an edition of Galileo 's complete scientific works which included a mildly censored version of the Dialogue. In 1758, the general prohibition against works advocating heliocentrism was removed from the Index of prohibited books, although the specific ban on uncensored versions of the Dialogue and Copernicus 's De Revolutionibus remained. All traces of official opposition to heliocentrism by the church disappeared in 1835 when these works were finally dropped from the Index.
Interest in the Galileo affair was revived in the early 19th century, when Protestant polemicists used it (and other events such as the Spanish Inquisition and the myth of the flat Earth) to attack Roman Catholicism. Interest in it has waxed and waned ever since. In 1939, Pope Pius XII, in his first speech to the Pontifical Academy of Sciences, within a few months of his election to the papacy, described Galileo as being among the "most audacious heroes of research... not afraid of the stumbling blocks and the risks on the way, nor fearful of the funereal monuments ''. His close advisor of 40 years, Professor Robert Leiber, wrote: "Pius XII was very careful not to close any doors (to science) prematurely. He was energetic on this point and regretted that in the case of Galileo. ''
On 15 February 1990, in a speech delivered at the Sapienza University of Rome, Cardinal Ratzinger (later Pope Benedict XVI) cited some current views on the Galileo affair as forming what he called "a symptomatic case that permits us to see how deep the self - doubt of the modern age, of science and technology goes today ''. Some of the views he cited were those of the philosopher Paul Feyerabend, whom he quoted as saying "The Church at the time of Galileo kept much more closely to reason than did Galileo himself, and she took into consideration the ethical and social consequences of Galileo 's teaching too. Her verdict against Galileo was rational and just and the revision of this verdict can be justified only on the grounds of what is politically opportune. '' The Cardinal did not clearly indicate whether he agreed or disagreed with Feyerabend 's assertions. He did, however, say "It would be foolish to construct an impulsive apologetic on the basis of such views. ''
On 31 October 1992, Pope John Paul II expressed regret for how the Galileo affair was handled, and issued a declaration acknowledging the errors committed by the Catholic Church tribunal that judged the scientific positions of Galileo Galilei, as the result of a study conducted by the Pontifical Council for Culture. In March 2008, the head of the Pontifical Academy of Sciences, Nicola Cabibbo, announced a plan to honour Galileo by erecting a statue of him inside the Vatican walls. In December of the same year, during events to mark the 400th anniversary of Galileo 's earliest telescopic observations, Pope Benedict XVI praised his contributions to astronomy. A month later, however, the head of the Pontifical Council for Culture, Gianfranco Ravasi, revealed that the plan to erect a statue of Galileo in the grounds of the Vatican had been suspended.
According to Stephen Hawking, Galileo probably bears more of the responsibility for the birth of modern science than anybody else, and Albert Einstein called him the father of modern science.
Galileo 's astronomical discoveries and investigations into the Copernican theory have led to a lasting legacy which includes the categorisation of the four large moons of Jupiter discovered by Galileo (Io, Europa, Ganymede and Callisto) as the Galilean moons. Other scientific endeavours and principles are named after Galileo, including the Galileo spacecraft, the first spacecraft to enter orbit around Jupiter, the proposed Galileo global satellite navigation system, the transformation between inertial systems in classical mechanics denoted the Galilean transformation, and the gal, sometimes known as the galileo, a non-SI unit of acceleration.
Partly because 2009 was the fourth centenary of Galileo 's first recorded astronomical observations with the telescope, the United Nations scheduled it to be the International Year of Astronomy. A global scheme was laid out by the International Astronomical Union (IAU), also endorsed by UNESCO -- the UN body responsible for educational, scientific and cultural matters. The International Year of Astronomy 2009 was intended to be a global celebration of astronomy and its contributions to society and culture, stimulating worldwide interest not only in astronomy but science in general, with a particular slant towards young people.
Asteroid 697 Galilea is named in his honour.
Galileo is mentioned several times in the "opera '' section of the Queen song, "Bohemian Rhapsody ''. He features prominently in the song "Galileo '' performed by the Indigo Girls and in Amy Grant 's "Galileo '' on her Heart in Motion album.
Twentieth - century plays have been written on Galileo 's life, including Life of Galileo (1943) by the German playwright Bertolt Brecht, with a film adaptation (1975) of it, and Lamp At Midnight (1947) by Barrie Stavis, as well as the 2008 play "Galileo Galilei ''.
Kim Stanley Robinson wrote a science fiction novel entitled Galileo 's Dream (2009), in which Galileo is brought into the future to help resolve a crisis of scientific philosophy; the story moves back and forth between Galileo 's own time and a hypothetical distant future and contains a great deal of biographical information.
Galileo Galilei was recently selected as a main motif for a high value collectors ' coin: the € 25 International Year of Astronomy commemorative coin, minted in 2009. This coin also commemorates the 400th anniversary of the invention of Galileo 's telescope. The obverse shows a portion of his portrait and his telescope. The background shows one of his first drawings of the surface of the moon. In the silver ring, other telescopes are depicted: the Isaac Newton Telescope, the observatory in Kremsmünster Abbey, a modern telescope, a radio telescope and a space telescope. In 2009, the Galileoscope was also released. This is a mass - produced, low - cost educational 2 - inch (51 mm) telescope with relatively high quality.
Galileo 's early works describing scientific instruments include the 1586 tract entitled The Little Balance (La Billancetta) describing an accurate balance to weigh objects in air or water and the 1606 printed manual Le Operazioni del Compasso Geometrico et Militare on the operation of a geometrical and military compass.
His early works in dynamics, the science of motion and mechanics were his circa 1590 Pisan De Motu (On Motion) and his circa 1600 Paduan Le Meccaniche (Mechanics). The former was based on Aristotelian -- Archimedean fluid dynamics and held that the speed of gravitational fall in a fluid medium was proportional to the excess of a body 's specific weight over that of the medium, whereby in a vacuum, bodies would fall with speeds in proportion to their specific weights. It also subscribed to the Philoponan impetus dynamics in which impetus is self - dissipating and free - fall in a vacuum would have an essential terminal speed according to specific weight after an initial period of acceleration.
Galileo 's 1610 The Starry Messenger (Sidereus Nuncius) was the first scientific treatise to be published based on observations made through a telescope. It reported his discoveries of:
Galileo published a description of sunspots in 1613 entitled Letters on Sunspots suggesting the Sun and heavens are corruptible. The Letters on Sunspots also reported his 1610 telescopic observations of the full set of phases of Venus, and his discovery of the puzzling "appendages '' of Saturn and their even more puzzling subsequent disappearance. In 1615, Galileo prepared a manuscript known as the "Letter to the Grand Duchess Christina '' which was not published in printed form until 1636. This letter was a revised version of the Letter to Castelli, which was denounced by the Inquisition as an incursion upon theology by advocating Copernicanism both as physically true and as consistent with Scripture. In 1616, after the order by the inquisition for Galileo not to hold or defend the Copernican position, Galileo wrote the "Discourse on the Tides '' (Discorso sul flusso e il reflusso del mare) based on the Copernican earth, in the form of a private letter to Cardinal Orsini. In 1619, Mario Guiducci, a pupil of Galileo 's, published a lecture written largely by Galileo under the title Discourse on the Comets (Discorso Delle Comete), arguing against the Jesuit interpretation of comets.
In 1623, Galileo published The Assayer -- Il Saggiatore, which attacked theories based on Aristotle 's authority and promoted experimentation and the mathematical formulation of scientific ideas. The book was highly successful and even found support among the higher echelons of the Christian church. Following the success of The Assayer, Galileo published the Dialogue Concerning the Two Chief World Systems (Dialogo sopra i due massimi sistemi del mondo) in 1632. Despite taking care to adhere to the Inquisition 's 1616 instructions, the claims in the book favouring Copernican theory and a non-geocentric model of the solar system led to Galileo being tried and to the publication of his works being banned. Despite the publication ban, Galileo published his Discourses and Mathematical Demonstrations Relating to Two New Sciences (Discorsi e Dimostrazioni Matematiche, intorno a due nuove scienze) in 1638 in Holland, outside the jurisdiction of the Inquisition.
Galileo 's main written works are as follows:
|
who played penny's father on big bang | Keith Carradine - wikipedia
Keith Ian Carradine (born August 8, 1949) is an American actor, singer and songwriter who has had success on stage, film and television. He is perhaps best known for his roles as Tom Frank in Robert Altman 's Nashville, Wild Bill Hickok in the HBO series Deadwood, FBI agent Frank Lundy in Dexter and US President Conrad Dalton in Madam Secretary. In addition, he is a Golden Globe - and Academy Award - winning songwriter. As a member of the Carradine family, he is part of an acting dynasty that began with his father, John Carradine.
Keith Carradine was born in San Mateo, California. He is the son of actress and artist Sonia Sorel (née Henius), and actor John Carradine. His paternal half - brothers are Bruce and David Carradine, his maternal half - brother is Michael Bowen, and his full brothers are Christopher and Robert Carradine. His maternal great - grandfather was biochemist Max Henius, and his maternal great - grandmother was the sister of historian Johan Ludvig Heiberg.
Carradine 's childhood was difficult. He said that his father drank and his mother "was a manic depressive paranoid schizophrenic catatonic -- she had it all. '' His parents were divorced in 1957, when he was eight years old. A bitter custody battle led to his father gaining custody of him and his brothers, Christopher and Robert, after the children had spent three months in a home for abused children as wards of the court. Keith said of the experience, "It was like being in jail. There were bars on the windows, and we were only allowed to see our parents through glass doors. It was very sad. We would stand there on either side of the glass door crying. '' He was raised primarily by his maternal grandmother, and he rarely saw either of his parents. His mother was not permitted to see him for eight years following the custody settlement.
After high school, Carradine entertained the thought of becoming a forest ranger but opted to study drama at Colorado State University. He dropped out after one semester and drifted back to California, moving in with his older half - brother, David. David encouraged Keith to pursue an acting career, paid for his acting and vocal lessons, and helped him get an agent.
As a youth, Carradine had opportunities to appear on stage with his father, John Carradine, in the latter 's productions of Shakespeare. Thus, he had some background in theater when he was cast in the original Broadway run of Hair (1969), which launched his acting career. In that production he started out in the chorus and worked his way up to the lead roles playing Woof and Claude. He said of his involvement in Hair, "I really did n't plan to audition. I just went along with my brother, David, and his girlfriend at the time, Barbara Hershey, and two of their friends. I was simply going to play the piano for them while they sang, but I 'm the one the staff wound up getting interested in. ''
His stage career is further distinguished by his Tony - nominated performance for Best Actor (Musical) as the title character in the Tony Award - winning musical The Will Rogers Follies in 1991, for which he also received a Drama Desk nomination. He won the Outer Critics Circle Award for Foxfire with Hume Cronyn and Jessica Tandy, and appeared as Lawrence in Dirty Rotten Scoundrels at the Imperial Theater. In 2008, he appeared as Dr. Farquhar Off - Broadway in Mindgame, a thriller by Anthony Horowitz, directed by Ken Russell, who made his New York directorial debut with the production. In March and April 2013, he starred in the Broadway production of Hands on a Hardbody. He was nominated for the Tony Award and the Drama Desk Award for his work.
Carradine 's first notable film appearance was in director Robert Altman 's McCabe & Mrs. Miller (1971). His next film, Emperor of the North Pole (1973), was re-released with a shorter title Emperor of the North. Carradine played a young aspiring hobo. The film was directed by Robert Aldrich and also starred Lee Marvin and Ernest Borgnine. Carradine then starred in Altman 's film Thieves Like Us (1974), then played a principal character, a callow, womanizing folk singer, Tom Frank, in Altman 's critically acclaimed film Nashville (1975; see "Music and song writing ''). He had difficulty shaking the image of Tom Frank following the popularity of the film. He felt the role gave him the reputation of being "a cad. ''
In 1977, Carradine starred opposite Harvey Keitel in Ridley Scott 's The Duellists. Pretty Baby followed in 1978. He has acted in several offbeat films of Altman 's protege Alan Rudolph, playing a disarmingly candid madman in Choose Me (1984), an incompetent petty criminal in Trouble in Mind (1985), and an American artist in 1930s Paris in The Moderns (1988).
He appeared with brothers David and Robert as the Younger brothers in Walter Hill 's film The Long Riders (1980). Keith played Jim Younger in that film. In 1981, he appeared again under Hill 's direction in Southern Comfort. In 1994, he had a cameo role as Will Rogers in Rudolph 's film about Dorothy Parker, Mrs. Parker and the Vicious Circle. He co-starred with Daryl Hannah as homicidal sociopath John Netherwood in the thriller The Tie That Binds (1995). In 2011, he starred in Cowboys & Aliens, an American science fiction western film directed by Jon Favreau and also starring Daniel Craig, Harrison Ford, and Olivia Wilde. Carradine traveled to Tuscany in 2012 to executive produce and star in John Charles Jopson 's Edgar Allan Poe - inspired film ' Terroir '. In 2013, he starred in Ai n't Them Bodies Saints, which won the 2013 Sundance Film Festival award for cinematography. In 2016, Carradine played Edward Dickinson, the father of Emily Dickinson, in A Quiet Passion, a biographical film written and directed by Terence Davies about the life of the American poet.
In 2016, Carradine returned to star in his fourth Alan Rudolph film, Ray Meets Helen, alongside actress Sondra Locke.
His brother, David, said in an interview that Keith could play any instrument he wanted, including bagpipes and the French horn. Like David, Keith integrated his musical talents with his acting performances. In 1975, he performed a song he 'd written, "I 'm Easy '', in the movie Nashville. It was a popular hit, and Carradine won a Golden Globe and an Oscar for Best Original Song for the tune. This led to a brief singing career; he signed a contract with Asylum Records and released two albums -- I 'm Easy (1976) and Lost & Found (1978). In 1984, he appeared in the music video for Madonna 's single "Material Girl. '' In the early 1990s, he played the lead role in the Tony Award - winning musical The Will Rogers Follies.
In 1972, Carradine appeared briefly in the first season of the hit television series, Kung Fu, which starred his brother, David. Keith played a younger version of David 's character, Kwai Chang Caine. In 1987, he starred in the highly rated CBS miniseries Murder Ordained with JoBeth Williams and Kathy Bates. Other TV appearances include My Father My Son (1988), a television film. In 1983, he appeared as Foxy Funderburke, a murderous pedophile, in the television miniseries Chiefs, based on the Stuart Woods novel of the same name. His performance in Chiefs earned him a nomination for an Emmy Award in the "Outstanding Supporting Actor in a Limited Series or a Special '' category. Carradine also starred in the ABC sitcom Complete Savages, and he played Wild Bill Hickok in the HBO series Deadwood.
Carradine hosted the documentary Wild West Tech series on the History Channel in the 2003 -- 2004 season, before handing the job over to his brother, David. In the 2005 miniseries Into the West, produced by Steven Spielberg and Dreamworks, Carradine played Richard Henry Pratt. During the second and fourth seasons of the Showtime series Dexter, he appeared numerous times as FBI Special Agent Frank Lundy. Carradine is credited with guest starring twice on the suspense - drama Criminal Minds, as the psychopathic serial killer Frank Breitkopf. Other shows he appeared in include The Big Bang Theory (as Penny 's father Wyatt), Star Trek: Enterprise ("First Flight '' episode) and the Starz series Crash. Carradine also made two guest appearances on NCIS in 2012 and 2014. Also in 2014, he had a recurring role as Lou Solverson in the FX series Fargo, followed by a recurring role as President Conrad Dalton on Madam Secretary. He was promoted to series regular, starting with the show 's second season.
In 2012, Carradine lent his voice to the video game Hitman: Absolution, voicing the primary antagonist Blake Dexter.
Keith Carradine met Shelley Plimpton in the Broadway musical Hair. She was married to actor Steve Curry, albeit separated, and she and Carradine became romantically involved. After Carradine left the show and was in California he learned that Shelley was pregnant and had reunited with Curry. He met his daughter, Martha Plimpton, when she was four years old, after Shelley and Steve Curry had divorced. He said of Shelley, "She did a hell of a job raising Martha. I was not there. I was a very young man, absolutely terrified. She just took that in, and then she welcomed me into Martha 's life when I was ready. ''
Carradine married Sandra Will on February 6, 1982. They were separated in 1993, before Will filed for divorce in 1999. The couple had two children: Cade Richmond Carradine (born July 19, 1982) and Sorel Johannah Carradine (born June 18, 1985). In 2006, Will pleaded guilty to two counts of perjury for lying to a grand jury about her involvement in the Anthony Pellicano wire tap scandal. She hired, then became romantically involved with, Pellicano after her divorce from Carradine. According to FBI documents, Pellicano tapped Keith Carradine 's telephone and recorded calls between him and girlfriend Hayley Leslie DuMond at Will 's request, along with DuMond 's parents. Carradine filed a civil lawsuit against Will and Pellicano which was settled in 2013 before it went to trial.
On November 18, 2006, Keith Carradine married actress Hayley DuMond, in Turin, Italy. They met in 1997 when they co-starred in the Burt Reynolds film The Hunter 's Moon.
|
who wins ben h season of the bachelor | The Bachelor (season 20) - wikipedia
The 20th season of The Bachelor premiered on January 4, 2016. This season features 26 - year - old Ben Higgins, a software salesman from Warsaw, Indiana.
Higgins attended Indiana University, where he graduated with a BS in business administration and management through the School of Public and Environmental Affairs (SPEA). He was also a member of Delta Upsilon 's Indiana chapter, where he served as the vice president of external affairs. He placed third on the 11th season of The Bachelorette featuring Kaitlyn Bristowe.
Casting began during the 19th season of the show. On August 24, 2015, during season one, episode four of Bachelor in Paradise: After Paradise, Ben Higgins was announced as the next Bachelor.
The cast includes season 19 runner - up Becca Tilley and fellow contestant Amber James, news anchor Olivia Caridi from WCYB - TV, and Joelle "JoJo '' Fletcher, who is a half - sister of Ready for Love star Ben Patton.
The season traveled to many places, including Las Vegas, Nevada; Mexico City, Mexico; Pig Island in The Bahamas; and the state of Indiana, with appearances from rapper Ice Cube, comedian Kevin Hart, soccer players Alex Morgan and Kelley O'Hara, comedian Terry Fator, and Indiana Pacers basketball player Paul George.
The season began with 28 contestants, including a set of twins.
Caila Quinn was originally chosen to be the bachelorette for season twelve of The Bachelorette, but the producers dropped her just days before filming started and chose Joelle "JoJo '' Fletcher as the next lead instead.
Season 3
Quinn, Lauren Himle, Tiara Soleim, Jami Letain, Shushanna Mkrtychyan, Jennifer Saviano, Isabel "Izzy '' Goodkind, Lace Morris, Jubilee Sharpe, Leah Block, Haley Ferguson, Emily Ferguson, and Amanda Stanton were chosen to compete for the third season of Bachelor in Paradise. Sharpe and Block were eliminated in the first week. The Ferguson sisters quit in the fourth week. Quinn, Goodkind, and Mkrtychyan quit in week five, while Letain, Himle, and Soleim were eliminated that same week. Saviano left in week six. Morris and Stanton ended the season engaged to Bachelor Nation alumni Grant Kemp and Josh Murray, respectively.
Season 4
Stanton returned for the fourth season of Bachelor in Paradise, marking her third reality show appearance. The Ferguson twins also appeared on the fourth season, their third appearance in the Bachelor Nation franchise, but they quit in week four. Stanton quit in week four as well.
Outside of the Bachelor Nation franchise, Higgins and Lauren Bushnell appeared in their own reality series Ben and Lauren: Happily Ever After? on the sister network Freeform. The Ferguson sisters star in the Freeform reality series The Twins: Happily Ever After?.
Higgins made his third Bachelor Nation appearance in The Bachelor Winter Games, competing as part of Team USA. He quit in week three.
|
why do we need sociology instead of just using common sense | Common sense - wikipedia
Common sense is sound practical judgment concerning everyday matters, or a basic ability to perceive, understand, and judge that is shared by ("common to '') nearly all people. The first type of common sense is sometimes described as "the knack for seeing things as they are, and doing things as they ought to be done. '' The second type is sometimes described as folk wisdom, "signifying unreflective knowledge not reliant on specialized training or deliberative thought. '' The two types are intertwined, as the person who has common sense is in touch with common - sense ideas, which emerge from the lived experiences of those commonsensical enough to perceive them.
Smedslund defines common sense as "the system of implications shared by the competent users of a language '' and notes, "A proposition in a given context belongs to common sense if and only if all competent users of the language involved agree that the proposition in the given context is true and that its negation is false. ''
The everyday understanding of common sense derives from philosophical discussion involving several European languages. Related terms in other languages include Latin sensus communis, Greek κοινὴ αἴσθησις (koinē aísthēsis), and French bon sens, but these are not straightforward translations in all contexts. Similarly in English, there are different shades of meaning, implying more or less education and wisdom: "good sense '' is sometimes seen as equivalent to "common sense '', and sometimes not.
"Common sense '' has at least two specifically philosophical meanings. One is a capability of the animal soul (Greek psukhē) proposed by Aristotle, which enables different individual senses to collectively perceive the characteristics of physical things such as movement and size, which all physical things have in different combinations, allowing people and other animals to distinguish and identify physical things. This common sense is distinct from basic sensory perception and from human rational thinking, but cooperates with both.
The second special use of the term is Roman - influenced and is used for the natural human sensitivity for other humans and the community. Just like the everyday meaning, both of these refer to a type of basic awareness and ability to judge that most people are expected to share naturally, even if they can not explain why.
All these meanings of "common sense '', including the everyday one, are inter-connected in a complex history and have evolved during important political and philosophical debates in modern Western civilisation, notably concerning science, politics and economics. The interplay between the meanings has come to be particularly notable in English, as opposed to other western European languages, and the English term has become international.
In modern times the term "common sense '' has frequently been used for rhetorical effect, sometimes pejorative, and sometimes appealed to positively, as an authority. It can be negatively equated to vulgar prejudice and superstition, or on the contrary it is often positively contrasted to them as a standard for good taste and as the source of the most basic axioms needed for science and logic. It was at the beginning of the eighteenth century that this old philosophical term first acquired its modern English meaning: "Those plain, self - evident truths or conventional wisdom that one needed no sophistication to grasp and no proof to accept precisely because they accorded so well with the basic (common sense) intellectual capacities and experiences of the whole social body ''. This began with Descartes ' criticism of it, and what came to be known as the dispute between "rationalism '' and "empiricism ''. In the opening line of one of his most famous books, Discourse on Method, Descartes established the most common modern meaning, and its controversies, when he stated that everyone has a similar and sufficient amount of common sense (bon sens), but it is rarely used well. Therefore, a skeptical logical method described by Descartes needs to be followed and common sense should not be overly relied upon. In the ensuing 18th - century Enlightenment, common sense came to be seen more positively as the basis for modern thinking. It was contrasted to metaphysics, which was, like Cartesianism, associated with the ancien régime. Thomas Paine 's polemical pamphlet Common Sense (1776) has been described as the most influential political pamphlet of the 18th century, affecting both the American and French revolutions. Today, the concept of common sense, and how it should best be used, remains linked to many of the most perennial topics in epistemology and ethics, with special focus often directed at the philosophy of the modern social sciences.
The origin of the term is in the works of Aristotle. The most well - known such case is De Anima Book III, chapter 2, especially at line 425a27. The passage is about how the animal mind converts raw sense perceptions from the five specialized sense perceptions, into perceptions of real things moving and changing, which can be thought about. According to Aristotle 's understanding of perception, each of the five senses perceives one type of "perceptible '' or "sensible '' which is specific (Greek: idia) to it. For example, sight can see colour. But Aristotle was explaining how the animal mind, not just the human mind, links and categorizes different tastes, colours, feelings, smells and sounds in order to perceive real things in terms of the "common sensibles '' (or "common perceptibles ''). In this discussion "common '' (koinos) is a term opposed to specific or particular (idia). The Greek for these common sensibles is ta koina, which means shared or common things, and examples include the oneness of each thing, with its specific shape and size and so on, and the change or movement of each thing. Distinct combinations of these properties are common to all perceived things.
In this passage, Aristotle explained that concerning these koina (such as movement) we already have a sense, a "common sense '' or sense of the common things (Greek: koinē aisthēsis), which does not work by accident (Greek: kata sumbebēkos). And there is no specific (idia) sense perception for movement and other koina, because then we would not perceive the koina at all, except by accident. As examples of perceiving by accident Aristotle mentions using the specific sense perception vision on its own to see that something is sweet, or to recognize a friend by their distinctive color. Lee (2011, p. 31) explains that "when I see Socrates, it is not insofar as he is Socrates that he is visible to my eye, but rather because he is coloured ''. So the normal five individual senses do sense the common perceptibles according to Aristotle (and Plato), but it is not something they necessarily interpret correctly on their own. Aristotle proposes that the reason for having several senses is in fact that it increases the chances that we can distinguish and recognize things correctly, and not just occasionally or by accident. Each sense is used to identify distinctions, such as sight identifying the difference between black and white, but, says Aristotle, all animals with perception must have "some one thing '' that can distinguish black from sweet. The common sense is where this comparison happens, and this must occur by comparing impressions (or symbols or markers; Greek: sēmeion) of what the specialist senses have perceived. The common sense is therefore also where a type of consciousness originates, "for it makes us aware of having sensations at all ''. And it receives physical picture imprints from the imaginative faculty, which are then memories that can be recollected.
The discussion was apparently intended to improve upon the account of Aristotle 's friend and teacher Plato in his Socratic dialogue, the Theaetetus. But Plato 's dialogue presented an argument that recognizing koina is an active thinking process in the rational part of the human soul, making the senses instruments of the thinking part of man. Plato 's Socrates says this kind of thinking is not a kind of sense at all. Aristotle, trying to give a more general account of the souls of all animals, not just humans, moved the act of perception out of the rational thinking soul into this sensus communis, which is something like a sense, and something like thinking, but not rational.
The passage is difficult to interpret and there is little consensus about many of the details. Gregorić (2007, pp. 204 -- 205) has argued that this may be because Aristotle did not use the term as a standardized technical term at all. For example, in some passages in his works, Aristotle seems to use the term to refer to the individual sense perceptions simply being common to all people, or common to various types of animals. There is also difficulty with trying to determine whether the common sense is truly separable from the individual sense perceptions and from imagination, in anything other than a conceptual way as a capability. Aristotle never fully spells out the relationship between the common sense and the imaginative faculty ("phantasia ''), although the two clearly work together in animals, and not only humans, for example in order to enable a perception of time. They may even be the same. Despite hints by Aristotle himself that they were united, early commentators such as Alexander of Aphrodisias and Al - Farabi felt they were distinct, but later, Avicenna emphasized the link, influencing future authors including Christian philosophers. Gregorić (2007, p. 205) argues that Aristotle used the term "common sense '' both to discuss the individual senses when these act as a unity, which Gregorić calls "the perceptual capacity of the soul '', or the higher level "sensory capacity of the soul '' that represents the senses and the imagination working as a unity. According to Gregorić, there appears to have been a standardization of the term koinē aisthēsis as a term for the perceptual capacity (not the higher level sensory capacity), which occurred by the time of Alexander of Aphrodisias at the latest.
Aristotle 's understanding of the soul (Greek psyche) has an extra level of complexity in the form of the nous or "intellect '' -- which is something only humans have and enables humans to perceive things differently from other animals. It works with images coming from the common sense and imagination, using reasoning (Greek logos) as well as the active intellect. The nous identifies the true forms of things, while the common sense identifies shared aspects of things. Though scholars have varying interpretations of the details, Aristotle 's "common sense '' was in any case not rational, in the sense that it implied no ability to explain the perception. Reason or rationality (logos) exists only in man according to Aristotle, and yet some animals can perceive "common perceptibles '' such as change and shape, and some even have imagination according to Aristotle. Animals with imagination come closest to having something like reasoning and nous. Plato, on the other hand was apparently willing to allow that animals could have some level of thought, meaning that he did not have to explain their sometimes complex behavior with a strict division between high - level perception processing and the human - like thinking such as being able to form opinions. Gregorić additionally argues that Aristotle can be interpreted as using the verbs, phronein and noein, to distinguish two types of thinking or awareness, one being found in animals, and the other unique to humans and involving reason. Therefore, in Aristotle (and the medieval Aristotelians) the universals used to identify and categorize things are divided into two. In medieval terminology these are the species sensibilis used for perception and imagination in animals, and the species intelligibilis or apprehendable forms used in the human intellect or nous.
Aristotle also occasionally called the koinē aisthēsis (or one version of it), the prōton aisthētikon, the first of the senses. (According to Gregorić this is specifically in contexts where it refers to the higher order common sense that includes imagination.) Later philosophers developing this line of thought, such as Themistius, Galen, and Al - Farabi, called it the ruler of the senses or ruling sense, apparently a metaphor developed from a section of Plato 's Timaeus (70b). Augustine and some of the Arab writers, also called it the "inner sense ''. The concept of the inner senses, plural, was further developed in the Middle Ages. Under the influence of the great Persian philosophers Al - Farabi and Avicenna, several inner senses came to be listed. "Thomas Aquinas and John of Jandun recognized four internal senses: the common sense, imagination, vis cogitativa, and memory. Avicenna, followed by Robert Grosseteste, Albert the Great, and Roger Bacon, argued for five internal senses: the common sense, imagination, fantasy, vis aestimativa, and memory. '' By the time of Descartes and Hobbes, in the 1600s, the inner senses had been standardized to five wits, which complemented the more well - known five "external '' senses. Under this medieval scheme the common sense was understood to be seated not in the heart, as Aristotle had thought, but in the anterior Galenic ventricle of the brain. The great anatomist Andreas Vesalius however found no connections between the anterior ventricle and the sensory nerves, leading to speculation about other parts of the brain into the 1600s.
Heller - Roazen (2008) writes that "In different ways the philosophers of medieval Latin and Arabic tradition, from Al - Farabi to Avicenna, Averroës, Albert, and Thomas, found in the De Anima and the Parva Naturalia the scattered elements of a coherent doctrine of the "central '' faculty of the sensuous soul. '' It was "one of the most successful and resilient of Aristotelian notions. ''
"Sensus communis '' is the Latin translation of koinē aesthēsis, which came to be recovered by Medieval scholastics when discussing Aristotelian theories of perception. However, in earlier Latin during the Roman empire the term had taken a distinct ethical detour, developing new shades of meaning. These especially Roman meanings were apparently influenced by several Stoic Greek terms with the word koinē (meaning shared or common); not only koinē aisthēsis, but also such terms as koinē nous, koinē ennoia, and koinonoēmosunē, all of which involve nous, something, at least in Aristotle, that would not be present in "lower '' animals.
Another link between Latin communis sensus and Aristotle 's Greek was in rhetoric, a subject that Aristotle was the first to systematize. In rhetoric, a prudent speaker must take account of opinions (Greek doxai) that are widely held. Aristotle referred to such commonly held beliefs not as koinai doxai, which is a term he used for self - evident logical axioms, but with other terms such as endoxa.
In his Rhetoric for example Aristotle mentions "koinōn... tàs písteis '' or "common beliefs '', saying that "our proofs and arguments must rest on generally accepted principles, (...) when speaking of converse with the multitude ''. In a similar passage in his own work on rhetoric, De Oratore, Cicero wrote that "in oratory the very cardinal sin is to depart from the language of everyday life and the usage approved by the sense of the community ''. The sense of the community is in this case one translation of "communis sensus '' in the Latin of Cicero.
Whether Latin writers such as Cicero deliberately used this Aristotelian term in a new, more peculiarly Roman way, probably also influenced by Greek Stoicism, therefore remains a subject of discussion. Schaeffer (1990, p. 112) has proposed for example that the Roman republic maintained a very "oral '' culture whereas in Aristotle 's time rhetoric had come under heavy criticism from philosophers such as Socrates. Peters Agnew (2008) argues, in agreement with Shaftesbury in the 18th century, that the concept developed from the Stoic concept of ethical virtue, influenced by Aristotle, but emphasizing the role of both individual perception and shared communal understanding. But in any case a complex of ideas attached itself to the term, only to be almost forgotten in the Middle Ages before eventually returning to ethical discussion in 18th - century Europe, after Descartes.
As with other meanings of common sense, for the Romans of the classical era "it designates a sensibility shared by all, from which one may deduce a number of fundamental judgments, that need not, or can not, be questioned by rational reflection ''. But even though Cicero did at least once use the term in a manuscript on Plato 's Timaeus (concerning a primordial "sense, one and common for all (...) connected with nature ''), he and other Roman authors did not normally use it as a technical term limited to discussion about sense perception, as Aristotle apparently had in De Anima, and as the Scholastics later would in the Middle Ages. Instead of referring to all animal judgment, it was used to describe pre-rational, widely shared human beliefs, and therefore it was a near equivalent to the concept of humanitas. This was a term that could be used by Romans to imply not only human nature, but also humane conduct, good breeding, refined manners, and so on. Apart from Cicero, Quintilian, Lucretius, Seneca, Horace and some of the most influential Roman authors influenced by Aristotle 's rhetoric and philosophy used the Latin term "sensus communis '' in a range of such ways. As C.S. Lewis wrote:
Quintilian says it is better to send a boy to school than to have a private tutor for him at home; for if he is kept away from the herd (congressus) how will he ever learn that sensus which we call communis? (I, ii, 20). On the lowest level it means tact. In Horace the man who talks to you when you obviously do n't want to talk lacks communis sensus.
Compared to Aristotle and his strictest medieval followers, these Roman authors were not so strict about the boundary between animal - like common sense and specially human reasoning. As discussed above, Aristotle had attempted to make a clear distinction between, on the one hand, imagination and the sense perception which both use the sensible koina, and which animals also have; and, on the other hand, nous (intellect) and reason, which perceives another type of koina, the intelligible forms, which (according to Aristotle) only humans have. In other words, these Romans allowed that people could have animal - like shared understandings of reality, not just in terms of memories of sense perceptions, but in terms of the way they would tend to explain things, and in the language they use.
One of the last notable philosophers to accept something like the Aristotelian "common sense '' was Descartes in the 17th century, but he also undermined it. He described this inner faculty when writing in Latin in his Meditations on first philosophy. The common sense is the link between the body and its senses, and the true human mind, which according to Descartes must be purely immaterial. Unlike Aristotle, who had placed it in the heart, by the time of Descartes this faculty was thought to be in the brain, and he located it in the pineal gland. Descartes ' judgement of this common sense was that it was enough to persuade the human consciousness of the existence of physical things, but often in a very indistinct way. To get a more distinct understanding of things, it is more important to be methodical and mathematical. This line of thought was taken further, if not by Descartes himself then by those he influenced, until the concept of a faculty or organ of common sense was itself rejected.
René Descartes is generally credited with making obsolete the notion that there was an actual faculty within the human brain that functioned as a sensus communis. The French philosopher did not fully reject the idea of the inner senses, which he appropriated from the Scholastics. But he distanced himself from the Aristotelian conception of a common sense faculty, abandoning it entirely by the time of his Passions of the Soul (1649).
Contemporaries such as Gassendi and Hobbes went beyond Descartes in some ways in their rejection of Aristotelianism, rejecting explanations involving anything other than matter and motion, including the distinction between the animal - like judgement of sense perception, a special separate common sense, and the human mind or nous, which Descartes had retained from Aristotelianism. In contrast to Descartes who "found it unacceptable to assume that sensory representations may enter the mental realm from without ''...
According to Hobbes (...) man is no different from the other animals. (...) Hobbes ' philosophy constituted a more profound rupture with Peripatetic thought. He accepted mental representations but (...) "All sense is fancy '', as Hobbes famously put it, with the only exception of extension and motion.
But Descartes used two different terms in his work, not only the Latin term "sensus communis '', but also the French term bon sens, with which he opens his Discourse on Method. And this second concept survived better. This work was written in French, and does not directly discuss the Aristotelian technical theory of perception. Bon sens is the equivalent of modern English "common sense '' or "good sense ''. As the Aristotelian meaning of the Latin term began to be forgotten after Descartes, his discussion of bon sens gave a new way of defining "sensus communis '' in various European languages (including Latin, even though Descartes himself did not translate bon sens as sensus communis, but treated them as two separate things).
Schaeffer (1990, p. 2) writes that "Descartes is the source of the most common meaning of common sense today: practical judgment ''. Gilson noted that Descartes actually gave bon sens two related meanings: first, the basic and widely shared ability to judge true and false, which he also calls raison (reason); and second, wisdom, the perfected version of the first. The Latin term Descartes uses, bona mens (good mind), derives from the Stoic author Seneca, who used it only in the second sense. Descartes was being original.
The idea that now became influential, developed in both the Latin and French works of Descartes, though coming from different directions, is that common good sense (and indeed sense perception) is not reliable enough for the new Cartesian method of skeptical reasoning. The Cartesian project to replace common good sense with clearly defined mathematical reasoning was aimed at certainty, and not mere probability. It was promoted further by people such as Hobbes, Spinoza, and others and continues to have important impacts on everyday life. In France, the Netherlands, Belgium, Spain and Italy, it was in its initial florescence associated with the administration of Catholic empires of the competing Bourbon, and Habsburg dynasties, both seeking to centralize their power in a modern way, responding to Machiavellianism and Protestantism as part of the so - called counter reformation.
Cartesian theory offered a justification for innovative social change achieved through the courts and administration, an ability to adapt the law to changing social conditions by making the basis for legislation "rational '' rather than "traditional ''.
So after Descartes, critical attention turned from Aristotle and his theory of perception, and more towards Descartes ' own treatment of common good sense, concerning which several 18th - century authors found help in Roman literature.
During the Enlightenment, Descartes ' insistence upon a mathematical - style method of thinking that treated common sense and the sense perceptions sceptically, was accepted in some ways, but also criticized. On the one hand, the approach of Descartes is and was seen as radically sceptical in some ways. On the other hand, like the Scholastics before him, while being cautious of common sense, Descartes was instead seen to rely too much on undemonstrable metaphysical assumptions in order to justify his method, especially in its separation of mind and body (with the sensus communis linking them). Cartesians such as Henricus Regius, Geraud de Cordemoy, and Nicolas Malebranche realized that Descartes 's logic could give no evidence of the "external world '' at all, meaning it had to be taken on faith. Though his own proposed solution was even more controversial, Berkeley famously wrote that enlightenment requires a "revolt from metaphysical notions to the plain dictates of nature and common sense ''. Descartes and the Cartesian "rationalists '', rejected reliance upon experience, the senses and inductive reasoning, and seemed to insist that certainty was possible. The alternative to induction, deductive reasoning, demanded a mathematical approach, starting from simple and certain assumptions. This in turn required Descartes (and later rationalists such as Kant) to assume the existence of innate or "a priori '' knowledge in the human mind - a controversial proposal.
In contrast to the rationalists, the "empiricists '', took their orientation from Francis Bacon, whose arguments for methodical science were earlier than those of Descartes, and less directed towards mathematics and certainty. Bacon is known for his doctrine of the "idols of the mind '', presented in his Novum Organum, and in his Essays described normal human thinking as biased towards believing in lies. But he was also the opponent of all metaphysical explanations of nature, or over-reaching speculation generally, and a proponent of science based on small steps of experience, experimentation and methodical induction. So while agreeing upon the need to help common sense with a methodical approach, he also insisted that starting from common sense, including especially common sense perceptions, was acceptable and correct. He influenced Locke and Pierre Bayle, in their critique of metaphysics, and in 1733 Voltaire "introduced him as the "father '' of the scientific method '' to a French audience, an understanding that was widespread by 1750. Together with this, references to "common sense '' became positive and associated with modernity, in contrast to negative references to metaphysics, which was associated with the ancien regime.
As mentioned above, in terms of the more general epistemological implications of common sense, modern philosophy came to use the term common sense like Descartes, abandoning Aristotle 's theory. While Descartes had distanced himself from it, John Locke abandoned it more openly, while still maintaining the idea of "common sensibles '' that are perceived. But then George Berkeley abandoned both. David Hume agreed with Berkeley on this, and like Locke and Vico saw himself as following Bacon more than Descartes. In his synthesis, which he saw as the first Baconian analysis of man (something the lesser known Vico had claimed earlier), common sense is entirely built up from shared experience and shared innate emotions, and therefore it is indeed imperfect as a basis for any attempt to know the truth or to make the best decision. But he defended the possibility of science without absolute certainty, and consistently described common sense as giving a valid answer to the challenge of extreme skepticism. Concerning such sceptics, he wrote:
But would these prejudiced reasoners reflect a moment, there are many obvious instances and arguments, sufficient to undeceive them, and make them enlarge their maxims and principles. Do they not see the vast variety of inclinations and pursuits among our species; where each man seems fully satisfied with his own course of life, and would esteem it the greatest unhappiness to be confined to that of his neighbour? Do they not feel in themselves, that what pleases at one time, displeases at another, by the change of inclination; and that it is not in their power, by their utmost efforts, to recall that taste or appetite, which formerly bestowed charms on what now appears indifferent or disagreeable? (...) Do you come to a philosopher as to a cunning man, to learn something by magic or witchcraft, beyond what can be known by common prudence and discretion?
Once Thomas Hobbes and Spinoza had applied Cartesian approaches to political philosophy, concerns about the inhumanity of the deductive approach of Descartes increased. With this in mind, Shaftesbury and, much less known at the time, Giambattista Vico, both presented new arguments for the importance of the Roman understanding of common sense, in what is now often referred to, after Hans - Georg Gadamer, as a humanist interpretation of the term. Their concern had several inter-related aspects. One ethical concern was the deliberately simplified method that treated human communities as made up of selfish independent individuals (methodological individualism), ignoring the sense of community that the Romans understood as part of common sense. Another connected epistemological concern was that by considering common good sense as inherently inferior to Cartesian conclusions developed from simple assumptions, an important type of wisdom was being arrogantly ignored.
Shaftesbury 's seminal 1709 essay Sensus Communis: An Essay on the Freedom of Wit and Humour was a highly erudite and influential defense of the use of irony and humour in serious discussions, at least among men of "Good Breeding ''. He drew upon authors such as Seneca, Juvenal, Horace and Marcus Aurelius, for whom, he saw, common sense was not just a reference to widely held vulgar opinions, but something cultivated among educated people living in better communities. One aspect of this, later taken up by authors such as Kant, was good taste. Another very important aspect of common sense particularly interesting to later British political philosophers such as Francis Hutcheson was what came to be called moral sentiment, which is different from a tribal or factional sentiment, but a more general fellow feeling that is very important for larger communities:
A publick Spirit can come only from a social Feeling or Sense of Partnership with Human Kind. Now there are none so far from being Partners in this Sense, or sharers in this common Affection, as they who scarcely know an Equall, nor consider themselves as subject to any law of Fellowship or Community. And thus Morality and good Government go together.
Hutcheson described it as, "a Publick Sense, viz. "our Determination to be pleased with the Happiness of others, and to be uneasy at their Misery. '' '' which, he explains, "was sometimes called κοινονοημοσύνη or Sensus Communis by some of the Antients ''.
A reaction to Shaftesbury in defense of the Hobbesian approach of treating communities as driven by individual self - interest was not long in coming in Bernard Mandeville 's controversial works. Indeed, this approach was never fully rejected, at least in economics. And so, despite the criticism that Adam Smith, Hutcheson 's student and successor at Glasgow University, heaped upon Mandeville and Hobbes, Smith made self - interest a core assumption within nascent modern economics, specifically as part of the practical justification for allowing free markets.
By the late enlightenment period in the 18th century, the communal sense or empathy pointed to by Shaftesbury and Hutcheson had become the "moral sense '' or "moral sentiment '' referred to by Hume and Adam Smith, the latter writing in plural of the "moral sentiments '' with the key one being sympathy, which was not so much a public spirit as such, but a kind of extension of self - interest. Jeremy Bentham gives a summary of the plethora of terms used in British philosophy by the nineteenth century to describe common sense in discussions about ethics:
Another man comes and alters the phrase: leaving out moral, and putting in common, in the room of it. He then tells you, that his common sense teaches him what is right and wrong, as surely as the other 's moral sense did: meaning by common sense, a sense of some kind or other, which he says, is possessed by all mankind: the sense of those, whose sense is not the same as the author 's, being struck out of the account as not worth taking.
This was at least to some extent opposed to the Hobbesian approach, still today normal in economic theory, of trying to understand all human behaviour as fundamentally selfish, and would also be a foil to the new ethics of Kant. This understanding of a moral sense or public spirit remains a subject for discussion, although the term "common sense '' is no longer commonly used for the sentiment itself. In several European languages, a separate term for this type of common sense is used. For example, French sens commun and German Gemeinsinn are used for this feeling of human solidarity, while bon sens (good sense) and gesunder Verstand (healthy understanding) are the terms for everyday "common sense ''.
According to Gadamer, at least in French and British philosophy a moral element in appeals to common sense (or bon sens), such as found in Reid, remains normal to this day. But according to Gadamer, the civic quality implied in discussion of sensus communis in other European countries did not take root in the German philosophy of the 18th and 19th centuries, despite the fact it consciously imitated much in English and French philosophy. "Sensus communis was understood as a purely theoretical judgment, parallel to moral consciousness (conscience) and taste. '' The concept of sensus communis "was emptied and intellectualized by the German enlightenment ''. But German philosophy was becoming internationally important at this same time.
Gadamer notes one less - known exception -- the Württemberg pietism, inspired by the 18th century Swabian churchman, M. Friedrich Christoph Oetinger, who appealed to Shaftesbury and other Enlightenment figures in his critique of the Cartesian rationalism of Leibniz and Wolff, who were the most important German philosophers before Kant.
Vico, who taught classical rhetoric in Naples (where Shaftesbury died) under a Cartesian - influenced Spanish government, was not widely read until the 20th century, but his writings on common sense have been an important influence upon Hans - Georg Gadamer, Benedetto Croce and Antonio Gramsci. Vico united the Roman and Greek meanings of the term communis sensus. Vico 's initial use of the term, which was of much inspiration to Gadamer for example, appears in his On the Study Methods of our Time, which was partly a defense of his own profession, given the reformist pressure upon both his University and the legal system in Naples. It presents common sense as something adolescents need to be trained in if they are not to "break into odd and arrogant behaviour when adulthood is reached '', whereas teaching Cartesian method on its own harms common sense and stunts intellectual development. Rhetoric and elocution are not just for legal debate, but also educate young people to use their sense perceptions and their perceptions more broadly, building a fund of remembered images in their imagination, and then using ingenuity in creating linking metaphors, in order to make enthymemes. Enthymemes are reasonings about uncertain truths and probabilities -- as opposed to the Cartesian method, which was skeptical of all that could not be dealt with as syllogisms, including raw perceptions of physical bodies. Hence common sense is not just a "guiding standard of eloquence '' but also "the standard of practical judgment ''. The imagination or fantasy, which under traditional Aristotelianism was often equated with the koinē aisthēsis, is built up under this training, becoming the "fund '' (to use Schaeffer 's term) accepting not only memories of things seen by an individual, but also metaphors and images known in the community, including the ones out of which language itself is made.
In its mature version, Vico 's conception of sensus communis is defined by him as "judgment without reflection, shared by an entire class, an entire people, an entire nation, or the entire human race ''. Vico proposed his own anti-Cartesian methodology for a new Baconian science, inspired, he said, by Plato, Tacitus, Francis Bacon and Grotius. In this he went further than his predecessors concerning the ancient certainties available within vulgar common sense. What is required, according to his new science, is to find the common sense shared by different people and nations. He made this a basis for a new and better - founded approach to discuss Natural Law, improving upon Grotius, John Selden, and Pufendorf, who he felt had failed to convince because they could claim no authority from nature. Unlike Grotius, Vico went beyond looking for one single set of similarities amongst nations and also established rules about how natural law properly changes as peoples change, and has to be judged relative to this state of development. He thus developed a detailed view of an evolving wisdom of peoples. Ancient forgotten wisdoms, he claimed, could be re-discovered by analysis of languages and myths formed under their influence. This is comparable both to Montesquieu 's Spirit of the Laws and to the much later Hegelian historicism, both of which apparently developed without any awareness of Vico 's work.
Contemporary with Hume, but critical of Hume 's scepticism, a so - called Scottish school of Common Sense formed, whose basic principle was enunciated by its founder and greatest figure, Thomas Reid:
If there are certain principles, as I think there are, which the constitution of our nature leads us to believe, and which we are under a necessity to take for granted in the common concerns of life, without being able to give a reason for them -- these are what we call the principles of common sense; and what is manifestly contrary to them, is what we call absurd.
Thomas Reid was a successor to Francis Hutcheson and Adam Smith as Professor of Moral Philosophy, Glasgow. While Reid 's interests lay in the defense of common sense as a type of self - evident knowledge available to individuals, this was also part of a defense of natural law in the style of Grotius. He believed that the term common sense as he used it did encompass both the social common sense described by Shaftesbury and Hutcheson, and the perceptive powers described by Aristotelians.
Reid was criticised, partly for his critique of Hume, by Kant and J.S. Mill, who were two of the most important influences in nineteenth - century philosophy. He was blamed for overstating Hume 's scepticism of commonly held beliefs, and more importantly for not perceiving the problem with any claim that common sense could ever fulfill Cartesian (or Kantian) demands for absolute knowledge. Reid furthermore emphasized inborn common sense as opposed to only experience and sense perception. In this way his common sense has a similarity to the a priori knowledge asserted by rationalists like Descartes and Kant, despite Reid 's criticism of Descartes concerning his theory of ideas. Hume was critical of Reid on this point.
Despite the criticism, the influence of the Scottish school was notable for example upon American pragmatism, and modern Thomism. The influence has been particularly important concerning the epistemological importance of a sensus communis for any possibility of rational discussion between people.
Immanuel Kant developed a new variant of the idea of sensus communis, noting how having a sensitivity for what opinions are widely shared and comprehensible gives a sort of standard for judgment, and objective discussion, at least in the field of aesthetics and taste:
The common Understanding of men (gemeine Menschenverstand), which, as the mere sound (not yet cultivated) Understanding, we regard as the least to be expected from any one claiming the name of man, has therefore the doubtful honour of being given the name of common sense (Namen des Gemeinsinnes) (sensus communis); and in such a way that by the name common (not merely in our language, where the word actually has a double signification, but in many others) we understand vulgar, that which is everywhere met with, the possession of which indicates absolutely no merit or superiority. But under the sensus communis we must include the Idea of a communal sense (eines gemeinschaftlichen Sinnes), i.e. of a faculty of judgement, which in its reflection takes account (a priori) of the mode of representation of all other men in thought; in order as it were to compare its judgement with the collective Reason of humanity, and thus to escape the illusion arising from the private conditions that could be so easily taken for objective, which would injuriously affect the judgement.
Kant saw this concept as answering a particular need in his system: "the question of why aesthetic judgments are valid: since aesthetic judgments are a perfectly normal function of the same faculties of cognition involved in ordinary cognition, they will have the same universal validity as such ordinary acts of cognition ''.
But Kant 's overall approach was very different from those of Hume or Vico. Like Descartes, he rejected appeals to uncertain sense perception and common sense (except in the very specific way he describes concerning aesthetics), or the prejudices of one 's "Weltanschauung '', and tried to give a new way to certainty through methodical logic, and an assumption of a type of a priori knowledge. He was also not in agreement with Reid and the Scottish school, who he criticized in his Prolegomena to Any Future Metaphysics as using "the magic wand of common sense '', and not properly confronting the "metaphysical '' problem defined by Hume, which Kant wanted to be solved scientifically - the problem of how to use reason to consider how one ought to act.
Kant used different words to refer to his aesthetic sensus communis, for which he used Latin or else the German Gemeinsinn, and to the more general English meaning, which he associated with Reid and his followers and for which he used various terms such as gemeinen Menschenverstand, gesunden Verstand, or gemeinen Verstand.
According to Gadamer, in contrast to the "wealth of meaning '' that Vico and Shaftesbury brought from the Roman tradition into their humanism, Kant "developed his moral philosophy in explicit opposition to the doctrine of "moral feeling '' that had been worked out in English philosophy ". The moral imperative "can not be based on feeling, not even if one does not mean an individual 's feeling but common moral sensibility ''. For Kant, the sensus communis only applied to taste, and the meaning of taste was also narrowed as it was no longer understood as any kind of knowledge. Taste, for Kant, is universal only in that it results from "the free play of all our cognitive powers '', and is communal only in that it "abstracts from all subjective, private conditions such as attractiveness and emotion ''.
Kant himself did not see himself as a relativist, and was aiming to give knowledge a more solid basis, but as Richard J. Bernstein remarks, reviewing this same critique of Gadamer:
Once we begin to question whether there is a common faculty of taste (a sensus communis), we are easily led down the path to relativism. And this is what did happen after Kant - so much so that today it is extraordinarily difficult to retrieve any idea of taste or aesthetic judgment that is more than the expression of personal preferences. Ironically (given Kant 's intentions), the same tendency has worked itself out with a vengeance with regards to all judgments of value, including moral judgments.
Continuing the tradition of Reid and the enlightenment generally, the common sense of individuals trying to understand reality continues to be a serious subject in philosophy. In America, Reid influenced C.S. Peirce, the founder of the philosophical movement now known as Pragmatism, which has become internationally influential. One of the names Peirce used for the movement was "Critical Common - Sensism ''. Peirce, who wrote after Charles Darwin, suggested that Reid and Kant 's ideas about inborn common sense could be explained by evolution. But while such beliefs might be well adapted to primitive conditions, they were not infallible, and could not always be relied upon.
Another example still influential today is from G.E. Moore, several of whose essays, such as the 1925 "A Defence of Common Sense '', argued that individuals can make many types of statements about what they judge to be true, and that the individual and everyone else knows to be true. Michael Huemer has advocated an epistemic theory he calls phenomenal conservatism, which he claims to accord with common sense by way of internalist intuition.
In twentieth century philosophy the concept of the sensus communis as discussed by Vico and especially Kant became a major topic of philosophical discussion. The theme of this discussion questions how far the understanding of eloquent rhetorical discussion (in the case of Vico), or communally sensitive aesthetic tastes (in the case of Kant) can give a standard or model for political, ethical and legal discussion in a world where forms of relativism are commonly accepted, and serious dialogue between very different nations is essential. Some philosophers such as Jacques Rancière indeed take the lead from Jean - François Lyotard and refer to the "postmodern '' condition as one where there is "dissensus communis ''.
Hannah Arendt adapted Kant 's concept of sensus communis as a faculty of aesthetic judgement that imagines the judgements of others, into something relevant for political judgement. Thus she created a "Kantian '' political philosophy, which, as she said herself, Kant did not write. She argued that there was often a banality to evil in the real world, for example in the case of someone like Adolf Eichmann, which consisted in a lack of sensus communis and thoughtfulness generally. Arendt and also Jürgen Habermas, who took a similar position concerning Kant 's sensus communis, were criticised by Lyotard for their use of Kant 's sensus communis as a standard for real political judgement. Lyotard also saw Kant 's sensus communis as an important concept for understanding political judgement, not aiming at any consensus, but rather at a possibility of a "euphony '' in "dis - sensus ''. Lyotard claimed that any attempt to impose any sensus communis in real politics would mean imposture by an empowered faction upon others.
In a parallel development, Antonio Gramsci, Benedetto Croce, and later Hans - Georg Gadamer took inspiration from Vico 's understanding of common sense as a kind of wisdom of nations, going beyond Cartesian method. It has been suggested that Gadamer 's most well - known work, Truth and Method, can be read as an "extended meditation on the implications of Vico 's defense of the rhetorical tradition in response to the nascent methodologism that ultimately dominated academic enquiry ''. In the case of Gadamer, this was in specific contrast to the sensus communis concept in Kant, which he felt (in agreement with Lyotard) could not be relevant to politics if used in its original sense.
Gadamer came into direct debate with his contemporary Habermas, the so - called Hermeneutikstreit. Habermas, with a self - declared Enlightenment "prejudice against prejudice '' argued that if breaking free from the restraints of language is not the aim of dialectic, then social science will be dominated by whoever wins debates, and thus Gadamer 's defense of sensus communis effectively defends traditional prejudices. Gadamer argued that being critical requires being critical of prejudices including the prejudice against prejudice. Some prejudices will be true. And Gadamer did not share Habermas ' acceptance that aiming at going beyond language through method was not itself potentially dangerous. Furthermore, he insisted that because all understanding comes through language, hermeneutics has a claim to universality. As Gadamer wrote in the "Afterword '' of Truth and Method, "I find it frighteningly unreal when people like Habermas ascribe to rhetoric a compulsory quality that one must reject in favor of unconstrained, rational dialogue ''.
Paul Ricoeur argued that Gadamer and Habermas were both right in part. As a hermeneutist like Gadamer he agreed with him about the problem of lack of any perspective outside of history, pointing out that Habermas himself argued as someone coming from a particular tradition. He also agreed with Gadamer that hermeneutics is a "basic kind of knowing on which others rest. '' But he felt that Gadamer under - estimated the need for a dialectic that was critical and distanced, and attempting to go behind language.
A recent commentator on Vico, John D. Schaeffer has argued that Gadamer 's approach to sensus communis exposed itself to the criticism of Habermas because it "privatized '' it, removing it from a changing and oral community, following the Greek philosophers in rejecting true communal rhetoric, in favour of forcing the concept within a Socratic dialectic aimed at truth. Schaeffer claims that Vico 's concept provides a third option to those of Habermas and Gadamer and he compares it to the recent philosophers Richard J. Bernstein, Bernard Williams, Richard Rorty, and Alasdair MacIntyre, and the recent theorist of rhetoric, Richard Lanham.
The other Enlightenment debate about common sense, concerning common sense as a term for an emotion or drive that is unselfish, also continues to be important in discussion of social science, and especially economics. The axiom that communities can be usefully modeled as a collection of self - interested individuals is a central assumption in much of modern mathematical economics, and mathematical economics has now come to be an influential tool of political decision making.
While the term "common sense '' had already become less commonly used as a term for the empathetic moral sentiments by the time of Adam Smith, debates continue about methodological individualism as something supposedly justified philosophically for methodological reasons (as argued, for example, by Milton Friedman and more recently by Gary S. Becker, both members of the so - called Chicago school of economics). As in the Enlightenment, this debate therefore continues to combine debates about not only what the individual motivations of people are, but also what can be known scientifically, and what should be usefully assumed for methodological reasons, even if the truth of the assumptions is strongly doubted. Economics and social science generally have been criticized as a refuge of Cartesian methodology. Hence, amongst critics of the methodological argument for assuming self - centeredness in economics are authors such as Deirdre McCloskey, who have taken their bearings from the above - mentioned philosophical debates involving Habermas, Gadamer, the anti-Cartesian Richard Rorty and others, arguing that trying to force economics to follow artificial methodological laws is bad, and that it is better to recognize social science as driven by rhetoric.
Among Catholic theologians, writers such as theologian François Fénelon and philosopher Claude Buffier (1661 - 1737) gave an anti-Cartesian defense of common sense as a foundation for knowledge. Other catholic theologians took up this approach, and attempts were made to combine this with more traditional Thomism, for example Jean - Marie de Lamennais. This was similar to the approach of Thomas Reid, who for example was a direct influence on Théodore Jouffroy. This however meant basing knowledge upon something uncertain, and irrational. Matteo Liberatore, seeking an approach more consistent with Aristotle and Aquinas, equated this foundational common sense with the koinai doxai of Aristotle, that correspond to the communes conceptiones of Aquinas. In the twentieth century, this debate is especially associated with Étienne Gilson and Reginald Garrigou - Lagrange. Gilson pointed out that Liberatore 's approach means categorizing such common beliefs as the existence of God or the immortality of the soul, under the same heading as (in Aristotle and Aquinas) such logical beliefs as that it is impossible for something to exist and not exist at the same time. This, according to Gilson, is going beyond the original meaning. Concerning Liberatore he wrote:
Endeavours of this sort always end in defeat. In order to confer a technical philosophical value upon the common sense of orators and moralists it is necessary either to accept Reid 's common sense as a sort of unjustified and unjustifiable instinct, which will destroy Thomism, or to reduce it to the Thomist intellect and reason, which will result in its being suppressed as a specifically distinct faculty of knowledge. In short, there can be no middle ground between Reid and St. Thomas.
Gilson argued that Thomism avoided the problem of having to decide between Cartesian innate certainties and Reid 's uncertain common sense, and that "as soon as the problem of the existence of the external world was presented in terms of common sense, Cartesianism was accepted ''.
"Good Sense is, of all things among men, the most equally distributed; for every one thinks himself so abundantly provided with it, that those even who are the most difficult to satisfy in everything else, do not usually desire a larger measure of this quality than they already possess. And in this it is not likely that all are mistaken: the conviction is rather to be held as testifying that the power of judging aright and of distinguishing Truth from Error, which is properly what is called Good Sense or Reason, is by nature equal in all men; and that the diversity of our opinions, consequently, does not arise from some being endowed with a larger share of Reason than others, but solely from this, that we conduct our thoughts along different ways, and do not fix our attention on the same objects. For to be possessed of a vigorous mind is not enough; the prime requisite is rightly to apply it. The greatest minds, as they are capable of the highest excellencies, are open likewise to the greatest aberrations; and those who travel very slowly may yet make far greater progress, provided they keep always to the straight road, than those who, while they run, forsake it. ''
Some say the Senses receive the Species of things, and deliver them to the Common - sense; and the Common Sense delivers them over to the Fancy, and the Fancy to the Memory, and the Memory to the Judgement, like handing of things from one to another, with many words making nothing understood. (Hobbes, Thomas, "II.: of imagination '', The English Works of Thomas Hobbes of Malmesbury; Now First Collected and Edited by Sir William Molesworth, Bart., 11 vols., 3 (Leviathan), London: Bohn).
|
who did ryan gosling play in mickey mouse clubhouse | The Mickey Mouse Club - wikipedia
Fred Newman (1989 revival, seasons 1 - 6)
The Mickey Mouse Club (now known as Club Mickey Mouse) is an American variety television show that aired intermittently from 1955 to 1996 and returned in 2017 on social media. Created by Walt Disney and produced by Walt Disney Productions, the program was first televised from 1955 by ABC, featuring a regular but ever - changing cast of mostly teen performers. Reruns were broadcast by ABC on weekday afternoons during the 1958 -- 1959 season, right after American Bandstand. The show was revived after its initial 1955 -- 1959 run on ABC, first from 1977 to 1979 in first - run syndication, then again exclusively on Disney Channel from 1989 to 1996, and finally in 2017, airing exclusively on social media.
Previous to the television series, there was a theater - based Mickey Mouse Club. The first one started on Saturday, January 4, 1930, at 12 noon at the Fox Dome Theater in Ocean Park, California, with sixty theaters hosting clubs by March 31. The Club released its first issue of the Official Bulletin of the Mickey Mouse Club on April 15, 1930. By 1932, the Club had 1 million members, and in 1933 its first British club opened at Darlington 's Arcade Cinema. In 1935, Disney began to phase out the club.
The Mickey Mouse Club was Walt Disney 's second venture into producing a television series, the first being the Walt Disney anthology television series, initially titled Disneyland. Disney used both shows to help finance and promote the building of the Disneyland theme park. Being busy with these projects and others, Disney turned The Mickey Mouse Club over to Bill Walsh to create and develop the format, initially aided by Hal Adelquist.
The result was a variety show for children, with such regular features as a newsreel, a cartoon, and a serial, as well as music, talent and comedy segments. One unique feature of the show was the Mouseketeer Roll Call, in which many (but not all) of that day 's line - up of regular performers would introduce themselves by name to the television audience. In the serials, teens faced challenges in everyday situations, often overcome by their common sense or through recourse to the advice of respected elders. Mickey Mouse himself appeared in every show not only in vintage cartoons originally made for theatrical release but in opening, interstitial and closing segments made especially for the show. In both the vintage cartoons and in the new animated segments, Mickey was voiced by his creator Walt Disney. (Disney had previously voiced the character theatrically from 1928 to 1947, and then was replaced by sound effects artist Jimmy MacDonald.)
Mickey Mouse Club was hosted by Jimmie Dodd, a songwriter and the Head Mouseketeer, who provided leadership both on and off screen. In addition to his other contributions, he often provided short segments encouraging young viewers to make the right moral choices. These little homilies became known as "Doddisms ''. Roy Williams, a staff artist at Disney, also appeared in the show as the Big Mouseketeer. Roy suggested the Mickey and Minnie Mouse ears worn by the cast members, which he helped create, along with Chuck Keehne, Hal Adelquist, and Bill Walsh.
The main cast members were called Mouseketeers, and they performed in a variety of musical and dance numbers, as well as some informational segments. The most popular of the Mouseketeers comprised the so - called Red Team, which consisted of the following:
Other Mouseketeers that were Red Team members, but not on the show for all three seasons included the following:
The remaining Mouseketeers, consisting of the White or Blue Teams, were Don Agrati (later known as Don Grady when starring as "Robbie '' on the long - running sitcom My Three Sons), Sherry Alberoni, Billie Jean Beanblossom, Eileen Diamond, Dickie Dodd (not related to Jimmie Dodd), Mary Espinosa, Bonnie Lynn Fields, Judy Harriet, Linda Hughes, Dallas Johann, John Lee Johann, Bonni Lou Kern, Charlie Laney, Larry Larsen, Paul Petersen, Lynn Ready, Mickey Rooney Jr., Tim Rooney, Mary Sartori, Bronson Scott, Margene Storey, Ronnie Steiner, and Mark Sutherland. Larry Larsen, on only for the 1956 -- 57 season, was the oldest Mouseketeer, being born in 1939, and Bronson Scott, on only the 1955 - 56 season, was the youngest Mouseketeer, being born in July 1947. Among the thousands who auditioned but did n't make the cut were future vocalist / songwriter Paul Williams and future actress Candice Bergen.
The 39 Mouseketeers, and the seasons they were featured in (with the team color they belonged to listed for each season):
Other notable non-Mouseketeer performers appeared in various dramatic segments:
These non-Mouseketeers primarily appeared in numerous original serials filmed for the series, only some of which have appeared in reruns. Certain Mouseketeers were also featured in some of the serials, particularly Annette Funicello and Darlene Gillespie.
Major serials included the following:
The opening theme, "The Mickey Mouse March, '' was written by the show 's primary adult host, Jimmie Dodd. It was also reprised at the end of each episode, with the slower it 's - time - to - say - goodbye verse. A shorter version of the opening title was used later in the series, in syndication, and on Disney Channel reruns. Dodd also wrote many other songs used in individual segments over the course of the series.
Each day of the week had a special show theme, which was reflected in the various segments. The themes were:
The series ran on ABC Television for an hour each weekday in the 1955 -- 1956 and 1956 -- 1957 seasons (from 5: 00 to 6: 00 pm ET), and only a half - hour weekdays (5: 30 to 6: 00 p.m. ET) in 1957 -- 1958, the final season to feature new programming. Although the show returned for the 1958 -- 1959 season (5: 30 to 6: 00 p.m. ET), these programs were repeats from the first two seasons, re-cut into a half - hour format. The Mickey Mouse Club was featured on Mondays, Wednesdays, and Fridays, and Walt Disney 's Adventure Time, featuring re-runs of The Mickey Mouse Club serials and several re-edited segments from Disneyland and Walt Disney Presents, appeared on Tuesdays and Thursdays.
Although the show remained popular, ABC decided to cancel it after its fourth season, as Disney and the ABC network could not come to terms for renewal. The cancellation in September 1959 was attributable to several factors: the Disney studios did not realize high - profit margins from merchandise sales, the sponsors were uninterested in educational programming for children, and many commercials were needed in order to pay for the show. After canceling The Mickey Mouse Club, ABC also refused to let Disney air the show on another network. Walt Disney filed a lawsuit against ABC and won damages in a settlement; however, he had to agree that both the Mickey Mouse Club and Zorro could not be aired on any major network. This left Walt Disney Presents (initially titled Disneyland, later retitled Walt Disney 's Wonderful World of Color when it moved to NBC) as the only Disney series left on prime time until 1972, when The Mouse Factory went on the air. The prohibition against major U.S. broadcast network play of the original Mickey Mouse Club (or any later version) became moot when Disney acquired ABC in 1996, but no plans have been announced for an ABC airing of any version of The Mickey Mouse Club produced between 1955 and 1996 or for a new network series.
Although the series had been discontinued in the United States, many members of the cast assembled for highly successful tours of Australia in 1959 and 1960. The television series was very successful in Australia and was still running on Australian television. The cast surprised Australian audiences, as by then they had physically matured and in some cases, bore little resemblance to the young cast with whom Australians were so familiar. Mainstream television did not reach Australia until 1956 so the series screened well into the 1960s when the back catalogue expired.
In response to continuing audience demand, the original Mickey Mouse Club went into edited syndicated half - hour reruns that enjoyed wide distribution starting in the fall of 1962, achieving strong ratings especially during its first three seasons in syndicated release. (Because of its popularity in some markets, a few stations continued to carry it into 1968, before the series was finally withdrawn from syndication.) Some new features were added, such as Fun with Science, aka "Professor Wonderful '' (with scientist Julius Sumner Miller), and Marvelous Marvin in the 1964 -- 1965 season; Jimmie Dodd appeared in several of these new segments before his death in November 1964. Many markets stretched the program back to an hour 's daily run time during the 1960s rerun cycle by adding locally produced and hosted portions involving educational subjects and live audience participation of local children, in a manner not unlike Romper Room.
In response to an upsurge in demand from baby boomers entering adulthood, the show again went into syndicated reruns from January 20, 1975, until January 14, 1977. It has since been rerun on cable specialty channels Disney in the U.S. and Family in Canada. The original Mickey Mouse Club films aired five days a week on The Disney Channel from its launch in 1983 until the third version of the series began in 1989. The last airing of the edited 1950s material was on Disney Channel 's Vault Disney from 1997 to September 2002. During the baseball seasons in 1975 and 1976, WGN - TV in Chicago, Illinois aired the MMC on a delayed basis due to Cubs ballgame coverages.
Annette Funicello and Tim Considine were reunited on The New Mickey Mouse Club in 1977. Darlene Gillespie and Cubby O'Brien were also reunited on another episode of the same series.
31 of the 39 original Mouseketeers were reunited for a TV special, which aired on Disney 's Wonderful World in November 1980.
Cast members Annette Funicello, Bobby Burgess, Tommy Cole, Sharon Baird, Don Grady, and Sherry Alberoni were reunited on the 100th episode of The All - New Mickey Mouse Club, during the show 's third season in 1991.
Mouseketeers Doreen Tracey, Cubby O'Brien, Sherry Alberoni, Sharon Baird, Don Grady, Cheryl Holdridge, Bobby Burgess, Karen Pendleton, Tommy Cole, and Mary Espinosa performed together at Disneyland in the fall of 2005, in observance of Disneyland 's 50th birthday, and the 50th anniversary of the TV premiere of The Mickey Mouse Club.
In 1977, Walt Disney Productions revived the concept, but modernized the show cosmetically, with a disco re-recording of the theme song and a more ethnically diverse group of young cast members. The sets were brightly colored and simpler than the detailed black and white artwork of the original. Like the original, nearly every day 's episode included a vintage cartoon, though usually in color, from the late 1930s onward. The 1977 Mouseketeers were part of the halftime show of Super Bowl XI on January 9, 1977.
Serials were usually old Disney movies, cut into segments for twice - weekly inclusion. Movies included Third Man on the Mountain, The Misadventures of Merlin Jones and its sequel The Monkey 's Uncle (both starring Tommy Kirk), Emil and the Detectives (retitled The Three Skrinks), Tonka (retitled A Horse Called Comanche), The Horse Without a Head (about a toy horse), and Toby Tyler (starring Kevin Corcoran). In addition, one original serial was produced, The Mystery of Rustler 's Cave, starring Kim Richards and Robbie Rist.
Theme days were:
The series debuted on January 17, 1977, on 38 local television stations in the United States, and by June of that same year, when the series was discontinued, about 70 stations in total had picked up the series. Additional stations picked up the canceled program, which continued to run until January 12, 1979; 130 new episodes, with much of the original material repackaged and a bit of new footage added, and a shortened version of the theme song, were produced to start airing September 5, 1977. Since the 1970s, the series has aired only briefly in reruns, unlike its 1950s predecessor, and while both the 1950s and 1989 / 1990s series had DVD releases of select episodes in July 2005, the 1970s series has been largely forgotten, even among the generation of youthful viewers who watched it. On November 20, 1977, "The Mouseketeers at Walt Disney World '' was shown on The Wonderful World of Disney. WGN - TV in Chicago, Illinois also aired this version on a delayed basis in 1977 and 1978 during the Cubs baseball season due to game coverage.
The cast of twelve (5 boys and 7 girls) had a more diverse ethnic background than the 1950s version. Several 1977 -- 1978 cast members went on to become TV stars and other notable icons.
The show 's most notable alumna was Lisa Whelchel (born in 1963, in Littlefield, Texas), who later starred in the NBC television sitcom The Facts of Life before becoming a well - known Christian author and, most recently, overall runner - up, and winner of the $100,000 viewers ' choice award, on the fall 2012 season of the CBS television reality series Survivor. Mouseketeer Julie Piekarski (born St. Louis, 1963) also appeared with Lisa Whelchel on the first season of The Facts of Life. Kelly Parsons (born Coral Gables, Fla., 1964) went on to become a beauty queen and runner - up to Miss USA.
Other Mouseketeers (from seasons 1 -- 2 (1977)) from the 1977 show:
Disney voice actor and sound effects editor Wayne Allwine voiced Mickey Mouse in the animated lead - ins for the show, replacing Jimmy MacDonald, who in 1947 had replaced Walt Disney as the voice of Mickey for theatrical cartoons. Walt Disney had been the original voice of Mickey and for the original 1954 -- 1959 run provided the voice for animated introductions to the original TV show but had died in 1966. Allwine would keep providing the voice for the character up to his death in 2009.
Future rock musician Courtney Love claims to have auditioned for a part on the show, reading a poem by Sylvia Plath; she was not selected.
Former Mouseketeer Annette Funicello and Mouseketeer serial star Tim Considine guest starred in one episode; Former Mouseketeers Darlene Gillespie and Cubby O'Brien were also reunited on a different episode.
The lyrics of the "Mickey Mouse Club March '' theme song were slightly different from the original, with two additional lines: "He 's our favorite Mouseketeer; we know you will agree '' and "Take some fun and mix in love, our happy recipe. ''
A soundtrack album was released with the show.
A new rendition of the "Mickey Mouse Club March '' was made later on in 1999 by Mannheim Steamroller, a contemporary band, in hopes of connecting new - age children and their parents who watched the Mickey Mouse Club.
This incarnation was not distributed by Disney alone; while Disney did produce the series, it was co-produced and distributed by SFM Entertainment, which also handled 1970s - era syndication of the original 1950s series (Disney since regained sole distribution rights).
Reruns of the original Mickey Mouse Club had aired on The Disney Channel since its 1983 launch. While the show was popular with younger audiences, Disney Channel executives felt that it had become dated over the years, particularly as it was in black - and - white. Their answer was to create a brand - new version of the Club, one geared toward contemporary audiences. Notably, the all - new "club - members '' would wear high - school like Mouseketeer jackets without the iconic Mickey Mouse ears. This show was called The All - New Mickey Mouse Club (also known as "MMC '' to fans).
This version of the series is notable for featuring a number of cast members who went on to international success in music and acting, including Tony Lucca, Britney Spears, Christina Aguilera, Justin Timberlake, Ryan Gosling, JC Chasez, Nikki DeLoach and Keri Russell.
This was the first version of the club to have any studio audience, a fairly small group.
Former Mouseketeer Don Grady guest starred on an episode during the show 's first season. Grady, along with fellow Mouseketeers Annette Funicello, Bobby Burgess, Tommy Cole, Sharon Baird, and Sherry Alberoni were reunited on the 100th episode, during the show 's third season. Funicello would later appear on the show again, in an interview with the Mouseketeer Lindsey Alley.
From the first through fifth seasons, the series aired Monday through Friday, at 5: 30pm. Through Season 6, the show aired Monday to Thursday. In its final season, Season 7, it aired Thursdays only at 7: 00 pm (later moved a half hour later, to 7: 30). The series premiered Monday, April 24, 1989, ended production in October 1994, and aired its last original episode in 1996. Seasons 3 and 5 had the most episodes (55, each season). Seasons 4 and 6 were shorter, having about 35 episodes each. The remaining seasons were a standard 45 episodes (44 in Season 7), each.
The show was known for its sketch comedy. Some of the sketches played off well - known movies, musicals and even cartoons, as well as holiday - related skits. During the final season, some of the skits showed everyday occurrences experienced by teens, often teaching viewers a lesson on how to handle real - life situations.
The series featured music videos of the Mouseketeers singing their versions of popular songs in front of a live studio audience or in the Walt Disney World Resort. This became one of the most popular segments.
A unique feature to the show was the Mouseketeers performing concerts on certain days (which were usually taped the day before or in the summer, when the kids had more time). During the final season, the concerts were replaced primarily by live performances that featured singing and dancing in front of the audience.
This version maintained the "theme day '' format from the previous two versions. When Disney decided to revamp the show for its final season, the show was reduced to a single weekly airing, shown only on Thursdays. Although still produced as a daily series during the final season taping in 1994, The Disney Channel, after cancelling the series once Season 7 production had concluded, decided to air the final season in a weekly format, therefore stretching the first run episodes into early 1996. The final season premiered in May 1995, almost a year after production had started and 6 + months after the series finale was taped.
Theme days were as follows:
The adult co-hosts for the show were Fred Newman (1989 - 1993, Seasons 1 - 6), Mowava Pryor (1989 - 1990, Seasons 1 - 3), and Terri Misner Eoff (1991 - 1993, Seasons 4 - 6).
The 35 Mouseketeers and the seasons they were featured in:
During the last two seasons of MMC, the show featured a TV serial called Emerald Cove, with the following cast:
On July 9, 2015, it was announced that a new version of the series would debut on July 23, 2015 on Disney Channel (South Korea). The format of the revival includes musical performances, games, and skits, the same as the original one in the US. The series was planned to have two pilot episodes and ten regular episodes. The Mouseketeers consist of nine members of S.M. Entertainment 's pre-debut group SM Rookies, including five boys -- Mark, Jeno, Donghyuck, Jaemin, and Jisung -- and four girls -- Koeun, Hina, Herin, and Lami.
The series was hosted by Leeteuk, the leader of Super Junior.
The show ended on December 17, 2015.
On May 4, 2017, it was announced that Club Mickey Mouse would be created in Malaysia. The format includes musical performances, games, and comedy sketches.
The series is hosted by YouTube personality Charis Ow and premiered on September 15, 2017.
On September 8, 2017, it was announced that the Mickey Mouse Club will be rebooted under the name "Club Mickey Mouse '' with a new set of Mouseketeers, and for the first time, the series will be available on Facebook and Instagram, rather than its original half hour to full hour format on television, and will be more like a reality show than a variety show, with about 90 % of its content being behind the scenes. This incarnation of the Mickey Mouse Club features eight Mouseketeers who range in age from 15 to 18 (rather than 8 to 14 like the original): Regan Aliyah, Jenna Alvarez, Ky Baldwin, Gabe De Guzman, Leanne Tessa Langston, Brianna Mazzola, Sean Oliu, and Will Simmons. The Mouseketeers will also be joined by Todrick Hall, who will serve as a mentor to the young cast.
|
when did american english diverge from british english | American English - Wikipedia
American English (AmE, AE, AmEng, USEng, en - US), sometimes called United States English or U.S. English, is the set of varieties of the English language native to the United States.
English is the most widely spoken language in the United States and is the common language used by the federal government, considered the de facto language of the country because of its widespread use. English has been given official status by 32 of the 50 state governments. As an example, while both Spanish and English have equivalent status in the local courts of Puerto Rico, under federal law, English is the official language for any matters being referred to the United States district court for the territory.
The use of English in the United States is a result of British colonization of the Americas. The first wave of English - speaking settlers arrived in North America during the 17th century, followed by further migrations in the 18th and 19th centuries. Since then, American English has developed into new dialects, in some cases under the influence of West African and Native American languages, German, Dutch, Irish, Spanish, and other languages of successive waves of immigrants to the United States.
Any American or even Canadian English accent perceived as free of noticeably local, ethnic, or cultural markers is popularly called "General American '', described by sociolinguist William Labov as "a fairly uniform broadcast standard in the mass media ''. Otherwise, however, historical and present linguistic evidence does not support the notion of there being a mainstream standard English of the United States. According to Labov, with the major exception of Southern American English, regional accents throughout the country are not yielding to this broadcast standard. On the contrary, the sound of American English continues to evolve, with some local accents disappearing, but several larger regional accents emerging.
While written American English is (in general) standardized across the country, there are several recognizable variations in the spoken language, both in pronunciation and in vernacular vocabulary. The regional sounds of present - day American English are reportedly engaged in a complex phenomenon of "both convergence and divergence '': some accents are homogenizing and levelling, while others are diversifying and deviating further away from one another. In 2010, William Labov summarized the current state of regional American accents as follows:
Some regional American English has undergone "vigorous new sound changes '' since the mid-nineteenth century onwards, spawning relatively recent Mid-Atlantic (centered on Philadelphia and Baltimore), Western Pennsylvania (centered on Pittsburgh), Inland Northern (centered on Chicago, Detroit, and the Great Lakes region), Midland (centered on Indianapolis, Columbus, and Kansas City) and Western accents, all of which "are now more different from each other than they were fifty or a hundred years ago. '' Meanwhile, the unique features of the Eastern New England (centered on Boston) and New York City accents appear to be stable. "On the other hand, dialects of many smaller cities have receded in favor of the new regional patterns ''; for example, the traditional accents of Charleston and of Cincinnati have given way to the general Midland accent, and of St. Louis now approaches the sounds of an Inland Northern or Midland accent. At the same time, the Southern accent, despite its huge geographic coverage, "is on the whole slowly receding due to cultural stigma: younger speakers everywhere in the South are shifting away from the marked features of Southern speech. '' Finally, the "Hoi Toider '' dialect shows the paradox of receding among younger speakers in North Carolina 's Outer Banks islands, yet strengthening in the islands of the Chesapeake Bay.
The Western dialect, including Californian and New Mexican sub-types (with Pacific Northwest English also, arguably, a sub-type), is defined by:
The North Central ("Upper Midwest '') dialect, including an Upper Michigan sub-type, is defined by:
The Inland Northern ("Great Lakes '') dialect, including its less advanced Western New England (WNE) sub-types, is defined by:
The Midland dialect is defined by:
The Western Pennsylvania dialect, including its advanced Pittsburgh sub-type, is defined by:
The Southern dialects, including several sub-types, are defined by:
The Mid-Atlantic ("Delaware Valley '') dialect, including Philadelphia and Baltimore sub-types, is defined by:
The New York City dialect (with New Orleans English an intermediate sub-type between NYC and Southern) is defined by:
Eastern New England dialect, including Maine and Boston sub-types (with Rhode Island English an intermediate sub-type between ENE and NYC), is defined by:
Below, eleven major American English accents are defined by their particular combinations of certain characteristics:
Marked New England speech is mostly associated with eastern New England, centering on Boston and Providence, and traditionally includes some notable degree of r - dropping (or non-rhoticity), as well as the back tongue positioning of the / uː / vowel (to (u)) and the / aʊ / vowel (to (ɑʊ ~ äʊ)). In and north of Boston, the / ɑːr / sound is famously centralized or even fronted. Boston shows a cot -- caught merger, while Providence keeps the same two vowels sharply distinct.
New York City English, which prevails in a relatively small but nationally recognizable dialect region in and around New York City (including Long Island and northeastern New Jersey). Its features include some notable degree of non-rhoticity and a locally unique short - a vowel pronunciation split. New York City English otherwise broadly follows Northern patterns, except that the / aʊ / vowel is fronted. The cot -- caught merger is markedly resisted around New York City, as depicted in popular stereotypes like tawwk and cawwfee, with this thought vowel being typically tensed and diphthongal.
Most older Southern speech along the Eastern seaboard was non-rhotic, though, today, all local Southern dialects are strongly rhotic, defined most recognizably by the / aɪ / vowel losing its gliding quality and approaching (aː ~ äː), the initiating event for the Southern Vowel Shift, which includes the famous "Southern drawl '' that makes short front vowels into gliding vowels.
Since the mid-twentieth century, a distinctive new Northern speech pattern has developed near the Canadian border of the United States, centered on the central and eastern Great Lakes region (but only on the American side). Linguists call this region the "Inland North '', as defined by its local vowel shift -- occurring in the same region whose "standard Midwestern '' speech was the basis for General American in the mid-20th century (though prior to this recent vowel shift). Those not from this area frequently confuse it with the North Midland dialect treated below, referring to both, plus areas to the immediate west of the Great Lakes region, all collectively as "the Midwest '': a common but vaguely delineated term for what is now the central or north - central United States. The North Central or "Minnesotan '' dialect is also prevalent in the Upper Midwest, and is characterized by influences from the German and Scandinavian settlers of the region (like "yah '' for yes, pronounced similarly to "ja '' in German, Norwegian and Swedish). In parts of Pennsylvania and Ohio, another dialect known as Pennsylvania Dutch English is also spoken.
Between the traditional American dialect areas of the "North '' and "South '' is what linguists have long called the "Midland ''. This geographically overlaps with some states situated in the lower Midwest. West of the Appalachian Mountains begins the broad zone of modern - day Midland speech. Its vocabulary has been divided into two discrete subdivisions, the "North Midland '' that begins north of the Ohio River valley area, and the "South Midland '' speech, which to the American ear has a slight trace of the "Southern accent '' (especially due to some degree of / aɪ / glide weakening). The South Midland dialect follows the Ohio River in a generally southwesterly direction, moves across Arkansas and Oklahoma west of the Mississippi, and peters out in West Texas. Modern Midland speech is transitional regarding a presence or absence of the cot -- caught merger. Historically, Pennsylvania was a home of the Midland dialect; however, this state of early English - speaking settlers has now largely split off into new dialect regions, with distinct Philadelphia and Pittsburgh dialects documented since the latter half of the twentieth century.
A generalized Midland speech continues westward until becoming a somewhat internally diverse Western American English that unites the entire western half of the country. This Western dialect is mostly unified by a firm cot -- caught merger and a conservatively backed pronunciation of the long oh sound in goat, toe, show, etc., but a fronted pronunciation of the long oo sound in goose, lose, tune, etc. Western speech itself contains such advanced sub-types as Pacific Northwest English and California English, with the Chicano English accent also being a sub-type primarily of the Western accent. In the immediate San Francisco area, some older speakers do not have the normal Western cot -- caught merger. The island state of Hawaii, though primarily English - speaking, is also home to a creole language known commonly as Hawaiian Pidgin, and some native Hawaiians may even speak English with a Pidgin accent.
Although no longer region - specific, African American Vernacular English, which remains prevalent particularly among working - and middle - class African Americans, has a close relationship to Southern dialects and has greatly influenced everyday speech of many Americans, including hip hop culture. The same aforementioned socioeconomic groups, but among Hispanic and Latino Americans, have also developed native - speaker varieties of English. The best - studied Latino Englishes are Chicano English, spoken in the West and Midwest, and New York Latino English, spoken in the New York metropolitan area. Additionally, ethnic varieties such as Yeshiva English and "Yinglish '' are spoken by some American Jews, and Cajun Vernacular English by some Cajuns in southern Louisiana.
Compared with English as spoken in England, North American English is more homogeneous, and any North American accent that exhibits a majority of the most common phonological features is known as "General American. '' This section mostly refers to such widespread or mainstream pronunciation features that characterize American English.
Studies on historical usage of English in both the United States and the United Kingdom suggest that spoken American English did not simply deviate away from period British English, but retained certain now - archaic features contemporary British English has since lost. One of these is the rhoticity common in most American accents, because in the 17th century, when English was brought to the Americas, most English in England was also rhotic. The preservation of rhoticity has been further supported by the influences of Hiberno - English, West Country English and Scottish English. In most varieties of North American English, the sound corresponding to the letter ⟨ r ⟩ is a postalveolar approximant (ɹ̠) or retroflex approximant (ɻ) rather than a trill or tap (as often heard, for example, in the English accents of Scotland or India). A unique "bunched tongue '' variant of the approximant r sound is also associated with the United States, and seems particularly noticeable in the Midwest and South.
Traditionally, the "East Coast '' comprises three or four major linguistically distinct regions, each of which possesses English varieties both distinct from each other as well as quite internally diverse: New England, the New York metropolitan area, the Mid-Atlantic states (centering on Philadelphia and Baltimore), and the Southern United States. The only r - dropping (or non-rhotic) regional accents of American English are all spoken along the East Coast, except the Mid-Atlantic region, because these areas were in close historical contact with England and imitated prestigious varieties of English at a time when these were undergoing changes; in particular, the London prestige of non-rhoticity (or dropping the ⟨ r ⟩ sound, except before vowels) from the 17th century onwards, which is now widespread throughout most of England. Today, non-rhoticity is confined in the United States to the accents of eastern New England, the former plantation South, New York City, and African American Vernacular English (though the vowel - consonant cluster found in "bird '', "work '', "hurt '', "learn '', etc. usually retains its r pronunciation today, even in these non-rhotic accents). Other than these varieties, American accents are rhotic, pronouncing every instance of the ⟨ r ⟩ sound.
Many British accents have evolved in other ways compared to which General American English has remained relatively more conservative, for example, regarding the typical southern British features of a trap -- bath split, fronting of / oʊ /, and H - dropping. The innovation of / t / glottaling, which does occur before a consonant (including a syllabic coronal nasal consonant, like in the words button or satin) and word - finally in General American, additionally occurs variably between vowels in British English. On the other hand, General American is more innovative than the dialects of England, or English elsewhere in the world, in a number of its own ways:
Some mergers found in most varieties of both American and British English include:
North America has given the English lexicon many thousands of words, meanings, and phrases. Several thousand are now used in English as spoken internationally.
The process of coining new lexical items started as soon as the colonists began borrowing names for unfamiliar flora, fauna, and topography from the Native American languages. Examples of such names are opossum, raccoon, squash and moose (from Algonquian). Other Native American loanwords, such as wigwam or moccasin, describe articles in common use among Native Americans. The languages of the other colonizing nations also added to the American vocabulary; for instance, cookie, cruller, stoop, and pit (of a fruit) from Dutch; angst, kindergarten, sauerkraut from German, levee, portage ("carrying of boats or goods '') and (probably) gopher from French; barbecue, stevedore, and rodeo from Spanish.
Among the earliest and most notable regular "English '' additions to the American vocabulary, dating from the early days of colonization through the early 19th century, are terms describing the features of the North American landscape; for instance, run, branch, fork, snag, bluff, gulch, neck (of the woods), barrens, bottomland, notch, knob, riffle, rapids, watergap, cutoff, trail, timberline and divide. Already existing words such as creek, slough, sleet and (in later use) watershed received new meanings that were unknown in England.
Other noteworthy American toponyms are found among loanwords; for example, prairie, butte (French); bayou (Choctaw via Louisiana French); coulee (Canadian French, but used also in Louisiana with a different meaning); canyon, mesa, arroyo (Spanish); vlei, skate, kill (Dutch, Hudson Valley).
The word corn, used in England to refer to wheat (or any cereal), came to denote the plant Zea mays, the most important crop in the U.S., originally named Indian corn by the earliest settlers; wheat, rye, barley, oats, etc. came to be collectively referred to as grain. Other notable farm related vocabulary additions were the new meanings assumed by barn (not only a building for hay and grain storage, but also for housing livestock) and team (not just the horses, but also the vehicle along with them), as well as, in various periods, the terms range, (corn) crib, truck, elevator, sharecropping and feedlot.
Ranch, later applied to a house style, derives from Mexican Spanish; most Spanish contributions came after the War of 1812, with the opening of the West. Among these are, other than toponyms, chaps (from chaparreras), plaza, lasso, bronco, buckaroo, rodeo; examples of "English '' additions from the cowboy era are bad man, maverick, chuck ("food '') and Boot Hill; from the California Gold Rush came such idioms as hit pay dirt or strike it rich. The word blizzard probably originated in the West. A couple of notable late 18th century additions are the verb belittle and the noun bid, both first used in writing by Thomas Jefferson.
With the new continent developed new forms of dwelling, and hence a large inventory of words designating real estate concepts (land office, lot, outlands, waterfront, the verbs locate and relocate, betterment, addition, subdivision), types of property (log cabin, adobe in the 18th century; frame house, apartment, tenement house, shack, shanty in the 19th century; project, condominium, townhouse, split - level, mobile home, multi-family in the 20th century), and parts thereof (driveway, breezeway, backyard, dooryard; clapboard, siding, trim, baseboard; stoop (from Dutch), family room, den; and, in recent years, HVAC, central air, walkout basement).
Ever since the American Revolution, a great number of terms connected with the U.S. political institutions have entered the language; examples are run (i.e., for office), gubernatorial, primary election, carpetbagger (after the Civil War), repeater, lame duck (a British term used originally in banking) and pork barrel. Some of these are internationally used (for example, caucus, gerrymander, filibuster, exit poll).
The development of industry and material innovations throughout the 19th and 20th centuries were the source of a massive stock of distinctive new words, phrases and idioms. Typical examples are the vocabulary of railroading (see further at rail terminology) and transportation terminology, ranging from names of roads (from dirt roads and back roads to freeways and parkways) to road infrastructure (parking lot, overpass, rest area), and from automotive terminology to public transit (for example, in the sentence "riding the subway downtown ''); such American introductions as commuter (from commutation ticket), concourse, to board (a vehicle), to park, double - park and parallel park (a car), double decker or the noun terminal have long been used in all dialects of English.
Trades of various kinds have endowed (American) English with household words describing jobs and occupations (bartender, longshoreman, patrolman, hobo, bouncer, bellhop, roustabout, white collar, blue collar, employee, boss (from Dutch), intern, busboy, mortician, senior citizen), businesses and workplaces (department store, supermarket, thrift store, gift shop, drugstore, motel, main street, gas station, hardware store, savings and loan, hock (also from Dutch)), as well as general concepts and innovations (automated teller machine, smart card, cash register, dishwasher, reservation (as at hotels), pay envelope, movie, mileage, shortage, outage, blood bank).
Already existing English words -- such as store, shop, dry goods, haberdashery, lumber -- underwent shifts in meaning; some -- such as mason, student, clerk, the verbs can (as in "canned goods ''), ship, fix, carry, enroll (as in school), run (as in "run a business ''), release and haul -- were given new significations, while others (such as tradesman) have retained meanings that disappeared in England. From the world of business and finance came break - even, merger, delisting, downsize, disintermediation, bottom line; from sports terminology came, jargon aside, Monday - morning quarterback, cheap shot, game plan (football); in the ballpark, out of left field, off base, hit and run, and many other idioms from baseball; gamblers coined bluff, blue chip, ante, bottom dollar, raw deal, pass the buck, ace in the hole, freeze - out, showdown; miners coined bedrock, bonanza, peter out, pan out and the verb prospect from the noun; and railroadmen are to be credited with make the grade, sidetrack, head - on, and the verb railroad. A number of Americanisms describing material innovations remained largely confined to North America: elevator, ground, gasoline; many automotive terms fall in this category, although many do not (hatchback, sport utility vehicle, station wagon, tailgate, motorhome, truck, pickup truck, to exhaust).
In addition to the above - mentioned loans from French, Spanish, Mexican Spanish, Dutch, and Native American languages, other accretions from foreign languages came with 19th and early 20th century immigration; notably, from Yiddish (chutzpah, schmooze, tush) and German -- hamburger and culinary terms like frankfurter / franks, liverwurst, sauerkraut, wiener, deli (catessen); scram, kindergarten, gesundheit; musical terminology (whole note, half note, etc.); and apparently cookbook, fresh ("impudent '') and what gives? Such constructions as Are you coming with? and I like to dance (for "I like dancing '') may also be the result of German or Yiddish influence.
Finally, a large number of English colloquialisms from various periods are American in origin; some have lost their American flavor (from OK and cool to nerd and 24 / 7), while others have not (have a nice day, for sure); many are now distinctly old - fashioned (swell, groovy). Some English words now in general use, such as hijacking, disc jockey, boost, bulldoze and jazz, originated as American slang. Among the many English idioms of U.S. origin are get the hang of, bark up the wrong tree, keep tabs, run scared, take a backseat, have an edge over, stake a claim, take a shine to, in on the ground floor, bite off more than one can chew, off / on the wagon, stay put, inside track, stiff upper lip, bad hair day, throw a monkey wrench / monkeywrenching, under the weather, jump bail, come clean, come again?, it ai n't over till it 's over, and what goes around comes around.
American English has always shown a marked tendency to use nouns as verbs. Examples of verbed nouns are interview, advocate, vacuum, lobby, pressure, rear - end, transition, feature, profile, spearhead, skyrocket, showcase, service (as a car), corner, torch, exit (as in "exit the lobby ''), factor (in mathematics), gun ("shoot ''), author (which disappeared in English around 1630 and was revived in the U.S. three centuries later) and, out of American material, proposition, graft (bribery), bad - mouth, vacation, major, backpack, backtrack, intern, ticket (traffic violations), hassle, blacktop, peer - review, dope and OD, and, of course verbed as used at the start of this sentence.
Compounds coined in the U.S. are for instance foothill, flatlands, badlands, landslide (in all senses), overview (the noun), backdrop, teenager, brainstorm, bandwagon, hitchhike, smalltime, deadbeat, frontman, lowbrow and highbrow, hell - bent, foolproof, nitpick, about - face (later verbed), upfront (in all senses), fixer - upper, no - show; many of these are phrases used as adverbs or (often) hyphenated attributive adjectives: non-profit, for - profit, free - for - all, ready - to - wear, catchall, low - down, down - and - out, down and dirty, in - your - face, nip and tuck; many compound nouns and adjectives are open: happy hour, fall guy, capital gain, road trip, wheat pit, head start, plea bargain; some of these are colorful (empty nester, loan shark, ambulance chaser, buzz saw, ghetto blaster, dust bunny), others are euphemistic (differently abled (physically challenged), human resources, affirmative action, correctional facility).
Many compound nouns have the form verb plus preposition: add - on, stopover, lineup, shakedown, tryout, spin - off, rundown ("summary ''), shootout, holdup, hideout, comeback, cookout, kickback, makeover, takeover, rollback ("decrease ''), rip - off, come - on, shoo - in, fix - up, tie - in, tie - up ("stoppage ''), stand - in. These essentially are nouned phrasal verbs; some prepositional and phrasal verbs are in fact of American origin (spell out, figure out, hold up, brace up, size up, rope in, back up / off / down / out, step down, miss out, kick around, cash in, rain out, check in and check out (in all senses), fill in ("inform ''), kick in or throw in ("contribute ''), square off, sock in, sock away, factor in / out, come down with, give up on, lay off (from employment), run into and across ("meet ''), stop by, pass up, put up (money), set up ("frame ''), trade in, pick up on, pick up after, lose out).
Noun endings such as - ee (retiree), - ery (bakery), - ster (gangster) and - cian (beautician) are also particularly productive. Some verbs ending in - ize are of U.S. origin; for example, fetishize, prioritize, burglarize, accessorize, itemize, editorialize, customize, notarize, weatherize, winterize, Mirandize; and so are some back - formations (locate, fine - tune, evolute, curate, donate, emote, upholster, peeve and enthuse). Among syntactical constructions that arose in the U.S. are as of (with dates and times), outside of, headed for, meet up with, back of, convince someone to, not about to and lack for.
Americanisms formed by alteration of some existing words include notably pesky, phony, rambunctious, pry (as in "pry open '', from prize), putter (verb), buddy, sundae, skeeter, sashay and kitty - corner. Adjectives that arose in the U.S. are for example, lengthy, bossy, cute and cutesy, grounded (of a child), punk (in all senses), sticky (of the weather), through (as in "through train '', or meaning "finished ''), and many colloquial forms such as peppy or wacky. American blends include motel, guesstimate, infomercial and televangelist.
A number of words and meanings that originated in Middle English or Early Modern English and that have been in everyday use in the United States dropped out in most varieties of British English; some of these have cognates in Lowland Scots. Terms such as fall ("autumn ''), faucet ("tap ''), diaper ("nappy ''), candy ("sweets ''), skillet, eyeglasses and obligate are often regarded as Americanisms. Fall for example came to denote the season in 16th century England, a contraction of Middle English expressions like "fall of the leaf '' and "fall of the year ''.
During the 17th century, English immigration to the British colonies in North America was at its peak and the new settlers took the English language with them. While the term fall gradually became obsolete in Britain, it became the more common term in North America. Gotten (past participle of get) is often considered to be an Americanism, although there are some areas of Britain, such as Lancashire and North East England, that still continue to use it and sometimes also use putten as the past participle for put (which is not done by most speakers of American English).
Other words and meanings, to various extents, were brought back to Britain, especially in the second half of the 20th century; these include hire ("to employ ''), quit ("to stop '', which spawned quitter in the U.S.), I guess (famously criticized by H.W. Fowler), baggage, hit (a place), and the adverbs overly and presently ("currently ''). Some of these, for example monkey wrench and wastebasket, originated in 19th century Britain.
The mandative subjunctive (as in "the City Attorney suggested that the case not be closed '') is livelier in American English than it is in British English. It appears in some areas as a spoken usage and is considered obligatory in contexts that are more formal. The adjectives mad meaning "angry '', smart meaning "intelligent '', and sick meaning "ill '' are also more frequent in American (these meanings are also frequent in Hiberno - English) than British English.
Linguist Bert Vaux created a survey, completed in 2003, polling English speakers across the United States about the specific words they would use in everyday speech for various concepts. This 2003 study concluded that:
American English and British English (BrE) often differ at the levels of phonology, phonetics, vocabulary, and, to a much lesser extent, grammar and orthography. The first large American dictionary, An American Dictionary of the English Language, known as Webster 's Dictionary, was written by Noah Webster in 1828, codifying several of these spellings.
Differences in grammar are relatively minor, and do not normally affect mutual intelligibility; these include: different use of some auxiliary verbs; formal (rather than notional) agreement with collective nouns; different preferences for the past forms of a few verbs (for example, AmE / BrE: learned / learnt, burned / burnt, snuck / sneaked, dove / dived) although the purportedly "British '' forms can occasionally be seen in American English writing as well; different prepositions and adverbs in certain contexts (for example, AmE in school, BrE at school); and whether or not a definite article is used, in very few cases (AmE to the hospital, BrE to hospital; contrast, however, AmE actress Elizabeth Taylor, BrE the actress Elizabeth Taylor). Often, these differences are a matter of relative preferences rather than absolute rules; and most are not stable, since the two varieties are constantly influencing each other, and American English is not a standardized set of dialects.
Differences in orthography are also minor. The main differences are that American English usually uses spellings such as flavor for British flavour, fiber for fibre, defense for defence, analyze for analyse, license for licence, catalog for catalogue and traveling for travelling. Noah Webster popularized such spellings in America, but he did not invent most of them. Rather, "he chose already existing options (...) on such grounds as simplicity, analogy or etymology ''. Other differences are due to the francophile tastes of 19th century Victorian era Britain (for example, they preferred programme for program, manoeuvre for maneuver, cheque for check, etc.). AmE almost always uses - ize in words like realize. BrE prefers - ise, but also uses - ize on occasion (see Oxford spelling).
There are a few differences in punctuation rules. British English is more tolerant of run - on sentences, called "comma splices '' in American English, and American English requires that periods and commas be placed inside closing quotation marks even in cases in which British rules would place them outside. American English also favors the double quotation mark over single.
AmE sometimes favors words that are morphologically more complex, whereas BrE uses clipped forms, such as AmE transportation and BrE transport or where the British form is a back - formation, such as AmE burglarize and BrE burgle (from burglar). However, while individuals usually use one or the other, both forms will be widely understood and mostly used alongside each other within the two systems.
British English also differs from American English in that "schedule '' can be pronounced with either (sk) or (ʃ).
|
who established the nam viet kingdom in the third century bce | History of Vietnam - Wikipedia
Vietnam 's recorded history stretches back to the mid-to - late 3rd century BCE, when Âu Lạc and Nanyue (Nam Việt in Vietnamese) were established (Nanyue conquered Âu Lạc in 179 BCE). Pre-historic Vietnam was home to some of the world 's earliest civilizations and societies -- making them one of the world 's first people who practiced agriculture and rice cultivation. The Red River valley formed a natural geographic and economic unit, bounded to the north and west by mountains and jungles, to the east by the sea and to the south by the Red River Delta. According to myth the first Vietnamese state was founded in 2879 BC, but archaeological studies suggest development towards chiefdoms during the late Bronze Age Đông Sơn culture.
Vietnam 's peculiar geography made it a difficult country to attack, which is why Vietnam under the Hùng kings was for so long an independent and self - contained state. Once Vietnam did succumb to foreign rule, however, it proved unable to escape from it, and for 1,100 years, Vietnam had been successively governed by a series of Chinese dynasties: the Han, Eastern Wu, Jin, Liu Song, Southern Qi, Liang, Sui, Tang, and Southern Han; leading to the loss of native cultural heritage, language, and much of national identity. At certain periods during these 1,100 years, Vietnam was independently governed under the Triệus, Trưng Sisters, Early Lýs, Khúcs and Dương Đình Nghệ -- although their triumphs and reigns were temporary.
During the Chinese domination of North Vietnam, several civilizations flourished in what is today central and south Vietnam, particularly the Funanese and Cham. The founders and rulers of these governments, however, were not native to Vietnam. From the 10th century onwards, the Vietnamese, emerging in their heartland of the Red River Delta, began to conquer these civilizations.
When Ngô Quyền (King of Vietnam, 939 -- 944) restored sovereign power in the country, the next millennium was advanced by the accomplishments of successive dynasties: Ngôs, Đinhs, Early Lês, Lýs, Trầns, Hồs, Later Trầns, Later Lês, Mạcs, Trịnhs, Nguyễns, Tây Sơns and again Nguyễns. At various points during the imperial dynasties, Vietnam was ravaged and divided by civil wars and witnessed interventions by the Songs, Mongol Yuans, Chams, Mings, Siamese, Manchus, and French.
The Ming Empire conquered the Red River valley for a while before native Vietnamese regained control and the French Empire reduced Vietnam to a French dependency for nearly a century, followed by an occupation by the Japanese Empire. Political upheaval and Communist insurrection put an end to the monarchy after World War II, and the country was proclaimed a republic.
According to mythology, for almost three millennia -- from its beginning around 2879 B.C. (or early 7th century BC) to its conquest by Thục Phán in 258 B.C. -- the Hồng Bàng period was divided into 18 dynasties, with each dynasty being based on the lineage of the kings. Throughout this era, the country encountered many changes, some being very drastic. Due to the limitation of the written evidence, the main sources of information about the Hồng Bàng period are the many vestiges, objects and artifacts that have been recovered from archaeological sites - as well as a considerable amount of legend. The land began as several tribal states, with King Kinh Dương Vương grouping all the vassal states at around 2879 BC. The ancient Vietnamese rulers of this period are collectively known as the Hùng kings (Vietnamese: Hùng Vương).
From ancient times, modern northern Vietnam and southern China were peopled by many races. Lộc Tục (c. 2919 -- 2794 BC) succeeded his predecessor as tribal chief and made the first attempts to incorporate all tribes around 2879 BC. As he succeeded in grouping all the vassal states within his territory, a convocation of the subdued tribes proclaimed him King Kinh Dương Vương, as the leader of the unified ancient Vietnamese nation. Kinh Dương Vương called his newly born country Xích Quỷ and reigned over the confederacy that occupied the Red River Delta in present - day Northern Vietnam and part of southeastern China, seeing the beginnings of nationhood for Vietnam under one supreme ruler, the Hùng king, also starting the Hồng Bàng period.
According to stories of the period, the First Hùng dynasty only had one ruler, Kinh Dương Vương himself, and witnessed the first two capitals in Vietnamese history, at Ngàn Hống and Nghĩa Lĩnh. Sùng Lãm (c. 2825 BC --?) was Kinh Dương Vương 's successor and founded the Second Hùng dynasty. The next line of kings that followed renamed the country Văn Lang.
The Third Hùng dynasty lasted from approximately 2524 BC to 2253 BC. The administrative rule of the Lạc tướng, Bố chính, and Lạc hầu began.
The period of the Fourth Hùng dynasty (c. 2252 -- 1913 BC) saw evidence of an early Vietnamese calendar system recorded on stone tools, and the population moved out of the mountainous areas and began to settle in the open along the rivers to take up agriculture.
The Fifth Hùng dynasty lasted from approximately 1912 BC to 1713 BC.
Then, during the Sixth Hùng dynasty, Văn Lang was invaded by a mysterious people called the Xích Tỵ, but the king drove them back and restored Văn Lang to greatness.
The Seventh dynasty started with Lang Liêu, a son of the last king of the Sixth dynasty. Lang Liêu was a prince who won a culinary contest and, with it, the throne, because his creation, bánh chưng (rice cake), reflected his deep understanding of the land 's vital economy: rice farming. The Seventh dynasty, lasting well into the early first millennium BC, was a period of stabilization during which civilization continued to flourish.
The first millennium BC was a period that went from the Twelfth dynasty to the Eighteenth dynasty. It was when the Vietnamese Bronze Age culture further flourished and attained an unprecedented level of realism, finally culminating in the opening stage of the Vietnamese Iron Age.
The Eighteenth dynasty was the last ruling dynasty during the Hùng king epoch. It fell to the Âu Việt in 258 BC after the last Hùng king was defeated in battle.
Accounts of this period mix historical facts with legend. The Legend of Gióng tells of a youth going to war to save the country, wearing iron armor, riding an armored horse, and wielding an iron staff, showing that metalworking had become sophisticated. The Legend of the Magic Crossbow, about a crossbow that could deliver thousands of arrows, suggests the extensive use of archery in warfare.
Fishing and hunting supplemented the main rice crop. Arrowheads and spears were dipped in poison to kill larger animals such as elephants. Betel nuts were widely chewed and the lower classes rarely wore clothing more substantial than a loincloth. Every spring, a fertility festival was held which featured huge parties and sexual abandon. Religion consisted of primitive animistic cults.
Since around 2000 BC, stone hand tools and weapons improved extraordinarily in both quantity and variety. Pottery reached a higher level of technique and decoration style. The Vietnamese people were mainly agriculturists, growing the wet rice Oryza, which became the main staple of their diet. During the later stage of the first half of the 2nd millennium BC, the first bronze tools appeared, although such tools were still rare. By about 1000 BC, bronze had replaced stone for about 40 percent of edged tools and weapons, later rising to about 60 percent. Here, there were not only bronze weapons, axes, and personal ornaments, but also sickles and other agricultural tools. Toward the close of the Bronze Age, bronze accounted for more than 90 percent of tools and weapons, and there were exceptionally extravagant graves -- the burial places of powerful chieftains -- containing hundreds of ritual and personal bronze artifacts such as musical instruments, bucket - shaped ladles, and ornamental daggers. After 1000 BC, the ancient Vietnamese people became skilled agriculturalists as they grew rice and kept buffaloes and pigs. They were also skilled fishermen and bold sailors, whose long dug - out canoes traversed the eastern sea.
Modern central and southern Vietnam were not originally part of the Vietnamese state. The peoples of those areas developed a distinct culture from the ancient Vietnamese in the Red River Delta region. For instance, the 1st millennium BC Sa Huỳnh culture in the areas of present - day central Vietnam revealed a considerable use of iron and decorative items made from glass, semi-precious and precious stones such as agate, carnelian, rock crystal, amethyst, and nephrite. The culture also showed evidence of an extensive trade network. The Sa Huỳnh people were most likely the predecessors of the Cham people, an Austronesian - speaking people and the founders of the kingdom of Champa.
By the 3rd century BC, another Viet group, the Âu Việt, emigrated from present - day southern China to the Red River delta and mixed with the indigenous Văn Lang population. In 257 BC, a new kingdom, Âu Lạc, emerged as the union of the Âu Việt and the Lạc Việt, with Thục Phán proclaiming himself "An Dương Vương '' ("King An Dương ''). Some modern Vietnamese believe that Thục Phán came upon the Âu Việt territory (modern - day northernmost Vietnam, western Guangdong, and southern Guangxi province, with its capital in what is today Cao Bằng Province).
After assembling an army, he defeated and overthrew the Eighteenth dynasty of the Hùng kings around 258 BC. He proclaimed himself An Dương Vương ("King An Dương ''), renamed his newly acquired state from Văn Lang to Âu Lạc, and established the new capital at Phong Khê, in the present - day Phú Thọ town in northern Vietnam, where he built the Cổ Loa Citadel (Cổ Loa Thành), a spiral fortress approximately ten miles north of the new capital. At Cổ Loa he built many concentric walls around the city for defensive purposes, and these walls, together with skilled Âu Lạc archers, kept the capital safe from invaders. However, records show that espionage ultimately resulted in the downfall of An Dương Vương.
In 207 BC, Qin warlord Triệu Đà (pinyin: Zhao Tuo) established his own independent kingdom in the present - day Guangdong / Guangxi area. He proclaimed his new kingdom as Nam Việt (pinyin: Nanyue), starting the Triệu dynasty. Triệu Đà had earlier been appointed commandant of central Guangdong; he closed the borders, conquered neighboring districts, and titled himself "King of Nam Việt ''. In 179 BC, he defeated King An Dương Vương and annexed Âu Lạc.
This period is controversial as on one side, some Vietnamese historians consider Triệu 's rule as the starting point of the Chinese domination, since Triệu Đà was a former Qin general, whereas others consider it still an era of Vietnamese independence as the Triệu family in Nam Việt were assimilated to local culture. They ruled independently of what then constituted the Han Empire. At one point, Triệu Đà even declared himself Emperor, equal to the Han Emperor in the north.
In 111 BC, Han troops invaded Nam Việt and established new territories, dividing Vietnam into Giao Chỉ (pinyin: Jiaozhi), now the Red River delta; Cửu Chân, from modern - day Thanh Hóa to Hà Tĩnh; and Nhật Nam (pinyin: Rinan), from modern - day Quảng Bình to Huế. While governors and top officials were Chinese, the original Vietnamese nobles (Lạc Hầu, Lạc Tướng) from the Hồng Bàng period still governed some of the highlands.
In 40 AD, the Trưng Sisters led a successful revolt against Han Governor Su Ding (Vietnamese: Tô Định) and recaptured 65 states (including modern Guangxi). Trưng Trắc became the Queen (Trưng Nữ Vương). In 43 AD, Emperor Guangwu of Han sent his famous general Ma Yuan (Vietnamese: Mã Viện) with a large army to quell the revolt. After a long, difficult campaign, Ma Yuan suppressed the uprising and the Trưng Sisters committed suicide to avoid capture. To this day, the Trưng Sisters are revered in Vietnam as the national symbol of Vietnamese women.
Learning a lesson from the Trưng revolt, the Han and other successful Chinese dynasties took measures to eliminate the power of the Vietnamese nobles. The Vietnamese elites were educated in Chinese culture and politics. A Giao Chỉ prefect, Shi Xie, ruled Vietnam as an autonomous warlord for forty years and was posthumously deified by later Vietnamese emperors. Nearly 200 years passed before the Vietnamese attempted another revolt. In 225 another woman, Triệu Thị Trinh, popularly known as Lady Triệu (Bà Triệu), led another revolt which lasted until 248. Once again, the uprising failed and Triệu Thị Trinh threw herself into a river.
At the same time, in present - day Central Vietnam, there was a successful revolt of Cham nations in 192. Chinese dynasties called it Lin - Yi (Lin village; Vietnamese: Lâm Ấp). It later became a powerful kingdom, Champa, stretching from Quảng Bình to Phan Thiết (Bình Thuận).
In the period between the beginning of the Chinese Age of Fragmentation and the end of the Tang dynasty, several revolts against Chinese rule took place, such as those of Lý Bôn and his general and heir Triệu Quang Phục; and those of Mai Thúc Loan and Phùng Hưng. All of them ultimately failed, yet most notable were those led by Lý Bôn and Triệu Quang Phục, whose Early Lý dynasty ruled for almost half a century, from 544 to 602, before Sui China reconquered their kingdom Vạn Xuân.
During the Tang dynasty, Vietnam was called Annam until 866. With its capital around modern Bắc Ninh, Annam became a flourishing trading outpost, receiving goods from the southern seas. The Book of the Later Han recorded that in 166 the first envoy from the Roman Empire to China arrived by this route, and merchants were soon to follow. The 3rd - century Tales of Wei (Weilüe) mentioned a "water route '' (the Red River) from Annam into what is now southern Yunnan. From there, goods were taken over land to the rest of China via the regions of modern Kunming and Chengdu.
In 866, Annam was renamed Tĩnh Hải quân. Early in the 10th century, as China became politically fragmented, successive lords from the Khúc clan, followed by Dương Đình Nghệ, ruled Tĩnh Hải quân autonomously under the Tang title of Jiedushi (Vietnamese: Tiết Độ Sứ), Virtuous Lord, but stopped short of proclaiming themselves kings.
In 938, Southern Han sent troops to conquer autonomous Giao Châu. Ngô Quyền, Dương Đình Nghệ 's son - in - law, defeated the Southern Han fleet at the Battle of Bạch Đằng (938). He then proclaimed himself King Ngô and effectively began the age of independence for Vietnam.
The basic nature of Vietnamese society changed little during the nearly 1,000 years between independence from China in the 10th century and the French conquest in the 19th century. The king was the ultimate source of political authority, the final dispenser of justice and law, and supreme commander - in - chief of the armed forces, as well as overseer of religious rituals. Administration was carried out by mandarins who were trained exactly like their Chinese counterparts (i.e. by rigorous study of Confucian texts). Overall, Vietnam remained very efficiently and stably governed except in times of war and dynastic breakdown, and its administrative system was probably far more advanced than that of any other Southeast Asian state, being among the most highly centralized and stably governed in Asia. No serious challenge to the king 's authority ever arose, as titles of nobility were bestowed purely as honors and were not hereditary. Periodic land reforms broke up large estates and ensured that powerful landowners could not emerge. No religious or priestly class ever arose outside of the mandarins either. This stagnant absolutism ensured a stable, well - ordered society, but also resistance to social, cultural, or technological innovation. Reformers looked only to the past for inspiration.
Literacy remained the province of the upper classes. Initially, Chinese was used for writing purposes, but by the 13th century, a set of derivative characters known as Chữ Nôm emerged that allowed native Vietnamese words to be written. However, it remained limited to poetry, literature, and practical texts such as medicine, while all state and official documents were written in Classical Chinese. Aside from some mining and fishing, agriculture was the primary activity of most Vietnamese, and economic development and trade were not promoted or encouraged by the state.
Ngô Quyền 's untimely death after a short reign resulted in a power struggle for the throne and the country 's first major civil war, the upheaval of the Twelve Warlords (Loạn Thập Nhị Sứ Quân). The war lasted from 944 to 968, when the clan led by Đinh Bộ Lĩnh defeated the other warlords and unified the country. Đinh Bộ Lĩnh founded the Đinh dynasty, proclaimed himself Đinh Tiên Hoàng (Đinh the Majestic Emperor), and renamed the country from Tĩnh Hải quân to Đại Cồ Việt (literally "Great Viet Land ''), with its capital in Hoa Lư (modern - day Ninh Bình Province). The new emperor introduced strict penal codes to prevent chaos from happening again. He then tried to form alliances by granting the title of Queen to five women from the five most influential families.
In 979, Emperor Đinh Tiên Hoàng and his crown prince Đinh Liễn were assassinated, leaving his lone surviving son, the 6 - year - old Đinh Toàn, to assume the throne. Taking advantage of the situation, Song China invaded Annam. Facing such a grave threat to national independence, the commander of the armed forces (Thập Đạo Tướng Quân), Lê Hoàn, took the throne, founding the Early Lê dynasty. A capable military tactician, Lê Hoàn realized the risks of engaging the mighty Song troops head on; thus he tricked the invading army into Chi Lăng Pass, then ambushed and killed their commander, quickly ending the threat to his young nation in 981. The Song dynasty withdrew their troops and Lê Hoàn was referred to in his realm as Emperor Đại Hành (Đại Hành Hoàng Đế). Emperor Lê Đại Hành was also the first Vietnamese monarch to begin the southward expansion process against the kingdom of Champa.
Emperor Lê Đại Hành 's death in 1005 resulted in infighting for the throne amongst his sons. The eventual winner, Lê Long Đĩnh, became the most notorious tyrant in Vietnamese history. He devised sadistic punishments of prisoners for his own entertainment and indulged in deviant sexual activities. Toward the end of his short life (he died at the age of 24), Lê Long Đĩnh had become so ill that he had to lie down when meeting with his officials in court.
When the king Lê Long Đĩnh died in 1009, a palace guard commander named Lý Công Uẩn was nominated by the court to take over the throne, and founded the Lý dynasty. This event is regarded as the beginning of another golden era in Vietnamese history, with the following dynasties inheriting the Lý dynasty 's prosperity and doing much to maintain and expand it. The way Lý Công Uẩn ascended to the throne was rather uncommon in Vietnamese history. As a high - ranking military commander residing in the capital, he had every opportunity to seize power during the tumultuous years after Emperor Lê Hoàn 's death, yet he preferred not to do so out of a sense of duty. He was, in a way, "elected '' by the court after some debate before a consensus was reached.
The Lý dynasty is credited with laying down a concrete foundation for the nation of Vietnam. Leaving Hoa Lư, a natural fortification surrounded by mountains and rivers, Lý Công Uẩn moved his court to the new capital in present - day Hanoi and called it Thăng Long (Ascending Dragon). Lý Công Uẩn thus departed from the militarily defensive mentality of his predecessors and envisioned a strong economy as the key to national survival. The third emperor of the dynasty, Lý Thánh Tông, renamed the country "Đại Việt '' (大越, Great Viet). Successive Lý emperors continued to accomplish far - reaching feats: building a dike system to protect rice farms; founding the Quốc Tử Giám, the first noble university; holding regular examinations once every three years to select capable commoners for government positions; organizing a new system of taxation; and establishing humane treatment of prisoners. Women held important roles in Lý society, with court ladies in charge of tax collection. The Lý dynasty also promoted Buddhism, yet maintained a pluralistic attitude toward the three main philosophical systems of the time: Buddhism, Confucianism, and Taoism.
The Lý dynasty had two major wars with Song China, and a few invasive campaigns against neighboring Champa in the south. The most notable battle took place on Chinese territory in 1075. Upon learning that a Song invasion was imminent, the Lý army and navy, totaling about 100,000 men under the command of Lý Thường Kiệt and Tông Đản, used amphibious operations to preemptively destroy three Song military installations at Yongzhou, Qinzhou, and Lianzhou in present - day Guangdong and Guangxi, and killed 100,000 Chinese. The Song dynasty took revenge and invaded Đại Việt in 1076, but the Song troops were held back at the Battle of Như Nguyệt River, commonly known as the Cầu river, now in Bắc Ninh province about 40 km from the current capital, Hanoi. Neither side was able to force a victory, so the Lý dynasty proposed a truce, which the Song emperor accepted. Champa and the powerful Khmer Empire took advantage of the Lý dynasty 's distraction with the Song to pillage Đại Việt 's southern provinces. Together they invaded Vietnam in 1128 and 1132. Further invasions followed in the subsequent decades.
Toward the end of the Lý dynasty, a powerful court minister named Trần Thủ Độ forced the emperor Lý Huệ Tông to become a Buddhist monk and Lý Chiêu Hoàng, Huệ Tông 's young daughter, to become queen. Trần Thủ Độ then arranged the marriage of Chiêu Hoàng to his nephew Trần Cảnh and eventually had the throne transferred to Trần Cảnh, thus beginning the Trần dynasty.
Trần Thủ Độ viciously purged members of the Lý nobility; some Lý princes escaped to Korea, including Lý Long Tường. After the purge, the Trần emperors ruled the country in a similar manner to the Lý kings. Noted Trần dynasty accomplishments include the creation of a system of population records based at the village level, the compilation of a formal 30 - volume history of Đại Việt (Đại Việt Sử Ký) by Lê Văn Hưu, and the rise in status of the Nôm script, a system of writing for the Vietnamese language. The Trần dynasty also adopted a unique way to train new emperors: when a crown prince reached the age of 18, his predecessor would abdicate and turn the throne over to him, while holding the title of Retired Emperor (Thái Thượng Hoàng) and acting as a mentor to the new Emperor. Despite continued Champa - Khmer attacks, the Trần managed to arrange several periods of peace with them.
During the Trần dynasty, the armies of the Mongol Empire under Möngke Khan and Kublai Khan invaded Annam in 1258, 1285, and 1287 -- 88. Annam repelled all attacks of the Yuan Mongols during the reign of Kublai Khan. Three Mongol armies said to have numbered from 300,000 to 500,000 men were defeated. The key to Annam 's successes was to avoid the Mongols ' strength in open field battles and city sieges -- the Trần court abandoned the capital and the cities. The Mongols were then countered decisively at their weak points, which were battles in swampy areas such as Chương Dương, Hàm Tử, Vạn Kiếp and on rivers such as Vân Đồn and Bạch Đằng. The Mongols also suffered from tropical diseases and loss of supplies to Trần army 's raids. The Yuan - Trần war reached its climax when the retreating Yuan fleet was decimated at the Battle of Bạch Đằng (1288). The military architect behind Annam 's victories was Commander Trần Quốc Tuấn, more popularly known as Trần Hưng Đạo. In order to avoid further disastrous campaigns, the Trần and Champa acknowledged Mongol supremacy.
It was also during this period that the Trần emperors waged many wars against the southern kingdom of Champa, continuing the Vietnamese long history of southern expansion (known as Nam tiến) that had begun shortly after gaining independence in the 10th century. Often, they encountered strong resistance from the Chams. Champa was made a tributary state of Vietnam in 1312, but ten years later regained independence and Cham troops led by king Chế Bồng Nga (Cham: Po Binasuor or Che Bonguar) killed king Trần Duệ Tông in battle and even laid siege to Đại Việt 's capital Thăng Long in 1377 and again in 1383. However, the Trần dynasty was successful in gaining two Champa provinces, located around present - day Huế, through the peaceful means of the political marriage of Princess Huyền Trân to a Cham king.
The wars with Champa and the Mongols left Vietnam exhausted and bankrupt. The Trần dynasty was in turn overthrown by one of its own court officials, Hồ Quý Ly. Hồ Quý Ly forced the last Trần emperor to abdicate and assumed the throne in 1400. He changed the country name to Đại Ngu and moved the capital to Tây Đô, the Western Capital, now Thanh Hóa. Thăng Long was renamed Đông Đô, the Eastern Capital. Although widely blamed for causing national disunity and later losing the country to the Ming Empire, Hồ Quý Ly 's reign actually introduced many progressive, ambitious reforms, including the addition of mathematics to the national examinations, the open critique of Confucian philosophy, the use of paper currency in place of coins, investment in building large warships and cannons, and land reform. He ceded the throne to his son, Hồ Hán Thương, in 1401 and assumed the title Thái Thượng Hoàng, in a similar manner to the Trần kings.
In 1407, under the pretext of helping to restore the Trần dynasty, Chinese Ming troops invaded Đại Ngu and captured Hồ Quý Ly and Hồ Hán Thương. The Hồ dynasty came to an end after only 7 years in power. The Ming occupying force annexed Đại Ngu into the Ming Empire after claiming that there was no heir to the Trần throne. Vietnam, weakened by dynastic feuds and the wars with Champa, quickly succumbed. The Ming conquest was harsh. Vietnam was annexed directly as a province of China, the old policy of cultural assimilation was again forcibly imposed, and the country was ruthlessly exploited. However, by this time, Vietnamese nationalism had reached a point where attempts at sinicization could only strengthen resistance. Almost immediately, Trần loyalists started a resistance war. The resistance, under the leadership of Trần Quĩ, at first made some gains, but after Trần Quĩ executed two top commanders out of suspicion, a rift widened within his ranks and led to his defeat in 1413.
In 1418, a wealthy farmer, Lê Lợi, led the Lam Sơn uprising against the Ming from his base of Lam Sơn (Thanh Hóa province). Overcoming many early setbacks and with strategic advice from Nguyễn Trãi, Lê Lợi 's movement finally gathered momentum, marched northward, and laid siege to Đông Quan (now Hanoi), the capital of the Ming occupation. The Ming Emperor sent a relief force, but Lê Lợi staged an ambush at Chi Lăng and killed the Ming commander, Liu Sheng. The Ming troops at Đông Quan surrendered. The Lam Sơn revolution killed 300,000 Ming soldiers. In 1428, Lê Lợi ascended to the throne and began the Hậu Lê dynasty (Posterior or Later Lê). Lê Lợi renamed the country back to Đại Việt and moved the capital back to Thăng Long.
The Lê dynasty carried out land reforms to revitalize the economy after the war. Unlike the Lý and Trần kings, who were more influenced by Buddhism, the Lê kings leaned toward Confucianism. A comprehensive set of laws, the Hồng Đức code, was introduced with some strong Confucian elements, yet also included some progressive rules, such as the rights of women. Art and architecture during the Lê dynasty also became more influenced by Chinese styles than during the Lý and Trần dynasties. The Lê dynasty commissioned the drawing of national maps and had Ngô Sĩ Liên continue the task of writing Đại Việt 's history up to the time of Lê Lợi. King Lê Thánh Tông opened hospitals and had officials distribute medicines to areas affected by epidemics.
Overpopulation and land shortages stimulated a Vietnamese expansion south. In 1471, Lê troops led by king Lê Thánh Tông invaded Champa and captured its capital Vijaya. This event effectively ended Champa as a powerful kingdom, although some smaller surviving Cham states lasted for a few centuries more. It initiated the dispersal of the Cham people across Southeast Asia. With the kingdom of Champa mostly destroyed and the Cham people exiled or suppressed, Vietnamese colonization of what is now central Vietnam proceeded without substantial resistance. However, despite becoming greatly outnumbered by Vietnamese settlers and the integration of formerly Cham territory into the Vietnamese nation, the majority of Cham people nevertheless remained in Vietnam and they are now considered one of the key minorities in modern Vietnam. Vietnamese armies also raided the Mekong Delta, which the decaying Khmer Empire could no longer defend. The city of Huế, founded in 1600, lies close to where the Champa capital of Indrapura once stood. In 1479, King Lê Thánh Tông also campaigned against Laos in the Vietnamese -- Lao War and captured its capital Luang Prabang, which was later totally ransacked and destroyed by the Vietnamese. He made further incursions westwards into the Irrawaddy River region in modern - day Burma before withdrawing. By the time of his withdrawal, Vietnam had expanded into what has been described as "the first Southeast Asian empire '' and perhaps one of the most powerful nations in Asia.
The Lê dynasty was overthrown by its general Mạc Đăng Dung in 1527. He killed the Lê emperor and proclaimed himself emperor, starting the Mạc dynasty. After suppressing revolts for two years, Mạc Đăng Dung adopted the Trần dynasty 's practice and ceded the throne to his son, Mạc Đăng Doanh, becoming Thái Thượng Hoàng himself.
Meanwhile, Nguyễn Kim, a former official in the Lê court, revolted against the Mạc and helped king Lê Trang Tông restore the Lê court in the Thanh Hóa area. Thus a civil war began between the Northern Court (Mạc) and the Southern Court (Restored Lê). Nguyễn Kim 's side controlled the southern part of Annam (from Thanh Hóa southward), leaving the north (including Đông Kinh - Hanoi) under Mạc control. When Nguyễn Kim was assassinated in 1545, military power fell into the hands of his son - in - law, Trịnh Kiểm. In 1558, Nguyễn Kim 's son, Nguyễn Hoàng, suspecting that Trịnh Kiểm might kill him as he had done to his brother to secure power, asked to be governor of the far southern provinces around present - day Quảng Bình to Bình Định. Hoàng pretended to be insane, so Kiểm was fooled into thinking that sending Hoàng south was a good move, as Hoàng would be quickly killed in the lawless border regions. However, Hoàng governed the south effectively while Trịnh Kiểm, and then his son Trịnh Tùng, carried on the war against the Mạc. Nguyễn Hoàng sent money and soldiers north to help the war effort, but gradually he became more and more independent, transforming his realm 's economic fortunes by turning it into an international trading post.
The civil war between the Lê / Trịnh and Mạc dynasties ended in 1592, when the army of Trịnh Tùng conquered Hanoi and executed king Mạc Mậu Hợp. Survivors of the Mạc royal family fled to the northern mountains in the province of Cao Bằng and continued to rule there until 1677 when Trịnh Tạc conquered this last Mạc territory. The Lê kings, ever since Nguyễn Kim 's restoration, only acted as figureheads. After the fall of the Mạc dynasty, all real power in the north belonged to the Trịnh lords. Meanwhile, the Ming court reluctantly decided on a military intervention into the Vietnamese civil war, but Mạc Đăng Dung offered ritual submission to the Ming Empire, which was accepted.
In the year 1600, Nguyễn Hoàng also declared himself Lord (officially "Vương '', popularly "Chúa '') and refused to send more money or soldiers to help the Trịnh. He also moved his capital to Phú Xuân, modern - day Huế. Nguyễn Hoàng died in 1613 after having ruled the south for 55 years. He was succeeded by his 6th son, Nguyễn Phúc Nguyên, who likewise refused to acknowledge the power of the Trịnh, yet still pledged allegiance to the Lê king.
Trịnh Tráng succeeded Trịnh Tùng, his father, upon his death in 1623. Tráng ordered Nguyễn Phúc Nguyên to submit to his authority. The order was refused twice. In 1627, Trịnh Tráng sent 150,000 troops southward in an unsuccessful military campaign. The Trịnh were much stronger, with a larger population, economy and army, but they were unable to vanquish the Nguyễn, who had built two defensive stone walls and invested in Portuguese artillery.
The Trịnh -- Nguyễn War lasted from 1627 until 1672. The Trịnh army staged at least seven offensives, all of which failed to capture Phú Xuân. For a time, starting in 1651, the Nguyễn themselves went on the offensive and attacked parts of Trịnh territory. However, the Trịnh, under a new leader, Trịnh Tạc, forced the Nguyễn back by 1655. After one last offensive in 1672, Trịnh Tạc agreed to a truce with the Nguyễn Lord Nguyễn Phúc Tần. The country was effectively divided in two.
The West 's exposure to Annam and Annamese exposure to Westerners dated back to 166 AD with the arrival of merchants from the Roman Empire, to 1292 with the visit of Marco Polo, and the early 16th century with the arrival of Portuguese in 1516 and other European traders and missionaries. Alexandre de Rhodes, a French Jesuit priest, improved on earlier work by Portuguese missionaries and developed the Vietnamese romanized alphabet Quốc Ngữ in Dictionarium Annamiticum Lusitanum et Latinum in 1651. Various European efforts to establish trading posts in Vietnam failed, but missionaries were allowed to operate for some time until the mandarins began concluding that Christianity (which had succeeded in converting up to a tenth of the population by 1700) was a threat to the Confucian social order since it condemned ancestor worship as idolatry. Vietnamese attitudes to Europeans and Christianity hardened as they began to increasingly see it as a way of undermining society.
Between 1627 and 1775, two powerful families had partitioned the country: the Nguyễn lords ruled the South and the Trịnh lords ruled the North. The Trịnh -- Nguyễn War gave European traders the opportunities to support each side with weapons and technology: the Portuguese assisted the Nguyễn in the South while the Dutch helped the Trịnh in the North. The Trịnh and the Nguyễn maintained a relative peace for the next hundred years, during which both sides made significant accomplishments. The Trịnh created centralized government offices in charge of state budget and producing currency, unified the weight units into a decimal system, established printing shops to reduce the need to import printed materials from China, opened a military academy, and compiled history books.
Meanwhile, the Nguyễn lords continued the southward expansion by the conquest of the remaining Cham land. Việt settlers also arrived in the sparsely populated area known as "Water Chenla '', the lower Mekong Delta portion of the former Khmer Empire. Between the mid-17th century and the mid-18th century, as the former Khmer Empire was weakened by internal strife and Siamese invasions, the Nguyễn lords used various means, including political marriage, diplomatic pressure, and political and military favors, to gain the area around present - day Saigon and the Mekong Delta. The Nguyễn army at times also clashed with the Siamese army to establish influence over the former Khmer Empire.
In 1771, the Tây Sơn revolution broke out in Quy Nhơn, which was under the control of the Nguyễn lord. The leaders of this revolution were three brothers named Nguyễn Nhạc, Nguyễn Lữ, and Nguyễn Huệ, not related to the Nguyễn lords. By 1776, the Tây Sơn had occupied all of the Nguyễn Lord 's land and killed almost the entire royal family. The surviving prince Nguyễn Phúc Ánh (often called Nguyễn Ánh) fled to Siam, and obtained military support from the Siamese king. Nguyễn Ánh came back with 50,000 Siamese troops to regain power, but was defeated at the Battle of Rạch Gầm -- Xoài Mút and almost killed. Nguyễn Ánh fled Vietnam, but he did not give up.
The Tây Sơn army commanded by Nguyễn Huệ marched north in 1786 to fight the Trịnh lord, Trịnh Khải. The Trịnh army failed and Trịnh Khải committed suicide. The Tây Sơn army captured the capital in less than two months. The last Lê emperor, Lê Chiêu Thống, fled to Qing China and petitioned the Qianlong Emperor for help. The Qianlong Emperor supplied Lê Chiêu Thống with a massive army of around 200,000 troops to regain his throne from the usurper. Nguyễn Huệ proclaimed himself Emperor Quang Trung and, with 100,000 men, defeated the Qing troops in a surprise seven - day campaign during the lunar new year (Tết). There was even a rumor that Quang Trung had planned to conquer China as well, though this remains unclear. During his reign, Quang Trung envisioned many reforms but died of unknown causes on a march south in 1792, at the age of 40. During the reign of Emperor Quang Trung, Đại Việt was in fact divided into three political entities. The Tây Sơn leader, Nguyễn Nhạc, ruled the centre of the country from his capital Qui Nhơn. Emperor Quang Trung ruled the north from the capital Phú Xuân Huế. In the South, Nguyễn Ánh, assisted by many talented recruits from the South, captured Gia Định (present - day Saigon) in 1788 and established a strong base for his force.
After Quang Trung 's death, the Tây Sơn dynasty became unstable as the remaining brothers fought against each other and against the people who were loyal to Nguyễn Huệ 's infant son. Nguyễn Ánh sailed north in 1799, capturing Tây Sơn 's stronghold Qui Nhơn. In 1801, his force took Phú Xuân, the Tây Sơn capital. Nguyễn Ánh finally won the war in 1802, when he besieged Thăng Long (Hanoi) and executed Nguyễn Huệ 's son, Nguyễn Quang Toản, along with many Tây Sơn generals and officials. Nguyễn Ánh ascended the throne and called himself Emperor Gia Long. Gia is for Gia Định, the old name of Saigon; Long is for Thăng Long, the old name of Hanoi. Hence Gia Long implied the unification of the country. The Nguyễn dynasty lasted until Bảo Đại 's abdication in 1945. As China had for centuries referred to Đại Việt as Annam, Gia Long asked the Manchu Qing emperor to rename the country, from Annam to Nam Việt. To prevent any confusion of Gia Long 's kingdom with Triệu Đà 's ancient kingdom, the Manchu emperor reversed the order of the two words to Việt Nam. The name Vietnam has thus been in use since Emperor Gia Long 's reign, although historians have since found the name in older texts in which the Vietnamese referred to their country as Vietnam.
The Period of Division with its many tragedies and dramatic historical developments inspired many poets and gave rise to some Vietnamese masterpieces in verse, including the epic poem The Tale of Kiều (Truyện Kiều) by Nguyễn Du, Song of a Soldier 's Wife (Chinh Phụ Ngâm) by Đặng Trần Côn and Đoàn Thị Điểm, and a collection of satirical, erotically charged poems by a female poet, Hồ Xuân Hương.
In 1784, during the conflict between Nguyễn Ánh, the surviving heir of the Nguyễn lords, and the Tây Sơn dynasty, a French Roman Catholic prelate, Pigneaux de Behaine, sailed to France to seek military backing for Nguyễn Ánh. At Louis XVI 's court, Pigneaux brokered the Little Treaty of Versailles which promised French military aid in exchange for Vietnamese concessions. However, because of the French Revolution, Pigneaux 's plan failed to materialize. He went to the French territory of Pondichéry (India), and secured two ships, a regiment of Indian troops, and a handful of volunteers and returned to Vietnam in 1788. One of Pigneaux 's volunteers, Jean - Marie Dayot, reorganized Nguyễn Ánh 's navy along European lines and defeated the Tây Sơn at Qui Nhơn in 1792. A few years later, Nguyễn Ánh 's forces captured Saigon, where Pigneaux died in 1799. Another volunteer, Victor Olivier de Puymanel would later build the Gia Định fort in central Saigon.
After Nguyễn Ánh established the Nguyễn dynasty in 1802, he tolerated Catholicism and employed some Europeans in his court as advisors. His successors were more conservative Confucians and resisted Westernization. The next Nguyễn emperors, Minh Mạng, Thiệu Trị, and Tự Đức, brutally suppressed Catholicism and pursued a ' closed door ' policy, perceiving the Westerners as a threat, following events such as the Lê Văn Khôi revolt, when a French missionary, Fr. Joseph Marchand, encouraged local Catholics to revolt in an attempt to install a Catholic emperor. Catholics, both Vietnamese and foreign - born, were persecuted in retaliation. Trade with the West slowed during this period. There were frequent uprisings against the Nguyễns, with hundreds of such events being recorded in the annals. These acts were soon used as pretexts for France to invade Vietnam. The early Nguyễn dynasty had engaged in many of the constructive activities of its predecessors, building roads, digging canals, issuing a legal code, holding examinations, sponsoring care facilities for the sick, compiling maps and history books, and exerting influence over Cambodia and Laos.
Under the orders of Napoleon III of France, Rigault de Genouilly 's gunships attacked the port of Đà Nẵng in 1858, causing significant damage, yet failed to gain any foothold, in the process being afflicted by the humidity and tropical diseases. De Genouilly decided to sail south and captured the poorly defended city of Gia Định (present - day Ho Chi Minh City). From 1859 to 1867, French troops expanded their control over all six provinces on the Mekong delta and formed a colony known as Cochinchina.
A few years later, French troops landed in northern Vietnam (which they called Tonkin) and captured Hà Nội twice, in 1873 and 1882. The French managed to keep their grip on Tonkin although, twice, their top commanders Francis Garnier and Henri Rivière were ambushed and killed fighting pirates of the Black Flag Army hired by the mandarins. France assumed control over the whole of Vietnam after the Tonkin Campaign (1883 -- 1886). French Indochina was formed in October 1887 from Annam (Trung Kỳ, central Vietnam), Tonkin (Bắc Kỳ, northern Vietnam), Cochinchina (Nam Kỳ, southern Vietnam), and Cambodia, with Laos added in 1893. Within French Indochina, Cochinchina had the status of a colony, Annam was nominally a protectorate where the Nguyễn dynasty still ruled, and Tonkin had a French governor with local governments run by Vietnamese officials.
After Gia Định fell to French troops, many resistance movements broke out in occupied areas, some led by former court officers, such as Trương Định, some by peasants, such as Nguyễn Trung Trực, who sank the French gunship L'Esperance using guerilla tactics. In the north, most movements were led by former court officers and lasted decades, with Phan Đình Phùng fighting in central Vietnam until 1895. In the northern mountains, former bandit leader Hoàng Hoa Thám fought until 1911. Even the teenage Nguyễn Emperor Hàm Nghi left the Imperial Palace of Huế in 1885 with regent Tôn Thất Thuyết and started the Cần Vương ("Save the King '') movement, trying to rally the people to resist the French. He was captured in 1888 and exiled to French Algeria. Guerrillas of the Cần Vương movement murdered around a third of Vietnam 's Christian population during the rebellion. Decades later, two more Nguyễn kings, Thành Thái and Duy Tân were also exiled to Africa for having anti-French tendencies. The former was deposed on the pretext of insanity and Duy Tân was caught in a conspiracy with the mandarin Trần Cao Vân trying to start an uprising. However, lack of modern weapons and equipment prevented these resistance movements from being able to engage the French in open combat. The various anti-French revolts started by mandarins were carried out with the primary goal of restoring the old feudal society. However, by 1900 a new generation of Vietnamese were coming of age who had never lived in precolonial Vietnam. These young activists were as eager as their grandparents to see independence restored, but they realized that returning to the feudal order was not feasible and that modern technology and governmental systems were needed. Having been exposed to Western philosophy, they aimed to establish a republic upon independence, departing from the royalist sentiments of the Cần Vương movements. Some of them set up Vietnamese independence societies in Japan, which many viewed as a model society (i.e. an Asian nation that had modernized, but retained its own culture and institutions).
There emerged two parallel movements of modernization. The first was the Đông Du ("Go East '') Movement started in 1905 by Phan Bội Châu. Châu 's plan was to send Vietnamese students to Japan to learn modern skills, so that in the future they could lead a successful armed revolt against the French. With Prince Cường Để, he started two organizations in Japan: Duy Tân Hội and Việt Nam Công Hiến Hội. Due to French diplomatic pressure, Japan later deported Châu. Phan Châu Trinh, who favored a peaceful, non-violent struggle to gain independence, led a second movement, Duy Tân (Modernization), which stressed education for the masses, modernizing the country, fostering understanding and tolerance between the French and the Vietnamese, and peaceful transitions of power. The early part of the 20th century saw the growing status of the Romanized Quốc Ngữ alphabet for the Vietnamese language. Vietnamese patriots realized the potential of Quốc Ngữ as a useful tool to quickly reduce illiteracy and to educate the masses. The traditional Chinese scripts or the Nôm script were seen as too cumbersome and too difficult to learn. The use of prose in literature also became popular with the appearance of many novels; most famous were those from the Tự Lực Văn Đoàn literary circle.
As the French suppressed both movements, and after witnessing revolutionaries in action in China and Russia, Vietnamese revolutionaries began to turn to more radical paths. Phan Bội Châu created the Việt Nam Quang Phục Hội in Guangzhou, planning armed resistance against the French. In 1925, French agents captured him in Shanghai and spirited him to Vietnam. Due to his popularity, Châu was spared from execution and placed under house arrest until his death in 1940. In 1927, the Việt Nam Quốc Dân Đảng (Vietnamese Nationalist Party), modeled after the Kuomintang in China, was founded, and the party launched the armed Yên Bái mutiny in 1930 in Tonkin, which resulted in its chairman, Nguyễn Thái Học, and many other leaders being captured and executed by guillotine.
Marxism was also introduced into Vietnam with the emergence of three separate Communist parties; the Indochinese Communist Party, Annamese Communist Party and the Indochinese Communist Union, joined later by a Trotskyist movement led by Tạ Thu Thâu. In 1930, the Communist International (Comintern) sent Nguyễn Ái Quốc to Hong Kong to coordinate the unification of the parties into the Vietnamese Communist Party (CPV) with Trần Phú as the first Secretary General. Later the party changed its name to the Indochinese Communist Party as the Comintern, under Stalin, did not favor nationalistic sentiments. Being a leftist revolutionary living in France since 1911, Nguyễn Ái Quốc participated in founding the French Communist Party and in 1924 traveled to the Soviet Union to join the Comintern. Through the late 1920s, he acted as a Comintern agent to help build Communist movements in Southeast Asia. During the 1930s, the CPV was nearly wiped out under French suppression with the execution of top leaders such as Phú, Lê Hồng Phong, and Nguyễn Văn Cừ.
During World War II, Japan invaded Indochina in 1940, keeping the Vichy French colonial administration in place as a puppet. In 1941, Nguyễn Ái Quốc, now known as Hồ Chí Minh, arrived in northern Vietnam to form the Việt Minh Front, ostensibly an umbrella group for all parties fighting for Vietnam 's independence, but in practice dominated by the Communist Party. The Việt Minh had a modest armed force and during the war worked with the American Office of Strategic Services to collect intelligence on the Japanese. A famine broke out in 1944 -- 45. Japan 's defeat by the World War II Allies created a power vacuum that allowed Vietnamese nationalists of all parties to seize power in August 1945, forcing Emperor Bảo Đại to abdicate and ending the Nguyễn dynasty. Their initial success in staging uprisings and in seizing control of most of the country by September 1945 was partially undone, however, by the return of the French a few months later.
According to a 2018 study in the Journal of Conflict Resolution covering Vietnam - China relations from 1365 to 1841, the relations could be characterized as a "hierarchic tributary system ''. The study found that "the Vietnamese court explicitly recognized its unequal status in its relations with China through a number of institutions and norms. Vietnamese rulers also displayed very little military attention to their relations with China. Rather, Vietnamese leaders were clearly more concerned with quelling chronic domestic instability and managing relations with kingdoms to their south and west. ''
In September 1945, Hồ Chí Minh proclaimed the Democratic Republic of Vietnam (DRV) and held the position of chairman (Chủ Tịch). Communist rule was cut short, however, by nationalist Chinese and British occupation forces whose presence tended to support the Communist Party 's political opponents. In 1946, Vietnam had its first National Assembly election (won by the Việt Minh in central and northern Vietnam), which drafted the first constitution, but the situation was still precarious: the French tried to regain power by force; some Cochinchinese politicians formed a secessionist government, the Republic of Cochinchina (Cộng hòa Nam Kỳ); and the non-Communist and Communist forces were engaging each other in sporadic battle. Stalinists purged Trotskyists. Religious sects and resistance groups formed their own militias. The Communists eventually suppressed all non-Communist parties but failed to secure a peace deal with France.
Full - scale war broke out between the Việt Minh and France in late 1946 and the First Indochina War officially began. Realizing that colonialism was coming to an end worldwide, France decided to bring former emperor Bảo Đại back to power, as a political alternative to Ho Chi Minh. A Provisional Central Government was formed in 1948, reuniting Annam and Tonkin, but the complete reunification of Vietnam was delayed for a year because of the problems posed by Cochinchina 's legal status. In July 1949, the State of Vietnam was officially proclaimed, as a semi-independent country within the French Union, with Bảo Đại as Head of State. France was finally persuaded to relinquish its colonies in Indochina in 1954 when Viet Minh forces defeated the French at Dien Bien Phu. The 1954 Geneva Conference left Vietnam a divided nation, with Hồ Chí Minh 's communist DRV government ruling the North from Hanoi and Ngô Đình Diệm 's Republic of Vietnam, supported by the United States, ruling the South from Saigon. Between 1953 and 1956, the North Vietnamese government instituted various agrarian reforms, including "rent reduction '' and "land reform '', which resulted in significant political oppression. During the land reform, testimony from North Vietnamese witnesses suggested a ratio of one execution for every 160 village residents, which extrapolated nationwide would indicate nearly 100,000 executions. Because the campaign was concentrated mainly in the Red River Delta area, a lower estimate of 50,000 executions became widely accepted by scholars at the time. However, declassified documents from the Vietnamese and Hungarian archives indicate that the number of executions was much lower than reported at the time, although likely greater than 13,500. In the South, Diem went about crushing political and religious opposition, imprisoning or killing tens of thousands.
Along with the split between northern and southern Vietnam in geographical territory came a divergence in their choices of political structure. Northern Vietnam (Đại Việt) opted for a centralized bureaucratic regime, while the south was based on a patron - client mechanism that relied heavily on personalized rule. Due to this structural difference, the north and south revealed different patterns in their economic activities during this period, the long - term effects of which persist to this day. Citizens who previously lived under the bureaucratic state are more likely to have higher household consumption and to engage more in civic activities, and the state itself tends to have a stronger fiscal capacity for taxation inherited from the earlier institution.
As a result of the Vietnam (Second Indochina) War (1954 -- 75), Viet Cong and regular People 's Army of Vietnam (PAVN) forces of the DRV unified the country under communist rule. In this conflict, the North and the Viet Cong -- with logistical support from the Soviet Union -- defeated the Army of the Republic of Vietnam, which sought to maintain South Vietnamese independence with the support of the U.S. military, whose troop strength peaked at 540,000 during the communist - led Tet Offensive in 1968. The North did not abide by the terms of the 1973 Paris Agreement, which officially settled the war by calling for free elections in the South and peaceful reunification. Two years after the withdrawal of the last U.S. forces in 1973, Saigon, the capital of South Vietnam, fell to the communists, and the South Vietnamese army surrendered in 1975. In 1976, the government of united Vietnam renamed Saigon as Hồ Chí Minh City in honor of Hồ, who died in 1969. The war left Vietnam devastated, with the total death toll standing at between 966,000 and 3.8 million, and many thousands more crippled by weapons and substances such as napalm and Agent Orange. The government of Vietnam says that 4 million of its citizens were exposed to Agent Orange, and as many as 3 million have suffered illnesses because of it; these figures include the children of people who were exposed. The Red Cross of Vietnam estimates that up to 1 million people are disabled or have health problems due to contaminated Agent Orange. The United States government has challenged these figures as being unreliable.
In the post-1975 period, it was immediately apparent that the effectiveness of Communist Party (CPV) policies did not necessarily extend to the party 's peacetime nation - building plans. Having unified North and South politically, the CPV still had to integrate them socially and economically. In this task, CPV policy makers were confronted with the South 's resistance to communist transformation, as well as traditional animosities arising from cultural and historical differences between North and South. In the aftermath of the war, under Lê Duẩn 's administration, there were no mass executions of South Vietnamese who had collaborated with the U.S. or the Saigon government, confounding Western fears. However, up to 300,000 South Vietnamese were sent to reeducation camps, where many endured torture, starvation, and disease while being forced to perform hard labor. The New Economic Zones program was implemented by the Vietnamese communist government after the Fall of Saigon. Between 1975 and 1980, more than 1 million northerners migrated to the south and central regions formerly under the Republic of Vietnam. This program, in turn, displaced around 750,000 to over 1 million Southerners from their homes and forcibly relocated them to uninhabited mountainous forested areas.
Compounding economic difficulties were new military challenges. In the late 1970s, Cambodia under the Khmer Rouge regime started harassing and raiding Vietnamese villages at the common border. To neutralize the threat, PAVN invaded Cambodia in 1978 and overran its capital of Phnom Penh, driving out the incumbent Khmer Rouge regime. In response, as an action to support the pro-Beijing Khmer Rouge regime, China increased its pressure on Vietnam, and sent troops into Northern Vietnam in 1979 to "punish '' Vietnam. Relations between the two countries had been deteriorating for some time. Territorial disagreements along the border and in the South China Sea that had remained dormant during the Vietnam War were revived at the war 's end, and a postwar campaign engineered by Hanoi against the ethnic Chinese Hoa community elicited a strong protest from Beijing. China was displeased with Vietnam 's alliance with the Soviet Union. During its prolonged military occupation of Cambodia in 1979 -- 89, Vietnam 's international isolation extended to relations with the United States. The United States, in addition to citing Vietnam 's minimal cooperation in accounting for Americans who were missing in action (MIAs) as an obstacle to normal relations, barred normal ties as long as Vietnamese troops occupied Cambodia. Washington also continued to enforce the trade embargo imposed on Hanoi at the conclusion of the war in 1975.
The harsh postwar crackdown on remnants of capitalism in the South led to the collapse of the economy during the 1980s. With the economy in shambles, the communist government altered its course and adopted consensus policies that bridged the divergent views of pragmatists and communist traditionalists. Throughout the 1980s, Vietnam received nearly $3 billion a year in economic and military aid from the Soviet Union and conducted most of its trade with the USSR and other Comecon countries. In 1986, Nguyễn Văn Linh, who was elevated to CPV general secretary the following year, launched a campaign for political and economic renewal (Đổi Mới). His policies were characterized by political and economic experimentation similar to the reform agenda simultaneously being undertaken in the Soviet Union. Reflecting the spirit of political compromise, Vietnam phased out its reeducation effort. The communist government stopped promoting agricultural and industrial cooperatives. Farmers were permitted to till private plots alongside state - owned land, and in 1990 the communist government passed a law encouraging the establishment of private businesses.
President Bill Clinton 's visit to Vietnam in 2000 effectively marked the start of a new era for Vietnam. The country has since become an increasingly attractive destination for economic development and has played a more significant role on the world stage. Its economic reforms have transformed Vietnam and made it more relevant within ASEAN and internationally, and its growing importance has led many powers to cultivate favorable relations with it.
However, Vietnam also faces disputes, mostly with Cambodia over their shared border and, especially, with China over the South China Sea. In 2016, President Barack Obama became the third U.S. head of state to visit Vietnam, helping raise relations to a higher level by lifting the embargo on lethal weapons, allowing Vietnam to buy lethal weapons and modernize its military.
Vietnam is expected to become a newly industrialized country, and also a regional power, in the future. It is one of the Next Eleven countries.
For most of its history, the geographical boundary of present - day Vietnam covered three ethnically distinct states: a Vietnamese state, a Cham state, and a part of the Khmer Empire. The Vietnamese nation originated in the Red River Delta in present - day Northern Vietnam and expanded over its history to the current boundary. It went through many name changes, with Văn Lang being used the longest. Below is a summary of names:
Except for the Hồng Bàng and Tây Sơn dynasties, all Vietnamese dynasties are named after the king 's family name, unlike the Chinese dynasties, whose names were chosen by the dynasty founders and often used as the country 's name. Nguyễn Huệ 's "Tây Sơn dynasty '' is rather a name created by historians to avoid confusion with Nguyễn Ánh 's Nguyễn dynasty.
The historian Professor Liam Kelley of the University of Hawaii at Manoa, writing on his Le Minh Khai 's SEAsian History Blog, described how Vietnamese ultra-nationalists misleadingly reinterpreted outdated theories by Western geography professors in order to further a Vietnamese nationalist agenda, claiming that the Vietnamese invented rice cultivation and were therefore responsible for civilization while the Chinese were pastoralists. The outdated theory has since been disproven, as rice cultivation was found not to have originated in Southeast Asia, and the Vietnamese interpretations of the original theories were wrong. Vietnamese ultra-nationalists have also laid claim to the Yijing.
Professor Liam Kelley criticized the theory of Edouard Chavannes that southeastern China was the origin of the Vietnamese before they ended up in their current location.
Vietnam claims that Phong Châu 峯州 was the capital of the Hùng Kings.
Professor Liam Kelley argued that the Tran dynasty constructed Âu Lạc as a way of connecting Vietnam with their homeland of Fujian.
The bronze drums of Đông Sơn, which date back to far before the advent of native Vietnamese historical records, were never regarded by the Vietnamese as a "symbol '' of Vietnam before the modern era. Production of bronze drums in Vietnam stopped after the Trung Sisters.
Đại Việt Sử Ký Toàn Thư copied the mythical accounts of the Huayang guozhi 華陽 國 志. Liam Kelley disproved the notion that the Phong Châu 峯 州 was the capital of the Hùng kings.
Michael Churchman criticized modern historians for falsely projecting modern - day animosity between Vietnamese and Chinese onto the history of Vietnam under Chinese rule, portraying past people as "freedom fighters '' or oppressors in a made - up narrative of resistance when no such ethnic boundary existed. Professor Liam Kelley criticized O.W. Wolters for doing this. According to Kelley, pre-1500s Vietnam had Confucianism as an integral component, and Confucianism influenced traditional education in Vietnam. Kelley wrote "Confucianism in Vietnam: A State of the Field Essay '', Woodside wrote Lost Modernities, and John D. Phan wrote "Annam: A New Analysis of Sino - Viet - Muong Linguistic Contact ''.
According to Professor Liam Kelley, during the Tang dynasty native spirits were subsumed into Daoism, and the Daoist view of these spirits completely replaced the original native tales. Buddhist and Daoist narratives replaced the native narratives surrounding Mount Yên Tử 安子 山.
People from Song dynasty China like Zhao Zhong and Xu Zongdao fled to Tran dynasty - ruled Vietnam after the Mongol invasion of the Song. The Tran dynasty originated from the Fujian region of China, as did the Daoist cleric Xu Zongdao, who recorded the Mongol invasion and referred to the Mongols as "Northern bandits ''.
Wu Bozong 吳伯宗 (b. 1334 - d. 1384) was sent as ambassador to Annam and recorded in the Rongjinji 榮 進 集 that, in reply to his inquiry about Annam 's affairs, the Tran dynasty monarch said that Annam proudly adhered to Tang dynasty and Han dynasty customs.
欲 問 安南 事, 安南 風俗 淳. 衣冠 唐 制度, 禮 樂 漢 君臣. 玉 甕 開 新酒, 金 刀 斫 細 鱗. 年 年 二 三 月, 桃李 一般 春.
Dục vấn An Nam sự, An Nam phong tục thuần. Y quan Đường chế độ, Lễ nhạc Hán quân thần. Ngọc ủng khai tân tửu, Kim đao chước tế lân. Niên niên nhị tam nguyệt, Đào lý nhất ban xuân.
The Ming dynasty included the monarchs of the Ly and Tran dynasties in its list of "important people '' of Annam.
Professor Liam Kelley (Le Minh Khai) suggested that the "north '' in the Bình Ngô đại cáo referred to the Ming collaborationist Hanoi scholars while the "south '' referred to Thanh Hóa, the base of Lê Lợi, since the text referred to "Dai Viet '' and did not introduce China before mentioning the north. He cited John Whitmore and challenged the claim that "Ngô '' referred to Ming dynasty China, arguing that it instead referred to the Chinese - settled Red River Delta area of Vietnam. It was the English and French translations that bowdlerized "south '' into "Vietnam '' and "north '' into "China '', even though people today have no true idea of what south and north referred to in the original text. He believes that it was the Ming collaborationist scholars of Hanoi who were referred to as the "Ngô '', that the term was not used for the Chinese as is currently thought in Vietnam, and that the Bình Ngô đại cáo was not directed at China. In the 20th century, the development of the new genre of "resistance literature '' for propaganda purposes against French colonialism spurred a change in how the Bình Ngô đại cáo was viewed. Kelley suggested that the Bình Ngô đại cáo drew on a previous Ming text, and that north and south in it may have referred to internal divisions in Vietnam (Hanoi versus Thanh Hóa) rather than to China versus Vietnam. The Hồ dynasty 's rule and the Vietnamese who worked with the Ming were attacked in the Bình Ngô đại cáo by Lê Lợi; the text criticized a people called "Ngô '' in Vietnam rather than the Ming Chinese, and said that Song dynasty clothing was worn by the Tran and the Ming while criticizing the Mongol Yuan customs followed by the "Ngô ''.
The Đại Việt sử ký toàn thư contained a constructed genealogy tracing back the political legitimacy of Vietnam 's rulers to the Chinese Emperor Shennong similar to how the Northern Wei traced the legitimacy of the Tuoba to the Yellow Emperor. Đại Việt sử ký toàn thư traced the ancestry of the Hùng kings to Consort Âu and Lord Lạc Long who had 100 sons from an egg sac.
The purpose of tracing back to Shennong was to claim that the length of Vietnam 's history rivaled China 's.
In the 17th century, Vietnamese historians like Ngô Thì Sĩ and Jesuits like Martinio Martini studied texts on the Hồng Bàng Dynasty, such as the Đại Việt sử ký toàn thư, and used mathematics to deduce that the information on it was nonsense, given the impossible reign years of the monarchs. However, modern Vietnamese now believe that the information is true. Ngô Thì Sĩ used critical analysis of historical texts to question the relations between Zhao Tuo 's Nanyue Kingdom in Guangdong and the Vietnamese - inhabited Red River Delta, concluding that the Red River Delta was a mere vassal to Nanyue and not an integral part of it, in addition to criticizing the existence of the Hồng Bàng Dynasty.
Modern Vietnamese nationalists seek to stress local Vietnamese influence in history and downplay the foreign origins of some monarchs, such as the fact that the family of the Tran dynasty rulers originated in China. Vietnamese historians have sought to construct a fantasy of a continuous succession of local political units in Vietnam since the Hung Kings. Vietnamese scholars and historians have debated whether to regard Zhao Tuo as part of the "orthodox succession '' of rulers or as an "enemy invader ''.
Professor Liam Kelley suggested that before Chinese rule the Red River Delta was not under a unified polity.
Both Chinese and Vietnamese sovereigns were honored at a temple constructed by the Nguyen dynasty.
The Nguyen Emperor Minh Mang sinicized ethnic minorities such as Cambodians, claimed the legacy of Confucianism and China 's Han dynasty for Vietnam, and used the term Han people 漢人 to refer to the Vietnamese. Minh Mang declared that "We must hope that their barbarian habits will be subconsciously dissipated, and that they will daily become more infected by Han (Sino - Vietnamese) customs. '' These policies were directed at the Khmer and hill tribes. The Nguyen lord Nguyen Phuc Chu had referred to the Vietnamese as "Han people '' in 1712 when differentiating between Vietnamese and Chams.
Minh Mang used the name "Trung Quốc '' 中國 to refer to Vietnam. Vietnam also referred to itself as Trung Hạ 中 夏.
Chinese clothing was forced on Vietnamese people by the Nguyễn.
Modern Vietnamese have retroactively labelled figures like Trần Ích Tắc as "traitor '' to Annam, even though the word for traitor did not exist in Vietnamese during his time and Vietnamese histories like Đại Việt sử ký toàn thư do not refer to him as a traitor.
South Vietnam retained elements of Chinese culture and grammar in its language, while North Vietnam actively campaigned to remove them even though it maintained a pro-China position; it was the Cultural Revolution that led North Vietnam to encourage anti-China sentiment.
Many anti-Vietnam war protesters bought into a narrative that Vietnam 's history consisted of 2,000 years of Chinese invasion and that Vietnam was a united country.
Before modern times, scholars in Vietnam sought to copy China 's civilization, which they perceived as more civilized, but after the French introduced nationalism, Vietnam sought to present itself in a different light, as a civilizational rival.
A Vietnamese forger manufactured a fake ancient mythical script claimed to have been used in ancient Vietnam. Modern Vietnamese historians inserted word changes and altered the meanings of texts written by ancient Vietnamese historians about how battles between rebels in Vietnam and Chinese states such as the Chen dynasty and Southern Han were viewed. The Nguyễn Dynasty initiated government - sponsored ceremonies to the Hùng kings; the French may have established the ceremony marking the Hùng kings ' death, and Hồ Chí Minh established an annual event for the Hùng Kings. Due to psychological embarrassment over their rule by foreign imperialists, ancient historical texts were edited for nationalistic purposes by modern Vietnamese historians.
In the Mekong Delta area of Cochinchina many Vietnamese and Chinese conducted illegal commercial activities. During the rule of the Chinese Kingdom of Eastern Wu over Vietnam the local people learned Chinese after Chinese people were moved down to live with them.
John D. Phan has suggested a new analysis of the linguistic situation in Vietnam under Chinese rule, proposing that a Middle Chinese dialect was spoken by the people of the Red River Delta during the Tang dynasty. He draws on Sino - Vietnamese vocabulary showing evidence that it was derived from an existing spoken language, and argues that this Middle Chinese dialect was later displaced by a Muong language influenced by Chinese.
|
who is the present home minister of uttar pradesh | Rajnath Singh - wikipedia
Rajnath Ram Badan Singh (born 10 July 1951) is an Indian politician belonging to the Bharatiya Janata Party (BJP) who currently serves as the Home Minister. He previously served as the Chief Minister of Uttar Pradesh and as a Cabinet Minister in the Vajpayee Government. He has also served as the President of the BJP twice, 2005 to 2009 and 2013 to 2014. He began his career as a physics lecturer and used his long - term association with the Rashtriya Swayamsevak Sangh (RSS) to become involved with the Janata Party.
Singh was born in the small village of Bhabhaura in Chandauli district, Uttar Pradesh, in a Hindu family. His father was Ram Badan Singh and his mother was Gujarati Devi. He was born into a family of farmers and went on to secure a master 's degree in physics with first division results from Gorakhpur University. Rajnath Singh had been associated with the Rashtriya Swayamsevak Sangh since 1964, at the age of 13, and remained connected with the organisation even during his employment as a physics lecturer in Mirzapur. In 1974, he was appointed secretary of the Mirzapur unit of the Bharatiya Jan Sangh, predecessor of the Bharatiya Janata Party. He also worked at the PWD as a junior engineer.
In 1975, aged 24, Singh was appointed District President of the Jana Sangh. In 1977, he was elected Member of the Legislative Assembly from Mirzapur. He became the State President of the BJP youth wing in 1984, the National General Secretary in 1986 and the National President in 1988. He was also elected into the Uttar Pradesh Legislative Council.
In 1991, he became Education Minister in the first BJP government in the state of Uttar Pradesh. Major highlights of his tenure as Education Minister included the Anti-Copying Act, 1992, which made copying a non-bailable offence, the rewriting of history texts, and the incorporation of vedic mathematics into the syllabus. In April 1994, he was elected to the Rajya Sabha (the Upper House of Parliament) and became involved with the Advisory Committee on Industry (1994 -- 96), the Consultative Committee for the Ministry of Agriculture, the Business Advisory Committee, the House Committee and the Committee on Human Resource Development. On 25 March 1997, he became the President of the BJP 's unit in Uttar Pradesh, and in 1999 he became the Union Cabinet Minister for Surface Transport.
In 2000, he became Chief Minister of Uttar Pradesh and was twice elected as MLA from Haidergarh, in 2001 and 2002. He tried to rationalise the reservation structure in government jobs by introducing the Most Backward Classes category among the OBC and SC, so that the benefits of reservation could reach the lowest strata of society.
In 2003, Singh was appointed as the Minister of Agriculture and subsequently for Food Processing in the NDA Government led by Atal Bihari Vajpayee, and was faced with the difficult task of maintaining one of the most volatile areas of India 's economy. During this period he initiated several significant projects, including the Kisan Call Centre and the Farm Income Insurance Scheme, brought down interest rates on agricultural loans, and established a Farmer Commission.
After the BJP lost power in the 2004 general elections, it was forced to sit in the Opposition. After the resignation of prominent figure Lal Krishna Advani, and the murder of strategist Pramod Mahajan, Singh sought to rebuild the party by focusing on the most basic Hindutva ideologies. He announced his position of "no compromise '' in relation to the building of a Ram Temple in Ayodhya at any cost and commended the rule of Vajpayee as Prime Minister, pointing towards all the developments the NDA made for the ordinary people of India. He also criticised the role of the English language in India, claiming that it caused erosion of cultural values.
He became the BJP National President on December 31, 2005, a post he held till December 19, 2009. In May 2009, he was elected MP from Ghaziabad in Uttar Pradesh.
On 24 January 2013, following the resignation of Nitin Gadkari due to corruption charges, Singh was re-elected as the BJP 's National President.
Shortly after Section 377 of the Indian Penal Code was reinstated in 2013, Singh went on record claiming that his party is "unambiguously '' in favour of the law, stating: "We will state (at an all - party meeting if it is called) that we support Section 377 because we believe that homosexuality is an unnatural act and can not be supported. ''
He contested the 2014 Lok Sabha elections from Lucknow constituency and was subsequently elected as a Member of the Parliament.
In 2014, Rajnath said that 75 per cent of the people in India either speak or know Hindi.
He was appointed the Union Minister of Home Affairs in the Narendra Modi government and was sworn in on 26 May 2014.
He triggered controversy amid the protests over the police action at Jawaharlal Nehru University (JNU), on 14 February 2016, claiming that the "JNU incident '' was supported by Lashkar - e-Taiba chief Hafiz Saeed.
In May 2016, he claimed that infiltration from Pakistan declined by 52 % in a period of two years.
On 9 April 2017, he launched the Bharat Ke Veer web portal and application with Bollywood actor Akshay Kumar. This was an initiative he took for the welfare of martyrs ' families.
An official anthem for the ' Bharat Ke Veer ' cause was launched by him on 20 January 2018, along with film star Akshay Kumar and the ministers Kiren Rijiju and Hansraj Ahir.
On 21 May 2018, as Union Home Minister, he commissioned the Bastariya Battalion, attending the passing - out parade of the CRPF 's 241 Bastariya Battalion in Ambikapur, Chhattisgarh.
|
based on the triple bottom line what is the main idea of sustainability | Triple bottom line - wikipedia
Triple bottom line (or otherwise noted as TBL or 3BL) is an accounting framework with three parts: social, environmental (or ecological) and financial. Some organizations have adopted the TBL framework to evaluate their performance in a broader perspective to create greater business value. The term was coined by John Elkington in 1994.
In traditional business accounting and common usage, the "bottom line '' refers to either the "profit '' or "loss '', which is usually recorded at the very bottom line on a statement of revenue and expenses. Over the last 50 years, environmentalists and social justice advocates have struggled to bring a broader definition of bottom line into public consciousness by introducing full cost accounting. For example, if a corporation shows a monetary profit, but their asbestos mine causes thousands of deaths from asbestosis, and their copper mine pollutes a river, and the government ends up spending taxpayer money on health care and river clean - up, how do we perform a full societal cost benefit analysis? The triple bottom line adds two more "bottom lines '': social and environmental (ecological) concerns. With the ratification of the United Nations and ICLEI TBL standard for urban and community accounting in early 2007, this became the dominant approach to public sector full cost accounting. Similar UN standards apply to natural capital and human capital measurement to assist in measurements required by TBL, e.g. the EcoBudget standard for reporting ecological footprint. The TBL seems to be fairly widespread in South African media, as found in a 1990 -- 2008 study of worldwide national newspapers.
An example of an organization seeking a triple bottom line would be a social enterprise run as a non-profit, but earning income by offering opportunities for handicapped people who have been labelled "unemployable '', to earn a living by recycling. The organization earns a profit, which is controlled by a volunteer Board, and ploughed back into the community. The social benefit is the meaningful employment of disadvantaged citizens, and the reduction in the society 's welfare or disability costs. The environmental benefit comes from the recycling accomplished. In the private sector, a commitment to corporate social responsibility (CSR) implies a commitment to transparent reporting about the business ' material impact for good on the environment and people. Triple bottom line is one framework for reporting this material impact. This is distinct from the more limited changes required to deal only with ecological issues. The triple bottom line has also been extended to encompass four pillars, known as the quadruple bottom line (QBL). The fourth pillar denotes a future - oriented approach (future generations, intergenerational equity, etc.). It is a long - term outlook that sets sustainable development and sustainability concerns apart from previous social, environmental, and economic considerations.
The challenges of putting the TBL into practice relate to the measurement of social and ecological categories. Despite this, the TBL framework enables organizations to take a longer - term perspective and thus evaluate the future consequences of decisions.
Sustainable development was defined by the Brundtland Commission of the United Nations in 1987. Triple bottom line (TBL) accounting expands the traditional reporting framework to take into account social and environmental performance in addition to financial performance. In 1981, Freer Spreckley first articulated the triple bottom line in a publication called ' Social Audit - A Management Tool for Co-operative Working '. In this work, he argued that enterprises should measure and report on financial performance, social wealth creation, and environmental responsibility.
The phrase "triple bottom line '' was articulated more fully by John Elkington in his 1997 book Cannibals with Forks: the Triple Bottom Line of 21st Century Business. A Triple Bottom Line Investing group advocating and publicizing these principles was founded in 1998 by Robert J. Rubinstein.
For reporting their efforts companies may demonstrate their commitment to corporate social responsibility (CSR) through the following:
The concept of TBL demands that a company 's responsibility lies with stakeholders rather than shareholders. In this case, "stakeholders '' refers to anyone who is influenced, either directly or indirectly, by the actions of the firm. Examples of stakeholders include employees, customers, suppliers, local residents, government agencies, and creditors. According to the stakeholder theory, the business entity should be used as a vehicle for coordinating stakeholder interests, instead of maximizing shareholder (owner) profit. A growing number of financial institutions incorporate a triple bottom line approach in their work. It is at the core of the business of banks in the Global Alliance for Banking on Values, for example.
The Detroit - based Avalon International Breads interprets the triple bottom line as consisting of "Earth '', "Community '', and "Employees ''.
The triple bottom line consists of social equity, economic, and environmental factors. The phrase "people, planet, and profit '', used to describe the triple bottom line and the goal of sustainability, was coined by John Elkington in 1994 while he was at SustainAbility, and was later used as the title of the Anglo - Dutch oil company Shell 's first sustainability report in 1997. As a result, one country in which the 3P concept took deep root was the Netherlands.
The people, social equity, or human capital bottom line pertains to fair and beneficial business practices toward labour and the community and region in which a corporation conducts its business. A TBL company conceives a reciprocal social structure in which the well - being of corporate, labour and other stakeholder interests are interdependent.
An enterprise dedicated to the triple bottom line seeks to provide benefit to many constituencies and not to exploit or endanger any group of them. The "upstreaming '' of a portion of profit from the marketing of finished goods back to the original producer of raw materials, for example, a farmer in fair trade agricultural practice, is a common feature. In concrete terms, a TBL business would not use child labour and monitor all contracted companies for child labour exploitation, would pay fair salaries to its workers, would maintain a safe work environment and tolerable working hours, and would not otherwise exploit a community or its labour force. A TBL business also typically seeks to "give back '' by contributing to the strength and growth of its community with such things as health care and education. Quantifying this bottom line is relatively new, problematic and often subjective. The Global Reporting Initiative (GRI) has developed guidelines to enable corporations and NGOs alike to comparably report on the social impact of a business.
The planet, environmental bottom line, or natural capital bottom line refers to sustainable environmental practices. A TBL company endeavors to benefit the natural order as much as possible or at the least do no harm and minimise environmental impact. A TBL endeavour reduces its ecological footprint by, among other things, carefully managing its consumption of energy and non-renewables and reducing manufacturing waste as well as rendering waste less toxic before disposing of it in a safe and legal manner. "Cradle to grave '' is uppermost in the thoughts of TBL manufacturing businesses, which typically conduct a life cycle assessment of products to determine what the true environmental cost is from the growth and harvesting of raw materials to manufacture to distribution to eventual disposal by the end user.
Currently, the cost of disposing of non-degradable or toxic products is borne financially by governments and environmentally by the residents near the disposal site and elsewhere. In TBL thinking, an enterprise which produces and markets a product which will create a waste problem should not be given a free ride by society. It would be more equitable for the business which manufactures and sells a problematic product to bear part of the cost of its ultimate disposal.
Ecologically destructive practices, such as overfishing or other endangering depletions of resources are avoided by TBL companies. Often environmental sustainability is the more profitable course for a business in the long run. Arguments that it costs more to be environmentally sound are often specious when the course of the business is analyzed over a period of time. Generally, sustainability reporting metrics are better quantified and standardized for environmental issues than for social ones. A number of respected reporting institutes and registries exist including the Global Reporting Initiative, CERES, Institute 4 Sustainability and others.
The ecological bottom line is akin to the concept of eco-capitalism.
The profit or economic bottom line deals with the economic value created by the organization after deducting the cost of all inputs, including the cost of the capital tied up. It therefore differs from traditional accounting definitions of profit. In the original concept, within a sustainability framework, the "profit '' aspect needs to be seen as the real economic benefit enjoyed by the host society. It is the real economic impact the organization has on its economic environment. This is often mistakenly reduced to the internal profit made by a company or organization (which nevertheless remains an essential starting point for the computation). Therefore, an original TBL approach can not be interpreted as simply traditional corporate accounting profit plus social and environmental impacts unless the "profits '' of other entities are included as a social benefit.
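To make the distinction from ordinary accounting profit concrete, the sketch below works through a hypothetical example in which a charge for the capital tied up is deducted from accounting profit; all figures and the 8 % cost - of - capital rate are invented for illustration, not taken from any real organization.

```python
# Hypothetical figures illustrating the difference between accounting profit
# and an "economic" bottom line that also charges for the capital tied up.

revenue = 500_000.0
operating_costs = 420_000.0      # cost of all inputs other than capital
capital_employed = 600_000.0     # capital tied up in the business
cost_of_capital_rate = 0.08      # assumed required return on that capital

accounting_profit = revenue - operating_costs                 # 80,000
capital_charge = capital_employed * cost_of_capital_rate      # 48,000
economic_profit = accounting_profit - capital_charge          # 32,000

print(accounting_profit, capital_charge, economic_profit)
```

On these invented numbers the organization shows an 80,000 accounting profit but only a 32,000 economic profit once the cost of its capital is counted, which is the sense in which the economic bottom line differs from the traditional one.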
Following the initial publication of the triple bottom line concept, students and practitioners have sought greater detail in how the pillars can be evaluated.
The people concept for example can be viewed in three dimensions -- organisational needs, individual needs, and community issues.
Equally, profit is a function of a healthy sales stream, which needs a high focus on customer service, coupled with the adoption of a strategy to develop new customers to replace those that die away.
And planet can be divided into a multitude of subdivisions, although reduce, reuse and recycle is a succinct way of steering through this division.
The following business - based arguments support the concept of TBL:
Fiscal policy of governments usually claims to be concerned with identifying social and natural deficits on a less formal basis. However, such choices may be guided more by ideology than by economics. The primary benefit of embedding one approach to measurement of these deficits would be first to direct monetary policy to reduce them, and eventually achieve a global monetary reform by which they could be systematically and globally reduced in some uniform way.
The argument is that the Earth 's carrying capacity is at risk, and that in order to avoid catastrophic breakdown of climate or ecosystems, there is need for comprehensive reform of global financial institutions similar in scale to what was undertaken at Bretton Woods in 1944.
With the emergence of an externally consistent green economics and agreement on definitions of potentially contentious terms such as full - cost accounting, natural capital and social capital, the prospect of formal metrics for ecological and social loss or risk has grown less remote since the 1990s.
In the United Kingdom in particular, the London Health Observatory has undertaken a formal programme to address social deficits via a fuller understanding of what "social capital '' is, how it functions in a real community (that being the City of London), and how losses of it tend to require both financial capital and significant political and social attention from volunteers and professionals to help resolve. The data they rely on is extensive, building on decades of statistics of the Greater London Council since World War II. Similar studies have been undertaken in North America.
Studies of the value of Earth have tried to determine what might constitute an ecological or natural life deficit. The Kyoto Protocol relies on some measures of this sort, and actually relies on some value of life calculations that, among other things, are explicit about the ratio of the price of a human life between developed and developing nations (about 15 to 1). While the motive of this number was to simply assign responsibility for a cleanup, such stark honesty opens not just an economic but political door to some kind of negotiation -- presumably to reduce that ratio in time to something seen as more equitable. As it is, people in developed nations can be said to benefit 15 times more from ecological devastation than in developing nations, in pure financial terms. According to the IPCC, they are thus obliged to pay 15 times more per life to avoid a loss of each such life to climate change -- the Kyoto Protocol seeks to implement exactly this formula, and is therefore sometimes cited as a first step towards getting nations to accept formal liability for damage inflicted on ecosystems shared globally.
Advocacy for triple bottom line reforms is common in Green Parties. Some of the measures undertaken in the European Union towards the Euro currency integration standardize the reporting of ecological and social losses in such a way as to seem to endorse in principle the notion of unified accounts, or unit of account, for these deficits.
To address financial bottom line profitability concerns, some argue that focusing on the TBL will indeed increase profit for the shareholders in the long run. In practice, John Mackey, CEO of Whole Foods, uses Whole Foods 's Community Giving Days as an example. On days when Whole Foods donates 5 % of their sales to charity, this action benefits the community, creates goodwill with customers, and energizes employees -- which may lead to increased, sustainable profitability in the long - run.
While many people agree with the importance of good social conditions and preservation of the environment, there are also many who disagree with the triple bottom line as the way to enhance these conditions. The following are the reasons why:
A focus on people, planet and profit has led to legislation changes around the world, often through social enterprise or social investment or through the introduction of a new legal form, the Community Interest Company. In the United States, the BCorp movement has been part of a call for legislation change to allow and encourage a focus on social and environmental impact, with BCorp a legal form for a company focused on "stakeholders, not just shareholders ''.
In Australia, the triple bottom line was adopted as a part of the State Sustainability Strategy, and accepted by the Government of Western Australia but its status was increasingly marginalised by subsequent premiers Alan Carpenter and Colin Barnett and is in doubt.
|
which best describes the benefits of having insurance | Insurance - Wikipedia
Insurance is a means of protection from financial loss. It is a form of risk management, primarily used to hedge against the risk of a contingent or uncertain loss.
An entity which provides insurance is known as an insurer, insurance company, insurance carrier or underwriter. A person or entity who buys insurance is known as an insured or as a policyholder. The insurance transaction involves the insured assuming a guaranteed and known relatively small loss in the form of payment to the insurer in exchange for the insurer 's promise to compensate the insured in the event of a covered loss. The loss may or may not be financial, but it must be reducible to financial terms, and usually involves something in which the insured has an insurable interest established by ownership, possession, or preexisting relationship.
The insured receives a contract, called the insurance policy, which details the conditions and circumstances under which the insurer will compensate the insured. The amount of money charged by the insurer to the insured for the coverage set forth in the insurance policy is called the premium. If the insured experiences a loss which is potentially covered by the insurance policy, the insured submits a claim to the insurer for processing by a claims adjuster. The insurer may hedge its own risk by taking out reinsurance, whereby another insurance company agrees to carry some of the risk, especially if the primary insurer deems the risk too large for it to carry.
Methods for transferring or distributing risk were practiced by Chinese and Babylonian traders as long ago as the 3rd and 2nd millennia BC, respectively. Chinese merchants travelling treacherous river rapids would redistribute their wares across many vessels to limit the loss due to any single vessel 's capsizing. The Babylonians developed a system which was recorded in the famous Code of Hammurabi, c. 1750 BC, and practiced by early Mediterranean sailing merchants. If a merchant received a loan to fund his shipment, he would pay the lender an additional sum in exchange for the lender 's guarantee to cancel the loan should the shipment be stolen, or lost at sea.
Circa 800 BC, the inhabitants of Rhodes created the ' general average '. This allowed groups of merchants to pay to insure their goods being shipped together. The collected premiums would be used to reimburse any merchant whose goods were jettisoned during transport, whether due to storm or sinkage.
Separate insurance contracts (i.e., insurance policies not bundled with loans or other kinds of contracts) were invented in Genoa in the 14th century, as were insurance pools backed by pledges of landed estates. The first known insurance contract dates from Genoa in 1347, and in the next century maritime insurance developed widely and premiums were intuitively varied with risks. These new insurance contracts allowed insurance to be separated from investment, a separation of roles that first proved useful in marine insurance.
Insurance became far more sophisticated in Enlightenment era Europe, and specialized varieties developed.
Property insurance as we know it today can be traced to the Great Fire of London, which in 1666 devoured more than 13,000 houses. The devastating effects of the fire converted the development of insurance "from a matter of convenience into one of urgency, a change of opinion reflected in Sir Christopher Wren 's inclusion of a site for ' the Insurance Office ' in his new plan for London in 1667. '' A number of attempted fire insurance schemes came to nothing, but in 1681, economist Nicholas Barbon and eleven associates established the first fire insurance company, the "Insurance Office for Houses, '' at the back of the Royal Exchange to insure brick and frame homes. Initially, 5,000 homes were insured by his Insurance Office.
At the same time, the first insurance schemes for the underwriting of business ventures became available. By the end of the seventeenth century, London 's growing importance as a center for trade was increasing demand for marine insurance. In the late 1680s, Edward Lloyd opened a coffee house, which became the meeting place for parties in the shipping industry wishing to insure cargoes and ships, and those willing to underwrite such ventures. These informal beginnings led to the establishment of the insurance market Lloyd 's of London and several related shipping and insurance businesses.
The first life insurance policies were taken out in the early 18th century. The first company to offer life insurance was the Amicable Society for a Perpetual Assurance Office, founded in London in 1706 by William Talbot and Sir Thomas Allen. Edward Rowe Mores established the Society for Equitable Assurances on Lives and Survivorship in 1762.
It was the world 's first mutual insurer and it pioneered age - based premiums based on mortality rate, laying "the framework for scientific insurance practice and development '' and "the basis of modern life assurance upon which all life assurance schemes were subsequently based. ''
In the late 19th century "accident insurance '' began to become available. The first company to offer accident insurance was the Railway Passengers Assurance Company, formed in 1848 in England to insure against the rising number of fatalities on the nascent railway system.
By the late 19th century governments began to initiate national insurance programs against sickness and old age. Germany built on a tradition of welfare programs in Prussia and Saxony that began as early as in the 1840s. In the 1880s Chancellor Otto von Bismarck introduced old age pensions, accident insurance and medical care that formed the basis for Germany 's welfare state. In Britain more extensive legislation was introduced by the Liberal government in the 1911 National Insurance Act. This gave the British working classes the first contributory system of insurance against illness and unemployment. This system was greatly expanded after the Second World War under the influence of the Beveridge Report, to form the first modern welfare state.
Insurance involves pooling funds from many insured entities (known as exposures) to pay for the losses that some may incur. The insured entities are therefore protected from risk for a fee, with the fee being dependent upon the frequency and severity of the event occurring. In order to be an insurable risk, the risk insured against must meet certain characteristics. Insurance as a financial intermediary is a commercial enterprise and a major part of the financial services industry, but individual entities can also self - insure through saving money for possible future losses.
Risk which can be insured by private companies typically shares seven common characteristics:
When a company insures an individual entity, there are basic legal requirements and regulations. Several commonly cited legal principles of insurance include:
To "indemnify '' means to make whole again, or to be reinstated to the position that one was in, to the extent possible, prior to the happening of a specified event or peril. Accordingly, life insurance is generally not considered to be indemnity insurance, but rather "contingent '' insurance (i.e., a claim arises on the occurrence of a specified event). There are generally three types of insurance contracts that seek to indemnify an insured:
From an insured 's standpoint, the result is usually the same: the insurer pays the loss and claims expenses.
If the Insured has a "reimbursement '' policy, the insured can be required to pay for a loss and then be "reimbursed '' by the insurance carrier for the loss and out of pocket costs including, with the permission of the insurer, claim expenses.
Under a "pay on behalf '' policy, the insurance carrier would defend and pay a claim on behalf of the insured who would not be out of pocket for anything. Most modern liability insurance is written on the basis of "pay on behalf '' language which enables the insurance carrier to manage and control the claim.
Under an "indemnification '' policy, the insurance carrier can generally either "reimburse '' or "pay on behalf of '', whichever is more beneficial to it and the insured in the claim handling process.
An entity seeking to transfer risk (an individual, corporation, or association of any type, etc.) becomes the ' insured ' party once risk is assumed by an ' insurer ', the insuring party, by means of a contract, called an insurance policy. Generally, an insurance contract includes, at a minimum, the following elements: identification of participating parties (the insurer, the insured, the beneficiaries), the premium, the period of coverage, the particular loss event covered, the amount of coverage (i.e., the amount to be paid to the insured or beneficiary in the event of a loss), and exclusions (events not covered). An insured is thus said to be "indemnified '' against the loss covered in the policy.
When insured parties experience a loss for a specified peril, the coverage entitles the policyholder to make a claim against the insurer for the covered amount of loss as specified by the policy. The fee paid by the insured to the insurer for assuming the risk is called the premium. Insurance premiums from many insureds are used to fund accounts reserved for later payment of claims -- in theory for a relatively few claimants -- and for overhead costs. So long as an insurer maintains adequate funds set aside for anticipated losses (called reserves), the remaining margin is an insurer 's profit.
Insurance can have various effects on society through the way that it changes who bears the cost of losses and damage. On one hand it can increase fraud; on the other it can help societies and individuals prepare for catastrophes and mitigate the effects of catastrophes on both households and societies.
Insurance can influence the probability of losses through moral hazard, insurance fraud, and preventive steps by the insurance company. Insurance scholars have typically used moral hazard to refer to the increased loss due to unintentional carelessness and insurance fraud to refer to increased risk due to intentional carelessness or indifference. Insurers attempt to address carelessness through inspections, policy provisions requiring certain types of maintenance, and possible discounts for loss mitigation efforts. While in theory insurers could encourage investment in loss reduction, some commentators have argued that in practice insurers had historically not aggressively pursued loss control measures -- particularly to prevent disaster losses such as hurricanes -- because of concerns over rate reductions and legal battles. However, since about 1996 insurers have begun to take a more active role in loss mitigation, such as through building codes.
According to the study books of The Chartered Insurance Institute, there are variant methods of insurance as follows:
The business model is to collect more in premium and investment income than is paid out in losses, and to also offer a competitive price which consumers will accept. Profit can be reduced to a simple equation: profit = earned premium + investment income - incurred loss - underwriting expenses.
Insurers make money in two ways: through underwriting, the process by which insurers select the risks to insure and decide how much in premiums to charge for accepting those risks, and by investing the premiums they collect from insured parties.
The most complicated aspect of the insurance business is the actuarial science of ratemaking (price - setting) of policies, which uses statistics and probability to approximate the rate of future claims based on a given risk. After producing rates, the insurer will use discretion to reject or accept risks through the underwriting process.
At the most basic level, initial ratemaking involves looking at the frequency and severity of insured perils and the expected average payout resulting from these perils. Thereafter an insurance company will collect historical loss data, bring the loss data to present value, and compare these prior losses to the premium collected in order to assess rate adequacy. Loss ratios and expense loads are also used. Rating for different risk characteristics involves at the most basic level comparing the losses with "loss relativities '' -- a policy with twice as many losses would therefore be charged twice as much. More complex multivariate analyses are sometimes used when multiple characteristics are involved and a univariate analysis could produce confounded results. Other statistical methods may be used in assessing the probability of future losses.
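As a rough illustration of the univariate "loss relativity '' idea described above, the following sketch charges each rating class in proportion to its historical losses relative to a base class. The rating classes, base rate, and loss figures are invented for illustration and are not drawn from any actual rate filing.

```python
# Hypothetical example of univariate loss-relativity rating; the classes and
# figures are made up for illustration only.

base_rate = 400.0  # annual premium for the base rating class

# Average historical loss per policy, by rating class (invented data).
average_loss = {"base": 200.0, "class_a": 400.0, "class_b": 100.0}

# Relativity = class losses / base-class losses; a class with twice the
# losses gets twice the relativity, and therefore twice the premium.
relativities = {cls: loss / average_loss["base"] for cls, loss in average_loss.items()}

premiums = {cls: base_rate * rel for cls, rel in relativities.items()}

for cls, premium in premiums.items():
    print(cls, relativities[cls], premium)
# base 1.0 400.0, class_a 2.0 800.0, class_b 0.5 200.0
```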
Upon termination of a given policy, the amount of premium collected minus the amount paid out in claims is the insurer 's underwriting profit on that policy. Underwriting performance is measured by something called the "combined ratio '', which is the ratio of expenses / losses to premiums. A combined ratio of less than 100 % indicates an underwriting profit, while anything over 100 indicates an underwriting loss. A company with a combined ratio over 100 % may nevertheless remain profitable due to investment earnings.
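The sketch below ties these pieces together with made - up figures: it computes an underwriting result and a combined ratio, then shows how investment income on the float can leave an insurer profitable overall even when the combined ratio exceeds 100 %. The amounts are hypothetical and do not describe any real insurer.

```python
# Illustrative only: hypothetical figures for one policy year, not real insurer data.

earned_premium = 1_000_000.0       # premiums earned during the year
incurred_losses = 700_000.0        # claims attributed to the year
underwriting_expenses = 350_000.0  # commissions, overhead, claim handling
investment_income = 80_000.0       # income earned on the float

# Underwriting result: premium left over after losses and expenses.
underwriting_profit = earned_premium - incurred_losses - underwriting_expenses

# Combined ratio: (losses + expenses) / earned premium; over 100 % means an underwriting loss.
combined_ratio = (incurred_losses + underwriting_expenses) / earned_premium * 100

# Overall result adds investment income earned on the float.
overall_profit = underwriting_profit + investment_income

print(f"Combined ratio: {combined_ratio:.1f}%")             # 105.0% -> underwriting loss
print(f"Underwriting result: {underwriting_profit:,.0f}")   # -50,000
print(f"Overall result: {overall_profit:,.0f}")             # 30,000, profitable despite the ratio
```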
Insurance companies earn investment profits on "float ''. Float, or available reserve, is the amount of money on hand at any given moment that an insurer has collected in insurance premiums but has not paid out in claims. Insurers start investing insurance premiums as soon as they are collected and continue to earn interest or other income on them until claims are paid out. The Association of British Insurers (gathering 400 insurance companies and 94 % of UK insurance services) has almost 20 % of the investments in the London Stock Exchange.
In the United States, the underwriting loss of property and casualty insurance companies was $142.3 billion in the five years ending 2003. But overall profit for the same period was $68.4 billion, as the result of float. Some insurance industry insiders, most notably Hank Greenberg, do not believe that it is forever possible to sustain a profit from float without an underwriting profit as well, but this opinion is not universally held.
Naturally, the float method is difficult to carry out in an economically depressed period. Bear markets do cause insurers to shift away from investments and to toughen up their underwriting standards, so a poor economy generally means high insurance premiums. This tendency to swing between profitable and unprofitable periods over time is commonly known as the underwriting, or insurance, cycle.
Claims and loss handling is the materialized utility of insurance; it is the actual "product '' paid for. Claims may be filed by insureds directly with the insurer or through brokers or agents. The insurer may require that the claim be filed on its own proprietary forms, or may accept claims on a standard industry form, such as those produced by ACORD.
Insurance company claims departments employ a large number of claims adjusters supported by a staff of records management and data entry clerks. Incoming claims are classified based on severity and are assigned to adjusters whose settlement authority varies with their knowledge and experience. The adjuster undertakes an investigation of each claim, usually in close cooperation with the insured, determines if coverage is available under the terms of the insurance contract, and if so, the reasonable monetary value of the claim, and authorizes payment.
The policyholder may hire their own public adjuster to negotiate the settlement with the insurance company on their behalf. For policies that are complicated, where claims may be complex, the insured may take out a separate insurance policy add - on, called loss recovery insurance, which covers the cost of a public adjuster in the case of a claim.
Adjusting liability insurance claims is particularly difficult because there is a third party involved, the plaintiff, who is under no contractual obligation to cooperate with the insurer and may in fact regard the insurer as a deep pocket. The adjuster must obtain legal counsel for the insured (either inside "house '' counsel or outside "panel '' counsel), monitor litigation that may take years to complete, and appear in person or over the telephone with settlement authority at a mandatory settlement conference when requested by the judge.
If a claims adjuster suspects under - insurance, the condition of average may come into play to limit the insurance company 's exposure.
In managing the claims handling function, insurers seek to balance the elements of customer satisfaction, administrative handling expenses, and claims overpayment leakages. As part of this balancing act, fraudulent insurance practices are a major business risk that must be managed and overcome. Disputes between insurers and insureds over the validity of claims or claims handling practices occasionally escalate into litigation (see insurance bad faith).
Insurers will often use insurance agents to initially market or underwrite their customers. Agents can be captive, meaning they write only for one company, or independent, meaning that they can issue policies from several companies. The existence and success of companies using insurance agents is likely due to improved and personalized service. Companies also use Broking firms, Banks and other corporate entities (like Self Help Groups, Microfinance Institutions, NGOs etc.) to market their products.
Any risk that can be quantified can potentially be insured. Specific kinds of risk that may give rise to claims are known as perils. An insurance policy will set out in detail which perils are covered by the policy and which are not. Below are non-exhaustive lists of the many different types of insurance that exist. A single policy may cover risks in one or more of the categories set out below. For example, vehicle insurance would typically cover both the property risk (theft or damage to the vehicle) and the liability risk (legal claims arising from an accident). A home insurance policy in the United States typically includes coverage for damage to the home and the owner 's belongings, certain legal claims against the owner, and even a small amount of coverage for medical expenses of guests who are injured on the owner 's property.
Business insurance can take a number of different forms, such as the various kinds of professional liability insurance, also called professional indemnity (PI), which are discussed below under that name; and the business owner 's policy (BOP), which packages into one policy many of the kinds of coverage that a business owner needs, in a way analogous to how homeowners ' insurance packages the coverages that a homeowner needs.
Auto insurance protects the policyholder against financial loss in the event of an incident involving a vehicle they own, such as in a traffic collision.
Coverage typically includes:
Gap insurance covers the excess amount on an auto loan in an instance where the insurance company does not cover the entire loan. Depending on the company 's specific policies, it might or might not cover the deductible as well. This coverage is marketed to those who make low down payments, have high interest rates on their loans, or have 60 - month or longer terms. Gap insurance is typically offered by a finance company when the vehicle owner purchases their vehicle, but many auto insurance companies offer this coverage to consumers as well.
Health insurance policies cover the cost of medical treatments. Dental insurance, like medical insurance, protects policyholders for dental costs. In most developed countries, all citizens receive some health coverage from their governments, paid for by taxation. In most countries, health insurance is often part of an employer 's benefits.
Casualty insurance insures against accidents, not necessarily tied to any specific property. It is a broad spectrum of insurance that a number of other types of insurance could be classified, such as auto, workers compensation, and some liability insurances.
Life insurance provides a monetary benefit to a decedent 's family or other designated beneficiary, and may specifically provide for income to an insured person 's family, burial, funeral and other final expenses. Life insurance policies often allow the option of having the proceeds paid to the beneficiary either in a lump sum cash payment or an annuity. In most states, a person can not purchase a policy on another person without their knowledge.
Annuities provide a stream of payments and are generally classified as insurance because they are issued by insurance companies, are regulated as insurance, and require the same kinds of actuarial and investment management expertise that life insurance requires. Annuities and pensions that pay a benefit for life are sometimes regarded as insurance against the possibility that a retiree will outlive his or her financial resources. In that sense, they are the complement of life insurance and, from an underwriting perspective, are the mirror image of life insurance.
Certain life insurance contracts accumulate cash values, which may be taken by the insured if the policy is surrendered or which may be borrowed against. Some policies, such as annuities and endowment policies, are financial instruments to accumulate or liquidate wealth when it is needed.
In many countries, such as the United States and the UK, the tax law provides that the interest on this cash value is not taxable under certain circumstances. This leads to widespread use of life insurance as a tax - efficient method of saving as well as protection in the event of early death.
In the United States, the tax on interest income on life insurance policies and annuities is generally deferred. However, in some cases the benefit derived from tax deferral may be offset by a low return. This depends upon the insuring company, the type of policy and other variables (mortality, market return, etc.). Moreover, other income tax saving vehicles (e.g., IRAs, 401 (k) plans, Roth IRAs) may be better alternatives for value accumulation.
Burial insurance is a very old type of life insurance which is paid out upon death to cover final expenses, such as the cost of a funeral. The Greeks and Romans introduced burial insurance c. 600 CE when they organized guilds called "benevolent societies '' which cared for the surviving families and paid funeral expenses of members upon death. Guilds in the Middle Ages served a similar purpose, as did friendly societies during Victorian times.
Property insurance provides protection against risks to property, such as fire, theft or weather damage. This may include specialized forms of insurance such as fire insurance, flood insurance, earthquake insurance, home insurance, inland marine insurance or boiler insurance. The term property insurance may, like casualty insurance, be used as a broad category of various subtypes of insurance, some of which are listed below:
Liability insurance is a very broad superset that covers legal claims against the insured. Many types of insurance include an aspect of liability coverage. For example, a homeowner 's insurance policy will normally include liability coverage which protects the insured in the event of a claim brought by someone who slips and falls on the property; automobile insurance also includes an aspect of liability insurance that indemnifies against the harm that a crashing car can cause to others ' lives, health, or property. The protection offered by a liability insurance policy is twofold: a legal defense in the event of a lawsuit commenced against the policyholder and indemnification (payment on behalf of the insured) with respect to a settlement or court verdict. Liability policies typically cover only the negligence of the insured, and will not apply to results of wilful or intentional acts by the insured.
Often a commercial insured 's liability insurance program consists of several layers. The first layer of insurance generally consists of primary insurance, which provides first dollar indemnity for judgments and settlements up to the limits of liability of the primary policy. Generally, primary insurance is subject to a deductible and obligates the insurer to defend the insured against lawsuits, which is normally accomplished by assigning counsel to defend the insured. In many instances, a commercial insured may elect to self - insure. Above the primary insurance or self - insured retention, the insured may have one or more layers of excess insurance to provide additional limits of indemnity protection. There are a variety of types of excess insurance, including "stand - alone '' excess policies (policies that contain their own terms, conditions, and exclusions), "follow form '' excess insurance (policies that follow the terms of the underlying policy except as specifically provided), and "umbrella '' insurance policies (excess insurance that in some circumstances could provide coverage that is broader than the underlying insurance).
Credit insurance repays some or all of a loan when the borrower is insolvent.
Some communities prefer to create virtual insurance amongst themselves by other means than contractual risk transfer, which assigns explicit numerical values to risk. A number of religious groups, including the Amish and some Muslim groups, depend on support provided by their communities when disasters strike. The risk presented by any given person is assumed collectively by the community who all bear the cost of rebuilding lost property and supporting people whose needs are suddenly greater after a loss of some kind. In supportive communities where others can be trusted to follow community leaders, this tacit form of insurance can work. In this manner the community can even out the extreme differences in insurability that exist among its members. Some further justification is also provided by invoking the moral hazard of explicit insurance contracts.
In the United Kingdom, The Crown (which, for practical purposes, meant the civil service) did not insure property such as government buildings. If a government building was damaged, the cost of repair would be met from public funds because, in the long run, this was cheaper than paying insurance premiums. Since many UK government buildings have been sold to property companies, and rented back, this arrangement is now less common and may have disappeared altogether.
In the United States, the most prevalent form of self - insurance is governmental risk management pools. They are self - funded cooperatives, operating as carriers of coverage for the majority of governmental entities today, such as county governments, municipalities, and school districts. Rather than these entities independently self - insure and risk bankruptcy from a large judgment or catastrophic loss, such governmental entities form a risk pool. Such pools begin their operations by capitalization through member deposits or bond issuance. Coverage (such as general liability, auto liability, professional liability, workers compensation, and property) is offered by the pool to its members, similar to coverage offered by insurance companies. However, self - insured pools offer members lower rates (due to not needing insurance brokers), increased benefits (such as loss prevention services) and subject matter expertise. Of approximately 91,000 distinct governmental entities operating in the United States, 75,000 are members of self - insured pools in various lines of coverage, forming approximately 500 pools. Although a relatively small corner of the insurance market, the annual contributions (self - insured premiums) to such pools have been estimated up to 17 billion dollars annually.
Insurance companies may be classified into two groups: life insurance companies, which sell life insurance, annuities and pension products, and non-life or general insurance companies, which sell other types of insurance.
General insurance companies can be further divided into subcategories, such as standard lines and excess lines insurers.
In most countries, life and non-life insurers are subject to different regulatory regimes and different tax and accounting rules. The main reason for the distinction between the two types of company is that life, annuity, and pension business is very long - term in nature -- coverage for life assurance or a pension can cover risks over many decades. By contrast, non-life insurance cover usually covers a shorter period, such as one year.
Insurance companies are generally classified as either mutual or proprietary companies. Mutual companies are owned by the policyholders, while shareholders (who may or may not own policies) own proprietary insurance companies.
Demutualization of mutual insurers to form stock companies, as well as the formation of a hybrid known as a mutual holding company, became common in some countries, such as the United States, in the late 20th century. However, not all states permit mutual holding companies.
Other possible forms for an insurance company include reciprocals, in which policyholders reciprocate in sharing risks, and Lloyd 's organizations.
Insurance companies are rated by various agencies such as A. M. Best. The ratings include the company 's financial strength, which measures its ability to pay claims. It also rates financial instruments issued by the insurance company, such as bonds, notes, and securitization products.
Reinsurance companies are insurance companies that sell policies to other insurance companies, allowing them to reduce their risks and protect themselves from very large losses. The reinsurance market is dominated by a few very large companies with huge reserves. A reinsurer may also be a direct writer of insurance risks.
Captive insurance companies may be defined as limited - purpose insurance companies established with the specific objective of financing risks emanating from their parent group or groups. This definition can sometimes be extended to include some of the risks of the parent company 's customers. In short, it is an in - house self - insurance vehicle. Captives may take the form of a "pure '' entity (a 100 % subsidiary of the self - insured parent company), a "mutual '' captive (which insures the collective risks of members of an industry), or an "association '' captive (which self - insures individual risks of the members of a professional, commercial or industrial association). Captives offer their sponsors commercial, economic and tax advantages: they reduce costs, ease insurance risk management, and provide flexibility for cash flows. Additionally, they may provide coverage for risks that are neither available nor affordable in the traditional insurance market.
The types of risk that a captive can underwrite for their parents include property damage, public and product liability, professional indemnity, employee benefits, employers ' liability, motor and medical aid expenses. The captive 's exposure to such risks may be limited by the use of reinsurance.
Captives are becoming an increasingly important component of the risk management and risk financing strategy of their parent companies.
There are also companies known as "insurance consultants ''. Like a mortgage broker, these companies are paid a fee by the customer to shop around for the best insurance policy amongst many companies. Similar to an insurance consultant, an ' insurance broker ' also shops around for the best insurance policy amongst many companies. However, with insurance brokers, the fee is usually paid in the form of commission from the insurer that is selected rather than directly from the client.
Neither insurance consultants nor insurance brokers are insurance companies and no risks are transferred to them in insurance transactions. Third party administrators are companies that perform underwriting and sometimes claims handling services for insurance companies. These companies often have special expertise that the insurance companies do not have.
The financial stability and strength of an insurance company should be a major consideration when buying an insurance contract. An insurance premium paid currently provides coverage for losses that might arise many years in the future. For that reason, the viability of the insurance carrier is very important. In recent years, a number of insurance companies have become insolvent, leaving their policyholders with no coverage (or coverage only from a government - backed insurance pool or other arrangement with less attractive payouts for losses). A number of independent rating agencies provide information and rate the financial viability of insurance companies.
Global insurance premiums grew by 2.7 % in inflation - adjusted terms in 2010 to $4.3 trillion, climbing above pre-crisis levels. The return to growth and record premiums generated during the year followed two years of decline in real terms. Life insurance premiums increased by 3.2 % in 2010 and non-life premiums by 2.1 %. While industrialised countries saw an increase in premiums of around 1.4 %, insurance markets in emerging economies saw rapid expansion with 11 % growth in premium income. The global insurance industry was sufficiently capitalised to withstand the financial crisis of 2008 and 2009 and most insurance companies restored their capital to pre-crisis levels by the end of 2010. With the continuation of the gradual recovery of the global economy, it is likely the insurance industry will continue to see growth in premium income both in industrialised countries and emerging markets in 2011.
Advanced economies account for the bulk of global insurance. With premium income of $1.62 trillion, Europe was the most important region in 2010, followed by North America ($1.409 trillion) and Asia ($1.161 trillion). Europe, however, saw a decline in premium income during the year, in contrast to the growth seen in North America and Asia. The top four countries generated more than half of premiums. The United States and Japan alone accounted for 40 % of world insurance, much higher than their 7 % share of the global population. Emerging economies accounted for over 85 % of the world 's population but only around 15 % of premiums. Their markets are, however, growing at a quicker pace. The country expected to have the biggest impact on the insurance share distribution across the world is China. According to Sam Radwan of ENHANCE International LLC, given low premium penetration (insurance premium as a % of GDP), an ageing population and the largest car market in terms of new sales, premium growth in China has averaged 15 -- 20 % in the past five years, and China is expected to be the largest insurance market in the next decade or two.
In the United States, insurance is regulated by the states under the McCarran - Ferguson Act, with "periodic proposals for federal intervention '', and a nonprofit coalition of state insurance agencies called the National Association of Insurance Commissioners works to harmonize the country 's different laws and regulations. The National Conference of Insurance Legislators (NCOIL) also works to harmonize the different state laws.
In the European Union, the Third Non-Life Directive and the Third Life Directive, both passed in 1992 and effective 1994, created a single insurance market in Europe and allowed insurance companies to offer insurance anywhere in the EU (subject to permission from authority in the head office) and allowed insurance consumers to purchase insurance from any insurer in the EU. As for insurance in the United Kingdom, the Financial Services Authority took over insurance regulation from the General Insurance Standards Council in 2005; laws passed include the Insurance Companies Act 1973 and another in 1982, with reforms to warranty law and other aspects under discussion as of 2012.
The insurance industry in China was nationalized in 1949 and thereafter offered by only a single state - owned company, the People 's Insurance Company of China, which was eventually suspended as demand declined in a communist environment. In 1978, market reforms led to an increase in the market and by 1995 a comprehensive Insurance Law of the People 's Republic of China was passed, followed in 1998 by the formation of China Insurance Regulatory Commission (CIRC), which has broad regulatory authority over the insurance market of China.
In India, the Insurance Regulatory and Development Authority (IRDA) is the insurance regulatory authority; under section 4 of the IRDA Act 1999, it was constituted by an act of parliament. The National Insurance Academy, Pune is the apex insurance capacity - building institute, promoted with support from the Ministry of Finance and by LIC and the life and general insurance companies.
In 2017, within the framework of a joint project of the Bank of Russia and Yandex, a special check mark (a green circle with a tick and a ' Реестр ЦБ РФ ' (Unified state register of insurance entities) text box) appeared in Yandex search results, informing the consumer that the marked website offers the financial services of an entity that has the status of an insurance company, a broker or a mutual insurance association.
Insurance is a risk transfer mechanism: by paying premiums, the financial burden that may arise from some fortuitous event is transferred to a larger entity, the insurance company. This reduces the financial burden but not the actual likelihood of the event occurring. Insurance is a risk for both the insurance company and the insured. The insurance company understands the risk involved and will perform a risk assessment when writing the policy. As a result, the premiums may go up if they determine that the policyholder is likely to file a claim. If a person is financially stable and plans for life 's unexpected events, they may be able to go without insurance. However, they must have enough to cover a total and complete loss of employment and of their possessions. Some states will accept a surety bond, a government bond, or even a cash deposit with the state.
An insurance company may inadvertently find that its insureds may not be as risk - averse as they might otherwise be (since, by definition, the insured has transferred the risk to the insurer), a concept known as moral hazard. This ' insulates ' many from the true costs of living with risk, negating measures that can mitigate or adapt to risk and leading some to describe insurance schemes as potentially maladaptive. To reduce their own financial exposure, insurance companies have contractual clauses that mitigate their obligation to provide coverage if the insured engages in behavior that grossly magnifies their risk of loss or liability.
For example, life insurance companies may require higher premiums or deny coverage altogether to people who work in hazardous occupations or engage in dangerous sports. Liability insurance providers do not provide coverage for liability arising from intentional torts committed by or at the direction of the insured. Even if a provider desired to provide such coverage, it is against the public policy of most countries to allow such insurance to exist, and thus it is usually illegal.
Insurance policies can be complex and some policyholders may not understand all the fees and coverages included in a policy. As a result, people may buy policies on unfavorable terms. In response to these issues, many countries have enacted detailed statutory and regulatory regimes governing every aspect of the insurance business, including minimum standards for policies and the ways in which they may be advertised and sold.
For example, most insurance policies in the English language today have been carefully drafted in plain English; the industry learned the hard way that many courts will not enforce policies against insureds when the judges themselves can not understand what the policies are saying. Typically, courts construe ambiguities in insurance policies against the insurance company and in favor of coverage under the policy.
Many institutional insurance purchasers buy insurance through an insurance broker. On the surface, the broker appears to represent the buyer (not the insurance company) and typically counsels the buyer on appropriate coverage and policy limitations. In the vast majority of cases, however, a broker 's compensation comes in the form of a commission as a percentage of the insurance premium, creating a conflict of interest: the broker 's financial interest is tilted towards encouraging the insured to purchase more insurance than might be necessary, at a higher price. A broker generally holds contracts with many insurers, thereby allowing the broker to "shop '' the market for the best rates and coverage possible.
Insurance may also be purchased through an agent. A tied agent, working exclusively with one insurer, represents the insurance company from whom the policyholder buys (while a free agent sells policies of various insurance companies). Just as there is a potential conflict of interest with a broker, an agent has a different type of conflict. Because agents work directly for the insurance company, if there is a claim the agent may advise the client to the benefit of the insurance company. Agents generally can not offer as broad a range of selection compared to an insurance broker.
An independent insurance consultant advises insureds on a fee - for - service retainer, similar to an attorney, and thus offers completely independent advice, free of the financial conflict of interest of brokers or agents. However, such a consultant must still work through brokers or agents in order to secure coverage for their clients.
In the United States, economists and consumer advocates generally consider insurance to be worthwhile for low - probability, catastrophic losses, but not for high - probability, small losses. Because of this, consumers are advised to select high deductibles and not to insure losses which would not cause a disruption in their life. However, consumers have shown a tendency to prefer low deductibles and to prefer to insure relatively high - probability, small losses over low - probability, catastrophic ones, perhaps due to not understanding or ignoring the low - probability risk. This is associated with reduced purchasing of insurance against low - probability losses, and may result in increased inefficiencies from moral hazard.
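A back - of - the - envelope comparison can make this reasoning concrete. The probabilities, loss amounts, and premium loading below are invented purely for illustration and do not reflect actual market rates.

```python
# Compare insuring a high-probability small loss with a low-probability catastrophic loss.
# A 40% loading over the expected loss is assumed to cover insurer expenses and profit.
LOADING = 0.40

def premium(prob, loss):
    """Expected loss plus the assumed loading."""
    return prob * loss * (1 + LOADING)

small_loss = premium(prob=0.25, loss=500)         # e.g. a minor repair in a given year
catastrophe = premium(prob=0.002, loss=300_000)   # e.g. a destroyed home

print(f"small-loss premium:  ${small_loss:,.2f} (expected loss ${0.25 * 500:,.2f})")
print(f"catastrophe premium: ${catastrophe:,.2f} (expected loss ${0.002 * 300_000:,.2f})")
# The loading is the same fraction of expected loss in both cases, but only the
# catastrophic loss would be financially disruptive; paying the loading on small,
# absorbable losses is the inefficiency the advice above is trying to avoid.
```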
Redlining is the practice of denying insurance coverage in specific geographic areas, supposedly because of a high likelihood of loss, while the alleged motivation is unlawful discrimination. Racial profiling or redlining has a long history in the property insurance industry in the United States. From a review of industry underwriting and marketing materials, court documents, and research by government agencies, industry and community groups, and academics, it is clear that race has long affected and continues to affect the policies and practices of the insurance industry.
In July 2007, the Federal Trade Commission (FTC) released a report presenting the results of a study concerning credit - based insurance scores in automobile insurance. The study found that these scores are effective predictors of risk. It also showed that African - Americans and Hispanics are substantially overrepresented in the lowest credit scores, and substantially underrepresented in the highest, while Caucasians and Asians are more evenly spread across the scores. The credit scores were also found to predict risk within each of the ethnic groups, leading the FTC to conclude that the scoring models are not solely proxies for redlining. The FTC indicated that little data was available to evaluate the benefit of insurance scores to consumers. The report was disputed by representatives of the Consumer Federation of America, the National Fair Housing Alliance, the National Consumer Law Center, and the Center for Economic Justice, for relying on data provided by the insurance industry.
All states have provisions in their rate regulation laws or in their fair trade practice acts that prohibit unfair discrimination, often called redlining, in setting rates and making insurance available.
In determining premiums and premium rate structures, insurers consider quantifiable factors, including location, credit scores, gender, occupation, marital status, and education level. However, the use of such factors is often considered to be unfair or unlawfully discriminatory, and the reaction against this practice has in some instances led to political disputes about the ways in which insurers determine premiums and regulatory intervention to limit the factors used.
An insurance underwriter 's job is to evaluate a given risk as to the likelihood that a loss will occur. Any factor that causes a greater likelihood of loss should theoretically be charged a higher rate. This basic principle of insurance must be followed if insurance companies are to remain solvent. Thus, "discrimination '' against (i.e., negative differential treatment of) potential insureds in the risk evaluation and premium - setting process is a necessary by - product of the fundamentals of insurance underwriting. For instance, insurers charge older people significantly higher premiums than they charge younger people for term life insurance. Older people are thus treated differently from younger people (i.e., a distinction is made, discrimination occurs). The rationale for the differential treatment goes to the heart of the risk a life insurer takes: Old people are likely to die sooner than young people, so the risk of loss (the insured 's death) is greater in any given period of time and therefore the risk premium must be higher to cover the greater risk. However, treating insureds differently when there is no actuarially sound reason for doing so is unlawful discrimination.
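A minimal sketch of the arithmetic behind this kind of risk - based pricing is shown below, using a one - year term life example. The mortality probabilities and loading are made - up round numbers, not actuarial table values.

```python
# Risk-based pricing: the premium scales with the assumed probability of loss.
def one_year_term_premium(death_prob, face_amount, loading=0.25):
    """Expected claim cost plus an assumed loading for expenses and profit."""
    return death_prob * face_amount * (1 + loading)

FACE = 250_000
younger = one_year_term_premium(death_prob=0.001, face_amount=FACE)  # hypothetical younger applicant
older = one_year_term_premium(death_prob=0.010, face_amount=FACE)    # hypothetical older applicant

print(f"younger insured: ${younger:,.2f} per year")
print(f"older insured:   ${older:,.2f} per year")
# A tenfold difference in assumed mortality risk produces a tenfold difference in the
# risk premium -- the actuarially grounded differential treatment described above.
```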
New assurance products can now be protected from copying with a business method patent in the United States.
A recent example of a new insurance product that is patented is Usage Based auto insurance. Early versions were independently invented and patented by a major US auto insurance company, Progressive Auto Insurance (U.S. Patent 5,797,134) and a Spanish independent inventor, Salvador Minguijon Perez (EP 0700009).
Many independent inventors are in favor of patenting new insurance products since it gives them protection from big companies when they bring their new insurance products to market. Independent inventors account for 70 % of the new U.S. patent applications in this area.
Many insurance executives are opposed to patenting insurance products because it creates a new risk for them. The Hartford insurance company, for example, recently had to pay $80 million to an independent inventor, Bancorp Services, in order to settle a patent infringement and theft of trade secret lawsuit for a type of corporate owned life insurance product invented and patented by Bancorp.
There are currently about 150 new patent applications on insurance inventions filed per year in the United States. The rate at which patents have been issued has steadily risen from 15 in 2002 to 44 in 2006.
An example of a published insurance patent application is US2009005522, "risk assessment company '', posted on March 6, 2009. This application describes a method for increasing the ease of changing insurance companies.
Insurance on demand (also IoD) is an insurance service that provides clients with insurance protection when they need it, i.e. only episodically rather than on a 24 / 7 basis as typically provided by traditional insurers (e.g. clients can purchase insurance for one single flight rather than a longer - lasting travel insurance plan).
Certain insurance products and practices have been described as rent - seeking by critics. That is, some insurance products or practices are useful primarily because of legal benefits, such as reducing taxes, as opposed to providing protection against risks of adverse events. Under United States tax law, for example, most owners of variable annuities and variable life insurance can invest their premium payments in the stock market and defer or eliminate paying any taxes on their investments until withdrawals are made. Sometimes this tax deferral is the only reason people use these products. Another example is the legal infrastructure that allows life insurance to be held in an irrevocable trust, which is used to pay an estate tax while the proceeds themselves are exempt from the estate tax.
Muslim scholars have varying opinions about life insurance. Life insurance policies that earn interest (or guaranteed bonus / NAV) are generally considered to be a form of riba (usury) and some consider even policies that do not earn interest to be a form of gharar (speculation). Some argue that gharar is not present due to the actuarial science behind the underwriting. Jewish rabbinical scholars also have expressed reservations regarding insurance as an avoidance of God 's will but most find it acceptable in moderation.
Some Christians believe insurance represents a lack of faith and there is a long history of resistance to commercial insurance in Anabaptist communities (Mennonites, Amish, Hutterites, Brethren in Christ) but many participate in community - based self - insurance programs that spread risk within their communities.
|
when was the defense of marriage act passed | Defense of marriage Act - wikipedia
The Defense of Marriage Act (DOMA) (Pub. L. 104 -- 199, 110 Stat. 2419, enacted September 21, 1996, 1 U.S.C. § 7 and 28 U.S.C. § 1738C) was a United States federal law that, prior to being ruled unconstitutional, defined marriage for federal purposes as the union of one man and one woman, and allowed states to refuse to recognize same - sex marriages granted under the laws of other states. Until Section 3 of the Act was struck down in 2013 (United States v. Windsor), DOMA, in conjunction with other statutes, had barred same - sex married couples from being recognized as "spouses '' for purposes of federal laws, effectively barring them from receiving federal marriage benefits. DOMA 's passage did not prevent individual states from recognizing same - sex marriage, but it imposed constraints on the benefits received by all legally married same - sex couples.
Initially introduced in May 1996, DOMA passed both houses of Congress by large, veto - proof majorities and was signed into law by President Bill Clinton in September 1996. By defining "spouse '' and its related terms to signify a heterosexual couple in a recognized marriage, Section 3 codified non-recognition of same - sex marriages for all federal purposes, including insurance benefits for government employees, social security survivors ' benefits, immigration, bankruptcy, and the filing of joint tax returns, as well as excluding same - sex spouses from the scope of laws protecting families of federal officers (18 U.S.C. § 115), laws evaluating financial aid eligibility, and federal ethics laws applicable to opposite - sex spouses.
In United States v. Windsor (2013), the U.S. Supreme Court declared Section 3 of DOMA unconstitutional under the Due Process Clause of the Fifth Amendment. Obergefell v. Hodges (2015) struck down the act 's provisions disallowing same - sex marriages to be performed under federal jurisdiction.
The issue of legal recognition of same - sex marriage attracted mainstream attention infrequently until the 1980s. A sympathetic reporter heard several gay men raise the issue in 1967 and described it as "high among the deviate 's hopes ''. In one early incident, gay activist Jack Baker brought suit against the state of Minnesota in 1970 after being denied a marriage license to marry another man; the Minnesota Supreme Court ruled (in Baker v. Nelson) that limiting marriage to opposite - sex couples did not violate the Constitution. Baker later changed his legal name to Pat Lynn McConnell and married his male partner in 1971, but the marriage was not legally recognized. A 1972 off - Broadway play, Nightride, depicted "a black -- white homosexual marriage ''. In 1979, IntegrityUSA, an organization of gay Episcopalians, raised the issue when the U.S. Episcopal Church considered a ban on the ordination of homosexuals as priests.
The New York Times said the question was "all but dormant '' until the late 1980s when, according to gay activists, "the AIDS epidemic... brought questions of inheritance and death benefits to many people 's minds. '' In May 1989, Denmark established registered partnerships that granted same - sex couples many of the rights associated with marriage. In the same year, New York 's highest court ruled that two homosexual men qualified as a family for the purposes of New York City 's rent - control regulations. Within the movement for gay and lesbian rights, a debate between advocates of sexual liberation and of social integration was taking shape, with Andrew Sullivan publishing an essay "Here Comes the Groom '' in The New Republic in August 1989 arguing for same - sex marriage: "A need to rebel has quietly ceded to a desire to belong. '' In September 1989, the State Bar Association of California urged recognition of marriages between homosexuals even before gay rights advocates adopted the issue.
Gary Bauer, head of the socially conservative Family Research Council, predicted the issue would be "a major battleground in the 1990s ''. In 1991, Georgia Attorney General Michael J. Bowers withdrew a job offer to a lesbian who planned to marry another woman in a Jewish wedding ceremony. In 1993, a committee of the Evangelical Lutheran Church in America released a report asking Lutherans to consider blessing same - sex marriages and stating that lifelong abstinence was harmful to same - sex couples. The Conference of Bishops responded, "There is basis neither in Scripture nor tradition for the establishment of an official ceremony by this church for the blessing of a homosexual relationship. '' In a critique of radicalism in the gay liberation movement, Bruce Bawer 's A Place at the Table (1993) advocated the legalization of same - sex marriage.
In Baehr v. Miike (1993), the Supreme Court of Hawaii ruled that the state must show a compelling interest in prohibiting same - sex marriage. This finding prompted concern among opponents of same - sex marriage, who feared that same - sex marriage might become legal in Hawaii and that other states would recognize or be compelled to recognize those marriages under the Full Faith and Credit Clause of the United States Constitution. The House Judiciary Committee 's 1996 Report called for DOMA as a response to Baehr, because "a redefinition of marriage in Hawaii to include homosexual couples could make such couples eligible for a whole range of federal rights and benefits ''.
The main provisions of the act were as follows: Section 2 provided that no state was required to recognize a same - sex marriage performed under the laws of another state, and Section 3 defined marriage, for all federal purposes, as the legal union of one man and one woman as husband and wife.
Georgia Representative Bob Barr, then a Republican, authored the Defense of Marriage Act and introduced it in the House of Representatives on May 7, 1996. Senator Don Nickles, (R) Oklahoma, introduced the bill in the Senate. The House Judiciary Committee stated that the Act was intended by Congress to "reflect and honor a collective moral judgment and to express moral disapproval of homosexuality ''. The Act 's congressional sponsors stated, "(T) he bill amends the U.S. Code to make explicit what has been understood under federal law for over 200 years; that a marriage is the legal union of a man and a woman as husband and wife, and a spouse is a husband or wife of the opposite sex. ''
Nickles said, "If some state wishes to recognize same - sex marriage, they can do so ''. He said the bill would ensure that "the 49 other states do n't have to and the Federal Government does not have to. '' In opposition to the bill, Colorado Rep. Patricia Schroeder said, "You ca n't amend the Constitution with a statute. Everybody knows that. This is just stirring the political waters and seeing what hate you can unleash. '' Barr countered that the Full Faith and Credit Clause of the Constitution grants Congress power to determine "the effect '' of the obligation of each state to grant "full faith and credit '' to other states ' acts.
The 1996 Republican Party platform endorsed DOMA, referencing only Section 2 of the act: "We reject the distortion of (anti-discrimination) laws to cover sexual preference, and we endorse the Defense of Marriage Act to prevent states from being forced to recognize same - sex unions. '' The Democratic Party platform that year did not mention DOMA or same - sex marriage. In a June 1996 interview in the gay and lesbian magazine The Advocate, Clinton said, "I remain opposed to same - sex marriage. I believe marriage is an institution for the union of a man and a woman. This has been my long - standing position, and it is not being reviewed or reconsidered. '' But he also criticized DOMA as "divisive and unnecessary. ''
The bill moved through Congress on a legislative fast track and met with overwhelming approval in both houses of the Republican - controlled Congress. On July 12, 1996, with only 65 Democrats, then - Rep. Bernie Sanders (Independent - Vermont), and Rep. Steve Gunderson (Republican - Wisconsin) in opposition, 342 members of the U.S. House of Representatives -- 224 Republicans and 118 Democrats -- voted to pass DOMA. Then, on September 10, 1996, 84 Senators -- a majority of the Democratic Senators and all of the Republicans -- voted in favor of DOMA. Democratic Senators voted for the bill 32 to 14 (with Pryor of Arkansas absent), and Democratic Representatives voted for it 118 to 65, with 15 not participating. All Republicans in both houses voted for the bill with the sole exception of the one openly gay Republican Congressman, Rep. Steve Gunderson of Wisconsin.
While his stated position was against same - sex marriage, Clinton criticized DOMA as "unnecessary and divisive '', and his press - secretary called it "gay baiting, plain and simple ''. However, after Congress had passed the bill with enough votes to override a presidential veto, Clinton signed DOMA. Years later, he said that he did so reluctantly in view of the veto - proof majority, both to avoid associating himself politically with the then - unpopular cause of same - sex marriage and to defuse momentum for a proposed Federal Amendment to the U.S. Constitution banning same - sex marriage. Clinton, who was traveling when Congress acted, signed it into law promptly upon returning to Washington, D.C., on September 21, 1996; no signing ceremony was held for DOMA and no photographs were taken of him signing it into law. The White House released a statement in which Clinton said "that the enactment of this legislation should not, despite the fierce and at times divisive rhetoric surrounding it, be understood to provide an excuse for discrimination, violence or intimidation against any person on the basis of sexual orientation ''.
In 2013, Mike McCurry, the White House press secretary at the time, recalled that Clinton 's "posture was quite frankly driven by the political realities of an election year in 1996. '' James Hormel, who was appointed by Clinton as the first openly gay U.S. Ambassador, described the reaction from the gay community to Clinton signing DOMA as shock and anger. On Hormel 's account, Clinton had been the first President to advocate gay rights, push for AIDS funding, support gay and lesbian civil rights legislation, and appoint open LGBT people to his Administration. Thus his signing of DOMA was viewed by much of the community as a great betrayal.
Clinton did not mention DOMA in his 2004 autobiography. Over time, Clinton 's public position on same - sex marriage shifted. He spoke out against the passage of California 's Proposition 8 and recorded robocalls urging Californians to vote against it. In July 2009, he officially came out in support for same - sex marriage.
On August 13, 2009, during Netroots Nation, when confronted by LGBT activist Lane Hudson, Clinton explained that he had to sign DOMA in order to prevent a constitutional amendment that would proscribe same - sex marriage: "We were attempting at the time, in a very reactionary Congress, to head off an attempt to send a constitutional amendment banning gay marriage to the states. And if you look at the eleven referenda much later -- in 2004, in the election -- which the Republicans put on the ballot to try to get the base vote for President Bush up, I think it 's obvious that something had to be done to try to keep the Republican Congress from presenting that. ''
In an op - ed written on March 7, 2013, for The Washington Post, Clinton again suggested that DOMA was necessary in order to preclude, at that time, the passage of a constitutional amendment banning same - sex marriage and urged the Supreme Court, which would shortly hear arguments on United States v. Windsor, to overturn DOMA.
Clinton 's explanation for signing DOMA has been disputed by gay rights activist Elizabeth Birch: "In 1996, I was President of the Human Rights Campaign, and there was no real threat of a Federal Marriage Amendment. That battle would explode about eight years later, in 2004, when President Bush announced it was a central policy goal of his administration to pass such an amendment. ''
Evan Wolfson, who, in 1996, ran the National Freedom to Marry Coalition, while an attorney at Lambda Legal, has also criticized the suggestion that DOMA was stopping something worse: "That 's complete nonsense. There was no conversation about something ' worse ' until eight years later. There was no talk of a constitutional amendment, and no one even thought it was possible -- and, of course, it turned out it was n't really possible to happen. So, the idea that people were swallowing DOMA in order to prevent a constitutional amendment is really just historic revisionism and not true. That was never an argument made in the ' 90s. ''
However, political discussion of the possibility of using a constitutional amendment to restrict marriage rights was not unheard of in the 1990s. In January 1996, for example, the House Judiciary Committee for the state of Hawaii voted 12 - 1 in favor of passing bill HB 117, which was aimed at amending the constitution of Hawaii to define marriage as involving one man and one woman. In 1998, the first anti-same - sex marriage amendment was added to the constitution of a U.S. state, though it was not until 2002 that a federal marriage amendment was first introduced in the U.S. Congress.
The General Accounting Office issued a report in 1997 identifying "1,049 federal statutory provisions classified to the United States Code in which benefits, rights, and privileges are contingent on marital status or in which marital status is a factor ''. In updating its report in 2004, the GAO found that this number had risen to 1,138 as of December 31, 2003. With respect to Social Security, housing, and food stamps, the GAO found that "recognition of the marital relationship is integral to the design of the program (s). '' The report also noted several other major program categories that were affected -- veterans ' benefits, including pensions and survivor benefits; taxes on income, estates, gifts, and property sales; and benefits due federal employees, both civilian and military -- and identified specifics such as the rights of the surviving spouse of a creator of copyrighted work and the financial disclosure requirements of spouses of Congress members and certain officers of the federal government. Education loan programs and agriculture price support and loan programs also implicate spouses. Financial aid to "family farms '' for example, is restricted to those in which "a majority interest is held by individuals related by marriage or blood. ''
Because the federal Employee Retirement Income Security Act (ERISA) controls most employee benefits provided by private employers, DOMA removed some tax breaks for employers and employees in the private sector when it comes to health care, pension, and disability benefits to same - sex spouses on an equal footing with opposite - sex spouses. ERISA does not affect employees of state and local government or churches, nor does it extend to such benefits as employee leave and vacation.
Under DOMA, persons in same - sex marriages were not considered married for immigration purposes. U.S. citizens and permanent residents in same - sex marriages could not petition for their spouses, nor could they be accompanied by their spouses into the U.S. on the basis of a family or employment - based visa. A non-citizen in such a marriage could not use it as the basis for obtaining a waiver or relief from removal from the U.S.
Following the end of the U.S. military 's ban on service by open gays and lesbians, "Do n't ask, do n't tell, '' in September 2011, Admiral Mike Mullen, Chairman of the Joint Chiefs of Staff, noted that DOMA limited the military 's ability to extend the same benefits to military personnel in same - sex marriages as their peers in opposite - sex marriages received, notably health benefits. Same - sex spouses of military personnel were denied the same access to military bases, legal counseling, and housing allowances provided to different - sex spouses.
The 2000 Republican Party platform endorsed DOMA in general terms and indicated concern about judicial activism: "We support the traditional definition of ' marriage ' as the legal union of one man and one woman, and we believe that federal judges and bureaucrats should not force states to recognize other living arrangements as marriages. '' The Democratic Party platform that year did not mention DOMA or marriage in this context.
In 2004, President George W. Bush endorsed a proposed constitutional amendment to restrict marriage to opposite - sex couples because he thought DOMA vulnerable: "After more than two centuries of American jurisprudence and millennia of human experience, a few judges and local authorities are presuming to change the most fundamental institution of civilization. Their actions have created confusion on an issue that requires clarity. '' In January 2005, however, he said he would not lobby on its behalf, since too many U.S. senators thought DOMA would survive a constitutional challenge.
President Barack Obama 's 2008 political platform endorsed the repeal of DOMA. On June 12, 2009, the Justice Department issued a brief defending the constitutionality of DOMA in the case of Smelt v. United States, continuing its longstanding practice of defending all federal laws challenged in court. On June 15, 2009, Human Rights Campaign President Joe Solmonese wrote an open letter to Obama that asked for actions to balance the DOJ 's courtroom position: "We call on you to put your principles into action and send legislation repealing DOMA to Congress. '' A representative of Lambda Legal, an LGBT impact litigation and advocacy organization, noted that the Obama administration 's legal arguments omitted the Bush administration 's assertion that households headed by opposite - sex spouses were better at raising children than those headed by same - sex spouses.
On February 23, 2011, Attorney General Eric Holder released a statement regarding lawsuits challenging DOMA Section 3. He wrote:
After careful consideration, including a review of my recommendation, the President has concluded that given a number of factors, including a documented history of discrimination, classifications based on sexual orientation should be subject to a more heightened standard of scrutiny. The President has also concluded that Section 3 of DOMA, as applied to legally married same - sex couples, fails to meet that standard and is therefore unconstitutional. Given that conclusion, the President has instructed the Department not to defend the statute in such cases.
He also announced that although it was no longer defending Section 3 in court, the administration intended to continue to enforce the law "unless and until Congress repeals Section 3 or the judicial branch renders a definitive verdict against the law 's constitutionality. ''
In a separate letter to Speaker of the House John Boehner, Holder noted that Congress could participate in these lawsuits.
On February 24, the Department of Justice notified the First Circuit Court of Appeals that it would "cease to defend '' Gill and Massachusetts as well. On July 1, 2011, the DOJ, with a filing in Golinski, intervened for the first time on behalf of a plaintiff seeking to have DOMA Section 3 ruled unconstitutional, arguing that laws that use sexual orientation as a classification need to pass the court 's intermediate scrutiny standard of review. The DOJ made similar arguments in a filing in Gill on July 7.
In June 2012, filing an amicus brief in Golinski, two former Republican Attorneys General, Edwin Meese and John Ashcroft, called the DOJ 's decision not to defend DOMA Section 3 "an unprecedented and ill - advised departure from over two centuries of Executive Branch practice '' and "an extreme and unprecedented deviation from the historical norm ''.
On March 4, 2011, Boehner announced that the Bipartisan Legal Advisory Group (BLAG) would convene to consider whether the House of Representatives should defend DOMA Section 3 in place of the Department of Justice, and on March 9 the committee voted 3 -- 2 to do so.
On April 18, 2011, House leaders announced the selection of former United States Solicitor General Paul Clement to represent BLAG. Clement, without opposition from other parties to the case, filed a motion to be allowed to intervene in the suit "for the limited purpose of defending the constitutionality of Section III '' of DOMA. On April 25, 2011, King & Spalding, the law firm through which Clement was handling the case, announced it was dropping the case. On the same day, Clement resigned from King & Spalding in protest and joined Bancroft PLLC, which took on the case. The House 's initial contract with Clement capped legal fees at $500,000, but on September 30 a revised contract raised the cap to $1.5 million. A spokesman for Boehner explained that BLAG would not appeal in all cases, citing bankruptcy cases that are "unlikely to provide the path to the Supreme Court... (E) ffectively defending (DOMA) does not require the House to intervene in every case, especially when doing so would be prohibitively expensive. ''
On September 15, 2009, three Democratic members of Congress, Jerrold Nadler of New York, Tammy Baldwin of Wisconsin, and Jared Polis of Colorado, introduced legislation to repeal DOMA called the Respect for Marriage Act. The bill had 91 original co-sponsors in the House of Representatives and was supported by Clinton, Barr, and several legislators who voted for DOMA. Congressman Barney Frank and John Berry, head of the Office of Personnel Management, did not support that effort, stating that "the backbone is not there '' in Congress. Frank and Berry suggested DOMA could be overturned more quickly through lawsuits such as Gill v. Office of Personnel Management filed by Gay & Lesbian Advocates & Defenders (GLAD).
Following Holder 's announcement that the Obama Administration would no longer defend DOMA Section 3 in court, on March 16, 2011, Senator Dianne Feinstein introduced the Respect for Marriage Act in the Senate again and Nadler introduced it in the House. The Senate Judiciary Committee voted 10 -- 8 in favor of advancing the bill to the Senate floor, but observers believed it would not gain the 60 votes needed to end debate and bring it to a vote.
After the Supreme Court struck down DOMA Section 3 on June 26, 2013, Feinstein and Nadler reintroduced the Respect for Marriage Act as S. 1236 and H.R. 2523.
Numerous plaintiffs have challenged DOMA. Prior to 2009, all federal courts upheld DOMA in its entirety.
Later cases focused on Section 3 's definition of marriage. The courts, using different standards, all found Section 3 unconstitutional. Requests for the Supreme Court to hear appeals were filed in five cases, discussed below:
Golinski v. Office of Personnel Management is a challenge to Section 3 of DOMA in federal court based on a judicial employee 's attempt to receive spousal health benefits for her wife. In 2008, Karen Golinski, a 19 - year employee of the Ninth Circuit Court of Appeals, applied for health benefits for her wife. When the application was denied, she filed a complaint under the Ninth Circuit 's Employment Dispute Resolution Plan. Chief Judge Alex Kozinski, in his administrative capacity, ruled in 2009 that she was entitled to spousal health benefits, but the Office of Personnel Management (OPM) announced that it would not comply with the ruling.
On March 17, 2011, U.S. District Judge Jeffrey White dismissed the suit on procedural grounds but invited Golinski to amend her suit to argue the unconstitutionality of DOMA Section 3, which she did on April 14. Following the Attorney General 's decision to no longer defend DOMA, the Bipartisan Legal Advisory Group (BLAG), an arm of the House of Representatives, took up the defense. Former United States Solicitor General Paul Clement filed, on BLAG 's behalf, a motion to dismiss raising arguments previously avoided by the Department of Justice: that DOMA 's definition of marriage is valid "because only a man and a woman can beget a child together, and because historical experience has shown that a family consisting of a married father and mother is an effective social structure for raising children. '' On July 1, 2011, the DOJ filed a brief in support of Golinski 's suit, in which it detailed for the first time its case for heightened scrutiny based on "a significant history of purposeful discrimination against gay and lesbian people, by governmental as well as private entities '' and its arguments that DOMA Section 3 fails to meet that standard.
On February 22, 2012, White ruled for Golinski, finding DOMA "violates her right to equal protection of the law under the Fifth Amendment to the United States Constitution. '' He wrote that Section 3 of DOMA could not pass the "heightened scrutiny '' or the "rational basis '' test. He wrote,
The Court finds that neither Congress ' claimed legislative justifications nor any of the proposed reasons proffered by BLAG constitute bases rationally related to any of the alleged governmental interests. Further, after concluding that neither the law nor the record can sustain any of the interests suggested, the Court, having tried on its own, can not conceive of any additional interests that DOMA might further.
While the case was on appeal to the Ninth Circuit, on July 3, 2012, the DOJ asked the Supreme Court to review the case before the Ninth Circuit decides it so it can be heard together with two other cases in which DOMA Section 3 was held unconstitutional, Gill v. Office of Personnel Management and Massachusetts v. United States Department of Health and Human Services. The Supreme Court chose to hear Windsor instead of these cases, and following the Supreme Court decision in Windsor the Ninth Circuit dismissed the appeal in Golinski with the consent of all parties on July 23.
On March 3, 2009, GLAD filed a federal court challenge, Gill v. Office of Personnel Management, based on the Equal Protection Clause and the federal government 's consistent deference to each state 's definition of marriage prior to the enactment of DOMA. The case questioned only the DOMA provision that the federal government defines marriage as the union of a man and a woman. On May 6, 2010, Judge Joseph L. Tauro heard arguments in the U.S. District Court in Boston.
On July 8, 2009, Massachusetts Attorney General Martha Coakley filed a suit, Massachusetts v. United States Department of Health and Human Services, challenging the constitutionality of DOMA. The suit claims that Congress "overstepped its authority, undermined states ' efforts to recognize marriages between same - sex couples, and codified an animus towards gay and lesbian people. '' Tauro, the judge also handling Gill, heard arguments on May 26, 2010.
On July 8, 2010, Tauro issued his rulings in both Gill and Massachusetts, granting summary judgment for the plaintiffs in both cases. He found in Gill that Section 3 of the Defense of Marriage Act violates the equal protection of the laws guaranteed by the Due Process Clause of the Fifth Amendment to the U.S. Constitution. In Massachusetts he held that the same section of DOMA violates the Tenth Amendment and falls outside Congress ' authority under the Spending Clause of the Constitution. Those decisions were stayed after the DOJ filed an appeal on October 12, 2010.
On November 3, 2011, 133 House Democrats filed an amicus brief in support of the plaintiffs in Gill and Massachusetts, asserting their belief that Section 3 of DOMA was unconstitutional. Included among the members of Congress signing the brief were 14 members who had voted for the bill in 1996. Seventy major employers also filed an amicus brief supporting the plaintiffs. A three - judge panel heard arguments in the case on April 4, 2012, during which the DOJ for the first time took the position that it could not defend Section 3 of DOMA under any level of scrutiny. On May 31, 2012, the panel unanimously affirmed Tauro 's ruling, finding Section 3 of DOMA unconstitutional. On June 29, BLAG filed a petition for certiorari with the Supreme Court. The DOJ did so on July 3, while asking the Supreme Court to review Golinski as well. The Commonwealth of Massachusetts filed a response to both petitions adding the Spending Clause and Tenth Amendment issues as questions presented. The Supreme Court denied these petitions on June 27, 2013, following its decision in Windsor.
On November 9, 2010, the American Civil Liberties Union and the law firm Paul, Weiss, Rifkind, Wharton & Garrison filed United States v. Windsor in New York on behalf of a surviving same - sex spouse whose inheritance from her deceased spouse had been subject to federal taxation as if they were unmarried. New York is part of the Second Circuit, where no precedent exists for the standard of review to be followed in sexual - orientation discrimination cases.
New York Attorney General Eric Schneiderman filed a brief supporting Windsor 's claim on July 26, 2011.
On June 6, 2012, Judge Barbara Jones ruled that, based on rational basis review, Section 3 of DOMA is unconstitutional and ordered the requested tax refund be paid to Windsor. The plaintiff commented, "It 's thrilling to have a court finally recognize how unfair it is for the government to have treated us as though we were strangers. '' Windsor 's attorneys filed a petition of certiorari with the Supreme Court on July 16, asking for the case to be considered without waiting for the Second Circuit 's review. On October 18, the Second Circuit Court of Appeals upheld the lower court 's ruling that Section 3 of DOMA is unconstitutional. According to an ACLU press release, this ruling was "the first federal appeals court decision to decide that government discrimination against gay people gets a more exacting level of judicial review ''. In an opinion authored by Chief Judge Dennis Jacobs, the Second Circuit Court of Appeals stated:
Our straightforward legal analysis sidesteps the fair point that same - sex marriage is unknown to history and tradition, but law (federal or state) is not concerned with holy matrimony. Government deals with marriage as a civil status -- however fundamental -- and New York has elected to extend that status to same - sex couples.
On December 7, 2012, the Supreme Court agreed to hear the case. Oral arguments were heard on March 27, 2013. In a 5 -- 4 decision on June 26, 2013, the Court ruled Section 3 of DOMA to be unconstitutional, declaring it "a deprivation of the liberty of the person protected by the Fifth Amendment. ''
On July 18, 2013, the Bipartisan Legal Advisory Group (BLAG), which had mounted a defense of Section 3 when the administration declined to, acknowledged that in Windsor "(t) he Supreme Court recently resolved the issue of DOMA Section 3 's constitutionality '' and said "it no longer will defend that statute. ''
Pedersen v. Office of Personnel Management is a case filed by GLAD in Connecticut on behalf of same - sex couples in Connecticut, Vermont, and New Hampshire, in which GLAD repeats the arguments it made in Gill.
On July 31, 2012, Judge Vanessa Bryant ruled that "having considered the purported rational bases proffered by both BLAG and Congress and concluded that such objectives bear no rational relationship to Section 3 of DOMA as a legislative scheme, the Court finds that that no conceivable rational basis exists for the provision. The provision therefore violates the equal protection principles incorporated in the Fifth Amendment to the United States Constitution. '' She held that "laws that classify people based on sexual orientation should be subject to heightened scrutiny by courts '' but determined Section 3 of DOMA "fails to pass constitutional muster under even the most deferential level of judicial scrutiny. '' The case is currently on appeal to the Second Circuit, and on August 21, 2012, Pedersen asked the Supreme Court to review the case before the Second Circuit decides it so it can be heard together with Gill v. Office of Personnel Management and Massachusetts v. United States Department of Health and Human Services. The Supreme Court denied these petitions on June 27, 2013, following its decision in Windsor.
Other cases that challenged DOMA include:
On October 13, 2011, Carmen Cardona, a U.S. Navy veteran, filed a lawsuit in the United States Court of Appeals for Veterans Claims seeking disability benefits for her wife that the Veterans Administration and the Board of Veterans Appeals had denied. Cardona is represented by the Yale Law School Legal Services Clinic. At the request of BLAG, which is defending the government 's action, and over Cardona 's objections, the court postponed oral argument in Cardona v. Shinseki pending the Supreme Court 's disposition of writs of certiorari in other DOMA cases.
On October 27, 2011, the Servicemembers Legal Defense Network (SLDN) brought suit in federal court on behalf of several military servicemembers and veterans in same - sex marriages. In a November 21 filing in the case of McLaughlin v. Panetta, they wrote, "Any claim that DOMA, as applied to military spousal benefits, survives rational basis review is strained because paying unequal benefits to service members runs directly counter to the military values of uniformity, fairness and unit cohesion. '' The benefits at issue include medical and dental benefits, basic housing and transportation allowances, family separation benefits, visitation rights in military hospitals, and survivor benefit plans. The case was assigned to Judge Richard G. Stearns. One of the plaintiffs in the case, lesbian Charlie Morgan, who was undergoing chemotherapy, met with an assistant to Boehner on February 9, 2012, to ask him to consider not defending DOMA. The case is on hold at the request of both sides in anticipation of the outcome of two other First Circuit cases on appeal, Gill v. Office of Personnel Management and Massachusetts v. United States Department of Health and Human Services. On February 17, the DOJ announced it could not defend the constitutionality of the statutes challenged in the case. In May 2012, the parties filed briefs arguing whether BLAG has a right to intervene. On June 27, Stearns asked the parties to explain by July 18 why given the decision in Windsor he should not find for the plaintiffs. On July 18, BLAG 's response acknowledged that "(t) he Supreme Court recently resolved the issue of DOMA Section 3 's constitutionality '' and asked to be allowed to withdraw from the case. It took no position on the two statutes at issue in the case, which define a "spouse '' as "a person of the opposite sex '', except to say that "the question of whether (that definition) is constitutional remains open ''.
Tracey Cooper - Harris, an Army veteran from California, sued the Veterans Administration and the DOJ in federal court on February 1, 2012, asking for her wife to receive the benefits normally granted to spouses of disabled veterans. BLAG sought a delay in Cooper - Harris v. United States pending the resolution of Golinski, which the attorneys for Cooper - Harris, the Southern Poverty Law Center, opposed. The court denied BLAG 's motion on August 4. In February 2013, Judge Consuelo Marshall rejected the DOJ 's argument that the case could only be heard by the Board of Veterans ' Appeals and allowed the case to proceed. BLAG asked to withdraw from the lawsuit on July 22.
In May 2011, DOMA - based challenges by the Department of Justice to joint petitions for bankruptcy by married same - sex couples were denied in two cases, one in the Southern District of New York on May 4 and one in the Eastern District of California on May 31. Both rulings stressed practical considerations and avoided ruling on DOMA.
On June 13, 2011, 20 of the 25 judges of the U.S. Bankruptcy Court for the Central District of California signed an opinion in the case in re Balas and Morales that found that a same - sex married couple filing for bankruptcy "have made their case persuasively that DOMA deprives them of the equal protection of the law to which they are entitled. '' The decision found DOMA Section 3 unconstitutional and dismissed BLAG 's objections to the joint filing:
Although individual members of Congress have every right to express their views and the views of their constituents with respect to their religious beliefs and principles and their personal standards of who may marry whom, this court can not conclude that Congress is entitled to solemnize such views in the laws of this nation in disregard of the views, legal status and living arrangements of a significant segment of our citizenry that includes the Debtors in this case. To do so violates the Debtors ' right to equal protection of those laws embodied in the due process clause of the Fifth Amendment. This court can not conclude from the evidence or the record in this case that any valid governmental interest is advanced by DOMA as applied to the Debtors.
A spokesman for House Speaker Boehner said BLAG would not appeal the ruling. On July 7, 2011, the DOJ announced that after consultation with BLAG it would no longer raise objections to "bankruptcy petitions filed jointly by same - sex couples who are married under state law ''.
Bi-national same - sex couples were kept from legally living in the United States by DOMA 's Section 3, which prevented one spouse from sponsoring the other for a green card. Following some uncertainty after the Obama Administration determined Section 3 to be unconstitutional, the United States Citizenship and Immigration Services (USCIS) reaffirmed its policy of denying such applications. With respect to obtaining a visitor 's visa, Bureau rules treated bi-national same - sex spouses the same as bi-national opposite - sex unmarried partners under the classification "cohabiting partners ''.
Tim Coco and Genesio J. Oliveira, a same - sex couple married in Massachusetts in 2005, successfully challenged this policy and developed a model since followed by other immigration activists. The U.S. refused to recognize their marriage, and in 2007 Oliveira, a Brazilian national, accepted "voluntary departure '' and returned to Brazil. They conducted a national press campaign. A Boston Globe editorial commented, "Great strides toward equality for gays have been made in this country, but the woeful fate of Tim Coco and Genesio Oliveira shows that thousands of same - sex couples, even in Massachusetts, still are n't really full citizens. '' The editorial gained the attention of Senator John F. Kerry, who first lobbied Attorney General Eric Holder without success. He then gained the support of Homeland Security Secretary Janet Napolitano, who granted Oliveira humanitarian parole, enabling the couple to reunite in the U.S. in June 2010. Humanitarian parole is granted on a case - by - case basis at the Secretary 's discretion.
On September 28, 2011, in Lui v. Holder, U.S. District Court Judge Stephen V. Wilson rejected a challenge to DOMA, citing Adams v. Howerton (1982). The plaintiffs in that case had unsuccessfully challenged the denial of immediate relative status to the same - sex spouse of an American citizen. Early in 2012, two bi-national same - sex couples were granted "deferred action '' status, suspending deportation proceedings against the non-U.S. citizen for a year. A similar Texas couple had a deportation case dismissed in March 2012, leaving the non-citizen spouse unable to work legally in the United States but no longer subject to the threat of deportation.
On January 5, 2012, the U.S. District Court for the Northern District of Illinois in Chicago decided the suit of a same - sex binational couple. Demos Revelis and Marcel Maas, married in Iowa in 2010, sought to prevent the USCIS from applying Section 3 of DOMA to Revelis 's application for a permanent residence visa for Maas and, in the court 's words, "that their petition be reviewed and decided on the same basis as other married couples. '' Judge Harry D. Leinenweber, a Reagan appointee, denied the government 's motion to dismiss. BLAG has argued for the suit to be dismissed. In July the court stayed proceedings until mid-October because the USCIS was considering denying the plaintiffs ' request on grounds unrelated to DOMA.
On April 2, 2012, five bi-national same - sex couples represented by Immigration Equality and Paul, Weiss filed a lawsuit, Blesch v. Holder, in the District Court for the Eastern District of New York, claiming that Section 3 of DOMA violated their equal protection rights by denying the U.S. citizen in the relationship the same rights in the green card application process granted a U.S. citizen who is in a relationship of partners of the opposite sex. On July 25, Chief Judge Carol Bagley Amon stayed the case pending the resolution of Windsor by the Second Circuit.
Immigration rights advocate Lavi Soloway reported on June 19, 2012, that the Board of Immigration Appeals (BIA) had in four cases responded to green card denials on the part of the U.S. Citizenship and Immigration Services (USCIS) by asking the USCIS to document the marital status of the same - sex couples and determine whether the foreign national would qualify for a green card in the absence of DOMA Section 3. He said the BIA is "essentially setting the stage for being able to approve the petitions in a post-DOMA universe. ''
On April 19, 2013, U.S. District Judge Consuelo Marshall ordered that a suit brought in July 2012 by Jane DeLeon, a Philippine citizen, and her spouse, Irma Rodriguez, a U.S. citizen, could proceed as a class action. The plaintiffs, represented by the Center for Human Rights and Constitutional Law, contended that DeLeon was denied a residency waiver because of DOMA Section 3.
On June 28, 2013, the USCIS notified U.S. citizen Julian Marsh that it had approved the green card petition for his Bulgarian husband Traian Popov. Both are residents of Florida. On July 3, the USCIS office in Centennial, Colorado, granted Cathy Davis, a citizen of Ireland, a green card based on her marriage to U.S. citizen Catriona Dowling.
In 2009, United States Court of Appeals for the Ninth Circuit Judge Stephen Reinhardt declared DOMA unconstitutional in In re Levenson, an employment dispute resolution tribunal case in which the federal government refused to grant spousal benefits to Tony Sears, the husband of deputy federal public defender Brad Levenson. As an employee of the federal judiciary, Levenson was prohibited from suing his employer in federal court; instead, employment disputes are handled at employment dispute resolution tribunals in which a federal judge hears the dispute in the capacity of a dispute resolution official.
Section 2 of DOMA purports to relieve any state of the obligation to recognize same - sex marriages performed in other jurisdictions. Various federal lawsuits, some filed alongside challenges to Section 3, have challenged Section 2.
On June 26, 2015 the U.S. Supreme Court ruled in Obergefell v. Hodges that the 14th Amendment requires all U.S. state laws to recognize same - sex marriages. This left Section 2 of DOMA as superseded and unenforceable.
|
the guid partition table method for partitioning a drive allows up to 128 partitions | GUID partition table - wikipedia
GUID Partition Table (GPT) is a standard for the layout of the partition table on a physical storage device used in a desktop or server PC, such as a hard disk drive or solid - state drive, using globally unique identifiers (GUID). Although it forms a part of the Unified Extensible Firmware Interface (UEFI) standard (Unified EFI Forum proposed replacement for the PC BIOS), it is also used on some BIOS systems because of the limitations of master boot record (MBR) partition tables, which use 32 bits for storing logical block addresses (LBA) and size information on a traditionally 512 - byte disk sector.
All modern PC operating systems support GPT. Some, including macOS and Microsoft Windows on x86, support booting from GPT partitions only on systems with EFI firmware, but FreeBSD and most Linux distributions can boot from GPT partitions on systems with either a legacy BIOS firmware interface or EFI.
The widespread MBR partitioning scheme, dating from the early 1980s, imposed limitations that affect the use of modern hardware. One of the main limitations is the usage of 32 bits for storing block addresses and quantity information. For hard disks with 512 - byte sectors, the MBR partition table entries allow up to a maximum of 2 TiB (2^32 × 512 bytes).
Intel therefore developed a new partition table format in the late 1990s as part of what eventually became UEFI. As of 2010, GPT forms a subset of the UEFI specification. GPT allocates 64 bits for logical block addresses, therefore allowing a maximum disk size of 2^64 sectors. For disks with 512 - byte sectors, maximum size is 9.4 ZB (9.4 × 10^21 bytes) or 8 ZiB (coming from 2^64 sectors × 2^9 bytes per sector).
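The gap between the two limits is plain arithmetic. The short Python sketch below reproduces the figures quoted above for a 512-byte sector; the variable names are made up for illustration and nothing here comes from a particular tool or library.

SECTOR = 512  # bytes per logical sector

mbr_max_bytes = (2 ** 32) * SECTOR   # 32-bit LBAs: 2 TiB
gpt_max_bytes = (2 ** 64) * SECTOR   # 64-bit LBAs: 8 ZiB, roughly 9.4 x 10^21 bytes

print(f"MBR limit: {mbr_max_bytes / 2 ** 40:.0f} TiB")                              # 2 TiB
print(f"GPT limit: {gpt_max_bytes / 2 ** 70:.0f} ZiB ({gpt_max_bytes:.3e} bytes)")  # 8 ZiB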
Like modern MBRs, GPTs use logical block addressing (LBA) in place of the historical cylinder - head - sector (CHS) addressing. The protective MBR is contained in LBA 0, the GPT header is in LBA 1, and the GPT header has a pointer to the partition table, or Partition Entry Array, typically LBA 2. The UEFI specification stipulates that a minimum of 16,384 bytes, regardless of sector size, be allocated for the Partition Entry Array. On a disk having 512 - byte sectors, a partition entry array size of 16,384 bytes and the minimum size of 128 bytes for each partition entry, LBA 34 is the first usable sector on the disk.
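The figure of LBA 34 follows directly from those minimums. A minimal sketch of the calculation, assuming the 16,384-byte minimum Partition Entry Array from the specification; the function name is illustrative, not taken from any library.

def first_usable_lba(sector_size: int, entry_array_bytes: int = 16384) -> int:
    # LBA 0 holds the protective MBR and LBA 1 the GPT header; the Partition
    # Entry Array follows, rounded up to whole sectors.
    array_sectors = -(-entry_array_bytes // sector_size)  # ceiling division
    return 2 + array_sectors

print(first_usable_lba(512))   # 34 = 2 + 16384 / 512
print(first_usable_lba(4096))  # 6  = 2 + 16384 / 4096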
Hard - disk manufacturers are transitioning to 4,096 - byte sectors. The first such drives continued to present 512 - byte physical sectors to the OS, so degraded performance could result when the drive 's physical 4 - KB sector boundaries did not coincide with the 4 KB logical blocks, clusters and virtual memory pages common in many operating systems and file systems. This was a particular problem on writes, when the drive is forced to perform two read - modify - write operations to satisfy a single misaligned 4 KB write operation.
For backward compatibility with most legacy operating systems such as DOS, OS / 2, and versions of Windows before Vista, MBR partitions must always start on track boundaries according to the traditional CHS addressing scheme and end on a cylinder boundary. This is also true of partitions with emulated CHS geometries (as reflected by the BIOS and the CHS sectors entries in the MBR partition table) or partitions accessed only via LBA. Extended partitions must start on cylinder boundaries as well. This typically causes the first primary partition to start at LBA 63 on disks accessed via LBA, leaving a gap of 62 sectors with MBR - based disks, sometimes called "MBR gap '', "boot track '', or "embedding area ''. That otherwise unused disk space is commonly used by bootloaders such as GRUB for storing their second stages.
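On a 512e Advanced Format drive, which presents 512-byte logical sectors on top of 4,096-byte physical sectors, the traditional LBA 63 start is exactly the kind of misalignment described above. A minimal check, assuming those two sector sizes; the function name is illustrative.

LOGICAL = 512    # logical sector size presented to the operating system
PHYSICAL = 4096  # physical sector size of an Advanced Format (512e) drive

def starts_on_physical_boundary(start_lba: int) -> bool:
    return (start_lba * LOGICAL) % PHYSICAL == 0

print(starts_on_physical_boundary(63))    # False: the traditional CHS-derived start
print(starts_on_physical_boundary(2048))  # True: the 1 MiB alignment used by newer partitioning tools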
For limited backward compatibility, the space of the legacy MBR is still reserved in the GPT specification, but it is now used in a way that prevents MBR - based disk utilities from misrecognizing and possibly overwriting GPT disks. This is referred to as a protective MBR.
A single partition type of EEh, encompassing the entire GPT drive (where "entire '' actually means as much of the drive as can be represented in an MBR), is indicated and identifies it as GPT. Operating systems and tools which can not read GPT disks will generally recognize the disk as containing one partition of unknown type and no empty space, and will typically refuse to modify the disk unless the user explicitly requests and confirms the deletion of this partition. This minimizes accidental erasures. Furthermore, GPT - aware OSes may check the protective MBR and if the enclosed partition type is not of type EEh or if there are multiple partitions defined on the target device, the OS may refuse to manipulate the partition table.
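A GPT-aware tool can therefore recognize a protective MBR by reading the first sector and looking for a single partition entry of type EEh. A rough Python sketch of that check; the function name is made up for illustration and the commented-out device path is only an example.

def looks_like_protective_mbr(sector0: bytes) -> bool:
    if len(sector0) < 512 or sector0[510:512] != b"\x55\xaa":
        return False  # no MBR boot signature, so not a valid MBR at all
    # The four 16-byte MBR partition entries start at offset 446; the
    # partition type byte sits at offset 4 within each entry.
    types = [sector0[446 + 16 * i + 4] for i in range(4)]
    non_empty = [t for t in types if t != 0x00]
    return non_empty == [0xEE]  # exactly one partition, and it is type EEh

# Example use (requires permission to read the raw device):
# with open("/dev/sda", "rb") as disk:
#     print(looks_like_protective_mbr(disk.read(512)))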
While the MBR and protective MBR layouts were defined around 512 bytes per sector, the actual sector size can be larger on various media such as MO disks or hard disks with Advanced Format.
If the actual size of the disk exceeds the maximum partition size representable using the legacy 32 - bit LBA entries in the MBR partition table, the recorded size of this partition is clipped at the maximum, thereby ignoring the rest of disk. This amounts to a maximum reported size of 2 TiB, assuming a disk with 512 bytes per sector (see 512e). It would result in 16 TiB with 4 KiB sectors (4Kn), but since many older operating systems and tools are hard wired for a sector size of 512 bytes or are limited to 32 - bit calculations, exceeding the 2 TiB limit could cause compatibility problems.
In operating systems that support GPT - based boot through BIOS services rather than EFI, the first sector is also still used to store the first stage of the bootloader code, but modified to recognize GPT partitions. The bootloader in the MBR must not assume a sector size of 512 bytes.
The partition table header defines the usable blocks on the disk. It also defines the number and size of the partition entries that make up the partition table.
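Reading those values amounts to decoding a handful of fixed-offset fields from the sector holding the header. The sketch below uses the field offsets given in the UEFI specification; the function name and the returned dictionary keys are illustrative.

import struct

def parse_gpt_header(lba1: bytes) -> dict:
    if lba1[0:8] != b"EFI PART":  # the 8-byte GPT signature
        raise ValueError("not a GPT header")
    first_usable, last_usable = struct.unpack_from("<QQ", lba1, 40)
    (entry_array_lba,) = struct.unpack_from("<Q", lba1, 72)
    num_entries, entry_size = struct.unpack_from("<II", lba1, 80)
    return {
        "first_usable_lba": first_usable,
        "last_usable_lba": last_usable,
        "partition_entry_lba": entry_array_lba,  # typically 2
        "number_of_entries": num_entries,        # commonly 128
        "entry_size": entry_size,                # minimum 128 bytes
    }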
After the header, the Partition Entry Array describes partitions, using a minimum size of 128 bytes for each entry block. The starting location of the array on disk, and the size of each entry, are given in the GPT header. The first 16 bytes of each entry designate the partition type 's globally unique identifier (GUID). For example, the GUID for an EFI system partition is C12A7328 - F81F - 11D2 - BA4B - 00A0C93EC93B. The second 16 bytes are a GUID unique to the partition. Then follow the starting and ending 64 bit LBAs, partition attributes, and the 36 character (max.) Unicode partition name. As is the nature and purpose of GUIDs, no central registry is needed to ensure the uniqueness of the GUID partition type designators.
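Decoding one 128-byte entry follows the layout just described: two 16-byte GUIDs, two 64-bit LBAs, the attribute field, and the UTF-16 name. A sketch with an illustrative function name, where the byte offsets follow the layout above.

import struct
import uuid

def parse_gpt_entry(entry: bytes) -> dict:
    type_guid = uuid.UUID(bytes_le=entry[0:16])     # first 16 bytes: partition type GUID
    unique_guid = uuid.UUID(bytes_le=entry[16:32])  # second 16 bytes: GUID unique to this partition
    first_lba, last_lba, attributes = struct.unpack_from("<QQQ", entry, 32)
    name = entry[56:128].decode("utf-16-le").rstrip("\x00")  # up to 36 characters
    return {
        "type_guid": str(type_guid).upper(),    # e.g. C12A7328-F81F-11D2-BA4B-00A0C93EC93B for an EFI system partition
        "unique_guid": str(unique_guid).upper(),
        "first_lba": first_lba,
        "last_lba": last_lba,
        "attributes": attributes,
        "name": name,
    }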
The 64 - bit partition table attributes are divided into 48 bits of common attributes shared by all partition types and 16 bits of type - specific attributes.
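In other words, the low 48 bits carry the same meaning for every partition type, while the interpretation of the high 16 bits depends on the partition type GUID. A minimal illustration of that split; the constant and function names are made up for illustration.

COMMON_MASK = (1 << 48) - 1        # bits 0-47: same meaning for every partition type
TYPE_SPECIFIC_MASK = 0xFFFF << 48  # bits 48-63: meaning depends on the partition type GUID

def split_attributes(attrs: int) -> tuple:
    # Among the common bits, the UEFI specification defines bit 0 (required
    # partition), bit 1 (no block IO protocol) and bit 2 (legacy BIOS bootable).
    return attrs & COMMON_MASK, (attrs & TYPE_SPECIFIC_MASK) >> 48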
Microsoft defines type - specific attributes for its basic data partition type, and Google defines type - specific attributes for the Chrome OS kernel partition type.
Windows 7 and earlier do not support UEFI on 32 - bit platforms, and therefore do not allow booting from GPT partitions.
|
who were the main supporters of the federalist in the newly emerged two-party system | First Party System - wikipedia
The First Party System is a model of American politics used in history and political science to periodize the political party system that existed in the United States between roughly 1792 and 1824. It featured two national parties competing for control of the presidency, Congress, and the states: the Federalist Party, created largely by Alexander Hamilton, and the rival Jeffersonian Democratic - Republican Party, formed by Thomas Jefferson and James Madison, usually called at the time the "Republican Party. '' The Federalists were dominant until 1800, while the Republicans were dominant after 1800.
In an analysis of the contemporary party system, Jefferson wrote on February 12, 1798:
Two political Sects have arisen within the U.S. the one believing that the executive is the branch of our government which the most needs support; the other that like the analogous branch in the English Government, it is already too strong for the republican parts of the Constitution; and therefore in equivocal cases they incline to the legislative powers: the former of these are called federalists, sometimes aristocrats or monocrats, and sometimes Tories, after the corresponding sect in the English Government of exactly the same definition: the latter are stiled republicans, Whigs, jacobins, anarchists, dis - organizers, etc. these terms are in familiar use with most persons.
Both parties originated in national politics, but soon expanded their efforts to gain supporters and voters in every state. The Federalists appealed to the business community, the Republicans to the planters and farmers. By 1796 politics in every state was nearly monopolized by the two parties, with party newspapers and caucuses becoming especially effective tools to mobilize voters.
The Federalists promoted the financial system of Treasury Secretary Hamilton, which emphasized federal assumption of state debts, a tariff to pay off those debts, a national bank to facilitate financing, and encouragement of banking and manufacturing. The Republicans, based in the plantation South, opposed a strong executive power, were hostile to a standing army and navy, demanded a strict reading of the Constitutional powers of the federal government, and strongly opposed the Hamilton financial program. Perhaps even more important was foreign policy, where the Federalists favored Britain because of its political stability and its close ties to American trade, while the Republicans admired the French and the French Revolution. Jefferson was especially fearful that British aristocratic influences would undermine republicanism. Britain and France were at war from 1793 -- 1815, with only one brief interruption. American policy was neutrality, with the federalists hostile to France, and the Republicans hostile to Britain. The Jay Treaty of 1794 marked the decisive mobilization of the two parties and their supporters in every state. President George Washington, while officially nonpartisan, generally supported the Federalists and that party made Washington their iconic hero.
The First Party System ended during the Era of Good Feelings (1816 -- 1824), as the Federalists shrank to a few isolated strongholds and the Democratic - Republicans lost unity. In 1824 -- 28, as the Second Party System emerged, the Democratic - Republican Party split into the Jacksonian faction, which became the modern Democratic Party in the 1830s, and the Henry Clay faction, which was absorbed by Clay 's Whig Party.
Leading nationalists, George Washington, Alexander Hamilton and Benjamin Franklin (see Annapolis Convention), called the Constitutional Convention in 1787. It drew up a new constitution that was submitted to state ratification conventions for approval. (The old Congress of the Confederation approved the process.) James Madison was the most prominent figure; he is often referred to as "the father of the Constitution. ''
An intense debate on ratification pitted the "Federalists '' (who supported the Constitution, and were led by Madison and Hamilton) against the "Anti-Federalists, '' (who opposed the new Constitution). The Federalists won and the Constitution was ratified. The Anti-Federalists were deeply concerned about the theoretical danger of a strong central government (like that of Britain) that someday could usurp the rights of the states. The framers of the Constitution did not want or expect political parties would emerge, because they considered them divisive.
The term "Federalist Party '' originated around 1792 -- 93 and refers to a somewhat different coalition of supporters of the Constitution in 1787 -- 88 as well as entirely new elements, and even a few former opponents of the Constitution (such as Patrick Henry). Madison largely wrote the Constitution and was thus a Federalist in 1787 -- 88, but he opposed the program of the Hamiltonians and their new "Federalist Party. ''
At first, there were no parties in the nation. Factions soon formed around dominant personalities such as Alexander Hamilton, the Secretary of the Treasury, and Thomas Jefferson, the Secretary of State, who opposed Hamilton 's broad vision of a powerful federal government. Jefferson especially objected to Hamilton 's flexible view of the Constitution, which stretched to include a national bank. Jefferson was joined by Madison in opposing the Washington administration, leading the "Anti-Administration party ''. Washington was re-elected without opposition in 1792.
Hamilton built a national network of supporters that emerged about 1792 -- 93 as the Federalist Party. In response, Jefferson and James Madison built a network of supporters of the republic in Congress and in the states that emerged in 1792 -- 93 as the Democratic - Republican Party. The elections of 1792 were the first contested on anything resembling a partisan basis. In most states, the congressional elections were recognized in some sense, as Jefferson strategist John Beckley put it, as a "struggle between the Treasury department and the republican interest. '' In New York, the race for governor was organized along these lines. The candidates were John Jay, who was a Hamiltonian, and incumbent George Clinton, who was allied with Jefferson and the Republicans.
In 1793, the first Democratic - Republican Societies were formed. They supported the French Revolution, which had just seen the execution of King Louis XVI, and generally supported the Jeffersonian cause. The word "democrat '' was proposed by Citizen Genet for the societies, and the Federalists ridiculed Jefferson 's friends as "democrats. '' After Washington denounced the societies as unrepublican, they mostly faded away.
In 1793, war broke out between England, France, and their European allies. The Jeffersonians favored France and pointed to the 1778 treaty that was still in effect. Washington and his unanimous cabinet (including Jefferson) decided the treaty did not bind the U.S. to enter the war; instead Washington proclaimed neutrality.
When war threatened with Britain in 1794, Washington sent John Jay to negotiate the Jay treaty with Britain; it was signed in late 1794, and ratified in 1795. It averted a possible war and settled many (but not all) of the outstanding issues between the U.S. and Britain. The Jeffersonians vehemently denounced the treaty, saying it threatened to undermine republicanism by giving the aristocratic British and their Federalist allies too much influence. The fierce debates over the Jay Treaty in 1794 -- 96, according to William Nisbet Chambers, nationalized politics and turned a faction in Congress into a nationwide party. To fight the treaty the Jeffersonians "established coordination in activity between leaders at the capital, and leaders, actives and popular followings in the states, counties and towns. ''
In 1796 Jefferson challenged John Adams for the presidency and lost. The Electoral College made the decision, and it was largely chosen by the state legislatures, many of which were not chosen on a national party basis.
By 1796, both parties had a national network of newspapers, which attacked each other vehemently. The Federalist and Republican newspapers of the 1790s traded vicious barbs against their enemies. An example is this acrostic from a Republican paper (note the sequence of first letters):
ASK -- who lies here beneath this monument?
Lo -- 'tis a self created MONSTER, who
Embraced all vice. His arrogance was like
Xerxes, who flogg'd the disobedient sea,
Adultery his smallest crime; when he
Nobility affected. This privilege
Decreed by Monarchs, was to that annext.
Enticing and entic'd to ev'ry fraud,
Renounced virtue, liberty and God.
Haunted by whores -- he haunted them in turn
Aristocratic was this noble Goat
Monster of monsters, in pollution skill'd
Immers'd in mischief, brothels, funds & banks
Lewd slave to lust, -- afforded consolation;
To mourning whores, and tory-lamentation.
Outdid all fools, tainted with royal name;
None but fools, their wickedness proclaim.
The most heated rhetoric came in debates over the French Revolution, especially the Jacobin Terror of 1793 -- 94 when the guillotine was used daily. Nationalism was a high priority, and the editors fostered an intellectual nationalism typified by the Federalist effort to stimulate a national literary culture through their clubs and publications in New York and Philadelphia, and through Federalist Noah Webster 's efforts to simplify and Americanize the language.
Historians have used statistical techniques to estimate the party breakdown in Congress. Many Congressmen were hard to classify in the first few years, but after 1796 there was less uncertainty. The first parties were anti-federalist and federalist.
Given the power of the Federalists, the Democratic Republicans had to work harder to win. In Connecticut in 1806 the state leadership sent town leaders instructions for the forthcoming elections; every town manager was told by state leaders "to appoint a district manager in each district or section of his town, obtaining from each an assurance that he will faithfully do his duty. '' Then the town manager was instructed to compile lists and total up the number of taxpayers, the number of eligible voters, how many were "decided democratic republicans, '' "decided federalists, '' or "doubtful, '' and finally to count the number of supporters who were not currently eligible to vote but who might qualify (by age or taxes) at the next election. The returns eventually went to the state manager, who issued directions to laggard towns to get all the eligibles to town meetings, help the young men qualify to vote, to nominate a full ticket for local elections, and to print and distribute the party ticket. (The secret ballot did not appear for a century.) This highly coordinated "get - out - the - vote '' drive would be familiar to modern political campaigners, but was the first of its kind in world history.
The Jeffersonians invented many campaign techniques that the Federalists later adopted and that became standard American practice. They were especially effective at building a network of newspapers in major cities to broadcast their statements and editorialize in their favor. But the Federalists, with a strong base among merchants, controlled more newspapers: in 1796 the Federalist papers outnumbered the Democratic Republicans by 4 to 1. Every year more papers began publishing; in 1800 the Federalists still had a 2 to 1 numerical advantage. Most papers, on each side, were weeklies with a circulation of 300 to 1000. Jefferson systematically subsidized the editors. Fisher Ames, a leading Federalist, who used the term "Jacobin '' to link Jefferson 's followers to the terrorists of the French Revolution, blamed the newspapers for electing Jefferson, seeing them as "an overmatch for any Government... The Jacobins owe their triumph to the unceasing use of this engine; not so much to skill in use of it as by repetition. '' Historians echo Ames ' assessment. As one explains,
It was the good fortune of the Republicans to have within their ranks a number of highly gifted political manipulators and propagandists. Some of them had the ability... to not only see and analyze the problem at hand but to present it in a succinct fashion; in short, to fabricate the apt phrase, to coin the compelling slogan and appeal to the electorate on any given issue in language it could understand.
Outstanding phrasemakers included editor William Duane, party leaders Albert Gallatin and Thomas Cooper, and Jefferson himself. Meanwhile, John J. Beckley of Pennsylvania, an ardent partisan, invented new campaign techniques (such as mass distribution of pamphlets and of handwritten ballots) that generated the grass - roots support and unprecedented levels of voter turnout for the Jeffersonians.
With the world thrown into global warfare after 1793, the small nation on the fringe of the European system could barely remain neutral. The Jeffersonians called for strong measures against Britain, and even for another war. The Federalists tried to avert war by the Jay Treaty (1795) with England. The treaty became highly controversial when the Jeffersonians denounced it as a sell - out to Britain, even as the Federalists said it avoided war, reduced the Indian threat, created good trade relations with the world 's foremost economic power, and ended lingering disputes from the Revolutionary War. When Jefferson came to power in 1801 he honored the treaty, but new disputes with Britain led to the War of 1812.
In 1798 disputes with France led to the Quasi-War (1798 - 1800), an undeclared naval war involving the navies and merchant ships of both countries. Democratic - Republicans said France really wanted peace, but the XYZ Affair undercut their position. Warning that full - scale war with France was imminent, Hamilton and his "High Federalist '' allies forced the issue by getting Congressional approval to raise a large new army (which Hamilton controlled), replete with officers ' commissions (which he bestowed on his partisans). The Alien and Sedition Acts (1798) clamped down on dissenters, including pro-Jefferson editors, and Vermont Congressman Matthew Lyon, who won re-election while in jail in 1798. In the Kentucky and Virginia Resolutions (1798), secretly drafted by Madison and Jefferson, the legislatures of the two states challenged the power of the federal government.
Jefferson and Albert Gallatin focused on the danger that the public debt, unless it was paid off, would be a threat to republican values. They were appalled that Hamilton was increasing the national debt and using it to solidify his Federalist base. Gallatin was the Republican Party 's chief expert on fiscal issues and as Treasury Secretary under Jefferson and Madison worked hard to lower taxes and lower the debt, while at the same time paying cash for the Louisiana Purchase and funding the War of 1812. Burrows says of Gallatin:
His own fears of personal dependency and his small - shopkeeper 's sense of integrity, both reinforced by a strain of radical republican thought that originated in England a century earlier, convinced him that public debts were a nursery of multiple public evils -- corruption, legislative impotence, executive tyranny, social inequality, financial speculation, and personal indolence. Not only was it necessary to extinguish the existing debt as rapidly as possible, he argued, but Congress would have to ensure against the accumulation of future debts by more diligently supervising government expenditures.
Andrew Jackson saw the national debt as a "national curse '' and he took special pride in paying off the entire national debt in 1835.
Madison worked diligently to form party lines inside the Congress and build coalitions with sympathetic political factions in each state. In 1800, a critical election galvanized the electorate, sweeping the Federalists out of power, and electing Jefferson and his Democratic - Republican Party. Adams made a few last minute, "midnight appointments '', notably Federalist John Marshall as Chief Justice. Marshall held the post for three decades and used it to federalize the Constitution, much to Jefferson 's dismay.
As president, Jefferson worked to cleanse the government of Adams 's "midnight appointments '', withholding the commissions of 25 of 42 appointed judges and removing army officers. The sense that the nation needed two rival parties to balance each other had not been fully accepted by either party; Hamilton had viewed Jefferson 's election as the failure of the Federalist experiment. The rhetoric of the day was cataclysmic -- election of the opposition meant the enemy would ruin the nation. Jefferson 's foreign policy was not exactly pro-Napoleon, but it applied pressure on Britain to stop impressment of American sailors and other hostile acts. By engineering an embargo of trade against Britain, Jefferson and Madison plunged the nation into economic depression, ruined much of the business of Federalist New England, and finally precipitated the War of 1812 with a much larger and more powerful foe.
The Federalists vigorously criticized the government, and gained strength in the industrial Northeast. However, they committed a major blunder in 1814. That year the semi-secret "Hartford Convention '' passed resolutions that verged on secession; their publication ruined the Federalist party. It had been limping along for years, with strength in New England and scattered eastern states but practically no strength in the West. While Federalists helped invent or develop numerous campaign techniques (such as the first national nominating conventions in 1808), their elitist bias alienated the middle class, thus allowing the Jeffersonians to claim they represented the true spirit of "republicanism. ''
Because of the importance of foreign policy (decided by the national government), of the sale of national lands, and the patronage controlled by the President, the factions in each state realigned themselves in parallel with the Federalists and Republicans. Some newspaper editors became powerful politicians, such as Thomas Ritchie, whose "Richmond Junto '' controlled Virginia state politics from 1808 into the 1840s.
New England was always the stronghold of the Federalist party. One historian explains how well organized it was in Connecticut:
It was only necessary to perfect the working methods of the organized body of office - holders who made up the nucleus of the party. There were the state officers, the assistants, and a large majority of the Assembly. In every county there was a sheriff with his deputies. All of the state, county, and town judges were potential and generally active workers. Every town had several justices of the peace, school directors and, in Federalist towns, all the town officers who were ready to carry on the party 's work... Militia officers, state 's attorneys, lawyers, professors and schoolteachers were in the van of this "conscript army. '' In all, about a thousand or eleven hundred dependent officer - holders were described as the inner ring which could always be depended upon for their own and enough more votes within their control to decide an election. This was the Federalist machine. ''
Religious tensions polarized Connecticut, as the established Congregational Church, in alliance with the Federalists, tried to maintain its grip on power. Dissenting groups moved toward the Jeffersonians. The failure of the Hartford Convention in 1814 wounded the Federalists, who were finally upended by the Democratic - Republicans in 1817.
The First Party System was primarily built around foreign policy issues that vanished with the defeat of Napoleon and the compromise settlement of the War of 1812. Furthermore, the fears that Federalists were plotting to reintroduce aristocracy dissipated. Thus an "Era of Good Feelings '' under James Monroe replaced the high - tension politics of the First Party System about 1816. Personal politics and factional disputes were occasionally still hotly debated, but Americans no longer thought of themselves in terms of political parties.
Historians have debated the exact ending of the system. Most concluded it petered out by 1820. The little state of Delaware, largely isolated from the larger political forces controlling the nation, saw the First Party System continue well into the 1820s, with the Federalists occasionally winning some offices.
Alexander Hamilton felt that only by mobilizing its supporters on a daily basis in every state on many issues could support for the government be sustained through thick and thin. Newspapers were needed to communicate inside the party; patronage helped the party 's leaders and made new friends.
Hamilton, and especially Washington, distrusted the idea of an opposition party, as shown in George Washington 's Farewell Address of 1796. They thought opposition parties would only weaken the nation. By contrast Jefferson was the main force behind the creation and continuity of an opposition party. He deeply felt the Federalists represented aristocratic forces hostile to true republicanism and the true will of the people, as he explained in a letter to Henry Lee in 1824:
Men by their constitutions are naturally divided into two parties: 1. Those who fear and distrust the people, and wish to draw all powers from them into the hands of the higher classes. 2. Those who identify themselves with the people, have confidence in them, cherish and consider them as the most honest and safe, although not the most wise depositary of the public interests. In every country these two parties exist, and in every one where they are free to think, speak, and write, they will declare themselves. Call them, therefore, liberals and serviles, Jacobins and Ultras, whigs and tories, republicans and federalists, aristocrats and democrats, or by whatever name you please, they are the same parties still and pursue the same object. The last appellation of aristocrats and democrats is the true one expressing the essence of all. ''
Hofstadter (1970) shows it took many years for the idea to take hold that having two parties is better than having one, or none. That transition was made possible by the successful passing of power in 1801 from one party to the other. Although Jefferson systematically identified Federalist army officers and officeholders, he was blocked from removing all of them by protests from republicans. The Quids complained he did not go far enough.
While historians are not unanimous, Princeton scholar Sean Wilentz in 2010 identified a scholarly trend very much in Hamilton 's favor:
In recent years, Hamilton and his reputation have decidedly gained the initiative among scholars who portray him as the visionary architect of the modern liberal capitalist economy and of a dynamic federal government headed by an energetic executive. Jefferson and his allies, by contrast, have come across as naïve, dreamy idealists. At best according to many historians, the Jeffersonians were reactionary utopians who resisted the onrush of capitalist modernity in hopes of turning America into a yeoman farmers ' arcadia. At worst, they were proslavery racists who wish to rid the West of Indians, expand the empire of slavery, and keep political power in local hands -- all the better to expand the institution of slavery and protect slaveholders ' rights to own human property.
|
why is northamptonshire called the rose of the shires | Northamptonshire - wikipedia
Coordinates: 52 ° 17 ′ N 0 ° 50 ′ W / 52.283 ° N 0.833 ° W / 52.283; - 0.833
Northamptonshire (/ nɔːrˈθæmptənʃər, - ʃɪər /; abbreviated Northants.), archaically known as the County of Northampton, is a county in the East Midlands of England. In 2015 it had a population of 723,000. The county is administered by Northamptonshire County Council and by seven non-metropolitan district councils. It is known as "The Rose of the Shires ''.
Covering an area of 2,364 square kilometres (913 sq mi), Northamptonshire is landlocked between eight other counties: Warwickshire to the west, Leicestershire and Rutland to the north, Cambridgeshire to the east, Bedfordshire to the south - east, Buckinghamshire to the south, Oxfordshire to the south - west and Lincolnshire to the north - east -- England 's shortest administrative county boundary at 19 metres (20 yards). Northamptonshire is the southernmost county in the East Midlands region.
Apart from the county town of Northampton, other major population centres include Kettering, Corby, Wellingborough, Rushden and Daventry. Northamptonshire 's county flower is the cowslip.
Much of Northamptonshire 's countryside appears to have remained somewhat intractable with regards to early human occupation, resulting in an apparently sparse population and relatively few finds from the Palaeolithic, Mesolithic and Neolithic periods. In about 500 BC the Iron Age was introduced into the area by a continental people in the form of the Hallstatt culture, and over the next century a series of hill - forts were constructed at Arbury Camp, Rainsborough camp, Borough Hill, Castle Dykes, Guilsborough, Irthlingborough, and most notably of all, Hunsbury Hill. There are two more possible hill - forts at Arbury Hill (Badby) and Thenford.
In the 1st century BC, most of what later became Northamptonshire became part of the territory of the Catuvellauni, a Belgic tribe, the Northamptonshire area forming their most northerly possession. The Catuvellauni were in turn conquered by the Romans in 43 AD.
The Roman road of Watling Street passed through the county, and an important Roman settlement, Lactodorum, stood on the site of modern - day Towcester. There were other Roman settlements at Northampton, Kettering and along the Nene Valley near Raunds. A large fort was built at Longthorpe.
After the Romans left, the area eventually became part of the Anglo - Saxon kingdom of Mercia, and Northampton functioned as an administrative centre. The Mercians converted to Christianity in 654 AD with the death of the pagan king Penda. From about 889 the area was conquered by the Danes (as at one point almost all of England was, except for Athelney marsh in Somerset) and became part of the Danelaw - with Watling Street serving as the boundary - until being recaptured by the English under the Wessex king Edward the Elder, son of Alfred the Great, in 917. Northamptonshire was conquered again in 940, this time by the Vikings of York, who devastated the area, only for the county to be retaken by the English in 942. Consequently, it is one of the few counties in England to have both Saxon and Danish town - names and settlements.
The county was first recorded in the Anglo - Saxon Chronicle (1011), as Hamtunscire: the scire (shire) of Hamtun (the homestead). The "North '' was added to distinguish Northampton from the other important Hamtun further south: Southampton - though the origins of the two names are in fact different.
Rockingham Castle was built for William the Conqueror and was used as a Royal fortress until Elizabethan times. In 1460, during the Wars of the Roses, the Battle of Northampton took place and King Henry VI was captured. The now - ruined Fotheringhay Castle was used to imprison Mary, Queen of Scots, before her execution.
George Washington, the first President of the United States of America, was born into the Washington family who had migrated to America from Northamptonshire in 1656. George Washington 's ancestor, Lawrence Washington, was Mayor of Northampton on several occasions and it was he who bought Sulgrave Manor from Henry VIII in 1539. It was George Washington 's great - grandfather, John Washington, who emigrated in 1656 from Northants to Virginia. Before Washington 's ancestors moved to Sulgrave, they lived in Warton, Lancashire.
During the English Civil War, Northamptonshire strongly supported the Parliamentarian cause, and the Royalist forces suffered a crushing defeat at the Battle of Naseby in 1645 in the north of the county. King Charles I was imprisoned at Holdenby House in 1647.
In 1823 Northamptonshire was said to "(enjoy) a very pure and wholesome air '' because of its dryness and distance from the sea. Its livestock were celebrated: "Horned cattle, and other animals, are fed to extraordinary sizes: and many horses of the large black breed are reared. ''
Nine years later, the county was described as "a county enjoying the reputation of being one of the healthiest and pleasantest parts of England '' although the towns were "of small importance '' with the exceptions of Peterborough and Northampton. In summer, the county hosted "a great number of wealthy families... country seats and villas are to be seen at every step. '' Northamptonshire is still referred to as the county of "spires and squires '' because of the numbers of stately homes and ancient churches.
In the 18th and 19th centuries, parts of Northamptonshire and the surrounding area became industrialised. The local specialisation was shoemaking and the leather industry and by the end of the 19th century it was almost definitively the boot and shoe making capital of the world. In the north of the county a large ironstone quarrying industry developed from 1850. During the 1930s, the town of Corby was established as a major centre of the steel industry. Much of Northamptonshire nevertheless remains largely rural.
Corby was designated a new town in 1950 and Northampton followed in 1968. As of 2005 the government is encouraging development in the South Midlands area, including Northamptonshire.
The Soke of Peterborough was historically associated with and considered part of Northamptonshire, and the Church of England diocese that covers Northamptonshire is centred on Peterborough Cathedral, which is now in Cambridgeshire. However, Peterborough had its own courts of quarter sessions and, later, county council, and in 1965 it was merged with the neighbouring small county of Huntingdonshire. Under the Local Government Act 1972 the city of Peterborough became a district of Cambridgeshire.
Northamptonshire is a landlocked county located in the southern part of the East Midlands region which is sometimes known as the South Midlands. The county contains the watershed between the River Severn and The Wash while several important rivers have their sources in the north - west of the county, including the River Nene, which flows north - eastwards to The Wash, and the "Warwickshire Avon '', which flows south - west to the Severn. In 1830 it was boasted that "not a single brook, however insignificant, flows into it from any other district ''. The highest point in the county is Arbury Hill at 225 metres (738 ft).
There are several towns in the county with Northampton being the largest and most populous. At the time of the 2011 census, a population of 691,952 lived in the county with 212,069 living in Northampton. The table below shows all towns with over 10,000 inhabitants.
As of 2010 there are 16 settlements in Northamptonshire with a town charter:
Like the rest of the British Isles, Northamptonshire has an oceanic climate (Köppen climate classification). The table below shows the average weather for Northamptonshire from the Moulton weather station.
Northamptonshire, like most English counties, is divided into a number of local authorities. The seven borough / district councils cover 15 towns and hundreds of villages. The county has a two - tier structure of local government and an elected county council based in Northampton, and is also divided into seven districts each with their own district or borough councils:
Northampton itself is the most populous urban district in England not to be administered as a unitary authority (even though several smaller districts are unitary). During the 1990s local government reform, Northampton Borough Council petitioned strongly for unitary status, which led to fractured relations with the County Council.
Before 1974, the Soke of Peterborough was considered geographically part of Northamptonshire, although it had had a separate county council since 1889 and separate courts of quarter sessions before then. Now part of Cambridgeshire, the city of Peterborough became a unitary authority in 1998, but it continues to form part of that county for ceremonial purposes.
In early 2018, Northamptonshire County Council was declared technically insolvent and would be able to provide only the bare essential services. According to The Guardian the problems were caused by "a reckless half - decade in which it refused to raise council tax to pay for the soaring costs of social care ''. Some observers, such as Simon Edwards of the County Councils Network, added another perspective on the cause of the financial crisis, the United Kingdom government austerity programme: "It is clear that, partly due to past failings, the council is now having to make some drastic decisions to reduce services to a core offer. However, we ca n't ignore that some of the underlying causes of the challenges facing Northamptonshire, such as dramatic reductions to council budgets and severe demand for services, mean county authorities across the country face funding pressures of £ 3.2 bn over the next two years. ''
In early 2018, following the events above, Government - appointed commissioners took over control of the Council 's affairs. Consequently, the Secretary of State for Housing, Communities and Local Government commissioned an independent report which, in March 2018, proposed structural changes to local government in Northamptonshire. These changes would see the existing county council and district councils abolished and two new unitary authorities created in their place. One authority would consist of the existing districts of Daventry, Northampton and South Northamptonshire and the other authority would consist of Corby, East Northamptonshire, Kettering and Wellingborough districts.
Northamptonshire returns seven members of Parliament, all of whom are currently from the Conservative Party.
From 1993 until 2005, Northamptonshire County Council, for which each of the 73 electoral divisions in the county elect a single councillor, had been held by the Labour Party; it had been under no overall control since 1981. The councils of the rural districts -- Daventry, East Northamptonshire, and South Northamptonshire -- are strongly Conservative, whereas the political composition of the urban districts is more mixed. At the 2003 local elections, Labour lost control of Kettering, Northampton, and Wellingborough, retaining only Corby. Elections for the entire County Council are held every four years -- the last were held on 5 May 2005 when control of the County Council changed from the Labour Party to the Conservatives. The County Council uses a leader and cabinet executive system and abolished its area committees in April 2006.
Historically, Northamptonshire 's main industry was manufacturing of boots and shoes. Many of the manufacturers closed down in the Thatcher era which in turn left many county people unemployed. Although R Griggs and Co Ltd, the manufacturer of Dr. Martens, still has its UK base in Wollaston near Wellingborough, the shoe industry in the county is now nearly gone. Large employers include the breakfast cereal manufacturers Weetabix, in Burton Latimer, the Carlsberg brewery in Northampton, Avon Products, Siemens, Barclaycard, Saxby Bros Ltd and Golden Wonder. In the west of the county is the Daventry International Railfreight Terminal; which is a major rail freight terminal located on the West Coast Main Line near Rugby. Wellingborough also has a smaller railfreight depot on Finedon Road, called Nelisons sidings.
This is a chart of trend of the regional gross value added of Northamptonshire at current basic prices in millions of British Pounds Sterling (correct on 21 December 2005):
The region of Northamptonshire, Oxfordshire and the South Midlands has been described as "Motorsport Valley... a global hub '' for the motor sport industry. The Mercedes GP and Force India Formula One teams have their bases at Brackley and Silverstone respectively, while Cosworth and Mercedes - Benz High Performance Engines are also in the county at Northampton and Brixworth.
International motor racing takes place at Silverstone Circuit and Rockingham Motor Speedway; Santa Pod Raceway is just over the border in Bedfordshire but has a Northants postcode. A study commissioned by Northamptonshire Enterprise Ltd (NEL) reported that Northamptonshire 's motorsport sites attract more than 2.1 million visitors per year who spend a total of more than £ 131 million within the county.
Northamptonshire forms part of the Milton Keynes and South Midlands Growth area which also includes Milton Keynes, Aylesbury Vale and Bedfordshire. This area has been identified as an area which is due to have tens of thousands additional homes built between 2010 - 2020. In North Northamptonshire (Boroughs of Corby, Kettering, Wellingborough and East Northants), over 52,000 homes are planned or newly built and 47,000 new jobs are also planned. In West Northamptonshire (boroughs of Northampton, Daventry and South Northants), over 48,000 homes are planned or newly built and 37,000 new jobs are planned. To oversee the planned developments, two urban regeneration companies have been created: North Northants Development Company (NNDC) and the West Northamptonshire Development Corporation. The NNDC launched a controversial campaign called North Londonshire to attract people from London to the county. There is also a county - wide tourism campaign with the slogan Northamptonshire, Let yourself grow.
Northamptonshire County Council operates a complete comprehensive system with 42 state secondary schools. The county 's music and performing arts trust provides peripatetic music teaching to schools. It also supports 15 local Saturday morning music and performing arts centres around the county and provides a range of county - level music groups.
There are seven colleges across the county, with the Tresham College of Further and Higher Education having four campuses in three towns: Corby, Kettering and Wellingborough. Tresham, which was taken over by Bedford College in 2017 due to failed Ofsted inspections, provides further education and offers vocational courses and re-sit GCSEs. It also offers Higher Education options in conjunction with several universities. Other colleges in the county are: Fletton House, Knuston Hall, Moulton College, Northampton College, Northampton New College and The East Northamptonshire College.
Northamptonshire has one university, the University of Northampton. It has two campuses 2.5 miles (4.0 km) apart and 10,000 students. It offers courses for needs and interests from foundation and undergraduate level to postgraduate, professional and doctoral qualifications. Subjects include traditional arts, humanities and sciences subjects, as well as entrepreneurship, product design and advertising.
Northampton has several National Health Service branches, the main acute NHS hospitals in the county being Northampton, Kettering General Hospital and Danetre Hospital in Daventry. In the south - west of the county, the towns of Brackley, Towcester and surrounding villages are serviced by the Horton General Hospital in Banbury in neighbouring Oxfordshire for acute medical needs. A similar arrangement is in place for the town of Oundle and nearby villages, served by Peterborough City Hospital.
In February 2011 a new satellite out - patient centre opened at Nene Park, Irthlingborough to provide over 40,000 appointments a year, as well as a minor injury unit to serve Eastern Northamptonshire. This was opened to relieve pressure off Kettering General Hospital, and has also replaced the dated Rushden Memorial Clinic which provided at the time about 8,000 appointments a year, when open.
In June 2008, Anglian Water found traces of Cryptosporidium in water supplies of Northamptonshire. The local reservoir at Pitsford was investigated and a European rabbit which had strayed into it was found, causing the problem. About 250,000 residents were affected; by 14 July 2008, 13 cases of cryptosporidiosis attributed to water in Northampton had been reported. Following the end of the investigation, Anglian Water lifted its boil notice for all affected areas on 4 July 2008. Anglian Water revealed that it will pay up to £ 30 per household as compensation for customers hit by the water crisis.
The gap in the hills at Watford Gap meant that many south - east to north - west routes passed through Northamptonshire. The Roman Road Watling Street (now part of the A5) passes through here, as did later canals, railways and major roads.
Major national roads including the M1 motorway (London to Leeds) and the A14 (Rugby to Felixstowe), provide Northamptonshire with transport links, both north -- south and east -- west. The A43 joins the M1 to the M40 motorway, passing through the south of the county to the junction west of Brackley, and the A45 links Northampton with Wellingborough and Peterborough.
The county road network, managed by Northamptonshire County Council includes the A45 west of the M1 motorway, the A43 between Northampton and the county boundary near Stamford, the A361 between Kilsby and Banbury (Oxon) and all B, C and Unclassified Roads. Since 2009 these highways have been managed on behalf of the county council by MGWSP, a joint venture between May Gurney and WSP.
Two major canals -- the Oxford and the Grand Union -- join in the county at Braunston. Notable features include a flight of 17 locks on the Grand Union at Rothersthorpe, the canal museum at Stoke Bruerne, and a tunnel at Blisworth which, at 2,813 metres (3,076 yd), is the third - longest navigable canal tunnel on the UK canal network.
A branch of the Grand Union Canal connects to the River Nene in Northampton and has been upgraded to a "wide canal '' in places and is known as the Nene Navigation. It is famous for its guillotine locks.
Two trunk railway routes, the Midland Main Line and the West Coast Main Line, cross the county. At its peak, Northamptonshire had 75 railway stations. It now has only six, at Northampton and Long Buckby on the West Coast Main Line, Kettering, Wellingborough and Corby on the Midland Main Line, along with King 's Sutton, only a few yards from the boundary with Oxfordshire on the Chiltern Main Line.
Before nationalisation of the railways in 1948 and the creation of British Railways, three of the "Big Four" railway companies operated in Northamptonshire: the London, Midland and Scottish Railway, the London and North Eastern Railway and the Great Western Railway. Only the Southern Railway was not represented. As of 2018, the county is served by Chiltern Railways, East Midlands Trains, Virgin Trains and West Midlands Trains.
Corby was described as the largest town in Britain without a railway station. The railway running through the town from Kettering to Oakham in Rutland was previously used only by freight traffic and occasional diverted passenger trains that did not stop at the station. The line through Corby was once part of a main line to Nottingham through Melton Mowbray, but the stretch between Melton and Nottingham was closed in 1968. In the 1980s, an experimental passenger shuttle service ran between Corby and Kettering but was withdrawn a few years later. On 23 February 2009, a new railway station opened, providing direct hourly access to London St Pancras. Following the opening of Corby Station, Rushden then became the largest town in the United Kingdom without a direct railway station.
Railway services in Northamptonshire were reduced by the Beeching cuts in the 1960s. Closure of the line connecting Northampton to Peterborough by way of Wellingborough, Thrapston, and Oundle left eastern Northamptonshire devoid of railways. Part of this route was reopened in 1977 as the Nene Valley Railway. A section of one of the closed lines, the Northampton to Market Harborough line, is now the Northampton & Lamport heritage railway, while the route as a whole forms a part of the National Cycle Network, as the Brampton Valley Way.
As early as 1897 Northamptonshire would have had its own Channel Tunnel rail link with the creation of the Great Central Railway, which was intended to connect to a tunnel under the English Channel. Although the complete project never came to fruition, the rail link through Northamptonshire was constructed, and had stations at Charwelton, Woodford Halse, Helmdon and Brackley. It became part of the London and North Eastern Railway in 1923 (and of British Railways in 1948) before its closure in 1966.
In June 2009 the Association of Train Operating Companies (ATOC) recommended opening a new station on the former Irchester railway station site for Rushden, Higham Ferrers and Irchester, called Rushden Parkway. Network Rail is looking at electrifying the Midland Main Line north of Bedford. An open access company has approached Network Rail for services to Oakham in Rutland to London via the county.
The Rushden, Higham and Wellingborough Railway would like to see the railway fully reopen between Wellingborough and Higham Ferrers. As part of the government - proposed High Speed 2 railway line (between London and Birmingham), the high - speed railway will go through the southern part of the county but with no station built.
Most buses are operated by Stagecoach Midlands. Some town area routes have been named the Corby Star, Connect Kettering, Connect Wellingborough and Daventry Dart; the last three of these routes have route designations that include a letter (such as A, D1, W1, W2).
Sywell Aerodrome, on the edge of Sywell village, has three grass runways and one concrete all-weather runway. The concrete runway is, however, only 1,000 metres long and therefore cannot be served by passenger jets.
The three main newspapers in the county are the Northampton Herald & Post, the Northamptonshire Evening Telegraph and the Northampton Chronicle & Echo.
Most of Northamptonshire is served by the BBC 's East region which is based in Norwich. The regional news television programme, BBC Look East, provides local news across the East of England, Milton Keynes and most of Northamptonshire. An opt - out in Look East covers the west part of the region only, broadcast from Cambridge. This area also is covered by the BBC 's The Politics Show: East and Inside Out: East. A small part of the north of the county is covered by BBC East Midlands 's regional news BBC East Midlands Today, while a small part of South Northamptonshire is covered by BBC Oxford 's regional news BBC Oxford News which is part of the BBC South Today programme.
Most of Northamptonshire is covered by ITV 's Anglia region (which broadcasts Anglia Today / Tonight); in the south - west of the county, primarily Brackley and the surrounding villages, broadcasts can be received from the Oxford transmitter which broadcasts ITV Meridian 's Meridian Today / Tonight.
BBC Radio Northampton broadcasts on two FM frequencies: 104.2 MHz for the south and west of the county (including Northampton and the surrounding area) and 103.6 MHz for the north of the county (including Kettering, Wellingborough and Corby). BBC Radio Northampton is based on Abington Street, Northampton; these services are broadcast from the Moulton Park and Geddington transmitters.
There are three commercial radio stations in the county. The former Kettering and Corby Broadcasting Company (KCBC) station is now called Connect Radio (97.2 and 107.4 MHz FM), following a merger with the Wellingborough-based station of the same name. Both Heart Northants (96.6 MHz FM) and the AM station Smooth Northants (1557 kHz) air very little local content, as they form part of national networks. National digital radio is also available in Northamptonshire, though coverage is limited.
Corby is served by its own dedicated station, Corby Radio (96.3 fm), based in the town and focused on local content.
Northamptonshire has many rugby union clubs. Its premier team, Northampton Saints, competes in the Aviva Premiership and won the European championship in 2000 by defeating Munster 9-8 in the Heineken Cup final. Saints are based at the 15,249-capacity Franklin's Gardens ground. In 2014 the club won both the Aviva Premiership and the Challenge Cup. In the 2014/15 campaign the team finished top of the Premiership table for the first time, eventually losing 24-29 to Saracens in the play-off semi-final.
Northamptonshire has twenty-four football clubs operating in the top ten levels of the English football league system. The sport in the area is administered by the Northamptonshire County Football Association, which is affiliated with the United Counties League, the Northamptonshire Combination Football League and the Northampton Town Football League, as well as the Peterborough and District Football League in neighbouring Cambridgeshire. Only two clubs from Northamptonshire have competed in The Football League: Northampton Town and the defunct Rushden & Diamonds.
The most prominent association football club in the county is League Two side Northampton Town, which attracts between 4,000 and 6,000 fans on an average match day and has been part of the Football League since 1923. Their home ground is Sixfields Stadium, which opened in 1994; the first match there took place on 15 October against Barnet Football Club. The stadium can hold up to 7,500 people, with provisions for disabled supporters. The club's most successful period came between 1962 and 1967, when it rose from the Fourth Division to the First Division, before falling back to the bottom of the Fourth Division by 1974. The club has reached the 5th round of the FA Cup on three occasions, the last being in 1970; the 4th round was last reached in 2004. The Cobblers were promoted back to League One on 9 April 2016 and, the following week, secured the club's first title for 29 years by winning League Two after a 0-0 draw at Exeter City. The club's record goalscorer is Jack English, who scored 143 goals in 321 matches between 1947 and 1959.
Because Northamptonshire is located near the centre of England, many of its clubs end up being swapped around between Northern and Southern - based leagues.
Nineteen teams compete in the United Counties League (UCL), a league operating at levels 9 and 10 of the English league system which encompasses all of Northamptonshire and parts of neighbouring counties. Prominent at this level in recent years (2011-2015) has been AFC Rushden & Diamonds, a "phoenix club" created and owned by supporters of the now defunct Rushden & Diamonds F.C., which in its heyday fielded a fully professional team at the third level of the English league system. Around 550 spectators have attended AFC Rushden & Diamonds home matches in recent years, dwarfing the attendances of other clubs at this level. Another prominent club at this level is Wellingborough Town, who once competed in the Southern Football League and have an average match attendance of 122.
Other clubs in the UCL are Bugbrooke St Michaels F.C., Burton Park Wanderers F.C., Cogenhoe United F.C., Desborough Town F.C., Irchester United F.C., Long Buckby A.F.C., Northampton ON Chenecks F.C., Northampton Sileby Rangers F.C., Northampton Spencer F.C., Raunds Town F.C., Rothwell Corinthians F.C., Rothwell Town F.C., Rushden & Higham United F.C., Stewarts & Lloyds Corby A.F.C., Thrapston Town F.C., Wellingborough Whitworth F.C. and Woodford United F.C.
Northamptonshire County Cricket Club (also known as The Steelbacks) is in Division Two of the County Championship, and play their home games at the County Cricket Ground, Northampton. They finished as runners - up in the Championship on four occasions in the period before it split into two divisions.
In 2013 the club won the Friends Life t20, beating Surrey in the final. Appearing in their third final in four years, the Steelbacks beat Durham by four wickets at Edgbaston in 2016 to lift the NatWest t20 Blast trophy for the second time. The club has also won the NatWest Trophy on two occasions and the Benson & Hedges Cup once.
Silverstone is a major motor racing circuit, most notably used for the British Grand Prix; a dedicated radio station for the circuit broadcasts on 87.7 FM (or 1602 MW) when events are taking place, although part of the circuit lies across the border in Buckinghamshire. Rockingham Speedway in Corby is the largest stadium in the United Kingdom, with 130,000 seats. It is a US-style oval racing circuit (the largest of its kind outside the United States) and is used extensively for many kinds of motor racing events. The Santa Pod drag racing circuit, venue for the FIA European Drag Racing Championships, is just across the border in Bedfordshire but has an NN postcode. Cosworth, the high-performance engineering company, is based in Northampton.
Two Formula One teams are based in Northamptonshire, with Mercedes at Brackley and Force India in Silverstone. Force India also have a secondary facility in Brackley, while Mercedes build engines for themselves, Force India and Williams at Brixworth.
There are seven competitive swimming clubs in the county: Northampton Swimming Club, Wellingborough Amateur Swimming Club, Rushden Swimming Club, Kettering Amateur Swimming Club, Corby Amateur Swimming Club, Daventry Dolphins Swimming Club, and Nene Valley Swimming Club. There is also one diving club, Corby Steel Diving Club. The main pool in the county is Corby East Midlands International Pool, which has an 8-lane 50 m swimming pool with a floor that can adjust in depth to provide a 25 m pool. The pool is home to the Northamptonshire Amateur Swimming Association's County Championships as well as some of the Youth Midland Championships.
Northamptonshire is home to the 2016 Paralympian Ellie Robinson. She was talent-spotted in July 2012 and developed at Northampton Swimming Club, and was selected to compete for Great Britain at the 2016 IPC Swimming European Championships, where she won three bronze medals and one silver.
Jane Austen set her 1814 novel Mansfield Park mostly in Northamptonshire.
Rock and pop bands originating in the area have included Bauhaus, Temples, The Departure, New Cassettes, Raging Speedhorn and Defenestration.
Kinky Boots, the 2005 British - American film and subsequent stage musical adaptation, was based on the true story of a traditional Northamptonshire shoe factory which, to stay afloat, entered the market for fetish footwear.
Richard Coles, an English musician, partnered in the 1980s with Jimmy Somerville to form The Communards. They had three Top Ten hits and reached Number 1 in 1986 with their song "Don't Leave Me This Way". In 2012 the University of Northampton awarded him an honorary doctorate. He is now the vicar of Finedon in Northamptonshire.
|
who is the green arrow's arch enemy | List of Green Arrow enemies - wikipedia
This is a list of fictional characters from DC Comics who are enemies of Green Arrow.
|
where does the tennessee river start and where does it end | Tennessee River - wikipedia
The Tennessee River is the largest tributary of the Ohio River. It is approximately 652 miles (1,049 km) long and is located in the southeastern United States in the Tennessee Valley. The river was once popularly known as the Cherokee River, among other names, as many of the Cherokee had their territory along its banks, especially in eastern Tennessee and northern Alabama. Its current name is derived from the Cherokee village Tanasi.
The Tennessee River is formed at the confluence of the Holston and French Broad rivers on the east side of present - day Knoxville, Tennessee. From Knoxville, it flows southwest through East Tennessee toward Chattanooga before crossing into Alabama. It loops through northern Alabama and eventually forms a small part of the state 's border with Mississippi, before returning to Tennessee. At this point, it defines the boundary between two of Tennessee 's Grand Divisions: Middle and West Tennessee.
The Tennessee -- Tombigbee Waterway, a U.S. Army Corps of Engineers project providing navigation on the Tombigbee River and a link to the Port of Mobile, enters the Tennessee River near the Tennessee - Alabama - Mississippi boundary. This waterway reduces the navigation distance from Tennessee, north Alabama, and northern Mississippi to the Gulf of Mexico by hundreds of miles. The final part of the Tennessee 's run is in Kentucky, where it separates the Jackson Purchase from the rest of the state. It flows into the Ohio River at Paducah, Kentucky.
The river has been dammed numerous times, primarily in the 20th century by Tennessee Valley Authority (TVA) projects since the 1930s. The placement of TVA 's Kentucky Dam on the Tennessee River and the Corps of Engineers ' Barkley Dam on the Cumberland River led to the development of associated lakes, and the creation of what is called Land Between the Lakes. A navigation canal located at Grand Rivers, Kentucky, links Kentucky Lake and Lake Barkley. The canal allows for a shorter trip for river traffic going from the Tennessee to most of the Ohio River, and for traffic going down the Cumberland River toward the Mississippi.
The river appears on French maps from the late 17th century with the names "Caquinampo" or "Kasqui." Maps from the early 18th century call it "Cussate," "Hogohegee," "Callamaco," and "Acanseapi." A 1755 British map showed the Tennessee River as the "River of the Cherakees." By the late 18th century, it had come to be called "Tennessee," a name derived from the Cherokee village named Tanasi. The river was a major highway for transporting goods and explorers in the years before Tennessee was settled. Some major towns that still exist today, along with their ports, were established by those who traveled down the river and settled along it.
The Tennessee River begins at mile post 652, where the French Broad River meets the Holston River, but historically there were several different definitions of its starting point. In the late 18th century, the mouth of the Little Tennessee River (at Lenoir City) was considered to be the beginning of the Tennessee River. Through much of the 19th century, the Tennessee River was considered to start at the mouth of Clinch River (at Kingston). An 1889 declaration by the Tennessee General Assembly designated Kingsport (on the Holston River) as the start of the Tennessee, but the following year a federal law was enacted that finally fixed the start of the river at its current location.
At various points since the early 19th century, Georgia has disputed its northern border with Tennessee. In 1796, when Tennessee was admitted to the Union, the border was originally defined by United States Congress as located on the 35th parallel, thereby ensuring that at least a portion of the river would be located within Georgia. As a result of an erroneously conducted survey in 1818 (ratified by the Tennessee legislature, but not Georgia), however, the actual border line was set on the ground approximately one mile south, thus placing the disputed portion of the river entirely in Tennessee.
Georgia made several unsuccessful attempts to correct what Georgia felt was an erroneous survey line "in the 1890s, 1905, 1915, 1922, 1941, 1947 and 1971 to ' resolve ' the dispute '', according to C. Crews Townsend, Joseph McCoin, Robert F. Parsley, Alison Martin and Zachary H. Greene, writing for the Tennessee Bar Journal, a publication of the Tennessee Bar Association, appearing on May 12, 2008.
In 2008, as a result of a serious drought and resulting water shortage, the Georgia General Assembly passed a resolution directing the governor to pursue its claim in the United States Supreme Court.
According to a story aired on WTVC-TV in Chattanooga on March 14, 2008, a local attorney familiar with case law on border disputes said that the U.S. Supreme Court generally maintains the original borders between states and avoids stepping into border disputes, preferring that the parties work out their differences.
The Chattanooga Times Free Press reported on 25 March 2013 that Georgia senators had approved House Resolution 4, stating that if Tennessee declined to settle with them, the dispute would be handed over to the attorney general, who would take Tennessee before the Supreme Court to settle the issue once and for all. The Atlantic Wire, commenting on Georgia's actions, stated: "The Great Georgia-Tennessee Border War of 2013 Is Upon Us. Historians, take note: On this day, which is not a day in 1732, a boundary dispute between two Southern states took a turn for the wet. In a two-page resolution passed overwhelmingly by the state senate, Georgia declared that it, not its neighbor to the north, controls part of the Tennessee River at Nickajack. Georgia doesn't want Nickajack. It wants that water..."
The Tennessee River is an important part of the Great Loop, the recreational circumnavigation of Eastern North America by water.
The Tennessee River has historically been a major highway for riverboats through the South, and today they are still found along the river in abundance. Major ports include Guntersville, Chattanooga, Decatur, Yellow Creek and Muscle Shoals. Navigation has contributed greatly to the economic and industrial development of the Tennessee Valley as a whole; the economies of cities like Decatur and Chattanooga would not be as dynamic as they are today were it not for the Tennessee River. Many companies still rely on the river as a means of transportation for their materials. In Chattanooga, for example, steel is exported on boats, as it is much more efficient than moving it on land. Locks along the Tennessee River waterway provide passage between reservoirs for more than 13,000 recreational craft each year. The Chickamauga Dam, located just upstream from Chattanooga, is planned to have a new lock built, but the project has been delayed due to a lack of funding. The river not only has many economic functions, such as boat building and transportation, but also provides water and natural resources to those who live near it. Many of the major ports on the river are connected to settlements that were started because of their proximity to the river.
The Tennessee River and its tributaries host some 102 species of mussel. Native Americans ate freshwater mussels. Potters of the Mississippian Culture used crushed mussel shell mixed into clay to make their pottery stronger.
A "pearl '' button industry was established in the Tennessee Valley beginning in 1887, producing buttons from the abundant mussel shells. Button production ceased after World War II when plastics replaced mother - of - pearl as a button material. Mussel populations have declined drastically due to dam construction, water pollution, and invasive species.
Tributaries and sub-tributaries are listed hierarchically in order from the mouth of the Tennessee River upstream.
|
what is the definition of surface tension in chemistry | Surface tension - Wikipedia
At liquid -- air interfaces, surface tension results from the greater attraction of liquid molecules to each other (due to cohesion) than to the molecules in the air (due to adhesion). The net effect is an inward force at its surface that causes the liquid to behave as if its surface were covered with a stretched elastic membrane. Thus, the surface comes under tension from the imbalanced forces, which is probably where the term "surface tension" came from. Because of the relatively high attraction of water molecules for each other through a web of hydrogen bonds, water has a higher surface tension (72.8 millinewtons per meter at 20 °C) compared to that of most other liquids. Surface tension is an important factor in the phenomenon of capillarity.
Surface tension has the dimension of force per unit length, or of energy per unit area. The two are equivalent, but when referring to energy per unit of area, it is common to use the term surface energy, which is a more general term in the sense that it applies also to solids.
In materials science, surface tension is used for either surface stress or surface free energy.
Because of the cohesive forces, a molecule in the bulk of the liquid is pulled equally in every direction by neighboring liquid molecules, resulting in a net force of zero. The molecules at the surface do not have the same molecules on all sides of them and are therefore pulled inward. This creates some internal pressure and forces liquid surfaces to contract to the minimal area. The forces of attraction acting between molecules of the same type are called cohesive forces, while those acting between molecules of different types are called adhesive forces. When cohesive forces are stronger than adhesive forces, the liquid acquires a convex meniscus (as mercury does in a glass container). On the other hand, when adhesive forces are stronger, the surface of the liquid curves up (as water does in a glass).
Surface tension is responsible for the shape of liquid droplets. Although easily deformed, droplets of water tend to be pulled into a spherical shape by the imbalance in cohesive forces of the surface layer. In the absence of other forces, including gravity, drops of virtually all liquids would be approximately spherical. The spherical shape minimizes the necessary "wall tension '' of the surface layer according to Laplace 's law.
Another way to view surface tension is in terms of energy. A molecule in contact with a neighbor is in a lower state of energy than if it were alone (not in contact with a neighbor). The interior molecules have as many neighbors as they can possibly have, but the boundary molecules are missing neighbors (compared to interior molecules) and therefore have a higher energy. For the liquid to minimize its energy state, the number of higher energy boundary molecules must be minimized. The minimized number of boundary molecules results in a minimal surface area. As a result of surface area minimization, a surface will assume the smoothest shape it can (mathematical proof that "smooth '' shapes minimize surface area relies on use of the Euler -- Lagrange equation). Since any curvature in the surface shape results in greater area, a higher energy will also result. Consequently, the surface will push back against any curvature in much the same way as a ball pushed uphill will push back to minimize its gravitational potential energy.
Several effects of surface tension can be seen with ordinary water:
A. Water beading on a leaf
B. Water dripping from a tap
C. Water striders stay atop the liquid because of surface tension
D. Lava lamp with interaction between dissimilar liquids: water and liquid wax
E. Photo showing the "tears of wine '' phenomenon.
Surface tension is visible in other common phenomena, especially when surfactants are used to decrease it:
Surface tension, usually represented by the symbol γ (or sometimes σ), is measured in force per unit length. Its SI unit is the newton per meter, but the cgs unit of dyne per centimeter is also used.
Surface tension can be defined in terms of force or energy.
In terms of force: surface tension γ of a liquid is the force per unit length. In the illustration on the right, the rectangular frame is composed of three unmovable sides (black) that form a "U" shape, and a fourth movable side (blue) that can slide to the right. Surface tension will pull the blue bar to the left; the force F required to hold the immobile side is proportional to the length L of the movable side. Thus the ratio F / L depends only on the intrinsic properties of the liquid (composition, temperature, etc.), not on its geometry. For example, if the frame had a more complicated shape, the ratio F / L, with L the length of the movable side and F the force required to stop it from sliding, is found to be the same for all shapes. We therefore define the surface tension as γ = (1/2)(F / L).
The reason for the 1 / 2 is that the film has two sides, each of which contributes equally to the force; so the force contributed by a single side is γL = F / 2.
In terms of energy: surface tension γ of a liquid is the ratio of the change in the energy of the liquid to the change in the surface area of the liquid (that led to the change in energy). This can be easily related to the previous definition in terms of force: if F is the force required to stop the side from starting to slide, then this is also the force that would keep the side in a state of sliding at a constant speed (by Newton's Second Law). But if the side is moving to the right (in the direction the force is applied), then the surface area of the stretched liquid is increasing while the applied force is doing work on the liquid. This means that increasing the surface area increases the energy of the film. The work done by the force F in moving the side by distance Δx is W = FΔx; at the same time the total area of the film increases by ΔA = 2LΔx (the factor of 2 is here because the liquid has two sides, two surfaces). Thus, multiplying both the numerator and the denominator of γ = (1/2)(F / L) by Δx, we get γ = (FΔx) / (2LΔx) = W / ΔA.
This work W is, by the usual arguments, interpreted as being stored as potential energy. Consequently, surface tension can be also measured in SI system as joules per square meter and in the cgs system as ergs per cm. Since mechanical systems try to find a state of minimum potential energy, a free droplet of liquid naturally assumes a spherical shape, which has the minimum surface area for a given volume. The equivalence of measurement of energy per unit area to force per unit length can be proven by dimensional analysis.
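As an illustrative sketch of the equivalence of the two definitions (the film width, force and displacement below are assumed example values, not measurements), a few lines of Python compute γ once as force per unit length and once as work per unit of added area; both routes give the same number.

# Illustrative only: equivalence of the force and energy definitions of surface tension.
# L, F and dx below are assumed example values for a soapy-water film on a wire frame.
L = 0.05       # width of the movable side of the frame, in metres (assumed)
F = 2.5e-3     # force needed to hold the movable side, in newtons (assumed)
dx = 0.01      # small displacement of the movable side, in metres (assumed)

gamma_force = F / (2 * L)        # gamma = (1/2) F / L, since the film has two surfaces
work = F * dx                    # W = F * dx
d_area = 2 * L * dx              # dA = 2 L dx, again counting both surfaces
gamma_energy = work / d_area     # gamma = W / dA

print(gamma_force, "N/m from the force definition")
print(gamma_energy, "J/m^2 from the energy definition")  # numerically identical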
If no force acts normal to a tensioned surface, the surface must remain flat. But if the pressure on one side of the surface differs from pressure on the other side, the pressure difference times surface area results in a normal force. In order for the surface tension forces to cancel the force due to pressure, the surface must be curved. The diagram shows how surface curvature of a tiny patch of surface leads to a net component of surface tension forces acting normal to the center of the patch. When all the forces are balanced, the resulting equation is known as the Young -- Laplace equation: Δp = γ (1/R_x + 1/R_y),
where Δp is the pressure difference across the interface, γ is the surface tension, and R_x and R_y are the radii of curvature in each of the axes that are parallel to the surface.
The quantity in parentheses on the right hand side is in fact (twice) the mean curvature of the surface (depending on normalisation). Solutions to this equation determine the shape of water drops, puddles, menisci, soap bubbles, and all other shapes determined by surface tension (such as the shape of the impressions that a water strider 's feet make on the surface of a pond). The table below shows how the internal pressure of a water droplet increases with decreasing radius. For not very small drops the effect is subtle, but the pressure difference becomes enormous when the drop sizes approach the molecular size. (In the limit of a single molecule the concept becomes meaningless.)
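As a rough numerical illustration of this growth (the table of calculated values is not reproduced here), the short Python sketch below evaluates the Laplace pressure jump Δp = 2γ/r for a spherical water droplet at several assumed radii, taking γ ≈ 0.0728 N/m for water at about 20 °C.

# Laplace pressure inside a spherical water droplet: delta_p = 2 * gamma / r.
# gamma is a typical value for water at ~20 C; the radii are example values.
gamma = 0.0728            # N/m, water-air surface tension (assumed typical value)
atmosphere = 101325.0     # Pa per atmosphere, for unit conversion

for r in (1e-3, 1e-4, 1e-6, 1e-8):   # radii in metres: 1 mm, 0.1 mm, 1 um, 10 nm
    delta_p = 2 * gamma / r          # pressure difference across the droplet surface, Pa
    print(f"r = {r:.0e} m  ->  delta_p = {delta_p:.3g} Pa  ({delta_p / atmosphere:.4f} atm)")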
When an object is placed on a liquid, its weight F_w depresses the surface, and if the surface tension and the downward force become equal, the weight is balanced by the surface tension forces on either side, F_s, which are each parallel to the water's surface at the points where it contacts the object. Notice that a small movement in the body may cause the object to sink. As the angle of contact decreases, the surface tension decreases. The horizontal components of the two F_s arrows point in opposite directions, so they cancel each other, but the vertical components point in the same direction and therefore add up to balance F_w. The object's surface must not be wettable for this to happen, and its weight must be low enough for the surface tension to support it.
To find the shape of the minimal surface bounded by some arbitrary shaped frame using strictly mathematical means can be a daunting task. Yet by fashioning the frame out of wire and dipping it in soap - solution, a locally minimal surface will appear in the resulting soap - film within seconds.
The reason for this is that the pressure difference across a fluid interface is proportional to the mean curvature, as seen in the Young -- Laplace equation. For an open soap film, the pressure difference is zero, hence the mean curvature is zero, and minimal surfaces have the property of zero mean curvature.
The surface of any liquid is an interface between that liquid and some other medium. The top surface of a pond, for example, is an interface between the pond water and the air. Surface tension, then, is not a property of the liquid alone, but a property of the liquid 's interface with another medium. If a liquid is in a container, then besides the liquid / air interface at its top surface, there is also an interface between the liquid and the walls of the container. The surface tension between the liquid and air is usually different (greater than) its surface tension with the walls of a container. And where the two surfaces meet, their geometry must be such that all forces balance.
Where the two surfaces meet, they form a contact angle, θ, which is the angle the tangent to the surface makes with the solid surface. Note that the angle is measured through the liquid, as shown in the diagrams above. The diagram to the right shows two examples. Tension forces are shown for the liquid -- air interface, the liquid -- solid interface, and the solid -- air interface. The example on the left is where the difference between the liquid -- solid and solid -- air surface tension, γ_ls − γ_sa, is less than the liquid -- air surface tension, γ_la, but is nevertheless positive, that is 0 < γ_ls − γ_sa < γ_la.
In the diagram, both the vertical and horizontal forces must cancel exactly at the contact point, known as equilibrium. The horizontal component of f_la is canceled by the adhesive force, f_A.
The more telling balance of forces, though, is in the vertical direction. The vertical component of f_la must exactly cancel the force, f_ls.
Since the forces are in direct proportion to their respective surface tensions, we also have γ_ls − γ_sa = −γ_la cos θ,
where γ_ls is the liquid -- solid surface tension, γ_sa is the solid -- air surface tension, γ_la is the liquid -- air surface tension, and θ is the contact angle.
This means that although the difference between the liquid -- solid and solid -- air surface tension, γ_ls − γ_sa, is difficult to measure directly, it can be inferred from the liquid -- air surface tension, γ_la, and the equilibrium contact angle, θ, which is a function of the easily measurable advancing and receding contact angles (see main article contact angle).
This same relationship exists in the diagram on the right. But in this case we see that because the contact angle is less than 90°, the liquid -- solid / solid -- air surface tension difference must be negative: γ_ls − γ_sa < 0.
Observe that in the special case of a water -- silver interface where the contact angle is equal to 90 °, the liquid -- solid / solid -- air surface tension difference is exactly zero.
Another special case is where the contact angle is exactly 180 °. Water with specially prepared Teflon approaches this. Contact angle of 180 ° occurs when the liquid -- solid surface tension is exactly equal to the liquid -- air surface tension.
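To make the sign convention concrete, the small Python sketch below infers the liquid -- solid / solid -- air surface tension difference from the measurable liquid -- air tension and contact angle via γ_ls − γ_sa = −γ_la cos θ; the liquids, tensions and angles used are assumed, typical values rather than data from the article.

# Infer gamma_ls - gamma_sa from the liquid-air tension and the contact angle,
# using the relation gamma_ls - gamma_sa = -gamma_la * cos(theta).
# The liquids, tensions (N/m) and contact angles (degrees) are assumed typical values.
import math

cases = [
    ("water on clean glass", 0.0728, 20.0),
    ("water on paraffin wax", 0.0728, 107.0),
    ("mercury on glass", 0.487, 140.0),
]

for name, gamma_la, theta_deg in cases:
    diff = -gamma_la * math.cos(math.radians(theta_deg))   # gamma_ls - gamma_sa
    wets = "liquid wets the solid" if theta_deg < 90 else "liquid does not wet the solid"
    print(f"{name}: gamma_ls - gamma_sa = {diff:+.4f} N/m ({wets})")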
Because surface tension manifests itself in various effects, it offers a number of paths to its measurement. Which method is optimal depends upon the nature of the liquid being measured, the conditions under which its tension is to be measured, and the stability of its surface when it is deformed.
An old style mercury barometer consists of a vertical glass tube about 1 cm in diameter partially filled with mercury, and with a vacuum (called Torricelli 's vacuum) in the unfilled volume (see diagram to the right). Notice that the mercury level at the center of the tube is higher than at the edges, making the upper surface of the mercury dome - shaped. The center of mass of the entire column of mercury would be slightly lower if the top surface of the mercury were flat over the entire cross-section of the tube. But the dome - shaped top gives slightly less surface area to the entire mass of mercury. Again the two effects combine to minimize the total potential energy. Such a surface shape is known as a convex meniscus.
We consider the surface area of the entire mass of mercury, including the part of the surface that is in contact with the glass, because mercury does not adhere to glass at all. So the surface tension of the mercury acts over its entire surface area, including where it is in contact with the glass. If instead of glass, the tube was made out of copper, the situation would be very different. Mercury aggressively adheres to copper. So in a copper tube, the level of mercury at the center of the tube will be lower than at the edges (that is, it would be a concave meniscus). In a situation where the liquid adheres to the walls of its container, we consider the part of the fluid 's surface area that is in contact with the container to have negative surface tension. The fluid then works to maximize the contact surface area. So in this case increasing the area in contact with the container decreases rather than increases the potential energy. That decrease is enough to compensate for the increased potential energy associated with lifting the fluid near the walls of the container.
If a tube is sufficiently narrow and the liquid adhesion to its walls is sufficiently strong, surface tension can draw liquid up the tube in a phenomenon known as capillary action. The height to which the column is lifted is given by Jurin's law: h = (2 γ_la cos θ) / (ρ g r),
where h is the height the liquid is lifted, γ_la is the liquid -- air surface tension, ρ is the density of the liquid, r is the radius of the capillary, g is the acceleration due to gravity, and θ is the contact angle described above. If θ is greater than 90°, as with mercury in a glass container, the liquid will be depressed rather than lifted.
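As a numerical illustration of Jurin's law, the Python snippet below estimates the capillary rise of water in narrow glass tubes, assuming nearly complete wetting (θ ≈ 0°) and typical room-temperature properties for water; the tube radii are example values.

# Capillary rise from Jurin's law: h = 2 * gamma * cos(theta) / (rho * g * r).
# Assumed values: water at ~20 C wetting clean glass almost completely.
import math

gamma = 0.0728              # N/m, water-air surface tension (assumed)
rho = 998.0                 # kg/m^3, density of water (assumed)
g = 9.81                    # m/s^2, acceleration due to gravity
theta = math.radians(0.0)   # contact angle, ~0 degrees for water on clean glass (assumed)

for r in (1e-3, 0.5e-3, 0.1e-3):   # tube radii in metres
    h = 2 * gamma * math.cos(theta) / (rho * g * r)
    print(f"tube radius {r*1000:.1f} mm -> rise {h*1000:.1f} mm")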
Pouring mercury onto a horizontal flat sheet of glass results in a puddle that has a perceptible thickness. The puddle will spread out only to the point where it is a little under half a centimetre thick, and no thinner. Again this is due to the action of mercury 's strong surface tension. The liquid mass flattens out because that brings as much of the mercury to as low a level as possible, but the surface tension, at the same time, is acting to reduce the total surface area. The result of the compromise is a puddle of a nearly fixed thickness.
The same surface tension demonstration can be done with water, lime water or even saline, but only on a surface made of a substance to which water does not adhere. Wax is such a substance. Water poured onto a smooth, flat, horizontal wax surface, say a waxed sheet of glass, will behave similarly to the mercury poured onto glass.
The thickness of a puddle of liquid on a surface whose contact angle is 180° is given by: h = 2 √(γ / (g ρ)),
where h is the depth of the puddle, γ is the surface tension of the liquid, g is the acceleration due to gravity and ρ is the density of the liquid (in consistent units, for example cgs or SI).
In reality, the thicknesses of the puddles will be slightly less than what is predicted by the above formula because very few surfaces have a contact angle of 180° with any liquid. When the contact angle is less than 180°, the thickness is given by: h = √(2 γ (1 − cos θ) / (g ρ)).
For mercury on glass, γ = 487 dyn / cm, ρ = 13.5 g / cm and θ = 140 °, which gives h = 0.36 cm. For water on paraffin at 25 ° C, γ = 72 dyn / cm, ρ = 1.0 g / cm, and θ = 107 ° which gives h = 0.44 cm.
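The two worked numbers above can be reproduced with a few lines of Python; this is simply a check of the arithmetic, using the cgs values quoted in the text and g = 980 cm/s².

# Puddle thickness h = sqrt(2 * gamma * (1 - cos(theta)) / (g * rho)), in cgs units.
# The inputs are the values quoted in the text above.
import math

g = 980.0   # cm/s^2

def puddle_thickness(gamma_dyn_per_cm, rho_g_per_cm3, theta_deg):
    theta = math.radians(theta_deg)
    return math.sqrt(2 * gamma_dyn_per_cm * (1 - math.cos(theta)) / (g * rho_g_per_cm3))

print(puddle_thickness(487.0, 13.5, 140.0))   # mercury on glass  -> about 0.36 cm
print(puddle_thickness(72.0, 1.0, 107.0))     # water on paraffin -> about 0.44 cm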
The formula also predicts that when the contact angle is 0 °, the liquid will spread out into a micro-thin layer over the surface. Such a surface is said to be fully wettable by the liquid.
In day - to - day life all of us observe that a stream of water emerging from a faucet will break up into droplets, no matter how smoothly the stream is emitted from the faucet. This is due to a phenomenon called the Plateau -- Rayleigh instability, which is entirely a consequence of the effects of surface tension.
The explanation of this instability begins with the existence of tiny perturbations in the stream. These are always present, no matter how smooth the stream is. If the perturbations are resolved into sinusoidal components, we find that some components grow with time while others decay with time. Among those that grow with time, some grow at faster rates than others. Whether a component decays or grows, and how fast it grows is entirely a function of its wave number (a measure of how many peaks and troughs per centimeter) and the radii of the original cylindrical stream.
J.W. Gibbs developed the thermodynamic theory of capillarity based on the idea of surfaces of discontinuity. He introduced and studied thermodynamics of two-dimensional objects -- surfaces. These surfaces have area, mass, entropy, energy and free energy. As stated above, the mechanical work needed to increase a surface area A is dW = γ dA. Hence at constant temperature and pressure, surface tension equals Gibbs free energy per surface area: γ = (∂G/∂A) at constant T, P and number of particles,
where G is Gibbs free energy and A is the area.
Thermodynamics requires that all spontaneous changes of state are accompanied by a decrease in Gibbs free energy.
From this it is easy to understand why decreasing the surface area of a mass of liquid is always spontaneous (ΔG < 0), provided it is not coupled to any other energy changes. It follows that in order to increase surface area, a certain amount of energy must be added.
Gibbs free energy is defined by the equation G = H − TS, where H is enthalpy and S is entropy. Based upon this and the fact that surface tension is Gibbs free energy per unit area, it is possible to obtain the following expression for entropy per unit area: S_A = −(dγ/dT).
Kelvin's equation for surfaces arises by rearranging the previous equations. It states that surface enthalpy or surface energy (different from surface free energy) depends both on surface tension and on its derivative with temperature at constant pressure through the relationship H_A = γ − T (dγ/dT).
Fifteen years after Gibbs, J.D. van der Waals developed the theory of capillarity effects based on the hypothesis of a continuous variation of density. He added to the energy density the term c (∇ρ)², where c is the capillarity coefficient and ρ is the density. For multiphase equilibria, the results of the van der Waals approach practically coincide with the Gibbs formulae, but for modelling the dynamics of phase transitions the van der Waals approach is much more convenient. The van der Waals capillarity energy is now widely used in the phase field models of multiphase flows. Such terms are also discovered in the dynamics of non-equilibrium gases.
The pressure inside an ideal (one-surface) soap bubble can be derived from thermodynamic free energy considerations. At constant temperature and particle number, dT = dN = 0, the differential Helmholtz energy is given by dF = −ΔP dV + γ dA,
where ΔP is the difference in pressure inside and outside of the bubble, and γ is the surface tension. In equilibrium, dF = 0, and so ΔP dV = γ dA.
For a spherical bubble of radius R, the volume and surface area are given simply by V = (4/3) π R³, so that dV = 4 π R² dR,
and
A = 4 π R², so that dA = 8 π R dR. Substituting these relations into the previous expression, we find ΔP = 2γ / R,
which is equivalent to the Young -- Laplace equation when R_x = R_y. For real soap bubbles, the pressure is doubled due to the presence of two interfaces, one inside and one outside.
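A short Python sketch makes the result concrete by applying ΔP = 2γ/R for a single interface and the doubled value 4γ/R for a real two-surface soap bubble; the surface tension used is an assumed, rough value for soapy water.

# Excess pressure in bubbles: 2*gamma/R for a single interface (droplet or ideal bubble),
# 4*gamma/R for a real soap bubble with an inner and an outer surface.
gamma = 0.025   # N/m, rough surface tension of soapy water (assumed)

for R in (0.01, 0.05):          # bubble radii in metres: 1 cm and 5 cm
    single = 2 * gamma / R      # one interface
    soap = 4 * gamma / R        # two interfaces
    print(f"R = {R*100:.0f} cm: single surface {single:.1f} Pa, soap bubble {soap:.1f} Pa")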
Surface tension is dependent on temperature. For that reason, when a value is given for the surface tension of an interface, the temperature must be explicitly stated. The general trend is that surface tension decreases with the increase of temperature, reaching a value of 0 at the critical temperature. For further details see Eötvös rule. There are only empirical equations to relate surface tension and temperature; the Eötvös rule states that γ V^(2/3) = k (T_c − T).
Here V is the molar volume of the substance, T_c is the critical temperature and k is a constant valid for almost all substances. A typical value is k = 2.1 × 10^-7 J K^-1 mol^-2/3. For water one can further use V = 18 ml/mol and T_c = 647 K (374 °C).
A variant on Eötvös is described by Ramay and Shields: γ V^(2/3) = k (T_c − T − 6 K),
where the temperature offset of 6 kelvins provides the formula with a better fit to reality at lower temperatures.
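The Eötvös and Ramay-Shields forms are easy to evaluate numerically. The hedged Python sketch below applies them to water with the constants quoted above; because water is a strongly associated liquid, the rule noticeably overestimates its surface tension, so the output is only an illustration of how the formula is used, not a substitute for measured values.

# Eotvos rule: gamma * V**(2/3) = k * (Tc - T); Ramay-Shields subtracts a further 6 K.
# Constants for water as quoted in the text; the rule is only approximate for water.
k = 2.1e-7    # J / (K * mol^(2/3))
V = 18e-6     # molar volume of water, m^3/mol
Tc = 647.0    # critical temperature of water, K

def gamma_eotvos(T, offset=0.0):
    return k * (Tc - T - offset) / V ** (2.0 / 3.0)   # result in J/m^2 = N/m

T = 298.15  # 25 C
print(gamma_eotvos(T) * 1000, "mN/m (Eotvos)")              # rough estimate, vs ~72 mN/m measured
print(gamma_eotvos(T, offset=6.0) * 1000, "mN/m (Ramay-Shields)")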
The Guggenheim -- Katayama equation takes the form γ = γ° (1 − T/T_c)^n, where γ° is a constant for each liquid and n is an empirical factor, whose value is 11/9 for organic liquids. This equation was also proposed by van der Waals, who further proposed that γ° could be given by the expression K T_c^(1/3) P_c^(2/3),
where K is a universal constant for all liquids, and P_c is the critical pressure of the liquid (although later experiments found K to vary to some degree from one liquid to another).
Both Guggenheim -- Katayama and Eötvös take into account the fact that surface tension reaches 0 at the critical temperature, whereas Ramay and Shields fails to match reality at this endpoint.
Solutes can have different effects on surface tension depending on the nature of the surface and the solute:
What complicates the effect is that a solute can exist in a different concentration at the surface of a solvent than in its bulk. This difference varies from one solute -- solvent combination to another.
Gibbs isotherm states that Γ = −(1/(RT)) (∂γ/∂ ln C) at constant T and P, where Γ is the surface concentration (the excess of solute per unit area of the surface), C is the concentration of the substance in the bulk solution, R is the gas constant and T is the temperature.
Certain assumptions are taken in its deduction, therefore Gibbs isotherm can only be applied to ideal (very dilute) solutions with two components.
The Clausius -- Clapeyron relation leads to another equation also attributed to Kelvin, as the Kelvin equation. It explains why, because of surface tension, the vapor pressure for small droplets of liquid in suspension is greater than standard vapor pressure of that same liquid when the interface is flat. That is to say that when a liquid is forming small droplets, the equilibrium concentration of its vapor in its surroundings is greater. This arises because the pressure inside the droplet is greater than outside.
The effect explains supersaturation of vapors. In the absence of nucleation sites, tiny droplets must form before they can evolve into larger droplets. This requires a vapor pressure many times the vapor pressure at the phase transition point.
This equation is also used in catalyst chemistry to assess mesoporosity for solids.
The effect can be viewed in terms of the average number of molecular neighbors of surface molecules (see diagram).
The table shows some calculated values of this effect for water at different drop sizes:
The effect becomes clear for very small drop sizes, as a drop of 1 nm radius has about 100 molecules inside, which is a quantity small enough to require a quantum mechanics analysis.
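Since the table of calculated values is not reproduced here, the following Python sketch evaluates the Kelvin relation p/p₀ = exp(2γV_m/(r R T)) for small water droplets at 25 °C, showing how the equilibrium vapor pressure climbs as the droplet radius shrinks toward molecular size; the property values used are assumed, typical figures for water.

# Kelvin equation for a droplet: p / p0 = exp(2 * gamma * Vm / (r * R * T)).
# Property values for water at 25 C are assumed typical figures.
import math

gamma = 0.072   # N/m, water-air surface tension at 25 C (assumed)
Vm = 1.8e-5     # molar volume of liquid water, m^3/mol (assumed)
R = 8.314       # J/(mol K), gas constant
T = 298.15      # K

for r in (1e-6, 1e-7, 1e-8, 1e-9):   # droplet radii in metres
    ratio = math.exp(2 * gamma * Vm / (r * R * T))
    print(f"r = {r:.0e} m -> p/p0 = {ratio:.3f}")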
The two most abundant liquids on Earth are fresh water and seawater. This section gives correlations of reference data for the surface tension of both.
The surface tension of pure liquid water in contact with its vapor has been given by IAPWS as γ = 235.8 (1 − T/T_c)^1.256 [1 − 0.625 (1 − T/T_c)] mN/m,
where both T and the critical temperature T_c = 647.096 K are expressed in kelvins. The region of validity is the entire vapor -- liquid saturation curve, from the triple point (0.01 °C) to the critical point. It also provides reasonable results when extrapolated to metastable (supercooled) conditions, down to at least −25 °C. This formulation was originally adopted by IAPWS in 1976 and was adjusted in 1994 to conform to the International Temperature Scale of 1990.
The uncertainty of this formulation is given over the full range of temperature by IAPWS. For temperatures below 100 ° C, the uncertainty is ± 0.5 %.
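A minimal Python sketch of this correlation, using the commonly cited coefficients (235.8 mN/m, exponent 1.256 and factor 0.625) given above, evaluates the surface tension of pure water at a few temperatures; it is an illustration of how the formula is applied rather than a reference implementation of the IAPWS release.

# IAPWS-style correlation for the surface tension of pure water against its vapor:
# gamma(T) = 235.8 * tau**1.256 * (1 - 0.625 * tau) mN/m, with tau = 1 - T/Tc.
Tc = 647.096   # K, critical temperature of water

def water_surface_tension_mN_per_m(T_kelvin):
    tau = 1.0 - T_kelvin / Tc
    return 235.8 * tau ** 1.256 * (1.0 - 0.625 * tau)

for t_celsius in (0.01, 20.0, 50.0, 100.0):
    T = t_celsius + 273.15
    print(f"{t_celsius:6.2f} C -> {water_surface_tension_mN_per_m(T):.2f} mN/m")
# At 20 C this gives about 72.7 mN/m, matching the commonly quoted value.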
Nayar et al. published reference data for the surface tension of seawater over the salinity range of 20 ≤ S ≤ 131 g / kg and a temperature range of 1 ≤ t ≤ 92 ° C at atmospheric pressure. The uncertainty of the measurements varied from 0.18 to 0.37 mN / m with the average uncertainty being 0.22 mN / m. This data is correlated by the following equation
where γ_sw is the surface tension of seawater in mN/m, γ_w is the surface tension of water in mN/m, S is the reference salinity in g/kg, and t is the temperature in degrees Celsius. The average absolute percentage deviation between measurements and the correlation was 0.19%, while the maximum deviation is 0.60%.
The range of temperature and salinity encompasses both the oceanographic range and the range of conditions encountered in thermal desalination technologies.
Breakup of a moving sheet of water bouncing off of a spoon.
Photo of flowing water adhering to a hand. Surface tension creates the sheet of water between the flow and the hand.
A soap bubble balances surface tension forces against internal pneumatic pressure.
Surface tension prevents a coin from sinking: the coin is indisputably denser than water, so it must be displacing a volume greater than its own for buoyancy to balance mass.
A daisy. The entirety of the flower lies below the level of the (undisturbed) free surface. The water rises smoothly around its edge. Surface tension prevents water filling the air between the petals and possibly submerging the flower.
A metal paper clip floats on water. Several can usually be carefully added without overflow of water.
An aluminium coin floats on the surface of the water at 10 ° C. Any extra weight would drop the coin to the bottom.
A metal paperclip floating on water. A grille in front of the light has created the ' contour lines ' which show the deformation in the water surface caused by the metal paper clip.
|
what is the size of grenada in square miles | Geography of Grenada - wikipedia
Grenada is a Caribbean island (one of the Grenadines) between the Caribbean Sea and Atlantic Ocean, north of Trinidad and Tobago. It is located at 12°07′N 61°40′W (12.117°N, 61.667°W). There are no large inland bodies of water on the island, which consists entirely of the state of Grenada. The coastline is 121 km long.
The island has 15 constituencies and speaks English. It is volcanic in origin and its topography / landscape is mountainous.
Natural resources include timber, tropical fruit and deepwater harbors.
Grenada and its largely uninhabited outlying territories are the most southerly of the Windward Islands. The Grenadine Islands chain consists of some 600 islets; those south of the Martinique Channel belong to Grenada, while those north of the channel are part of the nation of St. Vincent and the Grenadines. Located about 160 kilometers north of Venezuela, at approximately 12 ° north latitude and 61 ° west longitude, Grenada and its territories occupy a small area of 433 square kilometers. Grenada, known as the Spice Isle because of its production of nutmeg and mace, is the largest at 310 square kilometers, or about the size of Detroit. The island is oval shaped and framed by a jagged southern coastline; its maximum width is thirty - four kilometers, and its maximum length is nineteen kilometers. St. George 's, the capital and the nation 's most important harbor, is favorably situated near a lagoon on the southwestern coast. Of all the islands belonging to Grenada, only two are of consequence: Carriacou, with a population of a few thousand, and its neighbor Petit Martinique, roughly 40 kilometers northeast of Grenada and populated by some 700 inhabitants.
Part of the volcanic chain in the Lesser Antilles arc, Grenada and its possessions generally vary in elevation from under 300 meters to over 600 meters above sea level. Grenada is more rugged and densely foliated than its outlying possessions, but other geographical conditions are more similar. Grenada 's landmass rises from a narrow, coastal plain in a generally north - south trending axis of ridges and narrow valleys. Mount St. Catherine is the highest peak at 840 meters.
Although many of the rocks and soils are of volcanic origin, the volcanic cones dotting Grenada are long dormant. The only known active volcano in the area is Kick ' em Jenny, just north between Grenada and Carriacou. Some of the drainage features on Grenada remain from its volcanic past. There are a few crater lakes, the largest of which is Grand Etang. The swift upper reaches of rivers, which occasionally overflow and cause flooding and landslides, generally cut deeply into the conic slopes. By contrast, many of the water courses in the lowlands tend to be sluggish and meandering.
The Grenadan climate is tropical, tempered by northeast trade winds. The land is volcanic in origin with mountains in the interior. The lowest point is at sea level, and the highest is 840 m (2,756 ft) on Mount Saint Catherine.
The abundance of water is primarily caused by the tropical, wet climate. Yearly precipitation, largely generated by the warm and moisture - laden northeasterly trade winds, varies from more than 3,500 millimeters (137.8 in) on the windward mountainsides to less than 1,500 millimeters (59.1 in) in the lowlands. The greatest monthly totals are recorded throughout Grenada from June through November, the months when tropical storms and hurricanes are most likely to occur. Rainfall is less pronounced from December through May, when the equatorial low - pressure system moves south. Similarly, the highest humidities, usually close to 80 percent, are recorded during the rainy months, and values from 68 to 78 percent are registered during the drier period. Temperatures averaging 29 ° C (84.2 ° F) are constant throughout the year, however, with slightly higher readings in the lowlands. Nevertheless, diurnal ranges within a 24 - hour period are appreciable: between 26 and 32 ° C (78.8 and 89.6 ° F) during the day and between 19 and 24 ° C (66.2 and 75.2 ° F) at night.
|